From varnish-bugs at varnish-cache.org Mon Aug 1 09:07:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 09:07:20 -0000 Subject: [Varnish] #970: partial content code 206 - visible as '206' in varnishncsa and as '200' from VCL code Message-ID: <046.cdbe6b5a9773b161b3c6a61062dfc851@varnish-cache.org> #970: partial content code 206 - visible as '206' in varnishncsa and as '200' from VCL code -------------------------------------------------+-------------------------- Reporter: jhalfmoon | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Keywords: 206 partial content varnishncsa vcl | -------------------------------------------------+-------------------------- Hi there, It seems that when varnishd returns partial content to a browser, varnishncsa properly logs the transfer with code 206 'Partial Content', but when the exact same transfer is logged from VCL code, it returns code '200', which is incorrect. I have tried using VRT_r_obj_status, VRT_r_resp_status and VRT_r_beresp_status, to no avail. This is the case with both Varnish 2.1.5 and 3.0.0. Is this done on purpose by the code, or can it be registered as a bug? This is how I test it (examples using Varnish 3.0.0): #=== The command to do the RANGE request (note the return code 206 'Partial Content') === testproxy02 ~ # http_proxy=http://127.0.0.1:80 curl -v -r 500-1000 http://vi.nl . . Proxy-Connection: Keep-Alive < HTTP/1.1 206 Partial Content < Server: Apache . ......
#=== This is the output of varnishncsa (note the code '206') === testproxy02 ~ # varnishncsa 127.0.0.1 - - [28/Jul/2011:17:17:23 +0200] "GET http://bla.nlhttp://bla.nl HTTP/1.1" 206 501 "-" "curl/7.15.5 (x86_64 -redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5" #=== This is the output I get in syslog (note the code '200') === testproxy02 ~ # tail -f /var/log/varnish/current code:200 127.0.0.1 - - "Thu, 28 Jul 2011 15:16:56 GMT" "GET http://bla.nl" 200 58944 "-" "curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5" X:616508938 B:bla_nl V:bla.nl h:1 #=== This is the bit of inline C code used to do the logging from VCL === int mycode = VRT_r_resp_status(sp); /* VRT_r_resp_status() returns an int, so the original "if (mycode == NULL) mycode = 0;" guard was a no-op and has been dropped */ syslog(LOG_DEBUG, "code:%d ", mycode); . ...... #=== EOF === -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 10:20:18 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 10:20:18 -0000 Subject: [Varnish] #970: partial content code 206 - visible as '206' in varnishncsa and as '200' from VCL code In-Reply-To: <046.cdbe6b5a9773b161b3c6a61062dfc851@varnish-cache.org> References: <046.cdbe6b5a9773b161b3c6a61062dfc851@varnish-cache.org> Message-ID: <055.6d9b8005d5c7518eceec4264296f6cdd@varnish-cache.org> #970: partial content code 206 - visible as '206' in varnishncsa and as '200' from VCL code -----------------------+---------------------------------------------------- Reporter: jhalfmoon | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: 206 partial content varnishncsa vcl -----------------------+---------------------------------------------------- Changes (by phk): * owner: => phk Comment: This is sort of by design: we only decide to do range delivery after vcl_deliver{} has returned, because we want to make it possible to remove the Accept-Ranges header and
thus prevent range delivery in vcl_deliver{}. Is this a problem or just an annoyance for you? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 10:27:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 10:27:14 -0000 Subject: [Varnish] #968: "Resource temporarily unavailable" error when backend responds with Transfer-Encoding: chunked In-Reply-To: <041.a50a0a6d6d797e6900b6e7fc153284ff@varnish-cache.org> References: <041.a50a0a6d6d797e6900b6e7fc153284ff@varnish-cache.org> Message-ID: <050.0f9a21f06a8fd540ce26b080ec01e752@varnish-cache.org> #968: "Resource temporarily unavailable" error when backend responds with Transfer-Encoding: chunked ----------------------+----------------------------------------------------- Reporter: sctb | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: chunked hang transfer-encoding 503 ----------------------+----------------------------------------------------- Changes (by martin): * owner: => martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 10:33:17 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 10:33:17 -0000 Subject: [Varnish] #965: Restarting a request in vcl_miss causes Varnish client crash. In-Reply-To: <042.7a34951b806a6025ec80b0b6379c27a4@varnish-cache.org> References: <042.7a34951b806a6025ec80b0b6379c27a4@varnish-cache.org> Message-ID: <051.7574267939ae3413964816e3239cd42c@varnish-cache.org> #965: Restarting a request in vcl_miss causes Varnish client crash.
-------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Description changed by phk: Old description: > Hello! > > I've done extensive testing on this, and I believe I've found a bug in > Varnish. The VCL below loads correctly, but causes the Varnish client to > crash after 1-2 requests. The backstory is available in the forums here > https://www.varnish-cache.org/forum/topic/65 - if you have any questions > that are not answered in that post, please ask away; I'd love to explain > and make more sense of what I'm trying to do. This isn't the exact VCL > I'd use in production, it is a minimalistic version that is able to > reproduce the crash: > > sub vcl_recv { > if (!req.http.X-Forwarded-For) { set req.http.X-Forwarded-For = > client.ip; } > if (req.http.X-Banned == "check") { remove req.http.X-Banned; } > elseif (req.restarts == 0) { > set req.http.X-Banned = "check"; > return (lookup); > } > } > > sub vcl_hash { > ## Check if they have a ban in the cache, or if they are going to be > banned in cache. > if (req.http.X-Banned) { > hash_data(req.http.X-Forwarded-For); > return (hash); > } > } > > sub vcl_error { > if (obj.status == 988) { return (restart); } > } > > sub vcl_miss { > if (req.http.X-Banned == "check") { error 988 "restarting"; } > } > > This is a successful request. 
After this request, the second request will > not work (Varnish restarts): > 0 Debug - "VCL_error(988, restarting)" > 9 BackendOpen b production[47] 172.21.4.182 41813 172.21.4.111 80 > 9 TxRequest b GET > 9 TxURL b http://www.site.com/ > 9 TxProtocol b HTTP/1.1 > 9 TxHeader b User-Agent: curl/7.19.7 (universal-apple-darwin10.0) > libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3 > 9 TxHeader b Host: www.site.com > 9 TxHeader b Accept: */* > 9 TxHeader b Proxy-Connection: Keep-Alive > 9 TxHeader b X-Forwarded-For: 127.0.0.1 > 9 TxHeader b X-Varnish: 746583022 > 9 TxHeader b Accept-Encoding: gzip > 9 RxProtocol b HTTP/1.1 > 9 RxStatus b 200 > 9 RxResponse b OK > 9 RxHeader b Date: Thu, 16 Jun 2011 23:45:54 GMT > 9 RxHeader b Server: Apache/2.2.3 (CentOS) > 9 RxHeader b Cache-Control: private, proxy-revalidate > 9 RxHeader b ETag: "9658bc1e80033b21277323e725948c91" > 9 RxHeader b Content-Encoding: gzip > 9 RxHeader b Vary: Accept-Encoding > 9 RxHeader b Content-length: 11452 > 9 RxHeader b Content-Type: text/html; charset=utf-8 > 9 RxHeader b Content-Language: en > 9 Fetch_Body b 4 0 1 > 9 Length b 11452 > 9 BackendReuse b production[47] > 3 SessionOpen c 127.0.0.1 1204 :80 > 3 ReqStart c 127.0.0.1 1204 746583022 > 3 RxRequest c HEAD > 3 RxURL c http://www.site.com/ > 3 RxProtocol c HTTP/1.1 > 3 RxHeader c User-Agent: curl/7.19.7 (universal-apple-darwin10.0) > libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3 > 3 RxHeader c Host: www.site.com > 3 RxHeader c Accept: */* > 3 RxHeader c Proxy-Connection: Keep-Alive > 3 VCL_call c recv lookup > 3 VCL_call c hash > 3 Hash c 127.0.0.1 > 3 VCL_return c hash > 3 VCL_call c miss error > 3 VCL_call c error restart > 3 VCL_call c recv lookup > 3 VCL_call c hash > 3 Hash c http://www.site.com/ > 3 Hash c www.site.com > 3 VCL_return c hash > 3 VCL_call c miss fetch > 3 Backend c 9 production production[47] > 3 TTL c 746583022 RFC 300 1308267955 0 0 0 0 > 3 VCL_call c fetch deliver > 3 ObjProtocol c HTTP/1.1 > 3 ObjResponse c OK > 3 ObjHeader c 
Date: Thu, 16 Jun 2011 23:45:54 GMT > 3 ObjHeader c Server: Apache/2.2.3 (CentOS) > 3 ObjHeader c Cache-Control: private, proxy-revalidate > 3 ObjHeader c ETag: "9658bc1e80033b21277323e725948c91" > 3 ObjHeader c Content-Encoding: gzip > 3 ObjHeader c Vary: Accept-Encoding > 3 ObjHeader c Content-Type: text/html; charset=utf-8 > 3 ObjHeader c Content-Language: en > 3 Gzip c u F - 11452 46817 80 80 91551 > 3 VCL_call c deliver deliver > 3 TxProtocol c HTTP/1.1 > 3 TxStatus c 200 > 3 TxResponse c OK > 3 TxHeader c Server: Apache/2.2.3 (CentOS) > 3 TxHeader c Cache-Control: private, proxy-revalidate > 3 TxHeader c ETag: "9658bc1e80033b21277323e725948c91" > 3 TxHeader c Vary: Accept-Encoding > 3 TxHeader c Content-Type: text/html; charset=utf-8 > 3 TxHeader c Content-Language: en > 3 TxHeader c Date: Thu, 16 Jun 2011 23:45:54 GMT > 3 TxHeader c X-Varnish: 746583022 > 3 TxHeader c Age: 0 > 3 TxHeader c Via: 1.1 varnish > 3 TxHeader c Connection: keep-alive > 3 Length c 0 > 3 ReqEnd c 746583022 1308267954.333659887 1308267954.527986050 > 0.000048876 0.194267035 0.000059128 > 3 Debug c herding > 3 SessionClose c no request > 3 StatSess c 127.0.0.1 1204 0 1 1 0 0 1 344 0 > 0 Backend_health - production[33] Still healthy ------- 4 3 8 > 0.000000 0.000272 > 3 SessionOpen c 172.21.4.16 57711 :80 > 3 SessionClose c EOF > > Regards, > -david New description: Hello! I've done extensive testing on this, and I believe I've found a bug in Varnish. The VCL below loads correctly, but causes the Varnish client to crash after 1-2 requests. The backstory is available in the forums here https://www.varnish-cache.org/forum/topic/65 - if you have any questions that are not answered in that post, please ask away; I'd love to explain and make more sense of what I'm trying to do. 
This isn't the exact VCL I'd use in production, it is a minimalistic version that is able to reproduce the crash: {{{ sub vcl_recv { if (!req.http.X-Forwarded-For) { set req.http.X-Forwarded-For = client.ip; } if (req.http.X-Banned == "check") { remove req.http.X-Banned; } elseif (req.restarts == 0) { set req.http.X-Banned = "check"; return (lookup); } } sub vcl_hash { ## Check if they have a ban in the cache, or if they are going to be banned in cache. if (req.http.X-Banned) { hash_data(req.http.X-Forwarded-For); return (hash); } } sub vcl_error { if (obj.status == 988) { return (restart); } } sub vcl_miss { if (req.http.X-Banned == "check") { error 988 "restarting"; } } This is a successful request. After this request, the second request will not work (Varnish restarts): 0 Debug - "VCL_error(988, restarting)" 9 BackendOpen b production[47] 172.21.4.182 41813 172.21.4.111 80 9 TxRequest b GET 9 TxURL b http://www.site.com/ 9 TxProtocol b HTTP/1.1 9 TxHeader b User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3 9 TxHeader b Host: www.site.com 9 TxHeader b Accept: */* 9 TxHeader b Proxy-Connection: Keep-Alive 9 TxHeader b X-Forwarded-For: 127.0.0.1 9 TxHeader b X-Varnish: 746583022 9 TxHeader b Accept-Encoding: gzip 9 RxProtocol b HTTP/1.1 9 RxStatus b 200 9 RxResponse b OK 9 RxHeader b Date: Thu, 16 Jun 2011 23:45:54 GMT 9 RxHeader b Server: Apache/2.2.3 (CentOS) 9 RxHeader b Cache-Control: private, proxy-revalidate 9 RxHeader b ETag: "9658bc1e80033b21277323e725948c91" 9 RxHeader b Content-Encoding: gzip 9 RxHeader b Vary: Accept-Encoding 9 RxHeader b Content-length: 11452 9 RxHeader b Content-Type: text/html; charset=utf-8 9 RxHeader b Content-Language: en 9 Fetch_Body b 4 0 1 9 Length b 11452 9 BackendReuse b production[47] 3 SessionOpen c 127.0.0.1 1204 :80 3 ReqStart c 127.0.0.1 1204 746583022 3 RxRequest c HEAD 3 RxURL c http://www.site.com/ 3 RxProtocol c HTTP/1.1 3 RxHeader c User-Agent: curl/7.19.7 
(universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3 3 RxHeader c Host: www.site.com 3 RxHeader c Accept: */* 3 RxHeader c Proxy-Connection: Keep-Alive 3 VCL_call c recv lookup 3 VCL_call c hash 3 Hash c 127.0.0.1 3 VCL_return c hash 3 VCL_call c miss error 3 VCL_call c error restart 3 VCL_call c recv lookup 3 VCL_call c hash 3 Hash c http://www.site.com/ 3 Hash c www.site.com 3 VCL_return c hash 3 VCL_call c miss fetch 3 Backend c 9 production production[47] 3 TTL c 746583022 RFC 300 1308267955 0 0 0 0 3 VCL_call c fetch deliver 3 ObjProtocol c HTTP/1.1 3 ObjResponse c OK 3 ObjHeader c Date: Thu, 16 Jun 2011 23:45:54 GMT 3 ObjHeader c Server: Apache/2.2.3 (CentOS) 3 ObjHeader c Cache-Control: private, proxy-revalidate 3 ObjHeader c ETag: "9658bc1e80033b21277323e725948c91" 3 ObjHeader c Content-Encoding: gzip 3 ObjHeader c Vary: Accept-Encoding 3 ObjHeader c Content-Type: text/html; charset=utf-8 3 ObjHeader c Content-Language: en 3 Gzip c u F - 11452 46817 80 80 91551 3 VCL_call c deliver deliver 3 TxProtocol c HTTP/1.1 3 TxStatus c 200 3 TxResponse c OK 3 TxHeader c Server: Apache/2.2.3 (CentOS) 3 TxHeader c Cache-Control: private, proxy-revalidate 3 TxHeader c ETag: "9658bc1e80033b21277323e725948c91" 3 TxHeader c Vary: Accept-Encoding 3 TxHeader c Content-Type: text/html; charset=utf-8 3 TxHeader c Content-Language: en 3 TxHeader c Date: Thu, 16 Jun 2011 23:45:54 GMT 3 TxHeader c X-Varnish: 746583022 3 TxHeader c Age: 0 3 TxHeader c Via: 1.1 varnish 3 TxHeader c Connection: keep-alive 3 Length c 0 3 ReqEnd c 746583022 1308267954.333659887 1308267954.527986050 0.000048876 0.194267035 0.000059128 3 Debug c herding 3 SessionClose c no request 3 StatSess c 127.0.0.1 1204 0 1 1 0 0 1 344 0 0 Backend_health - production[33] Still healthy ------- 4 3 8 0.000000 0.000272 3 SessionOpen c 172.21.4.16 57711 :80 3 SessionClose c EOF }}} Regards, -david -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 
10:35:23 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 10:35:23 -0000 Subject: [Varnish] #965: Restarting a request in vcl_miss causes Varnish client crash. In-Reply-To: <042.7a34951b806a6025ec80b0b6379c27a4@varnish-cache.org> References: <042.7a34951b806a6025ec80b0b6379c27a4@varnish-cache.org> Message-ID: <051.addf5f2e3bcfc17e85e72596b99fa3b0@varnish-cache.org> #965: Restarting a request in vcl_miss causes Varnish client crash. --------------------+------------------------------------------------------- Reporter: david | Owner: tfheen Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: --------------------+------------------------------------------------------- Changes (by tfheen): * owner: => tfheen -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 10:37:00 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 10:37:00 -0000 Subject: [Varnish] #964: varnish is loosing (java) session data (was: varnish is loosing session data) In-Reply-To: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> References: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> Message-ID: <056.f9404268debdd00be88814be721c5d80@varnish-cache.org> #964: varnish is loosing (java) session data ------------------------+--------------------------------------------------- Reporter: pravenjohn | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.0 | Severity: critical Keywords: | ------------------------+--------------------------------------------------- Changes (by phk): * component: build => varnishd -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 10:37:52 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 10:37:52 -0000 Subject: [Varnish] #964: varnish is losing (java) session data (was: varnish 
is loosing (java) session data) In-Reply-To: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> References: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> Message-ID: <056.226d1acdeff5ddbf7e40b1e2c89a81f5@varnish-cache.org> #964: varnish is losing (java) session data ------------------------+--------------------------------------------------- Reporter: pravenjohn | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.0 | Severity: critical Keywords: | ------------------------+--------------------------------------------------- Comment(by tfheen): Can you please provide varnishlog showing what the problem is? Also, a copy of your VCL (or even better, a minimal VCL showing the problem) would be useful. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 10:55:32 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 10:55:32 -0000 Subject: [Varnish] #966: Varnish Header In-Reply-To: <045.a09638ec6a9784d78e37f64576048f44@varnish-cache.org> References: <045.a09638ec6a9784d78e37f64576048f44@varnish-cache.org> Message-ID: <054.804d3e3e9e9a997495500dd8e2cf5b50@varnish-cache.org> #966: Varnish Header ----------------------+----------------------------------------------------- Reporter: frozts91 | Owner: tfheen Type: defect | Status: new Priority: highest | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: ----------------------+----------------------------------------------------- Changes (by tfheen): * owner: => tfheen Comment: First, please file separate tickets for separate bugs, it makes it easier for us to keep track. Can you please provide varnishstat from when you see the 100% load happening? Also, please provide varnishlog for your second problem. Varnish won't change the headers out of the box, apart from adding a few, those you can unset using remove or unset in VCL. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 10:56:27 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 10:56:27 -0000 Subject: [Varnish] #969: init.d stop() function improvement In-Reply-To: <048.1a4b58ae1889d78fdbc19c4917c74b3a@varnish-cache.org> References: <048.1a4b58ae1889d78fdbc19c4917c74b3a@varnish-cache.org> Message-ID: <057.0a6e7585a3ca0c8d5e0d81cdafcb50dd@varnish-cache.org> #969: init.d stop() function improvement -------------------------+-------------------------------------------------- Reporter: David Busby | Owner: ingvar Type: enhancement | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: init.d stop() -------------------------+-------------------------------------------------- Changes (by tfheen): * owner: => ingvar Comment: Ingvar, any chance you could take a look at and comment on this? I think it looks sane, but you know the RH init script side better than I do.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 12:06:32 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 12:06:32 -0000 Subject: [Varnish] #848: varnishlog -r seems broken In-Reply-To: <042.f4d125bf7db6e317d56c6e53ae887681@varnish-cache.org> References: <042.f4d125bf7db6e317d56c6e53ae887681@varnish-cache.org> Message-ID: <051.d27549a175222ae7ee9d0d8a19684386@varnish-cache.org> #848: varnishlog -r seems broken ------------------------+--------------------------------------------------- Reporter: perbu | Owner: kristian Type: defect | Status: closed Priority: normal | Milestone: Component: varnishlog | Version: trunk Severity: normal | Resolution: fixed Keywords: varnishlog | ------------------------+--------------------------------------------------- Changes (by Tollef Fog Heen ): * status: new => closed * resolution: => fixed Comment: (In [66101b90620c1ba08b1e60a059d38835c2395ccd]) Fix up reading of saved log files Make sure we compensate for sizeof(int) and the stuff we have already read. 
Fixes: #848 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 12:20:59 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 12:20:59 -0000 Subject: [Varnish] #969: init.d stop() function improvement In-Reply-To: <048.1a4b58ae1889d78fdbc19c4917c74b3a@varnish-cache.org> References: <048.1a4b58ae1889d78fdbc19c4917c74b3a@varnish-cache.org> Message-ID: <057.b3a07f373152e33a6876b96a6ad19174@varnish-cache.org> #969: init.d stop() function improvement ---------------------------+------------------------------------------------ Reporter: David Busby | Owner: ingvar Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: worksforme Keywords: init.d stop() | ---------------------------+------------------------------------------------ Changes (by ingvar): * status: new => closed * resolution: => worksforme Comment: This is already the default in rpm packages from RHEL5 and up. RHEL4 packages or packages built without setting %rhel or %fedora (for example suse) will get a default behaviour without -p, for max compatibility. 
From the spec file:

%if 0%{?fedora}%{?rhel} == 0 || 0%{?rhel} <= 4 && 0%{?fedora} <= 8
# Old style daemon function
sed -i 's,--pidfile \$pidfile,,g; s,status -p \$pidfile,status,g; s,killproc -p \$pidfile,killproc,g' \
    redhat/varnish.initrc redhat/varnishlog.initrc redhat/varnishncsa.initrc
%endif

-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 12:43:00 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 12:43:00 -0000 Subject: [Varnish] #969: init.d stop() function improvement In-Reply-To: <048.1a4b58ae1889d78fdbc19c4917c74b3a@varnish-cache.org> References: <048.1a4b58ae1889d78fdbc19c4917c74b3a@varnish-cache.org> Message-ID: <057.712de7fc7ac3e347c747cc32b8406e19@varnish-cache.org> #969: init.d stop() function improvement ---------------------------+------------------------------------------------ Reporter: David Busby | Owner: ingvar Type: enhancement | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: Keywords: init.d stop() | ---------------------------+------------------------------------------------ Changes (by David Busby): * status: closed => reopened * resolution: worksforme => Comment: This was post-installation on a CentOS 5.6 x64 host (EL5); I had to make the init.d script amendments manually.
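[Editorial aside: for illustration, the sed in the spec file above toggles the initscript between roughly these two stop() styles. This is a sketch assuming the stock RHEL /etc/init.d/functions helpers; the actual initscript body is not quoted in this ticket.]

```sh
# New style (RHEL5+ per the spec conditional): pidfile-aware stop
stop() {
    killproc -p $pidfile varnishd
}

# Old style, after the sed strips the "-p $pidfile" arguments
# (RHEL4, and distros where %rhel/%fedora are unset):
stop() {
    killproc varnishd
}
```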
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 13:04:36 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 13:04:36 -0000 Subject: [Varnish] #969: init.d stop() function improvement In-Reply-To: <048.1a4b58ae1889d78fdbc19c4917c74b3a@varnish-cache.org> References: <048.1a4b58ae1889d78fdbc19c4917c74b3a@varnish-cache.org> Message-ID: <057.bcea25f473ee092626fb0d598e028417@varnish-cache.org> #969: init.d stop() function improvement ---------------------------+------------------------------------------------ Reporter: David Busby | Owner: ingvar Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: init.d stop() | ---------------------------+------------------------------------------------ Changes (by Tollef Fog Heen ): * status: reopened => closed * resolution: => fixed Comment: (In [8d662f9c5be529c5f0ab0a3c1da6efda51a62b5d]) Only strip out -p to status/killproc for old fedora/RHEL It seems most RPM-based distros have the -p switch to killproc and status these days, so only blacklist the ones we know about that do not have it.
Fixes: #969 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 14:53:58 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 14:53:58 -0000 Subject: [Varnish] #968: "Resource temporarily unavailable" error when backend responds with Transfer-Encoding: chunked In-Reply-To: <041.a50a0a6d6d797e6900b6e7fc153284ff@varnish-cache.org> References: <041.a50a0a6d6d797e6900b6e7fc153284ff@varnish-cache.org> Message-ID: <050.458245f50950d8a5af48296b2b33ddd7@varnish-cache.org> #968: "Resource temporarily unavailable" error when backend responds with Transfer-Encoding: chunked ----------------------+----------------------------------------------------- Reporter: sctb | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: chunked hang transfer-encoding 503 ----------------------+----------------------------------------------------- Comment(by martin): What you are seeing is a timeout waiting for the response on a reused backend connection (Varnish will try to reuse backend connections when they are marked as supporting this). Do your backend logs give any indication of why the request isn't being served? Also, do you have any varnishlog output from a connection where you are using Content-Length? It would be interesting to compare those.
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 15:20:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 15:20:14 -0000 Subject: [Varnish] #970: partial content code 206 - visible as '206' in varnishncsa and as '200' from VCL code In-Reply-To: <046.cdbe6b5a9773b161b3c6a61062dfc851@varnish-cache.org> References: <046.cdbe6b5a9773b161b3c6a61062dfc851@varnish-cache.org> Message-ID: <055.3ea5562a516debf48ffd0f875ed40e52@varnish-cache.org> #970: partial content code 206 - visible as '206' in varnishncsa and as '200' from VCL code -----------------------+---------------------------------------------------- Reporter: jhalfmoon | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: 206 partial content varnishncsa vcl -----------------------+---------------------------------------------------- Comment(by johnnyh): > Is this a problem or just an annoyance for you? We use the return codes for per-site statistics and customer support on multi-site Varnish machines, so it would be very nice to have the logs and graphs complete, including 206 stats. If it were a manager asking me your question, my answer would be: "Yes Sir. This definitely is a major problem." But in this case my answer is: it is an eyesore and the autist in me is screaming and breaking stuff, but honestly, I think I can sit this one out. I'll just have to forge my TPS reports. If somewhere in v3.2 or thereabouts this can get fixed, I'll be a happy camper. Do you have any idea whether this will happen any time soon?
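[Editorial aside: until the eventual range status is exposed to VCL, one conceivable stopgap for the inline-C logging is to infer the 206. This is a sketch only, not from this ticket; it assumes Varnish 3's VRT_GetHdr() interface, and it will over-report 206 if a Range request is answered from a 200 while range delivery is disabled.]

```c
C{
    /* resp.status is still 200 in vcl_deliver, so guess the eventual
     * 206 from the presence of a client Range header.  "\006Range:"
     * is the length-prefixed header-name form VRT_GetHdr() expects. */
    int mycode = VRT_r_resp_status(sp);
    if (mycode == 200 && VRT_GetHdr(sp, HDR_REQ, "\006Range:") != NULL)
        mycode = 206;
    syslog(LOG_DEBUG, "code:%d ", mycode);
}C
```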
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 1 16:25:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Aug 2011 16:25:20 -0000 Subject: [Varnish] #968: "Resource temporarily unavailable" error when backend responds with Transfer-Encoding: chunked In-Reply-To: <041.a50a0a6d6d797e6900b6e7fc153284ff@varnish-cache.org> References: <041.a50a0a6d6d797e6900b6e7fc153284ff@varnish-cache.org> Message-ID: <050.f15da324f8106fc1ee49dc71fca9a981@varnish-cache.org> #968: "Resource temporarily unavailable" error when backend responds with Transfer-Encoding: chunked ----------------------+----------------------------------------------------- Reporter: sctb | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: chunked hang transfer-encoding 503 ----------------------+----------------------------------------------------- Comment(by sctb): Martin, Thanks for the response. I looked into the backend (which is a proxy in this configuration) and determined that it wasn't setting headers in the response that it received from its backend. 
After fixing that, I was able to get a successful round-trip with varnish: {{{ 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1312214571 1.0 16 BackendOpen b default 127.0.0.1 37387 127.0.0.1 8000 16 TxRequest b POST 16 TxURL b /simplest- calc?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21&backend=true 16 TxProtocol b HTTP/1.1 16 TxHeader b content-length: 103 16 TxHeader b x-varnish: 461203745 16 TxHeader b host: 127.0.0.1 16 TxHeader b X-Forwarded-For: 127.0.0.1, 127.0.0.1 16 TxHeader b X-Varnish: 461203746 16 RxProtocol b HTTP/1.1 16 RxStatus b 200 16 RxResponse b OK 16 RxHeader b content-type: application/json; charset=UTF-8 16 RxHeader b expires: Mon, 26 Jul 1997 05:00:00 GMT 16 RxHeader b cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 16 RxHeader b pragma: no-cache 16 RxHeader b Connection: keep-alive 16 RxHeader b Transfer-Encoding: chunked 16 Fetch_Body b 3 0 1 16 Length b 40 16 BackendReuse b default 15 SessionOpen c 127.0.0.1 56494 0.0.0.0:8080 15 ReqStart c 127.0.0.1 56494 461203746 15 RxRequest c POST 15 RxURL c /simplest- calc?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21&backend=true 15 RxProtocol c HTTP/1.1 15 RxHeader c content-length: 103 15 RxHeader c x-forwarded-for: 127.0.0.1 15 RxHeader c x-varnish: 461203745 15 RxHeader c host: 127.0.0.1 15 RxHeader c Connection: close 15 VCL_call c recv pass 15 VCL_call c hash 15 Hash c /simplest- calc?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21&backend=true 15 Hash c 127.0.0.1 15 VCL_return c hash 15 VCL_call c pass pass 15 Backend c 16 default default 15 TTL c 461203746 RFC 0 1312214573 0 869893200 0 0 15 VCL_call c fetch hit_for_pass 15 ObjProtocol c HTTP/1.1 15 ObjResponse c OK 15 ObjHeader c content-type: application/json; charset=UTF-8 15 ObjHeader c expires: Mon, 26 Jul 1997 05:00:00 GMT 15 ObjHeader c cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 15 ObjHeader c pragma: no-cache 15 VCL_call c deliver deliver 15 TxProtocol c HTTP/1.1 15 TxStatus c 
200 15 TxResponse c OK 15 TxHeader c content-type: application/json; charset=UTF-8 15 TxHeader c expires: Mon, 26 Jul 1997 05:00:00 GMT 15 TxHeader c cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 15 TxHeader c pragma: no-cache 15 TxHeader c Content-Length: 40 15 TxHeader c Accept-Ranges: bytes 15 TxHeader c Date: Mon, 01 Aug 2011 16:02:52 GMT 15 TxHeader c X-Varnish: 461203746 15 TxHeader c Age: 0 15 TxHeader c Via: 1.1 varnish 15 TxHeader c Connection: close 15 Length c 40 15 ReqEnd c 461203746 1312214572.955148458 1312214572.957819939 0.000035286 0.002634048 0.000037432 15 SessionClose c Connection: close 15 StatSess c 127.0.0.1 56494 0 1 1 0 1 1 349 40 14 BackendOpen b default 127.0.0.1 37385 127.0.0.1 8000 14 TxRequest b POST 14 TxURL b /simplest- calc?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21 14 TxProtocol b HTTP/1.1 14 TxHeader b content-length: 103 14 TxHeader b X-Forwarded-For: 127.0.0.1 14 TxHeader b X-Varnish: 461203745 14 TxHeader b Host: 127.0.0.1 14 RxProtocol b HTTP/1.1 14 RxStatus b 200 14 RxResponse b OK 14 RxHeader b content-type: application/json; charset=UTF-8 14 RxHeader b expires: Mon, 26 Jul 1997 05:00:00 GMT 14 RxHeader b cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 14 RxHeader b pragma: no-cache 14 RxHeader b content-length: 40 14 RxHeader b accept-ranges: bytes 14 RxHeader b date: Mon, 01 Aug 2011 16:02:52 GMT 14 RxHeader b x-varnish: 461203746 14 RxHeader b age: 0 14 RxHeader b via: 1.1 varnish 14 RxHeader b connection: close 14 Fetch_Body b 4 0 1 14 Length b 40 14 BackendClose b default 13 SessionOpen c 127.0.0.1 56492 0.0.0.0:8080 13 ReqStart c 127.0.0.1 56492 461203745 13 RxRequest c POST 13 RxURL c /simplest- calc?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21 13 RxProtocol c HTTP/1.1 13 RxHeader c content-length: 103 13 RxHeader c Connection: close 13 VCL_call c recv pass 13 VCL_call c hash 13 Hash c /simplest- calc?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21 13 
Hash c 127.0.0.1 13 VCL_return c hash 13 VCL_call c pass pass 13 Backend c 14 default default 13 TTL c 461203745 RFC 0 1312214573 1312214572 869893200 0 0 13 VCL_call c fetch hit_for_pass 13 ObjProtocol c HTTP/1.1 13 ObjResponse c OK 13 ObjHeader c content-type: application/json; charset=UTF-8 13 ObjHeader c expires: Mon, 26 Jul 1997 05:00:00 GMT 13 ObjHeader c cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 13 ObjHeader c pragma: no-cache 13 ObjHeader c content-length: 40 13 ObjHeader c accept-ranges: bytes 13 ObjHeader c date: Mon, 01 Aug 2011 16:02:52 GMT 13 ObjHeader c x-varnish: 461203746 13 ObjHeader c age: 0 13 ObjHeader c via: 1.1 varnish 13 VCL_call c deliver deliver 13 TxProtocol c HTTP/1.1 13 TxStatus c 200 13 TxResponse c OK 13 TxHeader c content-type: application/json; charset=UTF-8 13 TxHeader c expires: Mon, 26 Jul 1997 05:00:00 GMT 13 TxHeader c cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 13 TxHeader c pragma: no-cache 13 TxHeader c accept-ranges: bytes 13 TxHeader c x-varnish: 461203746 13 TxHeader c age: 0 13 TxHeader c via: 1.1 varnish 13 TxHeader c Content-Length: 40 13 TxHeader c Accept-Ranges: bytes 13 TxHeader c Date: Mon, 01 Aug 2011 16:02:52 GMT 13 TxHeader c X-Varnish: 461203745 13 TxHeader c Age: 0 13 TxHeader c Via: 1.1 varnish 13 TxHeader c Connection: close 13 Length c 40 13 ReqEnd c 461203745 1312214572.952928543 1312214572.958472490 0.000050783 0.005512714 0.000031233 13 SessionClose c Connection: close 13 StatSess c 127.0.0.1 56492 0 1 1 0 1 1 419 40 20 BackendOpen b default 127.0.0.1 37394 127.0.0.1 8000 20 TxRequest b GET 20 TxURL b /.blob/7b17c1a1333d8cdce833b81259587af7ecf27395?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21&backend=true 20 TxProtocol b HTTP/1.1 20 TxHeader b host: localhost:8080 20 TxHeader b x-varnish: 461203749 20 TxHeader b X-Forwarded-For: 127.0.0.1, 127.0.0.1 20 TxHeader b X-Varnish: 461203750 20 TxHeader b Accept-Encoding: gzip 20 RxProtocol b 
HTTP/1.1 20 RxStatus b 200 20 RxResponse b OK 20 RxHeader b content-type: application/json; charset=UTF-8 20 RxHeader b etag: 7b17c1a1333d8cdce833b81259587af7ecf27395 20 RxHeader b cache-control: public, max-age=500000000 20 RxHeader b expires: Wed, 1 Jul 2099 05:00:00 GMT 20 RxHeader b Connection: keep-alive 20 RxHeader b Transfer-Encoding: chunked 20 Fetch_Body b 3 0 1 20 Length b 201 20 BackendReuse b default 19 SessionOpen c 127.0.0.1 56501 0.0.0.0:8080 19 ReqStart c 127.0.0.1 56501 461203750 19 RxRequest c GET 19 RxURL c /.blob/7b17c1a1333d8cdce833b81259587af7ecf27395?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21&backend=true 19 RxProtocol c HTTP/1.1 19 RxHeader c host: localhost:8080 19 RxHeader c x-forwarded-for: 127.0.0.1 19 RxHeader c x-varnish: 461203749 19 RxHeader c accept-encoding: gzip 19 RxHeader c Connection: close 19 VCL_call c recv lookup 19 VCL_call c hash 19 Hash c /.blob/7b17c1a1333d8cdce833b81259587af7ecf27395?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21&backend=true 19 Hash c localhost:8080 19 VCL_return c hash 19 VCL_call c miss fetch 19 Backend c 20 default default 19 TTL c 461203750 RFC 5e+08 1312214573 0 0 500000000 0 19 VCL_call c fetch deliver 19 ObjProtocol c HTTP/1.1 19 ObjResponse c OK 19 ObjHeader c content-type: application/json; charset=UTF-8 19 ObjHeader c etag: 7b17c1a1333d8cdce833b81259587af7ecf27395 19 ObjHeader c cache-control: public, max-age=500000000 19 ObjHeader c expires: Wed, 1 Jul 2099 05:00:00 GMT 19 VCL_call c deliver deliver 19 TxProtocol c HTTP/1.1 19 TxStatus c 200 19 TxResponse c OK 19 TxHeader c content-type: application/json; charset=UTF-8 19 TxHeader c etag: 7b17c1a1333d8cdce833b81259587af7ecf27395 19 TxHeader c cache-control: public, max-age=500000000 19 TxHeader c expires: Wed, 1 Jul 2099 05:00:00 GMT 19 TxHeader c Content-Length: 201 19 TxHeader c Accept-Ranges: bytes 19 TxHeader c Date: Mon, 01 Aug 2011 16:02:52 GMT 19 TxHeader c X-Varnish: 461203750 19 TxHeader c Age: 0 19 TxHeader c Via: 1.1 varnish 19 
TxHeader c Connection: close 19 Length c 201 19 ReqEnd c 461203750 1312214572.962742567 1312214572.964076996 0.000032902 0.001300812 0.000033617 19 SessionClose c Connection: close 19 StatSess c 127.0.0.1 56501 0 1 1 0 0 1 342 201 18 BackendOpen b default 127.0.0.1 37392 127.0.0.1 8000 18 TxRequest b GET 18 TxURL b /.blob/7b17c1a1333d8cdce833b81259587af7ecf27395?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21 18 TxProtocol b HTTP/1.1 18 TxHeader b Host: localhost:8080 18 TxHeader b X-Forwarded-For: 127.0.0.1 18 TxHeader b X-Varnish: 461203749 18 TxHeader b Accept-Encoding: gzip 18 RxProtocol b HTTP/1.1 18 RxStatus b 200 18 RxResponse b OK 18 RxHeader b content-type: application/json; charset=UTF-8 18 RxHeader b etag: 7b17c1a1333d8cdce833b81259587af7ecf27395 18 RxHeader b cache-control: public, max-age=500000000 18 RxHeader b expires: Wed, 1 Jul 2099 05:00:00 GMT 18 RxHeader b content-length: 201 18 RxHeader b accept-ranges: bytes 18 RxHeader b date: Mon, 01 Aug 2011 16:02:52 GMT 18 RxHeader b x-varnish: 461203750 18 RxHeader b age: 0 18 RxHeader b via: 1.1 varnish 18 RxHeader b connection: close 18 Fetch_Body b 4 0 1 18 Length b 201 18 BackendClose b default 17 SessionOpen c 127.0.0.1 56499 0.0.0.0:8080 17 ReqStart c 127.0.0.1 56499 461203749 17 RxRequest c GET 17 RxURL c /.blob/7b17c1a1333d8cdce833b81259587af7ecf27395?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21 17 RxProtocol c HTTP/1.1 17 RxHeader c Host: localhost:8080 17 RxHeader c content-length: 0 17 RxHeader c Connection: close 17 VCL_call c recv lookup 17 VCL_call c hash 17 Hash c /.blob/7b17c1a1333d8cdce833b81259587af7ecf27395?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21 17 Hash c localhost:8080 17 VCL_return c hash 17 VCL_call c miss fetch 17 Backend c 18 default default 17 TTL c 461203749 RFC 5e+08 1312214573 0 0 500000000 0 17 VCL_call c fetch deliver 17 ObjProtocol c HTTP/1.1 17 ObjResponse c OK 17 ObjHeader c content-type: application/json; charset=UTF-8 17 ObjHeader c etag: 
7b17c1a1333d8cdce833b81259587af7ecf27395 17 ObjHeader c cache-control: public, max-age=500000000 17 ObjHeader c expires: Wed, 1 Jul 2099 05:00:00 GMT 17 ObjHeader c date: Mon, 01 Aug 2011 16:02:52 GMT 17 ObjHeader c x-varnish: 461203750 17 ObjHeader c via: 1.1 varnish 17 VCL_call c deliver deliver 17 TxProtocol c HTTP/1.1 17 TxStatus c 200 17 TxResponse c OK 17 TxHeader c content-type: application/json; charset=UTF-8 17 TxHeader c etag: 7b17c1a1333d8cdce833b81259587af7ecf27395 17 TxHeader c cache-control: public, max-age=500000000 17 TxHeader c expires: Wed, 1 Jul 2099 05:00:00 GMT 17 TxHeader c x-varnish: 461203750 17 TxHeader c via: 1.1 varnish 17 TxHeader c Content-Length: 201 17 TxHeader c Accept-Ranges: bytes 17 TxHeader c Date: Mon, 01 Aug 2011 16:02:52 GMT 17 TxHeader c X-Varnish: 461203749 17 TxHeader c Age: 0 17 TxHeader c Via: 1.1 varnish 17 TxHeader c Connection: close 17 Length c 201 17 ReqEnd c 461203749 1312214572.961805344 1312214572.964606047 0.000031710 0.002772570 0.000028133 17 SessionClose c Connection: close 17 StatSess c 127.0.0.1 56499 0 1 1 0 0 1 382 201 15 BackendOpen b default 127.0.0.1 37390 127.0.0.1 8000 15 TxRequest b POST 15 TxURL b /simplest- calc?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21&backend=true 15 TxProtocol b HTTP/1.1 15 TxHeader b content-length: 66 15 TxHeader b x-varnish: 461203747 15 TxHeader b host: 127.0.0.1 15 TxHeader b X-Forwarded-For: 127.0.0.1, 127.0.0.1 15 TxHeader b X-Varnish: 461203748 15 RxProtocol b HTTP/1.1 15 RxStatus b 200 15 RxResponse b OK 15 RxHeader b content-type: application/json; charset=UTF-8 15 RxHeader b expires: Mon, 26 Jul 1997 05:00:00 GMT 15 RxHeader b cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 15 RxHeader b pragma: no-cache 15 RxHeader b Connection: keep-alive 15 RxHeader b Transfer-Encoding: chunked 15 Fetch_Body b 3 0 1 15 Length b 40 15 BackendReuse b default 14 SessionOpen c 127.0.0.1 56497 0.0.0.0:8080 14 ReqStart c 127.0.0.1 56497 461203748 14 
RxRequest c POST 14 RxURL c /simplest- calc?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21&backend=true 14 RxProtocol c HTTP/1.1 14 RxHeader c content-length: 66 14 RxHeader c x-forwarded-for: 127.0.0.1 14 RxHeader c x-varnish: 461203747 14 RxHeader c host: 127.0.0.1 14 RxHeader c Connection: close 14 VCL_call c recv pass 14 VCL_call c hash 14 Hash c /simplest- calc?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21&backend=true 14 Hash c 127.0.0.1 14 VCL_return c hash 14 VCL_call c pass pass 14 Backend c 15 default default 14 TTL c 461203748 RFC 0 1312214573 0 869893200 0 0 14 VCL_call c fetch hit_for_pass 14 ObjProtocol c HTTP/1.1 14 ObjResponse c OK 14 ObjHeader c content-type: application/json; charset=UTF-8 14 ObjHeader c expires: Mon, 26 Jul 1997 05:00:00 GMT 14 ObjHeader c cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 14 ObjHeader c pragma: no-cache 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 200 14 TxResponse c OK 14 TxHeader c content-type: application/json; charset=UTF-8 14 TxHeader c expires: Mon, 26 Jul 1997 05:00:00 GMT 14 TxHeader c cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 14 TxHeader c pragma: no-cache 14 TxHeader c Content-Length: 40 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Mon, 01 Aug 2011 16:02:52 GMT 14 TxHeader c X-Varnish: 461203748 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 40 14 ReqEnd c 461203748 1312214572.960617781 1312214572.967000961 0.000028610 0.006352186 0.000030994 14 SessionClose c Connection: close 14 StatSess c 127.0.0.1 56497 0 1 1 0 1 1 349 40 16 TxRequest b POST 16 TxURL b /simplest- calc?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21 16 TxProtocol b HTTP/1.1 16 TxHeader b content-length: 66 16 TxHeader b X-Forwarded-For: 127.0.0.1 16 TxHeader b X-Varnish: 461203747 16 TxHeader b Host: 127.0.0.1 16 RxProtocol b HTTP/1.1 16 RxStatus b 200 16 RxResponse b OK 16 RxHeader b 
content-type: application/json; charset=UTF-8 16 RxHeader b expires: Mon, 26 Jul 1997 05:00:00 GMT 16 RxHeader b cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 16 RxHeader b pragma: no-cache 16 RxHeader b content-length: 40 16 RxHeader b accept-ranges: bytes 16 RxHeader b date: Mon, 01 Aug 2011 16:02:52 GMT 16 RxHeader b x-varnish: 461203748 16 RxHeader b age: 0 16 RxHeader b via: 1.1 varnish 16 RxHeader b connection: close 16 Fetch_Body b 4 0 1 16 Length b 40 16 BackendClose b default 13 SessionOpen c 127.0.0.1 56496 0.0.0.0:8080 13 ReqStart c 127.0.0.1 56496 461203747 13 RxRequest c POST 13 RxURL c /simplest- calc?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21 13 RxProtocol c HTTP/1.1 13 RxHeader c content-length: 66 13 RxHeader c Connection: close 13 VCL_call c recv pass 13 VCL_call c hash 13 Hash c /simplest- calc?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21 13 Hash c 127.0.0.1 13 VCL_return c hash 13 VCL_call c pass pass 13 Backend c 16 default default 13 TTL c 461203747 RFC 0 1312214573 1312214572 869893200 0 0 13 VCL_call c fetch hit_for_pass 13 ObjProtocol c HTTP/1.1 13 ObjResponse c OK 13 ObjHeader c content-type: application/json; charset=UTF-8 13 ObjHeader c expires: Mon, 26 Jul 1997 05:00:00 GMT 13 ObjHeader c cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 13 ObjHeader c pragma: no-cache 13 ObjHeader c content-length: 40 13 ObjHeader c accept-ranges: bytes 13 ObjHeader c date: Mon, 01 Aug 2011 16:02:52 GMT 13 ObjHeader c x-varnish: 461203748 13 ObjHeader c age: 0 13 ObjHeader c via: 1.1 varnish 13 VCL_call c deliver deliver 13 TxProtocol c HTTP/1.1 13 TxStatus c 200 13 TxResponse c OK 13 TxHeader c content-type: application/json; charset=UTF-8 13 TxHeader c expires: Mon, 26 Jul 1997 05:00:00 GMT 13 TxHeader c cache-control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 13 TxHeader c pragma: no-cache 13 TxHeader c accept-ranges: bytes 13 TxHeader c x-varnish: 461203748 13 
TxHeader c age: 0 13 TxHeader c via: 1.1 varnish 13 TxHeader c Content-Length: 40 13 TxHeader c Accept-Ranges: bytes 13 TxHeader c Date: Mon, 01 Aug 2011 16:02:52 GMT 13 TxHeader c X-Varnish: 461203747 13 TxHeader c Age: 0 13 TxHeader c Via: 1.1 varnish 13 TxHeader c Connection: close 13 Length c 40 13 ReqEnd c 461203747 1312214572.959781885 1312214572.967619658 0.000032187 0.007809162 0.000028610 13 SessionClose c Connection: close 13 StatSess c 127.0.0.1 56496 0 1 1 0 1 1 419 40 20 TxRequest b GET 20 TxURL b /.blob/7b17c1a1333d8cdce833b81259587af7ecf27395?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21&backend=true 20 TxProtocol b HTTP/1.1 20 TxHeader b x-varnish: 461203751 20 TxHeader b host: 127.0.0.1 20 TxHeader b X-Forwarded-For: 127.0.0.1, 127.0.0.1 20 TxHeader b X-Varnish: 461203752 20 TxHeader b Accept-Encoding: gzip 20 RxProtocol b HTTP/1.1 20 RxStatus b 200 20 RxResponse b OK 20 RxHeader b content-type: application/json; charset=UTF-8 20 RxHeader b etag: 7b17c1a1333d8cdce833b81259587af7ecf27395 20 RxHeader b cache-control: public, max-age=500000000 20 RxHeader b expires: Wed, 1 Jul 2099 05:00:00 GMT 20 RxHeader b Connection: keep-alive 20 RxHeader b Transfer-Encoding: chunked 20 Fetch_Body b 3 0 1 20 Length b 243 20 BackendReuse b default 14 SessionOpen c 127.0.0.1 56504 0.0.0.0:8080 14 ReqStart c 127.0.0.1 56504 461203752 14 RxRequest c GET 14 RxURL c /.blob/7b17c1a1333d8cdce833b81259587af7ecf27395?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21&backend=true 14 RxProtocol c HTTP/1.1 14 RxHeader c x-forwarded-for: 127.0.0.1 14 RxHeader c x-varnish: 461203751 14 RxHeader c accept-encoding: gzip 14 RxHeader c host: 127.0.0.1 14 RxHeader c Connection: close 14 VCL_call c recv lookup 14 VCL_call c hash 14 Hash c /.blob/7b17c1a1333d8cdce833b81259587af7ecf27395?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21&backend=true 14 Hash c 127.0.0.1 14 VCL_return c hash 14 VCL_call c miss fetch 14 Backend c 20 default default 14 TTL c 461203752 RFC 5e+08 1312214573 0 0 
500000000 0 14 VCL_call c fetch deliver 14 ObjProtocol c HTTP/1.1 14 ObjResponse c OK 14 ObjHeader c content-type: application/json; charset=UTF-8 14 ObjHeader c etag: 7b17c1a1333d8cdce833b81259587af7ecf27395 14 ObjHeader c cache-control: public, max-age=500000000 14 ObjHeader c expires: Wed, 1 Jul 2099 05:00:00 GMT 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 200 14 TxResponse c OK 14 TxHeader c content-type: application/json; charset=UTF-8 14 TxHeader c etag: 7b17c1a1333d8cdce833b81259587af7ecf27395 14 TxHeader c cache-control: public, max-age=500000000 14 TxHeader c expires: Wed, 1 Jul 2099 05:00:00 GMT 14 TxHeader c Content-Length: 243 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Mon, 01 Aug 2011 16:02:52 GMT 14 TxHeader c X-Varnish: 461203752 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 243 14 ReqEnd c 461203752 1312214572.968909264 1312214572.969332218 0.000026703 0.000396490 0.000026464 14 SessionClose c Connection: close 14 StatSess c 127.0.0.1 56504 0 1 1 0 0 1 342 243 15 TxRequest b GET 15 TxURL b /.blob/7b17c1a1333d8cdce833b81259587af7ecf27395?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21 15 TxProtocol b HTTP/1.1 15 TxHeader b X-Forwarded-For: 127.0.0.1 15 TxHeader b X-Varnish: 461203751 15 TxHeader b Accept-Encoding: gzip 15 TxHeader b Host: 127.0.0.1 15 RxProtocol b HTTP/1.1 15 RxStatus b 200 15 RxResponse b OK 15 RxHeader b content-type: application/json; charset=UTF-8 15 RxHeader b etag: 7b17c1a1333d8cdce833b81259587af7ecf27395 15 RxHeader b cache-control: public, max-age=500000000 15 RxHeader b expires: Wed, 1 Jul 2099 05:00:00 GMT 15 RxHeader b content-length: 243 15 RxHeader b accept-ranges: bytes 15 RxHeader b date: Mon, 01 Aug 2011 16:02:52 GMT 15 RxHeader b x-varnish: 461203752 15 RxHeader b age: 0 15 RxHeader b via: 1.1 varnish 15 RxHeader b connection: close 15 Fetch_Body b 4 0 1 15 Length b 243 15 BackendClose b default 13 SessionOpen c 127.0.0.1 56503 
0.0.0.0:8080 13 ReqStart c 127.0.0.1 56503 461203751 13 RxRequest c GET 13 RxURL c /.blob/7b17c1a1333d8cdce833b81259587af7ecf27395?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21 13 RxProtocol c HTTP/1.1 13 RxHeader c content-length: 0 13 RxHeader c Connection: close 13 VCL_call c recv lookup 13 VCL_call c hash 13 Hash c /.blob/7b17c1a1333d8cdce833b81259587af7ecf27395?bkey=464cb64ea78a3a382b018c21711fa100bf7c4e21 13 Hash c 127.0.0.1 13 VCL_return c hash 13 VCL_call c miss fetch 13 Backend c 15 default default 13 TTL c 461203751 RFC 5e+08 1312214573 0 0 500000000 0 13 VCL_call c fetch deliver 13 ObjProtocol c HTTP/1.1 13 ObjResponse c OK 13 ObjHeader c content-type: application/json; charset=UTF-8 13 ObjHeader c etag: 7b17c1a1333d8cdce833b81259587af7ecf27395 13 ObjHeader c cache-control: public, max-age=500000000 13 ObjHeader c expires: Wed, 1 Jul 2099 05:00:00 GMT 13 ObjHeader c date: Mon, 01 Aug 2011 16:02:52 GMT 13 ObjHeader c x-varnish: 461203752 13 ObjHeader c via: 1.1 varnish 13 VCL_call c deliver deliver 13 TxProtocol c HTTP/1.1 13 TxStatus c 200 13 TxResponse c OK 13 TxHeader c content-type: application/json; charset=UTF-8 13 TxHeader c etag: 7b17c1a1333d8cdce833b81259587af7ecf27395 13 TxHeader c cache-control: public, max-age=500000000 13 TxHeader c expires: Wed, 1 Jul 2099 05:00:00 GMT 13 TxHeader c x-varnish: 461203752 13 TxHeader c via: 1.1 varnish 13 TxHeader c Content-Length: 243 13 TxHeader c Accept-Ranges: bytes 13 TxHeader c Date: Mon, 01 Aug 2011 16:02:52 GMT 13 TxHeader c X-Varnish: 461203751 13 TxHeader c Age: 0 13 TxHeader c Via: 1.1 varnish 13 TxHeader c Connection: close 13 Length c 243 13 ReqEnd c 461203751 1312214572.968373775 1312214572.969892979 0.000033379 0.001492262 0.000026941 13 SessionClose c Connection: close 13 StatSess c 127.0.0.1 56503 0 1 1 0 0 1 382 243 15 BackendOpen b default 127.0.0.1 37399 127.0.0.1 8000 15 TxRequest b POST 15 TxURL b /?backend=true 15 TxProtocol b HTTP/1.1 15 TxHeader b content-length: 48 15 TxHeader b 
x-varnish: 461203753 15 TxHeader b host: 127.0.0.1 15 TxHeader b X-Forwarded-For: 127.0.0.1, 127.0.0.1 15 TxHeader b X-Varnish: 461203754 15 RxProtocol b HTTP/1.1 15 RxStatus b 200 15 RxResponse b OK 15 RxHeader b Connection: keep-alive 15 RxHeader b Transfer-Encoding: chunked 15 Fetch_Body b 3 0 1 15 Length b 0 15 BackendReuse b default 14 SessionOpen c 127.0.0.1 56506 0.0.0.0:8080 14 ReqStart c 127.0.0.1 56506 461203754 14 RxRequest c POST 14 RxURL c /?backend=true 14 RxProtocol c HTTP/1.1 14 RxHeader c content-length: 48 14 RxHeader c x-forwarded-for: 127.0.0.1 14 RxHeader c x-varnish: 461203753 14 RxHeader c host: 127.0.0.1 14 RxHeader c Connection: close 14 VCL_call c recv pass 14 VCL_call c hash 14 Hash c /?backend=true 14 Hash c 127.0.0.1 14 VCL_return c hash 14 VCL_call c pass pass 14 Backend c 15 default default 14 TTL c 461203754 RFC 120 1312214573 0 0 0 0 14 VCL_call c fetch hit_for_pass 14 ObjProtocol c HTTP/1.1 14 ObjResponse c OK 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 200 14 TxResponse c OK 14 TxHeader c Content-Length: 0 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Mon, 01 Aug 2011 16:02:52 GMT 14 TxHeader c X-Varnish: 461203754 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 0 14 ReqEnd c 461203754 1312214572.971561193 1312214572.972298384 0.000029564 0.000707388 0.000029802 14 SessionClose c Connection: close 14 StatSess c 127.0.0.1 56506 0 1 1 0 1 1 164 0 20 TxRequest b POST 20 TxURL b / 20 TxProtocol b HTTP/1.1 20 TxHeader b content-length: 48 20 TxHeader b X-Forwarded-For: 127.0.0.1 20 TxHeader b X-Varnish: 461203753 20 TxHeader b Host: 127.0.0.1 20 RxProtocol b HTTP/1.1 20 RxStatus b 200 20 RxResponse b OK 20 RxHeader b content-length: 0 20 RxHeader b accept-ranges: bytes 20 RxHeader b date: Mon, 01 Aug 2011 16:02:52 GMT 20 RxHeader b x-varnish: 461203754 20 RxHeader b age: 0 20 RxHeader b via: 1.1 varnish 20 RxHeader b connection: close 20 Fetch_Body b 4 0 
1 20 Length b 0 20 BackendClose b default 13 SessionOpen c 127.0.0.1 56505 0.0.0.0:8080 13 ReqStart c 127.0.0.1 56505 461203753 13 RxRequest c POST 13 RxURL c / 13 RxProtocol c HTTP/1.1 13 RxHeader c content-length: 48 13 RxHeader c Connection: close 13 VCL_call c recv pass 13 VCL_call c hash 13 Hash c / 13 Hash c 127.0.0.1 13 VCL_return c hash 13 VCL_call c pass pass 13 Backend c 20 default default 13 TTL c 461203753 RFC 120 1312214573 0 0 0 0 13 VCL_call c fetch hit_for_pass 13 ObjProtocol c HTTP/1.1 13 ObjResponse c OK 13 ObjHeader c content-length: 0 13 ObjHeader c accept-ranges: bytes 13 ObjHeader c date: Mon, 01 Aug 2011 16:02:52 GMT 13 ObjHeader c x-varnish: 461203754 13 ObjHeader c age: 0 13 ObjHeader c via: 1.1 varnish 13 VCL_call c deliver deliver 13 TxProtocol c HTTP/1.1 13 TxStatus c 200 13 TxResponse c OK 13 TxHeader c accept-ranges: bytes 13 TxHeader c x-varnish: 461203754 13 TxHeader c age: 0 13 TxHeader c via: 1.1 varnish 13 TxHeader c Content-Length: 0 13 TxHeader c Accept-Ranges: bytes 13 TxHeader c Date: Mon, 01 Aug 2011 16:02:52 GMT 13 TxHeader c X-Varnish: 461203753 13 TxHeader c Age: 0 13 TxHeader c Via: 1.1 varnish 13 TxHeader c Connection: close 13 Length c 0 13 ReqEnd c 461203753 1312214572.971080780 1312214572.972627163 0.000027657 0.001520157 0.000026226 13 SessionClose c Connection: close 13 StatSess c 127.0.0.1 56505 0 1 1 0 1 1 234 0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1312214574 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1312214577 1.0 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 07:37:18 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 07:37:18 -0000 Subject: [Varnish] #930: Caching rules didn't work after upgrading to 2.1.5 In-Reply-To: <041.ec38446941555ede73ce64c0a7745e21@varnish-cache.org> References: <041.ec38446941555ede73ce64c0a7745e21@varnish-cache.org> Message-ID: <050.270a363f6f2101e2865d4af4d91b8ffd@varnish-cache.org> #930: Caching 
rules didn't work after upgrading to 2.1.5 ----------------------------------+----------------------------------------- Reporter: pdah | Type: defect Status: closed | Priority: high Milestone: Varnish 2.1 release | Component: varnishd Version: 2.1.5 | Severity: major Resolution: worksforme | Keywords: ----------------------------------+----------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => worksforme Comment: No response from submitter, closing. Please reopen the bug if you can provide the requested information. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 07:39:09 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 07:39:09 -0000 Subject: [Varnish] #939: Error 400 if a single header exceeds 2048 characters In-Reply-To: <042.040f5d1e4f9b5d1da3859fd4e9ea4778@varnish-cache.org> References: <042.040f5d1e4f9b5d1da3859fd4e9ea4778@varnish-cache.org> Message-ID: <051.f812e77c85461809670490e892d9562a@varnish-cache.org> #939: Error 400 if a single header exceeds 2048 characters ---------------------+------------------------------------------------------ Reporter: david | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Resolution: fixed | Keywords: ---------------------+------------------------------------------------------ Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: Closing as fixed; we've changed the response code and increased the limit. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 09:48:08 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 09:48:08 -0000 Subject: [Varnish] #958: Transient Storage memleak? 
In-Reply-To: <042.0d328ed4c5bf692db0b6192b0f39bfb1@varnish-cache.org> References: <042.0d328ed4c5bf692db0b6192b0f39bfb1@varnish-cache.org> Message-ID: <051.56eacb07c334fde4847c5fe3ee80fad9@varnish-cache.org> #958: Transient Storage memleak? ----------------------+----------------------------------------------------- Reporter: fdy84 | Owner: phk Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: duplicate Keywords: memleak | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => duplicate Comment: This is a dup of #953 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 09:49:42 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 09:49:42 -0000 Subject: [Varnish] #953: Leak in the TransientStorage? In-Reply-To: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> References: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> Message-ID: <052.7cbc5437995a8d8c4a39305bcc17e9cc@varnish-cache.org> #953: Leak in the TransientStorage? ----------------------+----------------------------------------------------- Reporter: elurin | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [beb0c5b1f4f49d711822e90ca73d69bbed683a71]) Cap the TTL (to param "shortlived") when we use the Transient storage to avoid dropping an object on out of storage conditions. I belive this... Fixes #953 Otherwise please reopen. 
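For reference, the "shortlived" parameter named in the fix above can be inspected and tuned at runtime, and the Transient storage itself can be given an explicit size cap when varnishd is started. This is a sketch assuming a Varnish 3.0 install with varnishadm pointed at the default management interface; the numeric values are illustrative only, not recommendations:

{{{
# Show the TTL threshold (in seconds) below which objects count as
# short-lived and are kept in Transient storage:
varnishadm param.show shortlived

# Adjust the threshold at runtime; 10 is an illustrative value:
varnishadm param.set shortlived 10

# Transient storage defaults to unbounded malloc; it can be capped
# explicitly when starting varnishd, e.g. at 128 MB alongside the
# main 1 GB malloc store:
varnishd -f /etc/varnish/default.vcl -s malloc,1G -s Transient=malloc,128m
}}}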
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 12:07:07 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 12:07:07 -0000 Subject: [Varnish] #960: Test suite not working on a single cpu host on linux In-Reply-To: <044.245b526f140a9829b75db17f895978d1@varnish-cache.org> References: <044.245b526f140a9829b75db17f895978d1@varnish-cache.org> Message-ID: <053.5842f8abe96b4ab6b1a1de54c016bc84@varnish-cache.org> #960: Test suite not working on a single cpu host on linux ---------------------+------------------------------------------------------ Reporter: pmialon | Owner: tfheen Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ---------------------+------------------------------------------------------ Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: I believe this has been fixed in: commit b5c9ba0e7c087bede4506b948b78755682eeb373 Author: Tollef Fog Heen Date: Tue Aug 2 14:05:15 2011 +0200 Be more aggressive in getting rid of threads and slower in adding threads, hopefully fixing c00002 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 12:08:09 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 12:08:09 -0000 Subject: [Varnish] #965: Restarting a request in vcl_miss causes Varnish client crash. In-Reply-To: <042.7a34951b806a6025ec80b0b6379c27a4@varnish-cache.org> References: <042.7a34951b806a6025ec80b0b6379c27a4@varnish-cache.org> Message-ID: <051.c707ace0c05c04296a1104b74f1d3c5c@varnish-cache.org> #965: Restarting a request in vcl_miss causes Varnish client crash. 
--------------------+------------------------------------------------------- Reporter: david | Owner: tfheen Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: --------------------+------------------------------------------------------- Description changed by tfheen: Old description: > Hello! > > I've done extensive testing on this, and I believe I've found a bug in > Varnish. The VCL below loads correctly, but causes the Varnish client to > crash after 1-2 requests. The backstory is available in the forums here > https://www.varnish-cache.org/forum/topic/65 - if you have any questions > that are not answered in that post, please ask away; I'd love to explain > and make more sense of what I'm trying to do. This isn't the exact VCL > I'd use in production, it is a minimalistic version that is able to > reproduce the crash: > > {{{ > sub vcl_recv { > if (!req.http.X-Forwarded-For) { set req.http.X-Forwarded-For = > client.ip; } > if (req.http.X-Banned == "check") { remove req.http.X-Banned; } > elseif (req.restarts == 0) { > set req.http.X-Banned = "check"; > return (lookup); > } > } > > sub vcl_hash { > ## Check if they have a ban in the cache, or if they are going to be > banned in cache. > if (req.http.X-Banned) { > hash_data(req.http.X-Forwarded-For); > return (hash); > } > } > > sub vcl_error { > if (obj.status == 988) { return (restart); } > } > > sub vcl_miss { > if (req.http.X-Banned == "check") { error 988 "restarting"; } > } > > This is a successful request. 
After this request, the second request will > not work (Varnish restarts): > 0 Debug - "VCL_error(988, restarting)" > 9 BackendOpen b production[47] 172.21.4.182 41813 172.21.4.111 80 > 9 TxRequest b GET > 9 TxURL b http://www.site.com/ > 9 TxProtocol b HTTP/1.1 > 9 TxHeader b User-Agent: curl/7.19.7 (universal-apple-darwin10.0) > libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3 > 9 TxHeader b Host: www.site.com > 9 TxHeader b Accept: */* > 9 TxHeader b Proxy-Connection: Keep-Alive > 9 TxHeader b X-Forwarded-For: 127.0.0.1 > 9 TxHeader b X-Varnish: 746583022 > 9 TxHeader b Accept-Encoding: gzip > 9 RxProtocol b HTTP/1.1 > 9 RxStatus b 200 > 9 RxResponse b OK > 9 RxHeader b Date: Thu, 16 Jun 2011 23:45:54 GMT > 9 RxHeader b Server: Apache/2.2.3 (CentOS) > 9 RxHeader b Cache-Control: private, proxy-revalidate > 9 RxHeader b ETag: "9658bc1e80033b21277323e725948c91" > 9 RxHeader b Content-Encoding: gzip > 9 RxHeader b Vary: Accept-Encoding > 9 RxHeader b Content-length: 11452 > 9 RxHeader b Content-Type: text/html; charset=utf-8 > 9 RxHeader b Content-Language: en > 9 Fetch_Body b 4 0 1 > 9 Length b 11452 > 9 BackendReuse b production[47] > 3 SessionOpen c 127.0.0.1 1204 :80 > 3 ReqStart c 127.0.0.1 1204 746583022 > 3 RxRequest c HEAD > 3 RxURL c http://www.site.com/ > 3 RxProtocol c HTTP/1.1 > 3 RxHeader c User-Agent: curl/7.19.7 (universal-apple-darwin10.0) > libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3 > 3 RxHeader c Host: www.site.com > 3 RxHeader c Accept: */* > 3 RxHeader c Proxy-Connection: Keep-Alive > 3 VCL_call c recv lookup > 3 VCL_call c hash > 3 Hash c 127.0.0.1 > 3 VCL_return c hash > 3 VCL_call c miss error > 3 VCL_call c error restart > 3 VCL_call c recv lookup > 3 VCL_call c hash > 3 Hash c http://www.site.com/ > 3 Hash c www.site.com > 3 VCL_return c hash > 3 VCL_call c miss fetch > 3 Backend c 9 production production[47] > 3 TTL c 746583022 RFC 300 1308267955 0 0 0 0 > 3 VCL_call c fetch deliver > 3 ObjProtocol c HTTP/1.1 > 3 ObjResponse c OK > 3 ObjHeader c 
Date: Thu, 16 Jun 2011 23:45:54 GMT > 3 ObjHeader c Server: Apache/2.2.3 (CentOS) > 3 ObjHeader c Cache-Control: private, proxy-revalidate > 3 ObjHeader c ETag: "9658bc1e80033b21277323e725948c91" > 3 ObjHeader c Content-Encoding: gzip > 3 ObjHeader c Vary: Accept-Encoding > 3 ObjHeader c Content-Type: text/html; charset=utf-8 > 3 ObjHeader c Content-Language: en > 3 Gzip c u F - 11452 46817 80 80 91551 > 3 VCL_call c deliver deliver > 3 TxProtocol c HTTP/1.1 > 3 TxStatus c 200 > 3 TxResponse c OK > 3 TxHeader c Server: Apache/2.2.3 (CentOS) > 3 TxHeader c Cache-Control: private, proxy-revalidate > 3 TxHeader c ETag: "9658bc1e80033b21277323e725948c91" > 3 TxHeader c Vary: Accept-Encoding > 3 TxHeader c Content-Type: text/html; charset=utf-8 > 3 TxHeader c Content-Language: en > 3 TxHeader c Date: Thu, 16 Jun 2011 23:45:54 GMT > 3 TxHeader c X-Varnish: 746583022 > 3 TxHeader c Age: 0 > 3 TxHeader c Via: 1.1 varnish > 3 TxHeader c Connection: keep-alive > 3 Length c 0 > 3 ReqEnd c 746583022 1308267954.333659887 1308267954.527986050 > 0.000048876 0.194267035 0.000059128 > 3 Debug c herding > 3 SessionClose c no request > 3 StatSess c 127.0.0.1 1204 0 1 1 0 0 1 344 0 > 0 Backend_health - production[33] Still healthy ------- 4 3 8 > 0.000000 0.000272 > 3 SessionOpen c 172.21.4.16 57711 :80 > 3 SessionClose c EOF > }}} > Regards, > -david New description: Hello! I've done extensive testing on this, and I believe I've found a bug in Varnish. The VCL below loads correctly, but causes the Varnish client to crash after 1-2 requests. The backstory is available in the forums here https://www.varnish-cache.org/forum/topic/65 - if you have any questions that are not answered in that post, please ask away; I'd love to explain and make more sense of what I'm trying to do. 
This isn't the exact VCL I'd use in production, it is a minimalistic version that is able to reproduce the crash: {{{ sub vcl_recv { if (!req.http.X-Forwarded-For) { set req.http.X-Forwarded-For = client.ip; } if (req.http.X-Banned == "check") { remove req.http.X-Banned; } elseif (req.restarts == 0) { set req.http.X-Banned = "check"; return (lookup); } } sub vcl_hash { ## Check if they have a ban in the cache, or if they are going to be banned in cache. if (req.http.X-Banned) { hash_data(req.http.X-Forwarded-For); return (hash); } } sub vcl_error { if (obj.status == 988) { return (restart); } } sub vcl_miss { if (req.http.X-Banned == "check") { error 988 "restarting"; } } }}} This is a successful request. After this request, the second request will not work (Varnish restarts): {{{ 0 Debug - "VCL_error(988, restarting)" 9 BackendOpen b production[47] 172.21.4.182 41813 172.21.4.111 80 9 TxRequest b GET 9 TxURL b http://www.site.com/ 9 TxProtocol b HTTP/1.1 9 TxHeader b User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3 9 TxHeader b Host: www.site.com 9 TxHeader b Accept: */* 9 TxHeader b Proxy-Connection: Keep-Alive 9 TxHeader b X-Forwarded-For: 127.0.0.1 9 TxHeader b X-Varnish: 746583022 9 TxHeader b Accept-Encoding: gzip 9 RxProtocol b HTTP/1.1 9 RxStatus b 200 9 RxResponse b OK 9 RxHeader b Date: Thu, 16 Jun 2011 23:45:54 GMT 9 RxHeader b Server: Apache/2.2.3 (CentOS) 9 RxHeader b Cache-Control: private, proxy-revalidate 9 RxHeader b ETag: "9658bc1e80033b21277323e725948c91" 9 RxHeader b Content-Encoding: gzip 9 RxHeader b Vary: Accept-Encoding 9 RxHeader b Content-length: 11452 9 RxHeader b Content-Type: text/html; charset=utf-8 9 RxHeader b Content-Language: en 9 Fetch_Body b 4 0 1 9 Length b 11452 9 BackendReuse b production[47] 3 SessionOpen c 127.0.0.1 1204 :80 3 ReqStart c 127.0.0.1 1204 746583022 3 RxRequest c HEAD 3 RxURL c http://www.site.com/ 3 RxProtocol c HTTP/1.1 3 RxHeader c User-Agent: curl/7.19.7 
(universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3 3 RxHeader c Host: www.site.com 3 RxHeader c Accept: */* 3 RxHeader c Proxy-Connection: Keep-Alive 3 VCL_call c recv lookup 3 VCL_call c hash 3 Hash c 127.0.0.1 3 VCL_return c hash 3 VCL_call c miss error 3 VCL_call c error restart 3 VCL_call c recv lookup 3 VCL_call c hash 3 Hash c http://www.site.com/ 3 Hash c www.site.com 3 VCL_return c hash 3 VCL_call c miss fetch 3 Backend c 9 production production[47] 3 TTL c 746583022 RFC 300 1308267955 0 0 0 0 3 VCL_call c fetch deliver 3 ObjProtocol c HTTP/1.1 3 ObjResponse c OK 3 ObjHeader c Date: Thu, 16 Jun 2011 23:45:54 GMT 3 ObjHeader c Server: Apache/2.2.3 (CentOS) 3 ObjHeader c Cache-Control: private, proxy-revalidate 3 ObjHeader c ETag: "9658bc1e80033b21277323e725948c91" 3 ObjHeader c Content-Encoding: gzip 3 ObjHeader c Vary: Accept-Encoding 3 ObjHeader c Content-Type: text/html; charset=utf-8 3 ObjHeader c Content-Language: en 3 Gzip c u F - 11452 46817 80 80 91551 3 VCL_call c deliver deliver 3 TxProtocol c HTTP/1.1 3 TxStatus c 200 3 TxResponse c OK 3 TxHeader c Server: Apache/2.2.3 (CentOS) 3 TxHeader c Cache-Control: private, proxy-revalidate 3 TxHeader c ETag: "9658bc1e80033b21277323e725948c91" 3 TxHeader c Vary: Accept-Encoding 3 TxHeader c Content-Type: text/html; charset=utf-8 3 TxHeader c Content-Language: en 3 TxHeader c Date: Thu, 16 Jun 2011 23:45:54 GMT 3 TxHeader c X-Varnish: 746583022 3 TxHeader c Age: 0 3 TxHeader c Via: 1.1 varnish 3 TxHeader c Connection: keep-alive 3 Length c 0 3 ReqEnd c 746583022 1308267954.333659887 1308267954.527986050 0.000048876 0.194267035 0.000059128 3 Debug c herding 3 SessionClose c no request 3 StatSess c 127.0.0.1 1204 0 1 1 0 0 1 344 0 0 Backend_health - production[33] Still healthy ------- 4 3 8 0.000000 0.000272 3 SessionOpen c 172.21.4.16 57711 :80 3 SessionClose c EOF }}} Regards, -david -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 
12:36:39 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 12:36:39 -0000 Subject: [Varnish] #960: Test suite not working on a single cpu host on linux In-Reply-To: <044.245b526f140a9829b75db17f895978d1@varnish-cache.org> References: <044.245b526f140a9829b75db17f895978d1@varnish-cache.org> Message-ID: <053.83617d9935c32dc85d5a901126a3bdd1@varnish-cache.org> #960: Test suite not working on a single cpu host on linux ---------------------+------------------------------------------------------ Reporter: pmialon | Owner: tfheen Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ---------------------+------------------------------------------------------ Comment(by pmialon): Replying to [comment:2 tfheen]: > I believe this has been fixed in: > > > commit b5c9ba0e7c087bede4506b948b78755682eeb373 > Author: Tollef Fog Heen > Date: Tue Aug 2 14:05:15 2011 +0200 > > Be more aggressive in getting rid of threads and slower in adding threads, hopefully fixing c00002 I confirm that with the latest git checkout the test no longer fails. Thank you! -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 12:57:27 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 12:57:27 -0000 Subject: [Varnish] #965: Restarting a request in vcl_miss causes Varnish client crash. In-Reply-To: <042.7a34951b806a6025ec80b0b6379c27a4@varnish-cache.org> References: <042.7a34951b806a6025ec80b0b6379c27a4@varnish-cache.org> Message-ID: <051.f29968a326f4a260ecd2c4f796841caf@varnish-cache.org> #965: Restarting a request in vcl_miss causes Varnish client crash.
--------------------+------------------------------------------------------- Reporter: david | Owner: tfheen Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------------------------------- Changes (by Tollef Fog Heen ): * status: new => closed * resolution: => fixed Comment: (In [a39f3ee67c0a88bb3b5a0b97a9070bcc6a11f92a]) Reset bereq http struct on restart from vcl_miss and vcl_pass Thanks a lot to David for minimised test case showing the bug. Fixes: #965 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 14:07:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 14:07:14 -0000 Subject: [Varnish] #968: "Resource temporarily unavailable" error when backend responds with Transfer-Encoding: chunked In-Reply-To: <041.a50a0a6d6d797e6900b6e7fc153284ff@varnish-cache.org> References: <041.a50a0a6d6d797e6900b6e7fc153284ff@varnish-cache.org> Message-ID: <050.3c1e6f58262bbd3a2e5cedd3b2cc2c50@varnish-cache.org> #968: "Resource temporarily unavailable" error when backend responds with Transfer-Encoding: chunked ----------------------+----------------------------------------------------- Reporter: sctb | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: chunked hang transfer-encoding 503 ----------------------+----------------------------------------------------- Comment(by martin): So do I understand you correctly that it wasn't a Varnish problem and I can close the ticket? 
-Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 15:08:22 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 15:08:22 -0000 Subject: [Varnish] #968: "Resource temporarily unavailable" error when backend responds with Transfer-Encoding: chunked In-Reply-To: <041.a50a0a6d6d797e6900b6e7fc153284ff@varnish-cache.org> References: <041.a50a0a6d6d797e6900b6e7fc153284ff@varnish-cache.org> Message-ID: <050.d4fd3db9cb6c69f7ef5ac70b877d24c7@varnish-cache.org> #968: "Resource temporarily unavailable" error when backend responds with Transfer-Encoding: chunked ----------------------+----------------------------------------------------- Reporter: sctb | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: chunked hang transfer-encoding 503 ----------------------+----------------------------------------------------- Comment(by sctb): Replying to [comment:4 martin]: > So do I understand you correctly that it wasn't a Varnish problem and I can close the ticket? > > -Martin Yup, close away. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 17:57:33 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 17:57:33 -0000 Subject: [Varnish] #968: "Resource temporarily unavailable" error when backend responds with Transfer-Encoding: chunked In-Reply-To: <041.a50a0a6d6d797e6900b6e7fc153284ff@varnish-cache.org> References: <041.a50a0a6d6d797e6900b6e7fc153284ff@varnish-cache.org> Message-ID: <050.623635ba0b677a49f68040ff8a8509ec@varnish-cache.org> #968: "Resource temporarily unavailable" error when backend responds with Transfer-Encoding: chunked ------------------------------------------------+--------------------------- Reporter: sctb | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: invalid Keywords: chunked hang transfer-encoding 503 | ------------------------------------------------+--------------------------- Changes (by martin): * status: new => closed * resolution: => invalid Comment: Not a Varnish bug, closing ticket. 
-Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 18:36:12 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 18:36:12 -0000 Subject: [Varnish] #964: varnish is losing (java) session data In-Reply-To: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> References: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> Message-ID: <056.c13925ed381ccf6491331125a0b8c1d4@varnish-cache.org> #964: varnish is losing (java) session data ------------------------+--------------------------------------------------- Reporter: pravenjohn | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.0 | Severity: critical Keywords: | ------------------------+--------------------------------------------------- Comment(by pravenjohn): Sorry, since it's already been over 2 weeks, we no longer have any of the old logs corresponding to that time (this is our production environment). The above is our entire VCL; the only difference between this and our present Prod env (which we implemented to "fix" the above issue) is we removed the pipe- if (req.request == "POST") { return(pipe); } sub vcl_pipe { set bereq.http.connection = "close"; } After this change, we no longer lose java session data. But now we're back to one of our original issues, wherein a few clients, while carrying out POST transactions, get 503 ... 
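[Editor's note: a minimal sketch of the change described above, with names assumed from the VCL quoted earlier in this ticket; returning pass here is an assumption, since Varnish's default vcl_recv would also pass non-GET/HEAD requests:]

{{{
sub vcl_recv {
    # Previously POSTs were piped straight to the backend:
    #   if (req.request == "POST") { return(pipe); }
    # Piping hands the whole connection over byte-for-byte, so
    # per-request processing is lost; passing instead keeps each
    # POST a normal, uncached backend transaction.
    if (req.request == "POST") {
        return (pass);
    }
}
}}}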
Regards Praven John -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 2 19:48:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Aug 2011 19:48:21 -0000 Subject: [Varnish] #823: Ability to determine number of healthy backends from varnishstat In-Reply-To: <042.95f938109ab74caae44fa84d51715c76@varnish-cache.org> References: <042.95f938109ab74caae44fa84d51715c76@varnish-cache.org> Message-ID: <051.698a2223e0e2642f189e68e011619977@varnish-cache.org> #823: Ability to determine number of healthy backends from varnishstat ------------------------------------------+--------------------------------- Reporter: glenk | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Later Component: varnishstat | Version: trunk Severity: normal | Resolution: fixed Keywords: varnishstat healthy backends | ------------------------------------------+--------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: Well, it has hit now. Varnish 3.0 has dynamic counters, amongst these the health status of backends. Suggestions and ideas are, as always, welcome. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From jordi.prats at gmail.com Wed Aug 3 15:38:40 2011 From: jordi.prats at gmail.com (Jordi Prats) Date: Wed, 3 Aug 2011 17:38:40 +0200 Subject: keepalive timeout? Message-ID: Hi everybody, According to the man page, the default sess_timeout value is 5 seconds: sess_timeout Units: seconds Default: 5 But with a network capture you can see the time between the first packet and the last one is approximately 10 seconds. Furthermore, if you set sess_timeout to, for instance, 4 seconds, in the network capture you get exactly that. I know that sess_timeout is not exactly that, but I'm assuming a really quick request so it does not take more than a few milliseconds. Can you correct the one that is wrong, the man page or the code? 
thanks, -- Jordi From varnish-bugs at varnish-cache.org Thu Aug 4 10:45:07 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 04 Aug 2011 10:45:07 -0000 Subject: [Varnish] #971: Broken DNS director Message-ID: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> #971: Broken DNS director -------------------+-------------------------------------------------------- Reporter: rdvn | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- '''Description of the problem:''' Child process segfaults when using a DNS director. '''Version-Release number of selected component (if applicable):''' varnish-3.0.0-2.el5.x86_64 '''How reproducible:''' Every time. '''Step to reproduce:''' 1. Set up a DNS director 2. Start varnishd '''Actual results:''' Child process segfaults. '''Expected results:''' Working DNS director. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 4 10:45:50 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 04 Aug 2011 10:45:50 -0000 Subject: [Varnish] #971: Broken DNS director In-Reply-To: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> References: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> Message-ID: <050.46bcbb66dc6608c386ce1bd445f260e8@varnish-cache.org> #971: Broken DNS director -------------------+-------------------------------------------------------- Reporter: rdvn | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Comment(by rdvn): gdb backtrace {{{ #0 VBE_UseHealth (vdi=0x0) at cache_backend.c:428 #1 0x000000000043236c in ccf_config_use (cli=, av=, priv=) at cache_vcl.c:309 #2 0x00000031ed806b4e in cls_dispatch (priv=0x7f8d58a0d780, av=0x7f8d521ae880) at 
cli_serve.c:228 #3 cls_vlu2 (priv=0x7f8d58a0d780, av=0x7f8d521ae880) at cli_serve.c:284 #4 0x00000031ed80702d in cls_vlu (priv=0x7f8d58a0d780, p=) at cli_serve.c:339 #5 0x00000031ed809fe9 in LineUpProcess (l=0x7f8d58a02790) at vlu.c:154 #6 0x00000031ed805dbf in VCLS_Poll (cs=0x7f8d58ac00b0, timeout=) at cli_serve.c:528 #7 0x0000000000418481 in CLI_Run () at cache_cli.c:112 #8 0x000000000042b093 in child_main () at cache_main.c:138 #9 0x000000000043d34e in start_child (cli=0x7f8d58a0d570) at mgt_child.c:345 #10 0x00000031ed806b4e in cls_dispatch (priv=0x7f8d58a0d540, av=0x7f8d58a11380) at cli_serve.c:228 #11 cls_vlu2 (priv=0x7f8d58a0d540, av=0x7f8d58a11380) at cli_serve.c:284 #12 0x00000031ed80702d in cls_vlu (priv=0x7f8d58a0d540, p=) at cli_serve.c:339 #13 0x00000031ed809fe9 in LineUpProcess (l=0x7f8d58a02520) at vlu.c:154 #14 0x00000031ed806020 in VCLS_PollFd (cs=0x7f8d58ac0060, fd=, timeout=0) at cli_serve.c:489 #15 0x00000031ed808e2d in vev_schedule_one (evb=0x7f8d58a04040) at vev.c:498 #16 0x00000031ed8090a8 in vev_schedule (evb=0x7f8d58a04040) at vev.c:363 #17 0x000000000043d535 in MGT_Run () at mgt_child.c:602 #18 0x000000000044c308 in main (argc=, argv=) at varnishd.c:649 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 4 13:02:15 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 04 Aug 2011 13:02:15 -0000 Subject: [Varnish] #971: Broken DNS director In-Reply-To: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> References: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> Message-ID: <050.4936fa7b806adf3e000bdbee704d0407@varnish-cache.org> #971: Broken DNS director -------------------+-------------------------------------------------------- Reporter: rdvn | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Comment(by kristian): 
Can you attach the VCL you use for this? How long does it take to segfault? Does it serve any traffic? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 4 14:07:38 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 04 Aug 2011 14:07:38 -0000 Subject: [Varnish] #971: Broken DNS director In-Reply-To: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> References: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> Message-ID: <050.c178d02a01541256bd0b267f204d2aa8@varnish-cache.org> #971: Broken DNS director -------------------+-------------------------------------------------------- Reporter: rdvn | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Comment(by rdvn): I used this VCL for testing/generating the core dump. But first I found this out I was using my production VCL, which was working OK on Varnish 2.1.3. Of course, after making needed modifications. It segfaults right after trying to use VCL. Attaching strace output. {{{ backend default { .host = "127.0.0.1"; .port = "80"; } director test dns { .list = { .port = "80"; "192.168.16.128"/25; } .ttl = 15m; } sub vcl_recv { if (req.restarts == 0) { if (req.http.x-forwarded-for) { set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip; } else { set req.http.X-Forwarded-For = client.ip; } } set req.backend = test; if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* Non-RFC2616 or CONNECT which is weird. 
*/ return (pipe); } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ return (pass); } if (req.http.Authorization || req.http.Cookie) { /* Not cacheable by default */ return (pass); } return (lookup); } }}} {{{ [pid 1619] writev(13, [{"200 36 \n", 13}, {"Loaded \"./vcl.bDmyBp0R.so\" as \"b"..., 36}, {"\n", 1}], 3 [pid 1621] set_robust_list(0x7fd42bdf19e0, 0x18) = 0 [pid 1623] set_robust_list(0x7fd42a7fe9e0, 0x18) = 0 [pid 1623] nanosleep({180, 0}, [pid 1622] set_robust_list(0x7fd42b1ff9e0, 0x18) = 0 [pid 1622] nanosleep({1, 0}, [pid 1624] set_robust_list(0x7fd429dfd9e0, 0x18) = 0 [pid 1624] nanosleep({0, 10000000}, [pid 1596] <... poll resumed> ) = 1 ([{fd=12, revents=POLLIN}]) [pid 1596] read(12, "200 36 \n", 13) = 13 [pid 1596] poll([{fd=12, events=POLLIN}], 1, 10000) = 1 ([{fd=12, revents=POLLIN}]) [pid 1596] read(12, "Loaded \"./vcl.bDmyBp0R.so\" as \"b"..., 37) = 37 [pid 1596] write(11, "vcl.use \"boot\"\n", 15) = 15 [pid 1596] poll([{fd=12, events=POLLIN}], 1, 10000 [pid 1619] <... 
writev resumed> ) = 50 [pid 1619] poll([{fd=10, events=POLLIN}], 1, -1) = 1 ([{fd=10, revents=POLLIN}]) [pid 1619] read(10, "vcl.use \"boot\"\n", 8191) = 15 [pid 1619] --- SIGSEGV (Segmentation fault) @ 0 (0) --- }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 4 14:13:24 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 04 Aug 2011 14:13:24 -0000 Subject: [Varnish] #971: Broken DNS director In-Reply-To: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> References: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> Message-ID: <050.df67bf82257ec61c949eea4899295863@varnish-cache.org> #971: Broken DNS director -------------------+-------------------------------------------------------- Reporter: rdvn | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Comment(by rdvn): I just found out another thing. When I define .connect_timeout, as described in documentation. I can't even compile VCL. {{{ director test dns { .list = { .port = "80"; .connect_timeout = 0.4; "192.168.16.128"/25; } .ttl = 15m; } }}} {{{ Message from VCC-compiler: Expected ID got ';' (program line 186), at ('input' Line 9 Pos 27) .connect_timeout = 0.4; --------------------------# Expected '.' 
got '"192.168.16.128"' (program line 98), at ('input' Line 10 Pos 5) "192.168.16.128"/25; ----################---- In director specification starting at: ('input' Line 6 Pos 1) director test dns { ########----------- Running VCC-compiler failed, exit 1 VCL compilation failed }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 4 14:31:26 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 04 Aug 2011 14:31:26 -0000 Subject: [Varnish] #971: Broken DNS director In-Reply-To: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> References: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> Message-ID: <050.82c29c867bbfd3b08c36eec884acafdc@varnish-cache.org> #971: Broken DNS director -------------------+-------------------------------------------------------- Reporter: rdvn | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Comment(by kristian): Ok, got it... While you wait for a proper fix, inverting the order of your director and backend definition should avoid triggering this. Now that I can reproduce, it shouldn't be hard to fix. For the connect_timeout, you're just missing a unit. Try 0.4s instead of 0.4. That error message isn't all that intuitive, I'm afraid, and I noticed you got another level of error messages that you shouldn't have gotten (the one about 192.168...), but that's a different issue, and mostly cosmetic. 
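[Editor's note: putting both of kristian's suggestions together, an untested sketch based on the reporter's example of a definition that should get past the VCC compiler:]

{{{
# Workaround: declare the DNS director before the plain backend
# to avoid triggering the segfault reported in this ticket.
director test dns {
    .list = {
        .port = "80";
        .connect_timeout = 0.4s;  # a unit suffix is required; bare 0.4 fails
        "192.168.16.128"/25;
    }
    .ttl = 15m;
}

backend default {
    .host = "127.0.0.1";
    .port = "80";
}
}}}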
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 4 15:03:51 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 04 Aug 2011 15:03:51 -0000 Subject: [Varnish] #971: Broken DNS director In-Reply-To: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> References: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> Message-ID: <050.ab84f79587ca813b9b153629c43601b5@varnish-cache.org> #971: Broken DNS director -------------------+-------------------------------------------------------- Reporter: rdvn | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Comment(by rdvn): Thanks Kristian, seems like it helped. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 4 19:22:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 04 Aug 2011 19:22:34 -0000 Subject: [Varnish] #972: varnishd asserts when trying to do 304 while streaming Message-ID: <043.63c660f49aad5f89126917dbddf04a06@varnish-cache.org> #972: varnishd asserts when trying to do 304 while streaming --------------------+------------------------------------------------------- Reporter: martin | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: --------------------+------------------------------------------------------- When the criteria for doing a 304 is met and do_stream is set, varnishd asserts. Problem spotted by DocWilco. 
*** v1 0.6 debug| Child (11528) Panic message: Assert error in RES_StreamStart(), cache_response.c line 414:\n *** v1 0.6 debug| Condition((sp->wantbody) != 0) not true.\n *** v1 0.6 debug| thread = (cache-worker)\n *** v1 0.6 debug| ident = Linux,2.6.38-10-generic,x86_64,-sfile,-smalloc,-hcritbit,epoll\n *** v1 0.6 debug| Backtrace:\n *** v1 0.6 debug| 0x42c935: pan_ic+b5\n *** v1 0.6 debug| 0x430095: RES_StreamStart+145\n *** v1 0.6 debug| 0x415186: cnt_streambody+c6\n *** v1 0.6 debug| 0x41684d: CNT_Session+141d\n *** v1 0.6 debug| 0x42dbb8: wrk_do_cnt_sess+b8\n *** v1 0.6 debug| 0x42e5e9: wrk_thread_real+409\n *** v1 0.6 debug| 0x7f34b65c6d8c: _end+7f34b5f4eab4\n *** v1 0.6 debug| 0x7f34b631204d: _end+7f34b5c99d75\n *** v1 0.6 debug| sp = 0x7f34b0c14008 {\n *** v1 0.6 debug| f... -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 4 19:25:01 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 04 Aug 2011 19:25:01 -0000 Subject: [Varnish] #972: varnishd asserts when trying to do 304 while streaming In-Reply-To: <043.63c660f49aad5f89126917dbddf04a06@varnish-cache.org> References: <043.63c660f49aad5f89126917dbddf04a06@varnish-cache.org> Message-ID: <052.9f074ee24440d7a36cf39a8c5b0ee01e@varnish-cache.org> #972: varnishd asserts when trying to do 304 while streaming --------------------+------------------------------------------------------- Reporter: martin | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: --------------------+------------------------------------------------------- Description changed by martin: Old description: > When the criteria for doing a 304 is met and do_stream is set, varnishd > asserts. > > Problem spotted by DocWilco. 
> > *** v1 0.6 debug| Child (11528) Panic message: Assert error in > RES_StreamStart(), cache_response.c line 414:\n > *** v1 0.6 debug| Condition((sp->wantbody) != 0) not true.\n > *** v1 0.6 debug| thread = (cache-worker)\n > *** v1 0.6 debug| ident = > Linux,2.6.38-10-generic,x86_64,-sfile,-smalloc,-hcritbit,epoll\n > *** v1 0.6 debug| Backtrace:\n > *** v1 0.6 debug| 0x42c935: pan_ic+b5\n > *** v1 0.6 debug| 0x430095: RES_StreamStart+145\n > *** v1 0.6 debug| 0x415186: cnt_streambody+c6\n > *** v1 0.6 debug| 0x41684d: CNT_Session+141d\n > *** v1 0.6 debug| 0x42dbb8: wrk_do_cnt_sess+b8\n > *** v1 0.6 debug| 0x42e5e9: wrk_thread_real+409\n > *** v1 0.6 debug| 0x7f34b65c6d8c: _end+7f34b5f4eab4\n > *** v1 0.6 debug| 0x7f34b631204d: _end+7f34b5c99d75\n > *** v1 0.6 debug| sp = 0x7f34b0c14008 {\n > *** v1 0.6 debug| f... New description: When the criteria for doing a 304 is met and do_stream is set, varnishd asserts. Problem spotted by DocWilco. {{{ *** v1 0.6 debug| Child (11528) Panic message: Assert error in RES_StreamStart(), cache_response.c line 414:\n *** v1 0.6 debug| Condition((sp->wantbody) != 0) not true.\n *** v1 0.6 debug| thread = (cache-worker)\n *** v1 0.6 debug| ident = Linux,2.6.38-10-generic,x86_64,-sfile,-smalloc,-hcritbit,epoll\n *** v1 0.6 debug| Backtrace:\n *** v1 0.6 debug| 0x42c935: pan_ic+b5\n *** v1 0.6 debug| 0x430095: RES_StreamStart+145\n *** v1 0.6 debug| 0x415186: cnt_streambody+c6\n *** v1 0.6 debug| 0x41684d: CNT_Session+141d\n *** v1 0.6 debug| 0x42dbb8: wrk_do_cnt_sess+b8\n *** v1 0.6 debug| 0x42e5e9: wrk_thread_real+409\n *** v1 0.6 debug| 0x7f34b65c6d8c: _end+7f34b5f4eab4\n *** v1 0.6 debug| 0x7f34b631204d: _end+7f34b5c99d75\n *** v1 0.6 debug| sp = 0x7f34b0c14008 {\n *** v1 0.6 debug| f... 
}}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 4 23:12:45 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 04 Aug 2011 23:12:45 -0000 Subject: [Varnish] #959: Src packages don't compile on debian squeeze for varnish 3.0.0 In-Reply-To: <044.27558ba8ccd77cd0292e1020af9da81a@varnish-cache.org> References: <044.27558ba8ccd77cd0292e1020af9da81a@varnish-cache.org> Message-ID: <053.e0d5dd28bae26272dd70b53775e429ef@varnish-cache.org> #959: Src packages don't compile on debian squeeze for varnish 3.0.0 ---------------------+------------------------------------------------------ Reporter: pmialon | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: blocker Keywords: | ---------------------+------------------------------------------------------ Comment(by scoof): I was able to reproduce on a VM by running make check from /root/varnish-cache. dlopen fails because varnishtest setuids the child process, and /root is not world-readable by default. Maybe logging dlerror would be appropriate? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 5 07:22:24 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 05 Aug 2011 07:22:24 -0000 Subject: [Varnish] #959: Src packages don't compile on debian squeeze for varnish 3.0.0 In-Reply-To: <044.27558ba8ccd77cd0292e1020af9da81a@varnish-cache.org> References: <044.27558ba8ccd77cd0292e1020af9da81a@varnish-cache.org> Message-ID: <053.12ec3d345ec40d204dfb7027a8eee7fc@varnish-cache.org> #959: Src packages don't compile on debian squeeze for varnish 3.0.0 ----------------------+----------------------------------------------------- Reporter: pmialon | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: blocker Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Changes (by Tollef Fog Heen ): * status: new => closed * resolution: => fixed Comment: (In [97ff8c4b29a01cb5f3e1c0463a326347f109793b]) Report dlerror if dlopen fails dlopen typically only fails here if the child process does not have access to the build directory and the user runs varnishtest as root (meaning the child setuids to nobody). Report the dlerror and give a hopefully helpful hint to help diagnose the error. Fixes: #959 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Aug 7 16:28:57 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 07 Aug 2011 16:28:57 -0000 Subject: [Varnish] #953: Leak in the TransientStorage? In-Reply-To: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> References: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> Message-ID: <052.dd609450b3d760117e0ed19fba8fffbf@varnish-cache.org> #953: Leak in the TransientStorage? 
----------------------+----------------------------------------------------- Reporter: elurin | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Comment(by elurin): Problem exists in varnish-trunk+2011-08-07 {{{ SMA.s0.nreq 220489 372.45 Allocator requests SMA.s0.nobj 15709 . Outstanding allocations SMA.s0.nbytes 104841272 . Outstanding bytes SMA.s0.balloc 105080154 . Bytes allocated SMA.s0.bfree 238882 . Bytes free SMA.Transient.nreq 409389 691.54 Allocator requests SMA.Transient.nobj 409320 . Outstanding allocations SMA.Transient.nbytes 2738791985 . Outstanding bytes SMA.Transient.balloc 2738955377 . Bytes allocated SMA.Transient.bfree 163392 . Bytes free }}} some time later {{{ SMA.s0.nreq 238371 370.14 Allocator requests SMA.s0.nobj 15709 . Outstanding allocations SMA.s0.nbytes 104841272 . Outstanding bytes SMA.s0.balloc 105080154 . Bytes allocated SMA.s0.bfree 238882 . Bytes free SMA.Transient.nreq 445139 691.21 Allocator requests SMA.Transient.nobj 445056 . Outstanding allocations SMA.Transient.nbytes 2981510250 . Outstanding bytes SMA.Transient.balloc 2981706794 . Bytes allocated SMA.Transient.bfree 196544 . Bytes free }}} Shortlived param set to 1 sec {{{ shortlived 1.000000 [s] }}} And many expire locks, but no expired objects {{{ n_expired 0 . N expired objects n_ban_re_test 0 0.00 N regexps tested against LCK.exp.creat 1 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 445117 350.21 Lock Operations LCK.exp.colls 0 0.00 Collisions }}} May I turn off shortlived and all Transient storage? 
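Regarding the question about turning off Transient storage: in Varnish 3.x it cannot be removed outright, but it can be capped by explicitly defining a storage backend named Transient, and the short-lived threshold is controlled by the shortlived parameter. A sketch of the startup flags (sizes are arbitrary; check your version's documentation, and note that capping Transient means transient allocations can fail under load):

```
# Cap Transient at 100 MB instead of the default unbounded malloc,
# and treat only objects with ttl+grace+keep < 10 s as short-lived:
varnishd ... -s malloc,1G -s Transient=malloc,100M -p shortlived=10
```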
Thank you for your help -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Aug 7 16:29:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 07 Aug 2011 16:29:21 -0000 Subject: [Varnish] #953: Leak in the TransientStorage? In-Reply-To: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> References: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> Message-ID: <052.be8e7330a1a092a12b31823e4c6e7399@varnish-cache.org> #953: Leak in the TransientStorage? ----------------------+----------------------------------------------------- Reporter: elurin | Owner: phk Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Changes (by elurin): * status: closed => reopened * resolution: fixed => -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Aug 7 23:02:55 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 07 Aug 2011 23:02:55 -0000 Subject: [Varnish] #956: Value of obj.ttl in vcl_hit In-Reply-To: <044.2a08cb9c53d237bb5ca8919e3e5bfe1f@varnish-cache.org> References: <044.2a08cb9c53d237bb5ca8919e3e5bfe1f@varnish-cache.org> Message-ID: <053.0f7be6834ac53aac721820ea5e95b0ad@varnish-cache.org> #956: Value of obj.ttl in vcl_hit ----------------------+----------------------------------------------------- Reporter: pmialon | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by drwilco): There's a further problem with obj.ttl, when obj.ttl is set in vcl_hit(), the expiry time will be TTL since the object entered cache, not TTL from the moment it is set. I contemplated altering obj->entered, but only very briefly. 
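The obj.ttl behaviour described above can be illustrated with a small VCL fragment (a sketch; the number is arbitrary):

```
sub vcl_hit {
    # Intent: keep this object for another 30 seconds from *now*.
    # Before the fix for #956, this instead meant 30 seconds after the
    # object *entered* the cache, so an object already older than 30
    # seconds would expire immediately.
    set obj.ttl = 30s;
}
```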
See attachment 956.patch for a fix, and the other one to be able to run the testcase. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 8 13:55:53 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Aug 2011 13:55:53 -0000 Subject: [Varnish] #973: errno = 13 (Permission denied) Message-ID: <051.b7bded098efb0cf05075020c868825f7@varnish-cache.org> #973: errno = 13 (Permission denied) ----------------------------+----------------------------------------------- Reporter: shaunlonghurst | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 2.1.5 | Severity: normal Keywords: | ----------------------------+----------------------------------------------- Hi, I am trying to push Varnish to our live server but keep getting this error: Assert error in start_child(), mgt_child.c line 388: Condition(open("/dev/null", O_RDONLY) == STDIN_FILENO) not true. errno = 13 (Permission denied) child (18395) Started Pushing vcls failed: CLI communication error Stopping Child 200 0 I am running an Ubuntu virtual server using OpenVZ. Any ideas on what could be the issue? 
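For context on this error: a broken /dev/null inside an OpenVZ container can usually be recreated with mknod (a sketch; requires root, and 1,3 are the standard Linux character-device numbers for the null device):

```
rm -f /dev/null
mknod -m 666 /dev/null c 1 3
```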
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 8 14:02:49 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Aug 2011 14:02:49 -0000 Subject: [Varnish] #973: errno = 13 (Permission denied) In-Reply-To: <051.b7bded098efb0cf05075020c868825f7@varnish-cache.org> References: <051.b7bded098efb0cf05075020c868825f7@varnish-cache.org> Message-ID: <060.6c7463773ecac2b251609da14a3dd8dc@varnish-cache.org> #973: errno = 13 (Permission denied) ----------------------------+----------------------------------------------- Reporter: shaunlonghurst | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 2.1.5 | Severity: normal Keywords: | ----------------------------+----------------------------------------------- Comment(by mattiasgeniar): This is specific to OpenVZ (you can get the same problem with /dev/random), have a look here on how to recreate your /dev/null device: http://forum.openvz.org/index.php?t=msg&goto=24676& -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 8 15:10:31 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Aug 2011 15:10:31 -0000 Subject: [Varnish] #973: errno = 13 (Permission denied) In-Reply-To: <051.b7bded098efb0cf05075020c868825f7@varnish-cache.org> References: <051.b7bded098efb0cf05075020c868825f7@varnish-cache.org> Message-ID: <060.c60e4f4311785c342cccdcdeb47f0673@varnish-cache.org> #973: errno = 13 (Permission denied) ----------------------------+----------------------------------------------- Reporter: shaunlonghurst | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 2.1.5 | Severity: normal Keywords: | ----------------------------+----------------------------------------------- Comment(by shaunlonghurst): hi, yep that fixed it thanks -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 9 08:02:38 2011 
From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 09 Aug 2011 08:02:38 -0000 Subject: [Varnish] #973: errno = 13 (Permission denied) In-Reply-To: <051.b7bded098efb0cf05075020c868825f7@varnish-cache.org> References: <051.b7bded098efb0cf05075020c868825f7@varnish-cache.org> Message-ID: <060.e40e5b46a5ad4531519cee8a47d3fcd0@varnish-cache.org> #973: errno = 13 (Permission denied) -----------------------------+---------------------------------------------- Reporter: shaunlonghurst | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 2.1.5 | Severity: normal Resolution: invalid | Keywords: -----------------------------+---------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => invalid Comment: Closing as invalid since it's not a Varnish bug. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 9 10:16:40 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 09 Aug 2011 10:16:40 -0000 Subject: [Varnish] #953: Leak in the TransientStorage? In-Reply-To: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> References: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> Message-ID: <052.65bf4dfbc1956a05738a0f9bf45a2ac2@varnish-cache.org> #953: Leak in the TransientStorage? ----------------------+----------------------------------------------------- Reporter: elurin | Owner: phk Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment(by phk): Ok, I went over the stevedore/storage statistics and that was clearly bogotified, that's fixed in -trunk now. Can you either include your VCL or mail it to me privately, I need to understand how you get objects to not expire. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 9 14:18:09 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 09 Aug 2011 14:18:09 -0000 Subject: [Varnish] #956: Value of obj.ttl in vcl_hit In-Reply-To: <044.2a08cb9c53d237bb5ca8919e3e5bfe1f@varnish-cache.org> References: <044.2a08cb9c53d237bb5ca8919e3e5bfe1f@varnish-cache.org> Message-ID: <053.f1512a9d6fcaae438dd7658fe562bb4d@varnish-cache.org> #956: Value of obj.ttl in vcl_hit ----------------------+----------------------------------------------------- Reporter: pmialon | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by drwilco): The varnishlogtest.patch needs some more work; I'm not liking how tests randomly fail on a slow box. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 9 15:11:02 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 09 Aug 2011 15:11:02 -0000 Subject: [Varnish] #974: Add new ESI syntax to the upgrade page Message-ID: <045.c8797036194c6fcef63245ebd45c0937@varnish-cache.org> #974: Add new ESI syntax to the upgrade page ----------------------+----------------------------------------------------- Reporter: smerrill | Type: documentation Status: new | Priority: low Milestone: | Component: documentation Version: 3.0.0 | Severity: minor Keywords: | ----------------------+----------------------------------------------------- I was helping a friend upgrade to 3.0, and https://www.varnish-cache.org/docs/trunk/installation/upgrade.html is great, but it's missing the new ESI keyword. 
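The ESI change referred to here is that Varnish 3.0 replaced the esi; keyword with a beresp flag, roughly:

```
# Varnish 2.x:
sub vcl_fetch {
    esi;
}

# Varnish 3.x:
sub vcl_fetch {
    set beresp.do_esi = true;
}
```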
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 9 16:08:33 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 09 Aug 2011 16:08:33 -0000 Subject: [Varnish] #964: varnish is losing (java) session data In-Reply-To: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> References: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> Message-ID: <056.633ba2d2cf4c536aba3bdedecaee8f81@varnish-cache.org> #964: varnish is losing (java) session data ------------------------+--------------------------------------------------- Reporter: pravenjohn | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.0 | Severity: critical Keywords: | ------------------------+--------------------------------------------------- Comment(by pravenjohn): 11 SessionOpen c 3.122.54.121 57895 3.34.188.87:80 11 Debug c herding 11 SessionClose c pipe 19 TxRequest - POST 19 TxURL - /portal/site/communications-2010/template.PAGE/gold_standard/assess_self/employee_assessment/?javax.portlet.tpst=75f6907bfd084d126b25a8251bdda730&javax.portlet.prp_75f6907bfd084d126b25a8251bdda730=viewID%3DMY_PORTAL_VIEW&javax.portlet.begCacheTok=com.vign 19 TxProtocol - HTTP/1.1 19 TxHeader - Host: cihcispapp691v.corporate.ge.com 19 TxHeader - User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:5.0.1) Gecko/20100101 Firefox/5.0.1 19 TxHeader - Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 19 TxHeader - Accept-Language: en-us,en;q=0.5 19 TxHeader - Accept-Encoding: gzip, deflate 19 TxHeader - Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 19 TxHeader - Referer: http://cihcispapp691v.corporate.ge.com/portal/site/communications-2010/gold_standard/assess_self/employee_assessment 19 TxHeader - Cookie: selectedNav=6ecbe7c536f5c19373b5c4761bdda730; CALENDAR_PERSON=18527; BusinessEdition=Corporate; disclaimer=; IGESESSION=f0ef93061b614e8e14f3efef82248b306kbGW0Pe%2FNUzXXjVuVWbvg%3D%3D%0A; 
contentids_10289=4950175|1312904555564; contentids_20991=495 19 TxHeader - Content-Type: application/x-www-form-urlencoded 19 TxHeader - Content-Length: 550 19 TxHeader - X-Varnish: 187448030 19 TxHeader - connection: close 19 BackendClose - ige_static 11 ReqStart c 3.122.54.121 57895 187448030 11 RxRequest c POST 11 RxURL c /portal/site/communications-2010/template.PAGE/gold_standard/assess_self/employee_assessment/?javax.portlet.tpst=75f6907bfd084d126b25a8251bdda730&javax.portlet.prp_75f6907bfd084d126b25a8251bdda730=viewID%3DMY_PORTAL_VIEW&javax.portlet.begCacheTok=com.vign 11 RxProtocol c HTTP/1.1 11 RxHeader c Host: cihcispapp691v.corporate.ge.com 11 RxHeader c User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:5.0.1) Gecko/20100101 Firefox/5.0.1 11 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 11 RxHeader c Accept-Language: en-us,en;q=0.5 11 RxHeader c Accept-Encoding: gzip, deflate 11 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 11 RxHeader c Connection: keep-alive 11 RxHeader c Referer: http://cihcispapp691v.corporate.ge.com/portal/site/communications-2010/gold_standard/assess_self/employee_assessment 11 RxHeader c Cookie: selectedNav=6ecbe7c536f5c19373b5c4761bdda730; CALENDAR_PERSON=18527; BusinessEdition=Corporate; disclaimer=; IGESESSION=f0ef93061b614e8e14f3efef82248b306kbGW0Pe%2FNUzXXjVuVWbvg%3D%3D%0A; contentids_10289=4950175|1312904555564; contentids_20991=495 11 RxHeader c Content-Type: application/x-www-form-urlencoded 11 RxHeader c Content-Length: 550 11 VCL_call c recv pipe 11 VCL_call c hash 11 Hash c /portal/site/communications-2010/template.PAGE/gold_standard/assess_self/employee_assessment/?javax.portlet.tpst=75f6907bfd084d126b25a8251bdda730&javax.portlet.prp_75f6907bfd084d126b25a8251bdda730=viewID%3DMY_PORTAL_VIEW&javax.portlet.begCacheTok=com.vig 11 Hash c cihcispapp691v.corporate.ge.com 11 VCL_return c hash 11 VCL_call c pipe pipe 11 Backend c 19 ige_static ige_static 11 ReqEnd c 187448030 
1312905942.033900976 1312905942.342040062 0.027658939 0.000061989 0.308077097 11 StatSess c 3.122.54.121 57895 0 1 1 1 0 0 0 0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1312905944 1.0 are the varnish logs for the error -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 9 16:09:47 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 09 Aug 2011 16:09:47 -0000 Subject: [Varnish] #964: varnish is losing (java) session data In-Reply-To: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> References: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> Message-ID: <056.b1f5d04e3a04584e5d39885ce0c1777a@varnish-cache.org> #964: varnish is losing (java) session data ------------------------+--------------------------------------------------- Reporter: pravenjohn | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.0 | Severity: critical Keywords: | ------------------------+--------------------------------------------------- Comment(by pravenjohn): i've attached the error as well... 
Praven -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 10 07:56:58 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 10 Aug 2011 07:56:58 -0000 Subject: [Varnish] #956: Value of obj.ttl in vcl_hit In-Reply-To: <044.2a08cb9c53d237bb5ca8919e3e5bfe1f@varnish-cache.org> References: <044.2a08cb9c53d237bb5ca8919e3e5bfe1f@varnish-cache.org> Message-ID: <053.9c471e2a972085aee346ac9ccd45b009@varnish-cache.org> #956: Value of obj.ttl in vcl_hit ----------------------+----------------------------------------------------- Reporter: pmialon | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [c1a6faaf55ac0d5c4fee8bbc0b9281f5c1118592]) Move 'age' and 'entered' into struct exp. Simplify the move from sp->wrk to sp->obj accordingly. Add a diagram that explains how ttl, grace & keep related to each other. Fix VCL's obj.ttl variable to appear to be relative to "now" rather than obj->entered. 
(#956) Also log SLT_TTL when we change obj.grace and obj.keep Make SLT_TTL always have the same format: XID "RFC" or "VCL" obj.ttl obj.grace obj.keep obj.entered obj.age In addition "RFC" has: obj.date obj.expires obj.max-age Fixes #956 Thanks to: DocWilco -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 10 09:32:28 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 10 Aug 2011 09:32:28 -0000 Subject: [Varnish] #974: Add new ESI syntax to the upgrade page In-Reply-To: <045.c8797036194c6fcef63245ebd45c0937@varnish-cache.org> References: <045.c8797036194c6fcef63245ebd45c0937@varnish-cache.org> Message-ID: <054.066056f7614e1417808aacc8daa8fad4@varnish-cache.org> #974: Add new ESI syntax to the upgrade page -----------------------+---------------------------------------------------- Reporter: smerrill | Type: documentation Status: closed | Priority: low Milestone: | Component: documentation Version: 3.0.0 | Severity: minor Resolution: fixed | Keywords: -----------------------+---------------------------------------------------- Changes (by Tollef Fog Heen ): * status: new => closed * resolution: => fixed Comment: (In [d6a5687f01bd09d96a868820c11beaf0c518ee5d]) Document do_esi Put esi ? do_esi in upgrade checklist, fix markup typo. 
Fixes #974 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 10 12:00:57 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 10 Aug 2011 12:00:57 -0000 Subject: [Varnish] #970: partial content code 206 - visible as '206' in varnishncsa and as '200' from VCL code In-Reply-To: <046.cdbe6b5a9773b161b3c6a61062dfc851@varnish-cache.org> References: <046.cdbe6b5a9773b161b3c6a61062dfc851@varnish-cache.org> Message-ID: <055.e01920b494a063bf281b8bc9c26cebb3@varnish-cache.org> #970: partial content code 206 - visible as '206' in varnishncsa and as '200' from VCL code -------------------------------------------------+-------------------------- Reporter: jhalfmoon | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: invalid Keywords: 206 partial content varnishncsa vcl | -------------------------------------------------+-------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: I've thought about this issue a bit, and I think I have reached a conclusion. There are a number of optimizations we can enable during delivery, most importantly (and presently the only ones implemented): Conditionals (If-Modified-Since etc.) and Range delivery. In both cases, a 200 gets turned into something else to indicate the optimization. If we exposed 206/304 to VCL and the VCL removes the headers they depend on, we would have to fall back to 200 again after VCL returns. There is simply no way to ensure that VCL knows the actual return status 100% consistently, without disabling parts of VCL that may be useful to people. I understand from your emails that you ran into this because you try to collect your log-data from VCL code, rather than to pick it up from VSL. 
While I can appreciate the reasons you may have taken this shortcut, I do not condone it, and I do not see it as a valid reason to make VCL writing more tricky for users in general. I have therefore decided, that we will not expose 206/304 in VCL. VCL will see only the 200 status, and the delivery code will, headers permitting, try to optimize with 206/304 (or, come to think of it: Gzip) and actual status will be logged to the VSL. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 10 13:30:29 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 10 Aug 2011 13:30:29 -0000 Subject: [Varnish] #975: How do i stop varnish appending varnish IP to X-Client-IP field of header Message-ID: <046.ceeac44a214559b385b60acdde6cfd2e@varnish-cache.org> #975: How do i stop varnish appending varnish IP to X-Client-IP field of header -----------------------+---------------------------------------------------- Reporter: tamilmani | Type: task Status: new | Priority: high Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | -----------------------+---------------------------------------------------- HTTP_X_CLIENT_IP 59.162.86.164, '''''172.16.200.42''''' I don't want varnish to append that 172.16.200.42 which is varnish server's IP to this header field. 
Help Thanks in advance -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 10 13:34:53 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 10 Aug 2011 13:34:53 -0000 Subject: [Varnish] #972: varnishd asserts when trying to do 304 while streaming In-Reply-To: <043.63c660f49aad5f89126917dbddf04a06@varnish-cache.org> References: <043.63c660f49aad5f89126917dbddf04a06@varnish-cache.org> Message-ID: <052.7d962d5e968ce9fa991d1245663e79fa@varnish-cache.org> #972: varnishd asserts when trying to do 304 while streaming --------------------+------------------------------------------------------- Reporter: martin | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [5c8bb0448dcf2d3b4e091ff4e7a285f578bb3f38]) Don't panic if we can both do conditional (IMS) and stream, prefer IMS, it is less work and less data to transmit. 
Fixes #972 Thanks to: Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 10 13:38:27 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 10 Aug 2011 13:38:27 -0000 Subject: [Varnish] #975: How do i stop varnish appending varnish IP to X-Client-IP field of header In-Reply-To: <046.ceeac44a214559b385b60acdde6cfd2e@varnish-cache.org> References: <046.ceeac44a214559b385b60acdde6cfd2e@varnish-cache.org> Message-ID: <055.c669d2059cebf7186b73cdee5f326532@varnish-cache.org> #975: How do i stop varnish appending varnish IP to X-Client-IP field of header ------------------------+--------------------------------------------------- Reporter: tamilmani | Type: task Status: closed | Priority: high Milestone: | Component: build Version: 3.0.0 | Severity: normal Resolution: invalid | Keywords: ------------------------+--------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: This kind of question should not be filed as a ticket, but instead asked via email, irc or forum. Presumably the variable is set from the X-Forwarded-For: header. That header is created by the VCL code in your varnish, see the first lines of default.vcl's vcl_recv{} function. 
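The default.vcl logic being referred to looks roughly like this in 3.0 (paraphrased — check the default.vcl shipped with your installation); overriding the appended-IP behaviour means replacing this logic in your own vcl_recv:

```
sub vcl_recv {
    if (req.restarts == 0) {
        if (req.http.X-Forwarded-For) {
            set req.http.X-Forwarded-For =
                req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }
}
```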
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 10 16:31:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 10 Aug 2011 16:31:20 -0000 Subject: [Varnish] #975: How do i stop varnish appending varnish IP to X-Client-IP field of header In-Reply-To: <046.ceeac44a214559b385b60acdde6cfd2e@varnish-cache.org> References: <046.ceeac44a214559b385b60acdde6cfd2e@varnish-cache.org> Message-ID: <055.aa0436bfb1f771808b010ea542621607@varnish-cache.org> #975: How do i stop varnish appending varnish IP to X-Client-IP field of header ------------------------+--------------------------------------------------- Reporter: tamilmani | Type: task Status: closed | Priority: high Milestone: | Component: build Version: 3.0.0 | Severity: normal Resolution: invalid | Keywords: ------------------------+--------------------------------------------------- Comment(by tamilmani): Sorry, I didn't know. But thanks anyway. It's not changing even if I edit that. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 11 08:31:25 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 11 Aug 2011 08:31:25 -0000 Subject: [Varnish] #967: Assert error in object_cmp(), cache_expire.c line 449: Condition((bb) != NULL) not true. thread = (cache-timeout) In-Reply-To: <044.1798b199401153df82dd03112e822cf1@varnish-cache.org> References: <044.1798b199401153df82dd03112e822cf1@varnish-cache.org> Message-ID: <053.890e828266065e45ad1a80eea12dd34a@varnish-cache.org> #967: Assert error in object_cmp(), cache_expire.c line 449: Condition((bb) != NULL) not true. 
thread = (cache-timeout) ---------------------+------------------------------------------------------ Reporter: pmialon | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: critical Keywords: | ---------------------+------------------------------------------------------ Comment(by phk): See also #827 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 11 11:01:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 11 Aug 2011 11:01:14 -0000 Subject: [Varnish] #967: Assert error in object_cmp(), cache_expire.c line 449: Condition((bb) != NULL) not true. thread = (cache-timeout) In-Reply-To: <044.1798b199401153df82dd03112e822cf1@varnish-cache.org> References: <044.1798b199401153df82dd03112e822cf1@varnish-cache.org> Message-ID: <053.15a8dd8c2a04d289119ba09f1510eb44@varnish-cache.org> #967: Assert error in object_cmp(), cache_expire.c line 449: Condition((bb) != NULL) not true. thread = (cache-timeout) ----------------------+----------------------------------------------------- Reporter: pmialon | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: critical Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [a6ddafc0e87bcec71f016b8cae77b78641e617d7]) Clamp rather than overflow on child indexes when we get to the end of the UINT_MAX items we can support. Found by adding a lot of asserts and brute force testing, both of which I have left in. 
Fixes #967 May also be relevant to #827 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 11 11:26:24 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 11 Aug 2011 11:26:24 -0000 Subject: [Varnish] #967: Assert error in object_cmp(), cache_expire.c line 449: Condition((bb) != NULL) not true. thread = (cache-timeout) In-Reply-To: <044.1798b199401153df82dd03112e822cf1@varnish-cache.org> References: <044.1798b199401153df82dd03112e822cf1@varnish-cache.org> Message-ID: <053.c09e07c22de4ba705b718829b9fee61d@varnish-cache.org> #967: Assert error in object_cmp(), cache_expire.c line 449: Condition((bb) != NULL) not true. thread = (cache-timeout) ----------------------+----------------------------------------------------- Reporter: pmialon | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: critical Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Comment(by Poul-Henning Kamp ): (In [726d93ff5a815774d7a3b4a23dcba2efb3c08ca7]) Duh! Git committing from the wrong directory doesn't do what you expect: Clamp rather than overflow on child indexes when we get to the end of the UINT_MAX items we can support. Found by adding a lot of asserts and brute force testing, both of which I have left in. 
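The clamp-instead-of-overflow idea from the commit above can be sketched in isolation (hypothetical function name; the real change lives in the expiry binary-heap code):

```c
#include <limits.h>

/* Advance an index, but clamp at UINT_MAX rather than wrapping back
 * to 0, which would corrupt the heap's parent/child relationships. */
static unsigned
next_child_idx(unsigned idx)
{
	if (idx == UINT_MAX)
		return (UINT_MAX);	/* clamp, don't overflow */
	return (idx + 1);
}
```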
Fixes #967 May also be relevant to #827 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 11 14:12:32 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 11 Aug 2011 14:12:32 -0000 Subject: [Varnish] #962: Persistent storage don't work on Linux with Address space layout randomization In-Reply-To: <044.84a8ca3ebbf5734f4d5e6e14f868313f@varnish-cache.org> References: <044.84a8ca3ebbf5734f4d5e6e14f868313f@varnish-cache.org> Message-ID: <053.316d8c361612e62650fc7b1c2dcdb46f@varnish-cache.org> #962: Persistent storage don't work on Linux with Address space layout randomization ----------------------+----------------------------------------------------- Reporter: pmialon | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [e8a63f4d5e687cf3a04d34f696c661446d6c8d25]) Try to read the silo signature to find the correct address to map the silo into VM. If this fails or we get garbage, the silo will be cleared. 
Fixes #962 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 11 20:04:54 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 11 Aug 2011 20:04:54 -0000 Subject: [Varnish] #946: ExpKill disappeared from exp_timer in 3.0 In-Reply-To: <042.177cc31fd16288dfa6d14ef04c0e001c@varnish-cache.org> References: <042.177cc31fd16288dfa6d14ef04c0e001c@varnish-cache.org> Message-ID: <051.da54006a73bc258a8a0edc9cef0d476a@varnish-cache.org> #946: ExpKill disappeared from exp_timer in 3.0 -------------------+-------------------------------------------------------- Reporter: scoof | Type: defect Status: new | Priority: low Milestone: | Component: build Version: 3.0.0 | Severity: trivial Keywords: | -------------------+-------------------------------------------------------- Comment(by scoof): Proposed patch attached. I see no other way to log xid than fetching object. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Aug 14 12:11:31 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 14 Aug 2011 12:11:31 -0000 Subject: [Varnish] #976: Updated zope-plone.vcl to Varnish 3.x Message-ID: <050.32c296dc1c548fa6d72c1307524fb42e@varnish-cache.org> #976: Updated zope-plone.vcl to Varnish 3.x ---------------------------+------------------------------------------------ Reporter: cleberjsantos | Type: enhancement Status: new | Priority: normal Milestone: Later | Component: documentation Version: 3.0.0 | Severity: normal Keywords: update | ---------------------------+------------------------------------------------ I updated ''zope-plone.vcl'' for ''Varnish 3.x'', but I am not succeeding in pushing it: git push returns the message '''"fatal: The remote end hung up unexpectedly"'''. Could you please merge the attached diff file? 
Best Regards Cleber J Santos -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 08:29:48 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 08:29:48 -0000 Subject: [Varnish] #971: Broken DNS director In-Reply-To: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> References: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> Message-ID: <050.c32c714a23de82e594dedc2cbceb852a@varnish-cache.org> #971: Broken DNS director ---------------------+------------------------------------------------------ Reporter: rdvn | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: critical Resolution: fixed | Keywords: ---------------------+------------------------------------------------------ Changes (by Kristian Lyngstol ): * status: new => closed * resolution: => fixed Comment: (In [b231cf2bf2e949cc2baf516aa8fa2d15710d2979]) Fix the test for #971 Ironically, this does NOT fix #971 (see if you can handle this, trac). 
(WIP) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 09:09:55 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 09:09:55 -0000 Subject: [Varnish] #971: Broken DNS director In-Reply-To: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> References: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> Message-ID: <050.c9e178b4fb0e4bd2c46ae4c188a15033@varnish-cache.org> #971: Broken DNS director -----------------------+---------------------------------------------------- Reporter: rdvn | Type: defect Status: reopened | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Resolution: | Keywords: -----------------------+---------------------------------------------------- Changes (by kristian): * status: closed => reopened * resolution: fixed => * component: build => varnishd * severity: critical => normal Comment: Trac-confusion. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 09:57:12 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 09:57:12 -0000 Subject: [Varnish] #971: Broken DNS director In-Reply-To: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> References: <041.9508303297df487382a44071219d6bc8@varnish-cache.org> Message-ID: <050.9d2d69541524d69f897cdbecee9df7d7@varnish-cache.org> #971: Broken DNS director ----------------------+----------------------------------------------------- Reporter: rdvn | Owner: kristian Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Changes (by kristian): * owner: => kristian * status: reopened => new * version: 3.0.0 => trunk -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 10:01:26 2011 From: varnish-bugs 
at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 10:01:26 -0000 Subject: [Varnish] #964: varnish is losing (java) session data In-Reply-To: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> References: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> Message-ID: <056.ccd853fb793751c8d6a6ba9201076d47@varnish-cache.org> #964: varnish is losing (java) session data ------------------------+--------------------------------------------------- Reporter: pravenjohn | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Keywords: | ------------------------+--------------------------------------------------- Changes (by kristian): * priority: high => normal * severity: critical => normal Old description: > I've setup a varnish server (3.0) with an apache-tomcat server on the > backend... > > My default.vcl is > backend ige_static1 { > .host = "XXXXXXX"; > .port = "80"; > .connect_timeout = 600s; > .first_byte_timeout = 600s; > .between_bytes_timeout = 600s; > } > backend ige_dynamic { > .host = "XXXXXX"; > .port = "80"; > .connect_timeout = 600s; > .first_byte_timeout = 600s; > .between_bytes_timeout = 600s; > } > sub vcl_recv > { > if (req.request == "PURGE") > { > if (!client.ip ~ purge) > { > error 405 "Not allowed."; > } > ban(req.url == req.url); > error 200 "Purging Done"; > } > if (req.request == "POST") { > return(pipe); > } > if (req.url ~ "^/portal|^/saml") > { > set req.backend = ige_dynamic; > return(pass); > } > set req.backend = ige_static1; > return(lookup); > } > sub vcl_fetch > { > if (req.url ~ "\.(png|gif|jpg|swf|css|js|html|txt|pdf)$") > { > unset beresp.http.set-cookie; > } > } > sub vcl_deliver > { > if (obj.hits > 0) > { > set resp.http.X-Cache = "HIT"; > } > else > { > set resp.http.X-Cache = "MISS"; > } > } > sub vcl_hit { > if (req.request == "PURGE") { > ban (req.url == req.url); > error 200 "Purged."; > } > } > sub vcl_miss { > if (req.request == "PURGE") > { > ban 
(req.url == req.url); > error 200 "Not in cache"; > } > } > sub vcl_pipe > { > set bereq.http.connection = "close"; > } > > But for some reason, I seem to be loosing Java session dat along the way. > I set some data on the JAVA side, and after 2-3 pages interaction, the > data disappears... I checked the same againist my backend apache-tomcat > server and it works fine over there... > > Any suggestions you can provide would be great... Unless we can fix this > issue, we'll probably have to migrate out to some other caching > solution... > > Regards > Praven John New description: I've set up a varnish server (3.0) with an apache-tomcat server on the backend... My default.vcl is {{{ backend ige_static1 { .host = "XXXXXXX"; .port = "80"; .connect_timeout = 600s; .first_byte_timeout = 600s; .between_bytes_timeout = 600s; } backend ige_dynamic { .host = "XXXXXX"; .port = "80"; .connect_timeout = 600s; .first_byte_timeout = 600s; .between_bytes_timeout = 600s; } sub vcl_recv { if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } ban(req.url == req.url); error 200 "Purging Done"; } if (req.request == "POST") { return(pipe); } if (req.url ~ "^/portal|^/saml") { set req.backend = ige_dynamic; return(pass); } set req.backend = ige_static1; return(lookup); } sub vcl_fetch { if (req.url ~ "\.(png|gif|jpg|swf|css|js|html|txt|pdf)$") { unset beresp.http.set-cookie; } } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } sub vcl_hit { if (req.request == "PURGE") { ban (req.url == req.url); error 200 "Purged."; } } sub vcl_miss { if (req.request == "PURGE") { ban (req.url == req.url); error 200 "Not in cache"; } } sub vcl_pipe { set bereq.http.connection = "close"; } }}} But for some reason, I seem to be losing Java session data along the way. I set some data on the JAVA side, and after 2-3 pages of interaction, the data disappears... 
I checked the same against my backend apache-tomcat server and it works fine over there... Any suggestions you can provide would be great... Unless we can fix this issue, we'll probably have to migrate out to some other caching solution... Regards Praven John -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 10:12:04 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 10:12:04 -0000 Subject: [Varnish] #477: Change defaults to respect Cache-Control: private In-Reply-To: <041.9226b1c4ffd27a15ef76220ae7e29120@varnish-cache.org> References: <041.9226b1c4ffd27a15ef76220ae7e29120@varnish-cache.org> Message-ID: <050.1be49ee30bbb9e1d28379517d1e35451@varnish-cache.org> #477: Change defaults to respect Cache-Control: private ----------------------+----------------------------------------------------- Reporter: olau | Owner: sky Type: defect | Status: closed Priority: normal | Milestone: Varnish 3.0 dev Component: varnishd | Version: trunk Severity: normal | Resolution: wontfix Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => wontfix Comment: I'm closing this ticket after bug-wash consensus: 1) There is not much evidence of actual trouble. 
2) Most "private" usage is (or should be) over HTTPS these days 3) You can do it from VCL -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 11:10:41 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 11:10:41 -0000 Subject: [Varnish] #501: Stats variable to monitor how many threads are actually doing something (n_wrk_busy): busy threads monitoring In-Reply-To: <044.0ab34fea791f1245ab0db53b04425d49@varnish-cache.org> References: <044.0ab34fea791f1245ab0db53b04425d49@varnish-cache.org> Message-ID: <053.b004547ae7e5bb07bc79ebe0f7d901b8@varnish-cache.org> #501: Stats variable to monitor how many threads are actually doing something (n_wrk_busy): busy threads monitoring -------------------------------------+-------------------------------------- Reporter: stockrt | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Resolution: worksforme Keywords: busy threads monitoring | -------------------------------------+-------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: We revisited the oldest tickets on the bug-wash today, and finally(!) made our mind up about this one: The central concern here is that the locking necessary for such a counter would be expensive, performance-wise, because it would serialize all sessions on that lock. The plan presently is to make the thread-pools even more independent, and as a result of that, we will get per-thread-pool statistics. For each thread-pool, having a counter of the number of busy threads is actually pretty cheap, so that we can do. What is needed then is for libvarnishapi or varnishstat to tally up the global sum for all the threadpools. I'm closing this ticket, because it does not really serve any function going forward, seeing that the ultimate solution is much different from what is proposed. 
Sorry for taking so long to resolve this. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 11:19:00 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 11:19:00 -0000 Subject: [Varnish] #788: v2.1.3 w/ http_range_support on fails to support sets in byte range request In-Reply-To: <049.afaae7a6fae04074165e9905b721ea43@varnish-cache.org> References: <049.afaae7a6fae04074165e9905b721ea43@varnish-cache.org> Message-ID: <058.2824a085b30f4692162ca99ef1bf6603@varnish-cache.org> #788: v2.1.3 w/ http_range_support on fails to support sets in byte range request -----------------------------------------------+---------------------------- Reporter: jim.robinson | Owner: kristian Type: defect | Status: closed Priority: normal | Milestone: Varnish 3.0 dev Component: varnishd | Version: 2.1.3 Severity: normal | Resolution: invalid Keywords: http_range_support byte range set | -----------------------------------------------+---------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: We revisited the oldest tickets on the bug-wash today, and finally(!) made our mind up about this one: The present single-range support was based on a survey of range requests I collected from various sites. The survey found only very few instances of multi-range requests on varnish servers, and thus the added complexity was not implemented. Circumstances may have changed, and I welcome data showing that, but this ticket really is a feature-request, and I have moved it to wiki:Future_Feature as such. I do notice that the multi-range requests above ask for overlapping areas: {{{ Range: bytes=8380-32397,8380-8381 }}} I wonder if that is a bug in Acrobat? I'm closing this ticket as "invalid" according to our policy, because it is a feature-request; we only use tickets for "real bugs", in order to not have tickets that cannot be resolved in finite time. 
Sorry for the delay in dealing with this ticket. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 11:25:55 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 11:25:55 -0000 Subject: [Varnish] #792: Bandwidth management / rate-limiting In-Reply-To: <045.471ad86ca05d5fe7c0d0b3b55d3daa42@varnish-cache.org> References: <045.471ad86ca05d5fe7c0d0b3b55d3daa42@varnish-cache.org> Message-ID: <054.9bae0e608fdf4e853dc0e9b5bf27e7c2@varnish-cache.org> #792: Bandwidth management / rate-limiting ----------------------------------+----------------------------------------- Reporter: tmagnien | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Resolution: invalid Keywords: bandwidth rate-limit | ----------------------------------+----------------------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: We revisited the oldest tickets on the bug-wash today, and finally(!) made our mind up about this one: I'm closing this ticket as invalid, because it is a feature request. I have put a back-link to it from wiki:Future_Feature; please read the long explanation there. Sorry about taking so long before dealing with this ticket. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 11:59:04 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 11:59:04 -0000 Subject: [Varnish] #977: Client and hash-directors doesn't retry on different backends Message-ID: <045.e6a9f9b617acc9165a08bb124753eb07@varnish-cache.org> #977: Client and hash-directors doesn't retry on different backends ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- (Ticket to get a bug number) Using the client- or hash-directors, the retry mechanism doesn't consider any other backends than the canonical backend when retrying after a failed backend connection. This is caused by faulty retry-logic. If the random-director is used, the backend to try is randomized for each retry, but such is not the case for client/hash. Test and fix coming up. 
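For comparison, the random director mentioned above (which does re-randomize the backend choice, respecting weights, on each retry) is declared roughly like this in VCL. This is a sketch with hypothetical backend names and addresses, not configuration from the ticket:

```vcl
# Hypothetical backends for illustration only.
backend b1 { .host = "192.0.2.10"; .port = "80"; }
backend b2 { .host = "192.0.2.11"; .port = "80"; }

# A random director picks a backend per its weight and, on a failed
# backend connection, re-randomizes the choice up to .retries times --
# the behaviour the client/hash directors were missing.
director d1 random {
    .retries = 3;
    { .backend = b1; .weight = 5; }
    { .backend = b2; .weight = 5; }
}

sub vcl_recv {
    set req.backend = d1;
}
```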
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 14:47:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 14:47:14 -0000 Subject: [Varnish] #477: Change defaults to respect Cache-Control: private In-Reply-To: <041.9226b1c4ffd27a15ef76220ae7e29120@varnish-cache.org> References: <041.9226b1c4ffd27a15ef76220ae7e29120@varnish-cache.org> Message-ID: <050.96bf6ff108f957f0c4a9fe596a82b1d7@varnish-cache.org> #477: Change defaults to respect Cache-Control: private ----------------------+----------------------------------------------------- Reporter: olau | Owner: sky Type: defect | Status: closed Priority: normal | Milestone: Varnish 3.0 dev Component: varnishd | Version: trunk Severity: normal | Resolution: wontfix Keywords: | ----------------------+----------------------------------------------------- Comment(by olau): Hah. :) Gave me a good laugh. You admit it's a bug, it's simple enough to fix, right? Two lines of default VCL? But you won't do it? Why not? I still don't understand. When we "discussed" this on the mailing list, I got 4 emails evading the subject, telling me that Varnish is not covered by the RFC, which I only quoted to try to explain why the default behaviour came as a surprise. I should perhaps add that the reason you haven't been flooded with bug reports could be that what at least used to be a prominently featured introduction in the wiki showed how to fix the bug yourself in VCL (I missed this since I had been using Varnish before that intro). Anyway, if you just don't think it's worth fixing, feel free to ignore me. I've abandoned Varnish some time ago anyway, when it became clear you are not targeting smaller shops. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 14:50:45 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 14:50:45 -0000 Subject: [Varnish] #477: Change defaults to respect Cache-Control: private In-Reply-To: <041.9226b1c4ffd27a15ef76220ae7e29120@varnish-cache.org> References: <041.9226b1c4ffd27a15ef76220ae7e29120@varnish-cache.org> Message-ID: <050.0ce8241583869c67956c7960f5b8375a@varnish-cache.org> #477: Change defaults to respect Cache-Control: private ----------------------+----------------------------------------------------- Reporter: olau | Owner: sky Type: defect | Status: closed Priority: normal | Milestone: Varnish 3.0 dev Component: varnishd | Version: trunk Severity: normal | Resolution: wontfix Keywords: | ----------------------+----------------------------------------------------- Comment(by phk): I'm sorry to hear that Varnish did not meet your expectations. We cannot be everything to everybody, and we realize that. 
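For anyone landing on this ticket looking for the "do it from VCL" option mentioned earlier: a sketch of what that might look like in Varnish 3.0 syntax follows. This is not the exact default-VCL change discussed in the thread (that text is not quoted here), just one plausible, untested version:

```vcl
sub vcl_fetch {
    # Respect Cache-Control: private/no-cache/no-store ourselves,
    # since the default policy does not. Marking the object
    # hit-for-pass sends subsequent requests straight to the backend.
    if (beresp.http.Cache-Control ~ "(?i)(private|no-cache|no-store)") {
        return (hit_for_pass);
    }
}
```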
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 19:48:49 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 19:48:49 -0000 Subject: [Varnish] #977: Client and hash-directors doesn't retry on different backends In-Reply-To: <045.e6a9f9b617acc9165a08bb124753eb07@varnish-cache.org> References: <045.e6a9f9b617acc9165a08bb124753eb07@varnish-cache.org> Message-ID: <054.91778570c72241f1ef60e7032c7ab6cc@varnish-cache.org> #977: Client and hash-directors doesn't retry on different backends ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [149c4d4f6e256deb1ed145cf22849e2bd03ab5b7]) Implement a consistent retry policy in the random/client/hash director: If the first (policy-chosen) backend fails to get us a connection, retry a random backend (still according to their weight) until retries are exhausted. Kristian sent a proof of concept patch, I just cleaned it up and made it compile. 
Thanks to: Kristian Fixes #977 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 15 20:48:39 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Aug 2011 20:48:39 -0000 Subject: [Varnish] #963: hsh_rush not waking exponentially In-Reply-To: <043.b65620d53d58410fe9c4b705207dd9eb@varnish-cache.org> References: <043.b65620d53d58410fe9c4b705207dd9eb@varnish-cache.org> Message-ID: <052.80da724f0d49bd77936893e73f4de15e@varnish-cache.org> #963: hsh_rush not waking exponentially --------------------+------------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [b38682bb5aa5d36c856a634e121b15c0994c4fa6]) Make sure the entire waiting list is rushed when an object goes non-busy. Not sure what I thought when I changed it last, but it was clearly not smart thinking. 
Spotted by: Martin Test case by: Martin Fixes #963 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 16 20:34:48 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 16 Aug 2011 20:34:48 -0000 Subject: [Varnish] #978: esi_level > 0 does not clear do_stream, resulting in assert Message-ID: <043.852309eb87b93e7bd7a2564fd0817efb@varnish-cache.org> #978: esi_level > 0 does not clear do_stream, resulting in assert ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: major | Keywords: ----------------------+----------------------------------------------------- do_stream is cleared only on do_esi, not also when esi_level > 0, causing assert in RES_StreamStart on res_mode. See attached test case. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 16 21:51:13 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 16 Aug 2011 21:51:13 -0000 Subject: [Varnish] #979: sp->wrk->h_content_length is not cleared when do_stream and restart in vcl_deliver Message-ID: <043.244df71023deeaeb441eefa160bdd171@varnish-cache.org> #979: sp->wrk->h_content_length is not cleared when do_stream and restart in vcl_deliver ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: ----------------------+----------------------------------------------------- sp->wrk->h_content_length is set on Content-Length-based transfer after receiving backend headers, but not cleared if the request is restarted in vcl_deliver and the backend connection is closed. Causes assert on next pass through fetch. 
See attached test case -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 16 23:12:42 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 16 Aug 2011 23:12:42 -0000 Subject: [Varnish] #980: Varnish tries to do content-length transfer when do_stream and do_gzip Message-ID: <043.974b19aa62a03f4a4c7bba5b9817d08a@varnish-cache.org> #980: Varnish tries to do content-length transfer when do_stream and do_gzip ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: ----------------------+----------------------------------------------------- If Varnish receives a content-length header from the server, it will use this when streaming even if it is going to gzip the content. See attached test case -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 00:06:08 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 00:06:08 -0000 Subject: [Varnish] #981: /etc/init.d/varnish status always returns 0 on debian/ubuntu Message-ID: <041.05a93061a1e57b5616501e44a449d598@varnish-cache.org> #981: /etc/init.d/varnish status always returns 0 on debian/ubuntu --------------------------------------------+------------------------------- Reporter: kane | Type: defect Status: new | Priority: normal Milestone: After Varnish 2.1 | Component: packaging Version: 2.1.5 | Severity: normal Keywords: init.d debian ubuntu packaging | --------------------------------------------+------------------------------- That is because the return value of status_of_proc is ignored. This leads to problems with systems like Puppet, who check the return value of /etc/init.d/$service status to see if a service is running or not. The patch is trivial and follows how /etc/init.d/ssh does it. 
Unfortunately, I couldn't find the git repository holding the debian packaging, so I've added the patch inline below: --- /tmp/varnish 2011-08-16 23:58:41.000000000 +0000 +++ /etc/init.d/varnish 2011-08-16 23:57:40.000000000 +0000 @@ -85,7 +85,7 @@ } status_varnishd() { - status_of_proc -p "${PIDFILE}" "${DAEMON}" "${NAME}" + status_of_proc -p "${PIDFILE}" "${DAEMON}" "${NAME}" && exit 0 || exit $? } case "$1" in -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 01:03:44 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 01:03:44 -0000 Subject: [Varnish] #982: Param "Must be no more than 500", but diagnostic does not say which param. Message-ID: <041.20633ecb3ac0e3d67c4fa30ba784d333@varnish-cache.org> #982: Param "Must be no more than 500", but diagnostic does not say which param. -------------------+-------------------------------------------------------- Reporter: kane | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.5 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- When starting 2.1.5, I got this error: $ sudo /etc/init.d/varnish start * Starting HTTP accelerator varnishd ...fail! storage_malloc: max size 1032 MB. Error: Must be no more than 500 The diagnostic doesn't mention which param should be no more than 500, so tracking this down was quite cumbersome. Perhaps the name of the parameter can be printed as part of the diagnostic. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 01:07:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 01:07:14 -0000 Subject: [Varnish] #983: reload-vcl doesn't support all varnishd flags Message-ID: <041.0cb660329720dc98ab74dc03e3e600d5@varnish-cache.org> #983: reload-vcl doesn't support all varnishd flags -------------------+-------------------------------------------------------- Reporter: kane | Type: defect Status: new | Priority: lowest Milestone: | Component: varnishd Version: 3.0.0 | Severity: trivial Keywords: | -------------------+-------------------------------------------------------- Seems reload-vcl has a hardcoded list of parameters that varnishd takes, but at least in 2.1.5 they're out of sync. So when running reload, the following happens (non-fatal): $ sudo /etc/init.d/varnish reload * Reloading HTTP accelerator varnishd Illegal option -i ...done. The patch is trivial and inline below: --- /usr/share/varnish/reload-vcl 2011-01-27 16:24:36.000000000 +0000 +++ /tmp/reload-vcl 2011-08-17 01:03:06.000000000 +0000 @@ -87,7 +87,7 @@ # Extract the -f and the -T option, and (try to) ensure that the # management interface is on the form hostname:address. 
OPTIND=1 -while getopts a:b:dFf:g:h:l:n:P:p:S:s:T:t:u:Vw: flag $DAEMON_OPTS +while getopts a:b:CdFf:g:h:i:l:M:n:P:p:S:s:T:t:u:Vw: flag $DAEMON_OPTS do case $flag in f) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 06:54:37 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 06:54:37 -0000 Subject: [Varnish] #983: reload-vcl doesn't support all varnishd flags In-Reply-To: <041.0cb660329720dc98ab74dc03e3e600d5@varnish-cache.org> References: <041.0cb660329720dc98ab74dc03e3e600d5@varnish-cache.org> Message-ID: <050.d54fafa8718909599b0332111be00937@varnish-cache.org> #983: reload-vcl doesn't support all varnishd flags -------------------+-------------------------------------------------------- Reporter: kane | Type: defect Status: new | Priority: lowest Milestone: | Component: packaging Version: 3.0.0 | Severity: trivial Keywords: | -------------------+-------------------------------------------------------- Changes (by phk): * component: varnishd => packaging -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 06:55:19 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 06:55:19 -0000 Subject: [Varnish] #982: Param "Must be no more than 500", but diagnostic does not say which param. In-Reply-To: <041.20633ecb3ac0e3d67c4fa30ba784d333@varnish-cache.org> References: <041.20633ecb3ac0e3d67c4fa30ba784d333@varnish-cache.org> Message-ID: <050.238cb65c25369a39b89980031ca3aa27@varnish-cache.org> #982: Param "Must be no more than 500", but diagnostic does not say which param. 
-------------------+-------------------------------------------------------- Reporter: kane | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.5 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Description changed by phk: Old description: > When starting 2.1.5, I got this error: > > $ sudo /etc/init.d/varnish start > * Starting HTTP accelerator varnishd > ...fail! > storage_malloc: max size 1032 MB. > Error: > Must be no more than 500 > > The diagnostic doesn't mention which param should be no more than 500, so > tracking this down was quite cumbersome. > > Perhaps the name of the parameter can be printed as part of the > diagnostic. New description: When starting 2.1.5, I got this error: {{{ $ sudo /etc/init.d/varnish start * Starting HTTP accelerator varnishd ...fail! storage_malloc: max size 1032 MB. Error: Must be no more than 500 }}} The diagnostic doesn't mention which param should be no more than 500, so tracking this down was quite cumbersome. Perhaps the name of the parameter can be printed as part of the diagnostic. -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 06:58:31 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 06:58:31 -0000 Subject: [Varnish] #982: Param "Must be no more than 500", but diagnostic does not say which param. In-Reply-To: <041.20633ecb3ac0e3d67c4fa30ba784d333@varnish-cache.org> References: <041.20633ecb3ac0e3d67c4fa30ba784d333@varnish-cache.org> Message-ID: <050.8c74209b7f61995396d037c0dac39bc4@varnish-cache.org> #982: Param "Must be no more than 500", but diagnostic does not say which param. 
-------------------------+-------------------------------------------------- Reporter: kane | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: 2.1.5 | Severity: normal Resolution: worksforme | Keywords: -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: This is already fixed in 3.0.0 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 07:10:10 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 07:10:10 -0000 Subject: [Varnish] #980: Varnish tries to do content-length transfer when do_stream and do_gzip In-Reply-To: <043.974b19aa62a03f4a4c7bba5b9817d08a@varnish-cache.org> References: <043.974b19aa62a03f4a4c7bba5b9817d08a@varnish-cache.org> Message-ID: <052.9ba77e55ac96dc8330b69f7918d3ee6f@varnish-cache.org> #980: Varnish tries to do content-length transfer when do_stream and do_gzip ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [298c7c382b731dbde290ce32d3199d076a4d845d]) We cannot use a length-based response when we transform (gzip/gunzip), or when we stream and the backend didn't send C-L. 
Fixes #980 Testcase by: Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 07:16:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 07:16:34 -0000 Subject: [Varnish] #979: sp->wrk->h_content_length is not cleared when do_stream and restart in vcl_deliver In-Reply-To: <043.244df71023deeaeb441eefa160bdd171@varnish-cache.org> References: <043.244df71023deeaeb441eefa160bdd171@varnish-cache.org> Message-ID: <052.ab1c91040fa8e394ea71b8adfc1b6faa@varnish-cache.org> #979: sp->wrk->h_content_length is not cleared when do_stream and restart in vcl_deliver ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [eab8652f3f2c29275aa2236e5c967d0d1d07de24]) restart in vcl_deliver{} would crash in vcl_fetch{} due to missing cleanup. 
Found & Fixed by: Martin Fixes #979 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 07:24:53 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 07:24:53 -0000 Subject: [Varnish] #978: esi_level > 0 does not clear do_stream, resulting in assert In-Reply-To: <043.852309eb87b93e7bd7a2564fd0817efb@varnish-cache.org> References: <043.852309eb87b93e7bd7a2564fd0817efb@varnish-cache.org> Message-ID: <052.1efdd070354d2dbf4733b7cfd0ec1df5@varnish-cache.org> #978: esi_level > 0 does not clear do_stream, resulting in assert ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: major | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [81010e415ca34634c01db5d6245c224e2e538f70]) Clear do_stream on all ESI objects, including included objects. 
Found & Fixed by: Martin Fixes #978 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 09:34:28 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 09:34:28 -0000 Subject: [Varnish] #942: Varnish stalls receiving a "weird" response from a backend In-Reply-To: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> References: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> Message-ID: <057.e1ab569d32f540245d7b281eaa474d47@varnish-cache.org> #942: Varnish stalls receiving a "weird" response from a backend -------------------------+-------------------------------------------------- Reporter: andreacampi | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [b54440ffda04ffa093e21de578249d85e648c05d]) If the backend used chunked encoding and sent junk after the gzip data, the thread would go into a spin. 
Fixes #942 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 09:48:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 09:48:21 -0000 Subject: [Varnish] #951: varnish stalls connections on high traffic to non-cacheable urls In-Reply-To: <041.b0e12d879eedb94a6deebe2f0af76fa7@varnish-cache.org> References: <041.b0e12d879eedb94a6deebe2f0af76fa7@varnish-cache.org> Message-ID: <050.a85d419f7ff90bf0f1117be5e46cb68a@varnish-cache.org> #951: varnish stalls connections on high traffic to non-cacheable urls ----------------------------------+----------------------------------------- Reporter: tttt | Type: defect Status: closed | Priority: normal Milestone: Varnish 2.1 release | Component: varnishd Version: 2.1.5 | Severity: major Resolution: worksforme | Keywords: ----------------------------------+----------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: I have read this ticket twice now, and I fail to see the issue as being anything but a configuration error of some kind. If the objects are non-cacheable, the best thing to do would be to pass them in vcl_recv{} and be done with it. If you do not configure this, the normal waiting-list and "hit for pass" policies will kick in, and the problem you see is very likely the expected pile-up when the "hit for pass" object times out. Failing to spot anything that doesn't work as it should, I'm closing this ticket. 
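[Editor's note: phk's advice above — pass known non-cacheable URLs in vcl_recv{} so they never hit the waiting list or the "hit for pass" machinery — can be sketched in VCL roughly as follows; the '/nocache/' URL prefix is a made-up example, not something from this ticket:

{{{
sub vcl_recv {
    /* Never cache these: go straight to the backend, bypassing
     * the waiting-list serialization and hit-for-pass logic. */
    if (req.url ~ "^/nocache/") {
        return (pass);
    }
}
}}}
]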
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 09:51:42 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 09:51:42 -0000 Subject: [Varnish] #934: Memory leak in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898 In-Reply-To: <043.d74ebc3e11e35ac4f435f670a0217608@varnish-cache.org> References: <043.d74ebc3e11e35ac4f435f670a0217608@varnish-cache.org> Message-ID: <052.cd784f04409b9a40e062313ccc044178@varnish-cache.org> #934: Memory leak in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898 -------------------------+-------------------------------------------------- Reporter: kdajka | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Resolution: worksforme | Keywords: -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: I'm timing this one out for lack of substance we can act on. If the problem still exists, feel free to reopen the ticket with more details. Please note that Varnish 3.0 supports gzip now, so that would save you the duplicate objects. 
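[Editor's note: the gzip support phk mentions above is controlled from vcl_fetch{} in Varnish 3.0, so a single compressed copy can be stored instead of duplicate objects. A minimal sketch — limiting compression to text responses is an assumed policy here, not part of the ticket:

{{{
sub vcl_fetch {
    /* Store one gzipped copy; Varnish decompresses on delivery
     * for clients that do not send Accept-Encoding: gzip. */
    if (beresp.http.Content-Type ~ "^text/") {
        set beresp.do_gzip = true;
    }
}
}}}
]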
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 17 09:29:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Aug 2011 09:29:14 -0000 Subject: [Varnish] #978: esi_level > 0 does not clear do_stream, resulting in assert In-Reply-To: <043.852309eb87b93e7bd7a2564fd0817efb@varnish-cache.org> References: <043.852309eb87b93e7bd7a2564fd0817efb@varnish-cache.org> Message-ID: <052.5a4b54f7cd94c449e0ab403745b2ebe2@varnish-cache.org> #978: esi_level > 0 does not clear do_stream, resulting in assert ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: major | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In [0f8805f5e56f4d3312721c0cec7a3b1fa371be5f]) Clear do_stream on all esi objects, including included objects. 
Found & Fixed by: Martin Fixes #978 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 19 10:16:44 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 19 Aug 2011 10:16:44 -0000 Subject: [Varnish] #942: Varnish stalls receiving a "weird" response from a backend In-Reply-To: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> References: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> Message-ID: <057.d69640eaa648582afc85e1cbedbb353b@varnish-cache.org> #942: Varnish stalls receiving a "weird" response from a backend -------------------------+-------------------------------------------------- Reporter: andreacampi | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Comment(by Tollef Fog Heen ): (In [e6e34d24b7b2e47d936867a4a1d7714ca568b7ae]) If the backend used chunked encoding and sent junk after the gzip data, the thread would go into a spin. Fixes #942 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 19 15:59:23 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 19 Aug 2011 15:59:23 -0000 Subject: [Varnish] #984: [PATCH] When a non-cacheable status is returned, default TTL is applied Message-ID: <044.f1393dbac28989a3e46ad14cb18e285b@varnish-cache.org> #984: [PATCH] When a non-cacheable status is returned, default TTL is applied ---------------------+------------------------------------------------------ Reporter: drwilco | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | ---------------------+------------------------------------------------------ In cache_center.c: {{{ sp->wrk->exp.ttl = RFC2616_Ttl(sp); }}} In RFC2616_Ttl: {{{ ttl = params->default_ttl; }}} ... 
{{{ switch (sp->err_code) { default: sp->wrk->exp.ttl = -1.; break; case 200: /* OK */ case 203: /* Non-Authoritative Information */ case 300: /* Multiple Choices */ case 301: /* Moved Permanently */ case 302: /* Moved Temporarily */ case 307: /* Temporary Redirect */ case 410: /* Gone */ case 404: /* Not Found */ }}} ... {{{ return (ttl); }}} So even though RFC2616_Ttl sets sp->wrk->exp.ttl to -1, the return(ttl) at the end overwrites that through the assignment in cache_center.c of the return value. So anything that's not a 200, 203, 300, 301, 302, 307, 410 or 404 will get the default TTL instead of -1. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 19 16:05:36 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 19 Aug 2011 16:05:36 -0000 Subject: [Varnish] #985: Assert error in wrk_thread_real Message-ID: <044.ab6ca26cc4d063c8c8fe45ce7006f8b2@varnish-cache.org> #985: Assert error in wrk_thread_real ----------------------------+----------------------------------------------- Reporter: davidlc | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: critical Keywords: transient #953 | ----------------------------+----------------------------------------------- After being severely bitten by bug #953, I managed to limit the transient storage size thanks to the doc committed on 2011/08/17. Using a very small memory setting to test proper nuking: {{{ DAEMON_OPTS="-a :80 -p shortlived=1 -s malloc,3M -s Transient=malloc,3M \ -f /etc/varnish/varnish.vcl -n /var/run/varnish -p thread_pool_max=2000 -p \ thread_pools=2 -p lru_interval=6 -p thread_pool_min=100 -p thread_pool_add_delay=2 \ -p session_linger=100 -p listen_depth=4096 -p sess_workspace=65536 -p ping_interval=2 \ -p sess_timeout=1 -t 120 -T 127.0.0.1:8888" }}} I launch a script requesting thousands of URLs. 
Then every 12s or so, the varnish instance crashes with: {{{ /var/run/varnish[10287]: Child (16198) Panic message: Assert error in wrk_thread_real(), cache_pool.c line 187: Condition((w->bereq->ws) == 0) not true. thread = (cache-worker) ident = Linux,2.6.18-238.12.1.el5,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42b386: pan_ic+b6 0x42cb97: wrk_thread_real+427 0x311fa0673d: _end+311f393d65 0x311f2d44bd: _end+311ec61ae5 }}} The varnishstat output just before a crash looks like: [http://pastebin.com/RU41NRxX] The OS is CentOS 5.6. The VCL used is empty/default: just a backend. {{{ backend default { .host = "mybackend"; .port = "80"; .saintmode_threshold = 20000; .probe = { .request = "GET /test.html HTTP/1.1" "Host: lbcheck" "Connection: close"; .timeout = 0.2 s; .interval = 1s; .window = 4; .threshold = 2; } } }}} The served objects are quite big and numerous (filling this small cache). But they should have a large TTL as the backend makes them expire in a day (don't ask why we cache video chunks ;): {{{ 11 TxHeader c Server: nginx 11 TxHeader c Content-Type: video/MP2T 11 TxHeader c Last-Modified: Fri, 06 May 2011 00:13:41 GMT 11 TxHeader c Expires: Sat, 20 Aug 2011 15:21:34 GMT 11 TxHeader c Cache-Control: max-age=86400 11 TxHeader c Content-Length: 19740 11 TxHeader c Accept-Ranges: bytes 11 TxHeader c Date: Fri, 19 Aug 2011 15:21:34 GMT 11 TxHeader c X-Varnish: 1136508484 11 TxHeader c Age: 0 11 TxHeader c Via: 1.1 varnish }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 19 16:12:38 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 19 Aug 2011 16:12:38 -0000 Subject: [Varnish] #953: Leak in the TransientStorage? In-Reply-To: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> References: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> Message-ID: <052.de8dbcc85f6c08c5a756613a8c82153c@varnish-cache.org> #953: Leak in the TransientStorage? 
----------------------+----------------------------------------------------- Reporter: elurin | Owner: phk Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment(by davidlc): While working around this bug (why do all my objects get 'transiented'? :) by using '-s Transient=malloc,3M' I stumbled upon another bug, thus created bug #985. Maybe the details there can help with this bug too? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 22 08:23:31 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Aug 2011 08:23:31 -0000 Subject: [Varnish] #985: Assert error in wrk_thread_real In-Reply-To: <044.ab6ca26cc4d063c8c8fe45ce7006f8b2@varnish-cache.org> References: <044.ab6ca26cc4d063c8c8fe45ce7006f8b2@varnish-cache.org> Message-ID: <053.dbceaa54e658c20eac938f8eabc887df@varnish-cache.org> #985: Assert error in wrk_thread_real ----------------------+----------------------------------------------------- Reporter: davidlc | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: critical Resolution: fixed | Keywords: transient #953 ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [380e7f201bd2f2632df817157558801cf18ee41d]) Properly clean up if we bail out of cnt_error because we cannot allocate an object. Fixes #985 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 22 08:23:28 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Aug 2011 08:23:28 -0000 Subject: [Varnish] #953: Leak in the TransientStorage? 
In-Reply-To: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> References: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> Message-ID: <052.6d06fd9d78f78e35c4bab3f33ddcb353@varnish-cache.org> #953: Leak in the TransientStorage? ----------------------+----------------------------------------------------- Reporter: elurin | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: reopened => closed * resolution: => fixed Comment: (In [7ed5f2b1f5f10991c3a82c3c1da90b6b59ca2cc6]) The law of unconsidered consequences strikes again: When we pushed the object allocation into the stevedores for -spersistent, we did not add LRU eviction to that allocation path. Then we added the Transient storage as a fallback for objects we could not allocate, and they all went there. Change the way object allocation works as follows: If VCL set a stevedore hint and it is valid, we stick with it, and LRU that stevedore attempting to make space. If no valid hint is given, try all stevedores in turn, then LRU one of them to make space. 
Fixes #953 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 22 10:11:48 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Aug 2011 10:11:48 -0000 Subject: [Varnish] #983: reload-vcl doesn't support all varnishd flags In-Reply-To: <041.0cb660329720dc98ab74dc03e3e600d5@varnish-cache.org> References: <041.0cb660329720dc98ab74dc03e3e600d5@varnish-cache.org> Message-ID: <050.4dd69e14da1c4bfe0d4582e7a753d05e@varnish-cache.org> #983: reload-vcl doesn't support all varnishd flags -----------------------+---------------------------------------------------- Reporter: kane | Owner: ssm Type: defect | Status: new Priority: lowest | Milestone: Component: packaging | Version: 3.0.0 Severity: trivial | Keywords: -----------------------+---------------------------------------------------- Changes (by tfheen): * owner: => ssm -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 22 10:25:01 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Aug 2011 10:25:01 -0000 Subject: [Varnish] #964: varnish is losing (java) session data In-Reply-To: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> References: <047.f73ad54aee10de8567510bfb13271915@varnish-cache.org> Message-ID: <056.8c8ff0afa3ab99a084ea126102f277f2@varnish-cache.org> #964: varnish is losing (java) session data -------------------------+-------------------------------------------------- Reporter: pravenjohn | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Resolution: invalid | Keywords: -------------------------+-------------------------------------------------- Changes (by kristian): * status: new => closed * resolution: => invalid Comment: After reviewing this a few times, it looks like this is a configuration issue of some sort related to your site (in combination with Varnish). 
I recommend you use the varnish-misc mailing list for this (you'll have a much bigger audience). I'm closing the bug report as this doesn't point to a real bug in Varnish. Feel free to re-open the bug if it seems like it really is a bug after consulting the varnish-misc mailing list. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 22 10:26:35 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Aug 2011 10:26:35 -0000 Subject: [Varnish] #966: Varnish Header In-Reply-To: <045.a09638ec6a9784d78e37f64576048f44@varnish-cache.org> References: <045.a09638ec6a9784d78e37f64576048f44@varnish-cache.org> Message-ID: <054.b6611855d92fa02871244d7f1ea8ca2f@varnish-cache.org> #966: Varnish Header ----------------------+----------------------------------------------------- Reporter: frozts91 | Owner: tfheen Type: defect | Status: new Priority: highest | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by tfheen): No response from submitter; closing. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 22 10:46:04 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Aug 2011 10:46:04 -0000 Subject: [Varnish] #838: file backend storage file should be removed at package upgrade In-Reply-To: <043.0a5310132f8efd5e6fb2a78ef728781f@varnish-cache.org> References: <043.0a5310132f8efd5e6fb2a78ef728781f@varnish-cache.org> Message-ID: <052.4a19b69cec36b074c42c0f2e8f5d9483@varnish-cache.org> #838: file backend storage file should be removed at package upgrade -----------------------+---------------------------------------------------- Reporter: ingvar | Owner: Type: defect | Status: closed Priority: normal | Milestone: Later Component: packaging | Version: trunk Severity: normal | Resolution: worksforme Keywords: | -----------------------+---------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: I'm closing this ticket, as "Overtaken By Events". I think we have deduced that it is not the -sfile storage, but the VSM file that is being talked about, and that has been changed drastically since 2.1.0, hopefully for the better. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 24 07:24:37 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 24 Aug 2011 07:24:37 -0000 Subject: [Varnish] #953: Leak in the TransientStorage? In-Reply-To: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> References: <043.d411d994202d179d7348bd87ecffbaea@varnish-cache.org> Message-ID: <052.9d7a68b85bb9602f58fbd3f948633842@varnish-cache.org> #953: Leak in the TransientStorage? 
----------------------+----------------------------------------------------- Reporter: elurin | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In [5626096010669ede48b9dd4241651e0655c8e797]) The law of unconsidered consequences strikes again: When we pushed the object allocation into the stevedores for -spersistent, we did not add LRU eviction to that allocation path. Then we added the Transient storage as a fallback for objects we could not allocate, and they all went there. Change the way object allocation works as follows: If VCL set a stevedore hint and it is valid, we stick with it, and LRU that stevedore attempting to make space. If no valid hint is given, try all stevedores in turn, then LRU one of them to make space. Fixes #953 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 24 07:24:38 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 24 Aug 2011 07:24:38 -0000 Subject: [Varnish] #985: Assert error in wrk_thread_real In-Reply-To: <044.ab6ca26cc4d063c8c8fe45ce7006f8b2@varnish-cache.org> References: <044.ab6ca26cc4d063c8c8fe45ce7006f8b2@varnish-cache.org> Message-ID: <053.76e8a0ee7b172c1c10f541abea1c5373@varnish-cache.org> #985: Assert error in wrk_thread_real ----------------------+----------------------------------------------------- Reporter: davidlc | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: critical Resolution: fixed | Keywords: transient #953 ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In [4c4ef9603212a67b8b71ec66d45f8eca3a211d4c]) Properly clean up if we bail out of cnt_error because we cannot allocate an object. 
Fixes #985 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 24 13:46:59 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 24 Aug 2011 13:46:59 -0000 Subject: [Varnish] #986: Stream-related Assert error in AssertObjCorePassOrBusy(), cache.h line 1011: Message-ID: <045.25ab28f04a20ff106ca9496eec857fa0@varnish-cache.org> #986: Stream-related Assert error in AssertObjCorePassOrBusy(), cache.h line 1011: ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Discovered during testing of the streaming code. {{{ debug: 2 (6s) Executing tests log: 1 (25s) Server tristran checked out varnish-3.0.0-beta2-155-g555735f warning: 0 (233s) httperf TEST (streaming): Panic detected. I think! warning: 0 (0s) httperf TEST (streaming): Last panic at: Wed, 24 Aug 2011 11:30:16 GMT Assert error in AssertObjCorePassOrBusy(), cache.h line 1011: Condition((oc->flags & (1<<1)) != 0) not true. 
thread = (cache-worker) ident = Linux,2.6.32-33-server,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42e368: pan_ic+d8 0x41872b: cnt_prepresp+3bb 0x418cbd: CNT_Session+3cd 0x42fb58: wrk_do_cnt_sess+b8 0x42ffe1: wrk_thread_real+411 0x7f44249779ca: _end+7f44242fe712 0x7f44246d470d: _end+7f442405b455 sp = 0x7f43dfd13008 { fd = 157, id = 157, xid = 1969712598, client = 10.20.100.9 6936, step = STP_PREPRESP, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7f43dfd13080 { id = "sess", {s,f,r,e} = {0x7f43dfd13cc8,+152,(nil),+65536}, }, http[req] = { ws = 0x7f43dfd13080[sess] "GET", "/9/1/3.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.4", "X-Forwarded-For: 10.20.100.9", }, worker = 0x7f4400af9b60 { ws = 0x7f4400af9d08 { id = "wrk", {s,f,r,e} = {0x7f4400ae7af0,0x7f4400ae7af0,(nil),+65536}, }, }, vcl = { srcname = { "input", "Default", }, }, obj = 0x7f4418f4c300 { xid = 1969712501, ws = 0x7f4418f4c318 { id = "obj", {s,f,r,e} = {0x7f4418f4c4d0,+192,(nil),+224}, }, http[obj] = { ws = 0x7f4418f4c318[obj] "HTTP/1.1", "OK", "Server: nginx/0.7.65", "Date: Wed, 24 Aug 2011 11:30:01 GMT", "Content-Type: text/plain", "Last-Modified: Wed, 24 Aug 2011 11:27:50 GMT", "Content-Length: 1048576", }, len = 1048576, store = { 1048576 { 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| [1048512 more] }, }, }, }, log: 1 (0s) httperf TEST (streaming): Varnishstat uptime and measured run-time is too large (measured: 145 stat: 0 diff: 145). Did we crash? 
warning: 0 (0s) httperf TEST (streaming): Out of bounds: client_conn(0) less than lower boundary 9900 warning: 0 (0s) httperf TEST (streaming): Out of bounds: client_req(0) less than lower boundary 9800 log: 1 (0s) httperf TEST (streaming): Test name: streaming log: 1 (0s) httperf TEST (streaming): Varnish options: log: 1 (0s) httperf TEST (streaming): -t=1 log: 1 (0s) httperf TEST (streaming): Varnish parameters: log: 1 (0s) httperf TEST (streaming): thread_pool_add_delay=1 log: 1 (0s) httperf TEST (streaming): http_gzip_support=off log: 1 (0s) httperf TEST (streaming): default_grace=0 log: 1 (0s) httperf TEST (streaming): Payload size (excludes headers): 1M log: 1 (0s) httperf TEST (streaming): Branch: master log: 1 (0s) httperf TEST (streaming): Number of clients involved: 24 log: 1 (0s) httperf TEST (streaming): Type of test: httperf log: 1 (0s) httperf TEST (streaming): Test iterations: 1 log: 1 (0s) httperf TEST (streaming): Runtime: 145 seconds log: 1 (0s) httperf TEST (streaming): VCL: backend foo { .host = "localhost"; .port = "80"; .connect_timeout = 10s; } sub vcl_fetch { set beresp.do_stream = true; set beresp.grace = 0s; set beresp.ttl = 15s; } sub vcl_deliver { set resp.http.x-fryer = "some test"; } log: 1 (0s) httperf TEST (streaming): Number of total connections: 10000 log: 1 (0s) httperf TEST (streaming): Note: connections are subject to rounding when divided among clients. Expect slight deviations. log: 1 (0s) httperf TEST (streaming): Requests per connection: 1 log: 1 (0s) httperf TEST (streaming): Extra options to httperf: --wset=1000,0.1 --rate 3 log: 1 (0s) httperf TEST (streaming): Httperf command (last client): httperf --hog --timeout 60 --num-calls 1 --num-conns 416 --port 8080 --burst-length 1 --client 23/24 --server 10.20.100.4 --wset=1000,0.1 --rate 3 warning: 0 (0s) Tests finished with problems detected. Failed expectations: 4 Total run time: 265 seconds debug: 2 (0s) Sending mail }}} As discussed on IRC already. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 24 13:50:24 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 24 Aug 2011 13:50:24 -0000 Subject: [Varnish] #986: Stream-related Assert error in AssertObjCorePassOrBusy(), cache.h line 1011: In-Reply-To: <045.25ab28f04a20ff106ca9496eec857fa0@varnish-cache.org> References: <045.25ab28f04a20ff106ca9496eec857fa0@varnish-cache.org> Message-ID: <054.12a290e3ec622c0952f53642201dbbb3@varnish-cache.org> #986: Stream-related Assert error in AssertObjCorePassOrBusy(), cache.h line 1011: ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [8c534e2503ca621d9e51ae4a12627769c250330c]) Be much more bombastic about the per-request flags. 
This highlights that they really need to go into a struct or bitmap for clarity, but I'm not doing that right before 3.0.1 Fixes #986 Many Thanks To: Kristian -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 25 10:59:47 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 25 Aug 2011 10:59:47 -0000 Subject: [Varnish] #966: Varnish Header In-Reply-To: <045.a09638ec6a9784d78e37f64576048f44@varnish-cache.org> References: <045.a09638ec6a9784d78e37f64576048f44@varnish-cache.org> Message-ID: <054.4c07e4a4376c4a3ccf2e82e53d763699@varnish-cache.org> #966: Varnish Header ----------------------+----------------------------------------------------- Reporter: frozts91 | Owner: tfheen Type: defect | Status: closed Priority: highest | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: invalid Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: I'm closing this ticket for lack of response. 
The one thread spinning could possibly be related to the bug fixed in #942 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 25 13:08:37 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 25 Aug 2011 13:08:37 -0000 Subject: [Varnish] #987: Panic in cache_vrt.c with Varnish 3.0.0 Message-ID: <041.322157ba79fc5ef3f310ad88e240cb9a@varnish-cache.org> #987: Panic in cache_vrt.c with Varnish 3.0.0 -------------------+-------------------------------------------------------- Reporter: hplc | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Varnish panics about every 2 days.[[BR]] Platform: Linux CentOS 5.6 32bit (2.6.18-238.19.1.el5) {{{ $ rpm -qa|grep varnish varnish-release-3.0-1 varnish-3.0.0-2.el5 varnish-libs-3.0.0-2.el5 }}} {{{ Aug 23 23:09:41 centos varnishd[3485]: Platform: Linux,2.6.18-238.19.1.el5,i686,-sfile,-smalloc,-hcritbit Aug 23 23:09:41 centos varnishd[3485]: child (3486) Started Aug 23 23:09:41 centos varnishd[3485]: Child (3486) said Child starts Aug 23 23:09:41 centos varnishd[3485]: Child (3486) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824 Aug 25 06:26:32 centos varnishd[3485]: Child (3486) died signal=6 Aug 25 06:26:32 centos varnishd[3485]: Child (3486) Panic message: Assert error in VRT_IP_string(), cache_vrt.c line 310: Condition((p = WS_Alloc(sp->http->ws, len)) != 0) not true. 
thread = (cache-worker) ident = Linux,2.6.18-238.19.1.el5,i686,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x807652c: /usr/sbin/varnishd [0x807652c] 0x807f585: /usr/sbin/varnishd(VRT_IP_string+0x155) [0x807f585] 0x213343: ./vcl.7Cb57Ikc.so [0x213343] 0x807e654: /usr/sbin/varnishd(VCL_recv_method+0x54) [0x807e654] 0x805ef76: /usr/sbin/varnishd(CNT_Session+0x8d6) [0x805ef76] 0x80791ff: /usr/sbin/varnishd [0x80791ff] 0x80781b2: /usr/sbin/varnishd [0x80781b2] 0x807879f: /usr/sbin/varnishd [0x807879f] 0xa0e832: /lib/libpthread.so.0 [0xa0e832] 0x94e45e: /lib/libc.so.6(clone+0x5e) [0x94e45e] sp = 0xb7ed8004 { fd = 32, id = 32, xid = 156301678, client = 183.13.72.122 50306, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 ws = 0xb7ed8054 { overflow id = "sess", {s,f,r,e} = {0xb7ed87 Aug 25 06:26:32 centos varnishd[3485]: child (711) Started Aug 25 06:26:32 centos varnishd[3485]: Child (711) said Child starts Aug 25 06:26:32 centos varnishd[3485]: Child (711) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824 Aug 25 07:13:35 centos varnishd[3485]: Manager got SIGINT Aug 25 07:13:37 centos varnishd[2784]: Platform: Linux,2.6.18-238.19.1.el5,i686,-sfile,-smalloc,-hcritbit Aug 25 07:13:37 centos varnishd[2784]: child (2785) Started }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 25 17:47:07 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 25 Aug 2011 17:47:07 -0000 Subject: [Varnish] #988: On 32bit systems http_req_size's default is not reduced Message-ID: <044.e6092d0e59e91963c9ebd74d5bb6b492@varnish-cache.org> #988: On 32bit systems http_req_size's default is not reduced ---------------------+------------------------------------------------------ Reporter: drwilco | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | ---------------------+------------------------------------------------------ On 32bit systems 
sess_workspace and thread_pool_workspace (among other parameters) are reduced from 64K to 16K, but http_req_size is left at 32K, meaning that a request larger than 16K will make varnish panic, even with the default VCL. http_req_size should probably be reduced to 12K or so. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 26 07:21:28 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 26 Aug 2011 07:21:28 -0000 Subject: [Varnish] #984: [PATCH] When a non-cacheable status is returned, default TTL is applied In-Reply-To: <044.f1393dbac28989a3e46ad14cb18e285b@varnish-cache.org> References: <044.f1393dbac28989a3e46ad14cb18e285b@varnish-cache.org> Message-ID: <053.f052e1e1dba2f2c7dfaad8cdda85d545@varnish-cache.org> #984: [PATCH] When a non-cacheable status is returned, default TTL is applied ----------------------+----------------------------------------------------- Reporter: drwilco | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [b22995084401716d63fac0882ef5b56725af0c49]) Recent changes to the RFC2616 ttl calculation confused what ttl value was actually being used. 
Fixed by: DocWilco Fixes #984 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 26 09:04:11 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 26 Aug 2011 09:04:11 -0000 Subject: [Varnish] #988: On 32bit systems http_req_size's default is not reduced In-Reply-To: <044.e6092d0e59e91963c9ebd74d5bb6b492@varnish-cache.org> References: <044.e6092d0e59e91963c9ebd74d5bb6b492@varnish-cache.org> Message-ID: <053.82f637ac0b00b823fdf551b3ff39fed2@varnish-cache.org> #988: On 32bit systems http_req_size's default is not reduced ----------------------+----------------------------------------------------- Reporter: drwilco | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [94139b3066691d77e199dc5806f02148a3ecdde7]) Reduce http_req_size on 32 bit machines Fixes #988 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 26 22:39:15 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 26 Aug 2011 22:39:15 -0000 Subject: [Varnish] #989: tests/g00002.vtc failing on ppc64 Message-ID: <043.096f9bd7c1c19720d2704d4ee907550c@varnish-cache.org> #989: tests/g00002.vtc failing on ppc64 --------------------+------------------------------------------------------- Reporter: ingvar | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: --------------------+------------------------------------------------------- tests/g00002.vtc is failing on ppc64. My test case was 3.0.0, compiled from srpm, but without jemalloc (jemalloc is still defective on ppc64).
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 26 22:45:13 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 26 Aug 2011 22:45:13 -0000 Subject: [Varnish] #989: tests/g00002.vtc failing on ppc64 In-Reply-To: <043.096f9bd7c1c19720d2704d4ee907550c@varnish-cache.org> References: <043.096f9bd7c1c19720d2704d4ee907550c@varnish-cache.org> Message-ID: <052.f299cf19184aa91a4877242edcb90c86@varnish-cache.org> #989: tests/g00002.vtc failing on ppc64 --------------------+------------------------------------------------------- Reporter: ingvar | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: --------------------+------------------------------------------------------- Comment(by ingvar): This is rhel6 on ppc64. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 26 23:01:32 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 26 Aug 2011 23:01:32 -0000 Subject: [Varnish] #990: vcl(7) does not document vcl_init and vcl_fini Message-ID: <042.fa08c8075f047374fbdb86272be31640@varnish-cache.org> #990: vcl(7) does not document vcl_init and vcl_fini -------------------+-------------------------------------------------------- Reporter: fgsch | Type: defect Status: new | Priority: normal Milestone: | Component: documentation Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- As per summary, default.vcl is included in the doc, which has vcl_init and vcl_fini but they're not documented anywhere in the manpage. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Aug 27 09:51:41 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 27 Aug 2011 09:51:41 -0000 Subject: [Varnish] #991: vcl(7) does not document rollback, synthetic or panic Message-ID: <042.1bfd826bdef13cdfffb10d25f6f62507@varnish-cache.org> #991: vcl(7) does not document rollback, synthetic or panic -------------------+-------------------------------------------------------- Reporter: fgsch | Type: defect Status: new | Priority: normal Milestone: | Component: documentation Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- As per summary. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Aug 28 09:43:44 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 28 Aug 2011 09:43:44 -0000 Subject: [Varnish] #611: Websockets support In-Reply-To: <044.e857926f03d1cd58dcb558810a7e0704@varnish-cache.org> References: <044.e857926f03d1cd58dcb558810a7e0704@varnish-cache.org> Message-ID: <053.ae30f00e872809f05d481ac9842b88f7@varnish-cache.org> #611: Websockets support -------------------------+-------------------------------------------------- Reporter: wesnoth | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: invalid Keywords: | -------------------------+-------------------------------------------------- Comment(by jdespatis): Just a notice: I've in fact coded a websocket server that can handle websocket clients (only chrome for now), long polling clients (nearly all the other browsers), and telnet clients. I've tested the connection to my server through varnish (version 3; I've not tested with varnish v2) with several settings, and each has worked successfully!
(websocket clients, long polling clients) My settings in the VCL were as simple as this (copied from [https://www.varnish-cache.org/docs/3.0/tutorial/advanced_backend_servers.html], in fact): {{{ backend djserver { .host = "192.168.69.45"; .port = "12345"; } sub vcl_recv { if (req.url ~ "^/djserver$") { set req.backend = djserver; return (pipe); } else { set req.backend = default; } } }}} So wesnoth, if your websocket server doesn't work through varnish, you should investigate your websocket server: it is either buggy or misconfigured (varnish is out of scope there). For the varnish team: you should state that varnish has no problem with websockets or long polling clients, and update [https://www.varnish-cache.org/trac/wiki/Future_Protocols] Hope this helps... -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 07:19:32 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 07:19:32 -0000 Subject: [Varnish] #989: tests/g00002.vtc failing on ppc64 In-Reply-To: <043.096f9bd7c1c19720d2704d4ee907550c@varnish-cache.org> References: <043.096f9bd7c1c19720d2704d4ee907550c@varnish-cache.org> Message-ID: <052.8c176b6b2c37d83ec68490122d4fd043@varnish-cache.org> #989: tests/g00002.vtc failing on ppc64 --------------------+------------------------------------------------------- Reporter: ingvar | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: --------------------+------------------------------------------------------- Comment(by phk): What is the pagesize on this arch?
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 07:22:56 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 07:22:56 -0000 Subject: [Varnish] #987: Panic in cache_vrt.c with Varnish 3.0.0 In-Reply-To: <041.322157ba79fc5ef3f310ad88e240cb9a@varnish-cache.org> References: <041.322157ba79fc5ef3f310ad88e240cb9a@varnish-cache.org> Message-ID: <050.6d1a5f168a34d0383cf1da4720da7192@varnish-cache.org> #987: Panic in cache_vrt.c with Varnish 3.0.0 -------------------+-------------------------------------------------------- Reporter: hplc | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Description changed by phk: Old description: > Vanish Panic about everyone 2 days.[[BR]] > Platform: Linux CentOS 5.6 32bit (2.6.18-238.19.1.el5) > > {{{ > $ rpm -qa|grep varnish > varnish-release-3.0-1 > varnish-3.0.0-2.el5 > varnish-libs-3.0.0-2.el5 > }}} > > > {{{ > Aug 23 23:09:41 centos varnishd[3485]: Platform: > Linux,2.6.18-238.19.1.el5,i686,-sfile,-smalloc,-hcritbit > Aug 23 23:09:41 centos varnishd[3485]: child (3486) Started > Aug 23 23:09:41 centos varnishd[3485]: Child (3486) said Child starts > Aug 23 23:09:41 centos varnishd[3485]: Child (3486) said SMF.s0 mmap'ed > 1073741824 bytes of 1073741824 > Aug 25 06:26:32 centos varnishd[3485]: Child (3486) died signal=6 > Aug 25 06:26:32 centos varnishd[3485]: Child (3486) Panic message: Assert > error in VRT_IP_string(), cache_vrt.c line 310: Condition((p = > WS_Alloc(sp->http->ws, len)) != 0) not true. 
thread = (cache-worker) > ident = Linux,2.6.18-238.19.1.el5,i686,-sfile,-smalloc,-hcritbit,epoll > Backtrace: 0x807652c: /usr/sbin/varnishd [0x807652c] 0x807f585: > /usr/sbin/varnishd(VRT_IP_string+0x155) [0x807f585] 0x213343: > ./vcl.7Cb57Ikc.so [0x213343] 0x807e654: > /usr/sbin/varnishd(VCL_recv_method+0x54) [0x807e654] 0x805ef76: > /usr/sbin/varnishd(CNT_Session+0x8d6) [0x805ef76] 0x80791ff: > /usr/sbin/varnishd [0x80791ff] 0x80781b2: /usr/sbin/varnishd > [0x80781b2] 0x807879f: /usr/sbin/varnishd [0x807879f] 0xa0e832: > /lib/libpthread.so.0 [0xa0e832] 0x94e45e: /lib/libc.so.6(clone+0x5e) > [0x94e45e] sp = 0xb7ed8004 { fd = 32, id = 32, xid = 156301678, > client = 183.13.72.122 50306, step = STP_RECV, handling = deliver, > restarts = 0, esi_level = 0 ws = 0xb7ed8054 { overflow id = "sess", > {s,f,r,e} = {0xb7ed87 > Aug 25 06:26:32 centos varnishd[3485]: child (711) Started > Aug 25 06:26:32 centos varnishd[3485]: Child (711) said Child starts > Aug 25 06:26:32 centos varnishd[3485]: Child (711) said SMF.s0 mmap'ed > 1073741824 bytes of 1073741824 > Aug 25 07:13:35 centos varnishd[3485]: Manager got SIGINT > Aug 25 07:13:37 centos varnishd[2784]: Platform: > Linux,2.6.18-238.19.1.el5,i686,-sfile,-smalloc,-hcritbit > Aug 25 07:13:37 centos varnishd[2784]: child (2785) Started > }}} New description: Vanish Panic about everyone 2 days.[[BR]] Platform: Linux CentOS 5.6 32bit (2.6.18-238.19.1.el5) {{{ $ rpm -qa|grep varnish varnish-release-3.0-1 varnish-3.0.0-2.el5 varnish-libs-3.0.0-2.el5 }}} {{{ Aug 23 23:09:41 centos varnishd[3485]: Platform: Linux,2.6.18-238.19.1.el5,i686,-sfile,-smalloc,-hcritbit Aug 23 23:09:41 centos varnishd[3485]: child (3486) Started Aug 23 23:09:41 centos varnishd[3485]: Child (3486) said Child starts Aug 23 23:09:41 centos varnishd[3485]: Child (3486) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824 Aug 25 06:26:32 centos varnishd[3485]: Child (3486) died signal=6 Aug 25 06:26:32 centos varnishd[3485]: Child (3486) Panic message: Assert 
error in VRT_IP_string(), cache_vrt.c line 310: Condition((p = WS_Alloc(sp->http->ws, len)) != 0) not true. thread = (cache-worker) ident = Linux,2.6.18-238.19.1.el5,i686,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x807652c: /usr/sbin/varnishd [0x807652c] 0x807f585: /usr/sbin/varnishd(VRT_IP_string+0x155) [0x807f585] 0x213343: ./vcl.7Cb57Ikc.so [0x213343] 0x807e654: /usr/sbin/varnishd(VCL_recv_method+0x54) [0x807e654] 0x805ef76: /usr/sbin/varnishd(CNT_Session+0x8d6) [0x805ef76] 0x80791ff: /usr/sbin/varnishd [0x80791ff] 0x80781b2: /usr/sbin/varnishd [0x80781b2] 0x807879f: /usr/sbin/varnishd [0x807879f] 0xa0e832: /lib/libpthread.so.0 [0xa0e832] 0x94e45e: /lib/libc.so.6(clone+0x5e) [0x94e45e] sp = 0xb7ed8004 { fd = 32, id = 32, xid = 156301678, client = 183.13.72.122 50306, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 ws = 0xb7ed8054 { overflow id = "sess", {s,f,r,e} = {0xb7ed87 Aug 25 06:26:32 centos varnishd[3485]: child (711) Started Aug 25 06:26:32 centos varnishd[3485]: Child (711) said Child starts Aug 25 06:26:32 centos varnishd[3485]: Child (711) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824 Aug 25 07:13:35 centos varnishd[3485]: Manager got SIGINT Aug 25 07:13:37 centos varnishd[2784]: Platform: Linux,2.6.18-238.19.1.el5,i686,-sfile,-smalloc,-hcritbit Aug 25 07:13:37 centos varnishd[2784]: child (2785) Started }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 07:27:27 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 07:27:27 -0000 Subject: [Varnish] #987: Panic in cache_vrt.c with Varnish 3.0.0 In-Reply-To: <041.322157ba79fc5ef3f310ad88e240cb9a@varnish-cache.org> References: <041.322157ba79fc5ef3f310ad88e240cb9a@varnish-cache.org> Message-ID: <050.fce9684abe8d046d992b33fc0d024920@varnish-cache.org> #987: Panic in cache_vrt.c with Varnish 3.0.0 ----------------------+----------------------------------------------------- Reporter: hplc | Type: defect 
Status: closed | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Resolution: invalid | Keywords: ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: You have run out of session workspace; increase the sess_workspace parameter. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 08:46:25 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 08:46:25 -0000 Subject: [Varnish] #988: On 32bit systems http_req_size's default is not reduced In-Reply-To: <044.e6092d0e59e91963c9ebd74d5bb6b492@varnish-cache.org> References: <044.e6092d0e59e91963c9ebd74d5bb6b492@varnish-cache.org> Message-ID: <053.f0b177bcf1de616c0afdc292512cc7dc@varnish-cache.org> #988: On 32bit systems http_req_size's default is not reduced ----------------------+----------------------------------------------------- Reporter: drwilco | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In [773948499888ae36b1b9bc972a4b90cd22f1d1b1]) Reduce http_req_size on 32 bit machines Fixes #988 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 08:46:30 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 08:46:30 -0000 Subject: [Varnish] #984: [PATCH] When a non-cacheable status is returned, default TTL is applied In-Reply-To: <044.f1393dbac28989a3e46ad14cb18e285b@varnish-cache.org> References: <044.f1393dbac28989a3e46ad14cb18e285b@varnish-cache.org> Message-ID: <053.ecc3cfef78d44ff8a13134a8960df35e@varnish-cache.org> #984: [PATCH] When a non-cacheable status is returned, default TTL is applied
----------------------+----------------------------------------------------- Reporter: drwilco | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In [de5f2e36b55ea89c7e7057dd5c43ffc928de8179]) Recent changes to the RFC2616 ttl calculation confused what ttl value was actually being used. Fixed by: DocWilco Fixes #984 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 10:45:06 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 10:45:06 -0000 Subject: [Varnish] #990: vcl(7) does not document vcl_init and vcl_fini In-Reply-To: <042.fa08c8075f047374fbdb86272be31640@varnish-cache.org> References: <042.fa08c8075f047374fbdb86272be31640@varnish-cache.org> Message-ID: <051.bc3e3c11a1a8159e56debc8f26c74a71@varnish-cache.org> #990: vcl(7) does not document vcl_init and vcl_fini -------------------+-------------------------------------------------------- Reporter: fgsch | Type: defect Status: new | Priority: normal Milestone: | Component: documentation Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by scoof): Remember to document ok action too. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 10:47:44 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 10:47:44 -0000 Subject: [Varnish] #946: ExpKill disappeared from exp_timer in 3.0 In-Reply-To: <042.177cc31fd16288dfa6d14ef04c0e001c@varnish-cache.org> References: <042.177cc31fd16288dfa6d14ef04c0e001c@varnish-cache.org> Message-ID: <051.704dc4df3dbf0007a58dec0b55dc9753@varnish-cache.org> #946: ExpKill disappeared from exp_timer in 3.0 ----------------------+----------------------------------------------------- Reporter: scoof | Owner: phk Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: 3.0.0 Severity: trivial | Keywords: ----------------------+----------------------------------------------------- Changes (by phk): * owner: => phk * component: build => varnishd -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 10:48:08 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 10:48:08 -0000 Subject: [Varnish] #837: varnishadm purge.list fails frequently In-Reply-To: <043.a2d15772321998574922abae75b7be94@varnish-cache.org> References: <043.a2d15772321998574922abae75b7be94@varnish-cache.org> Message-ID: <052.dc5b078b9e9d00ffddc72fede683121f@varnish-cache.org> #837: varnishadm purge.list fails frequently ----------------------+----------------------------------------------------- Reporter: jelder | Owner: phk Type: defect | Status: new Priority: lowest | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Keywords: purge.list ----------------------+----------------------------------------------------- Changes (by phk): * owner: => phk -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 10:50:17 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 10:50:17 -0000 Subject: [Varnish] 
#940: ETag for gzip'd variant identical to ETag of ungzipped variant. In-Reply-To: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> References: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> Message-ID: <051.a95663a63076ae362610a98f3dbe8ddd@varnish-cache.org> #940: ETag for gzip'd variant identical to ETag of ungzipped variant. -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: high Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Comment(by tfheen): Did you have a chance to take a look at this to see if it works correctly? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 11:03:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 11:03:14 -0000 Subject: [Varnish] #992: s_bodybytes increased by storage-size not sent size Message-ID: <045.e0582ad2fc8e7c6f06ab198451604622@varnish-cache.org> #992: s_bodybytes increased by storage-size not sent size ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: minor | Keywords: ----------------------+----------------------------------------------------- With GZIP enabled, the content is stored gzipped. When a gzipped object is delivered, the s_bodybytes counter is increased by the size of the object. However, if Varnish is gunzipping the object on the fly, s_bodybytes is still increased as if it was sent gzipped. This is most prominent when testing with large, sparse 0'ed-files, where 24 requests of 5GB of data can be counted as a few hundred MB. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 11:08:02 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 11:08:02 -0000 Subject: [Varnish] #993: 32-bit somewhere math in s_bodybytes Message-ID: <045.818f4a5681e0c9bf025f0897ac2a5620@varnish-cache.org> #993: 32-bit somewhere math in s_bodybytes ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- A 5GB object is typically counted as 1GB. The counter itself is of appropriate size and the object is successfully transferred. As per IRC: {{{ < phk> Ohh, that last one is simple: WRW_Write returns unsigned... }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 11:59:07 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 11:59:07 -0000 Subject: [Varnish] #993: 32-bit somewhere math in s_bodybytes In-Reply-To: <045.818f4a5681e0c9bf025f0897ac2a5620@varnish-cache.org> References: <045.818f4a5681e0c9bf025f0897ac2a5620@varnish-cache.org> Message-ID: <054.0a3fa7a778a5b0c235aaa8bea8ddbe23@varnish-cache.org> #993: 32-bit somewhere math in s_bodybytes ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by kristian): As an example: the value of s_bodybytes reads 25769803776 after 24 objects of 5G (truncate --size 5G ...) have been successfully transferred. That's precisely 24GB.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 12:33:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 12:33:14 -0000 Subject: [Varnish] #994: Assert error in http_GetHdr(), cache_http.c Message-ID: <044.f0d406a02a1c326238466e3afe45c0f3@varnish-cache.org> #994: Assert error in http_GetHdr(), cache_http.c ---------------------+------------------------------------------------------ Reporter: pmialon | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: blocker Keywords: | ---------------------+------------------------------------------------------ From Git af353a6b6a45e2a47e17aa84389950a1c65854ec With the debian package varnish 3.0.0 this bug didn't appear; it seems that this is a regression. This bug is hit frequently: on our servers varnish never reaches one hour of uptime. {{{ Aug 29 14:24:34 cloud3 varnishd[3495]: Child (19452) Panic message: Assert error in http_GetHdr(), cache_http.c line 266: Condition(l == strlen(hdr + 1)) not true.
thread = (cache-worker) ident = Linux,2.6.32-5-amd64,x86_64,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x42e4c8: /usr/sbin/varnishd() [0x42e4c8] 0x429c08: /usr/sbin/varnishd(http_GetHdr+0x68) [0x429c08] 0x433c47: /usr/sbin/varnishd(VRY_Match+0xf7) [0x433c47] 0x427e86: /usr/sbin/varnishd(HSH_Lookup+0x2a6) [0x427e86] 0x415a3b: /usr/sbin/varnishd() [0x415a3b] 0x418fc5: /usr/sbin/varnishd(CNT_Session+0x675) [0x418fc5] 0x430c78: /usr/sbin/varnishd() [0x430c78] 0x42fe49: /usr/sbin/varnishd() [0x42fe49] 0x7ffff6ab48ba: /lib/libpthread.so.0(+0x68ba) [0x7ffff6ab48ba] 0x7ffff681c02d: /lib/libc.so.6(clone+0x6d) [0x7ffff681c02d] sp = 0x7ec790b21008 { fd = 53, id = 53, xid = 2071385764, client = 127.0.0.1 11364, step = STP_LOOKUP, handling = hash, restarts = 0, esi_level = 0 flags = bodystatus = 4 ws = 0x7ec790b21080 { id = "sess", {s,f,r,e} = {0x7ec790b21cc8,+3344,+65536,+65536}, }, http[req] = { ws = 0x7ec790b21080[sess] "GET", "/searchkw/xml/?_q%5B0%5D=%28suzuki%7Bw%3D1%7D+115%7Bw%3D1%7D%29+-category%3Aall+country%3AIT+%28category%3Amiscellaneous%29+querywords%3E%3D2+querywords%3C%3D4&_q%5B1%5D=%28suzuki%7Bw%3D1%7D%29+OPT%28115%29+-category%3Aall+country%3AIT+%28category%3Amiscellaneous%29+querywords%3E%3D2+querywords%3C%3D3&_q%5B2%5D=OPT%28suzuki+OR+115%29+-category%3Aall+country%3AIT+%28category%3Amiscellaneous%29+querywords%3E%3D2+querywords%3C%3D3&_q%5B3%5D=OPT%28suzuki+OR+115%29+-category%3Aall+country%3AES+%28category%3Amiscellaneous%29+querywords%3E%3D2&_vn%5B0%5D=defaultkw_new&_vn%5B1%5D=defaultkw_new&_vn%5B2%5D=defaultkw_new&_vn%5B3%5D=seo_keywords_round_new&_cc%5B0%5D=IT&_cc%5B1%5D=IT&_cc%5B2%5D=IT&_cc%5B3%5D=ES&_comp=gzip&_fmt=JSON&_hashq%5B1%5D=1&_hashq%5B2%5D=1&_hashq%5B3%5D=1&_hstart%5B2%5D=1", "HTTP/1.1", "Connection: Close", "X-URL: 
/searchkw/xml/?_q%5B0%5D=%28suzuki%7Bw%3D1%7D+115%7Bw%3D1%7D%29+-category%3Aall+country%3AIT+%28category%3Amiscellaneous%29+querywords%3E%3D2+querywords%3C%3D4&_q%5B1%5D=%28suzuki%7Bw%3D1%7D%29+OPT%28115%29+-category%3Aall+country%3AIT+%28category%3Amiscellaneous%29+querywords%3E%3D2+querywords%3C%3D3&_q%5B2%5D=OPT%28suzuki+OR+115%29+-category%3Aall+country%3AIT+%28category%3Amiscellaneous%29+querywords%3E%3D2+querywords%3C%3D3&_q%5B3%5D=OPT%28suzuki+OR+115%29+-category%3Aall+country%3AES+%28category%3Amiscellaneous%29+querywords%3E%3D2&_vn%5B0%5D=defaultkw_new&_vn%5B1%5D=defaultkw_new&_vn%5B2%5D=defaultkw_new&_vn%5B3%5D=seo_keywords_round_new&_cc%5B0%5D=IT&_cc%5B1%5D=IT&_cc%5B2%5D=IT&_cc%5B3%5D=ES&_comp=gzip&_fmt=JSON&_hashq%5B1%5D=1&_hashq%5B2%5D=1&_hashq%5B3%5D=1&_hstart%5B2%5D=1&ttls=672", }, worker = 0x7ec7616f8b90 { ws = 0x7ec7616f8d38 { id = "wrk", {s,f,r,e} = {0x7ec7616e6b20,0x7ec7616e6b20,(nil),+65536}, }, }, vcl = { srcname = { "input", "Default", }, }, }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 18:01:35 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 18:01:35 -0000 Subject: [Varnish] #995: [PATCH] two minor bugs in vsm_open Message-ID: <044.6d2c86d698fdaed33fc78e542bbe0a8a@varnish-cache.org> #995: [PATCH] two minor bugs in vsm_open ---------------------+------------------------------------------------------ Reporter: drwilco | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | ---------------------+------------------------------------------------------ 1) While opening the VSM, slh.allocseq is read instead of vd->VSM_head->alloc_seq. This makes the delay loop not have any effect. (Found by Federico G. Schwindt) 2) If the file is uninitialized, the VSM_data struct is reset, except for VSM_head. So if VSM_Open is called again, VSM_head will not be zero and an assert error will occur. 
(Found by !DocWilco) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 18:07:02 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 18:07:02 -0000 Subject: [Varnish] #995: [PATCH] two minor bugs in vsm_open In-Reply-To: <044.6d2c86d698fdaed33fc78e542bbe0a8a@varnish-cache.org> References: <044.6d2c86d698fdaed33fc78e542bbe0a8a@varnish-cache.org> Message-ID: <053.9f23ae7689a84eb7c5f608a92bd1b0dd@varnish-cache.org> #995: [PATCH] two minor bugs in vsm_open ---------------------+------------------------------------------------------ Reporter: drwilco | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | ---------------------+------------------------------------------------------ Comment(by drwilco): 3) if the map fails, no cleaning up is done at all. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 19:04:23 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 19:04:23 -0000 Subject: [Varnish] #940: ETag for gzip'd variant identical to ETag of ungzipped variant. In-Reply-To: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> References: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> Message-ID: <051.b93bd0aebfa40ea05d563046ca02abf0@varnish-cache.org> #940: ETag for gzip'd variant identical to ETag of ungzipped variant. -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: high Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Comment(by david): Hey Tollef, I modified my VCL to change the etag for gz content on the way out and back in, as was discussed on this ticket some time ago. Has the code been patched to hand out varying etags now? 
I have the 3.0.1 RC, but I cannot get it to compile to test it. I sent in my compiler errors on another (support) ticket. Regards, -david -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 20:30:43 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 20:30:43 -0000 Subject: [Varnish] #990: vcl(7) does not document vcl_init and vcl_fini In-Reply-To: <042.fa08c8075f047374fbdb86272be31640@varnish-cache.org> References: <042.fa08c8075f047374fbdb86272be31640@varnish-cache.org> Message-ID: <051.a9550519483887dc52b9b72c8c5b9ea7@varnish-cache.org> #990: vcl(7) does not document vcl_init and vcl_fini -------------------+-------------------------------------------------------- Reporter: fgsch | Type: defect Status: new | Priority: normal Milestone: | Component: documentation Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by scoof): Patch attached -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 29 21:31:51 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Aug 2011 21:31:51 -0000 Subject: [Varnish] #996: vcl_fini can prevent vcl from unloading Message-ID: <042.0de7733d205636ecf10e79963224da87@varnish-cache.org> #996: vcl_fini can prevent vcl from unloading -------------------+-------------------------------------------------------- Reporter: scoof | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- error'ing in vcl_fini will prevent a vcl from being discarded. Either CLI needs a way to force a discard, or error should be disallowed. vcl_init can also error. Maybe this makes more sense, maybe not. 
This is documented in the doc changes in #990, so any changes to behaviour need an updated vcl.rst -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 30 00:22:35 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Aug 2011 00:22:35 -0000 Subject: [Varnish] #997: vcl(7) does not document probe's expected_response or interval Message-ID: <042.09e85bcd8e611c8a0148bcbbd15f5618@varnish-cache.org> #997: vcl(7) does not document probe's expected_response or interval -------------------+-------------------------------------------------------- Reporter: fgsch | Type: defect Status: new | Priority: normal Milestone: | Component: documentation Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- As per summary. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 30 07:13:13 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Aug 2011 07:13:13 -0000 Subject: [Varnish] #996: vcl_fini can prevent vcl from unloading In-Reply-To: <042.0de7733d205636ecf10e79963224da87@varnish-cache.org> References: <042.0de7733d205636ecf10e79963224da87@varnish-cache.org> Message-ID: <051.8d924154eeb6c067a124ec184a79abcd@varnish-cache.org> #996: vcl_fini can prevent vcl from unloading ---------------------+------------------------------------------------------ Reporter: scoof | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Resolution: fixed | Keywords: ---------------------+------------------------------------------------------ Changes (by phk): * status: new => closed * resolution: => fixed Comment: I have disallowed "error" in both vcl_fini{} and vcl_init{}. A mechanism to fail vcl_init{} needs to be thought about, as returning a sensible diagnostic to the CLI is required.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 30 07:18:33 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Aug 2011 07:18:33 -0000 Subject: [Varnish] #990: vcl(7) does not document vcl_init and vcl_fini In-Reply-To: <042.fa08c8075f047374fbdb86272be31640@varnish-cache.org> References: <042.fa08c8075f047374fbdb86272be31640@varnish-cache.org> Message-ID: <051.b8ece3e541b62402018590c65b9cb30a@varnish-cache.org> #990: vcl(7) does not document vcl_init and vcl_fini ---------------------+------------------------------------------------------ Reporter: fgsch | Type: defect Status: closed | Priority: normal Milestone: | Component: documentation Version: 3.0.0 | Severity: normal Resolution: fixed | Keywords: ---------------------+------------------------------------------------------ Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [5b65d5a87e8dae66fb958144f0472c8cb7aafd36]) Document vcl_init{} and vcl_fini{} Fixes #990 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 30 07:22:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Aug 2011 07:22:20 -0000 Subject: [Varnish] #995: [PATCH] two minor bugs in vsm_open In-Reply-To: <044.6d2c86d698fdaed33fc78e542bbe0a8a@varnish-cache.org> References: <044.6d2c86d698fdaed33fc78e542bbe0a8a@varnish-cache.org> Message-ID: <053.14ab68a9f1175b4b0602a411acf16b24@varnish-cache.org> #995: [PATCH] two minor bugs in vsm_open ----------------------+----------------------------------------------------- Reporter: drwilco | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In 
[204198fa1e78b5dda55ec880431f193926720592]) Don't look at a static version of the VSM during startup. Fixes #995 Patch by DocWilco -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 30 07:28:08 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Aug 2011 07:28:08 -0000 Subject: [Varnish] #994: Assert error in http_GetHdr(), cache_http.c In-Reply-To: <044.f0d406a02a1c326238466e3afe45c0f3@varnish-cache.org> References: <044.f0d406a02a1c326238466e3afe45c0f3@varnish-cache.org> Message-ID: <053.df03dccf740b8f4771efcd597acc26c3@varnish-cache.org> #994: Assert error in http_GetHdr(), cache_http.c ---------------------+------------------------------------------------------ Reporter: pmialon | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: blocker Keywords: | ---------------------+------------------------------------------------------ New description: From Git af353a6b6a45e2a47e17aa84389950a1c65854ec With the debian package varnish 3.0.0 this bug didn't appear, it seems that this is a regression. This bug is hit frequently; on our servers varnish never reaches one hour of uptime. {{{ Aug 29 14:24:34 cloud3 varnishd[3495]: Child (19452) Panic message: Assert error in http_GetHdr(), cache_http.c line 266: Condition(l == strlen(hdr + 1)) not true. 
thread = (cache-worker) ident = Linux,2.6.32-5-amd64,x86_64,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x42e4c8: /usr/sbin/varnishd() [0x42e4c8] 0x429c08: /usr/sbin/varnishd(http_GetHdr+0x68) [0x429c08] 0x433c47: /usr/sbin/varnishd(VRY_Match+0xf7) [0x433c47] 0x427e86: /usr/sbin/varnishd(HSH_Lookup+0x2a6) [0x427e86] 0x415a3b: /usr/sbin/varnishd() [0x415a3b] 0x418fc5: /usr/sbin/varnishd(CNT_Session+0x675) [0x418fc5] 0x430c78: /usr/sbin/varnishd() [0x430c78] 0x42fe49: /usr/sbin/varnishd() [0x42fe49] 0x7ffff6ab48ba: /lib/libpthread.so.0(+0x68ba) [0x7ffff6ab48ba] 0x7ffff681c02d: /lib/libc.so.6(clone+0x6d) [0x7ffff681c02d] sp = 0x7ec790b21008 { fd = 53, id = 53, xid = 2071385764, client = 127.0.0.1 11364, step = STP_LOOKUP, handling = hash, restarts = 0, esi_level = 0 flags = bodystatus = 4 ws = 0x7ec790b21080 { id = "sess", {s,f,r,e} = {0x7ec790b21cc8,+3344,+65536,+65536}, }, http[req] = { ws = 0x7ec790b21080[sess] "GET", "/searchkw/xml/?_q%5B0%5D=%28suzuki%7Bw%3D1%7D+115%7Bw%3D1%7D%29+-category%3Aall+country%3AIT+%28category%3Amiscellaneous%29+querywords%3E%3D2+querywords%3C%3D4&_q%5B1%5D=%28suzuki%7Bw%3D1%7D%29+OPT%28115%29+-category%3Aall+country%3AIT+%28category%3Amiscellaneous%29+querywords%3E%3D2+querywords%3C%3D3&_q%5B2%5D=OPT%28suzuki+OR+115%29+-category%3Aall+country%3AIT+%28category%3Amiscellaneous%29+querywords%3E%3D2+querywords%3C%3D3&_q%5B3%5D=OPT%28suzuki+OR+115%29+-category%3Aall+country%3AES+%28category%3Amiscellaneous%29+querywords%3E%3D2&_vn%5B0%5D=defaultkw_new&_vn%5B1%5D=defaultkw_new&_vn%5B2%5D=defaultkw_new&_vn%5B3%5D=seo_keywords_round_new&_cc%5B0%5D=IT&_cc%5B1%5D=IT&_cc%5B2%5D=IT&_cc%5B3%5D=ES&_comp=gzip&_fmt=JSON&_hashq%5B1%5D=1&_hashq%5B2%5D=1&_hashq%5B3%5D=1&_hstart%5B2%5D=1", "HTTP/1.1", "Connection: Close", "X-URL: 
/searchkw/xml/?_q%5B0%5D=%28suzuki%7Bw%3D1%7D+115%7Bw%3D1%7D%29+-category%3Aall+country%3AIT+%28category%3Amiscellaneous%29+querywords%3E%3D2+querywords%3C%3D4&_q%5B1%5D=%28suzuki%7Bw%3D1%7D%29+OPT%28115%29+-category%3Aall+country%3AIT+%28category%3Amiscellaneous%29+querywords%3E%3D2+querywords%3C%3D3&_q%5B2%5D=OPT%28suzuki+OR+115%29+-category%3Aall+country%3AIT+%28category%3Amiscellaneous%29+querywords%3E%3D2+querywords%3C%3D3&_q%5B3%5D=OPT%28suzuki+OR+115%29+-category%3Aall+country%3AES+%28category%3Amiscellaneous%29+querywords%3E%3D2&_vn%5B0%5D=defaultkw_new&_vn%5B1%5D=defaultkw_new&_vn%5B2%5D=defaultkw_new&_vn%5B3%5D=seo_keywords_round_new&_cc%5B0%5D=IT&_cc%5B1%5D=IT&_cc%5B2%5D=IT&_cc%5B3%5D=ES&_comp=gzip&_fmt=JSON&_hashq%5B1%5D=1&_hashq%5B2%5D=1&_hashq%5B3%5D=1&_hstart%5B2%5D=1&ttls=672", }, worker = 0x7ec7616f8b90 { ws = 0x7ec7616f8d38 { id = "wrk", {s,f,r,e} = {0x7ec7616e6b20,0x7ec7616e6b20,(nil),+65536}, }, }, vcl = { srcname = { "input", "Default", }, }, }, }}} -- Comment(by phk): I would be very interested in knowing what the Vary: header it was processing looked like. You can either find this in varnishlog, if you can identify the object it is trying to match in the hash, or if the crash results in a core dump, it can be extracted from there. Alternatively, if you can simply get me the argument passed to http_GetHdr() from the coredump that would help too. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 30 07:31:01 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Aug 2011 07:31:01 -0000 Subject: [Varnish] #989: tests/g00002.vtc failing on ppc64 In-Reply-To: <043.096f9bd7c1c19720d2704d4ee907550c@varnish-cache.org> References: <043.096f9bd7c1c19720d2704d4ee907550c@varnish-cache.org> Message-ID: <052.77ca3acc20818a469bb21a4b2260413f@varnish-cache.org> #989: tests/g00002.vtc failing on ppc64 --------------------+------------------------------------------------------- Reporter: ingvar | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: This got fixed by using SMA which does not care about pagesize. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 30 07:37:42 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Aug 2011 07:37:42 -0000 Subject: [Varnish] #946: ExpKill disappeared from exp_timer in 3.0 In-Reply-To: <042.177cc31fd16288dfa6d14ef04c0e001c@varnish-cache.org> References: <042.177cc31fd16288dfa6d14ef04c0e001c@varnish-cache.org> Message-ID: <051.8b2d60a00797908c7c119211ddef2913@varnish-cache.org> #946: ExpKill disappeared from exp_timer in 3.0 ----------------------+----------------------------------------------------- Reporter: scoof | Owner: phk Type: defect | Status: closed Priority: low | Milestone: Component: varnishd | Version: 3.0.0 Severity: trivial | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [40575c5d132c654d10aa0a28cf0530dd0b7b178a]) Reintroduce ExpKill VSL records. 
Fixes #946 Patch by scoof -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 30 08:26:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Aug 2011 08:26:20 -0000 Subject: [Varnish] #995: [PATCH] two minor bugs in vsm_open In-Reply-To: <044.6d2c86d698fdaed33fc78e542bbe0a8a@varnish-cache.org> References: <044.6d2c86d698fdaed33fc78e542bbe0a8a@varnish-cache.org> Message-ID: <053.15880317755e355dbf28e36950668337@varnish-cache.org> #995: [PATCH] two minor bugs in vsm_open ----------------------+----------------------------------------------------- Reporter: drwilco | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In [8488a8b27a485b763d1df95f44045fb4e1a18a61]) Clean up VSM setup if mmap fails Fixes #995 (the rest of it). -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 30 08:29:00 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Aug 2011 08:29:00 -0000 Subject: [Varnish] #994: Assert error in http_GetHdr(), cache_http.c In-Reply-To: <044.f0d406a02a1c326238466e3afe45c0f3@varnish-cache.org> References: <044.f0d406a02a1c326238466e3afe45c0f3@varnish-cache.org> Message-ID: <053.552b2af8b8f9d50721e90fd38431f016@varnish-cache.org> #994: Assert error in http_GetHdr(), cache_http.c ---------------------+------------------------------------------------------ Reporter: pmialon | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: blocker Keywords: | ---------------------+------------------------------------------------------ Comment(by pmialon): We recompiled the package with debug info; here is a panic with correct references. Unfortunately we don't have the core dump. 
We succeeded in producing one, but it was incomplete. Perhaps we need to use less space on our storage to leave room for the core? Currently we dedicate 80% of the space to the varnish storage file. We have started varnishlog and will update this ticket with the contents of the Vary: header as soon as varnish crashes. {{{ /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -t 3600 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -w 100,4000 -p thread_pools 8 -p listen_depth 4096 -p thread_pool_add_delay 2 -p session_linger 50/100/150 -p sess_workspace 262144 -s file,/var/lib/varnish/varnish_storage.bin,80% }}} {{{ Last panic at: Tue, 30 Aug 2011 08:15:55 GMT Assert error in http_GetHdr(), cache_http.c line 266: Condition(l == strlen(hdr + 1)) not true. thread = (cache-worker) ident = Linux,2.6.32-5-amd64,x86_64,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x437201: pan_backtrace+19 0x4374d6: pan_ic+1ad 0x430d57: http_GetHdr+67 0x43e69a: VRY_Match+ac 0x42e37d: HSH_Lookup+657 0x41b302: cnt_lookup+230 0x41d11c: CNT_Session+66d 0x4391d8: wrk_do_cnt_sess+130 0x438a40: wrk_thread_real+897 0x438e3a: wrk_thread+12a sp = 0x7ec5e6102008 { fd = 56, id = 56, xid = 1001188130, client = 127.0.0.1 47062, step = STP_LOOKUP, handling = hash, restarts = 0, esi_level = 0 flags = bodystatus = 4 ws = 0x7ec5e6102080 { id = "sess", {s,f,r,e} = {0x7ec5e6102cc8,+3640,+262144,+262144}, }, http[req] = { ws = 0x7ec5e6102080[sess] "GET", 
"/searchkw/xml/?_q%5B0%5D=%28maison%7Bw%3D1%7D+a%7Bw%3D1%7D+vendre%7Bw%3D1%7D%29+-category%3Aall+%28Croix%7Bw%3D1%7D%29+country%3AFR+%28category%3Ahousing%29+querywords%3E%3D2+querywords%3C%3D7&_q%5B1%5D=%28maison%7Bw%3D1%7D+a%7Bw%3D1%7D%29+OPT%28vendre%29+-category%3Aall+OPT%28Croix%29+country%3AFR+%28category%3Ahousing%29+querywords%3E%3D2+querywords%3C%3D6&_q%5B2%5D=OPT%28maison+OR+a+OR+vendre%29+-category%3Aall+OPT%28Croix%29+country%3AFR+%28category%3Ahousing%29+querywords%3E%3D2+querywords%3C%3D6&_q%5B3%5D=OPT%28maison+OR+a+OR+vendre%29+-category%3Aall+country%3AUS+%28category%3Ahousing%29+querywords%3E%3D2&_vn%5B0%5D=defaultkw_new&_vn%5B1%5D=defaultkw_new&_vn%5B2%5D=defaultkw_new&_vn%5B3%5D=seo_keywords_round_new&_cc%5B0%5D=FR&_cc%5B1%5D=FR&_cc%5B2%5D=FR&_cc%5B3%5D=US&_comp=gzip&_fmt=JSON&_hashq%5B1%5D=1&_hashq%5B2%5D=1&_hashq%5B3%5D=1&_hstart%5B2%5D=1", "HTTP/1.1", "Connection: Close", "X-URL: /searchkw/xml/?_q%5B0%5D=%28maison%7Bw%3D1%7D+a%7Bw%3D1%7D+vendre%7Bw%3D1%7D%29+-category%3Aall+%28Croix%7Bw%3D1%7D%29+country%3AFR+%28category%3Ahousing%29+querywords%3E%3D2+querywords%3C%3D7&_q%5B1%5D=%28maison%7Bw%3D1%7D+a%7Bw%3D1%7D%29+OPT%28vendre%29+-category%3Aall+OPT%28Croix%29+country%3AFR+%28category%3Ahousing%29+querywords%3E%3D2+querywords%3C%3D6&_q%5B2%5D=OPT%28maison+OR+a+OR+vendre%29+-category%3Aall+OPT%28Croix%29+country%3AFR+%28category%3Ahousing%29+querywords%3E%3D2+querywords%3C%3D6&_q%5B3%5D=OPT%28maison+OR+a+OR+vendre%29+-category%3Aall+country%3AUS+%28category%3Ahousing%29+querywords%3E%3D2&_vn%5B0%5D=defaultkw_new&_vn%5B1%5D=defaultkw_new&_vn%5B2%5D=defaultkw_new&_vn%5B3%5D=seo_keywords_round_new&_cc%5B0%5D=FR&_cc%5B1%5D=FR&_cc%5B2%5D=FR&_cc%5B3%5D=US&_comp=gzip&_fmt=JSON&_hashq%5B1%5D=1&_hashq%5B2%5D=1&_hashq%5B3%5D=1&_hstart%5B2%5D=1&ttls=672", }, worker = 0x7ec7319e0b70 { ws = 0x7ec7319e0d18 { id = "wrk", {s,f,r,e} = {0x7ec7319ceac0,0x7ec7319ceac0,(nil),+65536}, }, }, vcl = { srcname = { "input", "Default", }, }, }, }}} -- Ticket URL: Varnish 
The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 30 08:31:48 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Aug 2011 08:31:48 -0000 Subject: [Varnish] #995: [PATCH] two minor bugs in vsm_open In-Reply-To: <044.6d2c86d698fdaed33fc78e542bbe0a8a@varnish-cache.org> References: <044.6d2c86d698fdaed33fc78e542bbe0a8a@varnish-cache.org> Message-ID: <053.5eb0c91c1933b077eefc2428e5a9d508@varnish-cache.org> #995: [PATCH] two minor bugs in vsm_open ----------------------+----------------------------------------------------- Reporter: drwilco | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In [ceda65ddc3d0622d5a7224399131d15cc6dc4a1b]) Clean up VSM setup if mmap fails Fixes #995 (the rest of it). -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 30 14:36:28 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Aug 2011 14:36:28 -0000 Subject: [Varnish] #994: Assert error in http_GetHdr(), cache_http.c In-Reply-To: <044.f0d406a02a1c326238466e3afe45c0f3@varnish-cache.org> References: <044.f0d406a02a1c326238466e3afe45c0f3@varnish-cache.org> Message-ID: <053.fd043881ed3238095b7a37c18e739184@varnish-cache.org> #994: Assert error in http_GetHdr(), cache_http.c ---------------------+------------------------------------------------------ Reporter: pmialon | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: blocker Keywords: | ---------------------+------------------------------------------------------ Comment(by pmialon): The panic message followed by the relevant varnishlog lines. 
{{{ Aug 30 12:56:12 cloud3 varnishd[19294]: Child (19295) Panic message: Assert error in http_GetHdr(), cache_http.c line 266: Condition(l == strlen(hdr + 1)) not true. errno = 32 (Broken pipe) thread = (cache-worker) ident = Linux,2.6.32-5-amd64,x86_64,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x437201: pan_backtrace+19 0x4374d6: pan_ic+1ad 0x430d57: http_GetHdr+67 0x43e69a: VRY_Match+ac 0x42e37d: HSH_Lookup+657 0x41b302: cnt_lookup+230 0x41d11c: CNT_Session+66d 0x4391d8: wrk_do_cnt_sess+130 0x438a40: wrk_thread_real+897 0x438e3a: wrk_thread+12a sp = 0x7ec5fea8a008 { fd = 4, id = 4, xid = 571229089, client = 127.0.0.1 27205, step = STP_LOOKUP, handling = hash, restarts = 0, esi_level = 0 flags = bodystatus = 4 ws = 0x7ec5fea8a080 { id = "sess", {s,f,r,e} = {0x7ec5fea8acc8,+3792,+262144,+262144}, }, http[req] = { ws = 0x7ec5fea8a080[sess] "GET", "/searchkw/xml/?_q%5B0%5D=%28norton%7Bw%3D1%7D+atlas%7Bw%3D1%7D+motorcycle%7Bw%3D1%7D+parts%7Bw%3D1%7D%29+-category%3Aall+country%3AUS+%28category%3Amotorbikes%29+querywords%3E%3D2+querywords%3C%3D6&_q%5B1%5D=%28norton%7Bw%3D1%7D+atlas%7Bw%3D1%7D%29+OPT%28motorcycle+OR+parts%29+-category%3Aall+country%3AUS+%28category%3Amotorbikes%29+querywords%3E%3D2+querywords%3C%3D5&_q%5B2%5D=OPT%28norton+OR+atlas+OR+motorcycle+OR+parts%29+-category%3Aall+country%3AUS+%28category%3Amotorbikes%29+querywords%3E%3D2+querywords%3C%3D5&_q%5B3%5D=OPT%28norton+OR+atlas+OR+motorcycle+OR+parts%29+-category%3Aall+country%3ACA+%28category%3Amotorbikes%29+querywords%3E%3D2&_vn%5B0%5D=defaultkw_new&_vn%5B1%5D=defaultkw_new&_vn%5B2%5D=defaultkw_new&_vn%5B3%5D=seo_keywords_round_new&_cc%5B0%5D=US&_cc%5B1%5D=US&_cc%5B2%5D=US&_cc%5B3%5D=CA&_comp=gzip&_fmt=JSON&_hashq%5B1%5D=1&_hashq%5B2%5D=1&_hashq%5B3%5D=1&_hstart%5B2%5D=1", "HTTP/1.1", "Connection: Close", "X-URL: 
/searchkw/xml/?_q%5B0%5D=%28norton%7Bw%3D1%7D+atlas%7Bw%3D1%7D+motorcycle%7Bw%3D1%7D+parts%7Bw%3D1%7D%29+-category%3Aall+country%3AUS+%28category%3Amotorbikes%29+querywords%3E%3D2+querywords%3C%3D6&_q%5B1%5D=%28norton%7Bw%3D1%7D+atlas%7Bw%3D1%7D%29+OPT%28motorcycle+OR+parts%29+-category%3Aall+country%3AUS+%28category%3Amotorbikes%29+querywords%3E%3D2+querywords%3C%3D5&_q%5B2%5D=OPT%28norton+OR+atlas+OR+motorcycle+OR+parts%29+-category%3Aall+country%3AUS+%28category%3Amotorbikes%29+querywords%3E%3D2+querywords%3C%3D5&_q%5B3%5D=OPT%28norton+OR+atlas+OR+motorcycle+OR+parts%29+-category%3Aall+country%3ACA+%28category%3Amotorbikes%29+querywords%3E%3D2&_vn%5B0%5D=defaultkw_new&_vn%5B1%5D=defaultkw_new&_vn%5B2%5D=defaultkw_new&_vn%5B3%5D=seo_keywords_round_new&_cc%5B0%5D=US&_cc%5B1%5D=US&_cc%5B2%5D=US&_cc%5B3%5D=CA&_comp=gzip&_fmt=JSON&_hashq%5B1%5D=1&_hashq%5B2%5D=1&_hashq%5B3%5D=1&_hstart%5B2%5D=1&ttls=672", }, worker = 0x7ec70f4fbb70 { ws = 0x7ec70f4fbd18 { id = "wrk", {s,f,r,e} = {0x7ec70f4e9ac0,0x7ec70f4e9ac0,(nil),+65536}, }, }, vcl = { srcname = { "input", "Default", }, }, }, }}} {{{ 4 SessionOpen c 192.168.131.59 33558 :80 4 ReqStart c 192.168.131.59 33558 571229087 4 RxRequest c POST 4 RxURL c /searchkw/xml/?_q%5B0%5D=%28norton%7Bw%3D1%7D+atlas%7Bw%3D1%7D+motorcycle%7Bw%3D1%7D+parts%7Bw%3D1%7D%29+-category%3Aall+country%3AUS+%28category%3Amotorbikes%29+querywords%3E%3D2+querywords%3C%3D6&_q%5B1%5D=%28norton%7Bw%3D1%7D+atlas%7Bw%3D1%7D%29+OPT%28 4 RxProtocol c HTTP/1.1 4 RxHeader c Host: cloud3 4 RxHeader c Accept: */* 4 RxHeader c Accept-Encoding: identity 4 RxHeader c Content-Length: 0 4 RxHeader c Content-Type: application/x-www-form-urlencoded 4 VCL_call c recv lookup 4 VCL_call c hash 4 Hash c /searchkw/xml/?_q%5B0%5D=%28norton%7Bw%3D1%7D+atlas%7Bw%3D1%7D+motorcycle%7Bw%3D1%7D+parts%7Bw%3D1%7D%29+-category%3Aall+country%3AUS+%28category%3Amotorbikes%29+querywords%3E%3D2+querywords%3C%3D6&_q%5B1%5D=%28norton%7Bw%3D1%7D+atlas%7Bw%3D1%7D%29+OPT%2 4 VCL_return c 
hash 4 VCL_call c miss error 4 VCL_call c error deliver 4 VCL_call c deliver deliver 4 TxProtocol c HTTP/1.1 4 TxStatus c 407 4 TxResponse c Delayed fetch 4 TxHeader c Server: Varnish 4 TxHeader c Content-Length: 0 4 TxHeader c Accept-Ranges: bytes 4 TxHeader c Date: Tue, 30 Aug 2011 10:56:11 GMT 4 TxHeader c Connection: close 4 Length c 0 4 ReqEnd c 571229087 1314701771.821617365 1314701771.822000980 0.000048399 0.000356197 0.000027418 4 SessionClose c error 4 StatSess c 192.168.131.59 33558 0 1 1 0 0 0 144 0 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 30 23:30:45 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Aug 2011 23:30:45 -0000 Subject: [Varnish] #998: Separation of Varnish and vmod builds Message-ID: <043.126822ebe98abf838ee4b489ec46bd96@varnish-cache.org> #998: Separation of Varnish and vmod builds --------------------+------------------------------------------------------- Reporter: anders | Owner: tfheen Type: task | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: --------------------+------------------------------------------------------- To make building of vmods painless in source-based package systems, it should be possible to build without relying on the Varnish source: - Varnish and vmods come with separate autoconf configure scripts, they are meant to be built separately? - To be able to build a vmod at convenience, the complete Varnish source must be installed? I had a brief discussion with Kristian/Tollef about this on the chat, and they said it should not be necessary to have the (compiled) source when building a vmod like libvmod-header. Should it need the source at all except what follows from a normal Varnish install? 
In order to make it possible to build and install libvmod-header without VARNISHSRC dir, I had to: - install vmod.py - install lots of additional header files: vct.h vmod_abi.h vrt.h vqueue.h vsb.h libvarnish.h miniobj.h vas.h vav.h http_headers.h vcl_returns.h (in include dir) cache.h heritage.h steps.h common.h acct_fields.h locks.h (in bin/varnishd dir). - remove VARNISHSRC checks in configure.ac for libvmod-header and replace with: +AC_CHECK_HEADERS([varnish/varnishapi.h], , AC_MSG_ERROR([Could not find varnish/varnishapi.h])) +AC_CHECK_PROGS(VARNISHTEST, varnishtest, [AC_MSG_ERROR([Could not find varnishtest binary])]) - adjust various paths for header files and vmod.py. The configure script for Varnish should either have an option to add vmods outside the Varnish tree, or vmods should be possible to build without VARNISHSRC. :-) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 08:20:44 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 08:20:44 -0000 Subject: [Varnish] #999: format keyword "handling" and the '>' in %s Message-ID: <040.1057bddb127c1f0c6e49cfad280c49bf@varnish-cache.org> #999: format keyword "handling" and the '>' in %s -----------------------------+---------------------------------------------- Reporter: Kai | Type: defect Status: new | Priority: normal Milestone: Varnish 3.0 dev | Component: varnishncsa Version: trunk | Severity: normal Keywords: | -----------------------------+---------------------------------------------- Hi, the format keyword for the handling is "Varnish:handling" not "handling". The example format string contains a '>' for "Status sent to the client" but only "%s" is valid. You find a patch attached to fix these little bugs. 
bye Kai -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 11:06:33 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 11:06:33 -0000 Subject: [Varnish] #1000: server.ip can't be used as a string? Message-ID: <041.ef67335559fd176ab5bfb80ff70d2468@varnish-cache.org> #1000: server.ip can't be used as a string? -------------------+-------------------------------------------------------- Reporter: xrow | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- wienadmin at slvieprodweb01:~$ sudo /etc/init.d/varnish restart Stopping HTTP accelerator: varnishd. Starting HTTP accelerator: varnishd failed! SMA.s0: max size 3072 MB. Message from VCC-compiler: Operator + not possible on type IP. ('input' Line 227 Pos 43) set resp.http.X-Cache = server.ip + ":HIT:" + obj.hits; ------------------------------------------#-------------------- Running VCC-compiler failed, exit 1 VCL compilation failed ------------- something like that was possible with 2.x -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 12:46:36 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 12:46:36 -0000 Subject: [Varnish] #1000: server.ip can't be used as a string? In-Reply-To: <041.ef67335559fd176ab5bfb80ff70d2468@varnish-cache.org> References: <041.ef67335559fd176ab5bfb80ff70d2468@varnish-cache.org> Message-ID: <050.a1f9eb031c377416ee326daa532cd600@varnish-cache.org> #1000: server.ip can't be used as a string? 
-------------------+-------------------------------------------------------- Reporter: xrow | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by scoof): You can work around this by doing: set resp.http.X-Cache = "" + server.ip + ":HIT:" + obj.hits; -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 13:29:05 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 13:29:05 -0000 Subject: [Varnish] #1001: Varnish 3.0.1 crashes in http_GetHdr Message-ID: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> #1001: Varnish 3.0.1 crashes in http_GetHdr ----------------------+----------------------------------------------------- Reporter: anders | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Keywords: ----------------------+----------------------------------------------------- After upgrading from 3.0.0 to 3.0.1 I get a lot of panics: varnish> panic.show 200 Last panic at: Wed, 31 Aug 2011 12:49:13 GMT Assert error in http_GetHdr(), cache_http.c line 266: Condition(l == strlen(hdr + 1)) not true. 
thread = (cache-worker) ident = FreeBSD,8.1-RELEASE-p1,amd64,-smalloc,-smalloc,-hcritbit,kqueue -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 13:29:38 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 13:29:38 -0000 Subject: [Varnish] #1001: Varnish 3.0.1 crashes in http_GetHdr In-Reply-To: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> References: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> Message-ID: <052.1b974d35e85020cb5d373f73f61cc11e@varnish-cache.org> #1001: Varnish 3.0.1 crashes in http_GetHdr ----------------------+----------------------------------------------------- Reporter: anders | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.0 Severity: critical | Keywords: ----------------------+----------------------------------------------------- Changes (by anders): * priority: normal => high * severity: normal => critical -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 13:29:49 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 13:29:49 -0000 Subject: [Varnish] #1001: Varnish 3.0.1 crashes in http_GetHdr In-Reply-To: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> References: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> Message-ID: <052.fc030a1a9351e370fa027cc0c2e5b461@varnish-cache.org> #1001: Varnish 3.0.1 crashes in http_GetHdr ----------------------+----------------------------------------------------- Reporter: anders | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.1 Severity: critical | Keywords: ----------------------+----------------------------------------------------- Changes (by anders): * version: 3.0.0 => 3.0.1 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 13:31:24 2011 From: 
varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 13:31:24 -0000 Subject: [Varnish] #1001: Varnish 3.0.1 crashes in http_GetHdr In-Reply-To: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> References: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> Message-ID: <052.211046262311256b311d173572277322@varnish-cache.org> #1001: Varnish 3.0.1 crashes in http_GetHdr ----------------------+----------------------------------------------------- Reporter: anders | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.1 Severity: critical | Keywords: ----------------------+----------------------------------------------------- Comment(by anders): Oh. It should be said. I use libvmod-header. Not sure if that has anything to do with it. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 13:55:10 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 13:55:10 -0000 Subject: [Varnish] #1001: Varnish 3.0.1 crashes in http_GetHdr In-Reply-To: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> References: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> Message-ID: <052.7c8dbdf1eb90263d75082495ffc5b2b7@varnish-cache.org> #1001: Varnish 3.0.1 crashes in http_GetHdr ----------------------+----------------------------------------------------- Reporter: anders | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.1 Severity: critical | Keywords: ----------------------+----------------------------------------------------- Comment(by anders): Backtrace: {{{ (gdb) bt full #0 0x00000008008c9e7f in getframeaddr (level=Variable "level" is not available. ) at execinfo.c:273 No locals. #1 0x00000008008c9ebc in backtrace (buffer=Variable "buffer" is not available. 
) at execinfo.c:53 i = 2 #2 0x000000000042d849 in pan_ic (func=0x455df0 "http_GetHdr", file=0x4557f0 "cache_http.c", line=266, cond=0x455900 "l == strlen(hdr + 1)", err=Variable "err" is not available. ) at cache_panic.c:283 q = Variable "q" is not available. (gdb) frame 2 #2 0x000000000042d849 in pan_ic (func=0x455df0 "http_GetHdr", file=0x4557f0 "cache_http.c", line=266, cond=0x455900 "l == strlen(hdr + 1)", err=Variable "err" is not available. ) at cache_panic.c:283 283 size = backtrace (array, 10); }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 14:07:22 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 14:07:22 -0000 Subject: [Varnish] #1001: Varnish 3.0.1 crashes in http_GetHdr In-Reply-To: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> References: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> Message-ID: <052.dcd0ecd5b9996c4e0f4dbb2f2e0aca0a@varnish-cache.org> #1001: Varnish 3.0.1 crashes in http_GetHdr ----------------------+----------------------------------------------------- Reporter: anders | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.1 Severity: critical | Keywords: ----------------------+----------------------------------------------------- Comment(by anders): Attaching gdb to the child process I get: {{{ [New Thread 80120ac80 (LWP 100086)] [New Thread 80120ae40 (LWP 100085)] [New Thread 80130d1c0 (LWP 100084)] [New Thread 8012041c0 (LWP 100948)] Loaded symbols for /lib/libthr.so.3 Reading symbols from /lib/libc.so.7...done. Loaded symbols for /lib/libc.so.7 Error while reading shared library symbols: ./vcl.38pJGBBN.so: No such file or directory. Reading symbols from /usr/local/lib/varnish/vmods/libvmod_header.so...done. Loaded symbols for /usr/local/lib/varnish/vmods/libvmod_header.so Reading symbols from /libexec/ld-elf.so.1...done. 
Loaded symbols for /libexec/ld-elf.so.1 [Switching to Thread 806669d40 (LWP 101919)] 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 (gdb) cont Continuing. [Thread 806669d40 (LWP 101919) exited] [New Thread 806669d40 (LWP 101919)] [New Thread 806669b80 (LWP 101919)] [New Thread 8066699c0 (LWP 101922)] [New Thread 806669800 (LWP 101929)] [New Thread 806669640 (LWP 101931)] [New Thread 806669480 (LWP 101933)] [New Thread 8066692c0 (LWP 101935)] [New Thread 806669100 (LWP 101937)] [New Thread 806668f40 (LWP 101938)] Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 806612d40 (LWP 100480)] 0x00000008008c9e7f in getframeaddr (level=Variable "level" is not available. ) at execinfo.c:273 273 case 2: return __builtin_frame_address(3); (gdb) bt #0 0x00000008008c9e7f in getframeaddr (level=Variable "level" is not available. ) at execinfo.c:273 #1 0x00000008008c9ebc in backtrace (buffer=Variable "buffer" is not available. ) at execinfo.c:53 #2 0x000000000042d849 in pan_ic (func=0x455df0 "http_GetHdr", file=0x4557f0 "cache_http.c", line=266, cond=0x455900 "l == strlen(hdr + 1)", err=Variable "err" is not available. ) at cache_panic.c:283 #3 0x0000000000429afc in http_GetHdr (hp=0x80a6373e8, hdr=0x80a34deca ".91.37.197", ptr=0x7ffffb7c94c8) at cache_http.c:266 #4 0x000000000043339f in VRY_Match (sp=0x80a637008, vary=0x80a34dec8 "80.91.37.197") at cache_vary.c:192 #5 0x000000000042630b in HSH_Lookup (sp=0x80a637008, poh=0x7ffffb7c9580) at cache_hash.c:358 #6 0x0000000000414592 in cnt_lookup (sp=0x80a637008) at cache_center.c:1088 #7 0x0000000000417aaf in CNT_Session (sp=0x80a637008) at steps.h:38 #8 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffffb7dcc80, priv=Variable "priv" is not available. ) at cache_pool.c:301 #9 0x000000000042f3ba in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:184 #10 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #11 0x0000000000000000 in ?? 
() Error accessing memory address 0x7ffffb7dd000: Bad address. }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 14:12:39 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 14:12:39 -0000 Subject: [Varnish] #1001: Varnish 3.0.1 crashes in http_GetHdr In-Reply-To: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> References: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> Message-ID: <052.d7fcbeba9d0532d7755d428e39e9a7ec@varnish-cache.org> #1001: Varnish 3.0.1 crashes in http_GetHdr ----------------------+----------------------------------------------------- Reporter: anders | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.1 Severity: critical | Keywords: ----------------------+----------------------------------------------------- Comment(by phk): This is a duplicate of #994 Can you find out how the Vary string that trips this up looks ? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 14:45:56 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 14:45:56 -0000 Subject: [Varnish] #1002: [PATCH] VCL doesn't like REAL comparisons. Message-ID: <044.d9248eb092b0ffb3307c10cc2a94ae4b@varnish-cache.org> #1002: [PATCH] VCL doesn't like REAL comparisons. ---------------------+------------------------------------------------------ Reporter: drwilco | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | ---------------------+------------------------------------------------------ {{{ import std; ... if (std.random(0,5) < 1.0) { ... 
}}} Results in: {{{ Message from VCC-compiler: Operator < not possible on REAL }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 15:18:10 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 15:18:10 -0000 Subject: [Varnish] #1002: [PATCH] VCL doesn't like REAL comparisons. In-Reply-To: <044.d9248eb092b0ffb3307c10cc2a94ae4b@varnish-cache.org> References: <044.d9248eb092b0ffb3307c10cc2a94ae4b@varnish-cache.org> Message-ID: <053.355a2ed22de225f3a9e8e48734c201c4@varnish-cache.org> #1002: [PATCH] VCL doesn't like REAL comparisons. ----------------------+----------------------------------------------------- Reporter: drwilco | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [863c22b821fae10287e110adb4aa6ba2f86494f6]) Allow relational comparisons on REAL type. Fixes #1002 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 15:20:38 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 15:20:38 -0000 Subject: [Varnish] #1000: server.ip can't be used as a string? In-Reply-To: <041.ef67335559fd176ab5bfb80ff70d2468@varnish-cache.org> References: <041.ef67335559fd176ab5bfb80ff70d2468@varnish-cache.org> Message-ID: <050.e7db4774cfd4c5a790c96dd0420ac442@varnish-cache.org> #1000: server.ip can't be used as a string?
-------------------------+-------------------------------------------------- Reporter: xrow | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Resolution: worksforme | Keywords: -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Old description: > wienadmin at slvieprodweb01:~$ sudo /etc/init.d/varnish restart > Stopping HTTP accelerator: varnishd. > Starting HTTP accelerator: varnishd failed! > SMA.s0: max size 3072 MB. > Message from VCC-compiler: > Operator + not possible on type IP. > ('input' Line 227 Pos 43) > set resp.http.X-Cache = server.ip + ":HIT:" + obj.hits; > ------------------------------------------#-------------------- > > Running VCC-compiler failed, exit 1 > > VCL compilation failed > > ------------- > something like that was possible with 2.x New description: {{{ wienadmin at slvieprodweb01:~$ sudo /etc/init.d/varnish restart Stopping HTTP accelerator: varnishd. Starting HTTP accelerator: varnishd failed! SMA.s0: max size 3072 MB. Message from VCC-compiler: Operator + not possible on type IP. ('input' Line 227 Pos 43) set resp.http.X-Cache = server.ip + ":HIT:" + obj.hits; ------------------------------------------#-------------------- Running VCC-compiler failed, exit 1 VCL compilation failed }}} ------------- something like that was possible with 2.x -- Comment: This is because the VCC compiler builds expressions left to right, so seeing server.ip it has type "IP", then you try to add a string to that and it barfs. The workaround (see above) starts out with STRING, then adds IP, but since everything, including IP, can be converted to string, that works out fine.
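The left-to-right explanation implies what such a workaround would look like: make the first operand of the expression a STRING so the IP gets converted instead of typing the whole expression. A minimal sketch (the exact workaround phk refers to is not preserved in this archive; the header name is just the reporter's example):

```vcl
# Works: the expression starts with a STRING literal, so server.ip
# and obj.hits are converted to strings as they are concatenated.
set resp.http.X-Cache = "" + server.ip + ":HIT:" + obj.hits;

# Fails in 3.0: the expression starts out with type IP, giving
# "Operator + not possible on type IP".
# set resp.http.X-Cache = server.ip + ":HIT:" + obj.hits;
```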
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 19:33:11 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 19:33:11 -0000 Subject: [Varnish] #1001: Varnish 3.0.1 crashes in http_GetHdr In-Reply-To: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> References: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> Message-ID: <052.37c4e12f5dc4fa534674de828d58db6a@varnish-cache.org> #1001: Varnish 3.0.1 crashes in http_GetHdr ----------------------+----------------------------------------------------- Reporter: anders | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.1 Severity: critical | Keywords: ----------------------+----------------------------------------------------- Comment(by anders): When this happens, the last lines outputted by varnishlog -O was: {{{ 395 RxHeader c Accept-Encoding: gzip, deflate 395 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 395 RxHeader c Connection: keep-alive 395 RxHeader c Referer: http://www.aftenbladet.no/ 395 RxHeader c Cookie: rsi_segs=; xtvrn=$398888$ 395 RxHeader c If-Modified-Since: Wed, 31 Aug 2011 18:07:49 GMT 395 RxHeader c Cache-Control: max-age=0 395 VCL_call c recv 395 VCL_acl c NO_MATCH nocacheclients 395 VCL_return c lookup 395 VCL_call c hash 395 Hash c /incoming/article2857960.ece/ALTERNATES/w680c169/halltoll.jpg 395 Hash c www.aftenbladet.no 395 VCL_return c hash 395 VCL_call c miss 395 VCL_return c fetch 398 BackendOpen b sa 80.91.37.197 64009 80.91.41.74 80 395 Backend c 398 sa sa 398 TxRequest b GET 398 TxURL b /incoming/article2857960.ece/ALTERNATES/w680c169/halltoll.jpg 398 TxProtocol b HTTP/1.1 398 TxHeader b Host: mediahash2.aftenbladet.no 398 TxHeader b User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20100101 Firefox/6.0 398 TxHeader b Accept: image/png,image/*;q=0.8,*/*;q=0.5 398 TxHeader b Accept-Language: 
nb-no,nb;q=0.9,no-no;q=0.8,no;q=0.6 ,nn-no;q=0.5,nn;q=0.4,en-us;q=0.3,en;q=0.1 398 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 398 TxHeader b Referer: http://www.aftenbladet.no/ 398 TxHeader b Cookie: rsi_segs=; xtvrn=$398888$ 398 TxHeader b X-Varnish: 534089652 398 TxHeader b Accept-Encoding: gzip }}} The last request from Varnish as seen from ngrep: {{{ T 80.91.37.197:64009 -> 80.91.41.74:80 [AP] GET /incoming/article2857960.ece/ALTERNATES/w680c169/halltoll.jpg HTTP/1.1. Host: mediahash2.aftenbladet.no. User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20100101 Firefox/6 .0. Accept: image/png,image/*;q=0.8,*/*;q=0.5. Accept-Language: nb-no,nb;q=0.9,no-no;q=0.8,no;q=0.6,nn-no;q=0.5,nn;q=0.4 ,en-us; q=0.3,en;q=0.1. Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7. Referer: http://www.aftenbladet.no/. Cookie: rsi_segs=; xtvrn=$398888$. X-Varnish: 534089652. Accept-Encoding: gzip. }}} And the response: {{{ T 80.91.41.74:80 -> 80.91.37.197:64009 [A] HTTP/1.1 200 OK. Date: Wed, 31 Aug 2011 19:12:11 GMT. Server: Apache-Coyote/1.1. Last-Modified: Wed, 31 Aug 2011 18:10:15 GMT. Cache-Control: public, max-age=31536000. Content-Type: image/jpeg. Content-Length: 38891. Set-Cookie: JSESSIONID=E5FBDEB1DA51A5C3431A9520E31385D6; Path=/sa. X-Backend: saweb3. Connection: close. . ......JFIF.............C. (picture data..) }}} A gdb trace (gdb attached to process) with this particular crash: {{{ Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 806640b80 (LWP 101300)] 0x00000008008c9e7f in getframeaddr (level=Variable "level" is not available. ) at execinfo.c:273 273 case 2: return __builtin_frame_address(3); (gdb) bt #0 0x00000008008c9e7f in getframeaddr (level=Variable "level" is not available. ) at execinfo.c:273 #1 0x00000008008c9ebc in backtrace (buffer=Variable "buffer" is not available. 
) at execinfo.c:53 #2 0x000000000042d849 in pan_ic (func=0x455df0 "http_GetHdr", file=0x4557f0 "cache_http.c", line=266, cond=0x455900 "l == strlen(hdr + 1)", err=Variable "err" is not available. ) at cache_panic.c:283 #3 0x0000000000429afc in http_GetHdr (hp=0x80907b3e8, hdr=0x807060e42 ".91.37.197", ptr=0x7ffff3b8b4c8) at cache_http.c:266 #4 0x000000000043339f in VRY_Match (sp=0x80907b008, vary=0x807060e40 "80.91.37.197") at cache_vary.c:192 #5 0x000000000042630b in HSH_Lookup (sp=0x80907b008, poh=0x7ffff3b8b580) at cache_hash.c:358 #6 0x0000000000414592 in cnt_lookup (sp=0x80907b008) at cache_center.c:1088 #7 0x0000000000417aaf in CNT_Session (sp=0x80907b008) at steps.h:38 #8 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff3b9ec80, priv=Variable "priv" is not available. ) at cache_pool.c:301 #9 0x000000000042f3ba in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:184 #10 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #11 0x0000000000000000 in ?? () Error accessing memory address 0x7ffff3b9f000: Bad address. 
(gdb) frame 4 #4 0x000000000043339f in VRY_Match (sp=0x80907b008, vary=0x807060e40 "80.91.37.197") at cache_vary.c:192 192 i = http_GetHdr(sp->http, (const char*)(vary+2), &h); (gdb) print sp->http $1 = (struct http *) 0x80907b3e8 (gdb) print *sp->http $2 = {magic = 1680389577, logtag = HTTP_Rx, ws = 0x80907b080, hd = 0x80907b410, hdf = 0x80907b810 "", shd = 64, nhd = 14, status = 0, protover = 11 '\v', conds = 0 '\0'} (gdb) print vary $3 = (const uint8_t *) 0x807060e40 "80.91.37.197" (gdb) print *vary $4 = 56 '8' (gdb) print *sp $5 = {magic = 741317722, fd = 384, id = 384, xid = 534089653, restarts = 0, esi_level = 0, disable_esi = 0, hash_ignore_busy = 0 '\0', hash_always_miss = 0 '\0', wrk = 0x7ffff3b9ec80, sockaddrlen = 16, mysockaddrlen = 128, sockaddr = 0x80907b2e8, mysockaddr = 0x80907b368, mylsock = 0x8012224f0, addr = 0x80907bcb8 "80.202.232.18", port = 0x80907bcc8 "3166", client_identity = 0x0, doclose = 0x0, http = 0x80907b3e8, http0 = 0x80907b850, ws = {{magic = 905626964, overflow = 0, id = 0x453366 "sess", s = 0x80907bcb8 "80.202.232.18", f = 0x80907bfa8 "", r = 0x80908bcb8 "", e = 0x80908bcb8 ""}}, ws_ses = 0x80907bcd0 "GET", ws_req = 0x80907bf78 "Accept-Encoding: gzip", digest = "A?f\201Lw?-p\031/?hq*?\032\210l\202\213^?*Vp\217?xJqO", vary_b = 0x80907bfa8 "", vary_l = 0x0, vary_e = 0x80908bcb8 "", htc = {{ magic = 1041886673, fd = 384, maxbytes = 32768, maxhdr = 4096, ws = 0x80907b080, rxbuf = {b = 0x80907bcd0 "GET", e = 0x80907bf75 ""}, pipeline = {b = 0x0, e = 0x0}}}, t_open = 1314817931.3116455, t_req = 1314817931.3199937, t_resp = nan(0x8000000000000), t_end = 1314817931.3116455, exp = {ttl = -1, grace = 21600, keep = -1, age = 0, entered = 0}, step = STP_LOOKUP, cur_method = 0, handling = 3, sendbody = 0 '\0', wantbody = 1 '\001', err_code = 0, err_reason = 0x0, list = { vtqe_next = 0x0, vtqe_prev = 0x0}, director = 0x801213978, vbc = 0x0, obj = 0x0, objcore = 0x0, vcl = 0x80139ee28, hash_objhead = 0x0, mem = 0x80907b000, workreq = {list = 
{vtqe_next = 0x0, vtqe_prev = 0x0}, func = 0x430160 , priv = 0x80907b008}, acct_tmp = {first = 0, sess = 1, req = 1, pipe = 0, pass = 0, fetch = 0, hdrbytes = 0, bodybytes = 0}, acct_req = {first = 0, sess = 0, req = 0, pipe = 0, pass = 0, fetch = 0, hdrbytes = 0, bodybytes = 0}, acct_ses = {first = 1314817931.3116455, sess = 0, req = 0, pipe = 0, pass = 0, fetch = 0, hdrbytes = 0, bodybytes = 0}} (gdb) print h $8 = 0x42d295 "?E\020\001" (gdb) print *h $9 = -57 '?' }}} So it doesn't seem to be a Vary header? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 19:37:59 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 19:37:59 -0000 Subject: [Varnish] #1003: Fix libedit (libreadline) support for FreeBSD Message-ID: <043.f95607e90f4cdd104e1a516653ef63ba@varnish-cache.org> #1003: Fix libedit (libreadline) support for FreeBSD -------------------------+-------------------------------------------------- Reporter: anders | Owner: tfheen Type: enhancement | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.1 Severity: normal | Keywords: -------------------------+-------------------------------------------------- I am using this patch to fix libedit support for FreeBSD, which has these libraries installed in the base system and not as a package. It makes sense to use the readline library I think. I have to run all the auto tools to generate a new configure script because of the configure.ac patch. Would prefer not to. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 20:34:49 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 20:34:49 -0000 Subject: [Varnish] #1004: Removing auto-start of service under deb install Message-ID: <042.497d2a98979a31b609781e323b7eca3b@varnish-cache.org> #1004: Removing auto-start of service under deb install ------------------------------------------+--------------------------------- Reporter: eitch | Type: enhancement Status: new | Priority: normal Milestone: | Component: packaging Version: 3.0.0 | Severity: normal Keywords: package, service, auto-start | ------------------------------------------+--------------------------------- Before release 3.0, Debian/Ubuntu packages would include a "START=no" line, so the service would not be started right after installation with the default configuration. This changed in release 3.0, and it broke some of my automated configuration and cloud deployment. IMHO it's not useful to start varnish with the default configuration :-) Is this the desired behavior?
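The "START=no" behaviour the reporter asks for amounts to a guard in the init script. A minimal sketch of that pre-3.0 pattern (the variable name comes from the ticket; everything else here is assumed, not the actual Debian packaging code):

```shell
# /etc/default/varnish used to ship START=no, and the init script
# checked it before launching varnishd. Sketch of that guard:
START=no   # normally sourced from /etc/default/varnish

should_start() {
    [ "$START" = "yes" ]
}

if should_start; then
    echo "starting varnishd"
else
    echo "varnishd disabled: set START=yes in /etc/default/varnish"
fi
```

With the shipped default of START=no, installation leaves the service down until the administrator edits the defaults file.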
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 20:59:57 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 20:59:57 -0000 Subject: [Varnish] #1004: Removing auto-start of service under deb install In-Reply-To: <042.497d2a98979a31b609781e323b7eca3b@varnish-cache.org> References: <042.497d2a98979a31b609781e323b7eca3b@varnish-cache.org> Message-ID: <051.7f3fc5ed78c19426ec9b313105e0a698@varnish-cache.org> #1004: Removing auto-start of service under deb install ------------------------------------------+--------------------------------- Reporter: eitch | Type: enhancement Status: new | Priority: normal Milestone: | Component: packaging Version: 3.0.0 | Severity: normal Keywords: package, service, auto-start | ------------------------------------------+--------------------------------- Comment(by eitch): Looking at the alternatives for not starting the service at package install, I came across this post (http://ubuntuforums.org/showthread.php?t=856815). A workaround is to use the "/usr/sbin/policy-rc.d" script to restrict the initialization. More info about this policy script: http://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt I use chef to deploy varnish servers automatically, so creating a "stub" policy-rc.d with exit status 101 (action forbidden, so the service is not started) ''before'' the package install, then removing it ''after'', is an ugly workaround =) Looking forward to having "START=no" again.
:-) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 31 14:05:16 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 31 Aug 2011 14:05:16 -0000 Subject: [Varnish] #1001: Varnish 3.0.1 crashes in http_GetHdr In-Reply-To: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> References: <043.27ab9d55feea7248ae5fbc7018024693@varnish-cache.org> Message-ID: <052.7eb1d2a638c4fde1a27d447f58211778@varnish-cache.org> #1001: Varnish 3.0.1 crashes in http_GetHdr ----------------------+----------------------------------------------------- Reporter: anders | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.1 Severity: critical | Keywords: ----------------------+----------------------------------------------------- Comment(by anders): Thread dump from gdb, Kristian asked for it: {{{ (gdb) thread apply all bt Thread 158 (Thread 8012041c0 (LWP 101072)): #0 0x00000008010107bc in poll () from /lib/libc.so.7 #1 0x0000000800e6385e in poll () from /lib/libthr.so.3 #2 0x00000008006b1dc3 in VCLS_Poll (cs=0x801213240, timeout=-1) at cli_serve.c:519 #3 0x0000000000418d11 in CLI_Run () at cache_cli.c:113 #4 0x000000000042c528 in child_main () at cache_main.c:138 #5 0x000000000043fa92 in start_child (cli=0x0) at mgt_child.c:345 #6 0x00000000004403f7 in mgt_sigchld (e=Variable "e" is not available. ) at mgt_child.c:524 #7 0x00000008006b4b0b in vev_sched_signal (evb=0x8012202c0) at vev.c:435 #8 0x00000008006b51c8 in vev_schedule (evb=0x8012202c0) at vev.c:363 #9 0x000000000043fc7d in MGT_Run () at mgt_child.c:602 #10 0x000000000044f38a in main (argc=37, argv=0x7fffffffe8e8) at varnishd.c:650 Thread 157 (Thread 80130d1c0 (LWP 100084)): #0 0x000000080104a6cc in nanosleep () from /lib/libc.so.7 #1 0x0000000800e63965 in nanosleep () from /lib/libthr.so.3 #2 0x00000008006b3898 in TIM_sleep (t=Variable "t" is not available. 
) at time.c:166 #3 0x000000000042fce3 in wrk_herdtimer_thread (priv=Variable "priv" is not available. ) at cache_pool.c:461 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7fffffbff000 Thread 156 (Thread 80120ae40 (LWP 100085)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x56f1d8, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042ec4b in wrk_herder_thread (priv=Variable "priv" is not available. ) at cache_pool.c:541 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7fffff9fe000 Thread 155 (Thread 80120ac80 (LWP 100086)): #0 0x000000080104a6cc in nanosleep () from /lib/libc.so.7 #1 0x0000000800e63965 in nanosleep () from /lib/libthr.so.3 #2 0x00000008006b3898 in TIM_sleep (t=Variable "t" is not available. ) at time.c:166 #3 0x0000000000421298 in exp_timer (sp=0x806616008, priv=Variable "priv" is not available. ) at cache_expire.c:350 #4 0x000000000042fdd8 in wrk_bgthread (arg=Variable "arg" is not available. ) at cache_pool.c:579 #5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #6 0x0000000000000000 in ?? () Cannot access memory at address 0x7fffff7fd000 Thread 154 (Thread 80120aac0 (LWP 100087)): #0 0x000000080104a6cc in nanosleep () from /lib/libc.so.7 #1 0x0000000800e63965 in nanosleep () from /lib/libthr.so.3 #2 0x00000008006b3898 in TIM_sleep (t=Variable "t" is not available. ) at time.c:166 #3 0x000000000043e860 in hcb_cleaner (priv=Variable "priv" is not available. ) at hash_critbit.c:371 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7fffff5fc000 ---Type to continue, or q to quit--- Thread 153 (Thread 806614c80 (LWP 100090)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7fffff3fad78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7fffff3fb000 Thread 152 (Thread 80120a900 (LWP 100091)): #0 0x000000080104a6cc in nanosleep () from /lib/libc.so.7 #1 0x0000000800e63965 in nanosleep () from /lib/libthr.so.3 #2 0x00000008006b3898 in TIM_sleep (t=Variable "t" is not available. ) at time.c:166 #3 0x0000000000413c40 in ban_lurker (sp=0x806629008, priv=Variable "priv" is not available. ) at cache_ban.c:825 #4 0x000000000042fdd8 in wrk_bgthread (arg=Variable "arg" is not available. ) at cache_pool.c:579 #5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #6 0x0000000000000000 in ?? () Cannot access memory at address 0x7fffff1fa000 Thread 151 (Thread 80120a3c0 (LWP 100095)): #0 0x000000080104b6fc in kevent () from /lib/libc.so.7 #1 0x000000000040ccfc in vca_kqueue_main (arg=Variable "arg" is not available. ) at cache_waiter_kqueue.c:168 #2 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #3 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffffeff9000 Thread 150 (Thread 80120a200 (LWP 100097)): #0 0x00000008010107bc in poll () from /lib/libc.so.7 #1 0x0000000800e6385e in poll () from /lib/libthr.so.3 #2 0x000000000040be5a in vca_acct (arg=Variable "arg" is not available. ) at cache_acceptor.c:271 #3 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #4 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7ffffedf8000 Thread 149 (Thread 806614ac0 (LWP 100105)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffebf6d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffffebf7000 Thread 148 (Thread 806614900 (LWP 100106)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffe9f5d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 ---Type to continue, or q to quit--- #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffffe9f6000 Thread 147 (Thread 806614740 (LWP 100113)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffe7f4d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7ffffe7f5000 Thread 146 (Thread 806614580 (LWP 100133)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffe5f3d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffffe5f4000 Thread 145 (Thread 8066143c0 (LWP 100134)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffe3f2d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffffe3f3000 Thread 144 (Thread 806614200 (LWP 100135)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffe1f1d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? 
()
Cannot access memory at address 0x7ffffe1f2000
Thread 143 (Thread 806614040 (LWP 100150)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffdff0d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffdff1000
Thread 142 (Thread 806613e80 (LWP 100170)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffddefd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffddf0000
Thread 141 (Thread 806613cc0 (LWP 100177)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffdbeed78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffdbef000
Thread 140 (Thread 806613b00 (LWP 100205)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffd9edd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffd9ee000
Thread 139 (Thread 806613940 (LWP 100216)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x808a7b008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffffd7ecc80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffd7ed000
Thread 138 (Thread 806613780 (LWP 100219)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffd5ebd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffd5ec000
Thread 137 (Thread 8066135c0 (LWP 100225)):
#0 0x00000008008c9e7f in getframeaddr (level=Variable "level" is not available.) at execinfo.c:273
#1 0x00000008008c9ebc in backtrace (buffer=Variable "buffer" is not available.) at execinfo.c:53
#2 0x000000000042d849 in pan_ic (func=0x455df0 "http_GetHdr", file=0x4557f0 "cache_http.c", line=266, cond=0x455900 "l == strlen(hdr + 1)", err=Variable "err" is not available.) at cache_panic.c:283
#3 0x0000000000429afc in http_GetHdr (hp=0x8098243e8, hdr=0x806e37eaa ".91.37.197", ptr=0x7ffffd3d74c8) at cache_http.c:266
#4 0x000000000043339f in VRY_Match (sp=0x809824008, vary=0x806e37ea8 "80.91.37.197") at cache_vary.c:192
#5 0x000000000042630b in HSH_Lookup (sp=0x809824008, poh=0x7ffffd3d7580) at cache_hash.c:358
#6 0x0000000000414592 in cnt_lookup (sp=0x809824008) at cache_center.c:1088
#7 0x0000000000417aaf in CNT_Session (sp=0x809824008) at steps.h:38
#8 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffffd3eac80, priv=Variable "priv" is not available.) at cache_pool.c:301
#9 0x000000000042f3ba in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#10 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#11 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffd3eb000
Thread 136 (Thread 806613400 (LWP 100227)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffd1e9d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffd1ea000
Thread 135 (Thread 806613240 (LWP 100231)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffcfe8d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffcfe9000
Thread 134 (Thread 806613080 (LWP 100250)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffcde7d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffcde8000
Thread 133 (Thread 806612ec0 (LWP 100251)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffcbe6d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffcbe7000
Thread 132 (Thread 806612d00 (LWP 100260)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffc9e5d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffc9e6000
Thread 131 (Thread 806612b40 (LWP 100299)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffc7e4d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffc7e5000
Thread 130 (Thread 806612980 (LWP 100336)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffc5e3d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffc5e4000
Thread 129 (Thread 8066127c0 (LWP 100365)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x80968c008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffffc3e2c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffc3e3000
Thread 128 (Thread 806612600 (LWP 100375)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x807c7b008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffffc1e1c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffc1e2000
Thread 127 (Thread 806612440 (LWP 100387)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x807dbe008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffffbfe0c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffbfe1000
Thread 126 (Thread 806612280 (LWP 100433)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x8083e1008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffffbddfc80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffbde0000
Thread 125 (Thread 8066120c0 (LWP 100435)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x8073ad008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffffbbdec80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffbbdf000
Thread 124 (Thread 806611f00 (LWP 100455)):
#0 0x000000080104dfac in writev () from /lib/libc.so.7
#1 0x0000000800e6308e in writev () from /lib/libthr.so.3
#2 0x000000000043b944 in WRW_Flush (w=0x7ffffb9ddc80) at cache_wrw.c:125
#3 0x0000000000430981 in res_WriteGunzipObj (sp=0x807259008) at cache_response.c:189
#4 0x0000000000430f05 in RES_WriteObj (sp=0x807259008) at cache_response.c:313
#5 0x0000000000415ba0 in cnt_deliver (sp=0x807259008) at cache_center.c:279
#6 0x00000000004179ac in CNT_Session (sp=0x807259008) at steps.h:45
#7 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffffb9ddc80, priv=Variable "priv" is not available.) at cache_pool.c:301
#8 0x000000000042f3ba in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#9 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#10 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffb9de000
Thread 123 (Thread 806611d40 (LWP 100472)):
#0 0x000000080104dfac in writev () from /lib/libc.so.7
#1 0x0000000800e6308e in writev () from /lib/libthr.so.3
#2 0x000000000043b944 in WRW_Flush (w=0x7ffffb7dcc80) at cache_wrw.c:125
#3 0x000000000043c0a6 in WRW_FlushRelease (w=0x7ffffb7dcc80) at cache_wrw.c:148
#4 0x0000000000430ae8 in RES_WriteObj (sp=0x809015008) at cache_response.c:322
#5 0x0000000000415ba0 in cnt_deliver (sp=0x809015008) at cache_center.c:279
#6 0x00000000004179ac in CNT_Session (sp=0x809015008) at steps.h:45
#7 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffffb7dcc80, priv=Variable "priv" is not available.) at cache_pool.c:301
#8 0x000000000042f3ba in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#9 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#10 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffb7dd000
Thread 122 (Thread 806611b80 (LWP 100480)):
#0 0x000000080106550a in read () from /lib/libc.so.7
#1 0x0000000800e63760 in read () from /lib/libthr.so.3
#2 0x000000000042bf88 in HTC_Rx (htc=0x7ffffb5dbdc8) at cache_httpconn.c:170
#3 0x0000000000422ad9 in FetchHdr (sp=0x808b7a008) at cache_fetch.c:445
#4 0x00000000004167cc in cnt_fetch (sp=0x808b7a008) at cache_center.c:546
#5 0x0000000000417a40 in CNT_Session (sp=0x808b7a008) at steps.h:41
#6 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffffb5dbc80, priv=Variable "priv" is not available.) at cache_pool.c:301
#7 0x000000000042f3ba in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#8 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#9 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffb5dc000
Thread 121 (Thread 8066119c0 (LWP 100501)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x8096e1008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffffb3dac80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x8012134c0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffb3db000
Thread 120 (Thread 806611800 (LWP 100544)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffb1d9d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffb1da000
Thread 119 (Thread 806611640 (LWP 100547)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffafd8d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffafd9000
Thread 118 (Thread 806611480 (LWP 100555)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffadd7d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffadd8000
Thread 117 (Thread 8066112c0 (LWP 100557)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffabd6d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffabd7000
Thread 116 (Thread 806611100 (LWP 100561)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffa9d5d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffa9d6000
Thread 115 (Thread 806610f40 (LWP 100562)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffa7d4d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffa7d5000
Thread 114 (Thread 806610d80 (LWP 100579)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffa5d3d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffa5d4000
Thread 113 (Thread 806610bc0 (LWP 100601)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffa3d2d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffa3d3000
Thread 112 (Thread 806610a00 (LWP 100640)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffffa1d1d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffffa1d2000
Thread 111 (Thread 806610840 (LWP 100649)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff9fd0d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff9fd1000
Thread 110 (Thread 806610680 (LWP 100663)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff9dcfd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff9dd0000
Thread 109 (Thread 8066104c0 (LWP 100707)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff9bced78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff9bcf000
Thread 108 (Thread 806610300 (LWP 100709)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x809648008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff99cdc80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff99ce000
Thread 107 (Thread 806610140 (LWP 100715)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff97ccd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff97cd000
Thread 106 (Thread 80660ff80 (LWP 100742)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff95cbd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff95cc000
Thread 105 (Thread 80660fdc0 (LWP 100745)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff93cad78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff93cb000
Thread 104 (Thread 80660fc00 (LWP 100749)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff91c9d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff91ca000
Thread 103 (Thread 80660fa40 (LWP 100757)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x807347008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff8fc8c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff8fc9000
Thread 102 (Thread 80660f880 (LWP 100764)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff8dc7d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff8dc8000
Thread 101 (Thread 80660f6c0 (LWP 100766)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff8bc6d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff8bc7000
Thread 100 (Thread 80660f500 (LWP 100786)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff89c5d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff89c6000
Thread 99 (Thread 80660f340 (LWP 100805)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff87c4d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff87c5000
Thread 98 (Thread 80660f180 (LWP 100815)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x80969d008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff85c3c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff85c4000
Thread 97 (Thread 80660efc0 (LWP 100824)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff83c2d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff83c3000
Thread 96 (Thread 80660ee00 (LWP 100841)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff81c1d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff81c2000
Thread 95 (Thread 80660ec40 (LWP 100850)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff7fc0d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff7fc1000
Thread 94 (Thread 80660ea80 (LWP 100870)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff7dbfd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff7dc0000
Thread 93 (Thread 80660e8c0 (LWP 100902)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x8096d0008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff7bbec80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff7bbf000
Thread 92 (Thread 80660e700 (LWP 100927)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x808ad0008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff79bdc80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff79be000
Thread 91 (Thread 80660e540 (LWP 100932)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x8096f2008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff77bcc80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x801213560, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff77bd000
Thread 90 (Thread 80660e380 (LWP 100933)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff75bbd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff75bc000
Thread 89 (Thread 80660e1c0 (LWP 100945)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff73bad78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff73bb000
Thread 88 (Thread 8067fdac0 (LWP 100949)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff71b9d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff71ba000
Thread 87 (Thread 8067fd900 (LWP 100950)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff6fb8d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff6fb9000
Thread 86 (Thread 8067fd740 (LWP 100985)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff6db7d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff6db8000
Thread 85 (Thread 8067fd580 (LWP 100987)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x8096ae008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff6bb6c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff6bb7000
Thread 84 (Thread 8067fd3c0 (LWP 100990)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x806e59008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff69b5c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff69b6000
Thread 83 (Thread 8067fd200 (LWP 100994)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff67b4d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff67b5000
Thread 82 (Thread 8067fd040 (LWP 101004)):
#0 0x00000008010107bc in poll () from /lib/libc.so.7
#1 0x0000000800e6385e in poll () from /lib/libthr.so.3
#2 0x0000000000418728 in CNT_Session (sp=0x809659008) at cache_center.c:102
#3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff65b3c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4 0x000000000042f3ba in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff65b4000
Thread 81 (Thread 8067fce80 (LWP 101025)):
#0 inflateInit2_ (strm=0x807544df0, windowBits=31, version=Variable "version" is not available.) at inflate.c:193
#1 0x000000000042474f in VGZ_NewUngzip (sp=Variable "sp" is not available.) at cache_gzip.c:172
#2 0x00000000004247cf in vfp_testgzip_begin (sp=Variable "sp" is not available.) at cache_gzip.c:595
#3 0x0000000000422118 in FetchBody (sp=0x8087df008) at cache_fetch.c:212
#4 0x00000000004157b8 in cnt_fetchbody (sp=0x8087df008) at cache_center.c:847
#5 0x0000000000417a1b in CNT_Session (sp=0x8087df008) at steps.h:42
#6 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff63b2c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#7 0x000000000042f3ba in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#8 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#9 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff63b3000
Thread 80 (Thread 8067fccc0 (LWP 101027)):
#0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff61b1d78, lck=Variable "lck" is not available.
) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff61b2000 Thread 79 (Thread 8067fcb00 (LWP 101031)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff5fb0d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 ---Type to continue, or q to quit--- #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff5fb1000 Thread 78 (Thread 8067fc940 (LWP 101043)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff5dafd78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff5db0000 Thread 77 (Thread 8067fc780 (LWP 101044)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff5baed78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7ffff5baf000 Thread 76 (Thread 8067fc5c0 (LWP 101048)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff59add78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff59ae000 Thread 75 (Thread 8067fc400 (LWP 101100)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff57acd78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff57ad000 Thread 74 (Thread 8067fc240 (LWP 101108)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff55abd78, lck=Variable "lck" is not available. ) ---Type to continue, or q to quit--- at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7ffff55ac000 Thread 73 (Thread 8067fc080 (LWP 101145)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff53aad78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff53ab000 Thread 72 (Thread 8067fbec0 (LWP 101158)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff51a9d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff51aa000 Thread 71 (Thread 8067fbd00 (LWP 101164)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff4fa8d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7ffff4fa9000 Thread 70 (Thread 8067fbb40 (LWP 101177)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff4da7d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff4da8000 Thread 69 (Thread 8067fb980 (LWP 101178)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 ---Type to continue, or q to quit--- #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff4ba6d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff4ba7000 Thread 68 (Thread 8067fb7c0 (LWP 101180)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff49a5d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7ffff49a6000 Thread 67 (Thread 8067fb600 (LWP 101206)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff47a4d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff47a5000 Thread 66 (Thread 8067fb440 (LWP 101226)): #0 0x00000008010107bc in poll () from /lib/libc.so.7 #1 0x0000000800e6385e in poll () from /lib/libthr.so.3 #2 0x0000000000418728 in CNT_Session (sp=0x807d36008) at cache_center.c:102 #3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff45a3c80, priv=Variable "priv" is not available. ) at cache_pool.c:301 #4 0x000000000042f3ba in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:184 #5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #6 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff45a4000 Thread 65 (Thread 8067fb280 (LWP 101247)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff43a2d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7ffff43a3000 Thread 64 (Thread 8067fb0c0 (LWP 101261)): ---Type to continue, or q to quit--- #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff41a1d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff41a2000 Thread 63 (Thread 8067faf00 (LWP 101273)): #0 0x00000008010107bc in poll () from /lib/libc.so.7 #1 0x0000000800e6385e in poll () from /lib/libthr.so.3 #2 0x0000000000418728 in CNT_Session (sp=0x8090e1008) at cache_center.c:102 #3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff3fa0c80, priv=Variable "priv" is not available. ) at cache_pool.c:301 #4 0x000000000042f3ba in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:184 #5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #6 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff3fa1000 Thread 62 (Thread 8067fad40 (LWP 101288)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff3d9fd78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7ffff3da0000 Thread 61 (Thread 8067fab80 (LWP 101291)): #0 0x000000080104dfac in writev () from /lib/libc.so.7 #1 0x0000000800e6308e in writev () from /lib/libthr.so.3 #2 0x000000000043b944 in WRW_Flush (w=0x7ffff3b9ec80) at cache_wrw.c:125 #3 0x000000000043c0a6 in WRW_FlushRelease (w=0x7ffff3b9ec80) at cache_wrw.c:148 #4 0x0000000000430ae8 in RES_WriteObj (sp=0x8086c3008) at cache_response.c:322 #5 0x0000000000415ba0 in cnt_deliver (sp=0x8086c3008) at cache_center.c:279 #6 0x00000000004179ac in CNT_Session (sp=0x8086c3008) at steps.h:45 #7 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff3b9ec80, priv=Variable "priv" is not available. ) at cache_pool.c:301 #8 0x000000000042f3ba in wrk_thread_real (qp=0x801213600, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:184 #9 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #10 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff3b9f000 Thread 60 (Thread 8067fa9c0 (LWP 101312)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 ---Type to continue, or q to quit--- #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff399dd78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff399e000 Thread 59 (Thread 8067fa800 (LWP 101320)): #0 0x00000008010107bc in poll () from /lib/libc.so.7 #1 0x0000000800e6385e in poll () from /lib/libthr.so.3 #2 0x0000000000418728 in CNT_Session (sp=0x8074ad008) at cache_center.c:102 #3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff379cc80, priv=Variable "priv" is not available. 
) at cache_pool.c:301 #4 0x000000000042f3ba in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:184 #5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #6 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff379d000 Thread 58 (Thread 8067fa640 (LWP 101327)): #0 0x00000008010107bc in poll () from /lib/libc.so.7 #1 0x0000000800e6385e in poll () from /lib/libthr.so.3 #2 0x0000000000418728 in CNT_Session (sp=0x80966a008) at cache_center.c:102 #3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff359bc80, priv=Variable "priv" is not available. ) at cache_pool.c:301 #4 0x000000000042f3ba in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:184 #5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #6 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff359c000 Thread 57 (Thread 8067fa480 (LWP 101339)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff339ad78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff339b000 Thread 56 (Thread 8067fa2c0 (LWP 101346)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff3199d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. 
) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff319a000 ---Type to continue, or q to quit--- Thread 55 (Thread 8067fa100 (LWP 101356)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff2f98d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff2f99000 Thread 54 (Thread 8067f9f40 (LWP 101363)): #0 0x00000008010107bc in poll () from /lib/libc.so.7 #1 0x0000000800e6385e in poll () from /lib/libthr.so.3 #2 0x0000000000418728 in CNT_Session (sp=0x809147008) at cache_center.c:102 #3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff2d97c80, priv=Variable "priv" is not available. ) at cache_pool.c:301 #4 0x000000000042f3ba in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:184 #5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #6 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff2d98000 Thread 53 (Thread 8067f9d80 (LWP 101373)): #0 0x00000008010107bc in poll () from /lib/libc.so.7 #1 0x0000000800e6385e in poll () from /lib/libthr.so.3 #2 0x0000000000418728 in CNT_Session (sp=0x808a9d008) at cache_center.c:102 #3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff2b96c80, priv=Variable "priv" is not available. ) at cache_pool.c:301 #4 0x000000000042f3ba in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:184 #5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #6 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7ffff2b97000 Thread 52 (Thread 8067f9bc0 (LWP 101294)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff2995d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff2996000 Thread 51 (Thread 8067f9a00 (LWP 101413)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff2794d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 ---Type to continue, or q to quit--- #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff2795000 Thread 50 (Thread 8067f9840 (LWP 101444)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff2593d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7ffff2594000 Thread 49 (Thread 8067f9680 (LWP 101446)): #0 0x00000008010107bc in poll () from /lib/libc.so.7 #1 0x0000000800e6385e in poll () from /lib/libthr.so.3 #2 0x0000000000418728 in CNT_Session (sp=0x8073be008) at cache_center.c:102 #3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff2392c80, priv=Variable "priv" is not available. ) at cache_pool.c:301 #4 0x000000000042f3ba in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:184 #5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #6 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff2393000 Thread 48 (Thread 8067f94c0 (LWP 101470)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff2191d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff2192000 Thread 47 (Thread 8067f9300 (LWP 101481)): #0 0x00000008010107bc in poll () from /lib/libc.so.7 #1 0x0000000800e6385e in poll () from /lib/libthr.so.3 #2 0x0000000000418728 in CNT_Session (sp=0x808a8c008) at cache_center.c:102 #3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff1f90c80, priv=Variable "priv" is not available. ) at cache_pool.c:301 #4 0x000000000042f3ba in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:184 #5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #6 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7ffff1f91000 Thread 46 (Thread 8067f9140 (LWP 101486)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff1d8fd78, lck=Variable "lck" is not available. ) ---Type to continue, or q to quit--- at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff1d90000 Thread 45 (Thread 8067f8f80 (LWP 101493)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff1b8ed78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff1b8f000 Thread 44 (Thread 8067f8dc0 (LWP 101494)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff198dd78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? 
() Cannot access memory at address 0x7ffff198e000 Thread 43 (Thread 8067f8c00 (LWP 101515)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff178cd78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff178d000 Thread 42 (Thread 8067f8a40 (LWP 101517)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff158bd78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff158c000 Thread 41 (Thread 8067f8880 (LWP 101537)): #0 0x000000080106550a in read () from /lib/libc.so.7 #1 0x0000000800e63760 in read () from /lib/libthr.so.3 ---Type to continue, or q to quit--- #2 0x000000000042c2ae in HTC_Read (htc=0x7ffff138adc8, d=Variable "d" is not available. ) at cache_httpconn.c:203 #3 0x0000000000421fb2 in FetchBody (sp=0x809037008) at cache_fetch.c:251 #4 0x00000000004157b8 in cnt_fetchbody (sp=0x809037008) at cache_center.c:847 #5 0x0000000000417a1b in CNT_Session (sp=0x809037008) at steps.h:42 #6 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff138ac80, priv=Variable "priv" is not available. ) at cache_pool.c:301 #7 0x000000000042f3ba in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. 
) at cache_pool.c:184 #8 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #9 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff138b000 Thread 40 (Thread 8067f86c0 (LWP 101539)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff1189d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff118a000 Thread 39 (Thread 8067f8500 (LWP 101558)): #0 0x00000008010107bc in poll () from /lib/libc.so.7 #1 0x0000000800e6385e in poll () from /lib/libthr.so.3 #2 0x0000000000418728 in CNT_Session (sp=0x80952d008) at cache_center.c:102 #3 0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff0f88c80, priv=Variable "priv" is not available. ) at cache_pool.c:301 #4 0x000000000042f3ba in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:184 #5 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #6 0x0000000000000000 in ?? () Cannot access memory at address 0x7ffff0f89000 Thread 38 (Thread 8067f8340 (LWP 101586)): #0 0x0000000800e6b2bc in __error () from /lib/libthr.so.3 #1 0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3 #2 0x000000000042cdbb in Lck_CondWait (cond=0x7ffff0d87d78, lck=Variable "lck" is not available. ) at cache_lck.c:151 #3 0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:172 #4 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? 
()
Cannot access memory at address 0x7ffff0d88000
Thread 37 (Thread 8067f8180 (LWP 101588)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7ffff0b86d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff0b87000
Thread 36 (Thread 8067f7fc0 (LWP 101602)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7ffff0985d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff0986000
Thread 35 (Thread 8067f7e00 (LWP 101611)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7ffff0784d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff0785000
Thread 34 (Thread 8067f7c40 (LWP 101615)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7ffff0583d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff0584000
Thread 33 (Thread 8067f7a80 (LWP 101619)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7ffff0382d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff0383000
Thread 32 (Thread 8067f78c0 (LWP 101635)):
#0  0x00000008010107bc in poll () from /lib/libc.so.7
#1  0x0000000800e6385e in poll () from /lib/libthr.so.3
#2  0x0000000000418728 in CNT_Session (sp=0x807d47008) at cache_center.c:102
#3  0x0000000000430221 in wrk_do_cnt_sess (w=0x7ffff0181c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4  0x000000000042f3ba in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7ffff0182000
Thread 31 (Thread 8067f7700 (LWP 101677)):
#0  0x000000080106550a in read () from /lib/libc.so.7
#1  0x0000000800e63760 in read () from /lib/libthr.so.3
#2  0x000000000042c2ae in HTC_Read (htc=0x7fffeff80dc8, d=Variable "d" is not available.) at cache_httpconn.c:203
#3  0x0000000000421fb2 in FetchBody (sp=0x807c8c008) at cache_fetch.c:251
#4  0x00000000004157b8 in cnt_fetchbody (sp=0x807c8c008) at cache_center.c:847
#5  0x0000000000417a1b in CNT_Session (sp=0x807c8c008) at steps.h:42
#6  0x0000000000430221 in wrk_do_cnt_sess (w=0x7fffeff80c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#7  0x000000000042f3ba in wrk_thread_real (qp=0x8012136a0, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#8  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#9  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffeff81000
Thread 30 (Thread 8067f7540 (LWP 101693)):
#0  0x00000008010107bc in poll () from /lib/libc.so.7
#1  0x0000000800e6385e in poll () from /lib/libthr.so.3
#2  0x0000000000418728 in CNT_Session (sp=0x809266008) at cache_center.c:102
#3  0x0000000000430221 in wrk_do_cnt_sess (w=0x7fffefd7fc80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4  0x000000000042f3ba in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffefd80000
Thread 29 (Thread 8067f7380 (LWP 101697)):
#0  0x000000080106550a in read () from /lib/libc.so.7
#1  0x0000000800e63760 in read () from /lib/libthr.so.3
#2  0x000000000042c2ae in HTC_Read (htc=0x7fffefb7edc8, d=Variable "d" is not available.) at cache_httpconn.c:203
#3  0x0000000000421fb2 in FetchBody (sp=0x806e37008) at cache_fetch.c:251
#4  0x00000000004157b8 in cnt_fetchbody (sp=0x806e37008) at cache_center.c:847
#5  0x0000000000417a1b in CNT_Session (sp=0x806e37008) at steps.h:42
#6  0x0000000000430221 in wrk_do_cnt_sess (w=0x7fffefb7ec80, priv=Variable "priv" is not available.) at cache_pool.c:301
#7  0x000000000042f3ba in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#8  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#9  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffefb7f000
Thread 28 (Thread 8067f71c0 (LWP 101704)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffef97dd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffef97e000
Thread 27 (Thread 806614e40 (LWP 101730)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffef77cd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffef77d000
Thread 26 (Thread 806ff7900 (LWP 101733)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffef57bd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffef57c000
Thread 25 (Thread 806ff7740 (LWP 101740)):
#0  0x00000008010107bc in poll () from /lib/libc.so.7
#1  0x0000000800e6385e in poll () from /lib/libthr.so.3
#2  0x0000000000418728 in CNT_Session (sp=0x809703008) at cache_center.c:102
#3  0x0000000000430221 in wrk_do_cnt_sess (w=0x7fffef37ac80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4  0x000000000042f3ba in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffef37b000
Thread 24 (Thread 806ff7580 (LWP 101747)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffef179d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffef17a000
Thread 23 (Thread 806ff73c0 (LWP 101758)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffeef78d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffeef79000
Thread 22 (Thread 806ff7200 (LWP 101764)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffeed77d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffeed78000
Thread 21 (Thread 806ff7040 (LWP 101769)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffeeb76d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffeeb77000
Thread 20 (Thread 806ff6e80 (LWP 101774)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffee975d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffee976000
Thread 19 (Thread 806ff6cc0 (LWP 101785)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffee774d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffee775000
Thread 18 (Thread 806ff6b00 (LWP 101792)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffee573d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffee574000
Thread 17 (Thread 806ff6940 (LWP 101806)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffee372d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffee373000
Thread 16 (Thread 806ff6780 (LWP 101808)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffee171d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffee172000
Thread 15 (Thread 806ff65c0 (LWP 101810)):
#0  0x000000080106550a in read () from /lib/libc.so.7
#1  0x0000000800e63760 in read () from /lib/libthr.so.3
#2  0x000000000042c2ae in HTC_Read (htc=0x7fffedf70dc8, d=Variable "d" is not available.) at cache_httpconn.c:203
#3  0x0000000000421fb2 in FetchBody (sp=0x806f58008) at cache_fetch.c:251
#4  0x00000000004157b8 in cnt_fetchbody (sp=0x806f58008) at cache_center.c:847
#5  0x0000000000417a1b in CNT_Session (sp=0x806f58008) at steps.h:42
#6  0x0000000000430221 in wrk_do_cnt_sess (w=0x7fffedf70c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#7  0x000000000042f3ba in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#8  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#9  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffedf71000
Thread 14 (Thread 806ff6400 (LWP 101812)):
#0  0x000000080104dfaa in writev () from /lib/libc.so.7
#1  0x0000000800e6308e in writev () from /lib/libthr.so.3
#2  0x000000000043b944 in WRW_Flush (w=0x7fffedd6fc80) at cache_wrw.c:125
#3  0x0000000000424268 in VGZ_WrwGunzip (sp=0x808be0008, vg=0x8086e2880, ibuf=0x8086e3000, ibufl=Variable "ibufl" is not available.) at cache_gzip.c:382
#4  0x00000000004308d2 in res_WriteGunzipObj (sp=0x808be0008) at cache_response.c:181
#5  0x0000000000430f05 in RES_WriteObj (sp=0x808be0008) at cache_response.c:313
#6  0x0000000000415ba0 in cnt_deliver (sp=0x808be0008) at cache_center.c:279
#7  0x00000000004179ac in CNT_Session (sp=0x808be0008) at steps.h:45
#8  0x0000000000430221 in wrk_do_cnt_sess (w=0x7fffedd6fc80, priv=Variable "priv" is not available.) at cache_pool.c:301
#9  0x000000000042f3ba in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#10 0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#11 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffedd70000
Thread 13 (Thread 806ff6240 (LWP 101815)):
#0  0x00000008010107bc in poll () from /lib/libc.so.7
#1  0x0000000800e6385e in poll () from /lib/libthr.so.3
#2  0x0000000000418728 in CNT_Session (sp=0x806791008) at cache_center.c:102
#3  0x0000000000430221 in wrk_do_cnt_sess (w=0x7fffedb6ec80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4  0x000000000042f3ba in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffedb6f000
Thread 12 (Thread 806ff6080 (LWP 101816)):
#0  0x00000008010107bc in poll () from /lib/libc.so.7
#1  0x0000000800e6385e in poll () from /lib/libthr.so.3
#2  0x0000000000418728 in CNT_Session (sp=0x8096bf008) at cache_center.c:102
#3  0x0000000000430221 in wrk_do_cnt_sess (w=0x7fffed96dc80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4  0x000000000042f3ba in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffed96e000
Thread 11 (Thread 806ff5ec0 (LWP 101826)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffed76cd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffed76d000
Thread 10 (Thread 806ff5d00 (LWP 101846)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffed56bd78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffed56c000
Thread 9 (Thread 806ff5b40 (LWP 101847)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffed36ad78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffed36b000
Thread 8 (Thread 806ff5980 (LWP 101849)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffed169d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffed16a000
Thread 7 (Thread 806ff57c0 (LWP 101881)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffecf68d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffecf69000
Thread 6 (Thread 806ff5600 (LWP 101894)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffecd67d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffecd68000
Thread 5 (Thread 806ff5440 (LWP 101908)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffecb66d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffecb67000
Thread 4 (Thread 806ff5280 (LWP 101911)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffec965d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffec966000
Thread 3 (Thread 806ff50c0 (LWP 101913)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffec764d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffec765000
Thread 2 (Thread 806ff4f00 (LWP 101918)):
#0  0x0000000800e6b2bc in __error () from /lib/libthr.so.3
#1  0x0000000800e693b5 in pthread_cond_signal () from /lib/libthr.so.3
#2  0x000000000042cdbb in Lck_CondWait (cond=0x7fffec563d78, lck=Variable "lck" is not available.) at cache_lck.c:151
#3  0x000000000042f5ee in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:172
#4  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#5  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffec564000
Thread 1 (Thread 806ff4d40 (LWP 101919)):
#0  0x00000008010107bc in poll () from /lib/libc.so.7
#1  0x0000000800e6385e in poll () from /lib/libthr.so.3
#2  0x0000000000418728 in CNT_Session (sp=0x80967b008) at cache_center.c:102
#3  0x0000000000430221 in wrk_do_cnt_sess (w=0x7fffec362c80, priv=Variable "priv" is not available.) at cache_pool.c:301
#4  0x000000000042f3ba in wrk_thread_real (qp=0x801213740, shm_workspace=Variable "shm_workspace" is not available.) at cache_pool.c:184
#5  0x0000000800e61511 in pthread_getprio () from /lib/libthr.so.3
#6  0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffec363000
0x00000008008c9e7f	273	case 2: return __builtin_frame_address(3);
(gdb)
}}}

-- 
Ticket URL: 
Varnish 
The Varnish HTTP Accelerator