From varnish-bugs at varnish-cache.org  Tue Jan  1 14:51:36 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 01 Jan 2013 14:51:36 -0000
Subject: [Varnish] #1242: Varnish throws 503 without touching the healthy backend
Message-ID: <042.5703e61e55b85c6b8ba6f67cb5b5c6c6@varnish-cache.org>

#1242: Varnish throws 503 without touching the healthy backend
-------------------+----------------------
 Reporter:  xjia   |       Type:  defect
   Status:  new    |   Priority:  normal
Milestone:         |  Component:  varnishd
  Version:  3.0.0  |   Severity:  critical
 Keywords:         |
-------------------+----------------------
 I'm running Varnish 3.0.0 (revision cbf1284) under Ubuntu 11.10. Requests
 are sent to Varnish first, then should be dispatched to another server
 running nginx 1.0.5 and MediaWiki 1.20.2.

 varnishlog shows the backend is healthy. When I curl a wiki page, which
 is accessible directly from nginx, Varnish gives 503 Service Unavailable.
 However, I don't see any request in the nginx log files.

 varnishlog:

 {{{
    0 CLI          - Rd ping
    0 CLI          - Wr 200 19 PONG 1357050460 1.0
    0 Backend_health - mediawiki Still healthy 4--X-RH 5 3 5 0.202436 0.173041 HTTP/1.1 200 OK
    4 SessionOpen  c 172.16.6.106 37505 :81
    4 ReqStart     c 172.16.6.106 37505 1372273948
    4 RxRequest    c GET
    4 RxURL        c /wiki/What_The_Fuck
    4 RxProtocol   c HTTP/1.1
    4 RxHeader     c User-Agent: curl/7.21.6 (x86_64-pc-linux-gnu) libcurl/7.21.6 OpenSSL/1.0.0e zlib/1.2.3.4 libidn/1.22 librtmp/2.3
    4 RxHeader     c Host: 172.16.6.105:81
    4 RxHeader     c Accept: */*
    4 VCL_call     c recv lookup
    4 VCL_call     c hash
    4 Hash         c /wiki/What_The_Fuck
    4 Hash         c 172.16.6.105:81
    4 VCL_return   c hash
    4 VCL_call     c miss fetch
    4 Backend      c 5 mediawiki mediawiki
    4 TTL          c 1372273948 RFC 120 1357050462 0 0 0 0
    4 VCL_call     c fetch deliver
    4 ObjProtocol  c HTTP/1.1
    4 ObjResponse  c Service Unavailable
    4 ObjHeader    c Server: nginx/1.0.5
    4 ObjHeader    c Date: Tue, 01 Jan 2013 14:35:12 GMT
    4 ObjHeader    c Content-Type: text/html; charset=utf-8
    4 ObjHeader    c Retry-After: 5
    4 ObjHeader    c X-Varnish: 158017276
    4 ObjHeader    c Via: 1.1 varnish
    4 VCL_call     c deliver deliver
    4 TxProtocol   c HTTP/1.1
    4 TxStatus     c 503
    4 TxResponse   c Service Unavailable
    4 TxHeader     c Server: nginx/1.0.5
    4 TxHeader     c Content-Type: text/html; charset=utf-8
    4 TxHeader     c Retry-After: 5
    4 TxHeader     c X-Varnish: 158017276
    4 TxHeader     c Via: 1.1 varnish
    4 TxHeader     c Content-Length: 418
    4 TxHeader     c Accept-Ranges: bytes
    4 TxHeader     c Date: Tue, 01 Jan 2013 14:27:41 GMT
    4 TxHeader     c X-Varnish: 1372273948
    4 TxHeader     c Age: 0
    4 TxHeader     c Via: 1.1 varnish
    4 TxHeader     c Connection: keep-alive
    4 Length       c 418
    4 ReqEnd       c 1372273948 1357050461.651694775 1357050461.656989098 0.000372171 0.005029202 0.000265121
    4 SessionClose c EOF
    4 StatSess     c 172.16.6.106 37505 0 1 1 0 0 1 306 418
    0 CLI          - Rd ping
    0 CLI          - Wr 200 19 PONG 1357050463 1.0
    0 Backend_health - mediawiki Still healthy 4--X-RH 5 3 5 0.225957 0.186270 HTTP/1.1 200 OK
 }}}

 I tried to increase the timeouts etc., but it still doesn't work:

 {{{
 backend mediawiki {
     .host = "172.16.6.106";
     .port = "80";
     .connect_timeout = 300s;
     .first_byte_timeout = 60s;
     .between_bytes_timeout = 30s;
     .probe = {
         .url = "/wiki/Main_Page";
         .interval = 5s;
         .timeout = 3s;
         .window = 5;
         .threshold = 3;
     }
 }
 }}}

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Wed Jan  2 15:50:51 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 02 Jan 2013 15:50:51 -0000
Subject: [Varnish] #1243: param.show -l default values listed as ^A for user/group (more?)
Message-ID: <046.0c92690a77dd03bf9bba56dbf297fa9b@varnish-cache.org>

#1243: param.show -l default values listed as ^A for user/group (more?)
----------------------+-------------------
 Reporter:  kristian  |      Owner:
     Type:  defect    |     Status:  new
 Priority:  low       |  Milestone:
Component:  varnishd  |    Version:  trunk
 Severity:  minor     |   Keywords:
----------------------+-------------------
 I'm parsing the output of param.show -l, particularly the default value
 right now.
 For user and group there is no default. However, the value is not printed
 as blank; an invisible ^A (ASCII 0x01) is printed instead, which
 complicates parsing, as I do not expect non-printable characters.

 Example result (yeah, some issues in the json still, but ignore that):

 {{{
 {
     "name": "vcc_err_unref",
     "value": "on",
     "default": "on",
     "unit": "bool",
     "description": "Unreferenced VCL objects result in error."
 },
 {
     "name": "vcc_allow_inline_c",
     "value": "on",
     "default": "on",
     "unit": "bool",
     "description": "Allow inline C code in VCL."
 },
 {
     "name": "user",
     "value": "nobody (65534)",
     "default": "^A",
     "unit": "",
     "description": "The unprivileged user to run as. Setting thiswill also set "group" to the specified user'sprimary group.NB: This parameter will not take any effect untilthe child process has been restarted."
 },
 }}}

 Or, using "varnishadm param.show user | less":

 {{{
 user                       nobody (65534)
                            Default is ^A
                            The unprivileged user to run as. Setting this
                            will also set "group" to the specified user's
                            primary group.

                            NB: This parameter will not take any effect until
                            the child process has been restarted.
 }}}

 (Note that for my terminal/shell, the ^A is not visible without | less)

 This is a medium-fresh master/trunk Varnish.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Wed Jan  2 15:54:26 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 02 Jan 2013 15:54:26 -0000
Subject: [Varnish] #1243: param.show -l default values listed as ^A for user/group (more?)
In-Reply-To: <046.0c92690a77dd03bf9bba56dbf297fa9b@varnish-cache.org>
References: <046.0c92690a77dd03bf9bba56dbf297fa9b@varnish-cache.org>
Message-ID: <061.7585eda76930b77b80c387695ebd76ba@varnish-cache.org>

#1243: param.show -l default values listed as ^A for user/group (more?)
----------------------+--------------------
 Reporter:  kristian  |       Owner:
     Type:  defect    |      Status:  new
 Priority:  low       |   Milestone:
Component:  varnishd  |     Version:  trunk
 Severity:  minor     |  Resolution:
 Keywords:            |
----------------------+--------------------
Description changed by kristian:

Old description:

> I'm parsing the output of param.show -l, particularly the default value
> right now.
>
> For user and group, there is no default, however the value is not printed
> as blank, but an invisible ^A (ASCII 0x01) is printed instead, which
> complicates parsing as I do not expect non-printable characters.
>
> Example result (yeah, some issues in the json still, but ignore that):
>
> {{{
> {
>     "name": "vcc_err_unref",
>     "value": "on",
>     "default": "on",
>     "unit": "bool",
>     "description": "Unreferenced VCL objects result in error."
> },
> {
>     "name": "vcc_allow_inline_c",
>     "value": "on",
>     "default": "on",
>     "unit": "bool",
>     "description": "Allow inline C code in VCL."
> },
> {
>     "name": "user",
>     "value": "nobody (65534)",
>     "default": "^A",
>     "unit": "",
>     "description": "The unprivileged user to run as. Setting thiswill also set "group" to the specified user'sprimary group.NB: This parameter will not take any effect untilthe child process has been restarted."
> },
> }}}
>
> Or, using "varnishadm param.show user | less":
>
> {{{
> user                       nobody (65534)
>                            Default is ^A
>                            The unprivileged user to run as. Setting this
>                            will also set "group" to the specified user's
>                            primary group.
>
>                            NB: This parameter will not take any effect until
>                            the child process has been restarted.
> }}}
>
> (Note that for my terminal/shell, the ^A is not visible without | less)
>
> This is a medium-fresh master/trunk Varnish.

New description:

 I'm parsing the output of param.show -l, particularly the default value
 right now.

 For user and group, there is no default, however the value is not printed
 as blank, but an invisible `^A` (ASCII 0x01) is printed instead, which
 complicates parsing as I do not expect non-printable characters.

 Example result (yeah, some issues in the json still, but ignore that):

 {{{
 {
     "name": "vcc_err_unref",
     "value": "on",
     "default": "on",
     "unit": "bool",
     "description": "Unreferenced VCL objects result in error."
 },
 {
     "name": "vcc_allow_inline_c",
     "value": "on",
     "default": "on",
     "unit": "bool",
     "description": "Allow inline C code in VCL."
 },
 {
     "name": "user",
     "value": "nobody (65534)",
     "default": "^A",
     "unit": "",
     "description": "The unprivileged user to run as. Setting thiswill also set "group" to the specified user'sprimary group.NB: This parameter will not take any effect untilthe child process has been restarted."
 },
 }}}

 Or, using "`varnishadm param.show user | less`":

 {{{
 user                       nobody (65534)
                            Default is ^A
                            The unprivileged user to run as. Setting this
                            will also set "group" to the specified user's
                            primary group.

                            NB: This parameter will not take any effect until
                            the child process has been restarted.
 }}}

 (Note that for my terminal/shell, the `^A` is not visible without | less)

 This is a medium-fresh master/trunk Varnish.
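A minimal workaround sketch for the parsing side (shell is assumed here; it simply strips ASCII control characters, such as the 0x01 byte, before treating the field as a value):

```shell
# When parsing 'param.show -l' output, strip non-printable control
# characters (param.show emits a 0x01 byte as the "default" for
# user/group); tr -d with an octal range removes them.
raw_default='Default is '$(printf '\001')
cleaned=$(printf '%s' "$raw_default" | tr -d '\000-\037\177')
printf '%s\n' "$cleaned"
```

After cleaning, the default field for user/group is simply empty, which is what a parser would expect.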
--
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Wed Jan  2 17:31:00 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 02 Jan 2013 17:31:00 -0000
Subject: [Varnish] #1244: param.show always shows 64-bit defaults
Message-ID: <046.67509c4594b5589ea56c3a117406f14c@varnish-cache.org>

#1244: param.show always shows 64-bit defaults
----------------------+-------------------
 Reporter:  kristian  |      Owner:
     Type:  defect    |     Status:  new
 Priority:  normal    |  Milestone:
Component:  varnishd  |    Version:  trunk
 Severity:  normal    |   Keywords:
----------------------+-------------------
 `param.show` can show default values for parameters, but these are not
 "platform specific", e.g.:

 {{{
 kristian at freud:~$ varnishadm param.show workspace_client
 workspace_client           16k [bytes]
                            Default is 64k
 }}}

 (This Varnish instance is, of course, running with default values.) In
 this example, 16k is the actual default value for 32-bit systems.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Wed Jan  2 17:45:22 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 02 Jan 2013 17:45:22 -0000
Subject: [Varnish] #1245: param.show mixes false/off for booleans
Message-ID: <046.ffe9669057b943d1194c0eac38fe285d@varnish-cache.org>

#1245: param.show mixes false/off for booleans
----------------------+-------------------
 Reporter:  kristian  |      Owner:
     Type:  defect    |     Status:  new
 Priority:  normal    |  Milestone:
Component:  build     |    Version:  trunk
 Severity:  normal    |   Keywords:
----------------------+-------------------
 The output of param.show uses "off" when printing the value of a boolean,
 but "false" when printing the default for the same parameter. E.g.:

 {{{
 $ varnishadm param.show busyobj_worker_cache
 busyobj_worker_cache       off
                            Default is false
 }}}

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Wed Jan  2 19:59:22 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 02 Jan 2013 19:59:22 -0000
Subject: [Varnish] #1246: Assert error in cnt_hit(), cache_center.c line 1025
Message-ID: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org>

#1246: Assert error in cnt_hit(), cache_center.c line 1025
-------------------+----------------------
 Reporter:  psa    |       Type:  defect
   Status:  new    |   Priority:  normal
Milestone:         |  Component:  varnishd
  Version:  3.0.3  |   Severity:  normal
 Keywords:         |
-------------------+----------------------
 This is 3.0 HEAD (not trunk, not 3.0.3):

 {{{
 varnish> panic.show
 200
 Last panic at: Tue, 01 Jan 2013 22:29:47 GMT
 Assert error in cnt_hit(), cache_center.c line 1025:
   Condition((sp->wrk->beresp->ws) == 0) not true.
 thread = (cache-worker)
 ident = Linux,2.6.32-40-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll
 sp = 0x7f0a65337008 {
   fd = 125, id = 125, xid = 4278491724,
   client = 172.26.1.12 39561,
   step = STP_HIT,
   handling = deliver,
   restarts = 0, esi_level = 0
   flags =
   bodystatus = 4
   ws = 0x7f0a65337080 {
     id = "sess",
     {s,f,r,e} = {0x7f0a65337c78,+312,(nil),+65536},
   },
   http[req] = {
     ws = 0x7f0a65337080[sess]
       "GET",
       "/sources",
       "HTTP/1.1",
       "Host: 172.26.1.29:9100",
       "Connection: Keep-Alive",
       "User-Agent: Apache-HttpClient/4.2.1 (java 1.5)",
       "X-Local-TTL: 167026.854",
   },
   worker = 0x7f0a5f6fea90 {
     ws = 0x7f0a5f6fecc8 {
       id = "wrk",
       {s,f,r,e} = {0x7f0a5f6eca20,0x7f0a5f6eca20,(nil),+65536},
     },
     http[beresp] = {
       ws = 0x7f0a5f6fecc8[wrk]
         "HTTP/1.1",
         "200",
         "OK",
         "Date: Tue, 01 Jan 2013 22:27:07 GMT",
         "Server: Jetty/5.1.11RC0 (Linux/2.6.32-45-generic amd64 java/1.6.0_26",
         "Expires: Tue, 01 Jan 2013 22:27:51 GMT",
         "Content-Type: text/plain;charset=UTF-8",
         "Content-Length: 447",
         "Last-Modified: Tue, 01 Jan 2013 22:26:13 GMT",
     },
   },
   vcl = {
     srcname = {
       "input",
       "Default",
     },
   },
   obj = 0x7ef95493d400 {
     xid = 4135337396,
     ws = 0x7ef95493d418 {
       id = "obj",
       {s,f,r,e} = {0x7ef95493d5e0,+288,(nil),+320},
     },
     http[obj] = {
       ws = 0x7ef95493d418[obj]
         "HTTP/1.1",
         "OK",
         "Date: Tue, 01 Jan 2013 20:50:55 GMT",
         "Server: Jetty/5.1.11RC0 (Linux/2.6.32-45-generic amd64 java/1.6.0_26",
         "Last-Modified: Fri, 28 Dec 2012 16:25:21 GMT",
         "Content-Type: text/plain;charset=UTF-8",
         "Expires: Thu, 03 Jan 2013 20:50:55 GMT",
         "Content-Length: 365",
     },
     len = 365,
     store = {
       365 {
         7b 0a 22 73 63 6f 70 65 22 3a 22 70 61 67 65 22 |{."scope":"page"|
         2c 0a 22 6c 61 6e 67 75 61 67 65 22 3a 22 65 6e |,."language":"en|
         22 2c 0a 22 62 72 61 6e 64 70 72 6f 74 65 63 74 |",."brandprotect|
         69 6f 6e 22 3a 7b 0a 09 22 72 61 74 69 6e 67 22 |ion":{.."rating"|
         [301 more]
       },
     },
   },
 },
 }}}

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Wed Jan  2 21:34:14 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 02 Jan 2013 21:34:14 -0000
Subject: [Varnish] #1246: Assert error in cnt_hit(), cache_center.c line 1025
In-Reply-To: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org>
References: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org>
Message-ID: <056.a579dd5fc05a204593dd7ddd6e372719@varnish-cache.org>

#1246: Assert error in cnt_hit(), cache_center.c line 1025
----------------------+--------------------
 Reporter:  psa       |       Owner:
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  3.0.3
 Severity:  normal    |  Resolution:
 Keywords:            |
----------------------+--------------------
Comment (by psa):

 The build was done by taking the 3.0.3 Debian packages and then patching
 3.0.3 -> HEAD.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Fri Jan  4 09:09:40 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 04 Jan 2013 09:09:40 -0000
Subject: [Varnish] #1247: yum installation error "[Errno -1] Header is not complete."
Message-ID: <044.e3db3b22021d9bf26d4bc79f8b887423@varnish-cache.org>

#1247: yum installation error "[Errno -1] Header is not complete."
--------------------+--------------------
 Reporter:  Damien  |      Type:  defect
   Status:  new     |  Priority:  high
Milestone:          | Component:  build
  Version:  trunk   |  Severity:  major
 Keywords:          |
--------------------+--------------------
 Getting the following errors when trying to install Varnish via the
 repository provided by varnish-cache.org on RHEL5:

 yum install varnish
 Loading "installonlyn" plugin
 Setting up Install Process
 Setting up repositories
 epel                      100% |=========================| 3.7 kB    00:00
 accpkgs                   100% |=========================|  951 B    00:00
 varnish-3.0               100% |=========================|  951 B    00:00
 update                    100% |=========================|  951 B    00:00
 base                      100% |=========================| 1.1 kB    00:00
 addons                    100% |=========================| 1.9 kB    00:00
 extras                    100% |=========================| 2.1 kB    00:00
 Reading repository metadata in from local files
 Parsing package install arguments
 Resolving Dependencies
 --> Populating transaction set with selected packages. Please wait.
 ---> Downloading header for varnish to pack into transaction set.
 varnish-3.0.3-1.el5.cento 100% |=========================|  23 kB    00:00
 http://repo.varnish-cache.org/redhat/varnish-3.0/el5/x86_64/varnish-3.0.3-1.el5.centos.x86_64.rpm: [Errno -1] Header is not complete.
 Trying other mirror.
 Error: failure: varnish-3.0.3-1.el5.centos.x86_64.rpm from varnish-3.0: [Errno 256] No more mirrors to try.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:08:35 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:08:35 -0000
Subject: [Varnish] #1247: yum installation error "[Errno -1] Header is not complete."
In-Reply-To: <044.e3db3b22021d9bf26d4bc79f8b887423@varnish-cache.org>
References: <044.e3db3b22021d9bf26d4bc79f8b887423@varnish-cache.org>
Message-ID: <059.18e714270f1af25a6d7bdd47a2ce5936@varnish-cache.org>

#1247: yum installation error "[Errno -1] Header is not complete."
--------------------+--------------------
 Reporter:  Damien  |       Owner:
     Type:  defect  |      Status:  new
 Priority:  high    |   Milestone:
Component:  build   |     Version:  trunk
 Severity:  major   |  Resolution:
 Keywords:          |
--------------------+--------------------
Description changed by tfheen:

Old description:

> Getting the following errors when trying to install varnish via
> repository provided by varnish-cache.org on a RHEL5 :
> yum install varnish
> Loading "installonlyn" plugin
> Setting up Install Process
> Setting up repositories
> epel                      100% |=========================| 3.7 kB    00:00
> accpkgs                   100% |=========================|  951 B    00:00
> varnish-3.0               100% |=========================|  951 B    00:00
> update                    100% |=========================|  951 B    00:00
> base                      100% |=========================| 1.1 kB    00:00
> addons                    100% |=========================| 1.9 kB    00:00
> extras                    100% |=========================| 2.1 kB    00:00
> Reading repository metadata in from local files
> Parsing package install arguments
> Resolving Dependencies
> --> Populating transaction set with selected packages. Please wait.
> ---> Downloading header for varnish to pack into transaction set.
> varnish-3.0.3-1.el5.cento 100% |=========================|  23 kB    00:00
> http://repo.varnish-cache.org/redhat/varnish-3.0/el5/x86_64/varnish-3.0.3-1.el5.centos.x86_64.rpm: [Errno -1] Header is not complete.
> Trying other mirror.
> Error: failure: varnish-3.0.3-1.el5.centos.x86_64.rpm from varnish-3.0: [Errno 256] No more mirrors to try.

New description:

 Getting the following errors when trying to install Varnish via the
 repository provided by varnish-cache.org on RHEL5:

 {{{
 yum install varnish
 Loading "installonlyn" plugin
 Setting up Install Process
 Setting up repositories
 epel                      100% |=========================| 3.7 kB    00:00
 accpkgs                   100% |=========================|  951 B    00:00
 varnish-3.0               100% |=========================|  951 B    00:00
 update                    100% |=========================|  951 B    00:00
 base                      100% |=========================| 1.1 kB    00:00
 addons                    100% |=========================| 1.9 kB    00:00
 extras                    100% |=========================| 2.1 kB    00:00
 Reading repository metadata in from local files
 Parsing package install arguments
 Resolving Dependencies
 --> Populating transaction set with selected packages. Please wait.
 ---> Downloading header for varnish to pack into transaction set.
 varnish-3.0.3-1.el5.cento 100% |=========================|  23 kB    00:00
 http://repo.varnish-cache.org/redhat/varnish-3.0/el5/x86_64/varnish-3.0.3-1.el5.centos.x86_64.rpm: [Errno -1] Header is not complete.
 Trying other mirror.
 Error: failure: varnish-3.0.3-1.el5.centos.x86_64.rpm from varnish-3.0: [Errno 256] No more mirrors to try.
 }}}

--
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:13:32 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:13:32 -0000
Subject: [Varnish] #1247: yum installation error "[Errno -1] Header is not complete."
In-Reply-To: <044.e3db3b22021d9bf26d4bc79f8b887423@varnish-cache.org>
References: <044.e3db3b22021d9bf26d4bc79f8b887423@varnish-cache.org>
Message-ID: <059.9fc35cc13fe8e9fa284b20f656357b57@varnish-cache.org>

#1247: yum installation error "[Errno -1] Header is not complete."
--------------------+---------------------
 Reporter:  Damien  |       Owner:  tfheen
     Type:  defect  |      Status:  new
 Priority:  high    |   Milestone:
Component:  build   |     Version:  trunk
 Severity:  major   |  Resolution:
 Keywords:          |
--------------------+---------------------
Changes (by tfheen):

 * owner:   => tfheen

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:14:48 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:14:48 -0000
Subject: [Varnish] #1245: param.show mixes false/off for booleans
In-Reply-To: <046.ffe9669057b943d1194c0eac38fe285d@varnish-cache.org>
References: <046.ffe9669057b943d1194c0eac38fe285d@varnish-cache.org>
Message-ID: <061.646c5b44580f3c5be30cfadacad71c47@varnish-cache.org>

#1245: param.show mixes false/off for booleans
----------------------+--------------------
 Reporter:  kristian  |       Owner:  phk
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  trunk
 Severity:  normal    |  Resolution:
 Keywords:            |
----------------------+--------------------
Changes (by phk):

 * owner:   => phk
 * component:  build => varnishd

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:15:10 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:15:10 -0000
Subject: [Varnish] #1246: Assert error in cnt_hit(), cache_center.c line 1025
In-Reply-To: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org>
References: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org>
Message-ID: <056.a80fc62ebbcf10f66eb140c476a691f0@varnish-cache.org>

#1246: Assert error in cnt_hit(), cache_center.c line 1025
----------------------+---------------------
 Reporter:  psa       |       Owner:  martin
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  3.0.3
 Severity:  normal    |  Resolution:
 Keywords:            |
----------------------+---------------------
Changes (by martin):

 * owner:   => martin

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:15:45 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:15:45 -0000
Subject: [Varnish] #1244: param.show always shows 64-bit defaults
In-Reply-To: <046.67509c4594b5589ea56c3a117406f14c@varnish-cache.org>
References: <046.67509c4594b5589ea56c3a117406f14c@varnish-cache.org>
Message-ID: <061.d2d7298eeb5cfafc481bf7d56381d81f@varnish-cache.org>

#1244: param.show always shows 64-bit defaults
----------------------+--------------------
 Reporter:  kristian  |       Owner:  phk
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  trunk
 Severity:  normal    |  Resolution:
 Keywords:            |
----------------------+--------------------
Changes (by phk):

 * owner:   => phk

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:16:28 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:16:28 -0000
Subject: [Varnish] #1243: param.show -l default values listed as ^A for user/group (more?)
In-Reply-To: <046.0c92690a77dd03bf9bba56dbf297fa9b@varnish-cache.org>
References: <046.0c92690a77dd03bf9bba56dbf297fa9b@varnish-cache.org>
Message-ID: <061.ed5088d0d838879babe3794d2abca1b9@varnish-cache.org>

#1243: param.show -l default values listed as ^A for user/group (more?)
----------------------+--------------------
 Reporter:  kristian  |       Owner:  phk
     Type:  defect    |      Status:  new
 Priority:  low       |   Milestone:
Component:  varnishd  |     Version:  trunk
 Severity:  minor     |  Resolution:
 Keywords:            |
----------------------+--------------------
Changes (by phk):

 * owner:   => phk

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:20:03 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:20:03 -0000
Subject: [Varnish] #1242: Varnish throws 503 without touching the healthy backend
In-Reply-To: <042.5703e61e55b85c6b8ba6f67cb5b5c6c6@varnish-cache.org>
References: <042.5703e61e55b85c6b8ba6f67cb5b5c6c6@varnish-cache.org>
Message-ID: <057.71d2440b263c234d4b735d4f23678fcc@varnish-cache.org>

#1242: Varnish throws 503 without touching the healthy backend
----------------------+---------------------
 Reporter:  xjia      |       Owner:  martin
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  3.0.0
 Severity:  critical  |  Resolution:
 Keywords:            |
----------------------+---------------------
Changes (by martin):

 * owner:   => martin

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:23:56 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:23:56 -0000
Subject: [Varnish] #1242: Varnish throws 503 without touching the healthy backend
In-Reply-To: <042.5703e61e55b85c6b8ba6f67cb5b5c6c6@varnish-cache.org>
References: <042.5703e61e55b85c6b8ba6f67cb5b5c6c6@varnish-cache.org>
Message-ID: <057.ed21b8a5cf9b7b53d2d0d83e70950d46@varnish-cache.org>

#1242: Varnish throws 503 without touching the healthy backend
----------------------+---------------------
 Reporter:  xjia      |       Owner:  martin
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  3.0.0
 Severity:  critical  |  Resolution:
 Keywords:            |
----------------------+---------------------
Comment (by xjia):

 I have upgraded Varnish from 3.0.0 to 3.0.3, but the problem is still
 there.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:28:57 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:28:57 -0000
Subject: [Varnish] #1239: Problem with cleaning up "gone" bans
In-Reply-To: <042.4f254c324d62a995d9a1391b1db24d2f@varnish-cache.org>
References: <042.4f254c324d62a995d9a1391b1db24d2f@varnish-cache.org>
Message-ID: <057.81129e26441dc6da03c5debac95923e4@varnish-cache.org>

#1239: Problem with cleaning up "gone" bans
----------------------+--------------------
 Reporter:  xani      |       Owner:
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  3.0.3
 Severity:  normal    |  Resolution:
 Keywords:            |
----------------------+--------------------
Description changed by tfheen:

Old description:

> Our configuration looks like this:
> -Varnish serves application content via ESI
> -application server invalidates changed content
> -vast majority of invalidations are done by purge, small percentage by
> bans
> -on average we got < 1 ban/s
> -~10GB cache about 3/4 used, ~2 mil object.
>
> After 3 days we had >170k bans in "gone" state and only few hundred
> active, basically none of bans were removed from list
>
> ban config is:
> if (req.http.X-ban-regex) {
>     ban("obj.http.x-hash ~ ^" + req.http.host + req.http.X-ban-regex + "$");
>     error 200 "Banned";
> } else if (req.http.X-ban-single) {
>     ban("obj.http.x-hash == " + req.http.host + req.http.X-ban-single);
>     error 200 "Banned";
> }
> and then x-hash is set to right value.

New description:

 Our configuration looks like this:

 - Varnish serves application content via ESI
 - the application server invalidates changed content
 - the vast majority of invalidations are done by purge, a small
   percentage by bans
 - on average we get < 1 ban/s
 - ~10GB cache, about 3/4 used, ~2 million objects

 After 3 days we had >170k bans in "gone" state and only a few hundred
 active; basically none of the bans were removed from the list.

 The ban config is:

 {{{
 if (req.http.X-ban-regex) {
     ban("obj.http.x-hash ~ ^" + req.http.host + req.http.X-ban-regex + "$");
     error 200 "Banned";
 } else if (req.http.X-ban-single) {
     ban("obj.http.x-hash == " + req.http.host + req.http.X-ban-single);
     error 200 "Banned";
 }
 }}}

 and then x-hash is set to the right value.

--
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:31:35 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:31:35 -0000
Subject: [Varnish] #1239: Problem with cleaning up "gone" bans
In-Reply-To: <042.4f254c324d62a995d9a1391b1db24d2f@varnish-cache.org>
References: <042.4f254c324d62a995d9a1391b1db24d2f@varnish-cache.org>
Message-ID: <057.5221434096f2bd43837b38a4b374faf7@varnish-cache.org>

#1239: Problem with cleaning up "gone" bans
----------------------+---------------------
 Reporter:  xani      |       Owner:  martin
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  3.0.3
 Severity:  normal    |  Resolution:
 Keywords:            |
----------------------+---------------------
Changes (by martin):

 * owner:   => martin

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:34:56 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:34:56 -0000
Subject: [Varnish] #1240: Varnish keeps on restarting every 2 hours
In-Reply-To: <045.58854c746b9cf044c0c2d19e5328a582@varnish-cache.org>
References: <045.58854c746b9cf044c0c2d19e5328a582@varnish-cache.org>
Message-ID: <060.68a9ca10fc8072fba8c53ed02fac828f@varnish-cache.org>

#1240: Varnish keeps on restarting every 2 hours
---------------------+----------------------
 Reporter:  redline  |       Owner:
     Type:  defect   |      Status:  closed
 Priority:  normal   |   Milestone:
Component:  build    |     Version:  3.0.3
 Severity:  normal   |  Resolution:  invalid
 Keywords:           |
---------------------+----------------------
Changes (by kristian):

 * status:  new => closed
 * resolution:   => invalid

Comment:

 This could be a matter of hitting a ulimit (e.g. number of file
 descriptors exceeded, memory usage, etc.). That is a reasonable
 explanation for getting SIGTERMs. It is unlikely that this is an actual
 Varnish bug, however, since there are no known scenarios where Varnish
 itself would generate a SIGTERM.

 If you are using the distro-specific bootup scripts, they should deal
 with ulimit correctly. If you've rolled your own, it's worth looking at
 what the distro-provided scripts do in that regard.

 Regardless, since we do not suspect that this is a bug, I recommend that
 you use the "varnish-misc" mailing list - it also has far more
 subscribers than our bug tracking system.

 https://www.varnish-cache.org/lists/mailman/listinfo

 For now, I'm closing this ticket. If, after soliciting help from
 varnish-misc, you still suspect a bug, feel free to re-open or re-post
 this issue and we'll look closer at it.
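A quick way to sanity-check the ulimit theory from a shell (a sketch; the limits that actually matter depend on your traffic and thread settings):

```shell
# Show the soft limit on open file descriptors for the current shell;
# varnishd needs roughly one fd per client connection plus one per
# backend connection, and more for logging and the CLI.
soft_fds=$(ulimit -Sn)
echo "soft fd limit: $soft_fds"

# For a running varnishd, the kernel-enforced limits can be read from
# /proc (the pgrep lookup is illustrative; adjust for your system):
#   cat /proc/$(pgrep -o varnishd)/limits
```

If the limit is far below your expected concurrent connection count, raising it in the startup script is the first thing to try.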
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:35:59 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:35:59 -0000
Subject: [Varnish] #1192: RHEL6: Init-script not giving correct startup
In-Reply-To: <044.184ff331e3ed28f8eba95a1c3e7eeaf7@varnish-cache.org>
References: <044.184ff331e3ed28f8eba95a1c3e7eeaf7@varnish-cache.org>
Message-ID: <059.1fcaa34993cd230e63388032544c367d@varnish-cache.org>

#1192: RHEL6: Init-script not giving correct startup
----------------------+-----------------------
 Reporter:  Ueland    |       Owner:  tfheen
     Type:  defect    |      Status:  assigned
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  3.0.2
 Severity:  normal    |  Resolution:
 Keywords:            |
----------------------+-----------------------
Comment (by tfheen):

 The "could not open socket" error might be related to SELinux; try
 turning it off, or adjust the policy to allow Varnish to listen on the
 port you're asking it to listen to.

 I agree the init script should be better at detecting that the child is
 running.
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jan  7 11:36:58 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 07 Jan 2013 11:36:58 -0000
Subject: [Varnish] #1239: Problem with cleaning up "gone" bans
In-Reply-To: <042.4f254c324d62a995d9a1391b1db24d2f@varnish-cache.org>
References: <042.4f254c324d62a995d9a1391b1db24d2f@varnish-cache.org>
Message-ID: <057.52eafe3710d00174f1824ae69626e04e@varnish-cache.org>

#1239: Problem with cleaning up "gone" bans
----------------------+---------------------
 Reporter:  xani      |       Owner:  martin
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  3.0.3
 Severity:  normal    |  Resolution:
 Keywords:            |
----------------------+---------------------
Comment (by martin):

 Hi,

 Can you please confirm that you are using Varnish version 3.0.3, as
 there have been some ban-related fixes in the latest release.

 Also, could you send the output of the 'ban.list' Varnish CLI command?
 It can be read with the following varnishadm command; please attach the
 result to this ticket:

 {{{
 $ varnishadm ban.list > banlist
 }}}

 In general, for bans to trickle out you have to be able to use the ban
 lurker efficiently. This involves never using any req.* in your ban
 statements; tuning ban_lurker_sleep down may also increase the number of
 bans evicted.
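As an illustration of the lurker-friendly pattern described above (a sketch in Varnish 3 VCL; the x-url header name is hypothetical, chosen here only to show the technique):

```vcl
sub vcl_fetch {
    /* Copy request data onto the cached object so that bans can match
       on obj.http.* instead of req.*; the ban lurker can evaluate
       obj.* tests in the background, letting bans trickle out. */
    set beresp.http.x-url = req.url;
}

# A ban issued against the stored header is then lurker-friendly:
#   ban("obj.http.x-url ~ ^/wiki/");
```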
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 7 11:42:17 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Jan 2013 11:42:17 -0000 Subject: [Varnish] #1242: Varnish throws 503 without touching the healthy backend In-Reply-To: <042.5703e61e55b85c6b8ba6f67cb5b5c6c6@varnish-cache.org> References: <042.5703e61e55b85c6b8ba6f67cb5b5c6c6@varnish-cache.org> Message-ID: <057.df355f44289311a2be21d40453e7930a@varnish-cache.org> #1242: Varnish throws 503 without touching the healthy backend ----------------------+---------------------- Reporter: xjia | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: critical | Resolution: invalid Keywords: | ----------------------+---------------------- Changes (by martin): * status: new => closed * resolution: => invalid Comment: Hi, I see that the 503s you are getting come from the nginx backend (ObjHeader "Server: nginx" in varnishlog), so this looks likely to be a configuration issue. Please use the varnish-misc at varnish-cache.org mailing list for configuration-related questions. 
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 7 11:52:32 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Jan 2013 11:52:32 -0000 Subject: [Varnish] #1246: Assert error in cnt_hit(), cache_center.c line 1025 In-Reply-To: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org> References: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org> Message-ID: <056.92a03956ca3de4979b84e24a2180a2ce@varnish-cache.org> #1246: Assert error in cnt_hit(), cache_center.c line 1025 ----------------------+--------------------- Reporter: psa | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ----------------------+--------------------- Comment (by martin): Hi, In order to get to the bottom of this, please include some more information about how this happens. Is it on every request, or only occasionally? Please also attach the VCL configuration you have been using, and any varnishlog that may be available (on asserts those might not actually be logged though). On the build side, do I understand it correctly that you are using HEAD of the git master branch? An inconsistent source tree might be the culprit then, as there is no longer a cache_center.c in the master branch. Please describe in detail the build environment. 
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 7 12:09:10 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Jan 2013 12:09:10 -0000 Subject: [Varnish] #1239: Problem with cleaning up "gone" bans In-Reply-To: <042.4f254c324d62a995d9a1391b1db24d2f@varnish-cache.org> References: <042.4f254c324d62a995d9a1391b1db24d2f@varnish-cache.org> Message-ID: <057.1b8afaa2c31e38d80705dac15338a9eb@varnish-cache.org> #1239: Problem with cleaning up "gone" bans ----------------------+--------------------- Reporter: xani | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ----------------------+--------------------- Comment (by xani): Replying to [comment:3 martin]: > Hi, > > Can you please confirm that you are using Varnish version 3.0.3, as there has been some ban related fixes in the latest release. varnishd -V varnishd (varnish-3.0.3 revision 9e6a70f) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2011 Varnish Software AS > Also, could you send the output of the 'ban.list' Varnish CLI command? 
That can read by using the following varnishadm command and attaching it to this ticket: > $ varnishadm ban.list > banlist Generally we use custom hash and then application invalidates all 'single' objects on change via PURGE and uses bans only when there are multiple objects at once: {{{ 1357559121.860319 10 obj.http.x-hash ~ ^www.example.com/myFinishedContestsPart/(323958_[0-9]*|995532_[0-9]*|1100676_[0-9]*|822814_[0-9]*|940584_[0-9]*|822815_[0-9]*|1643792_[0-9]*|1633448_[0-9]*|1391735_[0-9]*|1464036_[0-9]*|782937_[0-9]*|1298164_[0-9]*|50406_[0-9]*|601168_[0-9]*|950033_[0-9]*|1219820_[0-9]*|729463_[0-9]*|256725_[0-9]*|19451_[0-9]*|1582436_[0-9]*|1993872_[0-9]*|144043_[0-9]*|1489478_[0-9]*|1298292_[0-9]*|123232_[0-9]*|2028142_[0-9]*|1424632_[0-9]*|1871383_[0-9]*|41347_[0-9]*|970515_[0-9]*|338983_[0-9]*|944131_[0-9]*|1924520_[0-9]*|2120811_[0-9]*|1849157_[0-9]*|1943777_[0-9]*|851181_[0-9]*|2140555_[0-9]*|18448_[0-9]*|1117478_[0-9]*|896202_[0-9]*|1574681_[0-9]*|1602679_[0-9]*|1637469_[0-9]*|934361_[0-9]*|1747970_[0-9]*|1492240_[0-9]*|1552184_[0-9]*|1706651_[0-9]*|835658_[0-9]*|1903308_[0-9]*|536479_[0-9]*|2134623_[0-9]*|717577_[0-9]*|158809_[0-9]*|642043_[0-9]*|999823_[0-9]*|642305_[0-9]*|85134_[0-9]*|773701_[0-9]*|1003921_[0-9]*|190744_[0-9]*|1183795_[0-9]*|939368_[0-9]*|193287_[0-9]*|468871_[0-9]*|58543_[0-9]*|41294_[0-9]*|1773893_[0-9]*|943076_[0-9]*|696958_[0-9]*|1991626_[0-9]*|703557_[0-9]*|16385_[0-9]*|907590_[0-9]*|1384116_[0-9]*|522346_[0-9]*|893326_[0-9]*|2059269_[0-9]*|732145_[0-9]*|1974262_[0-9]*|419819_[0-9]*|54137_[0-9]*|461531_[0-9]*|1189809_[0-9]*|1131251_[0-9]*|713310_[0-9]*|1258696_[0-9]*|707201_[0-9]*|2141370_[0-9]*|731227_[0-9]*|1448380_[0-9]*|1591377_[0-9]*|1415121_[0-9]*|1365477_[0-9]*|870309_[0-9]*|1520227_[0-9]*|594450_[0-9]*|293565_[0-9]*|55434_[0-9]*|1301944_[0-9]*|1391469_[0-9]*|1637212_[0-9]*|1670274_[0-9]*|1552941_[0-9]*|6734_[0-9]*|1895203_[0-9]*|65122_[0-9]*|1220870_[0-9]*|1586014_[0-9]*|1138652_[0-9]*|1622922_[0-9]*|16985_[0-9]*|1626225
_[0-9]*|1282841_[0-9]*|692748_[0-9]*|834571_[0-9]*|871574_[0-9]*|1640875_[0-9]*|1878207_[0-9]*|1468225_[0-9]*|150075_[0-9]*|49950_[0-9]*|1216392_[0-9]*|1782624_[0-9]*|1871290_[0-9]*|1526197_[0-9]*|871123_[0-9]*|1405228_[0-9]*|1278682_[0-9]*|104212_[0-9]*|2042762_[0-9]*|833512_[0-9]*|1202955_[0-9]*|63278_[0-9]*|1884993_[0-9]*|1803277_[0-9]*|2033938_[0-9]*|1570247_[0-9]*|1910831_[0-9]*|1426844_[0-9]*|1386118_[0-9]*|1921870_[0-9]*|1092281_[0-9]*|1118977_[0-9]*|665477_[0-9]*|1290590_[0-9]*|162392_[0-9]*|827200_[0-9]*|2114_[0-9]*|1315011_[0-9]*|1312807_[0-9]*|1036033_[0-9]*|1812164_[0-9]*|236316_[0-9]*|1717238_[0-9]*|88257_[0-9]*|1714282_[0-9]*|1968966_[0-9]*|68881_[0-9]*|1469760_[0-9]*|1908724_[0-9]*|692105_[0-9]*|1527778_[0-9]*|2053489_[0-9]*|1069223_[0-9]*|1534927_[0-9]*|1370497_[0-9]*|1049822_[0-9]*|1353840_[0-9]*|1816028_[0-9]*|859566_[0-9]*|2085934_[0-9]*|962973_[0-9]*|1996184_[0-9]*|1266104_[0-9]*|199439_[0-9]*|1118601_[0-9]*|1581741_[0-9]*|1279431_[0-9]*|1055396_[0-9]*)$ }}} or single article ID (about 99% of all bans): {{{ 1357559069.003583 1443 obj.http.x-hash ~ ^www.example.com/newsCommentList/(91940.*)$ }}} or {{{ obj.http.x-hash ~ ^www.example.com/finishedContestsAjaxPart/.*$ }}} > > In general, for bans to trickle out you will have to be able to use the ban lurker efficiently. This involves never doing any req.* in your ban statements, and also tuning ban_lurker_sleep down might increase the number of bans evicted. 
We don't do any req.* bans except for the occasional manual one when a developer screws something up; all bans in VCL are based on obj.*. It seems it's related to CPU load. We tried tuning the sleep down to 0.001 and it didn't help; since then we upgraded to more powerful machines (on the old ones (4-core machines) load was around 80-100% in peaks, on the new ones (8 cores) it's about 30%) and (at an average of ~0.5 bans/sec): - with defaults it hovers around 4.6k gone - after tuning the lurker to 0.001 it's around 500 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 7 18:43:44 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Jan 2013 18:43:44 -0000 Subject: [Varnish] #1246: Assert error in cnt_hit(), cache_center.c line 1025 In-Reply-To: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org> References: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org> Message-ID: <056.96cfee2f37473d3d9e20af32aac9f567@varnish-cache.org> #1246: Assert error in cnt_hit(), cache_center.c line 1025 ----------------------+--------------------- Reporter: psa | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ----------------------+--------------------- Comment (by psa): It's only occasionally (every couple of days). {{{ import std; director default round-robin { { .backend = { .between_bytes_timeout = 25ms; .connect_timeout = 0.7s; .first_byte_timeout = 25ms; .host = "mybackend"; .port = "80"; } } } # Clean the requested URL, reject common junk and set up the grace timeout # # Common URL encode substitutions that you might need in this section: # * %23 # # * %26 & # * %2F / # * %3A : # * %3D = # * %3F ? 
# sub vcl_recv { # If it's a check that we're alive, short circuit if ("/ready" == req.url) { error 200 "OK"; } # We don't care about cookies here and don't want them interfering with # caching unset req.http.cookie; # We only support GET or HEAD requests if (req.request != "GET" && req.request != "HEAD") { error 401 "Not a GET request"; } # If a URL has spaces in it, it'll get split on the first space and that # will end up in the protocol. Split it in the right place. if (req.proto ~ "\s+") { # If we just have a blank URL, then we want to strip all leading spaces # so that we end up with 'url=http://...' rather than 'url= http://...' if (req.url ~ "(&|\?)url=$") { set req.url = req.url + regsub(req.proto, "\s*([\w]+.*) HTTP.*", "\1"); } else { # Otherwise, keep all the spaces. The split for protocol will suck one # which we need to keep so put it back in the substitution. set req.url = req.url + regsub(req.proto, "([\s\w]+.*) HTTP.*", " \1"); } # Now fix the protocol up again. set req.proto = regsub(req.proto, ".*HTTP", "HTTP"); } # WARNING: # All the filters to clean out junk need to happen after the URL repair # Block all CDN and obvious ad server traffic if (req.url ~ "((&|\?)url=http(s|)(://|%3A%2F%2F)(ad(s|v|server|)|cdn)[0-9]*\.)|ad.doubleclick.net|banner.php(\?|%3F)|(/|%2F)adFrame\.html|(/|%2F)adiframe|(/|%2F)ads(/|%2F)|(/|%2F)vda(/|%2F)iframe\.html") { error 400 "CDN or Ad Server"; } if (req.url ~ "http(s|)(://|%3A%2F%2)(.*\.|)xxx.com/middle\?position=") { error 400 "Ad Server"; } if (req.url ~ "yyy.fr") { error 400 "yyy is all ads"; } # Dump crap if (req.url ~ "(&|\?)url=(file:|C:|/|\s*$)") { error 400 "No URL or request for file object"; } # Prepend missing http (mostly for cache hit) if (req.url !~ "(&|\?)url=http") { set req.url = regsub(req.url, "(&|\?)url=", "\1url=http://"); } if (req.url ~ "http(s|)(://|%3A%2F%2)(www.|%3A%2F%2F)zzz.com(/|%2F)results") { set req.url = regsub(req.url, "zzz.com/.*", "zzz.com/"); } # Append / if it's a plain domain 
name so that we get better cache rates if (req.url ~ "(&|\?)url=http(s|)://[^/]+$") { set req.url = req.url + "/"; } # Make sure spaces will travel through the system without disrupting anything # further. if (req.url ~ "\s+") { set req.url = regsuball(req.url, "\s", "%20"); } # Remove common ad tags and randomizers if(req.url ~ "(\?|&|%3F|%26)((s|)rnd|subid|adnet_track|gclid|(_|__|)utm_[a-z]+)(=|%3D)") { # It's easier to guard against the trailing '&' by substituting back set req.url = regsuball(req.url, "%26", "&"); set req.url = regsuball(req.url, "((s|)rnd|subid|adnet_track|gclid|(_|__|)utm_[a-z]+)(=|%3D)[^&]+&?", ""); } # Remove fragments from requests as the fragment messes up the cache if (req.url ~ "(#|%23)") { set req.url = regsub(req.url, "(#|%23).*", ""); } # Remove trailing & and ? if (req.url ~ "(\?|&|%3F|%26)$") { set req.url = regsub(req.url, "(\?|&|%26|%3F)$", ""); } set req.grace = 15m; return (lookup); } # Adjust the hashing mechanism to not use the request host header (or # server.ip) and to make http://... the same as https://... so we don't # cache separate pages for secure vs non-secure versions of the same page. sub vcl_hash { hash_data(regsub(req.url, "(&|\?)url=https://", "\1url=http://")); return(hash); } sub vcl_hit { if (obj.ttl < 45s && 5010 == obj.hits) { set obj.ttl = 1d; return (deliver); } # grab a copy of the TTL that can be passed to deliver set req.http.X-Local-TTL = obj.ttl; if (obj.ttl < 45s && (3 == obj.hits || 10 == obj.hits || (obj.hits > 3 && # VCL doesn't have modulo ((obj.hits - (1000 * (obj.hits/1000))) == 0)))) { return (pass); } return (deliver); } sub vcl_fetch { # This is a failing backend. Error immediately so # that we serve a blank rather than serving a blob of HTML. 
if(500 == beresp.status) { error 500 "Error from backend"; } # grab a copy of the TTL that can be passed to deliver set req.http.X-Local-TTL = beresp.ttl; # Apply a grace time in case we're unable to get an answer from the backend set beresp.grace = 15m; return (deliver); } # Add debugging headers and remove headers we don't want to expose sub vcl_deliver { set resp.http.X-Cache-Hits = obj.hits; set resp.http.X-TTL = req.http.X-Local-TTL; if(200 == resp.status && (std.integer(resp.http.Content-Length, 0) > 10)) { if(std.integer(regsub(req.http.X-Local-TTL, "\.[0-9]+", ""),0) <= 30) { set resp.http.X-Local-Type = "A"; } else { set resp.http.X-Local-Type = "B"; } } else { set resp.http.X-Local-Type = "C"; } set resp.http.X-Local-xxx = regsub(req.url, ".*?(&|\?)xxx=([^&\?]+).*", "\2"); set resp.http.X-Local-URL = regsub(req.url, ".*?(&|\?)url=(.*)", "\2"); unset resp.http.Server; unset resp.http.Varnish; unset resp.http.Via; } sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; # return 1 for status checks rather than filling the cache with crap if(200 == obj.status && "/ready" == req.url) { synthetic {"1"}; return(deliver); } # Only tell the client to retry if the error is not permanent if(400 != obj.status) { set obj.http.Retry-After = "5"; } synthetic {""}; return (deliver); } }}} We build by pulling the 3.0.3 Ubuntu source package and then applying a patch for the delta between 3.0.3 and the head of the 3.0 branch. I will attach the last patch we used to build. 
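The vcl_hit logic in the configuration above works around VCL's missing modulo operator with `obj.hits - (1000 * (obj.hits/1000))`, which is valid because integer division truncates. The same identity in shell arithmetic, just to show the trick:

```shell
hits=5010
n=1000
# a % n rewritten as a - n*(a/n); integer division truncates, so both agree
mod=$(( hits - n * (hits / n) ))
echo "$mod"   # prints 10
```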
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 7 18:47:49 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Jan 2013 18:47:49 -0000 Subject: [Varnish] #1246: Assert error in cnt_hit(), cache_center.c line 1025 In-Reply-To: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org> References: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org> Message-ID: <056.86ca7d62b60454734bbf5399b8ded942@varnish-cache.org> #1246: Assert error in cnt_hit(), cache_center.c line 1025 ----------------------+--------------------- Reporter: psa | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ----------------------+--------------------- Comment (by psa): Makefile for our local build: {{{ PACKAGE=varnish VERSION=3.0.3 BUILD_DIR=$(PACKAGE)-$(VERSION) VARNISH_PKG_LIST=/etc/apt/sources.list.d/varnish.list package: check-depends ( cd varnish-$(VERSION) && \ patch -p1 < ../no-close.patch && \ patch -p1 < ../pre-3.0.4.patch && \ fakeroot debian/rules binary ) check-depends: update-changelog dpkg -s libncurses5-dev | grep "Status: install ok installed" \ || sudo apt-get -y install libncurses5-dev dpkg -s libpcre3-dev | grep "Status: install ok installed" \ || sudo apt-get -y install libpcre3-dev dpkg -s python-docutils | grep "Status: install ok installed" \ || sudo apt-get -y install python-docutils dpkg -s libedit-dev | grep "Status: install ok installed" \ || sudo apt-get -y install libedit-dev touch check-depends update-changelog: $(BUILD_DIR) bin/update-changelog $(BUILD_DIR) $(VERSION)-2~local0 touch update-changelog $(BUILD_DIR): $(VARNISH_PKG_LIST) apt-get source varnish $(VARNISH_PKG_LIST): @echo "You need to setup $(VARNISH_PKG_LIST), see Makefile comments" exit 1 clean: -rm -rf varnish* update-changelog check-depends clobber: clean -rm -f *deb }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From 
varnish-bugs at varnish-cache.org Tue Jan 8 11:23:55 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Jan 2013 11:23:55 -0000 Subject: [Varnish] #1244: param.show always shows 64-bit defaults In-Reply-To: <046.67509c4594b5589ea56c3a117406f14c@varnish-cache.org> References: <046.67509c4594b5589ea56c3a117406f14c@varnish-cache.org> Message-ID: <061.9459f5e481d7536a973d19717e031caa@varnish-cache.org> #1244: param.show always shows 64-bit defaults ----------------------+-------------------- Reporter: kristian | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by Poul-Henning Kamp ): In [6afc5a44affb38d844fb76df03a803b28c540f23]: {{{ #!CommitTicketReference repository="" revision="6afc5a44affb38d844fb76df03a803b28c540f23" Make it possible to truly change a parameters default value, rather than hackishly just setting its value to the wanted default. 
Fixes #1244 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 8 11:23:57 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Jan 2013 11:23:57 -0000 Subject: [Varnish] #1244: param.show always shows 64-bit defaults In-Reply-To: <046.67509c4594b5589ea56c3a117406f14c@varnish-cache.org> References: <046.67509c4594b5589ea56c3a117406f14c@varnish-cache.org> Message-ID: <061.bf1514cb108d99ae25750bb01d6cd367@varnish-cache.org> #1244: param.show always shows 64-bit defaults ----------------------+--------------------- Reporter: kristian | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [6afc5a44affb38d844fb76df03a803b28c540f23]) Make it possible to truly change a parameters default value, rather than hackishly just setting its value to the wanted default. Fixes #1244 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 8 12:00:06 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Jan 2013 12:00:06 -0000 Subject: [Varnish] #1243: param.show -l default values listed as ^A for user/group (more?) In-Reply-To: <046.0c92690a77dd03bf9bba56dbf297fa9b@varnish-cache.org> References: <046.0c92690a77dd03bf9bba56dbf297fa9b@varnish-cache.org> Message-ID: <061.ed1a2026cf5634028ecd6a20716070dd@varnish-cache.org> #1243: param.show -l default values listed as ^A for user/group (more?) 
----------------------+-------------------- Reporter: kristian | Owner: phk Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: trunk Severity: minor | Resolution: Keywords: | ----------------------+-------------------- Comment (by Poul-Henning Kamp ): In [dabcce2cd278dc2a710777b90ada2aeff0cbadaa]: {{{ #!CommitTicketReference repository="" revision="dabcce2cd278dc2a710777b90ada2aeff0cbadaa" Use the new param-default-setting ability, to simplify the magic surrounding the privsep user/group setting code. Fixes #1243 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 8 12:00:08 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Jan 2013 12:00:08 -0000 Subject: [Varnish] #1243: param.show -l default values listed as ^A for user/group (more?) In-Reply-To: <046.0c92690a77dd03bf9bba56dbf297fa9b@varnish-cache.org> References: <046.0c92690a77dd03bf9bba56dbf297fa9b@varnish-cache.org> Message-ID: <061.c03b46ddbc759f3f47564285e2bd8019@varnish-cache.org> #1243: param.show -l default values listed as ^A for user/group (more?) ----------------------+--------------------- Reporter: kristian | Owner: phk Type: defect | Status: closed Priority: low | Milestone: Component: varnishd | Version: trunk Severity: minor | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [dabcce2cd278dc2a710777b90ada2aeff0cbadaa]) Use the new param-default-setting ability, to simplify the magic surrounding the privsep user/group setting code. Fixes #1243 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 8 13:36:53 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Jan 2013 13:36:53 -0000 Subject: [Varnish] #1247: yum installation error "[Errno -1] Header is not complete." 
In-Reply-To: <044.e3db3b22021d9bf26d4bc79f8b887423@varnish-cache.org> References: <044.e3db3b22021d9bf26d4bc79f8b887423@varnish-cache.org> Message-ID: <059.63cd06be035780e90141404b2708c77e@varnish-cache.org> #1247: yum installation error "[Errno -1] Header is not complete." --------------------+--------------------- Reporter: Damien | Owner: tfheen Type: defect | Status: new Priority: high | Milestone: Component: build | Version: trunk Severity: major | Resolution: Keywords: | --------------------+--------------------- Comment (by Saud): Faced the same problem .. please fix it! -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 9 08:54:00 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Jan 2013 08:54:00 -0000 Subject: [Varnish] #1239: Problem with cleaning up "gone" bans In-Reply-To: <042.4f254c324d62a995d9a1391b1db24d2f@varnish-cache.org> References: <042.4f254c324d62a995d9a1391b1db24d2f@varnish-cache.org> Message-ID: <057.ee9dbb09c1bd7e478246d52dc4323b78@varnish-cache.org> #1239: Problem with cleaning up "gone" bans ----------------------+--------------------- Reporter: xani | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ----------------------+--------------------- Comment (by xani): It seems this behaviour is triggered by having any req.*-based ban on the ban list (even if it's just one and all the others are obj.* bans). When I manually added one, bans stopped being removed, and as soon as that ban was removed by TTL the ban list got cleared up. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 9 09:15:10 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Jan 2013 09:15:10 -0000 Subject: [Varnish] #1239: Problem with cleaning up "gone" bans In-Reply-To: <042.4f254c324d62a995d9a1391b1db24d2f@varnish-cache.org> References: 
<042.4f254c324d62a995d9a1391b1db24d2f@varnish-cache.org> Message-ID: <057.50b8cba881699e7792c07eb8bb2336d9@varnish-cache.org> #1239: Problem with cleaning up "gone" bans ----------------------+---------------------- Reporter: xani | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.3 Severity: normal | Resolution: invalid Keywords: | ----------------------+---------------------- Changes (by martin): * status: new => closed * resolution: => invalid Comment: Hi, The ban list length starting to accumulate when you have req.* parts in your ban expressions is very much the expected behavior. Due to the way the ban list is implemented, bans can only ever be removed off the tail of the list. The ban lurker's responsibility is to work on the tail of the list in order to free the bans there. But the moment a req.* ban is at the tail, the ban lurker can't do anything with it (the ban lurker is obviously not running in the context of a request, and thus can't match anything on req.*). So the ban list length will then grow indefinitely until that ban clears some other way (the objects it is linked to are actually requested, or their TTL elapses). The solution is to not use ban expressions with req.* in them. I'll close this ticket as invalid. 
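Given the tail-blocking behaviour described above, one way to spot lurker-blocking entries is to scan the ban list for req.* expressions. A sketch run against a captured sample (in practice the input would come from `varnishadm ban.list`; the entries here are examples):

```shell
# Two sample ban.list entries: timestamp, refcount, ban expression
banlist='1357559069.003583 1443 obj.http.x-hash ~ ^www.example.com/newsCommentList/(91940.*)$
1357559100.000000 2 req.url ~ ^/finishedContestsAjaxPart/'

# Count bans the lurker cannot evaluate (any expression using req.*)
printf '%s\n' "$banlist" | grep -c 'req\.'   # prints 1
```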
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 9 18:40:59 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Jan 2013 18:40:59 -0000 Subject: [Varnish] #932: varnishreplay.c: compile fails on Solaris for 64 bit with -Werror In-Reply-To: <043.4807473cf260a17ce2985bedfeaac1c6@varnish-cache.org> References: <043.4807473cf260a17ce2985bedfeaac1c6@varnish-cache.org> Message-ID: <058.618af28e2d62b3e49cfde9328435ef56@varnish-cache.org> #932: varnishreplay.c: compile fails on Solaris for 64 bit with -Werror --------------------------+------------------------------ Reporter: geoff | Owner: slink Type: defect | Status: assigned Priority: normal | Milestone: Varnish 3.0 dev Component: port:solaris | Version: trunk Severity: normal | Resolution: Keywords: | --------------------------+------------------------------ Changes (by slink): * status: new => assigned Comment: patch is here: https://www.varnish-cache.org/patchwork/patch/86/ -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 10 09:32:59 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Jan 2013 09:32:59 -0000 Subject: [Varnish] #932: varnishreplay.c: compile fails on Solaris for 64 bit with -Werror In-Reply-To: <043.4807473cf260a17ce2985bedfeaac1c6@varnish-cache.org> References: <043.4807473cf260a17ce2985bedfeaac1c6@varnish-cache.org> Message-ID: <058.9db317c682dcb9cadee809bedab18103@varnish-cache.org> #932: varnishreplay.c: compile fails on Solaris for 64 bit with -Werror --------------------------+------------------------------ Reporter: geoff | Owner: slink Type: defect | Status: assigned Priority: normal | Milestone: Varnish 3.0 dev Component: port:solaris | Version: trunk Severity: normal | Resolution: Keywords: | --------------------------+------------------------------ Comment (by Poul-Henning Kamp ): In [703e7e67e93a7c46f43ca37ff405aacba26990f8]: 
{{{ #!CommitTicketReference repository="" revision="703e7e67e93a7c46f43ca37ff405aacba26990f8" Cast thread_t's all over the place in order to be able to printf them. A more kosher solution might be pthread_getthreadid_np() on the platforms which support it, but I can't be bothered for three debugging printfs :-) Submitted by: Nils Goroll Fixes #932 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 10 09:33:01 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Jan 2013 09:33:01 -0000 Subject: [Varnish] #932: varnishreplay.c: compile fails on Solaris for 64 bit with -Werror In-Reply-To: <043.4807473cf260a17ce2985bedfeaac1c6@varnish-cache.org> References: <043.4807473cf260a17ce2985bedfeaac1c6@varnish-cache.org> Message-ID: <058.c664acc298f68bc25050b387894bc693@varnish-cache.org> #932: varnishreplay.c: compile fails on Solaris for 64 bit with -Werror --------------------------+------------------------------ Reporter: geoff | Owner: slink Type: defect | Status: closed Priority: normal | Milestone: Varnish 3.0 dev Component: port:solaris | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------------+------------------------------ Changes (by Poul-Henning Kamp ): * status: assigned => closed * resolution: => fixed Comment: (In [703e7e67e93a7c46f43ca37ff405aacba26990f8]) Cast thread_t's all over the place in order to be able to printf them. 
A more kosher solution might be pthread_getthreadid_np() on the platforms which support it, but I can't be bothered for three debugging printfs :-) Submitted by: Nils Goroll Fixes #932 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 10 10:35:06 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Jan 2013 10:35:06 -0000 Subject: [Varnish] #1245: param.show mixes false/off for booleans In-Reply-To: <046.ffe9669057b943d1194c0eac38fe285d@varnish-cache.org> References: <046.ffe9669057b943d1194c0eac38fe285d@varnish-cache.org> Message-ID: <061.67c471cd488a5d1d42e42d026a39de2f@varnish-cache.org> #1245: param.show mixes false/off for booleans ----------------------+-------------------- Reporter: kristian | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by Poul-Henning Kamp ): In [8fa24e2043d62401ebb8bc22508b6b72613a32c9]: {{{ #!CommitTicketReference repository="" revision="8fa24e2043d62401ebb8bc22508b6b72613a32c9" Match output for boolean params to the default value so we can choose to present on/off or true/false, depending on what makes most sense. 
Fixes #1245 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 10 10:35:08 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Jan 2013 10:35:08 -0000 Subject: [Varnish] #1245: param.show mixes false/off for booleans In-Reply-To: <046.ffe9669057b943d1194c0eac38fe285d@varnish-cache.org> References: <046.ffe9669057b943d1194c0eac38fe285d@varnish-cache.org> Message-ID: <061.3eb8fcc8201dd9e7c0d2221adf0a8f57@varnish-cache.org> #1245: param.show mixes false/off for booleans ----------------------+--------------------- Reporter: kristian | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [8fa24e2043d62401ebb8bc22508b6b72613a32c9]) Match output for boolean params to the default value so we can choose to present on/off or true/false, depending on what makes most sense. 
Fixes #1245 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 10 16:52:53 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Jan 2013 16:52:53 -0000 Subject: [Varnish] #1248: varnishd crash - Panic message: Assert error in VGZ_Ibuf(), cache_gzip.c Message-ID: <048.b306df775cd9737d62129e8861634e4e@varnish-cache.org> #1248: varnishd crash - Panic message: Assert error in VGZ_Ibuf(), cache_gzip.c -------------------------------------------------+------------------------- Reporter: msallen333 | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: varnishd crash Assert error in | VGZ_Ibuf(), cache_gzip.c | -------------------------------------------------+------------------------- *** PROBLEM DESCRIPTION *** Varnish varnish-3.0.2 crashed with Panic message: Assert error in VGZ_Ibuf(), cache_gzip.c line 222:#012 I have already disabled "Transparent Hugepages". Has anyone else experienced this same problem, and possibly have a solution? ============================================= # /var/log/messages Jan 10 08:47:40 lx11 varnishd[7568]: Child (9172) not responding to CLI, killing it. Jan 10 08:47:44 lx11 abrt[2049]: saved core dump of pid 9172 (/usr/sbin/varnishd) to /var/spool/abrt/ccpp-2013-01-10-08:47:28-9172.new/coredump (2436448256 bytes) Jan 10 08:47:44 lx11 abrtd: Directory 'ccpp-2013-01-10-08:47:28-9172' creation detected Jan 10 08:47:46 lx11 abrtd: Package 'varnish' isn't signed with proper key Jan 10 08:47:46 lx11 abrtd: Corrupted or bad dump /var/spool/abrt/ccpp-2013-01-10-08:47:28-9172 (res:2), deleting Jan 10 08:47:50 lx11 varnishd[7568]: Child (9172) not responding to CLI, killing it. Jan 10 08:48:00 lx11 varnishd[7568]: Child (9172) not responding to CLI, killing it. Jan 10 08:48:03 lx11 varnishd[7568]: Child (9172) not responding to CLI, killing it. 
Jan 10 08:48:03 lx11 varnishd[7568]: Child (9172) not responding to CLI, killing it. Jan 10 08:48:03 lx11 varnishd[7568]: Child (9172) died signal=6 (core dumped) Jan 10 08:48:03 lx11 varnishd[7568]: Child (9172) Panic message: Assert error in VGZ_Ibuf(), cache_gzip.c line 222:#012 Condition((vg->vz.avail_in) == 0) not true.#012errno = 12 (Cannot allocate memory)#012thread = (cache-worker)#012ident = Linux,2.6.32-220.4.2.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit,epoll#012Backtrace:#012 0x42c7a6: /usr/sbin/varnishd() [0x42c7a6]#012 0x4227da: /usr/sbin/varnishd(VGZ_Ibuf+0x7a) [0x4227da]#012 0x422e19: /usr/sbin/varnishd() [0x422e19]#012 0x4215fd: /usr/sbin/varnishd(FetchBody+0x3fd) [0x4215fd]#012 0x4153e8: /usr/sbin/varnishd() [0x4153e8]#012 0x417ab6: /usr/sbin/varnishd(CNT_Session+0x9f6) [0x417ab6]#012 0x42efb8: /usr/sbin/varnishd() [0x42efb8]#012 0x42e19b: /usr/sbin/varnishd() [0x42e19b]#012 0x3deaa077f1: /lib64/libpthread.so.0() [0x3deaa077f1]#012 0x3dea2e592d: /lib64/libc.so.6(clone+0x6d) [0x3dea2e592d]#012sp = 0x7e2f15902008 {#012 fd = 29, id = 29, xid = 1294036898,#012 client = 50.51.7.12 1716,#012 step = STP_FETCHBODY,#012 handling = deliver,#012 err_code = 200, err_reason = (null),#012 restarts = 0, esi_level = 0#012 flags = do_gzip is_gunzip#012 bodystatus = 4#012 ws = 0x7e2f15902080 { #012 id = "sess",#012 {s,f,r,e} = {0x7e2f15902c90,+1488,(nil),+65536},#012 },#012 http[req] = {#012 ws = 0x7e2f15902080[sess]#012 "GET",#012 "/lccn/sn83035143/1863-01-06/ed-1/seq-1/coordinates/;words=Oberlin",#012 "HTTP/1.1",#012 "x-requested-with: XMLHttpRequest",#012 "Accept- Language: en-us",#012 "Referer: http://chroniclingamerica.loc.gov/search/pages/results/?date1=1862&rows=20&searchType=basic&state=Ohio&date2=1863&proxtext=Oberlin&y=19&x=17&dateFilterType=yearRange&page=2&sort=relevance",#012 "Accept: application/json, text/javascript, */*; q=0.01",#012 "User- Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; AlexaToolbar/amzni-3.0; BTRS106374; 
.NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; AlexaToolbar/amzni-3.0)",#012 "Host: chroniclingamerica.loc.gov", Jan 10 08:48:03 lx11 varnishd[7568]: child (2072) Started Jan 10 08:48:03 lx11 varnishd[7568]: Child (2072) said Child starts Jan 10 08:48:03 lx11 varnishd[7568]: Child (2072) said SMF.s0 mmap'ed 1589334294528 bytes of 1589334294528 # df -h | grep varnish /dev/mapper/emcvg1-varnish 2.0T 1.5T 432G 78% /varnish # ps -ef | grep arnish varnish 2072 7568 13 08:48 ? 00:24:15 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 1,1000,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/varnish/varnish_storage.bin,98% root 4130 1 0 2012 ? 00:02:33 /usr/bin/varnishlog -a -w /var/log/varnish/varnish.log -D -P /var/run/varnishlog.pid root 4137 1 0 2012 ? 01:33:48 /usr/bin/varnishncsa -a -w /var/log/varnish/varnishncsa.log -D -P /var/run/varnishncsa.pid root 7568 1 0 2012 ? 00:00:49 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 1,1000,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/varnish/varnish_storage.bin,98% # cat /sys/kernel/mm/redhat_transparent_hugepage/enabled always [never] Which varnish version ? # /usr/sbin/varnishd -V varnishd (varnish-3.0.2 revision 55e70a4) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2011 Varnish Software AS # rpm -qa | grep varnish varnish-libs-3.0.2-1.el5.x86_64 varnish-3.0.2-1.el5.x86_64 varnish-release-3.0-1.noarch Which type of CPU ? 
# more /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 29 model name : Intel(R) Xeon(R) CPU E7440 @ 2.40GHz stepping : 1 cpu MHz : 2400.080 cache size : 16384 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 4 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm constant_tsc arch_perfmon pebs bts rep_good xtopology aperfmperf pni dtes64 monitor ds_cpl vm x est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm dts tpr_shadow vnmi flexpriority bogomips : 4800.16 clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: 32 or 64 bit mode ? 64bit how much RAM ? 128GB # more meminfo MemTotal: 132153720 kB MemFree: 662828 kB Buffers: 334308 kB Cached: 124553164 kB SwapCached: 28 kB Active: 15416940 kB Inactive: 114045600 kB Active(anon): 4406140 kB Inactive(anon): 174144 kB Active(file): 11010800 kB Inactive(file): 113871456 kB Unevictable: 5244 kB Mlocked: 5244 kB SwapTotal: 16777208 kB SwapFree: 16777080 kB Dirty: 33320 kB Writeback: 0 kB AnonPages: 4580508 kB Mapped: 68785120 kB Shmem: 1784 kB Slab: 548132 kB SReclaimable: 429880 kB SUnreclaim: 118252 kB KernelStack: 5504 kB PageTables: 158976 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 82854068 kB Committed_AS: 45740984 kB VmallocTotal: 34359738367 kB VmallocUsed: 493684 kB VmallocChunk: 34359215588 kB HardwareCorrupted: 0 kB AnonHugePages: 1718272 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 9456 kB DirectMap2M: 134205440 kB Which OS/kernel version ? 
# cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.2 (Santiago) # cat /proc/version Linux version 2.6.32-220.4.2.el6.x86_64 (mockbuild at x86-003.build.bos.redhat.com) (gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) ) #1 SMP Mon Feb 6 16:39:28 EST 2012 default VCL or do you have your own ? # cat /etc/varnish/default.vcl # This is a basic VCL configuration file for varnish. See the vcl(7) # man page for details on VCL syntax and semantics. # # Default backend definition. Set this to point to your content # server. # backend default { .host = "x.y.z"; .port = "8080"; .connect_timeout = 15s; .first_byte_timeout = 120s; .between_bytes_timeout = 120s; } # # Below is a commented-out copy of the default VCL logic. If you # redefine any of these subroutines, the built-in logic will be # appended to your code. # sub vcl_recv { # if (req.restarts == 0) { # if (req.http.x-forwarded-for) { # set req.http.X-Forwarded-For = # req.http.X-Forwarded-For + ", " + client.ip; # } else { # set req.http.X-Forwarded-For = client.ip; # } # } # if (req.request != "GET" && # req.request != "HEAD" && # req.request != "PUT" && # req.request != "POST" && # req.request != "TRACE" && # req.request != "OPTIONS" && # req.request != "DELETE") { # /* Non-RFC2616 or CONNECT which is weird. 
*/ # return (pipe); # } # if (req.request != "GET" && req.request != "HEAD") { # /* We only deal with GET and HEAD by default */ # return (pass); # } # if (req.http.Authorization || req.http.Cookie) { # /* Not cacheable by default */ # return (pass); # } # return (lookup); # } # sub vcl_fetch { set beresp.grace = 1h; if (beresp.http.content-type ~ "(text|application)") { set beresp.do_gzip = true; } } sub vcl_recv { # unset cookies since we don't want to bypass caching normally if (req.http.cookie) { unset req.http.cookie; } set req.grace = 1h; } sub vcl_deliver { if (!resp.http.Vary) { set resp.http.Vary = "Accept-Encoding"; } else if (resp.http.Vary !~ "(?i)Accept-Encoding") { set resp.http.Vary = resp.http.Vary + ",Accept-Encoding"; } } # sub vcl_pipe { # # Note that only the first request to the backend will have # # X-Forwarded-For set. If you use X-Forwarded-For and want to # # have it set for all requests, make sure to have: # # set bereq.http.connection = "close"; # # here. It is not set by default as it might break some broken web # # applications, like IIS with NTLM authentication. # return (pipe); # } # # sub vcl_pass { # return (pass); # } # # sub vcl_hash { # hash_data(req.url); # if (req.http.host) { # hash_data(req.http.host); # } else { # hash_data(server.ip); # } # return (hash); # } # # sub vcl_hit { # return (deliver); # } # # sub vcl_miss { # return (fetch); # } # # sub vcl_fetch { # if (beresp.ttl <= 0s || # beresp.http.Set-Cookie || # beresp.http.Vary == "*") { # /* # * Mark as "Hit-For-Pass" for the next 2 minutes # */ # set beresp.ttl = 120 s; # return (hit_for_pass); # } # return (deliver); # } # # sub vcl_deliver { # return (deliver); # } # sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; set obj.http.Retry-After = "5"; synthetic {" The page is temporarily unavailable

Chronicling America is currently unavailable

The Chronicling America website is currently offline, undergoing maintenance. We regret the inconvenience, and invite you to visit other collections available on the Library of Congress website at www.loc.gov while we are working to restore service.

"}; return (deliver); } # # sub vcl_init { # return (ok); # } # # sub vcl_fini { # return (ok); # } -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jan 11 04:37:43 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 11 Jan 2013 04:37:43 -0000 Subject: [Varnish] #1249: Undeterministic results in regsub/regsuball Message-ID: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> #1249: Undeterministic results in regsub/regsuball ---------------------+-------------------- Reporter: arthens | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.3 | Severity: normal Keywords: | ---------------------+-------------------- First let me say that I'm not a varnish expert nor a C programmer, so bare with me if something I say is not 100% correct :) Plus given the nature of the bug I'm seeing it might be hard to produce a reliable test case. The problem: - some of our users get randomly asked to sign in even if they are already signed in - it seems to happen only on some specific pages, and mostly to staff users - I managed to verify that the problem happens in our Varnish cache, because it never happens when hitting the PHP stack directly - we have ~12 regexp rules to remove tracking cookie, but we couldn't find anything wrong with any of them (they are pretty simple regexp, I'll attach them to the ticket) Investigation: - I added some logging after each regsuball to verify which one was causing the problem - the logs suggested that there wasn't a single broken regexp, because sometimes it would drop all/most cookies on one regexp, sometimes on another - we still couldn't see anything wrong with the regexp (we tested it with other libraries, and we were never able to reproduce the problem) Second investigation (here it's where it gets interesting/weird) - I added another regsuball rule applying one of the regexp to a fixed string (hardcoded in my .vcl) and noticed that the result 
changed based on where this instruction was placed. In other words, {{{ regsuball("hardcoded string here", regexp, replacement) }}} would produce a different result based on where it was executed (!!!) To be more specific, if this instruction was the first regsuball in vcl_recv then the result would be correct, but if I moved it AFTER the 12 regsuball calls filtering our cookies then it would produce a wrong result (not always; I suppose this bug is only triggered by a particular setup/cookies). Some other observations: - the problem doesn't actually seem to be related to cookies; that's just how we noticed it. The attached example simply uses 2 other random headers - it seems like regsuball/regsub (or the underlying library) can get into a "broken state" that will cause subsequent calls to produce wrong results (but only in the same request; we never saw the bug persist across requests) - I only managed to reproduce the bug using long-ish strings - it doesn't seem to be related to the content of the string (aka not a special-characters problem) Environment: - Ubuntu 12.04 - Varnish 3.0.3 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jan 11 04:44:31 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 11 Jan 2013 04:44:31 -0000 Subject: [Varnish] #1249: Undeterministic results in regsub/regsuball In-Reply-To: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> References: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> Message-ID: <060.b937a8b8c90ef8a03aeb118414426b90@varnish-cache.org> #1249: Undeterministic results in regsub/regsuball ---------------------+-------------------- Reporter: arthens | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ---------------------+-------------------- Comment (by arthens): I have attached default.vcl to illustrate the problem.
What I am seeing is that the following string (which has a cookie format) {{{ this-will-disappear=true; and-so-will-this=true; yet-another-cookie- disappearing=0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000; wcsid=44444444444444444444444444444444; hblid=55555555555555555555555555555555; this-will-not-disappear=true }}} becomes {{{ hblid=55555555555555555555555555555555; this-will-not-disappear=true }}} After applying the regexp {{{ regsuball(">>> put the string here <<<", "(^|; ) *wcsid=[^;]+;? *", "\1"); }}} Note: at least in one case I've seen the bug temporarily disappear after a varnish restart. 
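As a sanity check on the rule itself, its intended effect can be sketched outside Varnish. The following is an illustration in Python's re module (not VCL's PCRE binding, and with a much shorter test string), so it shows only the expected result of the substitution, not the reported bug:

```python
import re

# The cookie-stripping rule from the ticket, used as a Python regex.
# \1 re-emits the separator captured before the matched cookie.
pattern = r"(^|; ) *wcsid=[^;]+;? *"

cookie = "var1=value1; wcsid=wcsid-value; var2=value2"

# Correct behaviour removes only the wcsid cookie; everything before
# and after the match is preserved.
result = re.sub(pattern, r"\1", cookie)
print(result)  # var1=value1; var2=value2
```

The bug being reported is that Varnish sometimes also drops the characters preceding the match, which a stand-alone regex engine given the same pattern and input does not do.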
Take this into account if you can't reproduce the problem with the provided file :/ -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jan 11 04:51:47 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 11 Jan 2013 04:51:47 -0000 Subject: [Varnish] #1249: Undeterministic results in regsub/regsuball In-Reply-To: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> References: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> Message-ID: <060.c4658cf6e02bb590d08355a35be215ab@varnish-cache.org> #1249: Undeterministic results in regsub/regsuball ---------------------+-------------------- Reporter: arthens | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ---------------------+-------------------- Comment (by arthens): One other detail is that regsub/regsuball only lose characters appearing BEFORE the matched substring. If you have the regexp: {{{ (^|; ) *wcsid=[^;]+;? 
* }}} and the string {{{ var1=value1; wcsid=wcsid-value; var2=value2 }}} There are 2 possible results: {{{ var1=value1; var2=value2 (when it works correctly) }}} and {{{ var2=value2 (when it's bugged) }}} I have NEVER seen it lose: - the characters following the match - only a subset of the characters preceding the match -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Jan 12 09:40:05 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 12 Jan 2013 09:40:05 -0000 Subject: [Varnish] #1086: VGZ_WrwGunzip loops forever if receiving junk data after end of gzip data In-Reply-To: <044.dedba8f13c20d02f02a39731a1edea7c@varnish-cache.org> References: <044.dedba8f13c20d02f02a39731a1edea7c@varnish-cache.org> Message-ID: <059.dbde424239221b024eeea97b1339d30b@varnish-cache.org> #1086: VGZ_WrwGunzip loops forever if receiving junk data after end of gzip data ----------------------+--------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Comment (by LordCope): I'm seeing behaviour very much in keeping with this issue. Running on 3.0.3 using the Varnish package for Ubuntu, if I run the test case attached by "martin", it fails. See attached log of varnishtest. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Jan 12 10:00:56 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 12 Jan 2013 10:00:56 -0000 Subject: [Varnish] #1086: VGZ_WrwGunzip loops forever if receiving junk data after end of gzip data In-Reply-To: <044.dedba8f13c20d02f02a39731a1edea7c@varnish-cache.org> References: <044.dedba8f13c20d02f02a39731a1edea7c@varnish-cache.org> Message-ID: <059.59b8d5cefe5cabc53c863432f9e6fc6b@varnish-cache.org> #1086: VGZ_WrwGunzip loops forever if receiving junk data after end of gzip data ----------------------+----------------------- Reporter: martin | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Changes (by LordCope): * status: closed => reopened * resolution: fixed => -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Jan 12 10:03:00 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 12 Jan 2013 10:03:00 -0000 Subject: [Varnish] #1086: VGZ_WrwGunzip loops forever if receiving junk data after end of gzip data In-Reply-To: <044.dedba8f13c20d02f02a39731a1edea7c@varnish-cache.org> References: <044.dedba8f13c20d02f02a39731a1edea7c@varnish-cache.org> Message-ID: <059.c3df19ab448dc2b396cf705d24df38ab@varnish-cache.org> #1086: VGZ_WrwGunzip loops forever if receiving junk data after end of gzip data ----------------------+----------------------- Reporter: martin | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Comment (by LordCope): Also, although it claims to be fixed, I don't see any relevant commit logs, or anything in either the changelog or the current src which references this issue.
Perhaps when checked by 'phk' the reason for the test passing was incidental - i.e. something could have subsequently changed which makes this issue happen again? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Jan 12 14:39:50 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 12 Jan 2013 14:39:50 -0000 Subject: [Varnish] #1086: VGZ_WrwGunzip loops forever if receiving junk data after end of gzip data In-Reply-To: <044.dedba8f13c20d02f02a39731a1edea7c@varnish-cache.org> References: <044.dedba8f13c20d02f02a39731a1edea7c@varnish-cache.org> Message-ID: <059.115b0ebe7dd43e955dba9ad13c79ebab@varnish-cache.org> #1086: VGZ_WrwGunzip loops forever if receiving junk data after end of gzip data ----------------------+----------------------- Reporter: martin | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Comment (by LordCope): Built the current git master of 3.0. Changed debug parameter to debug=+syncvsl, and ran the test. Failure attached.
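The failure mode described in the ticket title is easier to reason about with a small stand-alone sketch. This Python snippet (an illustration using zlib, not Varnish code) shows the signal a gunzip loop can use to terminate when junk bytes follow the end of a gzip stream:

```python
import gzip
import zlib

# A well-formed gzip stream with junk appended, the input the ticket
# says makes VGZ_WrwGunzip loop forever.
payload = gzip.compress(b"hello varnish") + b"JUNKJUNK"

# wbits=31 tells zlib to expect the gzip wrapper.
d = zlib.decompressobj(wbits=31)
out = d.decompress(payload)

# Once the decompressor reports end-of-stream, any leftover input is
# trailing junk; a well-behaved loop must stop feeding it back in
# instead of waiting for more output that will never come.
assert d.eof
print(out)            # b'hello varnish'
print(d.unused_data)  # b'JUNKJUNK'
```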
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 14 11:42:06 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Jan 2013 11:42:06 -0000 Subject: [Varnish] #1086: VGZ_WrwGunzip loops forever if receiving junk data after end of gzip data In-Reply-To: <044.dedba8f13c20d02f02a39731a1edea7c@varnish-cache.org> References: <044.dedba8f13c20d02f02a39731a1edea7c@varnish-cache.org> Message-ID: <059.b7ea57a9de7df100a15aff11ebd8b30f@varnish-cache.org> #1086: VGZ_WrwGunzip loops forever if receiving junk data after end of gzip data ----------------------+--------------------- Reporter: martin | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ----------------------+--------------------- Changes (by martin): * status: reopened => new * owner: => martin Comment: Not applicable to trunk. I will have a look to see if this is a real problem on 3.0. Bear in mind that the test case attached here might not be very indicative, as testing on close conditions and streaming is not trivial. 
Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 14 11:45:35 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Jan 2013 11:45:35 -0000 Subject: [Varnish] #1248: varnishd crash - Panic message: Assert error in VGZ_Ibuf(), cache_gzip.c In-Reply-To: <048.b306df775cd9737d62129e8861634e4e@varnish-cache.org> References: <048.b306df775cd9737d62129e8861634e4e@varnish-cache.org> Message-ID: <063.243edec09ee212727cbe9b5ce61879e9@varnish-cache.org> #1248: varnishd crash - Panic message: Assert error in VGZ_Ibuf(), cache_gzip.c -------------------------------------------------+------------------------- Reporter: msallen333 | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: varnishd crash Assert error in | VGZ_Ibuf(), cache_gzip.c | -------------------------------------------------+------------------------- Description changed by phk: Old description: > *** PROBLEM DESCRIPTION *** > > Varnish varnish-3.0.2 crashed with Panic message: Assert error in > VGZ_Ibuf(), cache_gzip.c line 222:#012 > > I have already disabled "Transparent Hugepages". > > Has anyone else experienced this same problem, and possibly have a > solution? > > > ============================================= > > > # /var/log/messages > Jan 10 08:47:40 lx11 varnishd[7568]: Child (9172) not responding to CLI, > killing it. 
> Jan 10 08:47:44 lx11 abrt[2049]: saved core dump of pid 9172 > (/usr/sbin/varnishd) to > /var/spool/abrt/ccpp-2013-01-10-08:47:28-9172.new/coredump (2436448256 > bytes) > Jan 10 08:47:44 lx11 abrtd: Directory 'ccpp-2013-01-10-08:47:28-9172' > creation detected > Jan 10 08:47:46 lx11 abrtd: Package 'varnish' isn't signed with proper > key > Jan 10 08:47:46 lx11 abrtd: Corrupted or bad dump > /var/spool/abrt/ccpp-2013-01-10-08:47:28-9172 (res:2), deleting > Jan 10 08:47:50 lx11 varnishd[7568]: Child (9172) not responding to CLI, > killing it. > Jan 10 08:48:00 lx11 varnishd[7568]: Child (9172) not responding to CLI, > killing it. > Jan 10 08:48:03 lx11 varnishd[7568]: Child (9172) not responding to CLI, > killing it. > Jan 10 08:48:03 lx11 varnishd[7568]: Child (9172) not responding to CLI, > killing it. > Jan 10 08:48:03 lx11 varnishd[7568]: Child (9172) died signal=6 (core > dumped) > Jan 10 08:48:03 lx11 varnishd[7568]: Child (9172) Panic message: Assert > error in VGZ_Ibuf(), cache_gzip.c line 222:#012 > Condition((vg->vz.avail_in) == 0) not true.#012errno = 12 (Cannot > allocate memory)#012thread = (cache-worker)#012ident = > Linux,2.6.32-220.4.2.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit,epoll#012Backtrace:#012 > 0x42c7a6: /usr/sbin/varnishd() [0x42c7a6]#012 0x4227da: > /usr/sbin/varnishd(VGZ_Ibuf+0x7a) [0x4227da]#012 0x422e19: > /usr/sbin/varnishd() [0x422e19]#012 0x4215fd: > /usr/sbin/varnishd(FetchBody+0x3fd) [0x4215fd]#012 0x4153e8: > /usr/sbin/varnishd() [0x4153e8]#012 0x417ab6: > /usr/sbin/varnishd(CNT_Session+0x9f6) [0x417ab6]#012 0x42efb8: > /usr/sbin/varnishd() [0x42efb8]#012 0x42e19b: /usr/sbin/varnishd() > [0x42e19b]#012 0x3deaa077f1: /lib64/libpthread.so.0() [0x3deaa077f1]#012 > 0x3dea2e592d: /lib64/libc.so.6(clone+0x6d) [0x3dea2e592d]#012sp = > 0x7e2f15902008 {#012 fd = 29, id = 29, xid = 1294036898,#012 client = > 50.51.7.12 1716,#012 step = STP_FETCHBODY,#012 handling = deliver,#012 > err_code = 200, err_reason = (null),#012 restarts = 0, 
esi_level = 0#012 > flags = do_gzip is_gunzip#012 bodystatus = 4#012 ws = 0x7e2f15902080 { > #012 id = "sess",#012 {s,f,r,e} = > {0x7e2f15902c90,+1488,(nil),+65536},#012 },#012 http[req] = {#012 ws > = 0x7e2f15902080[sess]#012 "GET",#012 > "/lccn/sn83035143/1863-01-06/ed-1/seq-1/coordinates/;words=Oberlin",#012 > "HTTP/1.1",#012 "x-requested-with: XMLHttpRequest",#012 > "Accept-Language: en-us",#012 "Referer: > http://chroniclingamerica.loc.gov/search/pages/results/?date1=1862&rows=20&searchType=basic&state=Ohio&date2=1863&proxtext=Oberlin&y=19&x=17&dateFilterType=yearRange&page=2&sort=relevance",#012 > "Accept: application/json, text/javascript, */*; q=0.01",#012 "User- > Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; > AlexaToolbar/amzni-3.0; BTRS106374; .NET CLR 2.0.50727; .NET CLR > 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; > AlexaToolbar/amzni-3.0)",#012 "Host: chroniclingamerica.loc.gov", > Jan 10 08:48:03 lx11 varnishd[7568]: child (2072) Started > Jan 10 08:48:03 lx11 varnishd[7568]: Child (2072) said Child starts > Jan 10 08:48:03 lx11 varnishd[7568]: Child (2072) said SMF.s0 mmap'ed > 1589334294528 bytes of 1589334294528 > > # df -h | grep varnish > /dev/mapper/emcvg1-varnish 2.0T 1.5T 432G 78% /varnish > > # ps -ef | grep arnish > varnish 2072 7568 13 08:48 ? 00:24:15 /usr/sbin/varnishd -P > /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 > -t 120 -w 1,1000,120 -u varnish -g varnish -S /etc/varnish/secret -s > file,/varnish/varnish_storage.bin,98% > root 4130 1 0 2012 ? 00:02:33 /usr/bin/varnishlog -a -w > /var/log/varnish/varnish.log -D -P /var/run/varnishlog.pid > root 4137 1 0 2012 ? 01:33:48 /usr/bin/varnishncsa -a > -w /var/log/varnish/varnishncsa.log -D -P /var/run/varnishncsa.pid > root 7568 1 0 2012 ? 
00:00:49 /usr/sbin/varnishd -P > /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 > -t 120 -w 1,1000,120 -u varnish -g varnish -S /etc/varnish/secret -s > file,/varnish/varnish_storage.bin,98% > > # cat /sys/kernel/mm/redhat_transparent_hugepage/enabled > always [never] > > > Which varnish version ? > > # /usr/sbin/varnishd -V > varnishd (varnish-3.0.2 revision 55e70a4) > Copyright (c) 2006 Verdens Gang AS > Copyright (c) 2006-2011 Varnish Software AS > > # rpm -qa | grep varnish > varnish-libs-3.0.2-1.el5.x86_64 > varnish-3.0.2-1.el5.x86_64 > varnish-release-3.0-1.noarch > > > Which type of CPU ? > > # more /proc/cpuinfo > processor : 0 > vendor_id : GenuineIntel > cpu family : 6 > model : 29 > model name : Intel(R) Xeon(R) CPU E7440 @ 2.40GHz > stepping : 1 > cpu MHz : 2400.080 > cache size : 16384 KB > physical id : 0 > siblings : 4 > core id : 0 > cpu cores : 4 > apicid : 0 > initial apicid : 0 > fpu : yes > fpu_exception : yes > cpuid level : 11 > wp : yes > flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca > cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm > constant_tsc arch_perfmon pebs bts rep_good xtopology aperfmperf pni > dtes64 monitor ds_cpl vm > x est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm dts tpr_shadow vnmi > flexpriority > bogomips : 4800.16 > clflush size : 64 > cache_alignment : 64 > address sizes : 40 bits physical, 48 bits virtual > power management: > > > 32 or 64 bit mode ? > > 64bit > > > how much RAM ? 
> > 128GB > > # more meminfo > MemTotal: 132153720 kB > MemFree: 662828 kB > Buffers: 334308 kB > Cached: 124553164 kB > SwapCached: 28 kB > Active: 15416940 kB > Inactive: 114045600 kB > Active(anon): 4406140 kB > Inactive(anon): 174144 kB > Active(file): 11010800 kB > Inactive(file): 113871456 kB > Unevictable: 5244 kB > Mlocked: 5244 kB > SwapTotal: 16777208 kB > SwapFree: 16777080 kB > Dirty: 33320 kB > Writeback: 0 kB > AnonPages: 4580508 kB > Mapped: 68785120 kB > Shmem: 1784 kB > Slab: 548132 kB > SReclaimable: 429880 kB > SUnreclaim: 118252 kB > KernelStack: 5504 kB > PageTables: 158976 kB > NFS_Unstable: 0 kB > Bounce: 0 kB > WritebackTmp: 0 kB > CommitLimit: 82854068 kB > Committed_AS: 45740984 kB > VmallocTotal: 34359738367 kB > VmallocUsed: 493684 kB > VmallocChunk: 34359215588 kB > HardwareCorrupted: 0 kB > AnonHugePages: 1718272 kB > HugePages_Total: 0 > HugePages_Free: 0 > HugePages_Rsvd: 0 > HugePages_Surp: 0 > Hugepagesize: 2048 kB > DirectMap4k: 9456 kB > DirectMap2M: 134205440 kB > > > Which OS/kernel version ? > > # cat /etc/redhat-release > Red Hat Enterprise Linux Server release 6.2 (Santiago) > # cat /proc/version > Linux version 2.6.32-220.4.2.el6.x86_64 > (mockbuild at x86-003.build.bos.redhat.com) (gcc version 4.4.6 20110731 (Red > Hat 4.4.6-3) (GCC) ) #1 SMP Mon Feb 6 16:39:28 EST 2012 > > > default VCL or do you have your own ? > > # cat /etc/varnish/default.vcl > # This is a basic VCL configuration file for varnish. See the vcl(7) > # man page for details on VCL syntax and semantics. > # > # Default backend definition. Set this to point to your content > # server. > # > backend default { > .host = "x.y.z"; > .port = "8080"; > .connect_timeout = 15s; > .first_byte_timeout = 120s; > .between_bytes_timeout = 120s; > } > # > # Below is a commented-out copy of the default VCL logic. If you > # redefine any of these subroutines, the built-in logic will be > # appended to your code. 
> # sub vcl_recv { > # if (req.restarts == 0) { > # if (req.http.x-forwarded-for) { > # set req.http.X-Forwarded-For = > # req.http.X-Forwarded-For + ", " + client.ip; > # } else { > # set req.http.X-Forwarded-For = client.ip; > # } > # } > # if (req.request != "GET" && > # req.request != "HEAD" && > # req.request != "PUT" && > # req.request != "POST" && > # req.request != "TRACE" && > # req.request != "OPTIONS" && > # req.request != "DELETE") { > # /* Non-RFC2616 or CONNECT which is weird. */ > # return (pipe); > # } > # if (req.request != "GET" && req.request != "HEAD") { > # /* We only deal with GET and HEAD by default */ > # return (pass); > # } > # if (req.http.Authorization || req.http.Cookie) { > # /* Not cacheable by default */ > # return (pass); > # } > # return (lookup); > # } > # > > sub vcl_fetch { > set beresp.grace = 1h; > > if (beresp.http.content-type ~ "(text|application)") { > set beresp.do_gzip = true; > } > > } > > sub vcl_recv { > # unset cookies since we don't want to bypass caching normally > if (req.http.cookie) { > unset req.http.cookie; > } > > set req.grace = 1h; > } > > sub vcl_deliver { > if (!resp.http.Vary) { > set resp.http.Vary = "Accept-Encoding"; > } else if (resp.http.Vary !~ "(?i)Accept-Encoding") { > set resp.http.Vary = resp.http.Vary + ",Accept-Encoding"; > } > } > > # sub vcl_pipe { > # # Note that only the first request to the backend will have > # # X-Forwarded-For set. If you use X-Forwarded-For and want to > # # have it set for all requests, make sure to have: > # # set bereq.http.connection = "close"; > # # here. It is not set by default as it might break some broken web > # # applications, like IIS with NTLM authentication. 
> # return (pipe); > # } > # > # sub vcl_pass { > # return (pass); > # } > # > # sub vcl_hash { > # hash_data(req.url); > # if (req.http.host) { > # hash_data(req.http.host); > # } else { > # hash_data(server.ip); > # } > # return (hash); > # } > # > # sub vcl_hit { > # return (deliver); > # } > # > # sub vcl_miss { > # return (fetch); > # } > # > # sub vcl_fetch { > # if (beresp.ttl <= 0s || > # beresp.http.Set-Cookie || > # beresp.http.Vary == "*") { > # /* > # * Mark as "Hit-For-Pass" for the next 2 minutes > # */ > # set beresp.ttl = 120 s; > # return (hit_for_pass); > # } > # return (deliver); > # } > # > # sub vcl_deliver { > # return (deliver); > # } > # > sub vcl_error { > set obj.http.Content-Type = "text/html; charset=utf-8"; > set obj.http.Retry-After = "5"; > synthetic {" > > "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> > > > The page is temporarily unavailable > > >

Chronicling America is currently unavailable

>

The Chronicling America website is currently offline, undergoing > maintenance. We regret the inconvenience, and invite you to visit other > collections available on the Library of Congress website at <a > href="http://www.loc.gov">www.loc.gov</a> while we are working to restore > service.

> > > "}; > return (deliver); > } > # > # sub vcl_init { > # return (ok); > # } > # > # sub vcl_fini { > # return (ok); > # } New description: *** PROBLEM DESCRIPTION *** Varnish varnish-3.0.2 crashed with Panic message: Assert error in VGZ_Ibuf(), cache_gzip.c line 222:#012 I have already disabled "Transparent Hugepages". Has anyone else experienced this same problem, and possibly have a solution? {{{ ============================================= # /var/log/messages Jan 10 08:47:40 lx11 varnishd[7568]: Child (9172) not responding to CLI, killing it. Jan 10 08:47:44 lx11 abrt[2049]: saved core dump of pid 9172 (/usr/sbin/varnishd) to /var/spool/abrt/ccpp-2013-01-10-08:47:28-9172.new/coredump (2436448256 bytes) Jan 10 08:47:44 lx11 abrtd: Directory 'ccpp-2013-01-10-08:47:28-9172' creation detected Jan 10 08:47:46 lx11 abrtd: Package 'varnish' isn't signed with proper key Jan 10 08:47:46 lx11 abrtd: Corrupted or bad dump /var/spool/abrt/ccpp-2013-01-10-08:47:28-9172 (res:2), deleting Jan 10 08:47:50 lx11 varnishd[7568]: Child (9172) not responding to CLI, killing it. Jan 10 08:48:00 lx11 varnishd[7568]: Child (9172) not responding to CLI, killing it. Jan 10 08:48:03 lx11 varnishd[7568]: Child (9172) not responding to CLI, killing it. Jan 10 08:48:03 lx11 varnishd[7568]: Child (9172) not responding to CLI, killing it. 
Jan 10 08:48:03 lx11 varnishd[7568]: Child (9172) died signal=6 (core dumped) Jan 10 08:48:03 lx11 varnishd[7568]: Child (9172) Panic message: Assert error in VGZ_Ibuf(), cache_gzip.c line 222:#012 Condition((vg->vz.avail_in) == 0) not true.#012errno = 12 (Cannot allocate memory)#012thread = (cache-worker)#012ident = Linux,2.6.32-220.4.2.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit,epoll#012Backtrace:#012 0x42c7a6: /usr/sbin/varnishd() [0x42c7a6]#012 0x4227da: /usr/sbin/varnishd(VGZ_Ibuf+0x7a) [0x4227da]#012 0x422e19: /usr/sbin/varnishd() [0x422e19]#012 0x4215fd: /usr/sbin/varnishd(FetchBody+0x3fd) [0x4215fd]#012 0x4153e8: /usr/sbin/varnishd() [0x4153e8]#012 0x417ab6: /usr/sbin/varnishd(CNT_Session+0x9f6) [0x417ab6]#012 0x42efb8: /usr/sbin/varnishd() [0x42efb8]#012 0x42e19b: /usr/sbin/varnishd() [0x42e19b]#012 0x3deaa077f1: /lib64/libpthread.so.0() [0x3deaa077f1]#012 0x3dea2e592d: /lib64/libc.so.6(clone+0x6d) [0x3dea2e592d]#012sp = 0x7e2f15902008 {#012 fd = 29, id = 29, xid = 1294036898,#012 client = 50.51.7.12 1716,#012 step = STP_FETCHBODY,#012 handling = deliver,#012 err_code = 200, err_reason = (null),#012 restarts = 0, esi_level = 0#012 flags = do_gzip is_gunzip#012 bodystatus = 4#012 ws = 0x7e2f15902080 { #012 id = "sess",#012 {s,f,r,e} = {0x7e2f15902c90,+1488,(nil),+65536},#012 },#012 http[req] = {#012 ws = 0x7e2f15902080[sess]#012 "GET",#012 "/lccn/sn83035143/1863-01-06/ed-1/seq-1/coordinates/;words=Oberlin",#012 "HTTP/1.1",#012 "x-requested-with: XMLHttpRequest",#012 "Accept- Language: en-us",#012 "Referer: http://chroniclingamerica.loc.gov/search/pages/results/?date1=1862&rows=20&searchType=basic&state=Ohio&date2=1863&proxtext=Oberlin&y=19&x=17&dateFilterType=yearRange&page=2&sort=relevance",#012 "Accept: application/json, text/javascript, */*; q=0.01",#012 "User- Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; AlexaToolbar/amzni-3.0; BTRS106374; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; 
AlexaToolbar/amzni-3.0)",#012 "Host: chroniclingamerica.loc.gov", Jan 10 08:48:03 lx11 varnishd[7568]: child (2072) Started Jan 10 08:48:03 lx11 varnishd[7568]: Child (2072) said Child starts Jan 10 08:48:03 lx11 varnishd[7568]: Child (2072) said SMF.s0 mmap'ed 1589334294528 bytes of 1589334294528 # df -h | grep varnish /dev/mapper/emcvg1-varnish 2.0T 1.5T 432G 78% /varnish # ps -ef | grep arnish varnish 2072 7568 13 08:48 ? 00:24:15 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 1,1000,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/varnish/varnish_storage.bin,98% root 4130 1 0 2012 ? 00:02:33 /usr/bin/varnishlog -a -w /var/log/varnish/varnish.log -D -P /var/run/varnishlog.pid root 4137 1 0 2012 ? 01:33:48 /usr/bin/varnishncsa -a -w /var/log/varnish/varnishncsa.log -D -P /var/run/varnishncsa.pid root 7568 1 0 2012 ? 00:00:49 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 1,1000,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/varnish/varnish_storage.bin,98% # cat /sys/kernel/mm/redhat_transparent_hugepage/enabled always [never] Which varnish version ? # /usr/sbin/varnishd -V varnishd (varnish-3.0.2 revision 55e70a4) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2011 Varnish Software AS # rpm -qa | grep varnish varnish-libs-3.0.2-1.el5.x86_64 varnish-3.0.2-1.el5.x86_64 varnish-release-3.0-1.noarch Which type of CPU ? 
# more /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 29 model name : Intel(R) Xeon(R) CPU E7440 @ 2.40GHz stepping : 1 cpu MHz : 2400.080 cache size : 16384 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 4 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm constant_tsc arch_perfmon pebs bts rep_good xtopology aperfmperf pni dtes64 monitor ds_cpl vm x est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm dts tpr_shadow vnmi flexpriority bogomips : 4800.16 clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: 32 or 64 bit mode ? 64bit how much RAM ? 128GB # more meminfo MemTotal: 132153720 kB MemFree: 662828 kB Buffers: 334308 kB Cached: 124553164 kB SwapCached: 28 kB Active: 15416940 kB Inactive: 114045600 kB Active(anon): 4406140 kB Inactive(anon): 174144 kB Active(file): 11010800 kB Inactive(file): 113871456 kB Unevictable: 5244 kB Mlocked: 5244 kB SwapTotal: 16777208 kB SwapFree: 16777080 kB Dirty: 33320 kB Writeback: 0 kB AnonPages: 4580508 kB Mapped: 68785120 kB Shmem: 1784 kB Slab: 548132 kB SReclaimable: 429880 kB SUnreclaim: 118252 kB KernelStack: 5504 kB PageTables: 158976 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 82854068 kB Committed_AS: 45740984 kB VmallocTotal: 34359738367 kB VmallocUsed: 493684 kB VmallocChunk: 34359215588 kB HardwareCorrupted: 0 kB AnonHugePages: 1718272 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 9456 kB DirectMap2M: 134205440 kB Which OS/kernel version ? 
# cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.2 (Santiago) # cat /proc/version Linux version 2.6.32-220.4.2.el6.x86_64 (mockbuild at x86-003.build.bos.redhat.com) (gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) ) #1 SMP Mon Feb 6 16:39:28 EST 2012 default VCL or do you have your own ? # cat /etc/varnish/default.vcl # This is a basic VCL configuration file for varnish. See the vcl(7) # man page for details on VCL syntax and semantics. # # Default backend definition. Set this to point to your content # server. # backend default { .host = "x.y.z"; .port = "8080"; .connect_timeout = 15s; .first_byte_timeout = 120s; .between_bytes_timeout = 120s; } # # Below is a commented-out copy of the default VCL logic. If you # redefine any of these subroutines, the built-in logic will be # appended to your code. # sub vcl_recv { # if (req.restarts == 0) { # if (req.http.x-forwarded-for) { # set req.http.X-Forwarded-For = # req.http.X-Forwarded-For + ", " + client.ip; # } else { # set req.http.X-Forwarded-For = client.ip; # } # } # if (req.request != "GET" && # req.request != "HEAD" && # req.request != "PUT" && # req.request != "POST" && # req.request != "TRACE" && # req.request != "OPTIONS" && # req.request != "DELETE") { # /* Non-RFC2616 or CONNECT which is weird. 
*/ # return (pipe); # } # if (req.request != "GET" && req.request != "HEAD") { # /* We only deal with GET and HEAD by default */ # return (pass); # } # if (req.http.Authorization || req.http.Cookie) { # /* Not cacheable by default */ # return (pass); # } # return (lookup); # } # sub vcl_fetch { set beresp.grace = 1h; if (beresp.http.content-type ~ "(text|application)") { set beresp.do_gzip = true; } } sub vcl_recv { # unset cookies since we don't want to bypass caching normally if (req.http.cookie) { unset req.http.cookie; } set req.grace = 1h; } sub vcl_deliver { if (!resp.http.Vary) { set resp.http.Vary = "Accept-Encoding"; } else if (resp.http.Vary !~ "(?i)Accept-Encoding") { set resp.http.Vary = resp.http.Vary + ",Accept-Encoding"; } } # sub vcl_pipe { # # Note that only the first request to the backend will have # # X-Forwarded-For set. If you use X-Forwarded-For and want to # # have it set for all requests, make sure to have: # # set bereq.http.connection = "close"; # # here. It is not set by default as it might break some broken web # # applications, like IIS with NTLM authentication. # return (pipe); # } # # sub vcl_pass { # return (pass); # } # # sub vcl_hash { # hash_data(req.url); # if (req.http.host) { # hash_data(req.http.host); # } else { # hash_data(server.ip); # } # return (hash); # } # # sub vcl_hit { # return (deliver); # } # # sub vcl_miss { # return (fetch); # } # # sub vcl_fetch { # if (beresp.ttl <= 0s || # beresp.http.Set-Cookie || # beresp.http.Vary == "*") { # /* # * Mark as "Hit-For-Pass" for the next 2 minutes # */ # set beresp.ttl = 120 s; # return (hit_for_pass); # } # return (deliver); # } # # sub vcl_deliver { # return (deliver); # } # sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; set obj.http.Retry-After = "5"; synthetic {" The page is temporarily unavailable

Chronicling America is currently unavailable

The Chronicling America website is currently offline, undergoing maintenance. We regret the inconvenience, and invite you to visit other collections available on the Library of Congress website at www.loc.gov while we are working to restore service.

"}; return (deliver); } # # sub vcl_init { # return (ok); # } # # sub vcl_fini { # return (ok); # } }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 14 11:53:38 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Jan 2013 11:53:38 -0000 Subject: [Varnish] #1086: VGZ_WrwGunzip loops forever if receiving junk data after end of gzip data In-Reply-To: <044.dedba8f13c20d02f02a39731a1edea7c@varnish-cache.org> References: <044.dedba8f13c20d02f02a39731a1edea7c@varnish-cache.org> Message-ID: <059.25f9d9a1c1222c8300b6fd0f25d6e97e@varnish-cache.org> #1086: VGZ_WrwGunzip loops forever if receiving junk data after end of gzip data ----------------------+--------------------- Reporter: martin | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ----------------------+--------------------- Comment (by martin): Note to self: Look at this with e6e34d24b7b2e47d936867a4a1d7714ca568b7ae in mind -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 14 12:01:37 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Jan 2013 12:01:37 -0000 Subject: [Varnish] #1248: varnishd crash - Panic message: Assert error in VGZ_Ibuf(), cache_gzip.c In-Reply-To: <048.b306df775cd9737d62129e8861634e4e@varnish-cache.org> References: <048.b306df775cd9737d62129e8861634e4e@varnish-cache.org> Message-ID: <063.c458aec7b3142b22d79af3c5423fe848@varnish-cache.org> #1248: varnishd crash - Panic message: Assert error in VGZ_Ibuf(), cache_gzip.c -------------------------------------------------+------------------------- Reporter: msallen333 | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: duplicate Keywords: varnishd crash Assert error in | VGZ_Ibuf(), cache_gzip.c | 
-------------------------------------------------+------------------------- Changes (by martin): * status: new => closed * resolution: => duplicate Comment: This looks like ticket #1036, which has been fixed in Varnish version 3.0.3. Please upgrade and see if that fixes the problem. Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 14 13:38:05 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Jan 2013 13:38:05 -0000 Subject: [Varnish] #1249: Undeterministic results in regsub/regsuball In-Reply-To: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> References: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> Message-ID: <060.8a3702fe490558c853003f4d80b34a2d@varnish-cache.org> #1249: Undeterministic results in regsub/regsuball ---------------------+-------------------- Reporter: arthens | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ---------------------+-------------------- Comment (by tmagnien): Hi, I remember having seen some unpredictable results with regexps, and the only way I found to correct them was to increase sess_workspace. If I remember correctly, space for the regsub functions is taken there, and exhausting the workspace can cause such results. Can you try again with an increased sess_workspace (don't hesitate to increase it by a factor of 10 or more for testing)?
Thierry -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 14 15:47:22 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Jan 2013 15:47:22 -0000 Subject: [Varnish] #1249: Undeterministic results in regsub/regsuball In-Reply-To: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> References: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> Message-ID: <060.5131e0ac1b0346470380d1023a04ddcb@varnish-cache.org> #1249: Undeterministic results in regsub/regsuball ---------------------+-------------------- Reporter: arthens | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ---------------------+-------------------- Comment (by martin): This is workspace exhaustion. See attached test case (your vcl converted into a varnishtest file) where it passes when the session workspace is doubled to 128k. Your very big headers (4996 bytes) quickly eat through the available workspace when you do your header manipulations. Increase available workspace if you need to do a lot of header manipulations. Getting rid of the large elements first will also help to reduce the amount needed. 
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 14 15:48:55 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Jan 2013 15:48:55 -0000 Subject: [Varnish] #1249: Undeterministic results in regsub/regsuball In-Reply-To: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> References: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> Message-ID: <060.9605c65feaf398ac048fcf685dd2085d@varnish-cache.org> #1249: Undeterministic results in regsub/regsuball ---------------------+---------------------- Reporter: arthens | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: invalid Keywords: | ---------------------+---------------------- Changes (by martin): * status: new => closed * resolution: => invalid -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 15 00:32:51 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 15 Jan 2013 00:32:51 -0000 Subject: [Varnish] #1249: Undeterministic results in regsub/regsuball In-Reply-To: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> References: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> Message-ID: <060.7f9aa71b24a46afcf4eeb1a27e2e0f66@varnish-cache.org> #1249: Undeterministic results in regsub/regsuball ---------------------+---------------------- Reporter: arthens | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: invalid Keywords: | ---------------------+---------------------- Comment (by arthens): Thanks for the answer! We'll try to increase sess_workspace. Just one other question, shouldn't Varnish throw an error if it can't allocate enough memory? 
Returning a wrong value seems a dangerous thing to do, and it makes debugging the problem quite complicated :/ -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 16 16:20:18 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 16 Jan 2013 16:20:18 -0000 Subject: [Varnish] #1250: No source packages for Ubuntu Precise Message-ID: <043.199bfdb04729b0df7c3cb1c2aa76a0fb@varnish-cache.org> #1250: No source packages for Ubuntu Precise -------------------+----------------------- Reporter: lampe | Type: defect Status: new | Priority: normal Milestone: | Component: packaging Version: 3.0.3 | Severity: minor Keywords: | -------------------+----------------------- http://repo.varnish-cache.org/debian/dists/precise/varnish-3.0/source/ contains empty Sources.(gz|bz2) files and no "precise" source packages in http://repo.varnish-cache.org/debian/pool/varnish-3.0/v/varnish/ Source packages are present for Ubuntu Lucid release: http://repo.varnish-cache.org/debian/dists/lucid/varnish-3.0/source/ -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 16 16:52:12 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 16 Jan 2013 16:52:12 -0000 Subject: [Varnish] #1124: Document Cache-Control private/no-cache behaviour In-Reply-To: <046.efd3b140d66579698a8653ee733c7d45@varnish-cache.org> References: <046.efd3b140d66579698a8653ee733c7d45@varnish-cache.org> Message-ID: <061.adbff11eac665be70f5a196803ab7eb5@varnish-cache.org> #1124: Document Cache-Control private/no-cache behaviour ---------------------------+-------------------- Reporter: timbunce | Owner: Type: documentation | Status: new Priority: normal | Milestone: Component: documentation | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ---------------------------+-------------------- Comment (by martijnheemels): I'd like to second this report. 
We were bitten by the default "private/no-cache" behaviour and I was surprised to find the solution on the https://www.varnish-cache.org/trac/wiki/VCLExampleHitMissHeader page. Varnish usually seems to have sensible and reasonable defaults, just not in this case. I was also quite surprised by the response to related ticket #477 (https://www.varnish-cache.org/trac/ticket/477#comment:3) and do not agree with the reasoning. Safe defaults should always be the goal, especially when easily implemented. In my opinion it would be far easier to fix the default VCL, rather than post warnings all over the docs, but anything is better than causing an information disclosure problem for unsuspecting users. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 17 09:54:13 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 17 Jan 2013 09:54:13 -0000 Subject: [Varnish] #1250: No source packages for Ubuntu Precise In-Reply-To: <043.199bfdb04729b0df7c3cb1c2aa76a0fb@varnish-cache.org> References: <043.199bfdb04729b0df7c3cb1c2aa76a0fb@varnish-cache.org> Message-ID: <058.47fd538e34a3d5062a11f94290329e7e@varnish-cache.org> #1250: No source packages for Ubuntu Precise -----------------------+--------------------- Reporter: lampe | Owner: tfheen Type: defect | Status: new Priority: normal | Milestone: Component: packaging | Version: 3.0.3 Severity: minor | Resolution: Keywords: | -----------------------+--------------------- Changes (by tfheen): * owner: => tfheen Comment: Thanks for the bug report. You can use the 3.0.3-1 source from http://repo.varnish-cache.org/debian/pool/varnish-3.0/v/varnish/varnish_3.0.3-1.dsc to build the Debian packages; it's the source package used for all the various builds. The reason it's not exposed in precise appears to be a bug in the archive management software. I'll get it fixed.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 17 15:19:41 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 17 Jan 2013 15:19:41 -0000 Subject: [Varnish] #1246: Assert error in cnt_hit(), cache_center.c line 1025 In-Reply-To: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org> References: <041.1f2a76d5482a27c006e9476dc3c8e805@varnish-cache.org> Message-ID: <056.36b9ff00b19e979d262191412475574d@varnish-cache.org> #1246: Assert error in cnt_hit(), cache_center.c line 1025 ----------------------+---------------------- Reporter: psa | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.3 Severity: normal | Resolution: invalid Keywords: | ----------------------+---------------------- Changes (by martin): * status: new => closed * resolution: => invalid Comment: Hi, I have had a look at this, and this problem is definitely because of the no-close patch you are using. The clean up code to be able to continue a session after an internally generated error isn't in place, and it isn't safe to just clear the close flag like you do, causing the assertion on the next request handling. Closing ticket as invalid. 
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 17 17:43:26 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 17 Jan 2013 17:43:26 -0000 Subject: [Varnish] #1249: Undeterministic results in regsub/regsuball In-Reply-To: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> References: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> Message-ID: <060.62f5616053b7c0b255052dde53d592b2@varnish-cache.org> #1249: Undeterministic results in regsub/regsuball ---------------------+---------------------- Reporter: arthens | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: invalid Keywords: | ---------------------+---------------------- Comment (by tohide): I'm possibly seeing a similar issue using 3.0.3 on Ubuntu 10.04. I have the following in vcl_recv: if (req.request == "POST" && req.url ~ "^/foo/bar") { ban("req.url ~ "+regsuball(req.url, "code=cotn:([^\&]+)",".*\1.*")); return (pass); } i.e. if a POST is done to /foo/bar/?code=cotn:VOD.L&param2=value2&param3=value3 I'm expecting to see in ban.list: req.url ~ /foo/bar?.*VOD.L.* This helps me invalidate a number of URLs that relate to that 'code' under that URI structure following a new update. However, I'm seeing req.url ~ /foo/bar?.*:VOD.L.*&param1=value1&param2=value2 i.e. I'm seeing values appearing which the regex seems to suggest shouldn't be there.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 17 17:43:46 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 17 Jan 2013 17:43:46 -0000 Subject: [Varnish] #1249: Undeterministic results in regsub/regsuball In-Reply-To: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> References: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> Message-ID: <060.cb873914cf0e40f00524997fbcea140f@varnish-cache.org> #1249: Undeterministic results in regsub/regsuball ---------------------+----------------------- Reporter: arthens | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ---------------------+----------------------- Changes (by tohide): * status: closed => reopened * resolution: invalid => Comment: Apologies, reopening. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 17 18:44:46 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 17 Jan 2013 18:44:46 -0000 Subject: [Varnish] #1249: Undeterministic results in regsub/regsuball In-Reply-To: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> References: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> Message-ID: <060.1e4f43686e0c647e09ef18e6f43ea40f@varnish-cache.org> #1249: Undeterministic results in regsub/regsuball ---------------------+---------------------- Reporter: arthens | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: invalid Keywords: | ---------------------+---------------------- Changes (by martin): * status: reopened => closed * resolution: => invalid Comment: Unrelated, and this happens because your regex is wrong. 
You probably want to do something along these lines in the regsub (your substitution has to consume everything you want to change, and then reinsert the captures that should stay): ban("req.url ~ "+regsuball(req.url, "code=cotn:([^&]+).*$",".*\1.*")); Reclosing. Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 21 13:14:16 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Jan 2013 13:14:16 -0000 Subject: [Varnish] #1251: Regsuball doesn't replace all occurrences Message-ID: <044.1fd7280c48c2a454b946d259951dd201@varnish-cache.org> #1251: Regsuball doesn't replace all occurrences --------------------+---------------------- Reporter: tmotyl | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.5 | Severity: normal Keywords: | --------------------+---------------------- It seems that regsuball is not replacing all occurrences of a regular expression. I have code like this in my VCL: {{{ set req.url = regsuball(req.url, "(\?|&)(utm_[a-zA-Z]+)=[^&]+&?", "\1"); }}} for URLs like {{{ /uk/sale.html?utm_medium=game&utm_source=ad+network&utm_campaign=post+holiday+sale&utm_content=320x50 }}} Varnish returns {{{ /uk/sale.html?utm_source=ad+network&utm_content=320x50 }}} I also tried {{{ set req.url = regsuball(req.url, "(\?|&)(utm_source|utm_medium|utm_content|utm_campaign)=[^&]+&?", "\1"); }}} but the result is the same. I'm using Varnish 2.1.5 on Ubuntu.
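Both this report and the #1249 ban expression above come down to the same rule: a regsub/regsuball substitution only rewrites the exact span its pattern consumes, so anything the pattern swallows (or fails to swallow) shapes what is left behind. As a rough sketch, the behaviour can be reproduced outside Varnish with Python's `re` module, on the assumption that these particular PCRE-style patterns behave the same under Python's engine as under the PCRE library Varnish uses:

```python
import re

# Ticket #1251: the trailing "&?" also consumes the separator that the
# NEXT utm_ parameter needs in order to match, so every other occurrence
# is skipped -- exactly the output reported in the ticket.
url = ("/uk/sale.html?utm_medium=game&utm_source=ad+network"
       "&utm_campaign=post+holiday+sale&utm_content=320x50")

reported = re.sub(r"(\?|&)(utm_[a-zA-Z]+)=[^&]+&?", r"\1", url)
# -> "/uk/sale.html?utm_source=ad+network&utm_content=320x50"

# Without the "&?", each occurrence keeps its own leading separator, so
# all four utm_ parameters are removed (stray "&" separators remain).
fixed = re.sub(r"(\?|&)(utm_[a-zA-Z]+)=[^&]+", r"\1", url)
# -> "/uk/sale.html?&&&"

# Ticket #1249 (the ban expression): the pattern must also consume the
# query-string tail, otherwise the untouched remainder leaks into the
# ban expression, which is what the reporter observed.
post_url = "/foo/bar/?code=cotn:VOD.L&param2=value2&param3=value3"
ban_expr = re.sub(r"code=cotn:([^&]+).*$", r".*\1.*", post_url)
# -> "/foo/bar/?.*VOD.L.*"
```

In both cases the fix is the same: make the pattern consume exactly the span that should disappear, and re-emit via captures only the pieces that should survive.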
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 21 17:45:59 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Jan 2013 17:45:59 -0000 Subject: [Varnish] #1251: Regsuball doesn't replace all occurrences In-Reply-To: <044.1fd7280c48c2a454b946d259951dd201@varnish-cache.org> References: <044.1fd7280c48c2a454b946d259951dd201@varnish-cache.org> Message-ID: <059.2a41f890b717f6bb5499a39256df2da3@varnish-cache.org> #1251: Regsuball doesn't replace all occurrences ----------------------+-------------------- Reporter: tmotyl | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.1.5 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by tmotyl): It's not a bug - regex is wrong, should be {{{ (\?|&)(utm_[a-zA-Z]+)=[^&]+ }}} instead -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 21 21:32:09 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Jan 2013 21:32:09 -0000 Subject: [Varnish] #1252: varnish retries fetch from unresponsive backend indefinitely, leading to FD exhaustion Message-ID: <046.de7f99d95026561fe7f4357f355a3091@varnish-cache.org> #1252: varnish retries fetch from unresponsive backend indefinitely, leading to FD exhaustion ----------------------+-------------------- Reporter: askalski | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | ----------------------+-------------------- When a backend request fails (e.g. first_byte_timeout), varnish returns a 503 error only to one client waiting on that object. It then retries the backend request, and keeps doing so as long as there are sessions on the waiting list. These retries are made even for clients that have already timed out and closed their HTTP connection to varnish. 
This queueing of clients and serialization of backend requests leads to file descriptor exhaustion. This was originally observed on 2.1.5 with the DNS director, but has been reproduced on Trunk with a single backend (no director), and also with the random director configured with .retries=1 === Suggested behavior === After some number of failed retries, varnish should flush out the object's entire waitinglist with an immediate 503 error. === Steps to reproduce === Start with a fresh build of trunk. Configure the backend with a short .first_byte_timeout such as 5 seconds. default.vcl {{{ backend default { .host = "127.0.0.1"; .port = "80"; .first_byte_timeout = 5s; } }}} On the backend, write a script that delays longer than first_byte_timeout. For example, this PHP script sleeps for 10 seconds before returning: slow.php {{{ <?php sleep(10); ?> }}} Start varnish: {{{ $ /tmp/x/sbin/varnishd -d -f /tmp/x/etc/varnish/default.vcl -a localhost:6081 Message from VCC-compiler: Not running as root, no priv-sep Message from C-compiler: Not running as root, no priv-sep Message from dlopen: Not running as root, no priv-sep Platform: Linux,3.2.0-36-generic,x86_64,-sfile,-smalloc,-hcritbit 200 273 ----------------------------- Varnish Cache CLI 1.0 ----------------------------- Linux,3.2.0-36-generic,x86_64,-sfile,-smalloc,-hcritbit varnish-trunk revision d175906 Type 'help' for command list. Type 'quit' to close CLI session. Type 'start' to launch worker process.
start child (9286) Started 200 0 Child (9286) said Not running as root, no priv-sep Child (9286) said Child starts Child (9286) said SMF.s0 mmap'ed 104857600 bytes of 104857600 }}} In one window, use "lsof" to monitor varnish's open file descriptors: {{{ $ watch 'lsof -P -n -p 9286' Every 2.0s: lsof -P -n -p 9286 | grep TCP Mon Jan 21 16:22:33 2013 varnishd 9286 askalski 5u IPv4 680493 0t0 TCP 127.0.0.1:6081 (LISTEN) }}} In another window, run curl in a loop to request the delay script, using a timeout shorter than first_byte_timeout (for example, 1 second.) {{{ $ while sleep 1; do curl -m1 http://localhost:6081/slow.php; done curl: (28) Operation timed out after 1001 milliseconds with 0 bytes received curl: (28) Operation timed out after 1001 milliseconds with 0 bytes received curl: (28) Operation timed out after 1001 milliseconds with 0 bytes received curl: (28) Operation timed out after 1001 milliseconds with 0 bytes received curl: (28) Operation timed out after 1001 milliseconds with 0 bytes received curl: (28) Operation timed out after 1001 milliseconds with 0 bytes received curl: (28) Operation timed out after 1001 milliseconds with 0 bytes received ... 
}}} Watch as varnish's file descriptor list grows: {{{ Every 2.0s: lsof -P -n -p 9286 | grep TCP Mon Jan 21 16:25:22 2013 varnishd 9286 askalski 5u IPv4 680493 0t0 TCP 127.0.0.1:6081 (LISTEN) varnishd 9286 askalski 13u IPv4 682462 0t0 TCP 127.0.0.1:6081->127.0.0.1:57800 (CLOSE_WAIT) varnishd 9286 askalski 14u IPv4 680904 0t0 TCP 127.0.0.1:6081->127.0.0.1:57793 (CLOSE_WAIT) varnishd 9286 askalski 15u IPv4 683250 0t0 TCP 127.0.0.1:6081->127.0.0.1:57774 (CLOSE_WAIT) varnishd 9286 askalski 16u IPv4 684243 0t0 TCP 127.0.0.1:40133->127.0.0.1:80 (ESTABLISHED) varnishd 9286 askalski 17u IPv4 682412 0t0 TCP 127.0.0.1:6081->127.0.0.1:57788 (CLOSE_WAIT) varnishd 9286 askalski 18u IPv4 685075 0t0 TCP 127.0.0.1:6081->127.0.0.1:57809 (CLOSE_WAIT) varnishd 9286 askalski 19u IPv4 683301 0t0 TCP 127.0.0.1:6081->127.0.0.1:57780 (CLOSE_WAIT) varnishd 9286 askalski 20u IPv4 680907 0t0 TCP 127.0.0.1:6081->127.0.0.1:57795 (CLOSE_WAIT) varnishd 9286 askalski 21u IPv4 682466 0t0 TCP 127.0.0.1:6081->127.0.0.1:57802 (CLOSE_WAIT) varnishd 9286 askalski 22u IPv4 680883 0t0 TCP 127.0.0.1:6081->127.0.0.1:57787 (CLOSE_WAIT) varnishd 9286 askalski 23u IPv4 680888 0t0 TCP 127.0.0.1:6081->127.0.0.1:57790 (CLOSE_WAIT) varnishd 9286 askalski 24u IPv4 685068 0t0 TCP 127.0.0.1:6081->127.0.0.1:57807 (CLOSE_WAIT) varnishd 9286 askalski 26u IPv4 682457 0t0 TCP 127.0.0.1:6081->127.0.0.1:57797 (CLOSE_WAIT) varnishd 9286 askalski 27u IPv4 680924 0t0 TCP 127.0.0.1:6081->127.0.0.1:57798 (CLOSE_WAIT) varnishd 9286 askalski 28u IPv4 680959 0t0 TCP 127.0.0.1:6081->127.0.0.1:57801 (CLOSE_WAIT) varnishd 9286 askalski 29u IPv4 685059 0t0 TCP 127.0.0.1:6081->127.0.0.1:57804 (CLOSE_WAIT) varnishd 9286 askalski 30u IPv4 682499 0t0 TCP 127.0.0.1:6081->127.0.0.1:57805 (CLOSE_WAIT) varnishd 9286 askalski 31u IPv4 682536 0t0 TCP 127.0.0.1:6081->127.0.0.1:57808 (CLOSE_WAIT) varnishd 9286 askalski 32u IPv4 682555 0t0 TCP 127.0.0.1:6081->127.0.0.1:57811 (CLOSE_WAIT) }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From 
varnish-bugs at varnish-cache.org Tue Jan 22 10:46:35 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 22 Jan 2013 10:46:35 -0000 Subject: [Varnish] #1253: restart in vcl_fetch{} on pass panics Message-ID: <041.4bf6be14ed9c63ea426ec752d09d0e33@varnish-cache.org> #1253: restart in vcl_fetch{} on pass panics ----------------------+------------------- Reporter: phk | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+------------------- This ticket is just to get a number for the regression test. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 22 11:00:22 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 22 Jan 2013 11:00:22 -0000 Subject: [Varnish] #1253: restart in vcl_fetch{} on pass panics In-Reply-To: <041.4bf6be14ed9c63ea426ec752d09d0e33@varnish-cache.org> References: <041.4bf6be14ed9c63ea426ec752d09d0e33@varnish-cache.org> Message-ID: <056.004fa643f8a4dc10c04f977f839695f8@varnish-cache.org> #1253: restart in vcl_fetch{} on pass panics ----------------------+-------------------- Reporter: phk | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by Poul-Henning Kamp ): In [98c23bb1733b02e1d1fad65fd5212ae04ced865d]: {{{ #!CommitTicketReference repository="" revision="98c23bb1733b02e1d1fad65fd5212ae04ced865d" A restart in vcl_fetch{} needs to loose the objcore as well. 
Fixes #1253 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 22 11:00:23 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 22 Jan 2013 11:00:23 -0000 Subject: [Varnish] #1253: restart in vcl_fetch{} on pass panics In-Reply-To: <041.4bf6be14ed9c63ea426ec752d09d0e33@varnish-cache.org> References: <041.4bf6be14ed9c63ea426ec752d09d0e33@varnish-cache.org> Message-ID: <056.e98bc8059c5713f68e58665e906cf3b8@varnish-cache.org> #1253: restart in vcl_fetch{} on pass panics ----------------------+--------------------- Reporter: phk | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [98c23bb1733b02e1d1fad65fd5212ae04ced865d]) A restart in vcl_fetch{} needs to loose the objcore as well. Fixes #1253 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 22 13:33:27 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 22 Jan 2013 13:33:27 -0000 Subject: [Varnish] #1254: Future Probes Message-ID: <043.6accc659509c9fd90871e2d864de0ba1@varnish-cache.org> #1254: Future Probes ----------------------------------------------+------------------------- Reporter: nicob | Type: enhancement Status: new | Priority: normal Milestone: Varnish 3.0 dev | Component: varnishd Version: trunk | Severity: normal Keywords: probes, failover, load balancing | ----------------------------------------------+------------------------- Hi there, I'm not sure either this has been requested, or discarded, neither considered, but I've been looking for this feature and couldn't find any information so here it goes. :) In short, I would like to have the option to set in a probe block the host and port where the request should go. 
Since I need to do some checks in the backend for balancing traffic (a bit more complicated than .threshold, .expected_response or .timeout), I would like to be able to ask a different server/servlet whether a backend is ready to accept traffic; this way the complex logic can be moved out of Varnish while still using its high-performance engine. Use case example: balancing search requests to different search engines while they are being updated using replication. If a search engine node goes down and needs to rebuild its index or restore a backup, it can't accept any requests until the index is fully restored. Thank you, Nicolás -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 22 14:52:50 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 22 Jan 2013 14:52:50 -0000 Subject: [Varnish] #516: vsl_mtx "deadlock"; child stops responding In-Reply-To: <040.47b4df5616659526789f282834055e47@varnish-cache.org> References: <040.47b4df5616659526789f282834055e47@varnish-cache.org> Message-ID: <055.5583d0836bd98e75f938c054a722e773@varnish-cache.org> #516: vsl_mtx "deadlock"; child stops responding ----------------------+------------------------- Reporter: kb | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: worksforme Keywords: | ----------------------+------------------------- Comment (by jammy): We're using version 3.0.3. The same issue happened once on each of our two production nodes. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 23 12:00:48 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 23 Jan 2013 12:00:48 -0000 Subject: [Varnish] #1255: Incorrect URL constructed for requests with an absolute URI in the request line Message-ID: <047.a9b2cad833790f8fdc112159ebc18d43@varnish-cache.org> #1255: Incorrect URL constructed for requests with an absolute URI in the request line -----------------------+-------------------- Reporter: tstarling | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -----------------------+-------------------- According to RFC 2616 section 5.1.2: "To allow for transition to absoluteURIs in all requests in future versions of HTTP, all HTTP/1.1 servers MUST accept the absoluteURI form in requests, even though HTTP/1.1 clients will only generate them in requests to proxies." This handy property of HTTP is commonly used to route requests to the desired server in a cluster, by setting the HTTP proxy parameter in a client such as cURL or LWP. Varnish does not follow this specified behaviour in some respects. If I send such a request to a Varnish 3.0.3 server, for example with: curl -x varnish:80 http://upload.wikimedia.org/foo varnishncsa reports a log entry with a URL of http://upload.wikimedia.orghttp://upload.wikimedia.org/foo That is, the protocol and host header are simply concatenated to the request URI, whether it is absolute or not. 
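For context, RFC 2616 section 5.2 resolves the reporter's case: when the request line carries an absoluteURI, the host embedded in that URI takes precedence over any Host header, and only the path-plus-query becomes the URL. A minimal illustration of that splitting (a Python sketch for clarity; this is not Varnish's actual C code, and the function name is invented):

```python
from urllib.parse import urlsplit

def split_request_uri(request_uri, host_header=None):
    """Split a request-line URI into (host, url) per RFC 2616 section 5.2.

    For an absoluteURI the embedded host wins over the Host: header;
    for the usual relative form the Host: header is used unchanged.
    """
    if request_uri.startswith(("http://", "https://")):
        parts = urlsplit(request_uri)
        url = parts.path or "/"
        if parts.query:
            url += "?" + parts.query
        return parts.netloc, url
    return host_header, request_uri

# The ticket's example: curl -x varnish:80 http://upload.wikimedia.org/foo
print(split_request_uri("http://upload.wikimedia.org/foo"))
# -> ('upload.wikimedia.org', '/foo')
```

Concatenating scheme + Host + request-URI without this split is exactly what doubles the hostname in the varnishncsa output above.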
The request appears to be forwarded correctly to the backend, but the bug does break purging: https://bugzilla.wikimedia.org/show_bug.cgi?id=39005 See the terrible hackish workaround we implemented to allow purging from an LWP client: https://gerrit.wikimedia.org/r/gitweb?p=operations/puppet.git;a=blob;f=templates/varnish/wikimedia.vcl.erb;h=e041f817fe6878389d6135cf27091f31ea706aa4;hb=d061399ab9488e2685ad56dee1853d54b5020577#l211 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 28 11:24:14 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 28 Jan 2013 11:24:14 -0000 Subject: [Varnish] #1255: Incorrect URL constructed for requests with an absolute URI in the request line In-Reply-To: <047.a9b2cad833790f8fdc112159ebc18d43@varnish-cache.org> References: <047.a9b2cad833790f8fdc112159ebc18d43@varnish-cache.org> Message-ID: <062.198c761449c4b4005a29711d23d15126@varnish-cache.org> #1255: Incorrect URL constructed for requests with an absolute URI in the request line -----------------------+-------------------- Reporter: tstarling | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | -----------------------+-------------------- Changes (by phk): * owner: => phk -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 28 11:31:32 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 28 Jan 2013 11:31:32 -0000 Subject: [Varnish] #1254: Future Probes In-Reply-To: <043.6accc659509c9fd90871e2d864de0ba1@varnish-cache.org> References: <043.6accc659509c9fd90871e2d864de0ba1@varnish-cache.org> Message-ID: <058.f8b9d046efac63d1f13a78ec1ff5bc42@varnish-cache.org> #1254: Future Probes ----------------------------------------------+---------------------------- Reporter: nicob | Owner: Type: enhancement | Status: closed Priority: normal | Milestone: Varnish 3.0 Component: 
varnishd | dev Severity: normal | Version: trunk Keywords: probes, failover, load balancing | Resolution: invalid ----------------------------------------------+---------------------------- Changes (by martin): * status: new => closed * resolution: => invalid Comment: You could have some cgi script on the server at the url of the probe. This script should then perform all the logic (querying another server if necessary) and report 200 or not back to Varnish, solving the problem. Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 28 11:36:43 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 28 Jan 2013 11:36:43 -0000 Subject: [Varnish] #1252: varnish retries fetch from unresponsive backend indefinitely, leading to FD exhaustion In-Reply-To: <046.de7f99d95026561fe7f4357f355a3091@varnish-cache.org> References: <046.de7f99d95026561fe7f4357f355a3091@varnish-cache.org> Message-ID: <061.f13d983a443f1e98639c5d0d00553ede@varnish-cache.org> #1252: varnish retries fetch from unresponsive backend indefinitely, leading to FD exhaustion ----------------------+--------------------- Reporter: askalski | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+--------------------- Changes (by martin): * owner: => martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 28 11:37:58 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 28 Jan 2013 11:37:58 -0000 Subject: [Varnish] #1251: Regsuball doesn't replace all occurrences In-Reply-To: <044.1fd7280c48c2a454b946d259951dd201@varnish-cache.org> References: <044.1fd7280c48c2a454b946d259951dd201@varnish-cache.org> Message-ID: <059.a52d575d8f7b6ab1f460aa5d293fa3c8@varnish-cache.org> #1251: Regsuball doesn't replace all occurrences 
----------------------+------------------------- Reporter: tmotyl | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.1.5 Severity: normal | Resolution: worksforme Keywords: | ----------------------+------------------------- Changes (by martin): * status: new => closed * resolution: => worksforme Comment: Closing as submitter figured it out. Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 28 14:17:24 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 28 Jan 2013 14:17:24 -0000 Subject: [Varnish] #1256: spurious sigchlds causes ping pong to stop working Message-ID: <044.a9d45c0eb1b28797b6386ae6dc62c356@varnish-cache.org> #1256: spurious sigchlds causes ping pong to stop working ----------------------+------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: critical | Keywords: ----------------------+------------------- On my debian linux system, sigchlds received from e.g. vcc/gcc causes the ev_poker to be removed from MGT's event list, causing the master<->child ping pong to stop. This has been confirmed on both 3.0 and master. This is caused by mgt_sigchld()(mgt_child.c) removing the ev_poker event unconditionally on any sigchld received. Though the question remains open why these signals are received in the first place. Interestingly phk did not see the same effect on FreeBSD. Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 29 13:26:42 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 29 Jan 2013 13:26:42 -0000 Subject: [Varnish] #1257: Varnish restarting it self, large cache. Message-ID: <051.4eb1f47d3c10c74deb792ffa3ee19c81@varnish-cache.org> #1257: Varnish restarting it self, large cache. 
---------------------------+-------------------- Reporter: anders-bazoom | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.3 | Severity: major Keywords: | ---------------------------+-------------------- Okay, we have a varnish server in front of an image server. We recently tried the file storage option instead of malloc, to increase our hit rates. But it's giving us some problems. The master process will restart the child process on seemingly random(they are probably not random :p) intervals. Sometimes after 20minutes, longest it has lasted has been about 48 hours. This last time it restarted, I saw a panic message for the first time. The previous reboots only produced died signal=6 http://pastebin.com/BTFzeYRd {{{ varnish> panic.show 200 Last panic at: Tue, 29 Jan 2013 10:25:13 GMT Assert error in default_oc_getobj(), stevedore.c line 65: Condition(((o))->magic == (0x32851d42)) not true. thread = (cache-worker) ident = Linux,2.6.32-37-server,x86_64,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x430768: /usr/sbin/varnishd() [0x430768] 0x44867c: /usr/sbin/varnishd() [0x44867c] 0x429f36: /usr/sbin/varnishd(HSH_Lookup+0x3a6) [0x429f36] 0x416b19: /usr/sbin/varnishd() [0x416b19] 0x41a265: /usr/sbin/varnishd(CNT_Session+0x705) [0x41a265] 0x4324b1: /usr/sbin/varnishd() [0x4324b1] 0x7f19771e89ca: /lib/libpthread.so.0(+0x69ca) [0x7f19771e89ca] 0x7f1976f4521d: /lib/libc.so.6(clone+0x6d) [0x7f1976f4521d] sp = 0x7e76e7665008 { fd = 218, id = 218, xid = 1472043706, client = 66.249.76.55 42918, step = STP_LOOKUP, handling = hash, restarts = 0, esi_level = 0 flags = bodystatus = 4 ws = 0x7e76e7665080 { id = "sess", {s,f,r,e} = {0x7e76e7665c78,+464,+65536,+65536}, }, http[req] = { ws = 0x7e76e7665080[sess] "GET", "/bruger/7/70/52/33443/EEEEEE/28-01-2013_213537", "HTTP/1.1", "Referer: http://www.bilgalleri.dk/forum/generel- diskussion/936318-vinterhjul_", "Connection: Keep-alive", "Accept: */*", "From: googlebot(at)googlebot.com", "User-Agent: 
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)", "host: billeder2.bilgalleri.dk", "X-Forwarded-For: 66.249.76.55", "Accept-Encoding: gzip", }, worker = 0x7e76727fea90 { ws = 0x7e76727fecc8 { id = "wrk", {s,f,r,e} = {0x7e76727eca20,0x7e76727eca20,(nil),+65536}, }, }, vcl = { srcname = { "input", "Default", }, }, }, }}} It's a virtual server, with 16gigs of ram. The storage size is set to 650gb. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 29 14:27:55 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 29 Jan 2013 14:27:55 -0000 Subject: [Varnish] #1256: spurious sigchlds causes ping pong to stop working In-Reply-To: <044.a9d45c0eb1b28797b6386ae6dc62c356@varnish-cache.org> References: <044.a9d45c0eb1b28797b6386ae6dc62c356@varnish-cache.org> Message-ID: <059.713c5f6e907cd277a9858c6444b75686@varnish-cache.org> #1256: spurious sigchlds causes ping pong to stop working ----------------------+-------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: critical | Resolution: Keywords: | ----------------------+-------------------- Comment (by Poul-Henning Kamp ): In [054fd6a6d3a79fd26d5136b0a486413bce55ff24]: {{{ #!CommitTicketReference repository="" revision="054fd6a6d3a79fd26d5136b0a486413bce55ff24" Revisit the managers code to start and stop the child. Eliminate SIGCHLD usage, it's icky, at best, when we have other child processes (See #1256) Instead of reaping the child on SIGCHLD, we do it explicitly, and there is now a 10 second wait to give the child a chance to shut down gracefully, before we take a bat to the kneecaps. More work may be warranted, but I want to get some feedback on this bit first. 
Fixes #1256 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 29 14:27:57 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 29 Jan 2013 14:27:57 -0000 Subject: [Varnish] #1256: spurious sigchlds causes ping pong to stop working In-Reply-To: <044.a9d45c0eb1b28797b6386ae6dc62c356@varnish-cache.org> References: <044.a9d45c0eb1b28797b6386ae6dc62c356@varnish-cache.org> Message-ID: <059.ad65623bf37580ac11d9007050583d14@varnish-cache.org> #1256: spurious sigchlds causes ping pong to stop working ----------------------+--------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: critical | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [054fd6a6d3a79fd26d5136b0a486413bce55ff24]) Revisit the managers code to start and stop the child. Eliminate SIGCHLD usage, it's icky, at best, when we have other child processes (See #1256) Instead of reaping the child on SIGCHLD, we do it explicitly, and there is now a 10 second wait to give the child a chance to shut down gracefully, before we take a bat to the kneecaps. More work may be warranted, but I want to get some feedback on this bit first. Fixes #1256 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 29 14:30:13 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 29 Jan 2013 14:30:13 -0000 Subject: [Varnish] #940: ETag for gzip'd variant identical to ETag of ungzipped variant. In-Reply-To: <043.d3f0122c5f2e8a8e1ed36783de6aff0a@varnish-cache.org> References: <043.d3f0122c5f2e8a8e1ed36783de6aff0a@varnish-cache.org> Message-ID: <058.0d46d10d2d89fec4d539c4f55cb19f39@varnish-cache.org> #940: ETag for gzip'd variant identical to ETag of ungzipped variant. 
----------------------+--------------------- Reporter: david | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: build | Version: 3.0.0 Severity: critical | Resolution: Keywords: | ----------------------+--------------------- Comment (by slink): Replying to [comment:2 david]: > Thanks for the ideas. Changing the etag for gzipped variants in vcl_deliver was the first thing I tried. It neutered my hit ratio because all of the if-modified-since and if-none-match headers started to not work. @david: I have written down some thoughts in [wiki:ETags] and I wonder if you had tried changing the ETag for '''un'''gzipped variants to `W/`eak or removing it, but leaving it as it is for gzipped. In practice, most clients should `Accept-Encoding: gzip` so we definitely want Etags to allow for * INM `Range` for this case, but I believe removing Etags for un-gzipped or making them weak should not make that big a difference. @anyone else: I'd appreciate feedback on [wiki:ETags] -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 29 21:16:09 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 29 Jan 2013 21:16:09 -0000 Subject: [Varnish] #940: ETag for gzip'd variant identical to ETag of ungzipped variant. In-Reply-To: <043.d3f0122c5f2e8a8e1ed36783de6aff0a@varnish-cache.org> References: <043.d3f0122c5f2e8a8e1ed36783de6aff0a@varnish-cache.org> Message-ID: <058.83885146faa061686368684d4d857094@varnish-cache.org> #940: ETag for gzip'd variant identical to ETag of ungzipped variant. ----------------------+--------------------- Reporter: david | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: build | Version: 3.0.0 Severity: critical | Resolution: Keywords: | ----------------------+--------------------- Comment (by david): Hi slink, It's been a year and a half since I wrote the code to handle this problem. I just added/removed the prefix on the way out/in. 
Honestly, I think that should be part of the default VCL because this will eventually bite someone else. Tollef has a fresh copy of my VCL if you want to take a look at how I did it. Suggestions warmly welcome. As for removing the ETag for ungzipped docs, well, I may try that in the future, but for now that code seems just fine. I have other problems to tackle. :) Thank you and regards, -david -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 29 21:20:39 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 29 Jan 2013 21:20:39 -0000 Subject: [Varnish] #940: ETag for gzip'd variant identical to ETag of ungzipped variant. In-Reply-To: <043.d3f0122c5f2e8a8e1ed36783de6aff0a@varnish-cache.org> References: <043.d3f0122c5f2e8a8e1ed36783de6aff0a@varnish-cache.org> Message-ID: <058.0cd8d7ac6b876f404880c7e3fa2102ba@varnish-cache.org> #940: ETag for gzip'd variant identical to ETag of ungzipped variant. ----------------------+--------------------- Reporter: david | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: build | Version: 3.0.0 Severity: critical | Resolution: Keywords: | ----------------------+--------------------- Comment (by slink): I understand that your solution works for you, but I was trying to think of generic ways which would, in particular, work with `pipe` and other scenarios bypassing varnish. 
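For reference, the "add the prefix on the way out, strip it on the way in" approach david describes could be sketched in VCL 3.x roughly as follows. This is a sketch under assumptions — the `-gzip` marker and the regexes are illustrative, not the actual VCL from the ticket:

```vcl
# Sketch: give the gzipped variant a distinct validator by rewriting the
# ETag inside its quotes on delivery, and undoing the rewrite on
# conditional requests so revalidation still matches the stored object.
sub vcl_deliver {
    if (resp.http.Content-Encoding ~ "gzip" && resp.http.ETag) {
        # "abc123" becomes "abc123-gzip"
        set resp.http.ETag = regsub(resp.http.ETag, {""$"}, {"-gzip""});
    }
}

sub vcl_recv {
    if (req.http.If-None-Match) {
        # Strip the marker so If-None-Match matches the cached ETag
        set req.http.If-None-Match =
            regsuball(req.http.If-None-Match, {"-gzip""}, {"""});
    }
}
```

As slink notes below the design question is wider: this only helps traffic that actually runs through vcl_deliver, so piped requests and other bypass scenarios are unaffected.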
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 30 10:14:51 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 30 Jan 2013 10:14:51 -0000 Subject: [Varnish] #1255: Incorrect URL constructed for requests with an absolute URI in the request line In-Reply-To: <047.a9b2cad833790f8fdc112159ebc18d43@varnish-cache.org> References: <047.a9b2cad833790f8fdc112159ebc18d43@varnish-cache.org> Message-ID: <062.14b2e7cc263b99b4e0768d0c82f1eafb@varnish-cache.org> #1255: Incorrect URL constructed for requests with an absolute URI in the request line -----------------------+-------------------- Reporter: tstarling | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | -----------------------+-------------------- Comment (by Poul-Henning Kamp ): In [2bbb032bf67871d7d5a43a38104d58f747f2e860]: {{{ #!CommitTicketReference repository="" revision="2bbb032bf67871d7d5a43a38104d58f747f2e860" Split absolute URIs into URL and Host: as per RFC2616 5.2 Fixes #1255 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 30 10:14:55 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 30 Jan 2013 10:14:55 -0000 Subject: [Varnish] #1255: Incorrect URL constructed for requests with an absolute URI in the request line In-Reply-To: <047.a9b2cad833790f8fdc112159ebc18d43@varnish-cache.org> References: <047.a9b2cad833790f8fdc112159ebc18d43@varnish-cache.org> Message-ID: <062.731341780cb1c5e7a5593e13e931e7ed@varnish-cache.org> #1255: Incorrect URL constructed for requests with an absolute URI in the request line -----------------------+--------------------- Reporter: tstarling | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | -----------------------+--------------------- Changes (by 
Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [2bbb032bf67871d7d5a43a38104d58f747f2e860]) Split absolute URIs into URL and Host: as per RFC2616 5.2 Fixes #1255 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 30 16:57:58 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 30 Jan 2013 16:57:58 -0000 Subject: [Varnish] #1076: Unable to ban on object http status In-Reply-To: <041.502fa3a290b14b0e8496aad415d3f2b1@varnish-cache.org> References: <041.502fa3a290b14b0e8496aad415d3f2b1@varnish-cache.org> Message-ID: <056.8221734f19651e3cc998ee335908cc8c@varnish-cache.org> #1076: Unable to ban on object http status ----------------------+--------------------- Reporter: mha | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Comment (by xani): It is broken in varnish-3.0.3 revision 9e6a70f {{{ # varnishd -V varnishd (varnish-3.0.3 revision 9e6a70f) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2011 Varnish Software AS $ varnishadm 200 ----------------------------- Varnish Cache CLI 1.0 ----------------------------- Linux,2.6.32-279.14.1.el6.x86_64,x86_64,-smalloc,-smalloc,-hclassic Type 'help' for command list. Type 'quit' to close CLI session. 
ban obj.status == 204 106 unknown or unsupported field "obj.status" }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 30 16:58:27 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 30 Jan 2013 16:58:27 -0000 Subject: [Varnish] #1076: Unable to ban on object http status In-Reply-To: <041.502fa3a290b14b0e8496aad415d3f2b1@varnish-cache.org> References: <041.502fa3a290b14b0e8496aad415d3f2b1@varnish-cache.org> Message-ID: <056.b03ddebce4b05e138f6036db1a269774@varnish-cache.org> #1076: Unable to ban on object http status ----------------------+----------------------- Reporter: mha | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Changes (by xani): * status: closed => reopened * resolution: fixed => -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 31 04:37:25 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 31 Jan 2013 04:37:25 -0000 Subject: [Varnish] #1258: Failure to build due to aclocal macros Message-ID: <043.6ea237116f12d12a49b1dd4812cb0105@varnish-cache.org> #1258: Failure to build due to aclocal macros -------------------+-------------------- Reporter: richo | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -------------------+-------------------- When running autogen.sh on my OSX 10.8 system from master on the github mirror, I get: + aclocal -I m4 configure.ac:7: error: 'AM_CONFIG_HEADER': this macro is obsolete. You should use the 'AC_CONFIG_HEADERS' macro instead. /usr/local/Cellar/automake/1.13.1/share/aclocal-1.13/obsolete-err.m4:14: AM_CONFIG_HEADER is expanded from... 
configure.ac:7: the top level I've attached a patch to use the new header (and one which fixes a ton of trailing whitespace I found while I was at it). My branch is also visible with a signed tag at https://github.com/richo/Varnish-Cache/tree/for_varnish/am_header -- Ticket URL: Varnish The Varnish HTTP Accelerator From martin at varnish-software.com Thu Jan 17 18:43:15 2013 From: martin at varnish-software.com (Martin Blix Grydeland) Date: Thu, 17 Jan 2013 18:43:15 -0000 Subject: [Varnish] #1249: Undeterministic results in regsub/regsuball In-Reply-To: <060.cb873914cf0e40f00524997fbcea140f@varnish-cache.org> References: <045.e6dc8e5440e1ef6a8928770a58f0ebd8@varnish-cache.org> <060.cb873914cf0e40f00524997fbcea140f@varnish-cache.org> Message-ID: Unrelated, and this happens because your regex is wrong. You probably want to do something along these lines in the regsub (your substitution has to substitute everything you want to change, and then reinsert the captures that should stay): ban("req.url ~ "+regsuball(req.url, "code=cotn:([^&]+).*$",".*\1.*")); Reclosing. Regards, Martin Blix Grydeland On Thu, Jan 17, 2013 at 6:43 PM, Varnish wrote: > #1249: Undeterministic results in regsub/regsuball > ---------------------+----------------------- > Reporter: arthens | Owner: > Type: defect | Status: reopened > Priority: normal | Milestone: > Component: build | Version: 3.0.3 > Severity: normal | Resolution: > Keywords: | > ---------------------+----------------------- > Changes (by tohide): > > * status: closed => reopened > * resolution: invalid => > > > Comment: > > Apologies, reopening. > > -- > Ticket URL: > Varnish > The Varnish HTTP Accelerator > -- *Martin Blix Grydeland* Senior Developer | Varnish Software AS Cell: +47 21 98 92 60 We Make Websites Fly! -------------- next part -------------- An HTML attachment was scrubbed... 
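Martin's point — the substitution must consume everything you want changed, then re-insert the captures that should stay — can be demonstrated with ordinary PCRE-style regexes. (Python here for an executable demo; the URL and the `cotn` parameter value are invented for illustration, mirroring the shape of his reply.)

```python
import re

url = "/search?code=cotn:abc123&page=2"

# Wrong shape: the pattern stops at the capture, so the trailing
# "&page=2" is left in place and leaks into the resulting expression.
leaky = re.sub(r"code=cotn:([^&]+)", r".*\1.*", url)
print(leaky)   # -> /search?.*abc123.*&page=2

# Martin's shape: consume everything through end-of-string with .*$,
# then re-insert only the capture that should survive.
clean = re.sub(r"code=cotn:([^&]+).*$", r".*\1.*", url)
print(clean)   # -> /search?.*abc123.*
```

The same principle applies to VCL's regsub/regsuball, which use the same backreference syntax (`\1`).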
URL: From varnish-bugs at varnish-cache.org Mon Jan 28 10:01:40 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 28 Jan 2013 10:01:40 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it In-Reply-To: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> References: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> Message-ID: <061.edb233610ba4f78fe607487df9b0f4f8@varnish-cache.org> #1054: Child not responding to CLI, killing it ---------------------------+----------------------- Reporter: scorillo | Owner: lkarsten Type: defect | Status: reopened Priority: normal | Milestone: Component: documentation | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ---------------------------+----------------------- Comment (by andrii.grytsenko): Hi guys, We have the same issue with varnish on all our environments (test/preprod/prod). It's being killed by the parent process at least once a day. It has nothing to do with load, because it fails even in the preproduction environment during the weekend when there is no traffic at all. {{{ Jan 28 05:20:14 proxy-002 varnishd[11866]: Child (31970) not responding to CLI, killing it. Jan 28 05:20:24 proxy-002 varnishd[11866]: Child (31970) not responding to CLI, killing it. Jan 28 05:20:34 proxy-002 varnishd[11866]: Child (31970) not responding to CLI, killing it. Jan 28 05:20:44 proxy-002 varnishd[11866]: Child (31970) not responding to CLI, killing it. Jan 28 05:20:54 proxy-002 varnishd[11866]: Child (31970) not responding to CLI, killing it. Jan 28 05:20:58 proxy-002 varnishd[11866]: Child (31970) not responding to CLI, killing it. Jan 28 05:20:58 proxy-002 varnishd[11866]: Child (31970) not responding to CLI, killing it. 
Jan 28 05:20:58 proxy-002 varnishd[11866]: Child (31970) died signal=3 (core dumped)
Jan 28 05:20:58 proxy-002 varnishd[11866]: child (5972) Started
Jan 28 05:20:58 proxy-002 varnishd[11866]: Child (5972) said Child starts
Jan 28 05:47:24 proxy-002 varnishd[11866]: Child (5972) not responding to CLI, killing it.
Jan 28 05:47:30 proxy-002 varnishd[11866]: Child (5972) not responding to CLI, killing it.
Jan 28 05:47:30 proxy-002 varnishd[11866]: Child (5972) not responding to CLI, killing it.
Jan 28 05:47:30 proxy-002 varnishd[11866]: Child (5972) died signal=3 (core dumped)
Jan 28 05:47:31 proxy-002 varnishd[11866]: child (10110) Started
Jan 28 05:47:31 proxy-002 varnishd[11866]: Child (10110) said Child starts
}}}
I turned on core dumps. Here is the backtrace from gdb:
{{{
gdb `which varnishd` core-varnishd-3-101-102-5972-1359348450
...
...
...
warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7fff8f7fd000
Core was generated by `/usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -'.
Program terminated with signal 3, Quit.
#0  0x000000359ac0d594 in __lll_lock_wait () from /lib64/libpthread.so.0
(gdb) bt
#0  0x000000359ac0d594 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x000000359ac08e8a in _L_lock_1034 () from /lib64/libpthread.so.0
#2  0x000000359ac08d4c in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x000000000043231a in ?? ()
#4  0x00000000004324b8 in ?? ()
#5  0x00000000004326a1 in VSL ()
#6  0x00000000004199be in ?? ()
#7  0x000000359bc0756c in ?? () from /usr/lib64/varnish/libvarnish.so
#8  0x000000359bc07bad in ?? () from /usr/lib64/varnish/libvarnish.so
#9  0x000000359bc0a88d in ?? () from /usr/lib64/varnish/libvarnish.so
#10 0x000000359bc06a22 in VCLS_Poll () from /usr/lib64/varnish/libvarnish.so
#11 0x0000000000419a31 in CLI_Run ()
#12 0x000000000042ca1d in child_main ()
#13 0x000000000043ecbc in ?? ()
#14 0x000000000043f54c in ?? ()
#15 0x000000359bc09567 in ?? () from /usr/lib64/varnish/libvarnish.so
#16 0x000000359bc09bf8 in vev_schedule () from /usr/lib64/varnish/libvarnish.so
#17 0x000000000043ee92 in MGT_Run ()
#18 0x000000000044e1bb in main ()
}}}
Information about the system: OS RHEL 5.8
{{{
cat /etc/issue
Red Hat Enterprise Linux Server release 5.8 (Tikanga)
Kernel \r on an \m
}}}
kernel:
{{{
2.6.18-308.11.1.el5xen
}}}
processors:
{{{
processor0 => Intel(R) Xeon(R) CPU L5420 @ 2.50GHz
processor1 => Intel(R) Xeon(R) CPU L5420 @ 2.50GHz
}}}
rpm package from the official repo (http://repo.varnish-cache.org/redhat/varnish-3.0/el$release/$arch/):
{{{
rpm -qa |grep varnish
varnish-libs-3.0.3-1.el5.centos
varnish-3.0.3-1.el5.centos
}}}
binary:
{{{
file /usr/sbin/varnishd
/usr/sbin/varnishd: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), stripped
}}}
There are no panic messages:
{{{
varnishadm panic.show
Child has not panicked or panic has been cleared
Command failed with error code 300
}}}
Varnish was started with the following parameters:
{{{
varnish 10110 0.0 1.1 2273492 49452 ? Sl 05:47 0:03 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 100,250,120 -u varnish -g varnish -s malloc,3G -h critbit -p thread_pools 2 -p thread_pool_add_delay 2 -p listen_depth 1024 -p session_linger 50 -p lru_interval 2 -p shm_reclen 255 -p sess_timeout 30
root 11866 0.0 0.0 110948 940 ?
Ss Jan25 0:00 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 100,250,120 -u varnish -g varnish -s malloc,3G -h critbit -p thread_pools 2 -p thread_pool_add_delay 2 -p listen_depth 1024 -p session_linger 50 -p lru_interval 2 -p shm_reclen 255 -p sess_timeout 30 }}} varnish stat: {{{ varnishstat -1 client_conn 16258 0.95 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 23568 1.37 Client requests received cache_hit 0 0.00 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 0 0.00 Cache misses backend_conn 106 0.01 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 16621 0.97 Backend conn. reuses backend_toolate 102 0.01 Backend conn. was closed backend_recycle 16727 0.97 Backend conn. recycles backend_retry 0 0.00 Backend conn. retry fetch_head 0 0.00 Fetch head fetch_length 5920 0.34 Fetch with Length fetch_chunked 10807 0.63 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) n_sess_mem 23 . N struct sess_mem n_sess 5 . N struct sess n_object 0 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 9 . N struct objectcore n_objecthead 9 . N struct objecthead n_waitinglist 9 . N struct waitinglist n_vbc 4 . N struct vbc n_wrk 200 . N worker threads n_wrk_create 200 0.01 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_lqueue 0 0.00 work request queue length n_wrk_queued 0 0.00 N queued work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 2 . 
N backends n_expired 0 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_moved 0 . N LRU moved objects losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 37957 2.21 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 16258 0.95 Total Sessions s_req 23568 1.37 Total Requests s_pipe 0 0.00 Total pipe s_pass 16727 0.97 Total pass s_fetch 16727 0.97 Total fetch s_hdrbytes 2988339 173.95 Total header bytes s_bodybytes 2500675296 145565.82 Total body bytes sess_closed 7191 0.42 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 16726 0.97 Session Linger sess_herd 14469 0.84 Session herd shm_records 1266876 73.75 SHM records shm_writes 138871 8.08 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 691 0.04 SHM MTX contention shm_cycles 0 0.00 SHM cycles through buffer sms_nreq 6841 0.40 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 89378 . SMS bytes allocated sms_bfree 89378 . SMS bytes freed backend_req 16727 0.97 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_ban 1 . N total active bans n_ban_gone 1 . 
N total gone bans n_ban_add 1 0.00 N new bans added n_ban_retire 0 0.00 N old bans deleted n_ban_obj_test 0 0.00 N objects tested n_ban_re_test 0 0.00 N regexps tested against n_ban_dups 0 0.00 N duplicate bans removed hcb_nolock 0 0.00 HCB Lookups without lock hcb_lock 0 0.00 HCB Lookups with lock hcb_insert 0 0.00 HCB Inserts esi_errors 0 0.00 ESI parse errors (unlock) esi_warnings 0 0.00 ESI parse warnings (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 17179 1.00 Client uptime dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache vmods 0 . Loaded VMODs n_gzip 0 0.00 Gzip operations n_gunzip 0 0.00 Gunzip operations LCK.sms.creat 3 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 181323 10.55 Lock Operations LCK.sms.colls 0 0.00 Collisions LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.smp.colls 0 0.00 Collisions LCK.sma.creat 6 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 948103 55.19 Lock Operations LCK.sma.colls 0 0.00 Collisions LCK.smf.creat 0 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 0 0.00 Lock Operations LCK.smf.colls 0 0.00 Collisions LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hsl.colls 0 0.00 Collisions LCK.hcb.creat 3 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 845 0.05 Lock Operations LCK.hcb.colls 0 0.00 Collisions LCK.hcl.creat 0 0.00 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 0 0.00 Lock Operations LCK.hcl.colls 0 0.00 Collisions LCK.vcl.creat 3 0.00 Created locks LCK.vcl.destroy 0 0.00 Destroyed locks LCK.vcl.locks 39 0.00 Lock Operations LCK.vcl.colls 0 0.00 Collisions LCK.stat.creat 3 
0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 146498 8.53 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 3 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 154696 9.00 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 3 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 304602 17.73 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 3 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 3 0.00 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 6 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 687062 39.99 Lock Operations LCK.wq.colls 0 0.00 Collisions LCK.objhdr.creat 32 0.00 Created locks LCK.objhdr.destroy 0 0.00 Destroyed locks LCK.objhdr.locks 0 0.00 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 3 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 151859 8.84 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 6 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 0 0.00 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 3 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 50672 2.95 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 3 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 151878 8.84 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 3 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 60615 3.53 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 3 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 1969 0.11 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 6 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 298061 17.35 Lock Operations LCK.backend.colls 0 0.00 Collisions SMA.s0.c_req 6841 0.40 Allocator requests 
SMA.s0.c_fail 0 0.00 Allocator failures SMA.s0.c_bytes 233360192 13584.04 Bytes allocated SMA.s0.c_freed 233360192 13584.04 Bytes freed SMA.s0.g_alloc 0 . Allocations outstanding SMA.s0.g_bytes 0 . Bytes outstanding SMA.s0.g_space 3221225472 . Bytes available SMA.Transient.c_req 47843 2.78 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 3323928244 193487.88 Bytes allocated SMA.Transient.c_freed 3323928244 193487.88 Bytes freed SMA.Transient.g_alloc 0 . Allocations outstanding SMA.Transient.g_bytes 0 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.preprod_001(172.19.xx.xx,,8080).vcls 3 . VCL references VBE.preprod_001(172.19.xx.xx,,8080).happy18446744073709551615 . Happy health probes VBE.preprod_002(172.19.xx.yy,,8080).vcls 3 . VCL references VBE.preprod_002(172.19.xx.yy,,8080).happy18446744073709551615 . Happy health probes }}} This server is not caching anything; it only balances traffic between a couple of backends. Apparently there was no significant load on the box at the time of the crash: memory:
{{{
05:20:01 AM kbmemfree kbmemused %memused kbbuffers kbcached kbswpfree kbswpused %swpused kbswpcad
05:30:01 AM    449312   3744992    89.29    256468   2058492   2097144         0     0.00        0
05:40:01 AM    448924   3745380    89.30    256468   2058616   2097144         0     0.00        0
05:50:01 AM    442336   3751968    89.45    256496   2066160   2097144         0     0.00        0
}}}
CPU:
{{{
05:20:01 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
05:30:01 AM  all   0.09   0.00     0.15    14.53    0.02  85.21
05:40:01 AM  all   0.09   0.00     0.12     3.58    0.02  96.19
05:50:01 AM  all   0.03   0.00     0.12     3.62    0.02  96.21
}}}
Disk IO:
{{{
05:20:01 AM    tps   rtps   wtps  bread/s  bwrtn/s
05:30:01 AM  19.23   0.00  19.23     0.00   199.72
05:40:01 AM   9.37   0.00   9.37     0.00   110.86
05:50:01 AM  16.73   0.00  16.73     0.00   169.21
}}}
If you need more details, please do not hesitate to contact me. Thank you!
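[Editorial note] The `??` frames in the backtrace are a consequence of the stripped binary noted above. On a RHEL/CentOS-style host the symbols can usually be recovered roughly as follows; this is a sketch, and the exact debuginfo package names and the availability of a debuginfo repository are assumptions:

```shell
# Install debug symbols for the stripped varnishd binary and its libraries
# (debuginfo-install ships with yum-utils; package names are assumptions).
debuginfo-install -y varnish varnish-libs

# Re-open the core file with symbols and dump every thread's stack.
# When one thread is stuck in pthread_mutex_lock, the lock holder is
# usually visible in one of the other threads' backtraces.
gdb /usr/sbin/varnishd core-varnishd-3-101-102-5972-1359348450 \
    -batch -ex 'thread apply all bt'
```

These commands require root and the actual core file, so they are operational rather than reproducible here.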
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 28 11:24:57 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 28 Jan 2013 11:24:57 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it In-Reply-To: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> References: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> Message-ID: <061.3a676882822ab535efdf790d27b40db2@varnish-cache.org> #1054: Child not responding to CLI, killing it ---------------------------+----------------------- Reporter: scorillo | Owner: lkarsten Type: defect | Status: reopened Priority: normal | Milestone: Component: documentation | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ---------------------------+----------------------- Comment (by lkarsten): Thank you for the information. Can you please increase cli_timeout and see if this problem persists? 30 seconds should be sufficient. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 30 09:24:52 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 30 Jan 2013 09:24:52 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it In-Reply-To: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> References: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> Message-ID: <061.85e9fa632eec6e6cb276ce26aae5f874@varnish-cache.org> #1054: Child not responding to CLI, killing it ---------------------------+----------------------- Reporter: scorillo | Owner: lkarsten Type: defect | Status: reopened Priority: normal | Milestone: Component: documentation | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ---------------------------+----------------------- Comment (by andrii.grytsenko): Thank you for your answer. I increased cli_timeout to 30 seconds, but the child keeps failing anyway.
{{{
Jan 29 12:59:00 proxy-001 varnishd[14728]: CLI telnet 127.0.0.1 57463 127.0.0.1 6082 Wr 200 cli_timeout 30 [seconds]#012 Default is 10#012 Timeout for the childs replies to CLI requests#012 from the master.
Jan 29 12:59:04 proxy-001 varnishd[14728]: CLI telnet 127.0.0.1 57463 127.0.0.1 6082 Rd quit
Jan 29 12:59:04 proxy-001 varnishd[14728]: CLI telnet 127.0.0.1 57463 127.0.0.1 6082 Wr 500 Closing CLI connection
Jan 30 04:39:31 proxy-001 varnishd[14728]: Child (31581) not responding to CLI, killing it.
Jan 30 04:39:32 proxy-001 varnishd[14728]: Child (31581) not responding to CLI, killing it.
Jan 30 04:39:32 proxy-001 varnishd[14728]: Child (31581) died signal=3
Jan 30 04:39:32 proxy-001 varnishd[14728]: child (16605) Started
Jan 30 04:39:32 proxy-001 varnishd[14728]: Child (16605) said Child starts
}}}
By the way, Varnish runs inside a Xen virtual machine; maybe that has something to do with these crashes. Thanks -- Ticket URL: Varnish The Varnish HTTP Accelerator
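[Editorial note] For reference, the cli_timeout change discussed in this thread can be applied either at runtime or persistently. A sketch, assuming the Varnish 3.x management CLI listening on 127.0.0.1:6082 as in this setup:

```shell
# Runtime change via the management CLI (lost when varnishd restarts):
varnishadm -T 127.0.0.1:6082 param.set cli_timeout 30

# Verify the new value:
varnishadm -T 127.0.0.1:6082 param.show cli_timeout

# Persistent form: add the parameter to the varnishd startup flags,
# e.g. in the init script or sysconfig file:
#   varnishd ... -p cli_timeout=30
```

Note that raising cli_timeout only widens the window before the parent kills the child; with the child deadlocked as in the backtrace above, no timeout value would have let it answer.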