From varnish-bugs at projects.linpro.no Thu May 1 20:49:29 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 01 May 2008 20:49:29 -0000 Subject: [Varnish] #236: Past-date Expires is not handled correctly Message-ID: <053.c78f5a31201fd3e4e808ade9133318c7@projects.linpro.no> #236: Past-date Expires is not handled correctly ---------------------+------------------------------------------------------ Reporter: newbery | Owner: des Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Keywords: ---------------------+------------------------------------------------------ Using the 1.1.2 release, lines 84-86 in rfc2616.c show this pseudocode: if (date && expires) retirement_age = max(0, min(retirement_age, Expires: - Date:)) But in lines 146-145, we have, if (h_date != 0 && h_expires != 0) { if (h_date < h_expires && h_expires - h_date < retirement_age) retirement_age = h_expires - h_date; } This appears to impose an extra requirement that Expires must be greater than Date, which breaks the use case where Expires is deliberately set to a past date as a shorthand for "do not cache". At the moment, the only workaround is to set a default TTL of zero seconds at Varnish startup. It would be better to fix the code to abide by the apparent intent of the pseudocode. 
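A minimal sketch of that apparent intent (illustrative, self-contained names, not the actual rfc2616.c source): when both headers are present, the TTL cap is max(0, Expires - Date), so a past-date Expires yields a TTL of zero instead of being ignored.

```c
#include <time.h>

/* Sketch of the intended rule from the pseudocode above: cap the TTL
 * at (Expires - Date), clamped at zero, so an Expires at or before
 * Date: means "do not cache".  Illustrative only, not rfc2616.c. */
static time_t
clamp_ttl(time_t retirement_age, time_t h_date, time_t h_expires)
{
    if (h_date != 0 && h_expires != 0) {
        time_t delta = (h_expires > h_date) ? h_expires - h_date : 0;
        if (delta < retirement_age)
            retirement_age = delta;
    }
    return (retirement_age);
}
```

With this version, a past-date Expires (h_expires before h_date) clamps the TTL to 0, whereas the shipped code would leave the default TTL untouched.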
This issue was discussed on the list: http://projects.linpro.no/pipermail/varnish-misc/2008-April/thread.html#1695 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri May 2 03:52:17 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 02 May 2008 03:52:17 -0000 Subject: [Varnish] #188: thread pileup In-Reply-To: <054.501398c9432dffc43debd3778be2327e@projects.linpro.no> References: <054.501398c9432dffc43debd3778be2327e@projects.linpro.no> Message-ID: <063.5efb47821db92a0b0289d5e431b32d39@projects.linpro.no> #188: thread pileup ----------------------+----------------------------------------------------- Reporter: steinove | Owner: phk Type: defect | Status: assigned Priority: high | Milestone: Component: varnishd | Version: 1.1.1 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by marcusgrando): People, Please see http://varnish.projects.linpro.no/ticket/235. Maybe that's why the threads eat 100% CPU. 
Regards -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri May 2 04:03:55 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 02 May 2008 04:03:55 -0000 Subject: [Varnish] #235: Varnish Linux performance In-Reply-To: <057.939e5a37a22cf5a6a639947d809cd9b1@projects.linpro.no> References: <057.939e5a37a22cf5a6a639947d809cd9b1@projects.linpro.no> Message-ID: <066.36bfcd3beae78b54d1aaae8cebc3e683@projects.linpro.no> #235: Varnish Linux performance -------------------------+-------------------------------------------------- Reporter: rafaelumann | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: performance | -------------------------+-------------------------------------------------- Comment (by marcusgrando): I really think the cleanup needs to be done by another thread, keeping the acceptor thread leaner. I don't know if some values will need protecting with a mutex or something, but keeping the accept and response threads fast is what matters. In the real world, unlike with ab(1), clients have different internet connections and many clients can lose their TCP connection. 
Best regards -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon May 5 06:52:45 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 05 May 2008 06:52:45 -0000 Subject: [Varnish] #237: Varnish crashes on assert error in http_dissect_hdrs(), cache_http.c line 375 Message-ID: <052.b912108d70e8121ab174125f63fe3742@projects.linpro.no> #237: Varnish crashes on assert error in http_dissect_hdrs(), cache_http.c line 375 ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: varnishd core dump http_dissect_hdrs ----------------------+----------------------------------------------------- After running Varnish trunk (up to date with commit 2629) for a good number of days, it finally crashed: {{{ Child said (2, 2305): <> Child said (2, 2305): << Condition(p <= t.e) not true.>> Child said (2, 2305): << errno = 35 (Resource temporarily unavailable)>> }}} Unfortunately I could not generate a backtrace, as the core dump was too big and I ran out of disk space. I am doing this on FreeBSD 7.0-RELEASE/amd64 with the ULE scheduler. I have one local patch in storage_file.c that sets the page size to 2048 bytes, and I start Varnish with these parameters: -p obj_workspace=4096 -p lru_interval=3600 -h classic,500009 -p ping_interval=0 -p cli_timeout=30 -p auto_restart=off -p thread_pools=4 -p thread_pool_max=1000 -p listen_depth=4096 -p srcaddr_hash=20480 -p default_ttl=604800 -s malloc,100G -P /var/run/varnishd.pid At the time of the crash, I had: * 4.71 million objects, 216.29 GB data in the cache. * 56 GB swap in use. 
My VCL: {{{ backend default { .host = "192.168.110.1"; .port = "80"; } acl purge { "192.168.100.1"/32; } sub vcl_recv { set req.grace = 5m; if (req.http.host ~ "^(bars.*.foo.no|bazcache.foo.no)$") { if (req.request == "GET" || req.request == "HEAD") { lookup; } elsif (req.request == "PURGE") { if (client.ip ~ purge) { lookup; } else { error 405 "Not allowed."; } } else { pipe; } } else { error 403 "Access denied. Contact cacheadmin at foo.no if you have problems."; } } sub vcl_miss { if (req.request ~ "^(PURGE)$") { error 404 "Not in cache."; } else { fetch; } } sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } else { if (!obj.cacheable) { pass; } else { deliver; } } } sub vcl_fetch { set obj.grace = 5m; if (obj.status == 404 || obj.status == 401 || obj.status == 500) { pass; } if (!obj.valid) { error obj.status; } if (!obj.cacheable) { pass; } if (obj.http.Cookie) { remove obj.http.Cookie; } if (obj.http.Set-Cookie) { remove obj.http.Set-Cookie; } insert; } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue May 6 18:59:58 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 06 May 2008 18:59:58 -0000 Subject: [Varnish] #230: Set bereq.host = backend.host if backend host is domain name (instead of ip address) In-Reply-To: <050.f38bc564612b70770d5cc97096c8e88b@projects.linpro.no> References: <050.f38bc564612b70770d5cc97096c8e88b@projects.linpro.no> Message-ID: <059.2af456e01b12d061ec071ca64aca96aa@projects.linpro.no> #230: Set bereq.host = backend.host if backend host is domain name (instead of ip address) ----------------------+----------------------------------------------------- Reporter: runa | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: wontfix Keywords: | ----------------------+----------------------------------------------------- Comment (by runa): Sorry, I don't agree: 
there's a reason why the system admin used a hostname instead of an IP address in the backend definition. I think we should honor that. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue May 6 19:12:16 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 06 May 2008 19:12:16 -0000 Subject: [Varnish] #230: Set bereq.host = backend.host if backend host is domain name (instead of ip address) In-Reply-To: <050.f38bc564612b70770d5cc97096c8e88b@projects.linpro.no> References: <050.f38bc564612b70770d5cc97096c8e88b@projects.linpro.no> Message-ID: <059.a774fc80bf0b058c3793815dac7ab7ff@projects.linpro.no> #230: Set bereq.host = backend.host if backend host is domain name (instead of ip address) ----------------------+----------------------------------------------------- Reporter: runa | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: wontfix Keywords: | ----------------------+----------------------------------------------------- Comment (by phk): Apparently you misunderstood my answer, let me try again: If the request has a Host: header, we send that to the backend unmodified. (If it needs to be modified, do so in VCL.) If the request has no Host: header (or it was removed by VCL), we add a Host: header before sending the request to the backend, using the hostname from the backend specification, as a string. In other words: if we are about to send a request to the backend and it has no Host: header, we add a Host: header using the exact string specified in VCL, whatever format it has. For code references see: cache_http.c line 662-664 and cache_backend.c line 126-136. 
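The rule phk describes can be sketched as follows (illustrative, standalone code, not the actual cache_http.c / cache_backend.c logic): pass an existing Host: header through verbatim, otherwise synthesize one from the backend specification's hostname string, whatever its format.

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the Host: fallback rule described above (illustrative,
 * not Varnish source): if the request reaches the backend path with
 * no Host: header, build one from the hostname string in the backend
 * specification, used verbatim whether it is a name or an IP literal. */
static void
bereq_host_header(const char *req_host, const char *backend_hostname,
    char *buf, size_t buflen)
{
    if (req_host != NULL)
        snprintf(buf, buflen, "Host: %s", req_host);         /* pass through */
    else
        snprintf(buf, buflen, "Host: %s", backend_hostname); /* fallback */
}
```

So with a hypothetical `backend default { .host = "backend.example.net"; ... }`, a Host:-less request would go to the backend carrying `Host: backend.example.net`.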
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue May 6 22:04:16 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 06 May 2008 22:04:16 -0000 Subject: [Varnish] #223: fixed redhat varnishlog init script and added one for varnishncsa In-Reply-To: <052.0a72877af7628ee3eb86ab4b4655aad2@projects.linpro.no> References: <052.0a72877af7628ee3eb86ab4b4655aad2@projects.linpro.no> Message-ID: <061.9a84f096ec09fced099710468292e62e@projects.linpro.no> #223: fixed redhat varnishlog init script and added one for varnishncsa -------------------------------------------+-------------------------------- Reporter: hillct | Owner: des Type: defect | Status: new Priority: normal | Milestone: Varnish 1.2 Component: packaging | Version: trunk Severity: normal | Resolution: Keywords: varnishlog varnishncsa redhat | -------------------------------------------+-------------------------------- Changes (by phk): * milestone: Varnish 2.0 code complete => Varnish 1.2 Comment: Milestone changed per hillct request -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu May 8 23:26:47 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 08 May 2008 23:26:47 -0000 Subject: [Varnish] #238: regsub only replaces one occurrence Message-ID: <053.504a720132fa7a84012e00981861c60d@projects.linpro.no> #238: regsub only replaces one occurrence ----------------------+----------------------------------------------------- Reporter: galaxor | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Keywords: ----------------------+----------------------------------------------------- In the manpage, it says that regsub "[r]eturns a copy of str with all occurrences of the regular expression regex replaced with sub". In reality, it only replaces the *first* occurrence. 
Since VCL doesn't have loops, it is highly inconvenient to replace all occurrences. (It could probably be done by including some C, I guess...?) This patch brings the reality in line with the documentation, by making regsub replace *all* occurrences. It's a patch against r2617. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri May 9 23:07:07 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 09 May 2008 23:07:07 -0000 Subject: [Varnish] #239: Varnish runtime VCL compile hack on solaris 10/opensolaris Message-ID: <053.8b793790e604bcc73ad5bac9e78eeb65@projects.linpro.no> #239: Varnish runtime VCL compile hack on solaris 10/opensolaris ----------------------+----------------------------------------------------- Reporter: victori | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: blocker | Keywords: ----------------------+----------------------------------------------------- I am not a C programmer, so I just created this hack to fix the VCL runtime compile issue. mgt_cc_cmd defaults to "exec cc -fpic -shared -Wl,-x -o %o %s"; on Solaris the -Wl,-x flags are invalid, and even if they are removed it produces an invalid linker file. My fix was to change the runtime arguments to "exec cc -shared -fpic -c %o %s", which seems to work: Varnish starts up and runs. printf("\nOriginal cmd: %s\n",mgt_cc_cmd); char *foo = "exec cc -shared -fpic -c %o %s"; printf("\nUsing cmd: %s\n",foo); for (p = foo, pct = 0; *p; ++p) {..... 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu May 15 09:04:22 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 15 May 2008 09:04:22 -0000 Subject: [Varnish] #132: Varnish 1.1 dies with assert error in SES_Delete In-Reply-To: <052.fe8cf8964ac10aa02472af7aa5d37aad@projects.linpro.no> References: <052.fe8cf8964ac10aa02472af7aa5d37aad@projects.linpro.no> Message-ID: <061.4e714ad13f6ab42b0dc1d29c442e684a@projects.linpro.no> #132: Varnish 1.1 dies with assert error in SES_Delete ----------------------------------+----------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: closed Priority: high | Milestone: Varnish 1.1.2 Component: varnishd | Version: 1.1 Severity: major | Resolution: fixed Keywords: core dump SES_Delete | ----------------------------------+----------------------------------------- -- Ticket URL: Varnish The Varnish HTTP Accelerator From MAILER-DAEMON at projects.linpro.no Thu May 15 09:04:26 2008 From: MAILER-DAEMON at projects.linpro.no (Mail Delivery System) Date: Thu, 15 May 2008 11:04:26 +0200 (CEST) Subject: Undelivered Mail Returned to Sender Message-ID: <20080515090426.CB34B1ED230@projects.linpro.no> This is the mail system at host projects.linpro.no. I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to postmaster. If you do so, please include this problem report. You can delete your own text from the attached returned message. 
The mail system : host sohumx.h.a.sohu.com[61.135.132.110] said: 550 5.1.1 : Recipient address rejected: User unknown in local recipient table (in reply to RCPT TO command) -------------- next part -------------- An embedded message was scrubbed... From: "Varnish" Subject: Re: [Varnish] #132: Varnish 1.1 dies with assert error in SES_Delete Date: Thu, 15 May 2008 09:04:22 -0000 Size: 2651 URL: From varnish-bugs at projects.linpro.no Thu May 15 11:29:00 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 15 May 2008 11:29:00 -0000 Subject: [Varnish] #19: HTTP purge In-Reply-To: <049.8f39fc8f66a2142171f0ae9fa5886ddc@projects.linpro.no> References: <049.8f39fc8f66a2142171f0ae9fa5886ddc@projects.linpro.no> Message-ID: <058.c003ceed488cc13e28dcfad2f3fb33f4@projects.linpro.no> #19: HTTP purge -------------------------+-------------------------------------------------- Reporter: des | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: duplicate Keywords: | -------------------------+-------------------------------------------------- -- Ticket URL: Varnish The Varnish HTTP Accelerator From MAILER-DAEMON at projects.linpro.no Thu May 15 11:29:05 2008 From: MAILER-DAEMON at projects.linpro.no (Mail Delivery System) Date: Thu, 15 May 2008 13:29:05 +0200 (CEST) Subject: Undelivered Mail Returned to Sender Message-ID: <20080515112905.D80D81ED29D@projects.linpro.no> This is the mail system at host projects.linpro.no. I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to postmaster. 
If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system : host sohumx.h.a.sohu.com[61.135.132.110] said: 550 5.1.1 : Recipient address rejected: User unknown in local recipient table (in reply to RCPT TO command) -------------- next part -------------- An embedded message was scrubbed... From: "Varnish" Subject: Re: [Varnish] #19: HTTP purge Date: Thu, 15 May 2008 11:29:00 -0000 Size: 2455 URL: From varnish-bugs at projects.linpro.no Thu May 15 21:23:30 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 15 May 2008 21:23:30 -0000 Subject: [Varnish] #240: Second and subsequent ESI includes are not cached Message-ID: <048.96d1a92b1118c5e88d7130ed74d67c36@projects.linpro.no> #240: Second and subsequent ESI includes are not cached ----------------------+----------------------------------------------------- Reporter: jt | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Keywords: esi include ----------------------+----------------------------------------------------- If more than one ESI include directive is placed in a page, the second and subsequent includes will not be cached properly. * Running latest varnish trunk[[BR]] * The behavior is the same regardless of the size of max-age and the order in which the includes are placed.[[BR]] * The containing document is cached properly, as is the first include[[BR]] * The ESI objects are inserted, and seem to expire on schedule, but subsequent lookups which should hit the cache entries do not[[BR]] * An observation, which could easily be a mis-correlation: second and subsequent requests for ESI fragments are logged with an 'XID' of '0'. I do not know if this is intentional or perhaps indicative of the underlying problem. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon May 19 23:18:20 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 19 May 2008 23:18:20 -0000 Subject: [Varnish] #240: Second and subsequent ESI includes are not cached In-Reply-To: <048.96d1a92b1118c5e88d7130ed74d67c36@projects.linpro.no> References: <048.96d1a92b1118c5e88d7130ed74d67c36@projects.linpro.no> Message-ID: <057.02771c3b15e332db7b10577adf5eab80@projects.linpro.no> #240: Second and subsequent ESI includes are not cached -------------------------+-------------------------------------------------- Reporter: jt | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: esi include | -------------------------+-------------------------------------------------- Comment (by jt): Additional findings. * I can see the missed includes expire on time, ONE PER REQUEST. For example, on the ESI page with two includes, one of which is missed, if I load the page ten times in twenty seconds, I will see ten objects expire (in addition to the main document and the first include). The objects are obviously stored, but still miss on further requests. * The object goes to the MISS step in cache_center.c cnt_lookup(), where there is a check on sp->obj->busy. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed May 21 12:47:59 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 21 May 2008 12:47:59 -0000 Subject: [Varnish] #145: Issues while loading VCL code on the fly In-Reply-To: <052.cbf0da58c87c745a45933c736d50eec2@projects.linpro.no> References: <052.cbf0da58c87c745a45933c736d50eec2@projects.linpro.no> Message-ID: <061.ab00b22a16166ca9be052c7df440ca51@projects.linpro.no> #145: Issues while loading VCL code on the fly -------------------------------+-------------------------------------------- Reporter: anders | Owner: des Type: defect | Status: assigned Priority: normal | Milestone: Component: varnishd | Version: 1.1 Severity: normal | Resolution: Keywords: varnishd load VCL | -------------------------------+-------------------------------------------- Comment (by anders): This problem still exists. I checked with trunk/2635. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed May 21 13:36:22 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 21 May 2008 13:36:22 -0000 Subject: [Varnish] #241: Varnish gets stuck, stops responding Message-ID: <052.7305c1bbe0e17a9adeb70af90b7b47a0@projects.linpro.no> #241: Varnish gets stuck, stops responding ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- I am running Varnish/trunk, up to date with commit 2625, on FreeBSD/amd64 7.0-RELEASE with SCHED_ULE on Intel hardware. 
Today, my Varnish server stopped responding (just getting timeouts) while spending 100% CPU usage on one CPU (this is a SMP system with two cores): {{{ last pid: 54762; load averages: 1.02, 1.02, 0.98 up 47+23:32:28 15:18:54 33 processes: 2 running, 31 sleeping CPU states: % user, % nice, % system, % interrupt, % idle Mem: 940M Active, 4784K Inact, 204M Wired, 51M Cache, 214M Buf, 1762M Free Swap: 6144M Total, 76K Used, 6144M Free PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 6894 root 6 4 0 41235M 909M sbwait 1 19.2H 100.00% varnishd }}} My VCL: {{{ backend ai { .host = "192.168.37.230"; .port = "80"; } backend tux { .host = "192.168.38.10"; .port = "80"; } backend tjenester { .host = "192.168.38.4"; .port = "80"; } backend tux2 { .host = "192.168.38.12"; .port = "80"; } backend tux3 { .host = "192.168.38.13"; .port = "80"; } acl aipurge { "192.168.37.211"/32; /* aicache1 */ "192.168.40.240"/32; /* aicacheadmin1 */ "192.168.40.241"/32; /* aicacheadmin2 */ "192.168.33.33"/32; /* home.schibsted.no (nagios) */ } sub vcl_recv { set req.grace = 1m; if (req.http.host ~ "^(aicache.+.aftenposten.no|cache.aftenposten.no|www.aftenposten.no|aftenposten.no|forbruker.no|www.forbruker.no|oslopuls.no|www.oslopuls.no|e24.no|www.e24.no|hyttemag.no|www.hyttemag.no|www.ap.no|ap.no|www.golf.no|golf.no)$") { set req.backend = ai; } elsif (req.http.host ~ "^(tuxcache.aftenposten.no)$") { set req.backend = tux; } elsif (req.http.host ~ "^(tjenestercache.aftenposten.no)$") { set req.backend = tjenester; } elsif (req.http.host ~ "^(www.aguiden.no|aguiden.no|www.aftenpostenguiden.no|aftenpostenguiden.no)$") { set req.backend = tux2; } elsif (req.http.host ~ "^(www.adressaguiden.no|adressaguiden.no|www.btguiden.no|btguiden.no|www.f-guiden.no|f-guiden.no|www.aftenbladguiden.no|aftenbladguiden.no|www.t-aguiden.no|t-aguiden.no)$") { set req.backend = tux3; } else { error 403 "Access denied. Contact cacheadmin at aftenposten.no if you have problems. 
Please indicate which OS, browser, browser version and URL you are using."; } if (req.request == "GET" || req.request == "HEAD") { if (req.http.Expect) { pipe; } if (req.http.Authenticate) { pass; } lookup; } elsif (req.request == "PURGE") { if (client.ip ~ aipurge) { lookup; } else { error 405 "Not allowed."; } } else { pipe; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache."; } else { fetch; } } sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } else { if (!obj.cacheable) { pass; } else { deliver; } } } sub vcl_fetch { set obj.grace = 1m; if (obj.status == 404 || obj.status == 503) { pass; } if (obj.http.host ~ "^(aicache.+.aftenposten.no|cache.aftenposten.no|www.aftenposten.no|aftenposten.no|forbruker.no|www.forbruker.no|oslopuls.no|www.oslopuls.no|e24.no|www.e24.no|hyttemag.no|www.hyttemag.no|www.ap.no|ap.no|www.golf.no|golf.no)$") { if (obj.http.Set-Cookie) { remove obj.http.Set-Cookie; } } if (!obj.valid) { error obj.status; } if (!obj.cacheable) { pass; } if (obj.ttl < 120s) { set obj.ttl = 120s; } insert; } sub vcl_hash { set req.hash += req.url; if (req.http.host ~ "^(aicache.*|cache).aftenposten.no$") { set req.hash += "www.aftenposten.no"; } else { set req.hash += req.http.host; } hash; } }}} Backtrace: {{{ [New Thread 0x12070e0290 (LWP 100265)] [New Thread 0x800f016e0 (LWP 100228)] [New Thread 0x800f01570 (LWP 100227)] [New Thread 0x800f01400 (LWP 100226)] [New Thread 0x800f01290 (LWP 100203)] [New Thread 0x800f01120 (LWP 100136)] Loaded symbols for /lib/libthr.so.3 Reading symbols from /lib/libm.so.5...done. Loaded symbols for /lib/libm.so.5 Reading symbols from /lib/libc.so.7...done. Loaded symbols for /lib/libc.so.7 Error while reading shared library symbols: ./vcl.1P9zoqAU.o: No such file or directory. Reading symbols from /libexec/ld-elf.so.1...done. 
Loaded symbols for /libexec/ld-elf.so.1 [Switching to Thread 0x12070e0290 (LWP 100265)] 0x0000000800d98d5a in read () from /lib/libc.so.7 (gdb) bt #0 0x0000000800d98d5a in read () from /lib/libc.so.7 #1 0x0000000800a92710 in read () from /lib/libthr.so.3 #2 0x0000000000415927 in HTC_Rx (htc=0x7ffff57aa9f0) at cache_httpconn.c:171 #3 0x0000000000410cda in Fetch (sp=0x1207879008) at cache_fetch.c:356 #4 0x000000000040c723 in cnt_fetch (sp=0x1207879008) at cache_center.c:338 #5 0x000000000040d462 in CNT_Session (sp=0x1207879008) at steps.h:41 #6 0x00000000004172c8 in wrk_do_one (w=0x7ffff57acad0) at cache_pool.c:194 #7 0x000000000041758d in wrk_thread (priv=0x800f0f160) at cache_pool.c:247 #8 0x0000000800a93a38 in pthread_getprio () from /lib/libthr.so.3 #9 0x00007ffff55ad000 in ?? () Error accessing memory address 0x7ffff57ad000: Bad address. (gdb) frame 3 #3 0x0000000000410cda in Fetch (sp=0x1207879008) at cache_fetch.c:356 356 i = HTC_Rx(htc); (gdb) print *hp $1 = {magic = 1680389577, ws = 0x1206915018, conds = 0 '\0', logtag = HTTP_Tx, status = 0, hd = {{b = 0x430182 "GET", e = 0x430185 ""}, { b = 0x12078797a7 "/jobbflash/prodFinn.jsp", e = 0x12078797be ""}, { b = 0x4301fd "HTTP/1.1", e = 0x430205 ""}, {b = 0x0, e = 0x0}, {b = 0x0, e = 0x0}, {b = 0x12078797c9 "Accept: */*", e = 0x12078797d4 ""}, { b = 0x12078797d6 "Accept-Language: nb-NO", e = 0x12078797ec ""}, { b = 0x12078797ee "Referer: http://tuxcache.aftenposten.no/jobbflash/flash/bannerResize.swf", e = 0x1207879836 ""}, { b = 0x1207879838 "x-flash-version: 9,0,115,0", e = 0x1207879852 ""}, { b = 0x1207879854 "UA-CPU: x86", e = 0x120787985f ""}, { b = 0x1207879861 "Accept-Encoding: gzip, deflate", e = 0x120787987f ""}, { b = 0x1207879881 "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)", e = 0x12078798fd ""}, {b = 0x12078798ff "Host: tuxcache.aftenposten.no", e = 0x120787991c ""}, { b = 0x1207879936 "Cookie: RMID=3e65ce3448106fa0; 
RMFL=011JyLWHU20OWI|U20Oib; __utma=132922206.360310670.1209112880.1211366073.1211369561.66; __utmz=132922206.1211369561.66.65.utmcsr=startsiden.no|utmccn=(referral)|utmc"..., e = 0x1207879a99 ""}, {b = 0x1206915298 "X-Varnish: 307859345", e = 0x12069152ac ""}, { b = 0x12069152ad "X-Forwarded-for: 62.101.206.52", e = 0x12069152cb ""}, {b = 0x0, e = 0x0} }, hdf = '\0' , nhd = 16} (gdb) print *sp $2 = {magic = 741317722, fd = 122, id = 122, xid = 307859345, restarts = 0, esis = 0, wrk = 0x7ffff57acad0, sockaddrlen = 16, mysockaddrlen = 128, sockaddr = 0x1207879690, mysockaddr = 0x1207879710, addr = 0x1207879790 "62.101.206.52", port = 0x120787979e "9744", srcaddr = 0x1207035ed0, doclose = 0x0, http = 0x12078791e8, http0 = 0x1207879430, ws = {{magic = 905626964, id = 0x433f78 "sess", s = 0x1207879790 "62.101.206.52", f = 0x1207879b05 ": Wed, 21 May 2008 11:33:55 GMT", r = 0x0, e = 0x120787b790 "", overflow = 0}}, ws_ses = 0x12078797a3 "GET", ws_req = 0x1207879a9d "", htc = {{magic = 1041886673, fd = 122, ws = 0x1207879070, rxbuf = {b = 0x12078797a3 "GET", e = 0x1207879a9d ""}, pipeline = {b = 0x0, e = 0x0}}}, t_open = 1211369635.5090055, t_req = 1211369635.5951691, t_resp = nan(0x8000000000000), t_end = 1211369635.5090055, grace = 60, step = STP_FETCH, cur_method = 0, handling = 32, sendbody = 0 '\0', wantbody = 1 '\001', err_code = 0, err_reason = 0x0, list = { vtqe_next = 0x1207767008, vtqe_prev = 0x1207d34130}, director = 0x800f16348, backend = 0x800f40320, bereq = 0x1206915000, obj = 0x807824000, objhead = 0x0, vcl = 0x1206705480, mem = 0x1207879000, workreq = {list = {vtqe_next = 0x0, vtqe_prev = 0x0}, sess = 0x1207879008}, acct = {first = 1211369635.4965239, sess = 1, req = 1, pipe = 0, pass = 0, fetch = 0, hdrbytes = 176, bodybytes = 0}, nhashptr = 12, ihashptr = 4, lhashptr = 49, hashptr = 0x1207879aa0} }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed May 21 14:00:47 2008 From: varnish-bugs at 
projects.linpro.no (Varnish) Date: Wed, 21 May 2008 14:00:47 -0000 Subject: [Varnish] #155: varnishncsa -b results in empty output In-Reply-To: <052.15db299ad6ec4d546be3f9c42090b487@projects.linpro.no> References: <052.15db299ad6ec4d546be3f9c42090b487@projects.linpro.no> Message-ID: <061.d35b43ea4101c1c828f97fe47fade61f@projects.linpro.no> #155: varnishncsa -b results in empty output -------------------------+-------------------------------------------------- Reporter: anders | Owner: des Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishncsa | Version: trunk Severity: normal | Resolution: Keywords: varnishncsa | -------------------------+-------------------------------------------------- Comment (by anders): Testing with trunk/2625 and trunk/2635, this problem no longer exists for me on FreeBSD 7.0 SMP (SCHED_ULE) systems. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri May 23 11:09:39 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 23 May 2008 11:09:39 -0000 Subject: [Varnish] #241: Varnish gets stuck, stops responding In-Reply-To: <052.7305c1bbe0e17a9adeb70af90b7b47a0@projects.linpro.no> References: <052.7305c1bbe0e17a9adeb70af90b7b47a0@projects.linpro.no> Message-ID: <061.a5637a3dcbe5e91c931dca05da251d10@projects.linpro.no> #241: Varnish gets stuck, stops responding ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by anders): I attached a one-second ktrace of what Varnish was doing while it was stuck. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon May 26 08:14:37 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 26 May 2008 08:14:37 -0000 Subject: [Varnish] #241: Varnish gets stuck, stops responding In-Reply-To: <052.7305c1bbe0e17a9adeb70af90b7b47a0@projects.linpro.no> References: <052.7305c1bbe0e17a9adeb70af90b7b47a0@projects.linpro.no> Message-ID: <061.7eb4105625a90dd2eca0b9eca8a0653d@projects.linpro.no> #241: Varnish gets stuck, stops responding ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by phk): Hi Anders, Sorry about the delay. The ktrace shows that Varnish ran out of file descriptors: {{{ 6894 100228 varnishd 1211375024.912964 CALL accept(0x7,0x7fffff5fbf00,0x7fffff5fbf8c) 6894 100228 varnishd 1211375024.912972 RET accept -1 errno 24 Too many open files 6894 100228 varnishd 1211375024.912980 CALL poll(0x1206801078,0x1,0x3e8) 6894 100228 varnishd 1211375024.912987 RET poll 1 6894 100228 varnishd 1211375024.912994 CALL clock_gettime(0,0x7fffff5fbed0) 6894 100228 varnishd 1211375024.913003 RET clock_gettime 0 6894 100228 varnishd 1211375024.913009 CALL accept(0x7,0x7fffff5fbf00,0x7fffff5fbf8c) 6894 100228 varnishd 1211375024.913017 RET accept -1 errno 24 Too many open files 6894 100228 varnishd 1211375024.913025 CALL poll(0x1206801078,0x1,0x3e8) 6894 100228 varnishd 1211375024.913033 RET poll 1 6894 100228 varnishd 1211375024.913040 CALL clock_gettime(0,0x7fffff5fbed0) 6894 100228 varnishd 1211375024.913048 RET clock_gettime 0 6894 100228 varnishd 1211375024.913054 CALL accept(0x7,0x7fffff5fbf00,0x7fffff5fbf8c) }}} When this happens again, please try to record "netstat -an", "sockstat" and "fstat" outputs. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon May 26 08:37:14 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 26 May 2008 08:37:14 -0000 Subject: [Varnish] #239: Varnish runtime VCL compile hack on solaris 10/opensolaris In-Reply-To: <053.8b793790e604bcc73ad5bac9e78eeb65@projects.linpro.no> References: <053.8b793790e604bcc73ad5bac9e78eeb65@projects.linpro.no> Message-ID: <062.507c79f274fc507e8bd64d17302d6a78@projects.linpro.no> #239: Varnish runtime VCL compile hack on solaris 10/opensolaris ----------------------+----------------------------------------------------- Reporter: victori | Owner: phk Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: trunk Severity: blocker | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: I have put this command under #if defined (__SOLARIS__), hope that is the correct check. 
Thanks for the patch :-) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue May 27 07:11:17 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 27 May 2008 07:11:17 -0000 Subject: [Varnish] #238: regsub only replaces one occurrence In-Reply-To: <053.504a720132fa7a84012e00981861c60d@projects.linpro.no> References: <053.504a720132fa7a84012e00981861c60d@projects.linpro.no> Message-ID: <062.c4936eb474ffe1d950eacbd00f0ad929@projects.linpro.no> #238: regsub only replaces one occurrence ----------------------+----------------------------------------------------- Reporter: galaxor | Owner: phk Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: Your patch uses more memory on the workspace than necessary; a better and smaller fix was committed in #2640.
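[Editor's note for readers hitting the same surprise: VCL's regsub replaces only the first match, and an all-occurrences variant (regsuball, in later Varnish releases) covers the ticket's use case. The distinction, sketched as an analogy in Python's re terms rather than actual Varnish code:]

```python
# Illustrative analogy, not Varnish code: regsub-style replacement
# (first match only) versus regsuball-style replacement (every match).
import re

url = "/one/two/one/three"
first_only = re.sub("one", "X", url, count=1)  # like VCL regsub
all_hits = re.sub("one", "X", url)             # like VCL regsuball

# first_only -> "/X/two/one/three"
# all_hits   -> "/X/two/X/three"
```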
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue May 27 07:30:04 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 27 May 2008 07:30:04 -0000 Subject: [Varnish] #233: Backend responses with no Content-Length header In-Reply-To: <051.539ad76537ce0e22cfebd3de9e5f1443@projects.linpro.no> References: <051.539ad76537ce0e22cfebd3de9e5f1443@projects.linpro.no> Message-ID: <060.5c51c0c9f2892d1eb34c95c8541f1988@projects.linpro.no> #233: Backend responses with no Content-Length header ----------------------------+----------------------------------------------- Reporter: afnid | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 1.1.2 Severity: normal | Resolution: Keywords: Content-Length | ----------------------------+----------------------------------------------- Old description:

> When the Content-Length header is missing, the length defaults to zero
> and the object does not get cached or sent to the client.
>
> If I set the state to pipe in the vcl.conf, the content gets passed to
> the client correctly.
>
> So why is there no Content-Length header? In my case I have a Java
> application server that dynamically generates a resized image from a
> master image based on the request, and I use varnish to cache the
> resized images. When possible, the application server uses 'convert' to
> create the image and pass it through to the client, so the content
> length is not known ahead of time. If I buffer the images in my
> app-server and set a correct Content-Length header, everything works as
> expected.
>
> Viewing the image through a direct connection to the app server using
> firefox worked fine, as did viewing the image through a pound server
> that bypassed varnish.
> I have a work-around, just not as memory efficient as I would like, and
> if I was dealing with larger images, I would have to cache them to
> disk, which would defeat any reason to use varnish at all.
>
> Here is the tail-end of the log:
>
> 17 ObjProtocol c HTTP/1.1
> 17 ObjStatus c 200
> 17 ObjResponse c OK
> 17 ObjHeader c Server: Winstone Servlet Engine v0.9.10
> 17 ObjHeader c Expires: Wed, 30 Apr 2008 18:06:28 GMT
> 17 ObjHeader c Content-Type: image/jpeg
> 17 ObjHeader c Date: Sun, 27 Apr 2008 18:06:28 GMT
> 17 ObjHeader c X-Powered-By: Servlet/2.5 (Winstone/0.9.10)
> 17 TTL c 1336541393 RFC 259199 1209319588 1209319588 1209578788 0 0
> 17 VCL_call c fetch insert
> 17 Length c 0
> 17 VCL_call c deliver deliver
> 17 TxProtocol c HTTP/1.1
> 17 TxStatus c 200
> 17 TxResponse c OK
> 17 TxHeader c Server: Winstone Servlet Engine v0.9.10
> 17 TxHeader c Expires: Wed, 30 Apr 2008 18:06:28 GMT
> 17 TxHeader c Content-Type: image/jpeg
> 17 TxHeader c X-Powered-By: Servlet/2.5 (Winstone/0.9.10)
> 17 TxHeader c Date: Sun, 27 Apr 2008 18:06:28 GMT
> 17 TxHeader c X-Varnish: 1336541393
> 17 TxHeader c Age: 0
> 17 TxHeader c Via: 1.1 varnish
> 17 TxHeader c Connection: keep-alive
> 17 ReqEnd c 1336541393 1209319588.402075768 1209319588.478549719 0.005740166 0.076447487 0.000026464

New description: When the Content-Length header is missing, the length defaults to zero and the object does not get cached or sent to the client. If I set the state to pipe in the vcl.conf, the content gets passed to the client correctly. So why is there no Content-Length header? In my case I have a Java application server that dynamically generates a resized image from a master image based on the request, and I use varnish to cache the resized images. When possible, the application server uses 'convert' to create the image and pass it through to the client, so the content length is not known ahead of time.
If I buffer the images in my app-server and set a correct Content-Length header, everything works as expected. Viewing the image through a direct connection to the app server using firefox worked fine, as did viewing the image through a pound server that bypassed varnish. I have a work-around, just not as memory efficient as I would like, and if I was dealing with larger images, I would have to cache them to disk, which would defeat any reason to use varnish at all. Here is the tail-end of the log:
{{{
17 ObjProtocol c HTTP/1.1
17 ObjStatus c 200
17 ObjResponse c OK
17 ObjHeader c Server: Winstone Servlet Engine v0.9.10
17 ObjHeader c Expires: Wed, 30 Apr 2008 18:06:28 GMT
17 ObjHeader c Content-Type: image/jpeg
17 ObjHeader c Date: Sun, 27 Apr 2008 18:06:28 GMT
17 ObjHeader c X-Powered-By: Servlet/2.5 (Winstone/0.9.10)
17 TTL c 1336541393 RFC 259199 1209319588 1209319588 1209578788 0 0
17 VCL_call c fetch insert
17 Length c 0
17 VCL_call c deliver deliver
17 TxProtocol c HTTP/1.1
17 TxStatus c 200
17 TxResponse c OK
17 TxHeader c Server: Winstone Servlet Engine v0.9.10
17 TxHeader c Expires: Wed, 30 Apr 2008 18:06:28 GMT
17 TxHeader c Content-Type: image/jpeg
17 TxHeader c X-Powered-By: Servlet/2.5 (Winstone/0.9.10)
17 TxHeader c Date: Sun, 27 Apr 2008 18:06:28 GMT
17 TxHeader c X-Varnish: 1336541393
17 TxHeader c Age: 0
17 TxHeader c Via: 1.1 varnish
17 TxHeader c Connection: keep-alive
17 ReqEnd c 1336541393 1209319588.402075768 1209319588.478549719 0.005740166 0.076447487 0.000026464
}}}
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue May 27 07:45:17 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 27 May 2008 07:45:17 -0000 Subject: [Varnish] #233: Backend responses with no Content-Length header In-Reply-To: <051.539ad76537ce0e22cfebd3de9e5f1443@projects.linpro.no> References: <051.539ad76537ce0e22cfebd3de9e5f1443@projects.linpro.no> Message-ID:
<060.b43faaf5c73938b0ddc9440cb710ad9c@projects.linpro.no> #233: Backend responses with no Content-Length header ----------------------------+----------------------------------------------- Reporter: afnid | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 1.1.2 Severity: normal | Resolution: worksforme Keywords: Content-Length | ----------------------------+----------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: I tested this with the -trunk version, and it seems to work there. I can't remember specifically fixing this, but the code that fetches from the backend got quite an overhaul along the way, and this seems to have been fixed as a side effect. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue May 27 13:16:38 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 27 May 2008 13:16:38 -0000 Subject: [Varnish] #203: X-Forwarded-For handling In-Reply-To: <049.a9dfbcae697beab84bb891028f059653@projects.linpro.no> References: <049.a9dfbcae697beab84bb891028f059653@projects.linpro.no> Message-ID: <058.c03a017fe89e9d13f19b3688e4cc091d@projects.linpro.no> #203: X-Forwarded-For handling ---------------------------+------------------------------------------------ Reporter: des | Owner: des Type: enhancement | Status: assigned Priority: normal | Milestone: Component: documentation | Version: trunk Severity: normal | Resolution: Keywords: | ---------------------------+------------------------------------------------ Comment (by noah): Apache and many other proxies implement the non-standard X-Forwarded-For header with capital X and Fs.
In varnish-cache/bin/varnishd/cache_http.c it reads
{{{
http_PrintfHeader(sp->wrk, sp->fd, hp, "X-Forwarded-for: %s", sp->addr);
}}}
Please consider changing -for to -For to help maintain compliance with already existing code that depends on the exact case of the header in question. (And yes, I'm aware that RFC 2616 says they should be case-insensitive - but still ;) Arrays in PHP are case-sensitive, and doing something like this won't work as expected:
{{{
$headers = apache_request_headers();
echo "XFF value: " . $headers["X-Forwarded-For"];
}}}
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed May 28 13:51:01 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 28 May 2008 13:51:01 -0000 Subject: [Varnish] #125: exit, quit command in telnet interface. In-Reply-To: <051.9cead11972a3e74d718d7abf4c4982f0@projects.linpro.no> References: <051.9cead11972a3e74d718d7abf4c4982f0@projects.linpro.no> Message-ID: <060.a527ce25cc95b961ed87bb5b7a2ed136@projects.linpro.no> #125: exit, quit command in telnet interface. -------------------------+-------------------------------------------------- Reporter: tiamo | Owner: phk Type: enhancement | Status: reopened Priority: lowest | Milestone: Component: varnishd | Version: trunk Severity: trivial | Resolution: Keywords: | -------------------------+-------------------------------------------------- Changes (by anders): * status: closed => reopened * resolution: wontfix => Comment: I would like to reopen this ticket. On odd keyboards/OSes/character maps, it can be hard to produce the escape character needed for telnet to close the connection. If we had a quit/exit command, it would be much easier. It's nice to be able to exit the telnet connection without losing the terminal/login...
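[Editor's note on the case-sensitivity trap in ticket #203 above: header names compare case-insensitively on the wire, so lookups should fold case rather than depend on the proxy's exact spelling. A sketch of such a case-folding lookup in Python - illustrative only, not the Varnish or PHP code from the ticket:]

```python
# Illustrative sketch: RFC 2616 header field names are case-insensitive,
# but a plain dict lookup (like a PHP array lookup) is not.  Folding
# case on both sides makes "X-Forwarded-for" and "X-Forwarded-For"
# equivalent, regardless of how the proxy spelled the header.
def get_header(headers, name):
    wanted = name.lower()
    for key, value in headers.items():
        if key.lower() == wanted:
            return value
    return None

# Header spelled the way varnishd emitted it at the time (lowercase f):
headers = {"X-Forwarded-for": "192.0.2.1"}

exact_miss = headers.get("X-Forwarded-For")          # None: case mismatch
folded_hit = get_header(headers, "X-Forwarded-For")  # the client address
```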
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed May 28 22:50:52 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 28 May 2008 22:50:52 -0000 Subject: [Varnish] #242: Segfault on Set Header in vcl_hit Assert error in WS_Reserve(), cache_ws.c line 103: Message-ID: <049.b9726f6e4df7e7ea64ccd784ec34e269@projects.linpro.no> #242: Segfault on Set Header in vcl_hit Assert error in WS_Reserve(), cache_ws.c line 103: --------------------+------------------------------------------------------- Reporter: sky | Owner: des Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 1.1.2 Severity: normal | Keywords: --------------------+------------------------------------------------------- (gdb) bt
#0 0x000000373cc30055 in raise () from /lib64/libc.so.6
#1 0x000000373cc31af0 in abort () from /lib64/libc.so.6
#2 0x000000373d8024fc in lbv_assert () from /usr/lib64/libvarnish.so.0
#3 0x0000000000417bb8 in WS_Reserve ()
#4 0x0000000000415f52 in VRT_l_obj_status ()
#5 0x0000000000416bc3 in VRT_SetHdr ()
#6 0x00002aaaeb459f96 in ?? ()
#7 0x000000004a00cc3f in ?? ()
#8 0x000000001ac43088 in ?? ()
#9 0x000000001ac43088 in ??
()
#10 0x0000000000413cac in VCL_hit_method ()
#11 0x000000000040984f in CNT_Session ()
#12 0x0000000000410a33 in WRK_QueueSession ()
#13 0x0000000000410c73 in WRK_QueueSession ()
#14 0x000000373dc062f7 in start_thread () from /lib64/libpthread.so.0
#15 0x000000373ccce85d in clone () from /lib64/libc.so.6
Changing
{{{
sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
    if (!obj.cacheable) {
        pass;
    }
    set obj.http.X-Cache = "HIT";
    deliver;
}
}}}
to
{{{
sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
    if (!obj.cacheable) {
        pass;
    }
    if (obj.http.X-Cache == "MISS") {
        set obj.http.X-Cache = "HIT";
    }
    deliver;
}
}}}
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed May 28 22:51:43 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 28 May 2008 22:51:43 -0000 Subject: [Varnish] #242: Segfault on Set Header in vcl_hit Assert error in WS_Reserve(), cache_ws.c line 103: In-Reply-To: <049.b9726f6e4df7e7ea64ccd784ec34e269@projects.linpro.no> References: <049.b9726f6e4df7e7ea64ccd784ec34e269@projects.linpro.no> Message-ID: <058.37616e67f62807b1de608f1ef9266ce8@projects.linpro.no> #242: Segfault on Set Header in vcl_hit Assert error in WS_Reserve(), cache_ws.c line 103: --------------------+------------------------------------------------------- Reporter: sky | Owner: des Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 1.1.2 Severity: normal | Resolution: Keywords: | --------------------+------------------------------------------------------- Comment (by sky): {{{
#0 0x000000373cc30055 in raise () from /lib64/libc.so.6
#1 0x000000373cc31af0 in abort () from /lib64/libc.so.6
#2 0x000000373d8024fc in lbv_assert () from /usr/lib64/libvarnish.so.0
#3 0x0000000000417bb8 in WS_Reserve ()
#4 0x0000000000415f52 in VRT_l_obj_status ()
#5 0x0000000000416bc3 in VRT_SetHdr ()
#6 0x00002aaaeb459f96 in ?? ()
#7 0x000000004a00cc3f in ??
()
#8 0x000000001ac43088 in ?? ()
#9 0x000000001ac43088 in ?? ()
#10 0x0000000000413cac in VCL_hit_method ()
#11 0x000000000040984f in CNT_Session ()
#12 0x0000000000410a33 in WRK_QueueSession ()
#13 0x0000000000410c73 in WRK_QueueSession ()
#14 0x000000373dc062f7 in start_thread () from /lib64/libpthread.so.0
#15 0x000000373ccce85d in clone () from /lib64/libc.so.6
}}}
better formatted -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri May 30 22:27:56 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 30 May 2008 22:27:56 -0000 Subject: [Varnish] #240: Second and subsequent ESI includes are not cached In-Reply-To: <048.96d1a92b1118c5e88d7130ed74d67c36@projects.linpro.no> References: <048.96d1a92b1118c5e88d7130ed74d67c36@projects.linpro.no> Message-ID: <057.069fea6868441c368963137c2e87128c@projects.linpro.no> #240: Second and subsequent ESI includes are not cached -------------------------+-------------------------------------------------- Reporter: jt | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: esi include | -------------------------+-------------------------------------------------- Comment (by phk): Can you please check if rev 2643 fixes this? Thanks for the debugging & patience. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Sat May 31 09:34:54 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sat, 31 May 2008 09:34:54 -0000 Subject: [Varnish] #125: exit, quit command in telnet interface. In-Reply-To: <051.9cead11972a3e74d718d7abf4c4982f0@projects.linpro.no> References: <051.9cead11972a3e74d718d7abf4c4982f0@projects.linpro.no> Message-ID: <060.41747c54efaee6383dbad8bdb202b3e3@projects.linpro.no> #125: exit, quit command in telnet interface.
-------------------------+-------------------------------------------------- Reporter: tiamo | Owner: phk Type: enhancement | Status: reopened Priority: lowest | Milestone: Component: varnishd | Version: trunk Severity: trivial | Resolution: Keywords: | -------------------------+-------------------------------------------------- Comment (by phk): Implemented in #2644 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Sat May 31 09:34:58 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sat, 31 May 2008 09:34:58 -0000 Subject: [Varnish] #125: exit, quit command in telnet interface. In-Reply-To: <051.9cead11972a3e74d718d7abf4c4982f0@projects.linpro.no> References: <051.9cead11972a3e74d718d7abf4c4982f0@projects.linpro.no> Message-ID: <060.38d91af18676116eb7644c2b919fa365@projects.linpro.no> #125: exit, quit command in telnet interface. -------------------------+-------------------------------------------------- Reporter: tiamo | Owner: phk Type: enhancement | Status: closed Priority: lowest | Milestone: Component: varnishd | Version: trunk Severity: trivial | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Changes (by phk): * status: reopened => closed * resolution: => fixed -- Ticket URL: Varnish The Varnish HTTP Accelerator
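[Editor's note on ticket #125 above, closed as fixed: the whole feature is a management CLI that lets a telnet user type quit instead of hunting for telnet's escape character. A minimal Python sketch of that command-loop shape - hypothetical, not the actual varnishd CLI code:]

```python
# Hypothetical sketch of a line-based management CLI with an explicit
# quit/exit command, the behaviour ticket #125 asks for.  Each input
# line yields (keep_going, reply).
def handle_cli_line(line):
    words = line.strip().split()
    if not words:
        return True, ""
    cmd = words[0].lower()
    if cmd in ("quit", "exit"):
        # Close the session cleanly instead of forcing the user to
        # produce telnet's escape character on an awkward keymap.
        return False, "Closing CLI session"
    if cmd == "ping":
        return True, "PONG"
    return True, "Unknown request: %s" % cmd
```

["ping" keeps the session open; "quit" or "exit" (in any case) ends it.]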