From varnish-bugs at varnish-cache.org Mon May 4 11:28:52 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 2015 11:28:52 -0000 Subject: [Varnish] #1630: Varnish crashes with long regex condition In-Reply-To: <049.e52d1e681fb29ad347dd951ccb0f4fa2@varnish-cache.org> References: <049.e52d1e681fb29ad347dd951ccb0f4fa2@varnish-cache.org> Message-ID: <064.622a8f5142737e4678cb984d48101964@varnish-cache.org> #1630: Varnish crashes with long regex condition -------------------------+----------------------------------------- Reporter: huguesalary | Owner: Tollef Fog Heen Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.2 Severity: normal | Resolution: fixed Keywords: | -------------------------+----------------------------------------- Changes (by Tollef Fog Heen ): * status: new => closed * owner: => Tollef Fog Heen * resolution: => fixed Comment: In [419f983eeec6d0b0b932105495f4e09179260eec]: {{{ #!CommitTicketReference repository="" revision="419f983eeec6d0b0b932105495f4e09179260eec" Enable PCRE JIT-er by default The JIT-er is generally safe to use, and faster, so use that. 
Fixes: #1576, #1630 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 4 11:28:52 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 2015 11:28:52 -0000 Subject: [Varnish] #1576: default pcre_match_limit_recursion and thread_pool_stack dont match - varnishd child process crashes with segfault error 6 in libpcre.so.3.13.1 In-Reply-To: <042.4a3d8a5cb69dc6179ad46be4f593a479@varnish-cache.org> References: <042.4a3d8a5cb69dc6179ad46be4f593a479@varnish-cache.org> Message-ID: <057.30b3f88919137c35d21386ce1f643ecb@varnish-cache.org> #1576: default pcre_match_limit_recursion and thread_pool_stack dont match - varnishd child process crashes with segfault error 6 in libpcre.so.3.13.1 ----------------------+--------------------- Reporter: abdi | Owner: slink Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.1 Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by Tollef Fog Heen ): * status: new => closed * resolution: => fixed Comment: In [419f983eeec6d0b0b932105495f4e09179260eec]: {{{ #!CommitTicketReference repository="" revision="419f983eeec6d0b0b932105495f4e09179260eec" Enable PCRE JIT-er by default The JIT-er is generally safe to use, and faster, so use that. 
Fixes: #1576, #1630 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 4 11:30:45 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 2015 11:30:45 -0000 Subject: [Varnish] #1632: Debian: postinst broken In-Reply-To: <043.bdb6966f18eda35af6376ddaeb977125@varnish-cache.org> References: <043.bdb6966f18eda35af6376ddaeb977125@varnish-cache.org> Message-ID: <058.0c2f10c2f2bd395d0c2d95e85ac1c861@varnish-cache.org> #1632: Debian: postinst broken -----------------------+----------------------- Reporter: idl0r | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: packaging | Version: unknown Severity: normal | Resolution: Keywords: | -----------------------+----------------------- Comment (by lkarsten): Discussed during bugwash today. [13:27:02] < Mithrandir> it's also not a bug, the preinst stops varnish, then postinst starts it again [13:27:06] < Mithrandir> this is how all init scripts work Review this as part of preparation for 4.1. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 4 11:45:22 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 2015 11:45:22 -0000 Subject: [Varnish] #1608: Varnish does not return 413 or 414 In-Reply-To: <043.855604c38eb44e6eba395fc067ec62e2@varnish-cache.org> References: <043.855604c38eb44e6eba395fc067ec62e2@varnish-cache.org> Message-ID: <058.e5740cdd82d3719fc65f89938cbfc253@varnish-cache.org> #1608: Varnish does not return 413 or 414 --------------------+--------------------- Reporter: fgsch | Owner: fgsch Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+--------------------- Changes (by martin): * status: new => closed * resolution: => fixed Comment: Discussed during bugwash today. Conclusion is current behaviour (closing session) is fine. Closing.
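For context on the #1608 conclusion above: an origin server would normally answer an oversized request with 414 (URI too long) or 413 (entity too large), whereas Varnish, per the bugwash decision, simply closes the session. A minimal sketch of that status mapping follows; the function name and limit values are illustrative only, not Varnish internals:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative sketch, not Varnish source: map an oversized request
 * to the status code an origin might send. Per the conclusion on
 * ticket #1608, Varnish itself closes the session instead of
 * synthesizing either of these responses.
 */
static int
oversize_status(size_t uri_len, size_t hdrs_len,
                size_t max_uri, size_t max_hdrs)
{
    if (uri_len > max_uri)
        return 414;     /* Request-URI Too Long */
    if (hdrs_len > max_hdrs)
        return 413;     /* Request Entity Too Large */
    return 0;           /* within limits, no error */
}
```

The 8 KB-style limits an origin might use are configuration-dependent; the point is only that the decision is a simple size comparison made before any routing.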
Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 4 13:43:28 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 2015 13:43:28 -0000 Subject: [Varnish] #1483: std.fileread can't handle files larger than 32kB In-Reply-To: <046.f994344d3b207a1697bfef2ba72a4d0c@varnish-cache.org> References: <046.f994344d3b207a1697bfef2ba72a4d0c@varnish-cache.org> Message-ID: <061.7914fd77544c12545d56359edd758c8e@varnish-cache.org> #1483: std.fileread can't handle files larger than 32kB ----------------------+---------------------- Reporter: kipusoep | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.5 Severity: normal | Resolution: invalid Keywords: fileread | ----------------------+---------------------- Comment (by adminah): I am about to leave and cannot look into this further right now, but I do want to mention that relying on pkg-config itself is not a problem (as you said, we also use it later), the problem is that on some systems, the .pc file for pkg-config is supplying wrong information (whereas on yours, it's botan-config that is wrong), so that's why I'm trying to make it work with both (while one of them is bad).
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 4 13:43:35 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 2015 13:43:35 -0000 Subject: [Varnish] #352: ESI can't process gzipped documents In-Reply-To: <044.a1bc1e5e4b58bc0e4001b7760414e0f6@varnish-cache.org> References: <044.a1bc1e5e4b58bc0e4001b7760414e0f6@varnish-cache.org> Message-ID: <059.4f512517312119493014be897b794fa8@varnish-cache.org> #352: ESI can't process gzipped documents ----------------------+---------------------------------------- Reporter: toledo | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Varnish 2.0 code complete Component: varnishd | Version: trunk Severity: normal | Resolution: invalid Keywords: esi gzip | ----------------------+---------------------------------------- Comment (by adminah): I am about to leave and cannot look into this further right now, but I do want to mention that relying on pkg-config itself is not a problem (as you said, we also use it later), the problem is that on some systems, the .pc file for pkg-config is supplying wrong information (whereas on yours, it's botan-config that is wrong), so that's why I'm trying to make it work with both (while one of them is bad).
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 4 13:58:57 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 2015 13:58:57 -0000 Subject: [Varnish] #1730: Child panic Message-ID: <045.6c642b40115a78971718ad97727ffbc6@varnish-cache.org> #1730: Child panic ---------------------+-------------------- Reporter: llavaud | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: major Keywords: panic | ---------------------+-------------------- {{{ May 4 15:18:21 webcache24 varnishd[860319]: Child (860338) died signal=6 May 4 15:18:21 webcache24 varnishd[860319]: Child (860338) Panic message: Assert error in vbe_dir_finish(), cache/cache_backend.c line 165: Condition((wrk) != NULL) not true. errno = 111 (Connection refused) thread = (cache-worker) version = varnish-trunk revision fc41c5c ident = Linux,3.2.0-4-amd64,x86_64,-junix,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x433034: pan_ic+0x134 0x41262e: vbe_dir_finish+0x25e 0x41b9c0: VDI_Http1Pipe+0x50 0x437d5c: CNT_Request+0xf7c 0x44c58b: HTTP1_Session+0x12b 0x43a7f1: SES_Proto_Req+0x61 0x4354d8: Pool_Work_Thread+0x3c8 0x447da3: WRK_Thread+0x103 0x4348cb: pool_thread+0x2b 0x7f2fa597cb50: libpthread.so.0(+0x6b50) [0x7f2fa597cb50] req = 0x7f163fc77020 { sp = 0x7f16579b9220, vxid = 18803899, step = R_STP_PIPE, req_body = R_BODY_WITH_LEN, restarts = 0, esi_level = 0, sp = 0x7f16579b9220 { fd = -1, vxid = 18803898, client = 109.25.8.176 63259, step = S_STP_H1PROC, }, worker = 0x7f165d7cac30 { stack = {0x7f165d7cb000 -> 0x7f165d7bf000} ws = 0x7f165d7cae38 { id = "wrk", {s,f,r,e} = {0x7f165d7ca420,0x7f165d7ca420,(nil),+2040}, }, VCL::method = PIPE, VCL::return = pipe, VCL::methods = {RECV, PIPE, HASH}, }, ws = 0x7f163fc77200 { id = "req", {s,f,r,e} = {0x7f163fc79020,+3048,(nil),+253912}, }, http[req] = { ws = 0x7f163fc77200[req] "PROPFIND", "/myuri", "HTTP/1.1", "Keep-Alive:",
"Connection: TE, Keep-Alive", "TE: trailers", "Depth: 0", "Content-Length: 237", "Co }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 4 14:30:58 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 2015 14:30:58 -0000 Subject: [Varnish] #1483: std.fileread can't handle files larger than 32kB In-Reply-To: <046.f994344d3b207a1697bfef2ba72a4d0c@varnish-cache.org> References: <046.f994344d3b207a1697bfef2ba72a4d0c@varnish-cache.org> Message-ID: <061.dc726be9949aa81b25347a3f89de2fc1@varnish-cache.org> #1483: std.fileread can't handle files larger than 32kB ----------------------+---------------------- Reporter: kipusoep | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.5 Severity: normal | Resolution: invalid Keywords: fileread | ----------------------+---------------------- Comment (by kipusoep): @adminah wrong ticket I think? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 4 19:26:54 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 2015 19:26:54 -0000 Subject: [Varnish] #1730: Crash on pipe with an unreachable backend (was: Child panic) In-Reply-To: <045.6c642b40115a78971718ad97727ffbc6@varnish-cache.org> References: <045.6c642b40115a78971718ad97727ffbc6@varnish-cache.org> Message-ID: <060.5c6eafb2e16c7338108945342bea6df3@varnish-cache.org> #1730: Crash on pipe with an unreachable backend ----------------------+-------------------- Reporter: llavaud | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: panic | ----------------------+-------------------- Changes (by fgsch): * component: build => varnishd -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 4 20:20:14 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 
2015 20:20:14 -0000 Subject: [Varnish] #1724: no backend connection logged multiple times In-Reply-To: <043.9fdf3a72c0356475ff54e937934daaae@varnish-cache.org> References: <043.9fdf3a72c0356475ff54e937934daaae@varnish-cache.org> Message-ID: <058.4cad7f4fddeb0f39ae66c307691246eb@varnish-cache.org> #1724: no backend connection logged multiple times ----------------------+--------------------------------------------- Reporter: fgsch | Owner: Federico G. Schwindt Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------------------------------- Changes (by Federico G. Schwindt ): * owner: => Federico G. Schwindt * status: new => closed * resolution: => fixed Comment: In [a8d065135b9e63735abe83b88e8011f8f8658e60]: {{{ #!CommitTicketReference repository="" revision="a8d065135b9e63735abe83b88e8011f8f8658e60" One "no backend connection" is enough Fixes #1724 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 4 20:20:14 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 2015 20:20:14 -0000 Subject: [Varnish] #1730: Crash on pipe with an unreachable backend In-Reply-To: <045.6c642b40115a78971718ad97727ffbc6@varnish-cache.org> References: <045.6c642b40115a78971718ad97727ffbc6@varnish-cache.org> Message-ID: <060.98d339b6f1d26f0ddad90c04865ec547@varnish-cache.org> #1730: Crash on pipe with an unreachable backend ----------------------+--------------------------------------------- Reporter: llavaud | Owner: Federico G. Schwindt Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: fixed Keywords: panic | ----------------------+--------------------------------------------- Changes (by Federico G. Schwindt ): * owner: => Federico G. 
Schwindt * status: new => closed * resolution: => fixed Comment: In [4e5c8ab06ee081aca68dc04ea91493778ae2a040]: {{{ #!CommitTicketReference repository="" revision="4e5c8ab06ee081aca68dc04ea91493778ae2a040" Fail gracefully if we can't get a backend on pipe Fixes #1730 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 4 21:15:56 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 2015 21:15:56 -0000 Subject: [Varnish] #1618: The "Range" header is not honored for a cache miss. In-Reply-To: <047.7e50d570f16c3170f5bb0bb63e9ff04d@varnish-cache.org> References: <047.7e50d570f16c3170f5bb0bb63e9ff04d@varnish-cache.org> Message-ID: <062.6d19401df4fc637aa6e8acd82e0cfad9@varnish-cache.org> #1618: The "Range" header is not honored for a cache miss. -------------------------------+--------------------- Reporter: jeffawang | Owner: martin Type: defect | Status: closed Priority: high | Milestone: Component: build | Version: 4.0.2 Severity: normal | Resolution: fixed Keywords: range header miss | -------------------------------+--------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: In [97434c7cef70d99533f1fb7f78419dd5c9d36d23]: {{{ #!CommitTicketReference repository="" revision="97434c7cef70d99533f1fb7f78419dd5c9d36d23" Handle Range requests on streaming objects better. If the client sends [LO]-HI range, we trust it knows something we don't (yet), and do the Range request. If we later ran out of data, we close the session. Other range requests on streaming objects are ignored. 
Fixes: #1506 Fixes: #1618 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 4 21:15:56 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 May 2015 21:15:56 -0000 Subject: [Varnish] #1506: Make better use of Content-Length information: Avoid chunked responses, more control over Range handling In-Reply-To: <050.7446d258f6b1af112a619a4b721885a7@varnish-cache.org> References: <050.7446d258f6b1af112a619a4b721885a7@varnish-cache.org> Message-ID: <065.2b11c7965fddb1252a03c301038e8109@varnish-cache.org> #1506: Make better use of Content-Length information: Avoid chunked responses, more control over Range handling --------------------------+---------------------------------- Reporter: DonMacAskill | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.0 Severity: critical | Resolution: fixed Keywords: | --------------------------+---------------------------------- Changes (by Poul-Henning Kamp ): * status: reopened => closed * resolution: => fixed Comment: In [97434c7cef70d99533f1fb7f78419dd5c9d36d23]: {{{ #!CommitTicketReference repository="" revision="97434c7cef70d99533f1fb7f78419dd5c9d36d23" Handle Range requests on streaming objects better. If the client sends [LO]-HI range, we trust it knows something we don't (yet), and do the Range request. If we later ran out of data, we close the session. Other range requests on streaming objects are ignored. 
Fixes: #1506 Fixes: #1618 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 5 05:07:02 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 05 May 2015 05:07:02 -0000 Subject: [Varnish] #1731: 4.0.3 Panic Message-ID: <047.eff33cac2cd7895995bd2969faac6b35@varnish-cache.org> #1731: 4.0.3 Panic ---------------------------------+---------------------- Reporter: billnbell | Type: defect Status: new | Priority: highest Milestone: Varnish 4.0 release | Component: varnishd Version: unknown | Severity: normal Keywords: | ---------------------------------+---------------------- root at ip-10-250-1-184:/var/log# varnishadm 200 ----------------------------- Varnish Cache CLI 1.0 ----------------------------- Linux,3.2.0-4-amd64,x86_64,-smalloc,-smalloc,-hcritbit varnish-4.0.3 revision b8c4a34 Type 'help' for command list. Type 'quit' to close CLI session. varnish> panic.show 200 Last panic at: Mon, 04 May 2015 20:38:19 GMT Assert error in Tcheck(), cache/cache.h line 1296: Condition((t.b) != 0) not true. 
thread = (cache-worker) version = varnish-4.0.3 revision b8c4a34 ident = Linux,3.2.0-4-amd64,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x435424: /usr/sbin/varnishd() [0x435424] 0x40e7e8: /usr/sbin/varnishd() [0x40e7e8] 0x429e33: /usr/sbin/varnishd() [0x429e33] 0x42c856: /usr/sbin/varnishd(http_FilterResp+0x86) [0x42c856] 0x420d3e: /usr/sbin/varnishd() [0x420d3e] 0x42142f: /usr/sbin/varnishd() [0x42142f] 0x438301: /usr/sbin/varnishd(Pool_Work_Thread+0x381) [0x438301] 0x44b29d: /usr/sbin/varnishd() [0x44b29d] 0x7f1f2da24b50: /lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7f1f2da24b50] 0x7f1f2d76e95d: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f1f2d76e95d] busyobj = 0x7f1f0a94e020 { ws = 0x7f1f0a94e0e0 { OVERFLOW id = ""o", {s,f,r,e} = {0x7f1f0a951400,+52256,(nil),+52256}, }, refcnt = 2 retries = 5 failed = 0 state = 1 is_do_stream is_do_pass is_uncacheable bodystatus = 3 (length), }, http[bereq] = { ws = 0x7f1f0a94e0e0["o] "GET", "/provider-search- directory/search?what=pain+management+&where=New+York%2C+NY&DeviceLocationLatitude=40.6638&DeviceLocationLongitude=-73.938141&SearchLocationLatitude=40.71455&SearchLocationLongitude=-74.007118&Specialty=&SpecialtyId=&SearchCity=&SearchState=&SearchZip=&SearchFormattedAddress=New+York%2C+NY&DidAskForLocation=&DeviceLocationName=&WasSearchByAddress=true&lat=&lon=&SearchType=", "HTTP/1.1", "host: www.healthgrades.com", "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", "Accept-Encoding: gzip,deflate,sdch", "Accept-Language: en-US,en;q=0.8", "Cookie: __gads=ID=6363003f23801784:T=1430758616:S=ALNI_MZcbf6Mv8nOPKmRJDgdzsPZgrEBHg; __utmt=1; __utmt_b=1; hg.omniture=Desktop|Se010fafa338d92d0; msession=Sac9fe9d13044a20; mbox=PC#1430758615257-738325.17_65#1431981465|session#1430771543694-639832#1430773725|check#true#1430771925; s_vi=[CS]v1|2AA3D26C05011824-400001092003671B[CE]; __utma=236544792.2001808011.1430758616.1430764531.1430771545.3; __utmb=236544792.4.10.1430771545; 
__utmc=236544792; __utmz=236544792.1430758616.1.1.utmcsr=bing|utmccn=(organic)|utmcmd=organic|utmctr=dr.%20S%20Kundi; save.complete.show.pwid=; QSI_HistorySession=http%3A%2F%2Fwww.healthgrades.com%2Fphysician%2Fdr- samiullah-kundi-y8w6d%2Frate- doctor~1430771549550%7Chttp%3A%2F%2Fwww.healthgrades.com%2Ffind-a-doctor~1430771868179; where=%7B%22pt%22%3A%2240.71455%2C-74.007118%22%2C%22displayText%22%3A%22New%20York%2C%20NY%22%2C%22selectedText%22%3A%22New%20York%2C%20NY%22%7D; searchMode=providers; what=pain%20management%20; s_pers=%20s_firstvisit%3D1430758645862%7C1588438645862%3B%20s_firstvisit_s%3DFirst%2520Visit%7C1430760805392%3B%20s_nr%3D1430771884563-Repeat%7C1433363884563%3B%20s_lastvisit%3D1430771884578%7C1525379884578%3B%20s_lastvisit_s%3DLess%2520than%25201%2520day%7C1430773684578%3B; s_sess=%20s_cc%3Dtrue%3B%20s_sq%3Dhgprod%253D%252526pid%25253Dsearch%2525253A%25252520doctor%252526pidt%25253D1%252526oid%25253Dhttp%2525253A%2525252F%2525252Fwww.healthgrades.com%2525252Ffind-a-doctor%25252523_8%252526oidt%25253D1%252526ot%25253DA%252526oi%25253D1%3B", "Referer: http://www.healthgrades.com/find-a-doctor", "User-Agent: Mozilla/5.0 (Linux; Android 5.0; SAMSUNG-SM-G870A Build/LRX21T) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/2.1 Chrome/34.0.1847.76 Mobile Safari/537.36", "Via: HTTP/1.1 chcnz01msp4ts07.wnsnet.attws.com", "x-wap-profile: "http://wap.samsungmobile.com/uaprof/SM- G870A.xml"", "X-Forwarded-Port: 80", "X-Forwarded-Proto: http", "X-Forwarded-For: 107.77.83.120, 10.250.0.75", "X-Backend-Type: m", "X-Varnish: 12877833", "X-Varnish: 10944621", "X-Varnish: 10944622", "X-Varnish: 10944623", "X-Varnish: 10944624", }, http[beresp] = { ws = 0x7f1f0a94e0e0["o] "HTTP/1.1", "Internal Server Error", "Server: Varnish", }, ws = 0x7f1f0a94e270 { id = "obj", {s,f,r,e} = {0x7f1f07faa2c8,0x7f1f07faa2c8,(nil),+56}, }, objcore (FETCH) = 0x7f1f07ec1020 { refcnt = 2 flags = 0x106 objhead = 0x7f1f2cc50500 } obj (FETCH) = 0x7f1f07faa180 { vxid = 2158428272, 
http[obj] = { ws = 0x7f1f0a94e270[obj] "HTTP/1.1", }, len = 0, store = { }, }, } -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 5 05:09:05 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 05 May 2015 05:09:05 -0000 Subject: [Varnish] #1731: 4.0.3 Panic In-Reply-To: <047.eff33cac2cd7895995bd2969faac6b35@varnish-cache.org> References: <047.eff33cac2cd7895995bd2969faac6b35@varnish-cache.org> Message-ID: <062.6fa2ad20eba390fb7309f04346e62b37@varnish-cache.org> #1731: 4.0.3 Panic -----------------------+---------------------------------- Reporter: billnbell | Owner: Type: defect | Status: new Priority: highest | Milestone: Varnish 4.0 release Component: varnishd | Version: unknown Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------- Comment (by billnbell): Not sure why we get this on retry. We confirmed that it happens on retry... The only thing that looks weird is: "x-wap-profile: "http://wap.samsungmobile.com/uaprof/SM-G870A.xml"", -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 5 05:11:40 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 05 May 2015 05:11:40 -0000 Subject: [Varnish] #1731: 4.0.3 Panic In-Reply-To: <047.eff33cac2cd7895995bd2969faac6b35@varnish-cache.org> References: <047.eff33cac2cd7895995bd2969faac6b35@varnish-cache.org> Message-ID: <062.0e5111f1a719488dae80017c0e25dfe0@varnish-cache.org> #1731: 4.0.3 Panic -----------------------+---------------------------------- Reporter: billnbell | Owner: Type: defect | Status: new Priority: highest | Milestone: Varnish 4.0 release Component: varnishd | Version: unknown Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------- Comment (by billnbell): Also, this looks weird: "X-Varnish: 12877833", "X-Varnish: 10944621", "X-Varnish: 10944622", "X-Varnish: 10944623", "X-Varnish: 10944624",
AND "Via: HTTP/1.1 chcnz01msp4ts07.wnsnet.attws.com" -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 5 09:50:59 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 05 May 2015 09:50:59 -0000 Subject: [Varnish] #1729: Incorrect parsing of responses containing both chunked transfer-encoding and Content-length In-Reply-To: <046.49b447f81a2003545c8fc6e797d7e59c@varnish-cache.org> References: <046.49b447f81a2003545c8fc6e797d7e59c@varnish-cache.org> Message-ID: <061.327cfa20f53314a321ee945fec7186db@varnish-cache.org> #1729: Incorrect parsing of responses containing both chunked transfer-encoding and Content-length ----------------------+---------------------------------------- Reporter: regilero | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | ----------------------+---------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * owner: => Poul-Henning Kamp * resolution: => fixed Comment: In [9ab5669b8684add053650ff724cfe75cecfa324b]: {{{ #!CommitTicketReference repository="" revision="9ab5669b8684add053650ff724cfe75cecfa324b" Remove any C-L header if T-E: chunked is present. 
Fixes: #1729 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 5 11:47:02 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 05 May 2015 11:47:02 -0000 Subject: [Varnish] #1643: corrupt range response In-Reply-To: <041.113f5e6ae1e1582a5da6631379ba3e37@varnish-cache.org> References: <041.113f5e6ae1e1582a5da6631379ba3e37@varnish-cache.org> Message-ID: <056.ebf6d6e25906029785d5e2405d9668d8@varnish-cache.org> #1643: corrupt range response --------------------------------------+--------------------- Reporter: Jay | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: byte-range request range | --------------------------------------+--------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: I overlooked this ticket yesterday. This should be fixed now. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 5 21:01:50 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 05 May 2015 21:01:50 -0000 Subject: [Varnish] #1731: 4.0.3 Panic In-Reply-To: <047.eff33cac2cd7895995bd2969faac6b35@varnish-cache.org> References: <047.eff33cac2cd7895995bd2969faac6b35@varnish-cache.org> Message-ID: <062.5e0497a4ee20efce2b636b6d8085258f@varnish-cache.org> #1731: 4.0.3 Panic -----------------------+---------------------------------- Reporter: billnbell | Owner: Type: defect | Status: new Priority: highest | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------- Changes (by fgsch): * version: unknown => 4.0.3 Old description: > root at ip-10-250-1-184:/var/log# varnishadm > 200 > ----------------------------- > Varnish Cache CLI 1.0 > ----------------------------- > 
Linux,3.2.0-4-amd64,x86_64,-smalloc,-smalloc,-hcritbit > varnish-4.0.3 revision b8c4a34 > > Type 'help' for command list. > Type 'quit' to close CLI session. > > varnish> panic.show > 200 > Last panic at: Mon, 04 May 2015 20:38:19 GMT > Assert error in Tcheck(), cache/cache.h line 1296: > Condition((t.b) != 0) not true. > thread = (cache-worker) > version = varnish-4.0.3 revision b8c4a34 > ident = Linux,3.2.0-4-amd64,x86_64,-smalloc,-smalloc,-hcritbit,epoll > Backtrace: > 0x435424: /usr/sbin/varnishd() [0x435424] > 0x40e7e8: /usr/sbin/varnishd() [0x40e7e8] > 0x429e33: /usr/sbin/varnishd() [0x429e33] > 0x42c856: /usr/sbin/varnishd(http_FilterResp+0x86) [0x42c856] > 0x420d3e: /usr/sbin/varnishd() [0x420d3e] > 0x42142f: /usr/sbin/varnishd() [0x42142f] > 0x438301: /usr/sbin/varnishd(Pool_Work_Thread+0x381) [0x438301] > 0x44b29d: /usr/sbin/varnishd() [0x44b29d] > 0x7f1f2da24b50: /lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) > [0x7f1f2da24b50] > 0x7f1f2d76e95d: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) > [0x7f1f2d76e95d] > busyobj = 0x7f1f0a94e020 { > ws = 0x7f1f0a94e0e0 { OVERFLOW > id = ""o", > {s,f,r,e} = {0x7f1f0a951400,+52256,(nil),+52256}, > }, > refcnt = 2 > retries = 5 > failed = 0 > state = 1 > is_do_stream > is_do_pass > is_uncacheable > bodystatus = 3 (length), > }, > http[bereq] = { > ws = 0x7f1f0a94e0e0["o] > "GET", > "/provider-search- > directory/search?what=pain+management+&where=New+York%2C+NY&DeviceLocationLatitude=40.6638&DeviceLocationLongitude=-73.938141&SearchLocationLatitude=40.71455&SearchLocationLongitude=-74.007118&Specialty=&SpecialtyId=&SearchCity=&SearchState=&SearchZip=&SearchFormattedAddress=New+York%2C+NY&DidAskForLocation=&DeviceLocationName=&WasSearchByAddress=true&lat=&lon=&SearchType=", > "HTTP/1.1", > "host: www.healthgrades.com", > "Accept: > text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", > "Accept-Encoding: gzip,deflate,sdch", > "Accept-Language: en-US,en;q=0.8", > "Cookie: > 
__gads=ID=6363003f23801784:T=1430758616:S=ALNI_MZcbf6Mv8nOPKmRJDgdzsPZgrEBHg; > __utmt=1; __utmt_b=1; hg.omniture=Desktop|Se010fafa338d92d0; > msession=Sac9fe9d13044a20; > mbox=PC#1430758615257-738325.17_65#1431981465|session#1430771543694-639832#1430773725|check#true#1430771925; > s_vi=[CS]v1|2AA3D26C05011824-400001092003671B[CE]; > __utma=236544792.2001808011.1430758616.1430764531.1430771545.3; > __utmb=236544792.4.10.1430771545; __utmc=236544792; > __utmz=236544792.1430758616.1.1.utmcsr=bing|utmccn=(organic)|utmcmd=organic|utmctr=dr.%20S%20Kundi; > save.complete.show.pwid=; > QSI_HistorySession=http%3A%2F%2Fwww.healthgrades.com%2Fphysician%2Fdr- > samiullah-kundi-y8w6d%2Frate- > doctor~1430771549550%7Chttp%3A%2F%2Fwww.healthgrades.com%2Ffind-a-doctor~1430771868179; > where=%7B%22pt%22%3A%2240.71455%2C-74.007118%22%2C%22displayText%22%3A%22New%20York%2C%20NY%22%2C%22selectedText%22%3A%22New%20York%2C%20NY%22%7D; > searchMode=providers; what=pain%20management%20; > s_pers=%20s_firstvisit%3D1430758645862%7C1588438645862%3B%20s_firstvisit_s%3DFirst%2520Visit%7C1430760805392%3B%20s_nr%3D1430771884563-Repeat%7C1433363884563%3B%20s_lastvisit%3D1430771884578%7C1525379884578%3B%20s_lastvisit_s%3DLess%2520than%25201%2520day%7C1430773684578%3B; > s_sess=%20s_cc%3Dtrue%3B%20s_sq%3Dhgprod%253D%252526pid%25253Dsearch%2525253A%25252520doctor%252526pidt%25253D1%252526oid%25253Dhttp%2525253A%2525252F%2525252Fwww.healthgrades.com%2525252Ffind-a-doctor%25252523_8%252526oidt%25253D1%252526ot%25253DA%252526oi%25253D1%3B", > "Referer: http://www.healthgrades.com/find-a-doctor", > "User-Agent: Mozilla/5.0 (Linux; Android 5.0; SAMSUNG-SM-G870A > Build/LRX21T) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/2.1 > Chrome/34.0.1847.76 Mobile Safari/537.36", > "Via: HTTP/1.1 chcnz01msp4ts07.wnsnet.attws.com", > "x-wap-profile: "http://wap.samsungmobile.com/uaprof/SM- > G870A.xml"", > "X-Forwarded-Port: 80", > "X-Forwarded-Proto: http", > "X-Forwarded-For: 107.77.83.120, 10.250.0.75", 
> "X-Backend-Type: m", > "X-Varnish: 12877833", > "X-Varnish: 10944621", > "X-Varnish: 10944622", > "X-Varnish: 10944623", > "X-Varnish: 10944624", > }, > http[beresp] = { > ws = 0x7f1f0a94e0e0["o] > "HTTP/1.1", > "Internal Server Error", > "Server: Varnish", > }, > ws = 0x7f1f0a94e270 { > id = "obj", > {s,f,r,e} = {0x7f1f07faa2c8,0x7f1f07faa2c8,(nil),+56}, > }, > objcore (FETCH) = 0x7f1f07ec1020 { > refcnt = 2 > flags = 0x106 > objhead = 0x7f1f2cc50500 > } > obj (FETCH) = 0x7f1f07faa180 { > vxid = 2158428272, > http[obj] = { > ws = 0x7f1f0a94e270[obj] > "HTTP/1.1", > }, > len = 0, > store = { > }, > }, > } New description: {{{ root at ip-10-250-1-184:/var/log# varnishadm 200 ----------------------------- Varnish Cache CLI 1.0 ----------------------------- Linux,3.2.0-4-amd64,x86_64,-smalloc,-smalloc,-hcritbit varnish-4.0.3 revision b8c4a34 Type 'help' for command list. Type 'quit' to close CLI session. varnish> panic.show 200 Last panic at: Mon, 04 May 2015 20:38:19 GMT Assert error in Tcheck(), cache/cache.h line 1296: Condition((t.b) != 0) not true. 
thread = (cache-worker) version = varnish-4.0.3 revision b8c4a34 ident = Linux,3.2.0-4-amd64,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x435424: /usr/sbin/varnishd() [0x435424] 0x40e7e8: /usr/sbin/varnishd() [0x40e7e8] 0x429e33: /usr/sbin/varnishd() [0x429e33] 0x42c856: /usr/sbin/varnishd(http_FilterResp+0x86) [0x42c856] 0x420d3e: /usr/sbin/varnishd() [0x420d3e] 0x42142f: /usr/sbin/varnishd() [0x42142f] 0x438301: /usr/sbin/varnishd(Pool_Work_Thread+0x381) [0x438301] 0x44b29d: /usr/sbin/varnishd() [0x44b29d] 0x7f1f2da24b50: /lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7f1f2da24b50] 0x7f1f2d76e95d: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f1f2d76e95d] busyobj = 0x7f1f0a94e020 { ws = 0x7f1f0a94e0e0 { OVERFLOW id = ""o", {s,f,r,e} = {0x7f1f0a951400,+52256,(nil),+52256}, }, refcnt = 2 retries = 5 failed = 0 state = 1 is_do_stream is_do_pass is_uncacheable bodystatus = 3 (length), }, http[bereq] = { ws = 0x7f1f0a94e0e0["o] "GET", "/provider-search- directory/search?what=pain+management+&where=New+York%2C+NY&DeviceLocationLatitude=40.6638&DeviceLocationLongitude=-73.938141&SearchLocationLatitude=40.71455&SearchLocationLongitude=-74.007118&Specialty=&SpecialtyId=&SearchCity=&SearchState=&SearchZip=&SearchFormattedAddress=New+York%2C+NY&DidAskForLocation=&DeviceLocationName=&WasSearchByAddress=true&lat=&lon=&SearchType=", "HTTP/1.1", "host: www.healthgrades.com", "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", "Accept-Encoding: gzip,deflate,sdch", "Accept-Language: en-US,en;q=0.8", "Cookie: __gads=ID=6363003f23801784:T=1430758616:S=ALNI_MZcbf6Mv8nOPKmRJDgdzsPZgrEBHg; __utmt=1; __utmt_b=1; hg.omniture=Desktop|Se010fafa338d92d0; msession=Sac9fe9d13044a20; mbox=PC#1430758615257-738325.17_65#1431981465|session#1430771543694-639832#1430773725|check#true#1430771925; s_vi=[CS]v1|2AA3D26C05011824-400001092003671B[CE]; __utma=236544792.2001808011.1430758616.1430764531.1430771545.3; __utmb=236544792.4.10.1430771545; 
__utmc=236544792; __utmz=236544792.1430758616.1.1.utmcsr=bing|utmccn=(organic)|utmcmd=organic|utmctr=dr.%20S%20Kundi; save.complete.show.pwid=; QSI_HistorySession=http%3A%2F%2Fwww.healthgrades.com%2Fphysician%2Fdr- samiullah-kundi-y8w6d%2Frate- doctor~1430771549550%7Chttp%3A%2F%2Fwww.healthgrades.com%2Ffind-a-doctor~1430771868179; where=%7B%22pt%22%3A%2240.71455%2C-74.007118%22%2C%22displayText%22%3A%22New%20York%2C%20NY%22%2C%22selectedText%22%3A%22New%20York%2C%20NY%22%7D; searchMode=providers; what=pain%20management%20; s_pers=%20s_firstvisit%3D1430758645862%7C1588438645862%3B%20s_firstvisit_s%3DFirst%2520Visit%7C1430760805392%3B%20s_nr%3D1430771884563-Repeat%7C1433363884563%3B%20s_lastvisit%3D1430771884578%7C1525379884578%3B%20s_lastvisit_s%3DLess%2520than%25201%2520day%7C1430773684578%3B; s_sess=%20s_cc%3Dtrue%3B%20s_sq%3Dhgprod%253D%252526pid%25253Dsearch%2525253A%25252520doctor%252526pidt%25253D1%252526oid%25253Dhttp%2525253A%2525252F%2525252Fwww.healthgrades.com%2525252Ffind-a-doctor%25252523_8%252526oidt%25253D1%252526ot%25253DA%252526oi%25253D1%3B", "Referer: http://www.healthgrades.com/find-a-doctor", "User-Agent: Mozilla/5.0 (Linux; Android 5.0; SAMSUNG-SM-G870A Build/LRX21T) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/2.1 Chrome/34.0.1847.76 Mobile Safari/537.36", "Via: HTTP/1.1 chcnz01msp4ts07.wnsnet.attws.com", "x-wap-profile: "http://wap.samsungmobile.com/uaprof/SM- G870A.xml"", "X-Forwarded-Port: 80", "X-Forwarded-Proto: http", "X-Forwarded-For: 107.77.83.120, 10.250.0.75", "X-Backend-Type: m", "X-Varnish: 12877833", "X-Varnish: 10944621", "X-Varnish: 10944622", "X-Varnish: 10944623", "X-Varnish: 10944624", }, http[beresp] = { ws = 0x7f1f0a94e0e0["o] "HTTP/1.1", "Internal Server Error", "Server: Varnish", }, ws = 0x7f1f0a94e270 { id = "obj", {s,f,r,e} = {0x7f1f07faa2c8,0x7f1f07faa2c8,(nil),+56}, }, objcore (FETCH) = 0x7f1f07ec1020 { refcnt = 2 flags = 0x106 objhead = 0x7f1f2cc50500 } obj (FETCH) = 0x7f1f07faa180 { vxid = 2158428272, 
http[obj] = { ws = 0x7f1f0a94e270[obj] "HTTP/1.1", }, len = 0, store = { }, }, } }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 5 21:07:03 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 05 May 2015 21:07:03 -0000 Subject: [Varnish] #1731: 4.0.3 Panic In-Reply-To: <047.eff33cac2cd7895995bd2969faac6b35@varnish-cache.org> References: <047.eff33cac2cd7895995bd2969faac6b35@varnish-cache.org> Message-ID: <062.e6f978b0a928178cf685fece932bbbae@varnish-cache.org> #1731: 4.0.3 Panic -----------------------+---------------------------------- Reporter: billnbell | Owner: Type: defect | Status: new Priority: highest | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------- Comment (by fgsch): Workspace overflow. {{{ ws = 0x7f1f0a94e0e0 { OVERFLOW id = ""o", {s,f,r,e} = {0x7f1f0a951400,+52256,(nil),+52256}, }, }}} What's in your vcl_backend_fetch{} and vcl_backend_response{} ? Try increasing `workspace_backend`. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 6 03:35:11 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 06 May 2015 03:35:11 -0000 Subject: [Varnish] #1731: 4.0.3 Panic In-Reply-To: <047.eff33cac2cd7895995bd2969faac6b35@varnish-cache.org> References: <047.eff33cac2cd7895995bd2969faac6b35@varnish-cache.org> Message-ID: <062.394ca919b0a8673e8b501f6fd7454a08@varnish-cache.org> #1731: 4.0.3 Panic -----------------------+---------------------------------- Reporter: billnbell | Owner: Type: defect | Status: new Priority: highest | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------- Comment (by billnbell): What does s,f,r,e mean? Does it mean that I went over by 52256 bytes? 
It seemed like we were only doing 5 retries. Why would it need more than the default fix for these parameters? Should I try both workspace_backend and workspace_client? What are the issue with increasing this? Is 252144 a good size? Here are the 2 functions you requested. If I comment out the following section, the problem goes away. The max retries is the default, so it should stop right? Or is it infinite looping? if ( beresp.status >= 500 && beresp.status < 600) { return(retry); } Here ya go: sub vcl_backend_fetch { if(bereq.http.X-Backend-Type == "m") { set bereq.backend = m.backend(); } else if(bereq.http.X-Backend-Type == "hg_tips") { set bereq.backend = hg_tips.backend(); } else if(bereq.http.X-Backend-Type == "hg") { set bereq.backend = hg.backend(); } else if(bereq.http.X-Backend-Type == "articles") { set bereq.backend = articles.backend(); } if(bereq.url ~ "(?i)^/(?!provider-search)[^/]+-directory(/[^/]+)?/?$") { unset bereq.http.set-cookie; unset bereq.http.Cookie; } return (fetch); } sub vcl_backend_response { if ( beresp.status >= 500 && beresp.status < 600) { return(retry); } if ( beresp.backend.name ~ "(web|mobile|hg3)") { if(bereq.url ~ "(?i)^/error$" && beresp.backend.name ~ "mobile" ) { set beresp.status = 500; } if(bereq.http.Accept-Encoding ~ "gzip" && beresp.http.Content- Encoding ~ "^\s*$") { if ((beresp.http.content-type ~ "\/xml") || (beresp.http.content-type ~ "\/json") || (beresp.http.content-type ~ "^text\/") || (beresp.http.content-type ~ "^application\/x-javascript") || (beresp.http.content-type ~ "^image\/svg\+xml") || (beresp.http.content-type ~ "^application\/x-font- ttf") || (beresp.http.content-type ~ "^application\/x-font- woff") || (beresp.http.content-type ~ "^application\/font- woff") || (beresp.http.content-type ~ "^application\/font- ttf") || (beresp.http.content-type ~ "^application\/font- otf") || (beresp.http.content-type ~ "^application\/vnd \.ms-fontobject") || (beresp.http.content-type ~ "^application\/x-font- 
opentype") || (beresp.http.content-type ~ "^application\/javascript") || (beresp.http.content-type ~ "\/html")) { set beresp.do_gzip = true; } } if(bereq.url ~ ".(png|ico|js)$") { unset beresp.http.Set-Cookie; unset beresp.http.expires; set beresp.http.X-Cache-Control = "1"; unset beresp.http.cache-control; unset beresp.http.pragma; unset beresp.http.last-modified; set beresp.ttl = 30m; } if (beresp.http.Vary !~ "User-Agent") { if (beresp.http.Content-Type ~ "text/html") { if (beresp.http.Vary !~ "^\s*$") { set beresp.http.Vary = beresp.http.Vary + ", User- Agent"; } else { set beresp.http.Vary = "User-Agent"; } } } else { if (beresp.http.Content-Type !~ "text/html") { set beresp.http.Vary = regsub(beresp.http.Vary, ",? *User- Agent *", ""); set beresp.http.Vary = regsub(beresp.http.Vary, "^, *", ""); if (beresp.http.Vary == "") { unset beresp.http.Vary; } } } if (beresp.http.Vary !~ "Accept-Encoding" && beresp.http.Content- Encoding ~ "gzip|deflate") { if (beresp.http.Vary !~ "^\s*$") { set beresp.http.Vary = beresp.http.Vary + ", Accept- Encoding"; } else { set beresp.http.Vary = "Accept-Encoding"; } } set beresp.http.X-Vary = beresp.http.Vary; unset beresp.http.Vary; set beresp.http.X-Backend-Name = beresp.backend.name; if(bereq.url ~ "(?i)^/(?!provider- search)[^/]+-directory(/[^/]+)?/?$") { unset beresp.http.Set-Cookie; unset beresp.http.expires; unset beresp.http.cache-control; unset beresp.http.pragma; unset beresp.http.last-modified; set beresp.ttl = 15m; # return(deliver); } } else { unset beresp.http.Set-Cookie; unset beresp.http.expires; set beresp.http.X-Cache-Control = "1"; unset beresp.http.cache-control; unset beresp.http.pragma; unset beresp.http.last-modified; set beresp.http.X-Vary = beresp.http.Vary; unset beresp.http.Vary; if ((beresp.http.content-type ~ "\/xml") || (beresp.http.content-type ~ "\/json") || (beresp.http.content-type ~ "^text\/") || (beresp.http.content-type ~ "^application\/x-javascript") || (beresp.http.content-type ~ 
"^image\/svg\+xml") || (beresp.http.content-type ~ "^application\/x-font-ttf") || (beresp.http.content-type ~ "^application\/x-font-woff") || (beresp.http.content-type ~ "^application\/font-woff") || (beresp.http.content-type ~ "^application\/font-ttf") || (beresp.http.content-type ~ "^application\/font-otf") || (beresp.http.content-type ~ "^application\/vnd\.ms-fontobject") || (beresp.http.content-type ~ "^application\/x-font-opentype") || (beresp.http.content-type ~ "^application\/javascript") || (beresp.http.content-type ~ "\/html")) { set beresp.do_gzip = true; } set beresp.grace = 1h; if (bereq.url == "/home/PageNotFound") { set beresp.ttl = 0s; } else { set beresp.ttl = 15m; } set beresp.http.X-Backend = beresp.backend.ip; } # set one hour cache for TIPS if(beresp.backend.name ~ "hg_tips") { set beresp.ttl = 60m; } if (beresp.status == 404) { set beresp.http.X-Cache-Control = "2"; set beresp.ttl = 30s; } } -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 6 09:12:27 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 06 May 2015 09:12:27 -0000 Subject: [Varnish] #1732: Assert error in SES_Close() Message-ID: <045.a6ec702b87452c0e0feea89ea7ed04b3@varnish-cache.org> #1732: Assert error in SES_Close() --------------------------------+---------------------- Reporter: llavaud | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: major Keywords: panic assert error | --------------------------------+---------------------- {{{ May 6 11:02:33 webcache14 varnishd[502993]: Child (503012) died signal=6 May 6 11:02:33 webcache14 varnishd[502993]: Child (503012) Panic message: Assert error in SES_Close(), cache/cache_session.c line 530: Condition(sp->fd >= 0) not true. 
errno = 104 (Connection reset by peer) thread = (cache-worker) version = varnish-trunk revision 668be11 ident = Linux,3.2.0-4-amd64,x86_64,-junix,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x433044: pan_ic+0x134 0x43bc7c: SES_Close+0x24c 0x43a14b: vrg_range_bytes+0xdb 0x41af0f: VDP_close+0x4f 0x44bbb7: V1D_Deliver+0x1f7 0x437189: CNT_Request+0x389 0x44c88b: HTTP1_Session+0x12b 0x43a901: SES_Proto_Req+0x61 0x4354f8: Pool_Work_Thread+0x3c8 0x448053: WRK_Thread+0x103 req = 0x7eefee046020 { sp = 0x7eeffcdfc420, vxid = 10424451, step = R_STP_DELIVER, req_body = R_BODY_NONE, restarts = 0, esi_level = 0, sp = 0x7eeffcdfc420 { fd = -1, vxid = 68247, client = 194.51.87.86 40367, step = S_STP_H1PROC, }, worker = 0x7eeff928ac30 { stack = {0x7eeff928b000 -> 0x7eeff927f000} ws = 0x7eeff928ae38 { id = "wrk", {s,f,r,e} = {0x7eeff928a420,0x7eeff928a420,(nil),+2040}, }, VCL::method = DELIVER, VCL::return = deliver, VCL::methods = {RECV, HASH, MISS, DELIVER}, }, ws = 0x7eefee046208 { id = "req", {s,f,r,e} = {0x7eefee048028,+3360,(nil),+253904}, }, http[req] = { ws = 0x7eefee046208[req] "GET", "myuri", "HTTP/1.1", "Connection: keep-alive", "Accept: image/webp,*/*;q=0.8", "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36",}}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 6 09:34:45 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 06 May 2015 09:34:45 -0000 Subject: [Varnish] #1732: Assert error in SES_Close() In-Reply-To: <045.a6ec702b87452c0e0feea89ea7ed04b3@varnish-cache.org> References: <045.a6ec702b87452c0e0feea89ea7ed04b3@varnish-cache.org> Message-ID: <060.5886942db97fdbf1b3e042253a9f00eb@varnish-cache.org> #1732: Assert error in SES_Close() --------------------------------+-------------------- Reporter: llavaud | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | 
Resolution: Keywords: panic assert error | --------------------------------+-------------------- Comment (by llavaud): seems to be introduced by the commit f6d71499cccfb75ac4d709d66841ac80628239c0 dated Tue May 5 08:58:39 2015 +0000 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 11 08:46:42 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 11 May 2015 08:46:42 -0000 Subject: [Varnish] #1732: Assert error in SES_Close() In-Reply-To: <045.a6ec702b87452c0e0feea89ea7ed04b3@varnish-cache.org> References: <045.a6ec702b87452c0e0feea89ea7ed04b3@varnish-cache.org> Message-ID: <060.677d50aab514a647ea4aa7b5701944e9@varnish-cache.org> #1732: Assert error in SES_Close() --------------------------------+---------------------------------------- Reporter: llavaud | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: fixed Keywords: panic assert error | --------------------------------+---------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * owner: => Poul-Henning Kamp * resolution: => fixed Comment: In [8c0af33385c8045cea0574af26418f5480c13e50]: {{{ #!CommitTicketReference repository="" revision="8c0af33385c8045cea0574af26418f5480c13e50" Fix an attempt to double close a session. In future versions of HTTP requests can be terminated abnormally without ditching the entire session, and this change starts the transition to that logic. 
Fixes: #1732 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 11 09:10:13 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 11 May 2015 09:10:13 -0000 Subject: [Varnish] #1726: backend_toolate no longer used In-Reply-To: <043.842e036ecb4f153f9d14ef128269f440@varnish-cache.org> References: <043.842e036ecb4f153f9d14ef128269f440@varnish-cache.org> Message-ID: <058.997f01322bea59fba10797724b0c335b@varnish-cache.org> #1726: backend_toolate no longer used ----------------------+---------------------------------------- Reporter: fgsch | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+---------------------------------------- Changes (by Poul-Henning Kamp ): * owner: => Poul-Henning Kamp * status: new => closed * resolution: => fixed Comment: In [d0012bd98c55446624a47cb256c28ff4afc01809]: {{{ #!CommitTicketReference repository="" revision="d0012bd98c55446624a47cb256c28ff4afc01809" Retire the backend_toolate counter. With backend waiters it makes no sense to bring it back. 
Fixes: #1726 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 11 09:58:02 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 11 May 2015 09:58:02 -0000 Subject: [Varnish] #1733: test l00002 is not stable Message-ID: <041.e3e9abd44efc9f6e731fb370c77db66e@varnish-cache.org> #1733: test l00002 is not stable -------------------------+-------------------- Reporter: phk | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishtest | Version: trunk Severity: normal | Keywords: -------------------------+-------------------- the pipelined requests gets varying XID's making the logexpects fail -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 11 17:03:47 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 11 May 2015 17:03:47 -0000 Subject: [Varnish] #1734: segfault after vcl.state cold Message-ID: <055.ec7692e32d951e1b1e281c27d6c75658@varnish-cache.org> #1734: segfault after vcl.state cold ---------------------------------+---------------------- Reporter: zaterio@? | Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: varnishd Version: trunk | Severity: normal Keywords: vcl state cold warm | ---------------------------------+---------------------- When I try to place a VCL in a cold state, varnish segfaults and child uptime returns to 0. {{{ #varnishadm vcl.list available auto/cold 0 boot available auto/cold 0 test01 active auto/warm 107 test92 #varnishadm vcl.load test03 /etc/varnish/default.vcl VCL compiled. 
#varnishadm vcl.list available auto/cold 0 boot available auto/cold 0 test01 active auto/warm 161 test92 available auto/warm 0 test03 #varnishadm vcl.use test03 VCL 'test03' now active #varnishstat -1|grep uptime && varnishadm vcl.state test92 cold MAIN.uptime 345 1.00 Child process uptime MGT.uptime 904 115.75 Management process uptime }}} And then after 10 seconds: {{{ #varnishstat -1 MAIN.uptime 2 1.00 Child process uptime MGT.uptime 910 115.75 Management process uptime #dmesg|grep varnishd varnishd[26270]: segfault at 8 ip 00000000004124ff sp 00007f76ddbda1b0 error 6 in varnishd (deleted)[400000+8e000] }}} Reproducible. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 11 17:05:59 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 11 May 2015 17:05:59 -0000 Subject: [Varnish] #1734: segfault after vcl.state cold In-Reply-To: <055.ec7692e32d951e1b1e281c27d6c75658@varnish-cache.org> References: <055.ec7692e32d951e1b1e281c27d6c75658@varnish-cache.org> Message-ID: <070.9815b70d94bdf160c3401eca299d4659@varnish-cache.org> #1734: segfault after vcl.state cold ---------------------------------+---------------------------------- Reporter: zaterio@? | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: vcl state cold warm | ---------------------------------+---------------------------------- Comment (by zaterio@?): Varnish version: {{{ # varnishd -V varnishd (varnish-trunk revision d0012bd) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 12 12:41:15 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 May 2015 12:41:15 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. 
In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.6e96277827b6142c6ae69c563c9e3708@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? | Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): Still repeatable in latest master: {{{ varnishd (varnish-trunk revision d0012bd) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS }}} {{{ varnish> panic.show 200 Last panic at: Tue, 12 May 2015 12:33:17 GMT Assert error in tcp_handle(), cache/cache_backend_tcp.c line 94: Condition((vbc->in_waiter) != 0) not true. thread = (cache-epoll) version = varnish-trunk revision d0012bd ident = Linux,3.2.0-4-amd64,x86_64,-junix,-smalloc,-smalloc,-hclassic,epoll Backtrace: 0x433224: pan_ic+0x134 0x415bed: tcp_handle+0x38d 0x463279: Wait_Handle+0x89 0x463a1a: vwe_thread+0xfa 0x7f05516c9b50: libpthread.so.0(+0x6b50) [0x7f05516c9b50] 0x7f055141395d: libc.so.6(clone+0x6d) [0x7f055141395d] }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 12 13:54:31 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 May 2015 13:54:31 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.9f93a376928f4e319e06b09957f95846@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? 
| Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): the following command stops right after the panic: {{{ varnishlog -i Debug -g raw > /home/out.log }}} FD States {{{ cat /home/salidarwa.txt |grep Handler|awk '{print $8" "$9" "$10" "$11" "$12" "$13" "$14" "$15}'|sort|uniq -c|sort -n }}} {{{ 1 in_w 0 state 0x2 ev 2 have_been 1" 1 in_w 1 state 0x2 ev 1 have_been 0" 1 in_w 1 state 0x4 ev 1 have_been 1" 2 in_w 1 state 0x1 ev 2 have_been 0" 2 in_w 1 state 0x2 ev 1 have_been 1" 2 in_w 1 state 0x4 ev 2 have_been 0" 4 in_w 1 state 0x8 ev 1 have_been 1" 33 in_w 1 state 0x8 ev 2 have_been 1" 125 in_w 1 state 0x2 ev 2 have_been 0" 1081 in_w 1 state 0x2 ev 2 have_been 1" }}} the next state is suspect: {{{ 1 in_w 0 state 0x2 ev 2 have_been 1 }}} * just one case vbc->in_waiter = 0. 
* the log entry is the last in /home/out.log It is associated with FD 105: {{{ cat /home/out.log |grep "fd 105" }}} {{{ 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 
105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 
0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Close fd 105 in_w 1" 0 Debug - "------> Handler fd 105 in_w 1 state 0x8 ev 2 have_been 1" 0 Debug - "------> New fd 105" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 0" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> 
Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - 
"------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 2 have_been 1" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Close fd 105 in_w 1" 0 Debug - "------> Handler fd 105 in_w 1 state 
0x8 ev 2 have_been 1" 0 Debug - "------> New fd 105" 0 Debug - "------> Recycle fd 105 in_w 0" 0 Debug - "------> Recycle fd 105 Wait_Enter" 0 Debug - "------> Steal fd 105 state 0x1" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Recycle fd 105 in_w 1" 0 Debug - "------> Steal fd 105 state 0x4" 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 1 have_been 0" 0 Debug - "------> Handler fd 105 in_w 0 state 0x2 ev 2 have_been 1" }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 12 15:44:05 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 May 2015 15:44:05 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.ecb93c0c17c4e987653657f880fad456@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? 
| Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): {{{ 0 Debug - "------> Handler fd 105 in_w 1 state 0x2 ev 1 have_been 0" 0 Debug - "------> Handler fd 105 in_w 0 state 0x2 ev 2 have_been 1" }}} Both states are VBC_STATE_USED; the first state is changed into the second by: {{{ case VBC_STATE_USED: vbc->in_waiter = 0; vbc->have_been_in_waiter = 1; break; }}} And then, when the second handler hits: {{{ AN(vbc->in_waiter); }}} it fails because in_waiter = 0. vbc->in_waiter = 0; must be changed to a non-zero value in the VBC_STATE_USED case. In my scenario, "in_w = 0" states are rare, and when they appear Varnish dies. For example, after 2 hours of running, the handler log shows no "in_waiter = 0" entries: {{{ cat /home/out.log |grep Handler|awk '{print $8" "$9" "$10" "$11" "$12" "$13" "$14" "$15}'|sort|uniq -c|sort -n 2 in_w 1 state 0x4 ev 1 have_been 0" 3 in_w 1 state 0x4 ev 1 have_been 1" 14 in_w 1 state 0x1 ev 2 have_been 0" 18 in_w 1 state 0x8 ev 2 have_been 0" 235 in_w 1 state 0x8 ev 2 have_been 1" 809 in_w 1 state 0x2 ev 2 have_been 0" 8863 in_w 1 state 0x2 ev 2 have_been 1" }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 12 17:54:43 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 May 2015 17:54:43 -0000 Subject: [Varnish] #1735: Changes to varnishapi.pc.in break module compilation. Message-ID: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> #1735: Changes to varnishapi.pc.in break module compilation. 
---------------------+---------------------- Reporter: drogers | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 4.0.3 | Severity: normal Keywords: | ---------------------+---------------------- The removal of 'vmodincludedir' from varnishapi.pc.in in 61cdf24bb88a18d175c8b6e1c4dc26593728fa27 breaks the compilation of (at least) libvmod-vsthrottle (https://github.com/varnish/libvmod-vsthrottle) and libvmod-boltsort (https://github.com/vimeo/libvmod-boltsort). {{{ make all-recursive make[1]: Entering directory `/tmp/libvmod-boltsort-master' Making all in src make[2]: Entering directory `/tmp/libvmod-boltsort-master/src' /bin/sh ../libtool --tag=CC --mode=compile gcc -std=gnu99 -DHAVE_CONFIG_H -I. -I.. -I -I/usr/include/varnish -g -O2 -MT vcc_if.lo -MD -MP -MF .deps/vcc_if.Tpo -c -o vcc_if.lo vcc_if.c libtool: compile: gcc -std=gnu99 -DHAVE_CONFIG_H -I. -I.. -I -I/usr/include/varnish -g -O2 -MT vcc_if.lo -MD -MP -MF .deps/vcc_if.Tpo -c vcc_if.c -fPIC -DPIC -o .libs/vcc_if.o vcc_if.c:8:17: fatal error: vrt.h: No such file or directory #include "vrt.h" ^ compilation terminated. make[2]: *** [vcc_if.lo] Error 1 make[2]: Leaving directory `/tmp/libvmod-boltsort-master/src' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/tmp/libvmod-boltsort-master' make: *** [all] Error 2 }}} Note the empty -I option, which confuses gcc. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 12 18:57:55 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 May 2015 18:57:55 -0000 Subject: [Varnish] #1735: Changes to varnishapi.pc.in break module compilation. In-Reply-To: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> References: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> Message-ID: <060.57f97044da5e5d171f6aad7eba1f2115@varnish-cache.org> #1735: Changes to varnishapi.pc.in break module compilation. 
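The bare -I in the #1735 transcript above is exactly what an empty variable substitution produces. A hedged sketch of the failure mode (VMODDIR stands in for the 'vmodincludedir' value that the .pc file no longer defines; the gcc command is built as a string for illustration, not executed):

```shell
# Querying a .pc variable that no longer exists yields an empty string,
# e.g. something along the lines of:
#   VMODDIR=$(pkg-config --variable=vmodincludedir varnishapi)
# Here we just simulate that empty result:
VMODDIR=""

# The build rule then expands to a bare -I. Since gcc also accepts the
# two-argument form "-I dir", the bare -I consumes the NEXT argument
# (/usr/include/varnish) as its path, so headers such as vrt.h are no
# longer found on the include path.
CMD="gcc -std=gnu99 -I${VMODDIR} -I/usr/include/varnish -c vcc_if.c"
echo "$CMD"
# prints: gcc -std=gnu99 -I -I/usr/include/varnish -c vcc_if.c
```

This matches the reported symptom: the compile line looks plausible at a glance, but /usr/include/varnish has been swallowed by the empty -I, producing the "vrt.h: No such file or directory" error.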
----------------------+----------------------- Reporter: drogers | Owner: Type: defect | Status: needinfo Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Changes (by fgsch): * status: new => needinfo Comment: Both work for me on a fresh checkout with the correct packages installed: {{{ ii libvarnishapi-dev 4.0.3-2~trusty amd64 development files for Varnish ii libvarnishapi1 4.0.3-2~trusty amd64 shared libraries for Varnish ii varnish 4.0.3-2~trusty amd64 state of the art, high-performance web accelerator }}} If it keeps occurring with a clean repo please share the configure output and installed packages. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 12 19:48:56 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 May 2015 19:48:56 -0000 Subject: [Varnish] #1735: Changes to varnishapi.pc.in break module compilation. In-Reply-To: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> References: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> Message-ID: <060.05167b4c985e75ce80705b15d4769541@varnish-cache.org> #1735: Changes to varnishapi.pc.in break module compilation. 
----------------------+----------------------- Reporter: drogers | Owner: Type: defect | Status: needinfo Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Comment (by drogers): Here are my installed RPMs, from the varnish yum repo: {{{ varnish-release-4.0-3.el6.noarch varnish-libs-4.0.3-1.el6.x86_64 varnish-libs-devel-4.0.3-1.el6.x86_64 varnish-4.0.3-1.el6.x86_64 }}} Module sources are from https://github.com/varnish/libvmod-vsthrottle/archive/master.zip and https://github.com/vimeo/libvmod-boltsort/archive/master.zip. Configure output follows (note: configure gets confused by the lack of the 'vmodincludedir' variable as well): {{{ checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking whether make supports nested variables... yes checking for style of include used by make... GNU checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking dependency style of gcc... gcc3 checking how to run the C preprocessor... gcc -E checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... 
yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking minix/config.h usability... no checking minix/config.h presence... no checking for minix/config.h... no checking whether it is safe to define __EXTENSIONS__... yes checking for gcc... (cached) gcc checking whether we are using the GNU C compiler... (cached) yes checking whether gcc accepts -g... (cached) yes checking for gcc option to accept ISO C89... (cached) none needed checking dependency style of gcc... (cached) gcc3 checking for gcc option to accept ISO C99... -std=gnu99 checking for gcc -std=gnu99 option to accept ISO Standard C... (cached) -std=gnu99 checking how to run the C preprocessor... gcc -E checking how to print strings... printf checking for a sed that does not truncate output... /bin/sed checking for fgrep... /bin/grep -F checking for ld used by gcc -std=gnu99... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 1572864 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking how to convert x86_64-unknown-linux-gnu file names to x86_64 -unknown-linux-gnu format... func_convert_file_noop checking how to convert x86_64-unknown-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for ar... ar checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... 
ranlib checking command to parse /usr/bin/nm -B output from gcc -std=gnu99 object... ok checking for sysroot... no checking for mt... no checking if : is a manifest tool... no checking for dlfcn.h... yes checking for objdir... .libs checking if gcc -std=gnu99 supports -fno-rtti -fno-exceptions... no checking for gcc -std=gnu99 option to produce PIC... -fPIC -DPIC checking if gcc -std=gnu99 PIC flag -fPIC -DPIC works... yes checking if gcc -std=gnu99 static flag -static works... no checking if gcc -std=gnu99 supports -c -o file.o... yes checking if gcc -std=gnu99 supports -c -o file.o... (cached) yes checking whether the gcc -std=gnu99 linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... yes checking whether make sets $(MAKE)... (cached) yes checking for rst2man... rst2man checking for pkg-config... /usr/bin/pkg-config checking pkg-config is at least version 0.9.0... yes checking for ANSI C header files... (cached) yes checking sys/stdlib.h usability... no checking sys/stdlib.h presence... no checking for sys/stdlib.h... no checking for python3... no checking for python3.1... no checking for python3.2... no checking for python2.7... python2.7 checking vsha256.h usability... no checking vsha256.h presence... no checking for vsha256.h... no checking cache/cache.h usability... no checking cache/cache.h presence... no checking for cache/cache.h... no checking for varnishtest... /usr/bin/varnishtest checking for varnishd... /usr/sbin/varnishd checking that generated files are newer than configure... 
done configure: creating ./config.status config.status: creating Makefile config.status: creating src/Makefile config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands config.status: executing libtool commands }}} This is Amazon Linux, FYI. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 12 20:44:40 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 May 2015 20:44:40 -0000 Subject: [Varnish] #1735: Changes to varnishapi.pc.in break module compilation. In-Reply-To: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> References: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> Message-ID: <060.ac4fade9a9d7ecfac403492cfd7de2f0@varnish-cache.org> #1735: Changes to varnishapi.pc.in break module compilation. ----------------------+----------------------- Reporter: drogers | Owner: Type: defect | Status: needinfo Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Comment (by fgsch): Works for me in Centos 6 as well. Can you attach the config.log file? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 12 21:31:30 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 May 2015 21:31:30 -0000 Subject: [Varnish] #1735: Changes to varnishapi.pc.in break module compilation. In-Reply-To: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> References: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> Message-ID: <060.827da4074fc629767fd4d183c6ba520d@varnish-cache.org> #1735: Changes to varnishapi.pc.in break module compilation. 
----------------------+----------------------- Reporter: drogers | Owner: Type: defect | Status: needinfo Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Comment (by drogers): Sure, here it is with those lines missing. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 12 22:28:42 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 May 2015 22:28:42 -0000 Subject: [Varnish] #1735: Changes to varnishapi.pc.in break module compilation. In-Reply-To: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> References: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> Message-ID: <060.d4919f16054ee9d01d857190d592c6c3@varnish-cache.org> #1735: Changes to varnishapi.pc.in break module compilation. ----------------------+----------------------- Reporter: drogers | Owner: Type: defect | Status: needinfo Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Comment (by fgsch): That's pretty weird. Your config.log defines VMOD_INCLUDE_DIR but that doesn't exist anymore. Do you have multiple varnishapi.pc? Did you re-run autogen.sh? Can you also include your configure and varnishapi.pc? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 13 07:14:28 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 May 2015 07:14:28 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.f6910c515150e6ec159bf92b93564ed5@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? 
| Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by phk): Can I get you to try the "poll" waiter? You need to add -p waiter=poll to the command line. (I suspect this is related to the epoll() waiter only.) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 13 08:54:43 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 May 2015 08:54:43 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.abf248222befb7b321f05bd92b7b2d93@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? | Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by phk): I have committed a change to the way the epoll waiter works. Please also try that (-trunk) if you can. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 13 16:20:48 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 May 2015 16:20:48 -0000 Subject: [Varnish] #1735: Changes to varnishapi.pc.in break module compilation. In-Reply-To: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> References: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> Message-ID: <060.eb5c7d50662d793b1bebcdca0e6f4e87@varnish-cache.org> #1735: Changes to varnishapi.pc.in break module compilation. 
----------------------+----------------------- Reporter: drogers | Owner: Type: defect | Status: needinfo Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Comment (by drogers): Sure, they're attached now. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 13 16:26:08 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 May 2015 16:26:08 -0000 Subject: [Varnish] #1735: Changes to varnishapi.pc.in break module compilation. In-Reply-To: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> References: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> Message-ID: <060.c5b1b3cee902a5306c0aa9b8188d499b@varnish-cache.org> #1735: Changes to varnishapi.pc.in break module compilation. ----------------------+----------------------- Reporter: drogers | Owner: Type: defect | Status: needinfo Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Comment (by fgsch): Thanks. Sorry, forgot to mention, can you also include varnish.m4? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 13 19:06:52 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 May 2015 19:06:52 -0000 Subject: [Varnish] #1735: Changes to varnishapi.pc.in break module compilation. In-Reply-To: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> References: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> Message-ID: <060.788e364af5bf2f59b704a3933f21c4de@varnish-cache.org> #1735: Changes to varnishapi.pc.in break module compilation. 
----------------------+----------------------- Reporter: drogers | Owner: Type: defect | Status: needinfo Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Comment (by drogers): So, this is rather embarrassing, but it turns out that our puppet config was overwriting the varnish.m4 file with an older version. Once I stopped it from doing that, it worked fine. Sorry for the confusion, and thanks for your help. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 13 20:37:04 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 May 2015 20:37:04 -0000 Subject: [Varnish] #1692: Assert error in vep_emit_common In-Reply-To: <044.b0926c147f799f3c2119ecd58287e758@varnish-cache.org> References: <044.b0926c147f799f3c2119ecd58287e758@varnish-cache.org> Message-ID: <059.c6a01650a4a629fe0a3d877b17707788@varnish-cache.org> #1692: Assert error in vep_emit_common --------------------+---------------------------------------- Reporter: martin | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: fixed Keywords: | --------------------+---------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * owner: => Poul-Henning Kamp * resolution: => fixed Comment: In [7f71fe6e0182cff8aaa5470ffff6c74b67ad8bf7]: {{{ #!CommitTicketReference repository="" revision="7f71fe6e0182cff8aaa5470ffff6c74b67ad8bf7" Avoid a length=0 panic where the length can actually be zero. Fixes #1692 Found and diagnosed by: martin Slightly different patch by me. 
}}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 13 21:44:59 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 May 2015 21:44:59 -0000 Subject: [Varnish] #1735: Changes to varnishapi.pc.in break module compilation. In-Reply-To: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> References: <045.bd0f62706526555feb56aeafea507809@varnish-cache.org> Message-ID: <060.1e442087cb32d89ad1ae4f017cba51d5@varnish-cache.org> #1735: Changes to varnishapi.pc.in break module compilation. ----------------------+---------------------- Reporter: drogers | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: invalid Keywords: | ----------------------+---------------------- Changes (by fgsch): * status: needinfo => closed * resolution: => invalid Comment: No worries. Thanks for confirming. I will close this now. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 14 02:14:30 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 14 May 2015 02:14:30 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.1bb42704477770bbcd971d80320f814f@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? | Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): After 12h varnish uptime. OBS: average varnish uptime 7h. 
{{{ varnishd -V varnishd (varnish-trunk revision ade0db0) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS }}} {{{ DAEMON_OPTS="-a XXX.XXX.XXX.XXX:80, \ -T 127.0.0.1:6082 \ -f /etc/varnish/default.vcl \ -h classic,16383 \ -s malloc,6G \ -p thread_pools=2 \ -p thread_pool_min=500 \ -p thread_pool_max=3000 \ -p thread_pool_add_delay=2 \ -p auto_restart=on \ -p ping_interval=3 \ -p send_timeout=5000 \ -p workspace_session=1M \ -p cli_timeout=25 \ -p http_gzip_support=off \ -p tcp_keepalive_time=600 \ -p listen_depth=8192 \ -p cli_buffer=32k \ -p cli_limit=96k \ -p waiter=poll \ -p ban_dups=on" }}} Log file 1 {{{ varnishlog -i Debug -g raw > /home/out.log }}} Log file 2 {{{ varnishlog -i BackendClose -i Backend -i BackendOpen -i BackendReuse -i Backend_health -i FetchError -i WorkThread > /home/salida2.log }}} {{{ varnishadm panic.show Child has not panicked or panic has been cleared Command failed with error code 300 }}} {{{ cat /home/out.log |grep "Acceptor poll space increased" 0 Debug - "Acceptor poll space increased to 1024" 0 Debug - "Acceptor poll space increased to 1024" 0 Debug - "Acceptor poll space increased to 1024" 0 Debug - "Acceptor poll space increased to 1024" 0 Debug - "Acceptor poll space increased to 1024" 0 Debug - "Acceptor poll space increased to 1024" 0 Debug - "Acceptor poll space increased to 1024" }}} {{{ cat /home/out.log |grep "POLL Evict"|wc -l 606873 }}} {{{ cat /home/out.log |grep "POLL Inject"|wc -l 607083 }}} {{{ cat /home/out.log |grep "POLL Handle"|wc -l 74503781 }}} {{{ cat /home/out.log |grep "Handler stolen"|wc -l 0 }}} {{{ cat /home/out.log |grep "Recycle fd"|cut -d" " -f26-|sort|uniq -c|sort -n 69015 in_w 0" 69015 Wait_Enter" 79659 in_w 1" }}} {{{ cat /home/out.log |grep "Handler fd"|cut -d" " -f26-|sort|uniq -c|sort -n 2 in_w 1 state 0x8 ev 1 have_been 1" 3 in_w 1 state 0x1 ev 1 have_been 0" 3 in_w 1 state 0x1 ev 2 have_been 1" 4 in_w 1 state 0x2 ev 1 have_been 0" 14 in_w 1 state 0x4 ev 1 have_been 
0" 36 in_w 1 state 0x1 ev 1 have_been 1" 39 in_w 1 state 0x2 ev 1 have_been 1" 48 in_w 1 state 0x8 ev 2 have_been 1" 129 in_w 1 state 0x4 ev 2 have_been 1" 241 in_w 1 state 0x4 ev 2 have_been 0" 608 in_w 1 state 0x4 ev 1 have_been 1" 1115 in_w 1 state 0x1 ev 2 have_been 0" 6450 in_w 1 state 0x2 ev 2 have_been 0" 61311 in_w 1 state 0x2 ev 2 have_been 1" }}} {{{ cat /home/out.log |grep "Close fd"|cut -d" " -f26-|sort|uniq -c|sort -n 58 in_w 1" 7713 in_w 0" }}} {{{ cat /home/out.log |grep "Steal fd"|cut -d" " -f26-|sort|uniq -c|sort -n 68847 state 0x1" 78665 state 0x4" }}} {{{ cat /home/out.log |grep "No new fd"|wc -l 0 }}} {{{ cat /home/salida2.log |grep FetchError |sort|uniq -c|sort -n 1 - FetchError Resource temporarily unavailable 6 - FetchError straight insufficient bytes 58 - FetchError http first read error: EOF 84 - FetchError no backend connection }}} {{{ varnishstat -1 MAIN.uptime 45109 1.00 Child process uptime MAIN.sess_conn 620241 13.75 Sessions accepted MAIN.sess_drop 0 0.00 Sessions dropped MAIN.sess_fail 0 0.00 Session accept failures MAIN.client_req_400 0 0.00 Client requests received, subject to 400 errors MAIN.client_req_417 0 0.00 Client requests received, subject to 417 errors MAIN.client_req 1582499 35.08 Good client requests received MAIN.cache_hit 1395139 30.93 Cache hits MAIN.cache_hitpass 0 0.00 Cache hits for pass MAIN.cache_miss 163918 3.63 Cache misses MAIN.backend_conn 0 0.00 Backend conn. success MAIN.backend_unhealthy 84 0.00 Backend conn. not attempted MAIN.backend_busy 0 0.00 Backend conn. too many MAIN.backend_fail 0 0.00 Backend conn. failures MAIN.backend_reuse 344540 7.64 Backend conn. reuses MAIN.backend_recycle 347536 7.70 Backend conn. recycles MAIN.backend_retry 134 0.00 Backend conn. 
retry MAIN.fetch_head 0 0.00 Fetch no body (HEAD) MAIN.fetch_length 254150 5.63 Fetch with Length MAIN.fetch_chunked 1796 0.04 Fetch chunked MAIN.fetch_eof 0 0.00 Fetch EOF MAIN.fetch_bad 0 0.00 Fetch bad T-E MAIN.fetch_none 4683 0.10 Fetch no body MAIN.fetch_1xx 0 0.00 Fetch no body (1xx) MAIN.fetch_204 0 0.00 Fetch no body (204) MAIN.fetch_304 86952 1.93 Fetch no body (304) MAIN.fetch_failed 14 0.00 Fetch failed (all causes) MAIN.fetch_no_thread 0 0.00 Fetch failed (no thread) MAIN.pools 2 . Number of thread pools MAIN.threads 1000 . Total number of threads MAIN.threads_limited 0 0.00 Threads hit max MAIN.threads_created 1000 0.02 Threads created MAIN.threads_destroyed 0 0.00 Threads destroyed MAIN.threads_failed 0 0.00 Thread creation failed MAIN.thread_queue_len 0 . Length of session queue MAIN.busy_sleep 2371 0.05 Number of requests sent to sleep on busy objhdr MAIN.busy_wakeup 2371 0.05 Number of requests woken after sleep on busy objhdr MAIN.busy_killed 0 0.00 Number of requests killed after sleep on busy objhdr MAIN.sess_queued 0 0.00 Sessions queued for thread MAIN.sess_dropped 0 0.00 Sessions dropped for thread MAIN.n_object 4991 . object structs made MAIN.n_vampireobject 0 . unresurrected objects MAIN.n_objectcore 5810 . objectcore structs made MAIN.n_objecthead 5860 . objecthead structs made MAIN.n_waitinglist 981 . waitinglist structs made MAIN.n_backend 5 . Number of backends MAIN.n_expired 41683 . Number of expired objects MAIN.n_lru_nuked 117276 . Number of LRU nuked objects MAIN.n_lru_moved 307416 . 
Number of LRU moved objects MAIN.losthdr 0 0.00 HTTP header overflows MAIN.s_sess 620241 13.75 Total sessions seen MAIN.s_req 1582499 35.08 Total requests seen MAIN.s_pipe 19130 0.42 Total pipe sessions seen MAIN.s_pass 5 0.00 Total pass-ed requests seen MAIN.s_fetch 163923 3.63 Total backend fetches initiated MAIN.s_synth 4307 0.10 Total synthethic responses made MAIN.s_req_hdrbytes 684657640 15177.85 Request header bytes MAIN.s_req_bodybytes 0 0.00 Request body bytes MAIN.s_resp_hdrbytes 548015036 12148.69 Response header bytes MAIN.s_resp_bodybytes 1036430331602 22976131.85 Response body bytes MAIN.s_pipe_hdrbytes 7462368 165.43 Pipe request header bytes MAIN.s_pipe_in 128 0.00 Piped bytes from client MAIN.s_pipe_out 99654156717 2209185.68 Piped bytes to client MAIN.sess_closed 107237 2.38 Session Closed MAIN.sess_closed_err 490513 10.87 Session Closed with error MAIN.sess_pipeline 0 0.00 Session Pipeline MAIN.sess_readahead 0 0.00 Session Read Ahead MAIN.sess_herd 1458429 32.33 Session herd MAIN.sc_rem_close 43219 0.96 Session OK REM_CLOSE MAIN.sc_req_close 4225 0.09 Session OK REQ_CLOSE MAIN.sc_req_http10 0 0.00 Session Err REQ_HTTP10 MAIN.sc_rx_bad 0 0.00 Session Err RX_BAD MAIN.sc_rx_body 0 0.00 Session Err RX_BODY MAIN.sc_rx_junk 0 0.00 Session Err RX_JUNK MAIN.sc_rx_overflow 0 0.00 Session Err RX_OVERFLOW MAIN.sc_rx_timeout 490513 10.87 Session Err RX_TIMEOUT MAIN.sc_tx_pipe 19121 0.42 Session OK TX_PIPE MAIN.sc_tx_error 0 0.00 Session Err TX_ERROR MAIN.sc_tx_eof 0 0.00 Session OK TX_EOF MAIN.sc_resp_close 62654 1.39 Session OK RESP_CLOSE MAIN.sc_overload 0 0.00 Session Err OVERLOAD MAIN.sc_pipe_overflow 0 0.00 Session Err PIPE_OVERFLOW MAIN.sc_range_short 0 0.00 Session Err RANGE_SHORT MAIN.shm_records 289059599 6408.02 SHM records MAIN.shm_writes 165872322 3677.14 SHM writes MAIN.shm_flushes 200 0.00 SHM flushes due to overflow MAIN.shm_cont 156978 3.48 SHM MTX contention MAIN.shm_cycles 107 0.00 SHM cycles through buffer MAIN.backend_req 347680 7.71 
Backend requests made MAIN.n_vcl 1 0.00 Number of loaded VCLs in total MAIN.n_vcl_avail 1 0.00 Number of VCLs available MAIN.n_vcl_discard 0 0.00 Number of discarded VCLs MAIN.bans 1 . Count of bans MAIN.bans_completed 1 . Number of bans marked 'completed' MAIN.bans_obj 0 . Number of bans using obj.* MAIN.bans_req 0 . Number of bans using req.* MAIN.bans_added 1 0.00 Bans added MAIN.bans_deleted 0 0.00 Bans deleted MAIN.bans_tested 0 0.00 Bans tested against objects (lookup) MAIN.bans_obj_killed 0 0.00 Objects killed by bans (lookup) MAIN.bans_lurker_tested 0 0.00 Bans tested against objects (lurker) MAIN.bans_tests_tested 0 0.00 Ban tests tested against objects (lookup) MAIN.bans_lurker_tests_tested 0 0.00 Ban tests tested against objects (lurker) MAIN.bans_lurker_obj_killed 0 0.00 Objects killed by bans (lurker) MAIN.bans_dups 0 0.00 Bans superseded by other bans MAIN.bans_lurker_contention 0 0.00 Lurker gave way for lookup MAIN.bans_persisted_bytes 13 . Bytes used by the persisted ban lists MAIN.bans_persisted_fragmentation 0 . Extra bytes in persisted ban lists due to fragmentation MAIN.n_purges 0 . Number of purge operations executed MAIN.n_obj_purged 0 . Number of purged objects MAIN.exp_mailed 648638 14.38 Number of objects mailed to expiry thread MAIN.exp_received 648638 14.38 Number of objects received by expiry thread MAIN.hcb_nolock 0 0.00 HCB Lookups without lock MAIN.hcb_lock 0 0.00 HCB Lookups with lock MAIN.hcb_insert 0 0.00 HCB Inserts MAIN.esi_errors 0 0.00 ESI parse errors (unlock) MAIN.esi_warnings 0 0.00 ESI parse warnings (unlock) MAIN.vmods 2 . Loaded VMODs MAIN.n_gzip 0 0.00 Gzip operations MAIN.n_gunzip 0 0.00 Gunzip operations MAIN.vsm_free 971488 . Free VSM space MAIN.vsm_used 83963120 . Used VSM space MAIN.vsm_cooling 0 . Cooling VSM space MAIN.vsm_overflow 0 . 
Overflow VSM space MAIN.vsm_overflowed 0 0.00 Overflowed VSM space MGT.uptime 45109 1.00 Management process uptime MGT.child_start 1 0.00 Child process started MGT.child_exit 0 0.00 Child process normal exit MGT.child_stop 0 0.00 Child process unexpected exit MGT.child_died 0 0.00 Child process died (signal) MGT.child_dump 0 0.00 Child process core dumped MGT.child_panic 0 0.00 Child process panic MEMPOOL.busyobj.live 37 . In use MEMPOOL.busyobj.pool 18 . In Pool MEMPOOL.busyobj.sz_wanted 65536 . Size requested MEMPOOL.busyobj.sz_actual 65504 . Size allocated MEMPOOL.busyobj.allocs 366813 8.13 Allocations MEMPOOL.busyobj.frees 366776 8.13 Frees MEMPOOL.busyobj.recycle 366662 8.13 Recycled from pool MEMPOOL.busyobj.timeout 28276 0.63 Timed out from pool MEMPOOL.busyobj.toosmall 0 0.00 Too small to recycle MEMPOOL.busyobj.surplus 0 0.00 Too many for pool MEMPOOL.busyobj.randry 151 0.00 Pool ran dry MEMPOOL.req0.live 72 . In use MEMPOOL.req0.pool 31 . In Pool MEMPOOL.req0.sz_wanted 65536 . Size requested MEMPOOL.req0.sz_actual 65504 . Size allocated MEMPOOL.req0.allocs 791478 17.55 Allocations MEMPOOL.req0.frees 791406 17.54 Frees MEMPOOL.req0.recycle 790891 17.53 Recycled from pool MEMPOOL.req0.timeout 30123 0.67 Timed out from pool MEMPOOL.req0.toosmall 0 0.00 Too small to recycle MEMPOOL.req0.surplus 103 0.00 Too many for pool MEMPOOL.req0.randry 587 0.01 Pool ran dry MEMPOOL.sess0.live 246 . In use MEMPOOL.sess0.pool 12 . In Pool MEMPOOL.sess0.sz_wanted 1048576 . Size requested MEMPOOL.sess0.sz_actual 1048544 . Size allocated MEMPOOL.sess0.allocs 310037 6.87 Allocations MEMPOOL.sess0.frees 309791 6.87 Frees MEMPOOL.sess0.recycle 309915 6.87 Recycled from pool MEMPOOL.sess0.timeout 30751 0.68 Timed out from pool MEMPOOL.sess0.toosmall 0 0.00 Too small to recycle MEMPOOL.sess0.surplus 0 0.00 Too many for pool MEMPOOL.sess0.randry 122 0.00 Pool ran dry MEMPOOL.req1.live 71 . In use MEMPOOL.req1.pool 37 . In Pool MEMPOOL.req1.sz_wanted 65536 . 
Size requested MEMPOOL.req1.sz_actual 65504 . Size allocated MEMPOOL.req1.allocs 796368 17.65 Allocations MEMPOOL.req1.frees 796297 17.65 Frees MEMPOOL.req1.recycle 795817 17.64 Recycled from pool MEMPOOL.req1.timeout 30183 0.67 Timed out from pool MEMPOOL.req1.toosmall 0 0.00 Too small to recycle MEMPOOL.req1.surplus 33 0.00 Too many for pool MEMPOOL.req1.randry 551 0.01 Pool ran dry MEMPOOL.sess1.live 262 . In use MEMPOOL.sess1.pool 9 . In Pool MEMPOOL.sess1.sz_wanted 1048576 . Size requested MEMPOOL.sess1.sz_actual 1048544 . Size allocated MEMPOOL.sess1.allocs 310241 6.88 Allocations MEMPOOL.sess1.frees 309979 6.87 Frees MEMPOOL.sess1.recycle 310078 6.87 Recycled from pool MEMPOOL.sess1.timeout 30717 0.68 Timed out from pool MEMPOOL.sess1.toosmall 0 0.00 Too small to recycle MEMPOOL.sess1.surplus 0 0.00 Too many for pool MEMPOOL.sess1.randry 163 0.00 Pool ran dry SMA.s0.c_req 2879399 63.83 Allocator requests SMA.s0.c_fail 1851290 41.04 Allocator failures SMA.s0.c_bytes 230747070537 5115322.23 Bytes allocated SMA.s0.c_freed 224306392497 4972541.90 Bytes freed SMA.s0.g_alloc 21083 . Allocations outstanding SMA.s0.g_bytes 6440678040 . Bytes outstanding SMA.s0.g_space 1772904 . Bytes available SMA.Transient.c_req 4477 0.10 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 4976123 110.31 Bytes allocated SMA.Transient.c_freed 4976123 110.31 Bytes freed SMA.Transient.g_alloc 0 . Allocations outstanding SMA.Transient.g_bytes 0 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.boot.live.happy 18446744073709551615 . 
Happy health probes VBE.boot.live.bereq_hdrbytes 70786367 1569.23 Request header bytes VBE.boot.live.bereq_bodybytes 0 0.00 Request body bytes VBE.boot.live.beresp_hdrbytes 70551340 1564.02 Response header bytes VBE.boot.live.beresp_bodybytes 44906673308 995514.72 Response body bytes VBE.boot.live.pipe_hdrbytes 0 0.00 Pipe request header bytes VBE.boot.live.pipe_out 0 0.00 Piped bytes to backend VBE.boot.live.pipe_in 0 0.00 Piped bytes from backend VBE.boot.live.conn 214924 . Concurrent connections to backend VBE.boot.live.req 214921 4.76 Backend requests sent VBE.boot.vod_cache.happy 18446744073709551615 . Happy health probes VBE.boot.vod_cache.bereq_hdrbytes 57031720 1264.31 Request header bytes VBE.boot.vod_cache.bereq_bodybytes 0 0.00 Request body bytes VBE.boot.vod_cache.beresp_hdrbytes 47879808 1061.42 Response header bytes VBE.boot.vod_cache.beresp_bodybytes 185635292265 4115260.64 Response body bytes VBE.boot.vod_cache.pipe_hdrbytes 6448338 142.95 Pipe request header bytes VBE.boot.vod_cache.pipe_out 128 0.00 Piped bytes to backend VBE.boot.vod_cache.pipe_in 99654156717 2209185.68 Piped bytes from backend VBE.boot.vod_cache.conn 148304 . Concurrent connections to backend VBE.boot.vod_cache.req 148304 3.29 Backend requests sent VBE.boot.m3u82.happy 18446744073709551615 . Happy health probes VBE.boot.m3u82.bereq_hdrbytes 270673 6.00 Request header bytes VBE.boot.m3u82.bereq_bodybytes 0 0.00 Request body bytes VBE.boot.m3u82.beresp_hdrbytes 149275 3.31 Response header bytes VBE.boot.m3u82.beresp_bodybytes 299164 6.63 Response body bytes VBE.boot.m3u82.pipe_hdrbytes 0 0.00 Pipe request header bytes VBE.boot.m3u82.pipe_out 0 0.00 Piped bytes to backend VBE.boot.m3u82.pipe_in 0 0.00 Piped bytes from backend VBE.boot.m3u82.conn 853 . Concurrent connections to backend VBE.boot.m3u82.req 853 0.02 Backend requests sent VBE.boot.m3u87.happy 18446744073709551615 . 
Happy health probes VBE.boot.m3u87.bereq_hdrbytes 288667 6.40 Request header bytes VBE.boot.m3u87.bereq_bodybytes 0 0.00 Request body bytes VBE.boot.m3u87.beresp_hdrbytes 159333 3.53 Response header bytes VBE.boot.m3u87.beresp_bodybytes 319356 7.08 Response body bytes VBE.boot.m3u87.pipe_hdrbytes 0 0.00 Pipe request header bytes VBE.boot.m3u87.pipe_out 0 0.00 Piped bytes to backend VBE.boot.m3u87.pipe_in 0 0.00 Piped bytes from backend VBE.boot.m3u87.conn 921 . Concurrent connections to backend VBE.boot.m3u87.req 921 0.02 Backend requests sent VBE.boot.chvm3u8.happy 18446744073709551615 . Happy health probes VBE.boot.chvm3u8.bereq_hdrbytes 687101 15.23 Request header bytes VBE.boot.chvm3u8.bereq_bodybytes 0 0.00 Request body bytes VBE.boot.chvm3u8.beresp_hdrbytes 565899 12.55 Response header bytes VBE.boot.chvm3u8.beresp_bodybytes 241656 5.36 Response body bytes VBE.boot.chvm3u8.pipe_hdrbytes 0 0.00 Pipe request header bytes VBE.boot.chvm3u8.pipe_out 0 0.00 Piped bytes to backend VBE.boot.chvm3u8.pipe_in 0 0.00 Piped bytes from backend VBE.boot.chvm3u8.conn 1861 . 
Concurrent connections to backend VBE.boot.chvm3u8.req 1861 0.04 Backend requests sent LCK.sms.creat 0 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 0 0.00 Lock Operations LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.sma.creat 2 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 3895379 86.35 Lock Operations LCK.smf.creat 0 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 0 0.00 Lock Operations LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hcb.creat 0 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 0 0.00 Lock Operations LCK.hcl.creat 16383 0.36 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 3113169 69.01 Lock Operations LCK.vcl.creat 1 0.00 Created locks LCK.vcl.destroy 0 0.00 Destroyed locks LCK.vcl.locks 748115 16.58 Lock Operations LCK.sessmem.creat 0 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 0 0.00 Lock Operations LCK.sess.creat 620189 13.75 Created locks LCK.sess.destroy 619769 13.74 Destroyed locks LCK.sess.locks 0 0.00 Lock Operations LCK.wstat.creat 1 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 2467837 54.71 Lock Operations LCK.herder.creat 0 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 0 0.00 Lock Operations LCK.wq.creat 3 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 5831768 129.28 Lock Operations LCK.objhdr.creat 164797 3.65 Created locks LCK.objhdr.destroy 158935 3.52 Destroyed locks LCK.objhdr.locks 9557275 211.87 Lock Operations LCK.exp.creat 1 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 2361804 52.36 Lock Operations LCK.lru.creat 2 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 1647109 36.51 Lock Operations LCK.cli.creat 1 
0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 15043 0.33 Lock Operations LCK.ban.creat 1 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 838238 18.58 Lock Operations LCK.vbp.creat 0 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 0 0.00 Lock Operations LCK.backend.creat 15 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 1664985 36.91 Lock Operations LCK.vcapace.creat 1 0.00 Created locks LCK.vcapace.destroy 0 0.00 Destroyed locks LCK.vcapace.locks 0 0.00 Lock Operations LCK.nbusyobj.creat 0 0.00 Created locks LCK.nbusyobj.destroy 0 0.00 Destroyed locks LCK.nbusyobj.locks 0 0.00 Lock Operations LCK.busyobj.creat 366800 8.13 Created locks LCK.busyobj.destroy 366776 8.13 Destroyed locks LCK.busyobj.locks 9789231 217.01 Lock Operations LCK.mempool.creat 5 0.00 Created locks LCK.mempool.destroy 0 0.00 Destroyed locks LCK.mempool.locks 5574321 123.57 Lock Operations LCK.vxid.creat 1 0.00 Created locks LCK.vxid.destroy 0 0.00 Destroyed locks LCK.vxid.locks 986 0.02 Lock Operations LCK.pipestat.creat 1 0.00 Created locks LCK.pipestat.destroy 0 0.00 Destroyed locks LCK.pipestat.locks 19122 0.42 Lock Operations LCK.misc.creat 1 0.00 Created locks LCK.misc.destroy 0 0.00 Destroyed locks LCK.misc.locks 318126 7.05 Lock Operations }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 14 07:38:20 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 14 May 2015 07:38:20 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.d2dd034eff467ec775d6f9968532d14e@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? 
| Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by phk): So using the poll waiter stabilizes it ? That's very valuable information. Can I get you to try the changes I did to the epoll waiter yesterday ? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 14 08:05:01 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 14 May 2015 08:05:01 -0000 Subject: [Varnish] #1734: segfault after vcl.state cold In-Reply-To: <055.ec7692e32d951e1b1e281c27d6c75658@varnish-cache.org> References: <055.ec7692e32d951e1b1e281c27d6c75658@varnish-cache.org> Message-ID: <070.32fe0d88ddc33715724bd36b148c828c@varnish-cache.org> #1734: segfault after vcl.state cold ---------------------------------+---------------------------------- Reporter: zaterio@? | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: vcl state cold warm | ---------------------------------+---------------------------------- Comment (by phk): Please capture output of panic.show for us if you can. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri May 15 15:40:07 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 15 May 2015 15:40:07 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.770d9ce9ecc8097414784f2ae990882c@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? 
| Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): '''waiter: poll.'''
{{{
varnishd -V
varnishd (varnish-trunk revision ade0db0)
Copyright (c) 2006 Verdens Gang AS
Copyright (c) 2006-2015 Varnish Software AS
}}}
new panic:
{{{
varnishadm panic.show
Last panic at: Fri, 15 May 2015 03:51:17 GMT
Assert error in vwp_inject(), waiter/cache_waiter_poll.c line 100:
  Condition((vwp->pollfd[fd].revents) == 0) not true.
thread = (cache-poll)
version = varnish-trunk revision ade0db0
ident = Linux,3.2.0-4-amd64,x86_64,-junix,-smalloc,-smalloc,-hclassic,poll
Backtrace:
  0x4331f4: pan_ic+0x134
  0x46425d: vwp_inject+0x18d
  0x463331: Wait_Handle+0x171
  0x463cb0: vwp_main+0x1e0
  0x7f04eff5ab50: libpthread.so.0(+0x6b50) [0x7f04eff5ab50]
  0x7f04efca495d: libc.so.6(clone+0x6d) [0x7f04efca495d]
}}}
I have attached a file with the last 10000 log lines. Using the poll waiter, varnish is more stable; the average uptime is 12h. I will try this exact configuration, but with epoll.
{{{ cat /varnishcache/out2.log |grep "Handler fd"|cut -d" " -f26-|sort|uniq -c|sort -n 1 in_w 1 state 0x8 ev 1 have_been 0" 8 in_w 1 state 0x1 ev 2 have_been 1" 10 in_w 1 state 0x8 ev 1 have_been 1" 11 in_w 1 state 0x1 ev 1 have_been 0" 17 in_w 1 state 0x2 ev 1 have_been 0" 23 in_w 1 state 0x8 ev 2 have_been 0" 27 in_w 1 state 0x4 ev 1 have_been 0" 85 in_w 1 state 0x1 ev 1 have_been 1" 145 in_w 1 state 0x2 ev 1 have_been 1" 373 in_w 1 state 0x4 ev 2 have_been 1" 459 in_w 1 state 0x8 ev 2 have_been 1" 595 in_w 1 state 0x4 ev 2 have_been 0" 1498 in_w 1 state 0x4 ev 1 have_been 1" 2927 in_w 1 state 0x1 ev 2 have_been 0" 2977 in_w 1 state 0x2 ev 2 have_been 0" 141019 in_w 1 state 0x2 ev 2 have_been 1" }}} {{{ varnishadm param.show acceptor_sleep_decay 0.9 (default) acceptor_sleep_incr 0.001 [seconds] (default) acceptor_sleep_max 0.050 [seconds] (default) auto_restart on [bool] (default) ban_dups on [bool] (default) ban_lurker_age 60.000 [seconds] (default) ban_lurker_batch 1000 (default) ban_lurker_sleep 0.010 [seconds] (default) between_bytes_timeout 60.000 [seconds] (default) busyobj_worker_cache off [bool] (default) cc_command "exec gcc -std=gnu99 -g -O2 -Wall -Werror -Wno- error=unused-result -pthread -fpic -shared -Wl,-x -o %o %s" (default) cli_buffer 32k [bytes] cli_limit 96k [bytes] cli_timeout 25.000 [seconds] clock_skew 10 [seconds] (default) connect_timeout 3.500 [seconds] (default) critbit_cooloff 180.000 [seconds] (default) debug none (default) default_grace 10.000 [seconds] (default) default_keep 0.000 [seconds] (default) default_ttl 120.000 [seconds] (default) feature none (default) fetch_chunksize 16k [bytes] (default) fetch_maxchunksize 0.25G [bytes] (default) first_byte_timeout 60.000 [seconds] (default) gzip_buffer 32k [bytes] (default) gzip_level 6 (default) gzip_memlevel 8 (default) http_gzip_support off [bool] http_max_hdr 64 [header lines] (default) http_range_support on [bool] (default) http_req_hdr_len 8k [bytes] (default) http_req_size 32k 
[bytes] (default) http_resp_hdr_len 8k [bytes] (default) http_resp_size 32k [bytes] (default) idle_send_timeout 60.000 [seconds] (default) listen_depth 8192 [connections] lru_interval 2.000 [seconds] (default) max_esi_depth 5 [levels] (default) max_restarts 4 [restarts] (default) max_retries 4 [retries] (default) nuke_limit 50 [allocations] (default) pcre_match_limit 10000 (default) pcre_match_limit_recursion 10000 (default) ping_interval 3 [seconds] (default) pipe_timeout 60.000 [seconds] (default) pool_req 10,100,10 (default) pool_sess 10,100,10 (default) pool_vbo 10,100,10 (default) prefer_ipv6 off [bool] (default) rush_exponent 3 [requests per request] (default) send_timeout 5000.000 [seconds] session_max 100000 [sessions] (default) shm_reclen 255b [bytes] (default) shortlived 10.000 [seconds] (default) sigsegv_handler off [bool] (default) syslog_cli_traffic on [bool] (default) tcp_keepalive_intvl 75.000 [seconds] (default) tcp_keepalive_probes 9 [probes] (default) tcp_keepalive_time 600.000 [seconds] thread_pool_add_delay 2.000 [seconds] thread_pool_destroy_delay 1.000 [seconds] (default) thread_pool_fail_delay 0.200 [seconds] (default) thread_pool_max 3000 [threads] thread_pool_min 500 [threads] thread_pool_stack 48k [bytes] (default) thread_pool_timeout 300.000 [seconds] (default) thread_pools 2 [pools] (default) thread_queue_limit 20 (default) thread_stats_rate 10 [requests] (default) timeout_idle 10.000 [seconds] timeout_linger 0.050 [seconds] (default) vcc_allow_inline_c off [bool] (default) vcc_err_unref on [bool] (default) vcc_unsafe_path on [bool] (default) vcl_cooldown 600.000 [seconds] (default) vcl_dir /usr/local/varnish-trunk-ade0db/etc/varnish (default) vmod_dir /usr/local/varnish-trunk- ade0db/lib/varnish/vmods (default) vsl_buffer 4k [bytes] (default) vsl_mask -VCL_trace,-WorkThread,-Hash,-VfpAcct (default) vsl_reclen 255b [bytes] (default) vsl_space 80M [bytes] (default) vsm_space 1M [bytes] (default) waiter poll (possible values: epoll, poll) 
workspace_backend 64k [bytes] (default) workspace_client 64k [bytes] (default) workspace_session 1M [bytes] workspace_thread 2k [bytes] (default) }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri May 15 16:00:46 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 15 May 2015 16:00:46 -0000 Subject: [Varnish] #1734: segfault after vcl.state cold In-Reply-To: <055.ec7692e32d951e1b1e281c27d6c75658@varnish-cache.org> References: <055.ec7692e32d951e1b1e281c27d6c75658@varnish-cache.org> Message-ID: <070.01b5a73e7ae071326e7c7983d2f5d41e@varnish-cache.org> #1734: segfault after vcl.state cold ---------------------------------+---------------------------------- Reporter: zaterio@? | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: vcl state cold warm | ---------------------------------+---------------------------------- Comment (by zaterio@?): {{{ varnishadm panic.show Child has not panicked or panic has been cleared Command failed with error code 300 }}} no VMODS. email with config send. Regards. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri May 15 16:08:51 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 15 May 2015 16:08:51 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.ecb2073e03c0c5cf8184cb99db70596e@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? 
| Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): And now with '''epoll waiter''': very unstable, two minutes of uptime. new panic: {{{ varnishadm panic.show Last panic at: Fri, 15 May 2015 16:03:23 GMT Assert error in vwe_inject(), waiter/cache_waiter_epoll.c line 76: Condition((epoll_ctl(vwe->epfd, 1, wp->fd, &ev)) == 0) not true. errno = 17 (File exists) thread = (cache-epoll) version = varnish-trunk revision ade0db0 ident = Linux,3.2.0-4-amd64,x86_64,-junix,-smalloc,-smalloc,-hclassic,epoll Backtrace: 0x4331f4: pan_ic+0x134 0x46365a: vwe_inject+0x9a 0x463331: Wait_Handle+0x171 0x4639e7: vwe_thread+0x147 0x7f954fe30b50: libpthread.so.0(+0x6b50) [0x7f954fe30b50] 0x7f954fb7a95d: libc.so.6(clone+0x6d) [0x7f954fb7a95d] }}} {{{ varnishd -V varnishd (varnish-trunk revision ade0db0) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS }}} {{{ varnishadm param.show acceptor_sleep_decay 0.9 (default) acceptor_sleep_incr 0.001 [seconds] (default) acceptor_sleep_max 0.050 [seconds] (default) auto_restart on [bool] (default) ban_dups on [bool] (default) ban_lurker_age 60.000 [seconds] (default) ban_lurker_batch 1000 (default) ban_lurker_sleep 0.010 [seconds] (default) between_bytes_timeout 60.000 [seconds] (default) busyobj_worker_cache off [bool] (default) cc_command "exec gcc -std=gnu99 -g -O2 -Wall -Werror -Wno- error=unused-result -pthread -fpic -shared -Wl,-x -o %o %s" (default) cli_buffer 32k [bytes] cli_limit 96k [bytes] cli_timeout 25.000 [seconds] clock_skew 10 [seconds] (default) connect_timeout 3.500 [seconds] (default) critbit_cooloff 180.000 [seconds] (default) debug none (default) default_grace 10.000 [seconds] (default) default_keep 0.000 [seconds] (default) default_ttl 120.000 [seconds] (default) 
feature none (default) fetch_chunksize 16k [bytes] (default) fetch_maxchunksize 0.25G [bytes] (default) first_byte_timeout 60.000 [seconds] (default) gzip_buffer 32k [bytes] (default) gzip_level 6 (default) gzip_memlevel 8 (default) http_gzip_support off [bool] http_max_hdr 64 [header lines] (default) http_range_support on [bool] (default) http_req_hdr_len 8k [bytes] (default) http_req_size 32k [bytes] (default) http_resp_hdr_len 8k [bytes] (default) http_resp_size 32k [bytes] (default) idle_send_timeout 60.000 [seconds] (default) listen_depth 8192 [connections] lru_interval 2.000 [seconds] (default) max_esi_depth 5 [levels] (default) max_restarts 4 [restarts] (default) max_retries 4 [retries] (default) nuke_limit 50 [allocations] (default) pcre_match_limit 10000 (default) pcre_match_limit_recursion 10000 (default) ping_interval 3 [seconds] (default) pipe_timeout 60.000 [seconds] (default) pool_req 10,100,10 (default) pool_sess 10,100,10 (default) pool_vbo 10,100,10 (default) prefer_ipv6 off [bool] (default) rush_exponent 3 [requests per request] (default) send_timeout 5000.000 [seconds] session_max 100000 [sessions] (default) shm_reclen 255b [bytes] (default) shortlived 10.000 [seconds] (default) sigsegv_handler off [bool] (default) syslog_cli_traffic on [bool] (default) tcp_keepalive_intvl 75.000 [seconds] (default) tcp_keepalive_probes 9 [probes] (default) tcp_keepalive_time 600.000 [seconds] thread_pool_add_delay 2.000 [seconds] thread_pool_destroy_delay 1.000 [seconds] (default) thread_pool_fail_delay 0.200 [seconds] (default) thread_pool_max 3000 [threads] thread_pool_min 500 [threads] thread_pool_stack 48k [bytes] (default) thread_pool_timeout 300.000 [seconds] (default) thread_pools 2 [pools] (default) thread_queue_limit 20 (default) thread_stats_rate 10 [requests] (default) timeout_idle 5.000 [seconds] (default) timeout_linger 0.050 [seconds] (default) vcc_allow_inline_c off [bool] (default) vcc_err_unref on [bool] (default) vcc_unsafe_path on [bool] 
(default) vcl_cooldown 600.000 [seconds] (default) vcl_dir /usr/local/varnish-trunk-ade0db/etc/varnish (default) vmod_dir /usr/local/varnish-trunk- ade0db/lib/varnish/vmods (default) vsl_buffer 4k [bytes] (default) vsl_mask -VCL_trace,-WorkThread,-Hash,-VfpAcct (default) vsl_reclen 255b [bytes] (default) vsl_space 80M [bytes] (default) vsm_space 1M [bytes] (default) waiter epoll (possible values: epoll, poll) (default) workspace_backend 64k [bytes] (default) workspace_client 64k [bytes] (default) workspace_session 1M [bytes] workspace_thread 2k [bytes] (default) }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri May 15 21:55:59 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 15 May 2015 21:55:59 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.8f516c7a284c027c5b7424de6fdf756f@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? | Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by phk): I really don't understand this error, but I'll keep working until I do :-) I fixed a race in the backend/tcp_pool code, and while I'm far from certain that it has anything to do with this, it could explain some of the weirder behaviours we have seen, so if you have time to test eb8f35bd79206ff489b895ffb2dc3fb07cabfce5 I'd really appreciate it. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat May 16 06:26:55 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 16 May 2015 06:26:55 -0000 Subject: [Varnish] #1736: energi Message-ID: <052.b516ff2fd087ce4f6ab132975708b3c8@varnish-cache.org> #1736: energi ----------------------------+------------------------- Reporter: masihakudisini | Type: enhancement Status: new | Priority: normal Milestone: | Component: build Version: unknown | Severity: normal Keywords: | ----------------------------+------------------------- An energy drink that uses pure sugar, in practical packaging so it can be drunk straight away. The definition of an [http://masihakudisini.blogspot.com/2015/04/minuman-berenergi-aman-tidak-berbahaya.html energy drink] is a drink containing one or more ingredients that are easily and quickly absorbed by the body to produce energy, with or without permitted food additives. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun May 17 07:39:23 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 17 May 2015 07:39:23 -0000 Subject: [Varnish] #1733: Unstable test-cases (was: test l00002 is not stable) In-Reply-To: <041.e3e9abd44efc9f6e731fb370c77db66e@varnish-cache.org> References: <041.e3e9abd44efc9f6e731fb370c77db66e@varnish-cache.org> Message-ID: <056.cdcf09ff964555dc1919c89e5ab0f9fe@varnish-cache.org> #1733: Unstable test-cases -------------------------+--------------------- Reporter: phk | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishtest | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+--------------------- Description changed by phk: Old description: > the pipelined requests gets varying XID's making the logexpects fail New description: After spending some time on r01086's stability I ran an overnight '-n 1000' test run on both
project.v-c.o and my own lab machine. The set of tests failing on the two machines is almost distinct:
{{{
p.v-c.o: Linux 3.2.0-83-virtual x86_64
  11 tests/r01086.vtc
   4 tests/r00942.vtc
   2 tests/c00067.vtc
   2 tests/c00039.vtc
   2 tests/b00046.vtc
   1 tests/r01441.vtc
   1 tests/r00962.vtc
   1 tests/r00861.vtc
   1 tests/l00002.vtc

ni.f.d: FreeBSD 11.0-CURRENT amd64
 205 tests/l00002.vtc
  32 tests/s00004.vtc
  23 tests/c00046.vtc
  12 tests/c00066.vtc
   7 tests/l00005.vtc
   2 tests/c00049.vtc
   2 tests/c00023.vtc
   1 tests/r01494.vtc
   1 tests/r01030.vtc
   1 tests/c00045.vtc
   1 tests/c00020.vtc
   1 tests/c00004.vtc
   1 tests/c00002.vtc
   1 tests/c00001.vtc
   1 tests/b00005.vtc
   1 tests/b00002.vtc
}}}
Considering that the total run involved nearly half a million tests on each machine, the overall failure-rate is quite acceptable, but it is still obvious that we have test-cases which are not entirely stable. -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 18 06:48:55 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 May 2015 06:48:55 -0000 Subject: [Varnish] #1733: Unstable test-cases In-Reply-To: <041.e3e9abd44efc9f6e731fb370c77db66e@varnish-cache.org> References: <041.e3e9abd44efc9f6e731fb370c77db66e@varnish-cache.org> Message-ID: <056.3ecbe5a25042ffb960872a9c40f91893@varnish-cache.org> #1733: Unstable test-cases -------------------------+--------------------- Reporter: phk | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishtest | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+--------------------- Comment (by phk): I reran the tests (varnishtest -n 1000) on both machines. On project.varnish-cache.org the result is now:
{{{
   2 tests/l00002.vtc
   1 tests/r01441.vtc
   1 tests/r00861.vtc
   1 tests/e00024.vtc
}}}
The three l00002 failures all look like the same panic, and I'll start chasing that.
My FreeBSD machine crashed half-way through (it runs FreeBSD-CURRENT :-) but the result that far was:
{{{
 110 tests/l00002.vtc
  18 tests/s00004.vtc
  10 tests/l00005.vtc
  10 tests/c00066.vtc
   7 tests/c00035.vtc
   2 tests/r01335.vtc
   2 tests/c00045.vtc
   1 tests/r01401.vtc
   1 tests/r01030.vtc
   1 tests/r00861.vtc
   1 tests/r00495.vtc
   1 tests/c00001.vtc
}}}
l00002 is the major culprit here, but the same panic as on the linux machine is present in some of the remaining tests. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 18 13:19:44 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 May 2015 13:19:44 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.82167b470319941c3d77e911a103f5a2@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@?
| Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): '''Installation Procedure:'''
{{{
git clone https://github.com/varnish/Varnish-Cache.git
git reset --hard eb8f35bd79206ff489b895ffb2dc3fb07cabfce5
./autogen.sh
./configure --prefix=/usr/local/varnish-eb8f35b
make
make install
cat /etc/ld.so.conf.d/varnish.conf
/usr/local/varnish-eb8f35b/lib/
/usr/local/varnish-eb8f35b/lib/varnish/
ldconfig
cp /usr/local/varnish-eb8f35b/sbin/* /usr/sbin/
cp /usr/local/varnish-eb8f35b/bin/* /usr/bin/
/etc/init.d/varnish start
}}}
'''lsof |grep varnishd |grep lib|awk '{print $10}'|sort|uniq -c|sort -n|awk '{print $2}''''
{{{
/lib/x86_64-linux-gnu/ld-2.13.so
/lib/x86_64-linux-gnu/libc-2.13.so
/lib/x86_64-linux-gnu/libdl-2.13.so
/lib/x86_64-linux-gnu/libm-2.13.so
/lib/x86_64-linux-gnu/libnsl-2.13.so
/lib/x86_64-linux-gnu/libnss_compat-2.13.so
/lib/x86_64-linux-gnu/libnss_files-2.13.so
/lib/x86_64-linux-gnu/libnss_nis-2.13.so
/lib/x86_64-linux-gnu/libpcre.so.3.13.1
/lib/x86_64-linux-gnu/libpthread-2.13.so
/lib/x86_64-linux-gnu/librt-2.13.so
/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
/usr/local/varnish-eb8f35b/lib/varnish/libvarnishcompat.so
/usr/local/varnish-eb8f35b/lib/varnish/libvarnish.so
/usr/local/varnish-eb8f35b/lib/varnish/libvcc.so
/usr/local/varnish-eb8f35b/lib/varnish/libvgz.so
/usr/local/varnish-eb8f35b/lib/varnish/vmods/libvmod_directors.so
/usr/local/varnish-eb8f35b/lib/varnish/vmods/libvmod_std.so
}}}
'''varnishd -V'''
{{{
varnishd (varnish-trunk revision eb8f35b)
Copyright (c) 2006 Verdens Gang AS
Copyright (c) 2006-2015 Varnish Software AS
}}}
'''testing epoll (panic every 3 min):'''
{{{
varnishadm panic.show
Last panic at: Mon, 18 May 2015 12:19:14 GMT
Assert error in vwe_inject(), waiter/cache_waiter_epoll.c line 76:
  Condition((epoll_ctl(vwe->epfd, 1, wp->fd, &ev)) == 0) not true.
errno = 17 (File exists)
thread = (cache-epoll)
version = varnish-trunk revision eb8f35b
ident = Linux,3.2.0-4-amd64,x86_64,-junix,-smalloc,-smalloc,-hclassic,epoll
Backtrace:
  0x4334e4: pan_ic+0x134
  0x4638ca: vwe_inject+0x9a
  0x4635a1: Wait_Handle+0x171
  0x463c57: vwe_thread+0x147
  0x7f2018de6b50: libpthread.so.0(+0x6b50) [0x7f2018de6b50]
  0x7f2018b3095d: libc.so.6(clone+0x6d) [0x7f2018b3095d]
}}}
{{{
varnishadm panic.show
Last panic at: Mon, 18 May 2015 12:22:17 GMT
Assert error in vwe_inject(), waiter/cache_waiter_epoll.c line 76:
  Condition((epoll_ctl(vwe->epfd, 1, wp->fd, &ev)) == 0) not true.
errno = 17 (File exists)
thread = (cache-epoll)
version = varnish-trunk revision eb8f35b
ident = Linux,3.2.0-4-amd64,x86_64,-junix,-smalloc,-smalloc,-hclassic,epoll
Backtrace:
  0x4334e4: pan_ic+0x134
  0x4638ca: vwe_inject+0x9a
  0x4635a1: Wait_Handle+0x171
  0x463c57: vwe_thread+0x147
  0x7f2018de6b50: libpthread.so.0(+0x6b50) [0x7f2018de6b50]
  0x7f2018b3095d: libc.so.6(clone+0x6d) [0x7f2018b3095d]
}}}
'''OBS: I have attached the output of sysctl -a''' -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 18 13:38:16 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 May 2015 13:38:16 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.763957f991d7ed8eeda53ba99f09c3ea@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@?
| Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): I'm also running 6 instances of varnishncsa: {{{ /usr/bin/varnishncsa -a -w /var/log/estadisticas/varnishncsa/XXXX.log -D -P /var/run/XXXX.pid -F %h %{%s}t "%r" %s %b %{Varnish:hitmiss}x "%{Referer}i" "%{User-agent}i" %{X-Domain}o %{X-Served-By}o -q ReqHeader ~ 'Host: XXXX.com' }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 18 14:14:28 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 May 2015 14:14:28 -0000 Subject: [Varnish] #1737: Assert error in SES_Close() Message-ID: <045.39de3dbb64534043c894a8b03f8152a8@varnish-cache.org> #1737: Assert error in SES_Close() --------------------------------+---------------------- Reporter: llavaud | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: major Keywords: panic assert error | --------------------------------+---------------------- {{{ May 18 15:58:58 webcache12 varnishd[140657]: Child (140676) died signal=6 May 18 15:58:58 webcache12 varnishd[140657]: Child (140676) Panic message: Assert error in SES_Close(), cache/cache_session.c line 274: Condition(sp->fd >= 0) not true. 
errno = 32 (Broken pipe) thread = (cache-worker) version = varnish-trunk revision cbb2f5d ident = Linux,3.2.0-4-amd64,x86_64,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x434564: pan_ic+0x134 0x43bd46: SES_Close+0x56 0x41bd9d: ESI_Deliver+0x1cd 0x41ac45: VDP_DeliverObj+0x135 0x44dda5: V1D_Deliver+0x275 0x437fe6: cnt_deliver+0x296 0x4385c9: CNT_Request+0x119 0x44f5bb: HTTP1_Session+0x77b 0x43b9f8: ses_req_pool_task+0x68 0x43c994: SES_pool_accept_task+0x2b4 req = 0x7f1105447020 { sp = 0x7f110353d920, vxid = 2492739, step = R_STP_DELIVER, req_body = R_BODY_NONE, restarts = 0, esi_level = 0 sp = 0x7f110353d920 { fd = -1, vxid = 2492738, client = client_ip 50724, step = S_STP_WORKING, }, worker = 0x7f1106dcbc30 { stack = {0x7f1106dcc000 -> 0x7f1106dc0000} ws = 0x7f1106dcbe40 { id = "wrk", {s,f,r,e} = {0x7f1106dcb420,0x7f1106dcb420,(nil),+2040}, }, VCL::method = DELIVER, VCL::return = deliver, VCL::methods = {RECV, PASS, HASH, PURGE, MISS, HIT, DELIVER, SYNTH, BACKEND_FETCH, BACKEND_RESPONSE}, }, ws = 0x7f11054471d0 { id = "req", {s,f,r,e} = {0x7f1105448ff8,+2624,+253984,+253984}, }, http[req] = { ws = 0x7f11054471d0[req] "GET", "myuri", "HTTP/1.1", "Accept: text/html, application/xhtml+xml, */*", "Referer: http://www.gulli.fr/Jeux/My-Little-Pony/My-Little-Pony- Jeu-Rarity", "Accept-Language: fr-FR", "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko", "DNT: 1", "Connection: Keep-Alive", "Host: www.gulli.fr", "Surrogate-Capability: abc=ESI/1.0", "X-Forwarded-For -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 18 22:22:32 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 May 2015 22:22:32 -0000 Subject: [Varnish] #1738: Large number of ExpKills and nuking issues Message-ID: <046.466e6bbaa373aa0943f5cd40ca1553a1@varnish-cache.org> #1738: Large number of ExpKills and nuking issues ----------------------+-------------------- Reporter: coredump | Type: defect Status: new | 
Priority: normal Milestone: | Component: build Version: unknown | Severity: normal Keywords: | ----------------------+-------------------- Last Friday we had a problem that may be a bug. Our setup uses two varnish servers, each one using malloc,65G storage. Same hardware/config with 96GB total memory. No swap used. There's a load balancer in front of the varnish servers doing consistent hashing. We are using varnish-trunk revision 6f53e59. One of our servers started accumulating objects: expiries stopped happening, we had a giant number of lru_nukes (those eventually went to zero), and the worker thread count went to the maximum. These graphs show how it behaved: https://dl.dropboxusercontent.com/1/view/z9xmpv99p7oi6ws/Captured/3gD2H.png http://dsl.so/1B8AtVA Varnishlog showed requests going through normally, except that some had 10k+ ExpKill messages, like this one: https://gist.github.com/coredump/6710e57aaf3dff7788e1 These are the params in use at the moment: https://gist.github.com/coredump/8a1c4d56ed79adb90956 As a result, the load balancers started to get timeouts from that varnish server. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 18 22:27:58 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 May 2015 22:27:58 -0000 Subject: [Varnish] #1738: Large number of ExpKills and nuking issues In-Reply-To: <046.466e6bbaa373aa0943f5cd40ca1553a1@varnish-cache.org> References: <046.466e6bbaa373aa0943f5cd40ca1553a1@varnish-cache.org> Message-ID: <061.9aba669003dcbc557b65fc9285c80ed1@varnish-cache.org> #1738: Large number of ExpKills and nuking issues ----------------------+---------------------- Reporter: coredump | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: Keywords: | ----------------------+---------------------- Comment (by coredump): Also, there are no panics logged anywhere.
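A quick way to watch the behavior described in this ticket is to sample the object/expiry counters directly. This is a sketch, assuming the Varnish 4 counter names (MAIN.n_object, MAIN.n_expired, MAIN.n_lru_nuked) and the stock varnishstat field filter; adjust for other versions:

```shell
# One-shot dump of the counters relevant to expiry/nuking behavior
# (field names assumed from Varnish 4).
varnishstat -1 -f MAIN.n_object -f MAIN.n_expired -f MAIN.n_lru_nuked
```

Run against the affected instance, a stalled expiry thread shows up as n_object growing while n_expired stays flat.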
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 19 06:07:11 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 19 May 2015 06:07:11 -0000 Subject: [Varnish] #1739: overflow on "o ws - Assert error in VFP_Push(), cache/cache_fetch_proc.c line 200: Message-ID: <043.f71a854c635b7ea5de5ec292dbcbe25b@varnish-cache.org> #1739: overflow on "o ws - Assert error in VFP_Push(), cache/cache_fetch_proc.c line 200: ----------------------+------------------- Reporter: slink | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+------------------- finally running a current master with out-of-the-box config again on a prod system, seems like our default workspace_backend is too tight. And something is wrong with uppercasing the ws ID on overflow. will bump workspace_backend to 32k and report back if it helps. {{{ c12:~# va param.show | grep -v default }}} {{{ May 19 07:59:16 c12 varnishd[25582]: [ID 232431 local0.error] Child (25583) Panic message: May 19 07:59:16 c12 Assert error in VFP_Push(), cache/cache_fetch_proc.c line 200: May 19 07:59:16 c12 Condition((vfe) != 0) not true. 
May 19 07:59:16 c12 thread = (cache-worker) May 19 07:59:16 c12 version = varnish-trunk revision 2b1ac1c May 19 07:59:16 c12 ident = -jsolaris,-smalloc,-smalloc,-hcritbit,ports May 19 07:59:16 c12 Backtrace: May 19 07:59:16 c12 8087824: pan_backtrace+0x14 May 19 07:59:16 c12 8087b20: pan_ic+0x1d9 May 19 07:59:16 c12 8078a3b: VFP_Push+0xed May 19 07:59:16 c12 807658d: vbf_stp_fetch+0x49d May 19 07:59:16 c12 8077aac: vbf_fetch_thread+0x394 May 19 07:59:16 c12 808997c: Pool_Work_Thread+0x432 May 19 07:59:16 c12 809e311: WRK_Thread+0x1ce May 19 07:59:16 c12 8089a58: pool_thread+0x7d May 19 07:59:16 c12 fec0cd56: libc.so.1'_thrp_setup+0x7e [0xfec0cd56] May 19 07:59:16 c12 fec0cfe0: libc.so.1'_lwp_start+0x0 [0xfec0cfe0] May 19 07:59:16 c12 busyobj = 8a3fc18 { May 19 07:59:16 c12 ws = 8a3fc74 { OVERFLOW May 19 07:59:16 c12 id = ""o", May 19 07:59:16 c12 {s,f,r,e} = {8a41498,+10072,0,+10080}, May 19 07:59:16 c12 }, May 19 07:59:16 c12 refcnt = 2 May 19 07:59:16 c12 retries = 0 May 19 07:59:16 c12 failed = 0 May 19 07:59:16 c12 state = 1 May 19 07:59:16 c12 flags = { }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 19 06:28:09 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 19 May 2015 06:28:09 -0000 Subject: [Varnish] #1739: overflow on "o ws - Assert error in VFP_Push(), cache/cache_fetch_proc.c line 200: In-Reply-To: <043.f71a854c635b7ea5de5ec292dbcbe25b@varnish-cache.org> References: <043.f71a854c635b7ea5de5ec292dbcbe25b@varnish-cache.org> Message-ID: <058.1939857e1f8721883b9c49947a086415@varnish-cache.org> #1739: overflow on "o ws - Assert error in VFP_Push(), cache/cache_fetch_proc.c line 200: ----------------------+-------------------- Reporter: slink | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by slink): the bottom of the panic was missing in 
the description {{{ May 19 07:59:16 c12 flags = { May 19 07:59:16 c12 do_stream May 19 07:59:16 c12 is_gzip May 19 07:59:16 c12 } May 19 07:59:16 c12 bodystatus = 3 (length), May 19 07:59:16 c12 }, May 19 07:59:16 c12 http[bereq] = { May 19 07:59:16 c12 ws = 8a3fc74["o] May 19 07:59:16 c12 "GET", May 19 07:59:16 c12 "*REDACTED* }} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 21 17:47:59 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 21 May 2015 17:47:59 -0000 Subject: [Varnish] #1739: overflow on "o ws - Assert error in VFP_Push(), cache/cache_fetch_proc.c line 200: In-Reply-To: <043.f71a854c635b7ea5de5ec292dbcbe25b@varnish-cache.org> References: <043.f71a854c635b7ea5de5ec292dbcbe25b@varnish-cache.org> Message-ID: <058.4cda78979aed13ffbaa122b5064e7ad3@varnish-cache.org> #1739: overflow on "o ws - Assert error in VFP_Push(), cache/cache_fetch_proc.c line 200: ----------------------+-------------------- Reporter: slink | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by slink): to confirm: this issue has not surfaced again since I increased `workspace_backend` to 32k -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 21 20:46:48 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 21 May 2015 20:46:48 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.7a0dc170f611c63b1f4350cd62c12cb0@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? 
| Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by phk): I just changed the whole backend/waiter thing around to avoid the race you were hitting. Can I persuade you to try -trunk once more ? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri May 22 01:40:21 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 22 May 2015 01:40:21 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.1be2a44744e39faeef9c7f33d11f4e12@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? | Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): just a update: running 12ace86f88c18a1ec85b18578e6f36ae2d95d501, transferring 350 Mbits for 2 hours (epoll waiter), no panics, 100% child uptime (previously we had panics every 3 min with epoll). {{{ varnishd -V varnishd (varnish-trunk revision 12ace86) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri May 22 21:45:08 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 22 May 2015 21:45:08 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. 
In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.f68ac36f2001e9afdd6d1957023aa3ea@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? | Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): 22 hours and running OK, According to my statistics on this server, this bug appears every 12 +/- 7 hours (2 month sample), so in 47 hours we will be in 5-sigma region (99.999996% confidence). Regards -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun May 24 22:21:14 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 24 May 2015 22:21:14 -0000 Subject: [Varnish] #1723: High CPU load and exponential peak of objects in a281a10 In-Reply-To: <055.a2a864aec40e58c05df529f7335d48da@varnish-cache.org> References: <055.a2a864aec40e58c05df529f7335d48da@varnish-cache.org> Message-ID: <070.2509b85e4e0afcdb57ca6a2a305e56b3@varnish-cache.org> #1723: High CPU load and exponential peak of objects in a281a10 --------------------------+---------------------------------- Reporter: zaterio@? | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: load objects | --------------------------+---------------------------------- Comment (by zaterio@?): I think this error is related to #1735, after using trunk 12ace86f88c18a1ec85b18578e6f36ae2d95d501, it has not happened again. 
Regards -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun May 24 22:23:41 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 24 May 2015 22:23:41 -0000 Subject: [Varnish] #1738: Large number of ExpKills and nuking issues In-Reply-To: <046.466e6bbaa373aa0943f5cd40ca1553a1@varnish-cache.org> References: <046.466e6bbaa373aa0943f5cd40ca1553a1@varnish-cache.org> Message-ID: <061.e1a8d76b98dd59997a7fb09a9e4292c4@varnish-cache.org> #1738: Large number of ExpKills and nuking issues ----------------------+---------------------- Reporter: coredump | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: Keywords: | ----------------------+---------------------- Comment (by zaterio@?): Any panics?: {{{ varnishadm panic.show }}} I observed similar behavior; since upgrading to 12ace86f88c18a1ec85b18578e6f36ae2d95d501 or later I have not detected it again. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 25 12:32:37 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 25 May 2015 12:32:37 -0000 Subject: [Varnish] #1740: Condition((ObjGetSpace(wrk, req->objcore, &sz, &ptr)) != 0) not true. Message-ID: <055.b30ac21bd1012ab374c23fc4ba7edb26@varnish-cache.org> #1740: Condition((ObjGetSpace(wrk, req->objcore, &sz, &ptr)) != 0) not true. ---------------------------------+-------------------- Reporter: zaterio@? | Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: build Version: trunk | Severity: normal Keywords: | ---------------------------------+-------------------- In order to limit the use of transient memory, I changed the following settings: {{{ ... -s malloc,13G \ ... }}} to: {{{ ... -s malloc,11G \ -s Transient=malloc,2G \ ... }}} but this panic appears (so far it occurs only for SYNTH responses):
{{{ varnishadm panic.show Last panic at: Mon, 25 May 2015 12:17:24 GMT Assert error in cnt_synth(), cache/cache_req_fsm.c line 279: Condition((ObjGetSpace(wrk, req->objcore, &sz, &ptr)) != 0) not true. thread = (cache-worker) version = varnish-trunk revision e05ac94 ident = Linux,3.2.0-4-amd64,x86_64,-junix,-smalloc,-smalloc,-hclassic,epoll Backtrace: 0x4337b4: pan_ic+0x134 0x4396be: CNT_Request+0x1efe 0x44d153: HTTP1_Session+0x133 0x43b1b1: SES_Proto_Req+0x61 0x435c48: Pool_Work_Thread+0x3c8 0x448a93: WRK_Thread+0x103 0x43503b: pool_thread+0x2b 0x7f4ba6f5fb50: libpthread.so.0(+0x6b50) [0x7f4ba6f5fb50] 0x7f4ba6ca995d: libc.so.6(clone+0x6d) [0x7f4ba6ca995d] req = 0x7f4aeb7e1020 { sp = 0x7f4b77c06020, vxid = 30872635, step = R_STP_SYNTH, req_body = R_BODY_NONE, err_code = 701, err_reason = , restarts = 0, esi_level = 0, sp = 0x7f4b77c06020 { fd = 53, vxid = 31296076, client = 190.22.159.239 52084, step = S_STP_H1PROC, }, worker = 0x7f4aae3c2c30 { stack = {0x7f4aae3c3000 -> 0x7f4aae3b7000} ws = 0x7f4aae3c2e38 { id = "wrk", {s,f,r,e} = {0x7f4aae3c2420,0x7f4aae3c2420,(nil),+2040}, }, VCL::method = SYNTH, VCL::return = deliver, VCL::methods = {RECV, HASH, SYNTH}, }, ws = 0x7f4aeb7e1218 { id = "req", {s,f,r,e} = {0x7f4aeb7e3020,+592,(nil),+57304}, }, http[req] = { ws = 0x7f4aeb7e1218[req] "GET", "/crossdomain.xml", "HTTP/1.1", "Host: live.hls.http.13.ztreaming.com", "Connection: keep-alive", "X-Requested-With: ShockwaveFlash/17.0.0.188", "Accept: */*", "Referer: http://tvdb.ddns.net/3c6237bf7c6d0a06234cc090c558facc.html", "Accept-Encoding: gzip, deflate, sdch", "Accept-Language: es-ES,es;q=0.8", "X-Forwarded-For: 190.22.159.239", }, http[resp] = { ws = 0x7f4aeb7e1218[req] "HTTP/1.1", "200", "OK", "Date: Mon, 25 May 2015 12:17:22 GMT", "Server: telsanti01", "Content-Type: application/xml", }, vcl = { srcname = { "input", "Builtin", "/etc/varnish/include.vcl", "/etc/varnish/backends.vcl", "/etc/varnish/master.vcl", }, }, objcore (REQ) = 0x7f4b09bfac00 { refcnt = 1 flags = 
0x102 objhead = 0x7f4ba632a560 stevedore = 0x7f4ba60e2240 (malloc Transient) } flags = { wantbody, } }, }}} {{{ varnishadm panic.show Last panic at: Mon, 25 May 2015 12:27:45 GMT Assert error in cnt_synth(), cache/cache_req_fsm.c line 279: Condition((ObjGetSpace(wrk, req->objcore, &sz, &ptr)) != 0) not true. thread = (cache-worker) version = varnish-trunk revision e05ac94 ident = Linux,3.2.0-4-amd64,x86_64,-junix,-smalloc,-smalloc,-hclassic,epoll Backtrace: 0x4337b4: pan_ic+0x134 0x4396be: CNT_Request+0x1efe 0x44d153: HTTP1_Session+0x133 0x43b1b1: SES_Proto_Req+0x61 0x435c48: Pool_Work_Thread+0x3c8 0x448a93: WRK_Thread+0x103 0x43503b: pool_thread+0x2b 0x7f4ba6f5fb50: libpthread.so.0(+0x6b50) [0x7f4ba6f5fb50] 0x7f4ba6ca995d: libc.so.6(clone+0x6d) [0x7f4ba6ca995d] req = 0x7f4b90cf6020 { sp = 0x7f4a24106020, vxid = 16023612, step = R_STP_SYNTH, req_body = R_BODY_NONE, err_code = 701, err_reason = , restarts = 0, esi_level = 0, sp = 0x7f4a24106020 { fd = 198, vxid = 16023611, client = 190.21.126.93 49831, step = S_STP_H1PROC, }, worker = 0x7f49d51e2c30 { stack = {0x7f49d51e3000 -> 0x7f49d51d7000} ws = 0x7f49d51e2e38 { id = "wrk", {s,f,r,e} = {0x7f49d51e2420,0x7f49d51e2420,(nil),+2040}, }, VCL::method = SYNTH, VCL::return = deliver, VCL::methods = {RECV, HASH, SYNTH}, }, ws = 0x7f4b90cf6218 { id = "req", {s,f,r,e} = {0x7f4b90cf8020,+584,(nil),+57304}, }, http[req] = { ws = 0x7f4b90cf6218[req] "GET", "/crossdomain.xml", "HTTP/1.1", "Host: live.hls.http.chv.ztreaming.com", "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language: es-ES,es;q=0.8,en-US;q=0.5,en;q=0.3", "Accept-Encoding: gzip, deflate", "Referer: http://estaticos.chilevision.cl/player/jwplayer6.12/jwplayer.flash.swf", "Connection: keep-alive", "X-Forwarded-For: 190.21.126.93", }, http[resp] = { ws = 0x7f4b90cf6218[req] "HTTP/1.1", "200", "OK", "Date: Mon, 25 May 2015 12:27:44 GMT", "Server: telsanti01", "Content-Type: application/xml", }, vcl = { srcname = { "input", 
"Builtin", "/etc/varnish/include.vcl", "/etc/varnish/backends.vcl", "/etc/varnish/master.vcl", }, }, objcore (REQ) = 0x7f4a3b603ac0 { refcnt = 1 flags = 0x102 objhead = 0x7f4ba6378560 stevedore = 0x7f4ba60e2240 (malloc Transient) } flags = { wantbody, } }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 25 12:33:39 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 25 May 2015 12:33:39 -0000 Subject: [Varnish] #1740: Condition((ObjGetSpace(wrk, req->objcore, &sz, &ptr)) != 0) not true. In-Reply-To: <055.b30ac21bd1012ab374c23fc4ba7edb26@varnish-cache.org> References: <055.b30ac21bd1012ab374c23fc4ba7edb26@varnish-cache.org> Message-ID: <070.9b2d51a9b00ce9fc855668c2412daacd@varnish-cache.org> #1740: Condition((ObjGetSpace(wrk, req->objcore, &sz, &ptr)) != 0) not true. -----------------------+---------------------------------- Reporter: zaterio@? | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: build | Version: trunk Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------- Comment (by zaterio@?): {{{ varnishd -V varnishd (varnish-trunk revision e05ac94) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 25 15:24:28 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 25 May 2015 15:24:28 -0000 Subject: [Varnish] #1740: Condition((ObjGetSpace(wrk, req->objcore, &sz, &ptr)) != 0) not true. In-Reply-To: <055.b30ac21bd1012ab374c23fc4ba7edb26@varnish-cache.org> References: <055.b30ac21bd1012ab374c23fc4ba7edb26@varnish-cache.org> Message-ID: <070.1a7f9a96b4a00d4d078dc83e00aae362@varnish-cache.org> #1740: Condition((ObjGetSpace(wrk, req->objcore, &sz, &ptr)) != 0) not true. -----------------------+---------------------------------- Reporter: zaterio@? 
| Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: build | Version: trunk Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------- Comment (by zaterio@?): I can confirm when I remove the synth response: {{{ sub vcl_recv { if (req.url ~ "/crossdomain\.xml$") { return (synth (701, "")); } } sub vcl_synth { unset resp.http.X-Backend; unset resp.http.X-Varnish; set resp.http.Server = server.hostname; if (resp.status == 701) { set resp.status = 200; set resp.http.Content-Type = "application/xml"; synthetic( {" "}); return (deliver); } return (deliver); } }}} and I get the response from the backend , the panic is not generated {{{ if (req.url ~ "/crossdomain\.xml$") { set req.backend_hint = vod; return (hash); } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 26 19:56:08 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 26 May 2015 19:56:08 -0000 Subject: [Varnish] #1741: All request methods are changed to GET on cache miss Message-ID: <046.13ec4d0f6a29af6463f43111574a8854@varnish-cache.org> #1741: All request methods are changed to GET on cache miss ----------------------+-------------------- Reporter: askalski | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 4.0.3 | Severity: normal Keywords: | ----------------------+-------------------- In commit 2581a69bcdec44e00a6fafb93ca21b6f31d20aef the HEAD->GET logic was moved from vcl_recv{} to vcl_miss{}. As of that commit, all request methods (previously only HEAD) get converted to GET in the backend request. Although the builtin vcl_recv returns "pass" for non-GET/HEAD, a user might trip over this behavior if he tries to return "hash" instead. For example, a user might want varnish to cache the preflight OPTIONS request when implementing CORS. This is currently not possible because it is changed to GET on the backend request. 
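The preflight scenario from this report can be sketched in VCL like this (the backend address and the OPTIONS check are illustrative, not taken from the ticket). With this VCL the OPTIONS request is hashed and looked up in the cache, but on a miss the builtin logic described above still sends it to the backend as GET:

```vcl
vcl 4.0;

backend default { .host = "127.0.0.1"; .port = "8080"; }

sub vcl_recv {
    if (req.method == "OPTIONS") {
        # Try to cache the CORS preflight response.
        return (hash);
    }
}

# On a cache miss, the backend request's method (bereq.method)
# nevertheless goes out as "GET" -- the behavior reported here.
```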
Another problem is that the behavior is confusing to troubleshoot. A user might wonder why his PUT/POST/DELETE requests are being changed to GET, and not realize it was because of a "return(hash)" in his VCL. For an example, see this thread: https://www.varnish-cache.org/forum/topic/235 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 27 09:39:25 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 27 May 2015 09:39:25 -0000 Subject: [Varnish] #1741: All request methods are changed to GET on cache miss In-Reply-To: <046.13ec4d0f6a29af6463f43111574a8854@varnish-cache.org> References: <046.13ec4d0f6a29af6463f43111574a8854@varnish-cache.org> Message-ID: <061.14959c3748ce874d25fbe74761a8b131@varnish-cache.org> #1741: All request methods are changed to GET on cache miss ----------------------+-------------------- Reporter: askalski | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by aondio): This is the way it is supposed to work. Both converting request methods to GET on the backend side and returning "pass" for non-GET/HEAD requests are decisions taken for security reasons. For example, a POST request can't be cached and can lead to security violations on the backend side if not converted to GET. If you want to cache OPTIONS requests, you can either implement some logic using req.method in your VCL (the safe thing to do): {{{ sub vcl_recv { if (req.method == "OPTIONS") { return (hash); } } }}} or move this logic to vcl_backend_fetch: {{{ sub vcl_backend_fetch { set bereq.method = "GET"; } }}} Note that the second solution is not completely safe: by forcing the bereq method we can end up in undefined territory (as written above), so some additional guarding logic is required in this situation.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 27 12:10:00 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 27 May 2015 12:10:00 -0000 Subject: [Varnish] #1741: All request methods are changed to GET on cache miss In-Reply-To: <046.13ec4d0f6a29af6463f43111574a8854@varnish-cache.org> References: <046.13ec4d0f6a29af6463f43111574a8854@varnish-cache.org> Message-ID: <061.1a830238ee5127d03010ae29d65d1ab5@varnish-cache.org> #1741: All request methods are changed to GET on cache miss ----------------------+---------------------- Reporter: askalski | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: wontfix Keywords: | ----------------------+---------------------- Changes (by aondio): * status: new => closed * resolution: => wontfix -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 28 17:36:38 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 28 May 2015 17:36:38 -0000 Subject: [Varnish] #1742: varnishlog -B produces a file varnishlog -r cannot read Message-ID: <043.98d45c119b409c806f03334de170ddaf@varnish-cache.org> #1742: varnishlog -B produces a file varnishlog -r cannot read ---------------------+------------------------ Reporter: Dridi | Type: defect Status: new | Priority: normal Milestone: | Component: varnishlog Version: 4.0.3 | Severity: normal Keywords: vsl vsm | ---------------------+------------------------ I may use it wrongly, but I use varnishlog -N as a workaround. {{{ varnishtest "varnishlog -r cannot read a file produced by varnishlog -B" varnish v1 -arg "-b ${bad_ip}:80" -start shell "${varnishlog} -n ${tmpdir}/v1 -d -B >${tmpdir}/output.vsl" shell "${varnishlog} -r ${tmpdir}/output.vsl" }}} The error message: {{{ Can't open log (VSL file read error: EOF ) }}} As a side note: How about a {{{ ${v1_n_arg} }}} macro in varnishtest? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri May 29 04:43:13 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 29 May 2015 04:43:13 -0000 Subject: [Varnish] #1743: Workspace exhaustion under load Message-ID: <043.51a23a57d9c85027357d34c46e32d875@varnish-cache.org> #1743: Workspace exhaustion under load ----------------------------------+---------------------- Reporter: geoff | Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: varnishd Version: 4.0.3 | Severity: normal Keywords: workspace, esi, load | ----------------------------------+---------------------- Following up on this message in varnish-misc: https://www.varnish-cache.org/lists/pipermail/varnish-misc/2015-May/024426.html The problem is workspace exhaustion in load tests: LostHeader appears in the logs, the lostheader stats counter increases, and VMODs report insufficient workspace. We've only seen these problems under load. The currently productive Varnish3 setup runs against the same backends without the problem. In V3 we have sess_workspace=256KB. In V4 we started with workspace_client=workspace_backend=256KB, and doubled the value up to as high as 16MB, still getting the problem. At 32MB, varnishd filled up RAM. In V3, we run with well under 50% of available RAM on same-sized machines. When we captured logs that include LostHeader records, we found that the offending requests were always ESI includes. The apps have some deep ESI nesting, up to at least esi_level=7. In some but not all cases, we can see that there were backend retries due to VCL logic that retries requests after 5xx responses. The lostheader counter increases in bursts when this happens, often at a rate of about 2K/s, up to about 9K/s. The bursts seem to go on for about 10-30 seconds, and then the increase rate goes back to 0. We have 3 proxies in the cluster, and the error bursts don't necessarily happen on all 3 at the same time.
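To watch bursts like the ones described above as they happen, the lost-header counter can be sampled once per second. A rough sketch, assuming the Varnish 4 counter name MAIN.losthdr and the stock varnishstat field filter:

```shell
# Print the absolute MAIN.losthdr value once per second;
# a burst shows up as a rapidly growing number between samples.
while true; do
    varnishstat -1 -f MAIN.losthdr
    sleep 1
done
```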
The problem may be related to backend availability, but I'm not entirely sure of that. The backends occasionally redeploy while load tests are going on, and some of the error bursts may have come when this happened. They also tend to increase when the load is high, which may be just due to the higher load on varnishd, but might also be related to backends throwing errors under load. We had one run with no errors at all, in evening hours when there are no redeployments. On the other hand, we've had more runs with errors during evening hours, and sometimes the error bursts have come shortly after the load test starts, when load is still ramping up and is far from the maximum. VMODs in use are: * std and director * header (V4 version from https://github.com/varnish/libvmod-header) * urlcode (V4 version from https://github.com/fastly/libvmod-urlcode) * uuid (as updated for V4 at https://github.com/otto-de/libvmod-uuid) * re (https://code.uplex.de/uplex-varnish/libvmod-re) * vtstor (https://code.uplex.de/uplex-varnish/libvmod-vtstor) We tried working around the use of VMOD re in VCL, since it stores the subject of regex matches in workspace, and we use it for Cookie headers, which can be very large. But it didn't solve the problem. VMOD vtstor only uses workspace for the size of a VXID as string (otherwise it mallocs its own structures). uuid only uses workspace for the size of a UUID string. I'm learning how to read MEMPOOL.* stats, and I've noticed randry > 0, timeouts > 0 and surplus > 0. But my reading of the code makes me think that these don't indicate problems (except possibly surplus > 0?), and that mempools can't help you anyway if workspaces are too small. 
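For reference, the workspace sizes discussed above are ordinary runtime parameters; presumably they were raised along these lines (a sketch of the stock varnishd -p syntax, with illustrative values; the rest of the command line is elided):

```shell
# Start-time override of the 64k workspace defaults:
varnishd ... -p workspace_client=256k -p workspace_backend=256k

# Or adjust on a running instance via the management CLI:
varnishadm param.set workspace_client 256k
```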
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Fri May 29 05:29:17 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 29 May 2015 05:29:17 -0000
Subject: [Varnish] #1743: Workspace exhaustion under load
In-Reply-To: <043.51a23a57d9c85027357d34c46e32d875@varnish-cache.org>
References: <043.51a23a57d9c85027357d34c46e32d875@varnish-cache.org>
Message-ID: <058.262f5b4dea6794637a331acdf5693be5@varnish-cache.org>

#1743: Workspace exhaustion under load
----------------------------------+----------------------------------
 Reporter:  geoff                 |       Owner:
     Type:  defect                |      Status:  new
 Priority:  normal                |   Milestone:  Varnish 4.0 release
Component:  varnishd              |     Version:  4.0.3
 Severity:  normal                |  Resolution:
 Keywords:  workspace, esi, load  |
----------------------------------+----------------------------------

Comment (by geoff):

 We also test a "backend" proxy cluster under v4.0.3, for REST calls
 between backend apps, with no ESIs, a much lighter load, much simpler
 VCL logic, and with VMODs std, directors, header and re. These had no
 workspace problems, with the defaults for workspace_client and
 workspace_backend (64KB).
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Fri May 29 08:04:17 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 29 May 2015 08:04:17 -0000
Subject: [Varnish] #1743: Workspace exhaustion under load
In-Reply-To: <043.51a23a57d9c85027357d34c46e32d875@varnish-cache.org>
References: <043.51a23a57d9c85027357d34c46e32d875@varnish-cache.org>
Message-ID: <058.4f076d1aeee2011d0756a6acdfb8dfb9@varnish-cache.org>

#1743: Workspace exhaustion under load
----------------------------------+----------------------------------
 Reporter:  geoff                 |       Owner:
     Type:  defect                |      Status:  new
 Priority:  normal                |   Milestone:  Varnish 4.0 release
Component:  varnishd              |     Version:  4.0.3
 Severity:  normal                |  Resolution:
 Keywords:  workspace, esi, load  |
----------------------------------+----------------------------------

Comment (by geoff):

 We're considering building from master and testing it under load. We'd
 really prefer to go live with 4.0.3, and then migrate to 4.1 when it's
 released, but we could do this to see if it has any effect on the
 problem. It wouldn't be trivial, since we'd have to rebuild the VMODs,
 package RPMs for distribution in the environments, etc. So we'd
 appreciate any thoughts on whether it might be worth the effort -- e.g.
 if there is some new goodness in the ESI code worth trying, or if master
 is presently too bleeding-edge for a load test.
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Fri May 29 09:59:16 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 29 May 2015 09:59:16 -0000
Subject: [Varnish] #1742: varnishlog -B produces a file varnishlog -r cannot read
In-Reply-To: <043.98d45c119b409c806f03334de170ddaf@varnish-cache.org>
References: <043.98d45c119b409c806f03334de170ddaf@varnish-cache.org>
Message-ID: <058.ebd5cb7e06bb14fbd9b4e9837b254a8e@varnish-cache.org>

#1742: varnishlog -B produces a file varnishlog -r cannot read
------------------------+--------------------
 Reporter:  Dridi       |       Owner:
     Type:  defect      |      Status:  new
 Priority:  normal      |   Milestone:
Component:  varnishlog  |     Version:  4.0.3
 Severity:  normal      |  Resolution:
 Keywords:  vsl vsm     |
------------------------+--------------------

Comment (by Dridi):

 OK, {{{-B}}} works with {{{-w}}}, but the man page doesn't make the
 combination obvious:

 {{{
 [...]

 -B
     Output binary data suitable for reading with -r.

 [...]

 -r filename
     Read log in binary file format from this file.

 [...]

 -w filename
     Redirect output to file. The file will be overwritten unless the -a
     option was specified. If the application receives a SIGHUP the file
     will be reopened allowing the old one to be rotated away.

 [...]
 }}}

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Fri May 29 13:43:22 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 29 May 2015 13:43:22 -0000
Subject: [Varnish] #1744: Unable to provide sfile size as a percentage of free space
Message-ID: <046.06bf44fb0e5dc52a4e04ab86e66b3298@varnish-cache.org>

#1744: Unable to provide sfile size as a percentage of free space
----------------------+----------------------
 Reporter:  fgaillot  |       Type:  defect
   Status:  new       |   Priority:  normal
Milestone:            |  Component:  varnishd
  Version:  4.0.3     |   Severity:  normal
 Keywords:            |
----------------------+----------------------
 I'm trying to migrate from Varnish 3.0.7 to 4.0.3, and I'm getting an
 "Absolute number required" error when giving the following argument to
 varnishd:

 {{{
 -s disk=file,/mnt/stream0/cache,90%
 }}}

 Looking into the code, I found that this feature was disabled by
 [ed6fda4e96b6204b3a0de6c922c2e05a3484fd41]. Has this feature been removed
 in Varnish 4? I found nothing in the release notes about this, and it's
 still in the documentation.
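[Editor's note] If percentage sizes for the file storage backend are indeed disabled, one workaround is to compute the absolute size before starting varnishd. A minimal sketch, assuming a POSIX `df`; CACHE_DIR and the storage file name are placeholders (the default of /tmp is only so the script runs anywhere):

```shell
#!/bin/sh
# Compute 90% of the free space on the cache filesystem and emit an
# absolute -s argument, since "90%" itself is rejected in 4.0.3.
# CACHE_DIR and varnish_storage.bin are placeholders.
CACHE_DIR=${CACHE_DIR:-/tmp}
FREE_KB=$(df -Pk "$CACHE_DIR" | awk 'NR==2 {print $4}')   # free KB on that fs
SIZE_KB=$((FREE_KB * 90 / 100))
echo "-s disk=file,${CACHE_DIR}/varnish_storage.bin,${SIZE_KB}K"
```

The emitted string can then be substituted into the varnishd command line, e.g. `varnishd $(sh size.sh) ...` under the reporter's setup.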
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Sat May 30 05:07:10 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Sat, 30 May 2015 05:07:10 -0000
Subject: [Varnish] #1742: varnishlog -B produces a file varnishlog -r cannot read
In-Reply-To: <043.98d45c119b409c806f03334de170ddaf@varnish-cache.org>
References: <043.98d45c119b409c806f03334de170ddaf@varnish-cache.org>
Message-ID: <058.91c978a04b610799a71ef0fd00f33f4f@varnish-cache.org>

#1742: varnishlog -B produces a file varnishlog -r cannot read
------------------------+--------------------
 Reporter:  Dridi       |       Owner:
     Type:  defect      |      Status:  new
 Priority:  normal      |   Milestone:
Component:  varnishlog  |     Version:  4.0.3
 Severity:  normal      |  Resolution:
 Keywords:  vsl vsm     |
------------------------+--------------------

Comment (by fgsch):

 -B was dropped in master, and for this very same reason binary output is
 now the default behaviour; -A is used to generate a text file now.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator
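[Editor's note] The -B/-w/-r combination discussed in ticket #1742 can be sketched as a round trip. The dump path is a placeholder and a running varnishd is required for any output; the `command -v` guard only lets the sketch exit cleanly where varnishlog is not installed.

```shell
#!/bin/sh
# Round-trip sketch for varnishlog 4.0.x: -B writes binary VSL data,
# -w redirects it to a file, and -r reads that file back as text.
DUMP=/var/tmp/vsl.bin   # placeholder path
if command -v varnishlog >/dev/null 2>&1; then
    varnishlog -B -w "$DUMP" &   # capture the binary log in the background
    CAP=$!
    sleep 5                      # let some transactions accumulate
    kill "$CAP"
    varnishlog -r "$DUMP"        # replay the binary dump as text
else
    echo "varnishlog not found; commands shown for illustration"
fi
```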