From varnish-bugs at varnish-cache.org Tue Apr 2 09:28:24 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Apr 2013 09:28:24 -0000 Subject: [Varnish] #1289: varnishncsa segfault in libvarnishapi Message-ID: <046.ef39a945c144a9de6b4bce1c159feeb0@varnish-cache.org> #1289: varnishncsa segfault in libvarnishapi -------------------------------------------------+------------------------- Reporter: tmagnien | Type: defect Status: new | Priority: normal Milestone: | Component: varnishncsa Version: 3.0.3 | Severity: normal Keywords: varnishncsa segfault libvarnishapi | vsl.c | -------------------------------------------------+------------------------- Hi, We experience a segfault in libvarnishapi while running varnishncsa. It seems that the log_ptr in vsl.c is beyond log_end. Command-line is: {{{ /usr/bin/varnishncsa -F '--- domain: %{VCL_Log:X-Backend}x remote_addr: %h x_forwarded_for: %{X-Forwarded-For}i hit_miss: %{Varnish:hitmiss}x bytes: %b status: %s request: %r host: %{host}i request_method: %m time_first_byte: %{Varnish:time_firstbyte}x http_referrer: %{Referrer}i http_user_agent: %{User-agent}i session_id: %{VCL_Log:X-SessionId}x cookie: %{Cookie}i ...' 
}}} Full backtrace is: {{{ (gdb) bt #0 0x00007f3aeafcea86 in vsl_nextlog (vd=, pp=0x7fffd61183e8, bits=0x7fffd61183e0) at vsl.c:174 #1 VSL_NextLog (vd=, pp=0x7fffd61183e8, bits=0x7fffd61183e0) at vsl.c:222 #2 0x00007f3aeafcf31e in VSL_Dispatch (vd=0xcfd010, func=, priv=0x7f3aeab8d780) at vsl.c:306 #3 0x0000000000402784 in main (argc=3, argv=) at varnishncsa.c:1554 }}} Some more output from gdb: {{{ (gdb) p vsl $2 = (struct vsl *) 0xcfd100 (gdb) p *vsl $3 = {magic = 2050087736, log_start = 0x7f3ae050e5d4, log_end = 0x7f3aea50e5d4, log_ptr = 0x7f3aea63209c, last_seq = 69513, r_fd = -1, rbuflen = 256, rbuf = 0xcfd770, b_opt = 0, c_opt = 1, d_opt = 0, flags = 0, vbm_client = 0xcfd1b0, vbm_backend = 0xcfd3e0, vbm_select = 0xcfd6c0, vbm_supress = 0xcfd610, regflags = 0, regincl = 0x0, regexcl = 0x0, num_matchers = 0, matchers = {vtqh_first = 0x0, vtqh_last = 0xcfd188}, skip = 0, keep = 0} }}} {{{ (gdb) l vsl.c:174 169 return (-1); 170 *pp = vsl->rbuf; 171 return (1); 172 } 173 for (w = 0; w < TIMEOUT_USEC;) { 174 t = *vsl->log_ptr; 175 176 if (t == VSL_WRAPMARKER) { 177 /* Wrap around not possible at front */ 178 assert(vsl->log_ptr != vsl->log_start + 1); }}} Note that it's a 3.0.3plus release Thanks, Thierry -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Apr 2 11:31:56 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Apr 2013 11:31:56 -0000 Subject: [Varnish] #854: patch: extra features for the varnish initrc script In-Reply-To: <047.d51a54f258d82d1a5002e1089c1982d2@varnish-cache.org> References: <047.d51a54f258d82d1a5002e1089c1982d2@varnish-cache.org> Message-ID: <062.44fa57eab226cedcc4a518148a249718@varnish-cache.org> #854: patch: extra features for the varnish initrc script ----------------------------------------+---------------------- Reporter: jhalfmoon | Owner: tfheen Type: enhancement | Status: closed Priority: low | Milestone: Later Component: packaging | Version: trunk Severity: trivial | 
Resolution: wontfix Keywords: initrc patch extra feature | ----------------------------------------+---------------------- Changes (by tfheen): * status: new => closed * resolution: => wontfix Comment: First, apologies for taking so long to answer this bug. Given that systems seem to be moving away from using standard init scripts, I would rather not merge this. I wonder if we should extend varnishadm to be able to do what you're asking for, or perhaps add a more high-level tool. I suggest you raise this on the -misc list so we can see what people think. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Apr 2 11:49:30 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Apr 2013 11:49:30 -0000 Subject: [Varnish] #1167: 3.0.3rc1 Compile Error on Solaris 10 with gcc 4.3.3 In-Reply-To: <044.677261b72c12472e07a7cff6c933fe66@varnish-cache.org> References: <044.677261b72c12472e07a7cff6c933fe66@varnish-cache.org> Message-ID: <059.16c8467583e44dde577eb8416dbe6442@varnish-cache.org> #1167: 3.0.3rc1 Compile Error on Solaris 10 with gcc 4.3.3 --------------------------+--------------------- Reporter: Dommas | Owner: tfheen Type: defect | Status: closed Priority: normal | Milestone: Component: port:solaris | Version: 3.0.3 Severity: normal | Resolution: fixed Keywords: | --------------------------+--------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: Fixed in c6b3fae69349063338772267cfd9f631c530b9fd. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Apr 2 12:26:27 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 02 Apr 2013 12:26:27 -0000 Subject: [Varnish] #1224: Long backend name asserts the varnishd child In-Reply-To: <046.c4f71086627c182e1c792186110e4c5e@varnish-cache.org> References: <046.c4f71086627c182e1c792186110e4c5e@varnish-cache.org> Message-ID: <061.2b1c569394fb632df4bc510e3c5c47bf@varnish-cache.org> #1224: Long backend name asserts the varnishd child ----------------------+--------------------- Reporter: lkarsten | Owner: tfheen Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: Fixed in 5e064a1bafcb3b0bd4a2d7cdaeb479e36ff9c310 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Apr 3 08:32:59 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 03 Apr 2013 08:32:59 -0000 Subject: [Varnish] #1250: No source packages for Ubuntu Precise In-Reply-To: <043.199bfdb04729b0df7c3cb1c2aa76a0fb@varnish-cache.org> References: <043.199bfdb04729b0df7c3cb1c2aa76a0fb@varnish-cache.org> Message-ID: <058.17416b057101d0066855a9e8982ea695@varnish-cache.org> #1250: No source packages for Ubuntu Precise -----------------------+--------------------- Reporter: lampe | Owner: tfheen Type: defect | Status: closed Priority: normal | Milestone: Component: packaging | Version: 3.0.3 Severity: minor | Resolution: fixed Keywords: | -----------------------+--------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: I've fixed this now. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Apr 3 15:19:08 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 03 Apr 2013 15:19:08 -0000 Subject: [Varnish] #1290: varnishd crashes with signal 6 Message-ID: <042.8c4ff30a221e6ab80bd3a213ce5a9a42@varnish-cache.org> #1290: varnishd crashes with signal 6 -------------------+---------------------- Reporter: olli | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.3 | Severity: critical Keywords: | -------------------+---------------------- Hi, I am using varnish-3.0.3. For some days now, varnishd has been crashing with signal 6 on some requests. The crash happens mainly on similar requests. I tried to reproduce the error with curl and the same headers, but could not. Here is the log: 2013-04-03 16:54:38.284318500 Child (24783) died signal=6 2013-04-03 16:54:38.284511500 Child (24783) Panic message: Assert error in VRT_IP_string(), cache_vrt.c line 312: 2013-04-03 16:54:38.284513500 Condition((p = WS_Alloc(sp->http->ws, len)) != 0) not true. 
2013-04-03 16:54:38.284514500 thread = (cache-worker) 2013-04-03 16:54:38.284515500 ident = Linux,2.6.27.9-multicore-3,i686,-smalloc,-smalloc,-hcritbit,epoll 2013-04-03 16:54:38.284516500 Backtrace: 2013-04-03 16:54:38.284517500 0x807b9e2: pan_ic+f2 2013-04-03 16:54:38.284517500 0x8084ff0: VRT_IP_string+160 2013-04-03 16:54:38.284518500 0xb02e2fea: _end+a821ec1a 2013-04-03 16:54:38.284524500 0x80837f4: VCL_deliver_method+54 2013-04-03 16:54:38.284525500 0x806113b: cnt_prepresp+23b 2013-04-03 16:54:38.284526500 0x8061ab2: CNT_Session+572 2013-04-03 16:54:38.284526500 0x807d929: wrk_thread_real+4c9 2013-04-03 16:54:38.284527500 0x807df37: wrk_thread+a7 2013-04-03 16:54:38.284528500 0xb7ec74c0: _end+afe030f0 2013-04-03 16:54:38.284529500 0xb7e466de: _end+afd8230e 2013-04-03 16:54:38.284530500 sp = 0xa6f82004 { 2013-04-03 16:54:38.284530500 fd = 74, id = 74, xid = 1988237405, 2013-04-03 16:54:38.284535500 client = 69.164.213.164 41897, 2013-04-03 16:54:38.284536500 step = STP_PREPRESP, 2013-04-03 16:54:38.284536500 handling = deliver, 2013-04-03 16:54:38.284537500 err_code = 302, err_reason = (null), 2013-04-03 16:54:38.284538500 restarts = 0, esi_level = 0 2013-04-03 16:54:38.284539500 flags = is_gunzip 2013-04-03 16:54:38.284539500 bodystatus = 4 2013-04-03 16:54:38.284540500 ws = 0xa6f82054 { overflow 2013-04-03 16:54:38.284541500 id = "sess", 2013-04-03 16:54:38.284545500 {s,f,r,e} = {0xa6f827ac,+16384,(nil),+16384}, 2013-04-03 16:54:38.284546500 }, 2013-04-03 16:54:38.284547500 http[req] = { 2013-04-03 16:54:38.284547500 ws = 0xa6f82054[sess] 2013-04-03 16:54:38.284548500 "GET", 2013-04-03 16:54:38.284549500 "/rdf_news_category-empfehlungen.rss", 2013-04-03 16:54:38.284550500 "HTTP/1.1", 2013-04-03 16:54:38.284550500 "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2; Feeder.co) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.43 Safari/537.31", 2013-04-03 16:54:38.284576500 "host: www.finanztreff.de", 2013-04-03 16:54:38.284577500 "content-type: 
application/x-www-form- urlencoded; charset=utf-8", 2013-04-03 16:54:38.284578500 "Connection: keep-alive", 2013-04-03 16:54:38.284578500 "X-USF-clientip: 69.164.213.164", 2013-04-03 16:54:38.284579500 "X-Forwarded-For: 69.164.213.164", 2013-04-03 16:54:38.284585500 "X-USF-Cookie: ", 2013-04-03 16:54:38.284585500 "Cookie: MUIDB=2620451E839967EF30F541BA8288672F; SRCHUID=V=2&GUID=F9E7B89829944089A5C45316020E78FF; SRCHUSR=AUTOREDIR=0&GEOVAR=&DOB=20130403; MUID=2620451E839967EF30F541BA8288672F; OrigMUID=2620451E839967EF30F541BA8288672F%2c83991576571c43428160e1e8eed9203c; _FS=MB=1&NU=1; __cfduid=d9668d1baabd8d2699d9576a8734ba6ab1365000815; bb_lastvisit=1365000815; su19hd7=0e9d1ccdeacbb1e476c01cb765d2b9001f9b85775308174a03362bf40c1ee9c9b5eeece1f56599910e16aa604589eb583b934bdf3ad80f51a210f244f8252a14; sses=6408ef7f502337083649252ab1badd69c11f73aba9870bd1299fe69a878f18cde6255423aa3e3d3be2726fe378774744a01982c6585c1b63; global=hi_1; ud=hi%2FUS%2F-1%2F0%2F-%2F%2F-1%2F%2F%2F%2F; __log=29b3d447e8d66118afcabf7a6c0c44c215da797a; __track=1365000815; _SS=C=23.0&SID=AF98DD187F4B486C8F22B05B55B266F3; DUP=Q=H5uWKJTxTH4i5DZN5jfF&T=165855216&IG=4d845c842f124a86ab2daf6db0857c41&V=1&A=2; SRCHD=MS=2764253&D=2764253&AF=QBRE; eStore_cart_blog_id=1; ASP.NET_SessionId=h1sbjn45z3siba55gb2fowyb; fusion_visited=yes; route=d7c501730afb628b40233bec9288f18d; crumb=9251b464e4; SS_MID=11fd28a0-7050-4982-bd7c-b5b3e570b059hf2m8th4; ss_lastvisit=1365000820447; wfvt_3331420876=515c4274b1593; visited=1; wc_session_cookie_198b2155ba43664bcf73ec5872d8b9e6=Ht89%21sXstrMNt%40U3X%281TQ1un%24NUer%21Lu%7C%7C1365173626%7C%7C1365170026%7C%7C2d606619d509bb78732ac461075ccd6c; BIGipServersplitcoaststampers_POOL=1931546634.20480.0000; GEOIP_COUNTRY_CODE=US; shiftylook_last_visit=1049640834; shiftylook_last_activity=1365000834; shiftylook_tracker=a%3A1%3A%7Bi%3A0%3Bs%3A6%3A%22comics%22%3B%7D; BX=8g14m6h8logk2&b=3&s=n5; xb=21; Apache=69.164.213.164.1365000834904368; XARAYASID=n2j99g2c0apqn9kaube22btf02; X-Mapping- 
fjhppofk=4404D60DD487450E13A740B11948E4E9; wplastvisit=1365000835; wpthisvisit=1365000835; wplastvisit_posts=0; wplastvisit_comments=0; session=s; pvpsite_last_visit=1049640835; pvpsite_last_activity=1365000835; pvpsite_tracker=a%3A1%3A%7Bi%3A0%3Bs%3A4%3A%22feed%22%3B%7D; bb_lastactivity=0; cookietest=1; 83da7448d0afc7f835cf437ca796a35f=19a35c75bd13a8fa9483194647e48542; wp_ozh_wsa_visits=1; wp_ozh_wsa_visit_lasttime=1365000837; MF2=1fl6zajpjhj3m; wfvt_1619179685=515c428609429; gvc=MjQyNTgwOTg2NjY3OTA1MDY5NzM3NTAxNjIyMjUxMTEyMzkxNTg0; t=hFC6aKjSDtNOaaHpZXrYFq1J; SESS343decbfdec534c0650d7f0ca0703eef=mim9vd3p7a764fo0fr7pefclk2; ccbKeyCookie=69.164.213.164; ccbABSPATH=%2Fhome%2Fwebepfr%2Fpublic_html%2F_geekpauvre%2F; start=R3918579999; 720plan=R3438225113; CAKEPHP=4m8lcsmai09nuhf5790edv97a7; 240plan=R3497973306; wfvt_167400494=515c428e9865f; _wixUIDX=null-user-id; userType=ANONYMOUS; _session_id=f34f84ab84b2190f1db79571be4f9d83; wuv-p=1191911084.49431.0000; 1088d49a0e64822be7be99d5020e28f1=36ec862af4aee05d5c4864bb08c66f62; zf_5y_visitor=nwBVqeEaj6NfMV0a065z4UWk1aQAAAAAouXUchGAlixe; _AVESTA_ENVIRONMENT=prod; xn_visitor=85b073aa-f15a-48bc-94c8-5c0e6de267f1; ning_session=nUoEM1/zP3RJvbW8ldoH/XTfDN36kuMF9QODM3SPIt/Zv87OUjpPAPEpyLxDvFlQVJoUFVLqgT4=; CFID=86757767; CFTOKEN=69983607; csrsid=gn2v8ao2n6e0820krfqj0lqos5; blogs- prod=4ArDxZzKI7DnOZAt8LQR1QdE/RSsXRSLAwa+UfbRVGStOpg/5CRcmXnLCK12E0w7SM4Jlml7LhW3HM8=; myvmware- www=IUDWIr0vzrAb25QDkcG7XmT0yVnXDqLjpmwKCosQyJKkhOp3QNPsUuCfzFgPhOzC7IKf063Zo2eLlw==; TSc99b3e=12078580fb8719ad15981f8f0e052301be700b66c8e9ce4d515c4298; symfony=596cf8f425721ab54a6af1d556fc6c1f; 60gp=R152106569; SESS3ec0a452c89dba601cddad703c982f0c=b50b4520084372681e801ea36f16c26f; mobileplugin_group=full%2C0; incap_ses_144_40042=+LNXG37iQUd8898prKn/AZpCXFEAAAAAUvta9VOvNpUrTOtL58Q5nQ==; visid_incap_40042=06UDJpoKRf2Q9rIufMGJLJpCXFEAAAAAAAAAAAAAAADtfQfLo4YRL796lnssBPv5; nostrodomus=e619ad275085e270fa74a23802cf3258; PHPSESSID=7AA230BA3C7D40C5B5332F3A895E25B9; 
mad4=b; RMID=007f01006b48515c42a00052; uid=AAAACVFcQqB+lOdxBZo8Ag==; msdn=L=1033; VISITORID=1451167430; LiSESSIONID=4A51A725CD21EC1412DDB17E05982A7D; AWSELB=7B51A9B11CE172A49004A8D7CFBA8FD6DC62E2719EA050C78658A8EC8EF667DD6EAC2AADD208A5C3AA276393F0CCA47D90E6F92AFB7F1B174BCC7027420A5C463F0B803266; 0762c5d08dc168a46297ba9f3faea2c2=ldpfob4em3e2nh7j1529o45fm7; session_id=c494ddbc0d8f9e1adcc18035846c1b95; X-Mapping- ilhindhm=E0173EA196397D523CD792DA546E7F1F; device_view=not_mobile; Cart66DBSID=8DN0NSF28403PEZUHMBHYIJAWH30NOBBPWXGHF1R; 3e2e567b3a7c56eab12fcaad02acfffb=4fb3af3b28d9bc94ca5e2d9bf2c32a14; JSESSIONID=8F1D8020139D7D8FBEBDC07B1FE5A9EA.nyc-webster11; WebPersCookie=2AjqO+qIcmzeDEmz0tv/bEs4IJpcBxi+m1O0zvc8pD37DkKLqSDNfJQNsRtPyvUWF5DmnBiUj5rScNM=; exp_last_visit=1049640868; exp_last_activity=1365000868; exp_tracker=a%3A1%3A%7Bi%3A0%3Bs%3A16%3A%22coastal-blog%2Frss%22%3B%7D; wpsc_customer_cookie_2ec264f2d237b5c2c2dde7ac4d05ece9=_Wafw8%29SabyYB%7C1365175030%7Ca90864311ff9a70a246a75efabdcc25e; BASEREFERER=referrerless; SIGNUPEARCODE=REFERERLESS; phsViewerID=69.164.213.164.1365000783.12021; MV_SESSION_ID=cAoWLtAH:69.164.213.164; wpmp_switcher=desktop; =true; novaator=dcfuik5il7ql24168bp7bbg866; phpbb3_9e9u6_u=1; phpbb3_9e9u6_k=; phpbb3_9e9u6_sid=c814978766a08e64bad6014cad068646; SESSID=cvsa0ngvcqpjpsr2j6huau8qj4; 2e8d03fa77ff9d8430b5ebea14f521b5=283783e9d5712e06dcd923fd7003ae17; _icl_current_language=et; _wp_session=6c26a158b9b428211198fafd4515c56a%7C%7C1365002635%7C%7C1365002275; 972e24eb2acf9583c36dc8b5c7eb0f4b=90052b0393c30883697d014ec341fcbe; SESSID_new=cvsa0ngvcqpjpsr2j6huau8qj4; cookielang=eesti", 2013-04-03 16:54:38.284688500 "X-USF-ESI-Level: 0", 2013-04-03 16:54:38.284689500 }, 2013-04-03 16:54:38.284689500 worker = 0xa613905c { 2013-04-03 16:54:38.284690500 ws = 0xa613922c { 2013-04-03 16:54:38.284691500 id = "wrk", 2013-04-03 16:54:38.284692500 {s,f,r,e} = {0xa6133010,+624,(nil),+16384}, 2013-04-03 16:54:38.284693500 }, 2013-04-03 16:54:38.284693500 
http[resp] = { 2013-04-03 16:54:38.284694500 ws = 0xa613922c[wrk] 2013-04-03 16:54:38.284695500 "HTTP/1.1", 2013-04-03 16:54:38.284700500 "Found", 2013-04-03 16:54:38.284700500 "Server: Apache", 2013-04-03 16:54:38.284701500 "Location: http://rss.feedsportal.com/c/32337/f/442157/index.rss", 2013-04-03 16:54:38.284702500 "Content-Type: text/html; charset=iso-8859-1", 2013-04-03 16:54:38.284703500 "X-USF-CacheNote: forced", 2013-04-03 16:54:38.284704500 "Content-Length: 237", 2013-04-03 16:54:38.284705500 "Accept-Ranges: bytes", 2013-04-03 16:54:38.284716500 "Date: Wed, 03 Apr 2013 14:54:38 GMT", 2013-04-03 16:54:38.284717500 "X-Varnish: 1988237405", 2013-04-03 16:54:38.284718500 "Age: 0", 2013-04-03 16:54:38.284719500 "Via: 1.1 varnish", 2013-04-03 16:54:38.284719500 "Connection: keep-alive", 2013-04-03 16:54:38.284720500 "X-USF-Cache: MISS", 2013-04-03 16:54:38.284721500 }, 2013-04-03 16:54:38.284722500 }, 2013-04-03 16:54:38.284722500 vcl = { 2013-04-03 16:54:38.284723500 srcname = { 2013-04-03 16:54:38.284727500 "input", 2013-04-03 16:54:38.284728500 "Default", 2013-04-03 16:54:38.284729500 "/usr/local/opt/varnish/etc/varnish/cookie.inc", 2013-04-03 16:54:38.284730500 "/usr/local/opt/varnish/etc/varnish /pool-push-a.inc", 2013-04-03 16:54:38.284731500 "/usr/local/opt/varnish/etc/varnish/push.inc", 2013-04-03 16:54:38.284732500 }, 2013-04-03 16:54:38.284732500 }, 2013-04-03 16:54:38.284733500 obj = 0xa6bd8700 { 2013-04-03 16:54:38.284734500 xid = 1988237405, 2013-04-03 16:54:38.284738500 ws = 0xa6bd8710 { 2013-04-03 16:54:38.284739500 id = "obj", 2013-04-03 16:54:38.284739500 {s,f,r,e} = {0xa6bd882c,+224,(nil),+248}, 2013-04-03 16:54:38.284740500 }, 2013-04-03 16:54:38.284741500 http[obj] = { 2013-04-03 16:54:38.284741500 ws = 0xa6bd8710[obj] 2013-04-03 16:54:38.284742500 "HTTP/1.1", 2013-04-03 16:54:38.284743500 "Found", 2013-04-03 16:54:38.284744500 "Date: Wed, 03 Apr 2013 14:54:38 GMT", 2013-04-03 16:54:38.284745500 "Server: Apache", 2013-04-03 
16:54:38.284759500 "Location: http://rss.feedsportal.com/c/32337/f/442157/index.rss", 2013-04-03 16:54:38.284760500 "Content-Type: text/html; charset=iso-8859-1", 2013-04-03 16:54:38.284761500 "X-USF-CacheNote: forced", 2013-04-03 16:54:38.284762500 "Content-Length: 237", 2013-04-03 16:54:38.284763500 }, 2013-04-03 16:54:38.284764500 len = 237, 2013-04-03 16:54:38.284764500 store = { 2013-04-03 16:54:38.284765500 237 { 2013-04-03 16:54:38.284774500 3c 21 44 4f 43 54 59 50 45 20 48 54 4d 4c 20 50 |<!DOCTYPE HTML P| 2013-04-03 16:54:38.284775500 55 42 4c 49 43 20 22 2d 2f 2f 49 45 54 46 2f 2f |UBLIC "-//IETF//| 2013-04-03 16:54:38.284776500 44 54 44 20 48 54 4d 4c 20 32 2e 30 2f 2f 45 4e |DTD HTML 2.0//EN| 2013-04-03 16:54:38.284777500 22 3e 0a 3c 68 74 6d 6c 3e 3c 68 65 61 64 3e 0a |">.<html><head>.| 2013-04-03 16:54:38.284778500 [173 more] 2013-04-03 16:54:38.284779500 }, 2013-04-03 16:54:38.284784500 }, 2013-04-03 16:54:38.284785500 }, 2013-04-03 16:54:38.284785500 }, -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Apr 3 15:23:08 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 03 Apr 2013 15:23:08 -0000 Subject: [Varnish] #1290: varnishd crashes with signal 6 In-Reply-To: <042.8c4ff30a221e6ab80bd3a213ce5a9a42@varnish-cache.org> References: <042.8c4ff30a221e6ab80bd3a213ce5a9a42@varnish-cache.org> Message-ID: <057.ee5372db043aa6de4c42ed6b07db6dd6@varnish-cache.org> #1290: varnishd crashes with signal 6 ----------------------+-------------------- Reporter: olli | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.3 Severity: critical | Resolution: Keywords: | ----------------------+-------------------- Comment (by olli): Sorry, the formatting is wrong; I attached the log. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Apr 4 08:44:42 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 04 Apr 2013 08:44:42 -0000 Subject: [Varnish] #1289: varnishncsa segfault in libvarnishapi In-Reply-To: <046.ef39a945c144a9de6b4bce1c159feeb0@varnish-cache.org> References: <046.ef39a945c144a9de6b4bce1c159feeb0@varnish-cache.org> Message-ID: <061.12e0e5a9be221059e0ef7473645094c3@varnish-cache.org> #1289: varnishncsa segfault in libvarnishapi ------------------------------------------------------+-------------------- Reporter: tmagnien | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishncsa | Version: 3.0.3 Severity: normal | Resolution: Keywords: varnishncsa segfault libvarnishapi vsl.c | ------------------------------------------------------+-------------------- Comment (by luc): I've identified where log_ptr becomes incorrect. {{{ *pp = (void*)(uintptr_t)vsl->log_ptr; /* Loose volatile */ vsl->log_ptr = VSL_NEXT(vsl->log_ptr); }}} An assert just after the block triggers the error: ''Assert error in vsl_nextlog(), vsl.c line 202'' {{{ *pp = (void*)(uintptr_t)vsl->log_ptr; /* Loose volatile */ vsl->log_ptr = VSL_NEXT(vsl->log_ptr); assert(vsl->log_ptr < vsl->log_end); }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Apr 5 13:48:24 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 05 Apr 2013 13:48:24 -0000 Subject: [Varnish] #1289: varnishncsa segfault in libvarnishapi In-Reply-To: <046.ef39a945c144a9de6b4bce1c159feeb0@varnish-cache.org> References: <046.ef39a945c144a9de6b4bce1c159feeb0@varnish-cache.org> Message-ID: <061.156e19a1b63cc9041af72415b3ac463b@varnish-cache.org> #1289: varnishncsa segfault in libvarnishapi ------------------------------------------------------+-------------------- Reporter: tmagnien | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishncsa | 
Version: 3.0.3 Severity: normal | Resolution: Keywords: varnishncsa segfault libvarnishapi vsl.c | ------------------------------------------------------+-------------------- Comment (by martin): Hi Thierry, I have had a look at that bug report, and this sounds to me very much like the log being overrun by varnishd. What happens then is that varnishncsa, while reading the log, sees new log data being interpreted as previous pointers, easily creating a situation where the next log entry pointed to is outside the log. There is currently no clearly defined way for log readers to detect that they are lagging behind, so undefined behavior is to be expected when that happens. Could this be an explanation for what is happening? Where is varnishncsa logging to? If that place can potentially stall, increasing the amount of logging space through the -l argument to varnishd might be a workaround that will help you. Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 8 10:10:32 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Apr 2013 10:10:32 -0000 Subject: [Varnish] #1290: varnishd crashes with signal 6 In-Reply-To: <042.8c4ff30a221e6ab80bd3a213ce5a9a42@varnish-cache.org> References: <042.8c4ff30a221e6ab80bd3a213ce5a9a42@varnish-cache.org> Message-ID: <057.f1d9fa866b0aa77f10bf94e5ac9ee4bf@varnish-cache.org> #1290: varnishd crashes with signal 6 ----------------------+-------------------- Reporter: olli | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.3 Severity: critical | Resolution: Keywords: | ----------------------+-------------------- Description changed by tfheen: Old description: > Hi, > > I am using varnish-3.0.3. Since some days the varnishd crashes with > signal 6 on > some requests. The crash happens mainly on similar requests. I tried to > reproduce > the error with curl and same headers, but can not do. 
> > Here is the log: > > 2013-04-03 16:54:38.284318500 Child (24783) died signal=6 > 2013-04-03 16:54:38.284511500 Child (24783) Panic message: Assert error > in VRT_IP_string(), cache_vrt.c line 312: > 2013-04-03 16:54:38.284513500 Condition((p = WS_Alloc(sp->http->ws, > len)) != 0) not true. > 2013-04-03 16:54:38.284514500 thread = (cache-worker) > 2013-04-03 16:54:38.284515500 ident = > Linux,2.6.27.9-multicore-3,i686,-smalloc,-smalloc,-hcritbit,epoll > 2013-04-03 16:54:38.284516500 Backtrace: > 2013-04-03 16:54:38.284517500 0x807b9e2: pan_ic+f2 > 2013-04-03 16:54:38.284517500 0x8084ff0: VRT_IP_string+160 > 2013-04-03 16:54:38.284518500 0xb02e2fea: _end+a821ec1a > 2013-04-03 16:54:38.284524500 0x80837f4: VCL_deliver_method+54 > 2013-04-03 16:54:38.284525500 0x806113b: cnt_prepresp+23b > 2013-04-03 16:54:38.284526500 0x8061ab2: CNT_Session+572 > 2013-04-03 16:54:38.284526500 0x807d929: wrk_thread_real+4c9 > 2013-04-03 16:54:38.284527500 0x807df37: wrk_thread+a7 > 2013-04-03 16:54:38.284528500 0xb7ec74c0: _end+afe030f0 > 2013-04-03 16:54:38.284529500 0xb7e466de: _end+afd8230e > 2013-04-03 16:54:38.284530500 sp = 0xa6f82004 { > 2013-04-03 16:54:38.284530500 fd = 74, id = 74, xid = 1988237405, > 2013-04-03 16:54:38.284535500 client = 69.164.213.164 41897, > 2013-04-03 16:54:38.284536500 step = STP_PREPRESP, > 2013-04-03 16:54:38.284536500 handling = deliver, > 2013-04-03 16:54:38.284537500 err_code = 302, err_reason = (null), > 2013-04-03 16:54:38.284538500 restarts = 0, esi_level = 0 > 2013-04-03 16:54:38.284539500 flags = is_gunzip > 2013-04-03 16:54:38.284539500 bodystatus = 4 > 2013-04-03 16:54:38.284540500 ws = 0xa6f82054 { overflow > 2013-04-03 16:54:38.284541500 id = "sess", > 2013-04-03 16:54:38.284545500 {s,f,r,e} = > {0xa6f827ac,+16384,(nil),+16384}, > 2013-04-03 16:54:38.284546500 }, > 2013-04-03 16:54:38.284547500 http[req] = { > 2013-04-03 16:54:38.284547500 ws = 0xa6f82054[sess] > 2013-04-03 16:54:38.284548500 "GET", > 2013-04-03 16:54:38.284549500 
"/rdf_news_category- > empfehlungen.rss", > 2013-04-03 16:54:38.284550500 "HTTP/1.1", > 2013-04-03 16:54:38.284550500 "User-Agent: Mozilla/5.0 (Macintosh; > Intel Mac OS X 10_8_2; Feeder.co) AppleWebKit/537.31 (KHTML, like Gecko) > Chrome/26.0.1410.43 Safari/537.31", > 2013-04-03 16:54:38.284576500 "host: www.finanztreff.de", > 2013-04-03 16:54:38.284577500 "content-type: application/x-www- > form-urlencoded; charset=utf-8", > 2013-04-03 16:54:38.284578500 "Connection: keep-alive", > 2013-04-03 16:54:38.284578500 "X-USF-clientip: 69.164.213.164", > 2013-04-03 16:54:38.284579500 "X-Forwarded-For: 69.164.213.164", > 2013-04-03 16:54:38.284585500 "X-USF-Cookie: ", > 2013-04-03 16:54:38.284585500 "Cookie: > MUIDB=2620451E839967EF30F541BA8288672F; > SRCHUID=V=2&GUID=F9E7B89829944089A5C45316020E78FF; > SRCHUSR=AUTOREDIR=0&GEOVAR=&DOB=20130403; > MUID=2620451E839967EF30F541BA8288672F; > OrigMUID=2620451E839967EF30F541BA8288672F%2c83991576571c43428160e1e8eed9203c; > _FS=MB=1&NU=1; __cfduid=d9668d1baabd8d2699d9576a8734ba6ab1365000815; > bb_lastvisit=1365000815; > su19hd7=0e9d1ccdeacbb1e476c01cb765d2b9001f9b85775308174a03362bf40c1ee9c9b5eeece1f56599910e16aa604589eb583b934bdf3ad80f51a210f244f8252a14; > sses=6408ef7f502337083649252ab1badd69c11f73aba9870bd1299fe69a878f18cde6255423aa3e3d3be2726fe378774744a01982c6585c1b63; > global=hi_1; ud=hi%2FUS%2F-1%2F0%2F-%2F%2F-1%2F%2F%2F%2F; > __log=29b3d447e8d66118afcabf7a6c0c44c215da797a; __track=1365000815; > _SS=C=23.0&SID=AF98DD187F4B486C8F22B05B55B266F3; > DUP=Q=H5uWKJTxTH4i5DZN5jfF&T=165855216&IG=4d845c842f124a86ab2daf6db0857c41&V=1&A=2; > SRCHD=MS=2764253&D=2764253&AF=QBRE; eStore_cart_blog_id=1; > ASP.NET_SessionId=h1sbjn45z3siba55gb2fowyb; fusion_visited=yes; > route=d7c501730afb628b40233bec9288f18d; crumb=9251b464e4; > SS_MID=11fd28a0-7050-4982-bd7c-b5b3e570b059hf2m8th4; > ss_lastvisit=1365000820447; wfvt_3331420876=515c4274b1593; visited=1; > 
wc_session_cookie_198b2155ba43664bcf73ec5872d8b9e6=Ht89%21sXstrMNt%40U3X%281TQ1un%24NUer%21Lu%7C%7C1365173626%7C%7C1365170026%7C%7C2d606619d509bb78732ac461075ccd6c; > BIGipServersplitcoaststampers_POOL=1931546634.20480.0000; > GEOIP_COUNTRY_CODE=US; shiftylook_last_visit=1049640834; > shiftylook_last_activity=1365000834; > shiftylook_tracker=a%3A1%3A%7Bi%3A0%3Bs%3A6%3A%22comics%22%3B%7D; > BX=8g14m6h8logk2&b=3&s=n5; xb=21; Apache=69.164.213.164.1365000834904368; > XARAYASID=n2j99g2c0apqn9kaube22btf02; X-Mapping- > fjhppofk=4404D60DD487450E13A740B11948E4E9; wplastvisit=1365000835; > wpthisvisit=1365000835; wplastvisit_posts=0; wplastvisit_comments=0; > session=s; pvpsite_last_visit=1049640835; > pvpsite_last_activity=1365000835; > pvpsite_tracker=a%3A1%3A%7Bi%3A0%3Bs%3A4%3A%22feed%22%3B%7D; > bb_lastactivity=0; cookietest=1; > 83da7448d0afc7f835cf437ca796a35f=19a35c75bd13a8fa9483194647e48542; > wp_ozh_wsa_visits=1; wp_ozh_wsa_visit_lasttime=1365000837; > MF2=1fl6zajpjhj3m; wfvt_1619179685=515c428609429; > gvc=MjQyNTgwOTg2NjY3OTA1MDY5NzM3NTAxNjIyMjUxMTEyMzkxNTg0; > t=hFC6aKjSDtNOaaHpZXrYFq1J; > SESS343decbfdec534c0650d7f0ca0703eef=mim9vd3p7a764fo0fr7pefclk2; > ccbKeyCookie=69.164.213.164; > ccbABSPATH=%2Fhome%2Fwebepfr%2Fpublic_html%2F_geekpauvre%2F; > start=R3918579999; 720plan=R3438225113; > CAKEPHP=4m8lcsmai09nuhf5790edv97a7; 240plan=R3497973306; > wfvt_167400494=515c428e9865f; _wixUIDX=null-user-id; userType=ANONYMOUS; > _session_id=f34f84ab84b2190f1db79571be4f9d83; > wuv-p=1191911084.49431.0000; > 1088d49a0e64822be7be99d5020e28f1=36ec862af4aee05d5c4864bb08c66f62; > zf_5y_visitor=nwBVqeEaj6NfMV0a065z4UWk1aQAAAAAouXUchGAlixe; > _AVESTA_ENVIRONMENT=prod; xn_visitor=85b073aa-f15a-48bc- > 94c8-5c0e6de267f1; > ning_session=nUoEM1/zP3RJvbW8ldoH/XTfDN36kuMF9QODM3SPIt/Zv87OUjpPAPEpyLxDvFlQVJoUFVLqgT4=; > CFID=86757767; CFTOKEN=69983607; csrsid=gn2v8ao2n6e0820krfqj0lqos5; > blogs- > prod=4ArDxZzKI7DnOZAt8LQR1QdE/RSsXRSLAwa+UfbRVGStOpg/5CRcmXnLCK12E0w7SM4Jlml7LhW3HM8=; > 
myvmware- > www=IUDWIr0vzrAb25QDkcG7XmT0yVnXDqLjpmwKCosQyJKkhOp3QNPsUuCfzFgPhOzC7IKf063Zo2eLlw==; > TSc99b3e=12078580fb8719ad15981f8f0e052301be700b66c8e9ce4d515c4298; > symfony=596cf8f425721ab54a6af1d556fc6c1f; 60gp=R152106569; > SESS3ec0a452c89dba601cddad703c982f0c=b50b4520084372681e801ea36f16c26f; > mobileplugin_group=full%2C0; > incap_ses_144_40042=+LNXG37iQUd8898prKn/AZpCXFEAAAAAUvta9VOvNpUrTOtL58Q5nQ==; > visid_incap_40042=06UDJpoKRf2Q9rIufMGJLJpCXFEAAAAAAAAAAAAAAADtfQfLo4YRL796lnssBPv5; > nostrodomus=e619ad275085e270fa74a23802cf3258; > PHPSESSID=7AA230BA3C7D40C5B5332F3A895E25B9; mad4=b; > RMID=007f01006b48515c42a00052; uid=AAAACVFcQqB+lOdxBZo8Ag==; msdn=L=1033; > VISITORID=1451167430; LiSESSIONID=4A51A725CD21EC1412DDB17E05982A7D; > AWSELB=7B51A9B11CE172A49004A8D7CFBA8FD6DC62E2719EA050C78658A8EC8EF667DD6EAC2AADD208A5C3AA276393F0CCA47D90E6F92AFB7F1B174BCC7027420A5C463F0B803266; > 0762c5d08dc168a46297ba9f3faea2c2=ldpfob4em3e2nh7j1529o45fm7; > session_id=c494ddbc0d8f9e1adcc18035846c1b95; X-Mapping- > ilhindhm=E0173EA196397D523CD792DA546E7F1F; device_view=not_mobile; > Cart66DBSID=8DN0NSF28403PEZUHMBHYIJAWH30NOBBPWXGHF1R; > 3e2e567b3a7c56eab12fcaad02acfffb=4fb3af3b28d9bc94ca5e2d9bf2c32a14; > JSESSIONID=8F1D8020139D7D8FBEBDC07B1FE5A9EA.nyc-webster11; > WebPersCookie=2AjqO+qIcmzeDEmz0tv/bEs4IJpcBxi+m1O0zvc8pD37DkKLqSDNfJQNsRtPyvUWF5DmnBiUj5rScNM=; > exp_last_visit=1049640868; exp_last_activity=1365000868; > exp_tracker=a%3A1%3A%7Bi%3A0%3Bs%3A16%3A%22coastal-blog%2Frss%22%3B%7D; > wpsc_customer_cookie_2ec264f2d237b5c2c2dde7ac4d05ece9=_Wafw8%29SabyYB%7C1365175030%7Ca90864311ff9a70a246a75efabdcc25e; > BASEREFERER=referrerless; SIGNUPEARCODE=REFERERLESS; > phsViewerID=69.164.213.164.1365000783.12021; > MV_SESSION_ID=cAoWLtAH:69.164.213.164; wpmp_switcher=desktop; =true; > novaator=dcfuik5il7ql24168bp7bbg866; phpbb3_9e9u6_u=1; phpbb3_9e9u6_k=; > phpbb3_9e9u6_sid=c814978766a08e64bad6014cad068646; > SESSID=cvsa0ngvcqpjpsr2j6huau8qj4; > 
New description: Hi, I am using varnish-3.0.3. For the past few days, varnishd has been crashing with signal 6 on some requests. The crash happens mainly on similar requests. I tried to reproduce the error with curl and the same headers, but could not. 
Here is the log: {{{ 2013-04-03 16:54:38.284318500 Child (24783) died signal=6 2013-04-03 16:54:38.284511500 Child (24783) Panic message: Assert error in VRT_IP_string(), cache_vrt.c line 312: 2013-04-03 16:54:38.284513500 Condition((p = WS_Alloc(sp->http->ws, len)) != 0) not true. 2013-04-03 16:54:38.284514500 thread = (cache-worker) 2013-04-03 16:54:38.284515500 ident = Linux,2.6.27.9-multicore-3,i686,-smalloc,-smalloc,-hcritbit,epoll 2013-04-03 16:54:38.284516500 Backtrace: 2013-04-03 16:54:38.284517500 0x807b9e2: pan_ic+f2 2013-04-03 16:54:38.284517500 0x8084ff0: VRT_IP_string+160 2013-04-03 16:54:38.284518500 0xb02e2fea: _end+a821ec1a 2013-04-03 16:54:38.284524500 0x80837f4: VCL_deliver_method+54 2013-04-03 16:54:38.284525500 0x806113b: cnt_prepresp+23b 2013-04-03 16:54:38.284526500 0x8061ab2: CNT_Session+572 2013-04-03 16:54:38.284526500 0x807d929: wrk_thread_real+4c9 2013-04-03 16:54:38.284527500 0x807df37: wrk_thread+a7 2013-04-03 16:54:38.284528500 0xb7ec74c0: _end+afe030f0 2013-04-03 16:54:38.284529500 0xb7e466de: _end+afd8230e 2013-04-03 16:54:38.284530500 sp = 0xa6f82004 { 2013-04-03 16:54:38.284530500 fd = 74, id = 74, xid = 1988237405, 2013-04-03 16:54:38.284535500 client = 69.164.213.164 41897, 2013-04-03 16:54:38.284536500 step = STP_PREPRESP, 2013-04-03 16:54:38.284536500 handling = deliver, 2013-04-03 16:54:38.284537500 err_code = 302, err_reason = (null), 2013-04-03 16:54:38.284538500 restarts = 0, esi_level = 0 2013-04-03 16:54:38.284539500 flags = is_gunzip 2013-04-03 16:54:38.284539500 bodystatus = 4 2013-04-03 16:54:38.284540500 ws = 0xa6f82054 { overflow 2013-04-03 16:54:38.284541500 id = "sess", 2013-04-03 16:54:38.284545500 {s,f,r,e} = {0xa6f827ac,+16384,(nil),+16384}, 2013-04-03 16:54:38.284546500 }, 2013-04-03 16:54:38.284547500 http[req] = { 2013-04-03 16:54:38.284547500 ws = 0xa6f82054[sess] 2013-04-03 16:54:38.284548500 "GET", 2013-04-03 16:54:38.284549500 "/rdf_news_category-empfehlungen.rss", 2013-04-03 16:54:38.284550500 
"HTTP/1.1", 2013-04-03 16:54:38.284550500 "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2; Feeder.co) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.43 Safari/537.31", 2013-04-03 16:54:38.284576500 "host: www.finanztreff.de", 2013-04-03 16:54:38.284577500 "content-type: application/x-www-form- urlencoded; charset=utf-8", 2013-04-03 16:54:38.284578500 "Connection: keep-alive", 2013-04-03 16:54:38.284578500 "X-USF-clientip: 69.164.213.164", 2013-04-03 16:54:38.284579500 "X-Forwarded-For: 69.164.213.164", 2013-04-03 16:54:38.284585500 "X-USF-Cookie: ", 2013-04-03 16:54:38.284585500 "Cookie: MUIDB=2620451E839967EF30F541BA8288672F; SRCHUID=V=2&GUID=F9E7B89829944089A5C45316020E78FF; SRCHUSR=AUTOREDIR=0&GEOVAR=&DOB=20130403; MUID=2620451E839967EF30F541BA8288672F; OrigMUID=2620451E839967EF30F541BA8288672F%2c83991576571c43428160e1e8eed9203c; _FS=MB=1&NU=1; __cfduid=d9668d1baabd8d2699d9576a8734ba6ab1365000815; bb_lastvisit=1365000815; su19hd7=0e9d1ccdeacbb1e476c01cb765d2b9001f9b85775308174a03362bf40c1ee9c9b5eeece1f56599910e16aa604589eb583b934bdf3ad80f51a210f244f8252a14; sses=6408ef7f502337083649252ab1badd69c11f73aba9870bd1299fe69a878f18cde6255423aa3e3d3be2726fe378774744a01982c6585c1b63; global=hi_1; ud=hi%2FUS%2F-1%2F0%2F-%2F%2F-1%2F%2F%2F%2F; __log=29b3d447e8d66118afcabf7a6c0c44c215da797a; __track=1365000815; _SS=C=23.0&SID=AF98DD187F4B486C8F22B05B55B266F3; DUP=Q=H5uWKJTxTH4i5DZN5jfF&T=165855216&IG=4d845c842f124a86ab2daf6db0857c41&V=1&A=2; SRCHD=MS=2764253&D=2764253&AF=QBRE; eStore_cart_blog_id=1; ASP.NET_SessionId=h1sbjn45z3siba55gb2fowyb; fusion_visited=yes; route=d7c501730afb628b40233bec9288f18d; crumb=9251b464e4; SS_MID=11fd28a0-7050-4982-bd7c-b5b3e570b059hf2m8th4; ss_lastvisit=1365000820447; wfvt_3331420876=515c4274b1593; visited=1; wc_session_cookie_198b2155ba43664bcf73ec5872d8b9e6=Ht89%21sXstrMNt%40U3X%281TQ1un%24NUer%21Lu%7C%7C1365173626%7C%7C1365170026%7C%7C2d606619d509bb78732ac461075ccd6c; 
BIGipServersplitcoaststampers_POOL=1931546634.20480.0000; GEOIP_COUNTRY_CODE=US; shiftylook_last_visit=1049640834; shiftylook_last_activity=1365000834; shiftylook_tracker=a%3A1%3A%7Bi%3A0%3Bs%3A6%3A%22comics%22%3B%7D; BX=8g14m6h8logk2&b=3&s=n5; xb=21; Apache=69.164.213.164.1365000834904368; XARAYASID=n2j99g2c0apqn9kaube22btf02; X-Mapping- fjhppofk=4404D60DD487450E13A740B11948E4E9; wplastvisit=1365000835; wpthisvisit=1365000835; wplastvisit_posts=0; wplastvisit_comments=0; session=s; pvpsite_last_visit=1049640835; pvpsite_last_activity=1365000835; pvpsite_tracker=a%3A1%3A%7Bi%3A0%3Bs%3A4%3A%22feed%22%3B%7D; bb_lastactivity=0; cookietest=1; 83da7448d0afc7f835cf437ca796a35f=19a35c75bd13a8fa9483194647e48542; wp_ozh_wsa_visits=1; wp_ozh_wsa_visit_lasttime=1365000837; MF2=1fl6zajpjhj3m; wfvt_1619179685=515c428609429; gvc=MjQyNTgwOTg2NjY3OTA1MDY5NzM3NTAxNjIyMjUxMTEyMzkxNTg0; t=hFC6aKjSDtNOaaHpZXrYFq1J; SESS343decbfdec534c0650d7f0ca0703eef=mim9vd3p7a764fo0fr7pefclk2; ccbKeyCookie=69.164.213.164; ccbABSPATH=%2Fhome%2Fwebepfr%2Fpublic_html%2F_geekpauvre%2F; start=R3918579999; 720plan=R3438225113; CAKEPHP=4m8lcsmai09nuhf5790edv97a7; 240plan=R3497973306; wfvt_167400494=515c428e9865f; _wixUIDX=null-user-id; userType=ANONYMOUS; _session_id=f34f84ab84b2190f1db79571be4f9d83; wuv-p=1191911084.49431.0000; 1088d49a0e64822be7be99d5020e28f1=36ec862af4aee05d5c4864bb08c66f62; zf_5y_visitor=nwBVqeEaj6NfMV0a065z4UWk1aQAAAAAouXUchGAlixe; _AVESTA_ENVIRONMENT=prod; xn_visitor=85b073aa-f15a-48bc-94c8-5c0e6de267f1; ning_session=nUoEM1/zP3RJvbW8ldoH/XTfDN36kuMF9QODM3SPIt/Zv87OUjpPAPEpyLxDvFlQVJoUFVLqgT4=; CFID=86757767; CFTOKEN=69983607; csrsid=gn2v8ao2n6e0820krfqj0lqos5; blogs- prod=4ArDxZzKI7DnOZAt8LQR1QdE/RSsXRSLAwa+UfbRVGStOpg/5CRcmXnLCK12E0w7SM4Jlml7LhW3HM8=; myvmware- www=IUDWIr0vzrAb25QDkcG7XmT0yVnXDqLjpmwKCosQyJKkhOp3QNPsUuCfzFgPhOzC7IKf063Zo2eLlw==; TSc99b3e=12078580fb8719ad15981f8f0e052301be700b66c8e9ce4d515c4298; symfony=596cf8f425721ab54a6af1d556fc6c1f; 60gp=R152106569; 
SESS3ec0a452c89dba601cddad703c982f0c=b50b4520084372681e801ea36f16c26f; mobileplugin_group=full%2C0; incap_ses_144_40042=+LNXG37iQUd8898prKn/AZpCXFEAAAAAUvta9VOvNpUrTOtL58Q5nQ==; visid_incap_40042=06UDJpoKRf2Q9rIufMGJLJpCXFEAAAAAAAAAAAAAAADtfQfLo4YRL796lnssBPv5; nostrodomus=e619ad275085e270fa74a23802cf3258; PHPSESSID=7AA230BA3C7D40C5B5332F3A895E25B9; mad4=b; RMID=007f01006b48515c42a00052; uid=AAAACVFcQqB+lOdxBZo8Ag==; msdn=L=1033; VISITORID=1451167430; LiSESSIONID=4A51A725CD21EC1412DDB17E05982A7D; AWSELB=7B51A9B11CE172A49004A8D7CFBA8FD6DC62E2719EA050C78658A8EC8EF667DD6EAC2AADD208A5C3AA276393F0CCA47D90E6F92AFB7F1B174BCC7027420A5C463F0B803266; 0762c5d08dc168a46297ba9f3faea2c2=ldpfob4em3e2nh7j1529o45fm7; session_id=c494ddbc0d8f9e1adcc18035846c1b95; X-Mapping- ilhindhm=E0173EA196397D523CD792DA546E7F1F; device_view=not_mobile; Cart66DBSID=8DN0NSF28403PEZUHMBHYIJAWH30NOBBPWXGHF1R; 3e2e567b3a7c56eab12fcaad02acfffb=4fb3af3b28d9bc94ca5e2d9bf2c32a14; JSESSIONID=8F1D8020139D7D8FBEBDC07B1FE5A9EA.nyc-webster11; WebPersCookie=2AjqO+qIcmzeDEmz0tv/bEs4IJpcBxi+m1O0zvc8pD37DkKLqSDNfJQNsRtPyvUWF5DmnBiUj5rScNM=; exp_last_visit=1049640868; exp_last_activity=1365000868; exp_tracker=a%3A1%3A%7Bi%3A0%3Bs%3A16%3A%22coastal-blog%2Frss%22%3B%7D; wpsc_customer_cookie_2ec264f2d237b5c2c2dde7ac4d05ece9=_Wafw8%29SabyYB%7C1365175030%7Ca90864311ff9a70a246a75efabdcc25e; BASEREFERER=referrerless; SIGNUPEARCODE=REFERERLESS; phsViewerID=69.164.213.164.1365000783.12021; MV_SESSION_ID=cAoWLtAH:69.164.213.164; wpmp_switcher=desktop; =true; novaator=dcfuik5il7ql24168bp7bbg866; phpbb3_9e9u6_u=1; phpbb3_9e9u6_k=; phpbb3_9e9u6_sid=c814978766a08e64bad6014cad068646; SESSID=cvsa0ngvcqpjpsr2j6huau8qj4; 2e8d03fa77ff9d8430b5ebea14f521b5=283783e9d5712e06dcd923fd7003ae17; _icl_current_language=et; _wp_session=6c26a158b9b428211198fafd4515c56a%7C%7C1365002635%7C%7C1365002275; 972e24eb2acf9583c36dc8b5c7eb0f4b=90052b0393c30883697d014ec341fcbe; SESSID_new=cvsa0ngvcqpjpsr2j6huau8qj4; cookielang=eesti", 2013-04-03 
16:54:38.284688500 "X-USF-ESI-Level: 0", 2013-04-03 16:54:38.284689500 }, 2013-04-03 16:54:38.284689500 worker = 0xa613905c { 2013-04-03 16:54:38.284690500 ws = 0xa613922c { 2013-04-03 16:54:38.284691500 id = "wrk", 2013-04-03 16:54:38.284692500 {s,f,r,e} = {0xa6133010,+624,(nil),+16384}, 2013-04-03 16:54:38.284693500 }, 2013-04-03 16:54:38.284693500 http[resp] = { 2013-04-03 16:54:38.284694500 ws = 0xa613922c[wrk] 2013-04-03 16:54:38.284695500 "HTTP/1.1", 2013-04-03 16:54:38.284700500 "Found", 2013-04-03 16:54:38.284700500 "Server: Apache", 2013-04-03 16:54:38.284701500 "Location: http://rss.feedsportal.com/c/32337/f/442157/index.rss", 2013-04-03 16:54:38.284702500 "Content-Type: text/html; charset=iso-8859-1", 2013-04-03 16:54:38.284703500 "X-USF-CacheNote: forced", 2013-04-03 16:54:38.284704500 "Content-Length: 237", 2013-04-03 16:54:38.284705500 "Accept-Ranges: bytes", 2013-04-03 16:54:38.284716500 "Date: Wed, 03 Apr 2013 14:54:38 GMT", 2013-04-03 16:54:38.284717500 "X-Varnish: 1988237405", 2013-04-03 16:54:38.284718500 "Age: 0", 2013-04-03 16:54:38.284719500 "Via: 1.1 varnish", 2013-04-03 16:54:38.284719500 "Connection: keep-alive", 2013-04-03 16:54:38.284720500 "X-USF-Cache: MISS", 2013-04-03 16:54:38.284721500 }, 2013-04-03 16:54:38.284722500 }, 2013-04-03 16:54:38.284722500 vcl = { 2013-04-03 16:54:38.284723500 srcname = { 2013-04-03 16:54:38.284727500 "input", 2013-04-03 16:54:38.284728500 "Default", 2013-04-03 16:54:38.284729500 "/usr/local/opt/varnish/etc/varnish/cookie.inc", 2013-04-03 16:54:38.284730500 "/usr/local/opt/varnish/etc/varnish /pool-push-a.inc", 2013-04-03 16:54:38.284731500 "/usr/local/opt/varnish/etc/varnish/push.inc", 2013-04-03 16:54:38.284732500 }, 2013-04-03 16:54:38.284732500 }, 2013-04-03 16:54:38.284733500 obj = 0xa6bd8700 { 2013-04-03 16:54:38.284734500 xid = 1988237405, 2013-04-03 16:54:38.284738500 ws = 0xa6bd8710 { 2013-04-03 16:54:38.284739500 id = "obj", 2013-04-03 16:54:38.284739500 {s,f,r,e} = {0xa6bd882c,+224,(nil),+248}, 
2013-04-03 16:54:38.284740500 }, 2013-04-03 16:54:38.284741500 http[obj] = { 2013-04-03 16:54:38.284741500 ws = 0xa6bd8710[obj] 2013-04-03 16:54:38.284742500 "HTTP/1.1", 2013-04-03 16:54:38.284743500 "Found", 2013-04-03 16:54:38.284744500 "Date: Wed, 03 Apr 2013 14:54:38 GMT", 2013-04-03 16:54:38.284745500 "Server: Apache", 2013-04-03 16:54:38.284759500 "Location: http://rss.feedsportal.com/c/32337/f/442157/index.rss", 2013-04-03 16:54:38.284760500 "Content-Type: text/html; charset=iso-8859-1", 2013-04-03 16:54:38.284761500 "X-USF-CacheNote: forced", 2013-04-03 16:54:38.284762500 "Content-Length: 237", 2013-04-03 16:54:38.284763500 }, 2013-04-03 16:54:38.284764500 len = 237, 2013-04-03 16:54:38.284764500 store = { 2013-04-03 16:54:38.284765500 237 { 2013-04-03 16:54:38.284774500 3c 21 44 4f 43 54 59 50 45 20 48 54 4d 4c 20 50 |..| 2013-04-03 16:54:38.284778500 [173 more] 2013-04-03 16:54:38.284779500 }, 2013-04-03 16:54:38.284784500 }, 2013-04-03 16:54:38.284785500 }, 2013-04-03 16:54:38.284785500 }, }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 8 10:12:26 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Apr 2013 10:12:26 -0000 Subject: [Varnish] #1289: varnishncsa segfault in libvarnishapi In-Reply-To: <046.ef39a945c144a9de6b4bce1c159feeb0@varnish-cache.org> References: <046.ef39a945c144a9de6b4bce1c159feeb0@varnish-cache.org> Message-ID: <061.1f44724a2d3a68f1bbed5a6fe72dbf8c@varnish-cache.org> #1289: varnishncsa segfault in libvarnishapi -------------------------------------------------+------------------------- Reporter: tmagnien | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishncsa | Version: 3.0.3 Severity: normal | Resolution: worksforme Keywords: varnishncsa segfault libvarnishapi | vsl.c | -------------------------------------------------+------------------------- Changes (by martin): * status: new => closed * resolution: => worksforme 
Comment: I will close this ticket as we believe we know the cause of this, and it is also being tracked in $other system. I will reopen it if that turns out not to be the fix. Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 8 10:20:53 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Apr 2013 10:20:53 -0000 Subject: [Varnish] #1287: Varnish 3.0.3 - segfault in libvarnish.so. In-Reply-To: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> References: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> Message-ID: <059.660aa1fe47fc81456d74c23b322c0402@varnish-cache.org> #1287: Varnish 3.0.3 - segfault in libvarnish.so. ------------------------------------+-------------------- Reporter: robroy | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: segfault libvarnish.so | ------------------------------------+-------------------- Description changed by tfheen: New description: I use varnish 3.0.3 on my production server: {{{ rpm -qa | grep varnish varnish-3.0.3-1.el6.x86_64 varnish-libs-3.0.3-1.el6.x86_64 varnish-libs-devel-3.0.3-1.el6.x86_64 uname -r 2.6.32-279.19.1.el6.x86_64 cat /etc/redhat-release CentOS release 6.3 (Final) }}} My varnish process suddenly dies and logs: {{{ Mar 22 09:00:07 server.local kernel: : varnishd[2085]: segfault at 0 ip 0000003f60c0c234 sp 00007fa9217cc2e0 error 4 in libvarnish.so[3f60c00000+13000] Mar 22 08:00:14 server.local varnishd[28424]: Child (2043) not responding to CLI, killing it. Mar 22 08:00:14 server.local varnishd[28424]: Child (2043) not responding to CLI, killing it. Mar 22 08:00:14 server.local varnishd[28424]: Child (2043) died signal=11 (core dumped) Mar 22 08:00:14 server.local varnishd[28424]: Child cleanup complete Mar 22 08:00:14 server.local varnishd[28424]: child (1129) Started Mar 22 08:00:14 server.local varnishd[28424]: Child (1129) said Child starts }}} I've attached my configuration. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 8 10:35:15 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Apr 2013 10:35:15 -0000 Subject: [Varnish] #1290: varnishd crashes with signal 6 In-Reply-To: <042.8c4ff30a221e6ab80bd3a213ce5a9a42@varnish-cache.org> References: <042.8c4ff30a221e6ab80bd3a213ce5a9a42@varnish-cache.org> Message-ID: <057.b9f8e5875949609a53bedc3fb100f83e@varnish-cache.org> #1290: varnishd crashes with signal 6 ----------------------+------------------------- Reporter: olli | Owner: Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: 3.0.3 Severity: critical | Resolution: worksforme Keywords: | ----------------------+------------------------- Changes (by martin): * status: new => closed * resolution: => worksforme Comment: You are running out of session workspace. Increase the sess_workspace runtime parameter. Also, since it is limited at 16k, it looks like you are running varnish on a 32-bit system. Although Varnish will work on 32-bit systems, it is recommended to run it on 64-bit systems, where this parameter would default to 64k. Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 8 10:48:25 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Apr 2013 10:48:25 -0000 Subject: [Varnish] #1287: Varnish 3.0.3 - segfault in libvarnish.so. In-Reply-To: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> References: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> Message-ID: <059.a0ed05362096790543eae2e9267d01be@varnish-cache.org> #1287: Varnish 3.0.3 - segfault in libvarnish.so. 
------------------------------------+-------------------- Reporter: robroy | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: segfault libvarnish.so | ------------------------------------+-------------------- Comment (by tfheen): Are you seeing this regularly? If so, could you please get us a core dump? The easiest way to do that is to install the varnish-debugsymbols package, ensure the DAEMON_COREFILE_LIMIT in /etc/sysconfig/varnish is not commented and echo "/tmp/core" > /proc/sys/kernel/core_pattern to make sure cores end up in /tmp. When Varnish crashes, run gdb /usr/bin/varnishd /tmp/core (or what the core file is named), then "bt full" and put that into the ticket. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 8 10:50:22 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Apr 2013 10:50:22 -0000 Subject: [Varnish] #1281: Documentation inconsistency wrt gzip In-Reply-To: <043.bd50c282914cee08d24ea130407d85bb@varnish-cache.org> References: <043.bd50c282914cee08d24ea130407d85bb@varnish-cache.org> Message-ID: <058.af5b8fc8c3c82c534af3c284659d6bef@varnish-cache.org> #1281: Documentation inconsistency wrt gzip ---------------------------+------------------------------ Reporter: perbu | Owner: Type: documentation | Status: new Priority: normal | Milestone: Varnish 3.0 dev Component: build | Version: 3.0.3 Severity: normal | Resolution: Keywords: gzip docs | ---------------------------+------------------------------ Comment (by martin): The http_gzip_support part is wrong. This should say that when http_gzip_support is true, Varnish will default to always asking the backend for compressed content, and will automatically decompress content for clients which don't grok gzip. 
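As an aside on martin's point about per-client handling: a common way to express this in VCL is to normalize the Accept-Encoding header in vcl_recv, so clients that cannot handle gzip never advertise it. This is a sketch for illustration, not text from the ticket:

```vcl
sub vcl_recv {
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            # Client groks gzip; keep only that token to limit Vary permutations
            set req.http.Accept-Encoding = "gzip";
        } else {
            # Client cannot handle gzip; drop the header so it is served plain content
            remove req.http.Accept-Encoding;
        }
    }
}
```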
It can then also compress uncompressed content from the backend, but will only do so if do_gzip is set to true from vcl in the context of that request. For the bonus question: req.can_gzip is a read-only variable. It will check the current client headers if they support gzip. So the correct way is to alter/remove Accept-Encoding. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 8 10:51:29 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Apr 2013 10:51:29 -0000 Subject: [Varnish] #1281: Documentation inconsistency wrt gzip In-Reply-To: <043.bd50c282914cee08d24ea130407d85bb@varnish-cache.org> References: <043.bd50c282914cee08d24ea130407d85bb@varnish-cache.org> Message-ID: <058.c8b8360b56bdb57c969c189147eb74ff@varnish-cache.org> #1281: Documentation inconsistency wrt gzip ---------------------------+------------------------------ Reporter: perbu | Owner: perbu Type: documentation | Status: new Priority: normal | Milestone: Varnish 3.0 dev Component: build | Version: 3.0.3 Severity: normal | Resolution: Keywords: gzip docs | ---------------------------+------------------------------ Changes (by martin): * owner: => perbu -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 8 11:15:12 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Apr 2013 11:15:12 -0000 Subject: [Varnish] #1286: 3 of 322 tests failed with varnish-trunk+2013-03-22.tar.gz In-Reply-To: <050.b7b48f5ca353d6d5fc1cce040d870c7c@varnish-cache.org> References: <050.b7b48f5ca353d6d5fc1cce040d870c7c@varnish-cache.org> Message-ID: <065.e09eb2de22ae48dbacc4585f9bafa949@varnish-cache.org> #1286: 3 of 322 tests failed with varnish-trunk+2013-03-22.tar.gz --------------------------+---------------------- Reporter: plamenpetrov | Owner: Type: defect | Status: closed Priority: low | Milestone: Component: regress | Version: trunk Severity: minor | Resolution: invalid Keywords: tests 
fail | --------------------------+---------------------- Changes (by tfheen): * status: new => closed * resolution: => invalid Comment: This looks like a failure of your resolver. Is your resolver by any chance dnsmasq? If so, could you try pointing it at something else such as a local bind or google's DNS or similar and see if the problem goes away? Since this looks a lot like a similar bug we've seen in the past that turned out to be a bug in the environment, I'm closing it. We might be adding a workaround in varnishtest for this at a later stage, though. If you are not using dnsmasq, please reopen and we'll dig further. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 8 11:36:53 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Apr 2013 11:36:53 -0000 Subject: [Varnish] #1288: Varnishlog -i doesn't accept hit In-Reply-To: <041.297e1ef20d34270c2df48a696429c6aa@varnish-cache.org> References: <041.297e1ef20d34270c2df48a696429c6aa@varnish-cache.org> Message-ID: <056.b6b4a651f8e8703b92593e97b588c028@varnish-cache.org> #1288: Varnishlog -i doesn't accept hit ------------------------+-------------------- Reporter: mha | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishlog | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ------------------------+-------------------- Comment (by Tollef Fog Heen ): In [e6f12e901f00989f81dc745ad1aa9b9f93051c2e]: {{{ #!CommitTicketReference repository="" revision="e6f12e901f00989f81dc745ad1aa9b9f93051c2e" Prefer exact matches If there are multiple matches for a given VSL tag (such as "Hit" matching both Hit and HitPass), prefer the one that is an exact match. 
Fixes #1288 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 8 11:36:54 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Apr 2013 11:36:54 -0000 Subject: [Varnish] #1288: Varnishlog -i doesn't accept hit In-Reply-To: <041.297e1ef20d34270c2df48a696429c6aa@varnish-cache.org> References: <041.297e1ef20d34270c2df48a696429c6aa@varnish-cache.org> Message-ID: <056.395b5ed3d544ccb9fb0239ef3d063cbd@varnish-cache.org> #1288: Varnishlog -i doesn't accept hit ------------------------+--------------------- Reporter: mha | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishlog | Version: 3.0.3 Severity: normal | Resolution: fixed Keywords: | ------------------------+--------------------- Changes (by Tollef Fog Heen ): * status: new => closed * resolution: => fixed Comment: (In [e6f12e901f00989f81dc745ad1aa9b9f93051c2e]) Prefer exact matches If there are multiple matches for a given VSL tag (such as "Hit" matching both Hit and HitPass), prefer the one that is an exact match. 
Fixes #1288 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 8 17:46:08 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Apr 2013 17:46:08 -0000 Subject: [Varnish] #1286: 3 of 322 tests failed with varnish-trunk+2013-03-22.tar.gz In-Reply-To: <050.b7b48f5ca353d6d5fc1cce040d870c7c@varnish-cache.org> References: <050.b7b48f5ca353d6d5fc1cce040d870c7c@varnish-cache.org> Message-ID: <065.2c044ddbb05d1e27cec8d1b25b20c829@varnish-cache.org> #1286: 3 of 322 tests failed with varnish-trunk+2013-03-22.tar.gz --------------------------+---------------------- Reporter: plamenpetrov | Owner: Type: defect | Status: closed Priority: low | Milestone: Component: regress | Version: trunk Severity: minor | Resolution: invalid Keywords: tests fail | --------------------------+---------------------- Comment (by plamenpetrov): Yes, I am using dnsmasq as my DNS resolver.[[BR]] Thanks for your time! -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Apr 9 09:33:54 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 09 Apr 2013 09:33:54 -0000 Subject: [Varnish] #1291: Error 503 Service Unavailable Message-ID: <050.8abd3a78a57f72b0cf16860120a02fb1@varnish-cache.org> #1291: Error 503 Service Unavailable --------------------------+-------------------- Reporter: traitimvuong | Type: task Status: new | Priority: high Milestone: | Component: build Version: trunk | Severity: normal Keywords: duhx | --------------------------+-------------------- I get the error "Error 503 Service Unavailable" after installing Varnish on my server. I have added these parameters to the file default.vcl: .connect_timeout = 100s; .first_byte_timeout = 500s; .between_bytes_timeout = 200s; but I still get the above error. How can I fix this error? 
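For context, the timeout settings quoted above belong inside a backend declaration in default.vcl. A minimal sketch follows; the host and port values are placeholders, not taken from the ticket:

```vcl
backend default {
    .host = "127.0.0.1";   # placeholder: address of the web server behind Varnish
    .port = "8080";        # placeholder: port the web server listens on
    .connect_timeout = 100s;
    .first_byte_timeout = 500s;
    .between_bytes_timeout = 200s;
}
```

Since a 503 from Varnish indicates a failed backend fetch, checking that .host and .port actually point at a listening web server is usually the first step before raising timeouts.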
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Apr 10 09:52:45 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 10 Apr 2013 09:52:45 -0000 Subject: [Varnish] #1292: Varnish restarts itself, when processing response from backend Message-ID: <042.4b2b91c80836415cc2d25b9977d04c7c@varnish-cache.org> #1292: Varnish restarts itself, when processing response from backend -----------------------------+---------------------- Reporter: ixos | Type: defect Status: new | Priority: normal Milestone: Varnish 3.0 dev | Component: varnishd Version: 3.0.3 | Severity: major Keywords: | -----------------------------+---------------------- IPs and hostnames are obfuscated. {{{ host6 kernel: varnishd[28260]: segfault at 7000 ip 000000000045eb1d sp 00007fc9db1ea130 error 4 in varnishd[400000+80000] }}} {{{ Last panic at: Wed, 10 Apr 2013 09:12:59 GMT Assert error in VDI_CloseFd(), cache_dir.c line 45: Condition((sp->vbc)->magic == 0x0c5e6592) not true. 
thread = (cache-worker) ident = Linux,3.2.13-grsec-xxxx-grs- ipv6-64,x86_64,-sfile,-sfile,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x434148: /usr/sbin/varnishd() [0x434148] 0x41dbe6: /usr/sbin/varnishd(VDI_CloseFd+0x46) [0x41dbe6] 0x427a38: /usr/sbin/varnishd(FetchBody+0x758) [0x427a38] 0x41ae40: /usr/sbin/varnishd() [0x41ae40] 0x41bc05: /usr/sbin/varnishd(CNT_Session+0x675) [0x41bc05] 0x437249: /usr/sbin/varnishd() [0x437249] 0x7ff3287518ba: /lib/libpthread.so.0(+0x68ba) [0x7ff3287518ba] 0x7ff3284b902d: /lib/libc.so.6(clone+0x6d) [0x7ff3284b902d] sp = 0x7ff3209b5008 { fd = 26, id = 26, xid = 1367065654, client = 86.56.36.58 10585, step = STP_STREAMDELIVER, handling = deliver, err_code = 200, err_reason = (null), restarts = 0, esi_level = 0 flags = bodystatus = 4 ws = 0x7ff3209b5080 { id = "sess", {s,f,r,e} = {0x7ff3209b5d70,+632,(nil),+65536}, }, http[req] = { ws = 0x7ff3209b5080[sess] "GET", "/banners/computersinternetdownloads/graphics.jpg", "HTTP/1.1", "User-Agent: Jedessine/1.3 CFNetwork/485.13.9 Darwin/11.0.0", "Accept: */*", "Accept-Language: fr-fr", "Accept-Encoding: gzip, deflate", "Cache-Control: max-age=43200", "Connection: keep-alive", "host: images.net", "dns: 1", "X-Forwarded-For: 84.90.27.33", "X-Remote-Ip: 84.90.27.33", "Remote-Ip: 84.90.27.33", "X-CDN-Geo: par", "X-CDN-Geo-IP: 66.249.73.37", "X-Cacheable-Force: 1", "X-Cacheable: Matched cache", }, worker = 0x7fc9d1fa1aa0 { ws = 0x7fc9d1fa1ce8 { id = "wrk", {s,f,r,e} = {0x7fc9d1f8fa30,+216,(nil),+65536}, }, http[resp] = { ws = 0x7fc9d1fa1ce8[wrk] "HTTP/1.1", "200", "OK", "Last-Modified: Thu, 29 Oct 2009 10:48:00 GMT", "Vary: Accept-Encoding", "Expires: Thu, 31 Dec 2037 23:55:55 GMT", "Pragma: public", "Content-Encoding: gzip", "Content-Type: image/png", "Accept-Ranges: bytes", "Date: Wed, 10 Apr 2013 09:12:59 GMT", "Connection: keep-alive", "X-Cacheable: Matched cache", "X-CDN-Geo: par", "X-CDN-Geo-IP: 66.249.73.37", "Content-Length: 6598", }, }, vcl = { srcname = { "input", "Default", }, }, obj = 
0x7fe52851b000 { xid = 1367065654, ws = 0x7fe52851b018 { id = "obj", {s,f,r,e} = {0x7fe52851b228,+400,(nil),+424}, }, http[obj] = { ws = 0x7fe52851b018[obj] "HTTP/1.1", "OK", "Date: Wed, 10 Apr 2013 09:13:10 GMT", "Server: Apache/2.4.3 (Unix) OpenSSL/1.0.1c", "Last-Modified: Thu, 29 Oct 2009 10:48:00 GMT", "Vary: Accept-Encoding", "Expires: Thu, 31 Dec 2037 23:55:55 GMT", "Pragma: public", "Content-Encoding: gzip", "X-Cache: HIT from cache1.net", "Content-Type: image/png", "Content-Length: 6598", }, len = 6598, store = { 6598 { 1f 8b 08 00 00 00 00 00 00 03 00 0a 10 f5 ef 89 |................| 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 |PNG........IHDR.| 00 00 64 00 00 00 64 08 06 00 00 00 70 e2 95 54 |..d...d.....p..T| 00 00 00 19 74 45 58 74 53 6f 66 74 77 61 72 65 |....tEXtSoftware| [6534 more] }, }, }, }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Apr 10 18:50:45 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 10 Apr 2013 18:50:45 -0000 Subject: [Varnish] #1293: varnishd: page allocation failure Message-ID: <048.5c12d86511840ad8268d566798999de1@varnish-cache.org> #1293: varnishd: page allocation failure ------------------------+-------------------- Reporter: msallen333 | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.3 | Severity: normal Keywords: | ------------------------+-------------------- Varnishd recently "hung" on my system with the below error /var/log/messages and had to be restarted. Does this appear to be a varnish 3.0.3 defect? Apr 9 15:42:59 lx11 kernel: __ratelimit: 26 callbacks suppressed Apr 9 15:42:59 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:42:59 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:42:59 lx11 kernel: Call Trace: Apr 9 15:42:59 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:42:59 lx11 kernel: [] ? 
kmem_getpages+0x62/0x170 Apr 9 15:42:59 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:42:59 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:42:59 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:42:59 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:42:59 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:42:59 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:42:59 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:42:59 lx11 kernel: [] ? tcp_data_queue+0x1ab/0xc70 Apr 9 15:42:59 lx11 kernel: [] ? tick_program_event+0x2a/0x30 Apr 9 15:42:59 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:42:59 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:42:59 lx11 kernel: [] ? invalidate_interrupt7+0x13/0x20 Apr 9 15:42:59 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:42:59 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:42:59 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:42:59 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:42:59 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:42:59 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:42:59 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:42:59 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:42:59 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:42:59 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:42:59 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:42:59 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:42:59 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:42:59 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:42:59 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:42:59 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:42:59 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:42:59 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:42:59 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:42:59 lx11 kernel: [] ? 
do_softirq+0x65/0xa0 Apr 9 15:42:59 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:42:59 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:42:59 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:42:59 lx11 kernel: Apr 9 15:42:59 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:42:59 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:42:59 lx11 kernel: Call Trace: Apr 9 15:42:59 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:42:59 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:42:59 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:42:59 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:42:59 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:42:59 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:42:59 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:42:59 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:42:59 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:42:59 lx11 kernel: [] ? tcp_data_queue+0x1ab/0xc70 Apr 9 15:42:59 lx11 kernel: [] ? tcp_validate_incoming+0x30b/0x3a0 Apr 9 15:42:59 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:42:59 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:42:59 lx11 kernel: [] ? invalidate_interrupt4+0x13/0x20 Apr 9 15:42:59 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:42:59 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:42:59 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:42:59 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:42:59 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:42:59 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:42:59 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:42:59 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:42:59 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:42:59 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:42:59 lx11 kernel: [] ? 
napi_gro_receive+0x39/0x50 Apr 9 15:42:59 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:42:59 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:42:59 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:42:59 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:42:59 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:42:59 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:42:59 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:42:59 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:42:59 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:42:59 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:42:59 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:42:59 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:42:59 lx11 kernel: Apr 9 15:42:59 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:42:59 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:42:59 lx11 kernel: Call Trace: Apr 9 15:43:02 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:43:02 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:43:02 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:43:02 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:43:02 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:43:02 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:43:02 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:43:02 lx11 kernel: [] ? tcp_data_queue+0x1ab/0xc70 Apr 9 15:43:02 lx11 kernel: [] ? tcp_validate_incoming+0x30b/0x3a0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:02 lx11 kernel: [] ? ipv4_confirm+0x87/0x1d0 [nf_conntrack_ipv4] Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? 
ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:02 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:43:02 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:02 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:43:02 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:43:02 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:02 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:02 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:43:02 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:02 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:02 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:43:02 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:02 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:02 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:43:02 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:43:02 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:02 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:02 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:43:02 lx11 kernel: Apr 9 15:43:02 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:43:02 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:43:02 lx11 kernel: Call Trace: Apr 9 15:43:02 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:43:02 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:43:02 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:43:02 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:43:02 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:43:02 lx11 kernel: [] ? 
__alloc_skb+0x6d/0x190 Apr 9 15:43:02 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:43:02 lx11 kernel: [] ? tcp_data_queue+0x1ab/0xc70 Apr 9 15:43:02 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:02 lx11 kernel: [] ? ipv4_confirm+0x87/0x1d0 [nf_conntrack_ipv4] Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:02 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:43:02 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:02 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:43:02 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:43:02 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:02 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:02 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:43:02 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:02 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:02 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:43:02 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:02 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:02 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:43:02 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:43:02 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:02 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:02 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:43:02 lx11 kernel: Apr 9 15:43:02 lx11 kernel: varnishd: page allocation failure. 
order:0, mode:0x20 Apr 9 15:43:02 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:43:02 lx11 kernel: Call Trace: Apr 9 15:43:02 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:43:02 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:43:02 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:43:02 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:43:02 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:43:02 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:43:02 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:43:02 lx11 kernel: [] ? tcp_data_queue+0x1ab/0xc70 Apr 9 15:43:02 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:02 lx11 kernel: [] ? ipv4_confirm+0x87/0x1d0 [nf_conntrack_ipv4] Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:02 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:43:02 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:02 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:43:02 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:43:02 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:02 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:02 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:43:02 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:02 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:02 lx11 kernel: [] ? 
net_rx_action+0x103/0x2f0 Apr 9 15:43:02 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:02 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:02 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:43:02 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:43:02 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:02 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:02 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:43:02 lx11 kernel: Apr 9 15:43:02 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:43:02 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:43:02 lx11 kernel: Call Trace: Apr 9 15:43:02 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:43:02 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:43:02 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:43:02 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:43:02 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:43:02 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:43:02 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:43:02 lx11 kernel: [] ? tcp_data_queue+0x1ab/0xc70 Apr 9 15:43:02 lx11 kernel: [] ? tcp_validate_incoming+0x30b/0x3a0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:02 lx11 kernel: [] ? smp_invalidate_interrupt+0x60/0xc0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:02 lx11 kernel: [] ? 
__netif_receive_skb+0x4ab/0x750 Apr 9 15:43:02 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:02 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:43:02 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:43:02 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:02 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:02 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:43:02 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:02 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:02 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:43:02 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:02 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:02 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:43:02 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:43:02 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:02 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:02 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:43:02 lx11 kernel: Apr 9 15:43:02 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:43:02 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:43:02 lx11 kernel: Call Trace: Apr 9 15:43:02 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:43:02 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:43:02 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:43:02 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:43:02 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:43:02 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:43:02 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:43:02 lx11 kernel: [] ? tcp_data_queue+0x1ab/0xc70 Apr 9 15:43:02 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:02 lx11 kernel: [] ? 
tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:02 lx11 kernel: [] ? ipv4_confirm+0x87/0x1d0 [nf_conntrack_ipv4] Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:02 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:43:02 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:02 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:43:02 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:43:02 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:02 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:43:07 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:43:07 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:07 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:07 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:43:07 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:07 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:07 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:43:07 lx11 kernel: Apr 9 15:43:07 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:43:07 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:43:07 lx11 kernel: Call Trace: Apr 9 15:43:07 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:43:07 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:43:07 lx11 kernel: [] ? 
fallback_alloc+0x1ba/0x270 Apr 9 15:43:07 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:43:07 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:43:07 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:43:07 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:43:07 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:43:07 lx11 kernel: [] ? tcp_data_queue+0x3ed/0xc70 Apr 9 15:43:07 lx11 kernel: [] ? tcp_validate_incoming+0x2c0/0x3a0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:07 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:07 lx11 kernel: [] ? ipv4_confirm+0x87/0x1d0 [nf_conntrack_ipv4] Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:07 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:07 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:43:07 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:07 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:43:07 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:43:07 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:43:07 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:43:07 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:07 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:07 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:43:07 lx11 kernel: [] ? 
do_softirq+0x65/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:07 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:07 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:43:07 lx11 kernel: Apr 9 15:43:07 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:43:07 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:43:07 lx11 kernel: Call Trace: Apr 9 15:43:07 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:43:07 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:43:07 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:43:07 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:43:07 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:43:07 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:43:07 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:43:07 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:43:07 lx11 kernel: [] ? tcp_data_queue+0x3ed/0xc70 Apr 9 15:43:07 lx11 kernel: [] ? tcp_validate_incoming+0x2c0/0x3a0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:07 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:07 lx11 kernel: [] ? ipv4_confirm+0x87/0x1d0 [nf_conntrack_ipv4] Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:07 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:07 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:43:07 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:07 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:43:07 lx11 kernel: [] ? 
napi_skb_finish+0x50/0x70 Apr 9 15:43:07 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:43:07 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:43:07 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:07 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:07 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:43:07 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:07 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:07 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:43:07 lx11 kernel: Apr 9 15:43:07 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:43:07 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:43:07 lx11 kernel: Call Trace: Apr 9 15:43:07 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:43:07 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:43:07 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:43:07 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:43:07 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:43:07 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:43:07 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:43:07 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:43:07 lx11 kernel: [] ? tcp_data_queue+0x1ab/0xc70 Apr 9 15:43:07 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:07 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:07 lx11 kernel: [] ? ipv4_confirm+0x87/0x1d0 [nf_conntrack_ipv4] Apr 9 15:43:07 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:07 lx11 kernel: [] ? 
ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:07 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:07 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:43:07 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:07 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:43:07 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:43:07 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? smp_invalidate_interrupt+0x60/0xc0 Apr 9 15:43:07 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:43:07 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:07 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:07 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:43:07 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:07 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:07 lx11 kernel: [] ? 
apic_timer_interrupt+0x13/0x20 Apr 9 16:15:24 lx11 kernel: Apr 9 16:15:24 lx11 kernel: usb 6-1: USB disconnect, device number 2 Apr 9 16:15:44 lx11 kernel: usb 6-1: new full speed USB device number 3 using uhci_hcd Apr 9 16:15:44 lx11 kernel: usb 6-1: New USB device found, idVendor=03f0, idProduct=1027 Apr 9 16:15:44 lx11 kernel: usb 6-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 Apr 9 16:15:44 lx11 kernel: usb 6-1: Product: Virtual Keyboard Apr 9 16:15:44 lx11 kernel: usb 6-1: Manufacturer: HP Apr 9 16:15:44 lx11 kernel: usb 6-1: configuration #1 chosen from 1 choice Apr 9 16:15:44 lx11 kernel: input: HP Virtual Keyboard as /devices/pci0000:00/0000:00:1e.0/0000:01:04.4/usb6/6-1/6-1:1.0/input/input6 Apr 9 16:15:44 lx11 kernel: generic-usb 0003:03F0:1027.0003: input,hidraw0: USB HID v1.01 Keyboard [HP Virtual Keyboard] on usb-0000:01:04.4-1/input0 Apr 9 16:15:44 lx11 kernel: input: HP Virtual Keyboard as /devices/pci0000:00/0000:00:1e.0/0000:01:04.4/usb6/6-1/6-1:1.1/input/input7 Apr 9 16:15:44 lx11 kernel: generic-usb 0003:03F0:1027.0004: input,hidraw1: USB HID v1.01 Mouse [HP Virtual Keyboard] on usb-0000:01:04.4-1/input1 Apr 9 17:50:33 lx11 varnishd[24950]: Manager got SIGINT Apr 9 17:50:42 lx11 varnishd[18973]: Platform: Linux,2.6.32-358.2.1.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit Apr 9 17:50:42 lx11 varnishd[18973]: child (18974) Started Apr 9 17:50:42 lx11 varnishd[18973]: Child (18974) said Child starts Apr 9 17:50:42 lx11 varnishd[18973]: Child (18974) said SMF.s0 mmap'ed -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Apr 11 16:22:35 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 11 Apr 2013 16:22:35 -0000 Subject: [Varnish] #1293: varnishd: page allocation failure In-Reply-To: <048.5c12d86511840ad8268d566798999de1@varnish-cache.org> References: <048.5c12d86511840ad8268d566798999de1@varnish-cache.org> Message-ID: <063.edebd983ced464dd9eca0861bbedbcd4@varnish-cache.org> 
#1293: varnishd: page allocation failure ------------------------+-------------------- Reporter: msallen333 | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ------------------------+-------------------- Comment (by msallen333): Note that we are running with transparent hugepage support disabled: [root@lx11 redhat_transparent_hugepage]# pwd /sys/kernel/mm/redhat_transparent_hugepage [root@lx11 redhat_transparent_hugepage]# ls -l total 0 -rw-r--r-- 1 root root 4096 Apr 11 12:18 defrag -rw-r--r-- 1 root root 4096 Mar 24 06:55 enabled drwxr-xr-x 2 root root 0 Apr 11 12:18 khugepaged [root@lx11 redhat_transparent_hugepage]# cat enabled always [never] Varnishd is 3.0.3 and Red Hat is 6.4: [root@lx11 redhat_transparent_hugepage]# uname -a Linux lx11 2.6.32-358.2.1.el6.x86_64 #1 SMP Wed Feb 20 12:17:37 EST 2013 x86_64 x86_64 x86_64 GNU/Linux [root@lx11 redhat_transparent_hugepage]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.4 (Santiago) [root@lx11 redhat_transparent_hugepage]# varnishd -V varnishd (varnish-3.0.3 revision 9e6a70f) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2011 Varnish Software AS -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Apr 12 11:27:11 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 12 Apr 2013 11:27:11 -0000 Subject: [Varnish] #1290: varnishd crashes with signal 6 In-Reply-To: <042.8c4ff30a221e6ab80bd3a213ce5a9a42@varnish-cache.org> References: <042.8c4ff30a221e6ab80bd3a213ce5a9a42@varnish-cache.org> Message-ID: <057.84c85fbf612f1ec2c53701496f473620@varnish-cache.org> #1290: varnishd crashes with signal 6 ----------------------+------------------------- Reporter: olli | Owner: Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: 3.0.3 Severity: critical | Resolution: worksforme Keywords: |
----------------------+------------------------- Comment (by olli): I wonder why varnishd crashes? Wouldn't it be better to handle the error? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Apr 12 14:29:28 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 12 Apr 2013 14:29:28 -0000 Subject: [Varnish] #1287: Varnish 3.0.3 - segfault in libvarnish.so. In-Reply-To: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> References: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> Message-ID: <059.fb4cf5cc3aa317f62fcb36d90752c9e3@varnish-cache.org> #1287: Varnish 3.0.3 - segfault in libvarnish.so. ------------------------------------+-------------------- Reporter: robroy | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: segfault libvarnish.so | ------------------------------------+-------------------- Comment (by coennie): We just went live with Varnish 3.0.3 and I see the same results as described above: {{{ Apr 12 16:23:32 lb1 varnishd[8572]: child (18461) Started Apr 12 16:23:32 lb1 varnishd[8572]: Child (18461) said Child starts Apr 12 16:23:41 lb1 kernel: [13748.309110] varnishd[18488]: segfault at 0 ip 00007f48c9355504 sp 00007f48b3ee9320 error 4 in libvarnish.so[7f48c9349000+13000] Apr 12 16:23:41 lb1 varnishd[8572]: Child (18461) died signal=11 Apr 12 16:23:41 lb1 varnishd[8572]: Child cleanup complete Apr 12 16:23:41 lb1 varnishd[8572]: child (18499) Started Apr 12 16:23:41 lb1 varnishd[8572]: Child (18499) said Child starts }}} I've traced the crashing back to PURGING. Every time we call a PURGE from the backend, varnishd crashes with the error above.
In the VCL I've directly copied the purge commands from the manual:
{{{
sub vcl_hit {
    if (req.request == "PURGE" || req.url ~ "purge=true") {
        purge;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE" || req.url ~ "purge=true") {
        purge;
        error 200 "Purged.";
    }
}
}}}
Please advise! Regards Coen -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Apr 13 14:57:04 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 13 Apr 2013 14:57:04 -0000 Subject: [Varnish] #1294: Varnish diagrams don't mention hash_always_miss Message-ID: <044.6c7c8f95fb47baa3eda72dedbb11b31f@varnish-cache.org> #1294: Varnish diagrams don't mention hash_always_miss --------------------+--------------------------- Reporter: allanc | Type: documentation Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | --------------------+--------------------------- [https://www.varnish-cache.org/trac/wiki/VCLExampleDefault This diagram], which provides a good overview of how Varnish works through the request-response handling workflow, doesn't include any mention of "hash_always_miss". Looking at the diagram, I had been working under the assumption that Varnish didn't provide a way to update a cached document during a request without either purging it before reaching "vcl_fetch", or manually expiring the cached object. It would be good if that diagram could include:

 * A change to the "obj in cache" block to show the influence that setting hash_always_miss has.
 * A clearer indication of the point at which the previously cached object is evacuated from the cache - presumably that happens at the point we choose to return "deliver" from "vcl_fetch", but it would be good if that was made explicit.
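For reference, hash_always_miss is set per request in "vcl_recv"; a minimal sketch (the refresh=true query-string trigger is a hypothetical example, not from the ticket):
{{{
sub vcl_recv {
    # Hypothetical trigger: force this request to miss the cache
    # lookup, so a fresh copy is fetched from the backend and
    # replaces the cached object.
    if (req.url ~ "refresh=true") {
        set req.hash_always_miss = true;
    }
}
}}}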
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Apr 14 10:30:32 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 14 Apr 2013 10:30:32 -0000 Subject: [Varnish] #1083: Persistent Varnish crashes since using bans and lurker In-Reply-To: <049.21c55906d3bd67b40377e9353f5dd9ce@varnish-cache.org> References: <049.21c55906d3bd67b40377e9353f5dd9ce@varnish-cache.org> Message-ID: <064.e4ac2996d553c42c5effd36a0aac2c3e@varnish-cache.org> #1083: Persistent Varnish crashes since using bans and lurker -------------------------+--------------------- Reporter: rmohrbacher | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.2 Severity: major | Resolution: Keywords: | -------------------------+--------------------- Comment (by numard): I can confirm this happened on 3.0.2-1~1lucid1 (once every ~ 8 hours ). I upgraded to to 3.0.3-1~precise , and it happens also, but it seems, so far, less often (~ 18 hours ). We have 2 x servers with similar usage pattern as @mohrbacher's : - file storage - no issues for a long time - we started pushing a lot more bans, and the issues started to happen. Varnish (3.0.3-1~precise package from http://repo.varnish- cache.org/ubuntu/, ubuntu Precise 12.0.4 LTS ) is acting as a cache for S3 objects. It runs as : {{{ /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -p thread_pool_min 200 -p thread_pool_max 4000 -p thread_pool_add_delay 2 -p http_req_hdr_len 10240 -p http_req_size 65536 -p first_byte_timeout 300 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s persistent,/mnt/varnish_store,360G }}} Running on AWS, m1.medium, no apparent constraints on memory, none on cpu nor i/o. 
When child process dies, panic.list shows: {{{ varnish> panic.show 200 Last panic at: Sun, 14 Apr 2013 09:57:34 GMT Missing errorhandling code in smp_append_sign(), storage_persistent_subr.c line 128: Condition((smp_chk_sign(ctx)) == 0) not true.thread = (cache-worker) ident = Linux,3.2.0-40-virtual,x86_64,-spersistent,-smalloc,-hcritbit,epoll Backtrace: 0x4310e5: /usr/sbin/varnishd() [0x4310e5] 0x4514d8: /usr/sbin/varnishd(smp_append_sign+0x128) [0x4514d8] 0x44f1da: /usr/sbin/varnishd(SMP_NewBan+0x3a) [0x44f1da] 0x4158d2: /usr/sbin/varnishd(BAN_Insert+0x1a2) [0x4158d2] 0x439fa8: /usr/sbin/varnishd(VRT_ban_string+0xb8) [0x439fa8] 0x7f6391ef60c7: ./vcl.LQXRTnfB.so(+0x20c7) [0x7f6391ef60c7] 0x437f48: /usr/sbin/varnishd(VCL_recv_method+0x48) [0x437f48] 0x41946b: /usr/sbin/varnishd(CNT_Session+0xf2b) [0x41946b] 0x432ee5: /usr/sbin/varnishd() [0x432ee5] 0x7fbd9bb5de9a: /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7fbd9bb5de9a] sp = 0x7f62c8cda008 { fd = 12, id = 12, xid = 1800342971, client = 10.32.37.110 49187, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 flags = bodystatus = 4 ws = 0x7f62c8cda080 { id = "sess", {s,f,r,e} = {0x7f62c8cdac78,+168,(nil),+65536}, }, http[req] = { ws = 0x7f62c8cda080[sess] "BAN", "/xxxxs3bucketxxxx/path1/key2/key3", "HTTP/1.1", "Accept: */*", "host: s3.amazonaws.com", }, worker = 0x7f632d629ac0 { ws = 0x7f632d629cf8 { id = "wrk", {s,f,r,e} = {0x7f632d617a50,+56,(nil),+65536}, }, }, vcl = { srcname = { "input", "Default", }, }, }, }}} ----- Both servers get each ban request needed (they are behind load balancers with non-deterministic choosing of the varnish server), but the url shown in the panic dumps are different (though of the same 'type' - if it matters i can show examples). I'm willing to test a patch on production ASAP if it exists... 
Cheers, Beto -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Apr 14 11:09:47 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 14 Apr 2013 11:09:47 -0000 Subject: [Varnish] #1083: Persistent Varnish crashes since using bans and lurker In-Reply-To: <049.21c55906d3bd67b40377e9353f5dd9ce@varnish-cache.org> References: <049.21c55906d3bd67b40377e9353f5dd9ce@varnish-cache.org> Message-ID: <064.2c388b0503dbd0a074b728106104c6ac@varnish-cache.org> #1083: Persistent Varnish crashes since using bans and lurker -------------------------+--------------------- Reporter: rmohrbacher | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.2 Severity: major | Resolution: Keywords: | -------------------------+--------------------- Comment (by numard): When using varnish 3.0.2, on each crash, I had to delete the persistent file storage - if not deleted, the child process would fail pretty much right away on start. 3.0.3 has only failed me once so far, restarting without deleting the persistent file storage has worked this time... 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 15 05:38:10 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Apr 2013 05:38:10 -0000 Subject: [Varnish] #1083: Persistent Varnish crashes since using bans and lurker In-Reply-To: <049.21c55906d3bd67b40377e9353f5dd9ce@varnish-cache.org> References: <049.21c55906d3bd67b40377e9353f5dd9ce@varnish-cache.org> Message-ID: <064.24a14c77b8c7ad2f7cc181b232b9e444@varnish-cache.org> #1083: Persistent Varnish crashes since using bans and lurker -------------------------+--------------------- Reporter: rmohrbacher | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.2 Severity: major | Resolution: Keywords: | -------------------------+--------------------- Comment (by numard): Another crash, not as much info on panic.show, but points to similar piece of code. {{{ varnish> panic.show 200 Last panic at: Mon, 15 Apr 2013 05:29:32 GMT Missing errorhandling code in smp_append_sign(), storage_persistent_subr.c line 128: Condition((smp_chk_sign(ctx)) == 0) not true.errno = 22 (Invalid argument) thread = (cache-main) ident = Linux,3.2.0-40-virtual,x86_64,-spersistent,-smalloc,-hcritbit,no_waiter Backtrace: 0x4310e5: /usr/sbin/varnishd() [0x4310e5] 0x4514d8: /usr/sbin/varnishd(smp_append_sign+0x128) [0x4514d8] 0x44f1da: /usr/sbin/varnishd(SMP_NewBan+0x3a) [0x44f1da] 0x416446: /usr/sbin/varnishd(BAN_Compile+0x66) [0x416446] 0x42fe0a: /usr/sbin/varnishd(child_main+0xca) [0x42fe0a] 0x443767: /usr/sbin/varnishd() [0x443767] 0x443caf: /usr/sbin/varnishd() [0x443caf] 0x7f7ceb8bac82: /usr/lib/varnish/libvarnish.so(+0x9c82) [0x7f7ceb8bac82] 0x7f7ceb8bb348: /usr/lib/varnish/libvarnish.so(vev_schedule+0x98) [0x7f7ceb8bb348] 0x443f87: /usr/sbin/varnishd(MGT_Run+0x137) [0x443f87] varnish> }}} within minutes of the first server dying, the 2nd server crashed as well, implying there is a limit we are hitting (or a leak we are 
hitting...) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 15 10:32:08 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Apr 2013 10:32:08 -0000 Subject: [Varnish] #1292: Varnish restarts itself, when processing response from backend In-Reply-To: <042.4b2b91c80836415cc2d25b9977d04c7c@varnish-cache.org> References: <042.4b2b91c80836415cc2d25b9977d04c7c@varnish-cache.org> Message-ID: <057.1a18a0afffe3f63cbc354b05f761ccc2@varnish-cache.org> #1292: Varnish restarts itself, when processing response from backend ----------------------+------------------------------ Reporter: ixos | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 3.0 dev Component: varnishd | Version: 3.0.3 Severity: major | Resolution: Keywords: | ----------------------+------------------------------ Comment (by martin): Hi, Could you please attach the VCL configuration you are using when experiencing this error? Also any other meta information to help track this down would be helpful, like how often do you experience this? Is it only for specific resources? 
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 15 10:34:35 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Apr 2013 10:34:35 -0000 Subject: [Varnish] #1291: Error 503 Service Unavailable In-Reply-To: <050.8abd3a78a57f72b0cf16860120a02fb1@varnish-cache.org> References: <050.8abd3a78a57f72b0cf16860120a02fb1@varnish-cache.org> Message-ID: <065.b8c5466949a7c620f2db57f52705db0a@varnish-cache.org> #1291: Error 503 Service Unavailable --------------------------+---------------------- Reporter: traitimvuong | Owner: Type: task | Status: closed Priority: high | Milestone: Component: build | Version: trunk Severity: normal | Resolution: invalid Keywords: duhx | --------------------------+---------------------- Changes (by martin): * status: new => closed * resolution: => invalid Comment: Hi, This bug tracker is for Varnish bugs only. Please direct questions related to how to configure varnish to the varnish-misc mailing list, or ask for help on the #varnish IRC channel. 
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 15 10:57:19 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Apr 2013 10:57:19 -0000 Subject: [Varnish] #1293: varnishd: page allocation failure In-Reply-To: <048.5c12d86511840ad8268d566798999de1@varnish-cache.org> References: <048.5c12d86511840ad8268d566798999de1@varnish-cache.org> Message-ID: <063.eba34add220a49fb3f625e932473073a@varnish-cache.org> #1293: varnishd: page allocation failure ------------------------+------------------------- Reporter: msallen333 | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: worksforme Keywords: | ------------------------+------------------------- Changes (by martin): * status: new => closed * resolution: => worksforme Comment: Hi, This does not sound like a varnish issue, but a tuning issue of either varnish or the kernel. The system is running low on available RAM, and messages like that are to be expected if one configures varnish to use more space than is available. Try reducing the cache size and see if the problem goes away. Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 15 11:02:04 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Apr 2013 11:02:04 -0000 Subject: [Varnish] #1287: Varnish 3.0.3 - segfault in libvarnish.so. In-Reply-To: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> References: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> Message-ID: <059.6265e342c1303ea8d7b36e8341e3eadd@varnish-cache.org> #1287: Varnish 3.0.3 - segfault in libvarnish.so. 
------------------------------------+-------------------- Reporter: robroy | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: segfault libvarnish.so | ------------------------------------+-------------------- Comment (by martin): Hi, Did you manage to get the GDB stack traces we asked for? Also, you say that the issue occurs every time you do 'purge;'. The original VCL configuration attached to this ticket does not have any purge statements. Please clarify the circumstances. If you have a clear way to reproduce this behavior, please share the steps including the VCL configuration and all the requests necessary. Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 15 11:17:42 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Apr 2013 11:17:42 -0000 Subject: [Varnish] #1294: Varnish diagrams don't mention hash_always_miss In-Reply-To: <044.6c7c8f95fb47baa3eda72dedbb11b31f@varnish-cache.org> References: <044.6c7c8f95fb47baa3eda72dedbb11b31f@varnish-cache.org> Message-ID: <059.bd47e5ee6f8637c9c4fd9fb1d9cecf42@varnish-cache.org> #1294: Varnish diagrams don't mention hash_always_miss ---------------------------+-------------------- Reporter: allanc | Owner: perbu Type: documentation | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | ---------------------------+-------------------- Changes (by martin): * owner: => perbu Comment: Hi, Thanks for your feedback. The diagram is meant more to be an aid in understanding the flow through Varnish and doesn't necessarily reflect the whole situation. The hash_always_miss functionality was an afterthought, added later to address a specific situation that the VCL flow couldn't address, and that is why it isn't part of the diagram. 
Varnish version 4 draws closer, and this functionality will be more clearly defined and part of the design process from the get go. I am leaving the ticket open for now as a reminder to see if we go back and update the diagrams to reflect this (and the accompanying hash_ignore_busy). Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Apr 16 06:59:31 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 16 Apr 2013 06:59:31 -0000 Subject: [Varnish] #1287: Varnish 3.0.3 - segfault in libvarnish.so. In-Reply-To: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> References: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> Message-ID: <059.aa1d84dec104afe91776be11fdf13622@varnish-cache.org> #1287: Varnish 3.0.3 - segfault in libvarnish.so. ------------------------------------+-------------------- Reporter: robroy | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: segfault libvarnish.so | ------------------------------------+-------------------- Comment (by coennie): Replying to [comment:4 martin]: Hi Martin, I've uploaded Archief.zip which contains the vcl and a core dump. Here's some technical extra information: {{{ root at lb2:/proc/sys/kernel# /usr/sbin/varnishd -d -d -d -P /var/run/varnishd.pid -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,1536m Platform: Linux,2.6.32-5-xen-amd64,x86_64,-smalloc,-smalloc,-hcritbit 200 246 ----------------------------- Varnish Cache CLI 1.0 ----------------------------- Linux,2.6.32-5-xen-amd64,x86_64,-smalloc,-smalloc,-hcritbit Type 'help' for command list. Type 'quit' to close CLI session. Type 'start' to launch worker process. 
start child (18281) Started 200 0 Child (18281) said Child starts Child (18281) died signal=11 (core dumped) Child cleanup complete child (18312) Started Child (18312) said Child starts }}} The segfault from coredump: [1204305.073356] varnishd[18300]: segfault at 0 ip 00007fb19d296504 sp 00007fb18c5e3320 error 4 in libvarnish.so[7fb19d28a000+13000] This segfault has been created by: get /^J^J Hope to hear from you soon, as it's a big caching problem in our live environment now. The way I can get varnishd to create a segfault is by this command on a client: curl -X PURGE {any URL} Regards Coen -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Apr 16 13:28:46 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 16 Apr 2013 13:28:46 -0000 Subject: [Varnish] #1083: Persistent Varnish crashes since using bans and lurker In-Reply-To: <049.21c55906d3bd67b40377e9353f5dd9ce@varnish-cache.org> References: <049.21c55906d3bd67b40377e9353f5dd9ce@varnish-cache.org> Message-ID: <064.292794b426bc07a0f12301cd4688cf1e@varnish-cache.org> #1083: Persistent Varnish crashes since using bans and lurker -------------------------+--------------------- Reporter: rmohrbacher | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.2 Severity: major | Resolution: Keywords: | -------------------------+--------------------- Comment (by martin): This is because in stock Varnish persistence, the persisted ban space is a fixed size that cannot be reclaimed. When the space is exhausted, the silo becomes unusable. So in stock Varnish persistence, it is not advisable to rely on bans for cache invalidation. This is planned to be fixed in Varnish release 4, which will contain fixes in this area. Leaving the ticket open until the necessary bits have been fully merged. (The -plus branch contains a preview of fixes that correct this behavior. 
While this is open source, there is no community-driven support available (https://github.com/mbgrydeland/varnish-cache/tree/3.0.3-plus)). Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Apr 16 15:05:54 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 16 Apr 2013 15:05:54 -0000 Subject: [Varnish] #1083: Persistent Varnish crashes since using bans and lurker In-Reply-To: <049.21c55906d3bd67b40377e9353f5dd9ce@varnish-cache.org> References: <049.21c55906d3bd67b40377e9353f5dd9ce@varnish-cache.org> Message-ID: <064.18c43a1328e24a8d1ec7dd4e1030d32d@varnish-cache.org> #1083: Persistent Varnish crashes since using bans and lurker -------------------------+--------------------- Reporter: rmohrbacher | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.2 Severity: major | Resolution: Keywords: | -------------------------+--------------------- Comment (by numard): Understood, thanks for the (public) update :). If I understand this correctly, PURGE requests don't suffer from the fixed 'list size' problem, but they happen synchronously with the purge request? 
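[Editor's note on #1083: to make the ban-versus-purge distinction concrete, here is a minimal, hypothetical Varnish 3 VCL sketch of the two invalidation mechanisms being compared. The BAN and PURGE request methods are site conventions, not built into Varnish:

{{{
sub vcl_recv {
    if (req.request == "BAN") {
        # Appends an entry to the ban list; with -s persistent the
        # entry is also written to the silo's fixed-size ban space.
        ban("req.url ~ " + req.url);
        error 200 "Banned.";
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        # Drops this one object immediately; no ban-list entry is made.
        purge;
        error 200 "Purged.";
    }
}
}}}
]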
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Apr 19 20:40:03 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 19 Apr 2013 20:40:03 -0000 Subject: [Varnish] #846: Let varnishd continue running when encountering unused backends in a configuration In-Reply-To: <047.7d6bc09bfbf4bcded35c1d7f831edb6c@varnish-cache.org> References: <047.7d6bc09bfbf4bcded35c1d7f831edb6c@varnish-cache.org> Message-ID: <062.6cc24c9e0d210e1e5aeb3e4fa611fd88@varnish-cache.org> #846: Let varnishd continue running when encountering unused backends in a configuration -------------------------------------+--------------------- Reporter: jhalfmoon | Owner: Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.1.4 Severity: normal | Resolution: fixed Keywords: unused backend exit vcl | -------------------------------------+--------------------- Comment (by dlec): Can't find this parameter in 3.0.3, any hint? Getting a failed compilation because of an unused backend is very annoying. Thanks. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 22 11:08:18 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Apr 2013 11:08:18 -0000 Subject: [Varnish] #1287: Varnish 3.0.3 - segfault in libvarnish.so. In-Reply-To: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> References: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> Message-ID: <059.c4dac85b7ee63d62b8cd2e9e93d1c4c3@varnish-cache.org> #1287: Varnish 3.0.3 - segfault in libvarnish.so. 
------------------------------------+-------------------- Reporter: robroy | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: segfault libvarnish.so | ------------------------------------+-------------------- Comment (by martin): Hi, Unfortunately I can't make any sense of the coredump, as I can't replicate your exact setup. Please follow the procedure tfheen put in this ticket and produce the backtrace output on the server that exhibits this fault. (It would also be interesting to know whether adding the customary jump to error after purging, which is missing from your VCL configuration, makes any difference.) Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 22 11:19:27 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Apr 2013 11:19:27 -0000 Subject: [Varnish] #1295: DNS director crashes when it's not the first director. Message-ID: <044.8509096a3f0b60e148460d5273927d39@varnish-cache.org> #1295: DNS director crashes when it's not the first director. --------------------+-------------------- Reporter: tfheen | Owner: tfheen Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Keywords: --------------------+-------------------- The DNS director crashes when it's not the first director. I'm just reporting this to get a ticket number for the regression test; I already have a fix for the bug. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 22 11:29:42 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Apr 2013 11:29:42 -0000 Subject: [Varnish] #1295: DNS director crashes when it's not the first director. 
In-Reply-To: <044.8509096a3f0b60e148460d5273927d39@varnish-cache.org> References: <044.8509096a3f0b60e148460d5273927d39@varnish-cache.org> Message-ID: <059.f87d05499afccd4022e2ece5bc140714@varnish-cache.org> #1295: DNS director crashes when it's not the first director. --------------------+--------------------- Reporter: tfheen | Owner: tfheen Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: fixed Keywords: | --------------------+--------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: commit e5da6ec90ca3e9917ec65bf077cf0042bd9b9e94 Author: Tollef Fog Heen Date: Mon Apr 22 13:25:43 2013 +0200 Use ndirector, not serial in DNS director. The test case for this doesn't trigger, but the failing VCL (when given in a file with -f) looks like:

{{{
director squid round-robin {
    { .backend = { .host = "127.0.0.1"; .port = "3131"; } }
}
director dnsdir dns {
    .list = { "201.7.184.0"/32; }
}
sub vcl_recv {
    set req.backend = dnsdir;
}
}}}

Fixes #1295 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 22 12:22:57 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Apr 2013 12:22:57 -0000 Subject: [Varnish] #1285: All worker threads can block on the vca_pipe under high load In-Reply-To: <042.145d58b96f09e62e2c8788620df1a218@varnish-cache.org> References: <042.145d58b96f09e62e2c8788620df1a218@varnish-cache.org> Message-ID: <057.8e49d2be3eb6bb226e85f71e1a47d74c@varnish-cache.org> #1285: All worker threads can block on the vca_pipe under high load --------------------+----------------------------------------- Reporter: mark | Owner: Tollef Fog Heen Type: defect | Status: closed Priority: high | Milestone: Component: build | Version: 3.0.3 Severity: major | Resolution: fixed Keywords: | --------------------+----------------------------------------- Changes (by Tollef Fog Heen ): * owner: => Tollef Fog Heen * 
status: new => closed * resolution: => fixed Comment: In [433e86f030648db20a7a8f7d43f20ce9be7581e6]: {{{ #!CommitTicketReference repository="" revision="433e86f030648db20a7a8f7d43f20ce9be7581e6" Set the waiter pipe as non-blocking and record overflows Fixes #1285 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 22 13:56:32 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Apr 2013 13:56:32 -0000 Subject: [Varnish] #1287: Varnish 3.0.3 - segfault in libvarnish.so. In-Reply-To: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> References: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> Message-ID: <059.412ce21d0fc2ebf7b8f88dfab98f4446@varnish-cache.org> #1287: Varnish 3.0.3 - segfault in libvarnish.so. ------------------------------------+-------------------- Reporter: robroy | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: segfault libvarnish.so | ------------------------------------+-------------------- Comment (by bokkepoot): root at lb2:~# gdb /usr/sbin/varnishd /tmp/core- varnishd-11-65534-65534-6986-1366638243 GNU gdb (GDB) 7.0.1-debian Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". For bug reporting instructions, please see: ... Reading symbols from /usr/sbin/varnishd...Reading symbols from /usr/lib/debug/usr/sbin/varnishd...done. (no debugging symbols found)...done. 
[New Thread 7008] [New Thread 7010] [New Thread 6986] [New Thread 6990] [New Thread 6992] [New Thread 6993] [New Thread 6994] [New Thread 6995] [New Thread 6996] [New Thread 6997] [New Thread 6987] [New Thread 7000] [New Thread 7006] [New Thread 7012] [New Thread 7007] [New Thread 6988] [New Thread 7001] [New Thread 7004] [New Thread 7002] [New Thread 7003] [New Thread 6991] [New Thread 6999] [New Thread 6989] [New Thread 6998] warning: Can't read pathname for load map: Input/output error. Error while mapping shared library sections: ./vcl.7H1mXKN6.so: No such file or directory. Reading symbols from /usr/lib/varnish/libvarnish.so...Reading symbols from /usr/lib/debug/usr/lib/varnish/libvarnish.so...done. (no debugging symbols found)...done. Loaded symbols for /usr/lib/varnish/libvarnish.so Reading symbols from /usr/lib/varnish/libvarnishcompat.so...Reading symbols from /usr/lib/debug/usr/lib/varnish/libvarnishcompat.so...done. (no debugging symbols found)...done. Loaded symbols for /usr/lib/varnish/libvarnishcompat.so Reading symbols from /usr/lib/varnish/libvcl.so...Reading symbols from /usr/lib/debug/usr/lib/varnish/libvcl.so...done. (no debugging symbols found)...done. Loaded symbols for /usr/lib/varnish/libvcl.so Reading symbols from /usr/lib/varnish/libvgz.so...Reading symbols from /usr/lib/debug/usr/lib/varnish/libvgz.so...done. (no debugging symbols found)...done. Loaded symbols for /usr/lib/varnish/libvgz.so Reading symbols from /lib/libpcre.so.3...(no debugging symbols found)...done. Loaded symbols for /lib/libpcre.so.3 Reading symbols from /lib/libdl.so.2...(no debugging symbols found)...done. Loaded symbols for /lib/libdl.so.2 Reading symbols from /lib/libnsl.so.1...(no debugging symbols found)...done. Loaded symbols for /lib/libnsl.so.1 Reading symbols from /lib/libm.so.6...(no debugging symbols found)...done. Loaded symbols for /lib/libm.so.6 Reading symbols from /lib/libpthread.so.0...(no debugging symbols found)...done. 
Loaded symbols for /lib/libpthread.so.0 Reading symbols from /lib/libc.so.6...(no debugging symbols found)...done. Loaded symbols for /lib/libc.so.6 Reading symbols from /lib/librt.so.1...(no debugging symbols found)...done. Loaded symbols for /lib/librt.so.1 Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done. Loaded symbols for /lib64/ld-linux-x86-64.so.2 Reading symbols from /lib/libnss_compat.so.2...(no debugging symbols found)...done. Loaded symbols for /lib/libnss_compat.so.2 Reading symbols from /lib/libnss_nis.so.2...(no debugging symbols found)...done. Loaded symbols for /lib/libnss_nis.so.2 Reading symbols from /lib/libnss_files.so.2...(no debugging symbols found)...done. Loaded symbols for /lib/libnss_files.so.2 Symbol file not found for ./vcl.7H1mXKN6.so Reading symbols from /usr/lib/varnish/vmods/libvmod_std.so...Reading symbols from /usr/lib/debug/usr/lib/varnish/vmods/libvmod_std.so...done. (no debugging symbols found)...done. Loaded symbols for /usr/lib/varnish/vmods/libvmod_std.so Core was generated by `/usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -T localhost:6082 -f /etc/va'. Program terminated with signal 11, Segmentation fault. #0 VSB_cat (s=0x7f1634656040, str=0x0) at vsb.c:331 331 vsb.c: No such file or directory. in vsb.c (gdb) bt full #0 VSB_cat (s=0x7f1634656040, str=0x0) at vsb.c:331 __func__ = "VSB_cat" #1 0x0000000000437c63 in VRT_synth_page (sp=0x7f16415e1008, flags=, str=0x0) at cache_vrt.c:409 ap = {{gp_offset = 1096683768, fp_offset = 32534, overflow_arg_area = 0x7f16415e1008, reg_save_area = 0x2}} p = vsb = __func__ = "VRT_synth_page" #2 0x00007f16391f613a in ?? () No symbol table info available. #3 0x00007f163464d0d8 in ?? () No symbol table info available. #4 0x00007f16415e1008 in ?? () No symbol table info available. #5 0x00007f1631ffbac0 in ?? () No symbol table info available. 
#6 0x0000000000436e06 in VCL_error_method (sp=0x7f16415e1008) at ../../include/vcl_returns.h:66 __func__ = "VCL_error_method" #7 0x0000000000417d92 in cnt_error (sp=0x7f16415e1008) at cache_center.c:483 h = 0x7f163464d0d8 date = "Mon, 22 Apr 2013 13:44:03 GMT\000\000\000\370\274\377\061\026\177\000" __func__ = "cnt_error" #8 0x0000000000419dbd in CNT_Session (sp=0x7f16415e1008) at steps.h:46 done = 0 w = 0x7f1631ffbac0 __func__ = "CNT_Session" #9 0x0000000000431d89 in wrk_thread_real (qp=0x7f1641514150, shm_workspace=, sess_workspace=, nhttp=, http_space=, siov=) at cache_pool.c:186 ww = {magic = 1670491599, nobjhead = 0x7f1634613100, nobjcore = 0x7f1634615180, nwaitinglist = 0x7f1634614080, nbusyobj = 0x7f163461c050, nhashpriv = 0x7f16346140a0, stats = {client_conn = 0, client_req = 1, cache_hit = 0, cache_hitpass = 0, cache_miss = 0, fetch_head = 0, fetch_length = 0, fetch_chunked = 0, fetch_eof = 0, fetch_bad = 0, fetch_close = 0, fetch_oldhttp = 0, fetch_zero = 0, fetch_failed = 0, fetch_1xx = 0, fetch_204 = 0, fetch_304 = 0, n_object = 1, n_vampireobject = 0, n_objectcore = 0, n_objecthead = 0, n_waitinglist = 0, n_objoverflow = 0, s_sess = 0, s_req = 0, s_pipe = 0, s_pass = 0, s_fetch = 0, s_hdrbytes = 0, s_bodybytes = 0, sess_closed = 0, sess_pipeline = 0, sess_readahead = 0, sess_linger = 0, sess_herd = 0}, lastused = 1366638243.5577664, wrw = {wfd = 0x0, werr = 0, iov = 0x7f1631fe84c0, siov = 128, niov = 0, liov = 0, cliov = 0, ciov = 128}, cond = {__data = {__lock = 0, __futex = 4, __total_seq = 2, __wakeup_seq = 2, __woken_seq = 2, __mutex = 0x7f164152d3a8, __nwaiters = 0, __broadcast_seq = 0}, __size = "\000\000\000\000\004\000\000\000\002\000\000\000\000\000\000\000\002\000\000\000\000\000\000\000\002\000\000\000\000\000\000\000\250\323RA\026\177\000\000\000\000\000\000\000\000\000", __align = 17179869184}, list = {vtqe_next = 0x7f16327fcac0, vtqe_prev = 0x7f1641514160}, sp = 0x7f16415e1008, vcl = 0x0, wlb = 0x7f1631ff9a60, wlp = 0x7f1631ff9a70, wle = 
0x7f1631ffba60, wlr = 1, sha256ctx = 0x7f1631ffbe40, htc = {{magic = 1041886673, fd = 13, maxbytes = 32768, maxhdr = 8192, ws = 0x7f1631ffbcf8, rxbuf = { b = 0x7f1631fe9ab8 "\n\n501 Method Not Implemented\n\n

Method Not Implemented

\n

get to /index.html not supported.
\n

\n"..., e = 0x7f1631fe9bd0 ""}, pipeline = {b = 0x0, e = 0x0}}}, ws = {{magic = 905626964, overflow = 0, id = 0x460097 "wrk", s = 0x7f1631fe9a50 "X-Varnish: 1037044295", f = 0x7f1631fe9ab8 "\n\n501 Method Not Implemented\n\n

Method Not Implemented

\n

get to /index.html not supported.
\n

\n"..., r = 0x0, e = 0x7f1631ff9a50 ""}}, bereq = 0x7f1631fe95d0, beresp = 0x7f1631fe9150, resp = 0x7f1631fe8cd0, exp = {ttl = -1, grace = -1, keep = -1, age = 0, entered = 0}, storage_hint = 0x0, body_status = BS_NONE, vfp = 0x0, vgz_rx = 0x0, vef_priv = 0x0, fetch_failed = 0, do_stream = 0, do_esi = 0, do_gzip = 0, is_gzip = 0, do_gunzip = 0, is_gunzip = 0, do_close = 0, h_content_length = 0x0, sctx = 0x0, vep = 0x0, gzip_resp = 0, l_crc = 0, crc = 0, connect_timeout = 0, first_byte_timeout = 0, between_bytes_timeout = 0, res_mode = 72, acct_tmp = {first = 0, sess = 0, req = 1, pipe = 0, pass = 1, fetch = 0, hdrbytes = 0, bodybytes = 0}} sha256 = {state = {0, 0, 0, 0, 0, 0, 0, 0}, count = 0, buf = '\000' } stats_clean = 1 __func__ = "wrk_thread_real" #10 0x00007f1641c478ca in start_thread () from /lib/libpthread.so.0 No symbol table info available. #11 0x00007f16419aeb6d in clone () from /lib/libc.so.6 No symbol table info available. #12 0x0000000000000000 in ?? () No symbol table info available. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 22 14:52:30 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Apr 2013 14:52:30 -0000 Subject: [Varnish] #1287: Varnish 3.0.3 - segfault in libvarnish.so. In-Reply-To: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> References: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> Message-ID: <059.258d59c86f9f3fd815b294b4d8df7762@varnish-cache.org> #1287: Varnish 3.0.3 - segfault in libvarnish.so. ------------------------------------+-------------------- Reporter: robroy | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: segfault libvarnish.so | ------------------------------------+-------------------- Comment (by coennie): Thnx @bokkepoot @martin I assume you have enough now? Or do you need the other dump aswell? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Apr 23 15:11:05 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 23 Apr 2013 15:11:05 -0000 Subject: [Varnish] #1083: Persistent Varnish crashes since using bans and lurker In-Reply-To: <049.21c55906d3bd67b40377e9353f5dd9ce@varnish-cache.org> References: <049.21c55906d3bd67b40377e9353f5dd9ce@varnish-cache.org> Message-ID: <064.650ca1bf056eadcd67c5556f4d588ada@varnish-cache.org> #1083: Persistent Varnish crashes since using bans and lurker -------------------------+--------------------- Reporter: rmohrbacher | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.2 Severity: major | Resolution: Keywords: | -------------------------+--------------------- Comment (by numard): Hi Martin, I've had this in production for a few days - not great results, I'm afraid. I tried the 3.0.3-plus branch as well as 'persistent'. Crashes still happen, though not with exactly the same panic output (as expected). From the persistent branch, the panic is: {{{ Last panic at: Mon, 22 Apr 2013 01:40:53 GMT Assert error in smp_appendban(), storage/storage_persistent.c line 98: Condition(4 + sizeof t + 4 + len < left) not true. thread = (cache-main) ident = Linux,3.2.0-40-virtual,x86_64,-spersistent,-smalloc,-hcritbit,epoll Backtrace: 0x4331d5: pan_ic+d5 0x455f05: smp_appendban+e5 0x45603e: smp_newban+4e 0x451ba9: STV_NewBan+29 0x4157d3: BAN_Compile+43 0x431e0a: child_main+10a 0x4491a2: start_child+962 0x4496df: mgt_sigchld+51f 0x7f2c671874d2: _end+7f2c66afbcea 0x7f2c67187b98: _end+7f2c66afc3b0 }}} From 3.0.3-plus, almost identical to the previous one: {{{ Assert error in smp_appendban(), storage_persistent.c line 97: Condition(4 + sizeof t + 4 + len < left) not true. 
thread = (cache-main) ident = Linux,3.2.0-40-virtual,x86_64,-spersistent,-smalloc,-hcritbit,no_waiter Backtrace: 0x432b75: pan_ic+d5 0x451465: smp_appendban+e5 0x45159e: smp_newban+4e 0x44d969: STV_NewBan+29 0x416853: BAN_Compile+43 0x43187a: child_main+ca 0x445ed7: start_child+8c7 0x44641f: mgt_sigchld+51f 0x7fc825ea5312: _end+7fc82581ff62 0x7fc825ea59d8: _end+7fc825820628 }}} I'm building on an up-to-date Precise 12.04 LTS, with nothing special: {{{ ./autogen.sh ./configure --exec-prefix=/usr make make install ldconfig }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Apr 24 06:53:12 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 24 Apr 2013 06:53:12 -0000 Subject: [Varnish] #1287: Varnish 3.0.3 - segfault in libvarnish.so. In-Reply-To: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> References: <044.f166c5ec2ec8ea0bd77090495e866ce4@varnish-cache.org> Message-ID: <059.67508a9323289c462ec476ca75aa9a11@varnish-cache.org> #1287: Varnish 3.0.3 - segfault in libvarnish.so. ------------------------------------+-------------------- Reporter: robroy | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: segfault libvarnish.so | ------------------------------------+-------------------- Comment (by coennie): I've replaced the error part with the minimal one, and now it WORKS as it should! 
sub vcl_error { return(deliver); } -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Apr 25 12:17:59 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 25 Apr 2013 12:17:59 -0000 Subject: [Varnish] #1003: Fix libedit (libreadline) support for FreeBSD In-Reply-To: <044.97ea0cb79c8172ef6b88dc23843a837c@varnish-cache.org> References: <044.97ea0cb79c8172ef6b88dc23843a837c@varnish-cache.org> Message-ID: <059.cf7a49a35b82cd78c4f0926d0152d317@varnish-cache.org> #1003: Fix libedit (libreadline) support for FreeBSD -------------------------+--------------------- Reporter: anders | Owner: tfheen Type: enhancement | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.1 Severity: normal | Resolution: fixed Keywords: | -------------------------+--------------------- Changes (by tfheen): * status: reopened => closed * resolution: => fixed Comment: This should be fixed in the 3.0 branch now, so closing. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 29 10:23:47 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Apr 2013 10:23:47 -0000 Subject: [Varnish] #1262: raw values are not updating when using varnishstat -1, while varnishstat continues to work In-Reply-To: <042.69ee90ee822d61989104b3ac3fdfc0ed@varnish-cache.org> References: <042.69ee90ee822d61989104b3ac3fdfc0ed@varnish-cache.org> Message-ID: <057.4617dac3a64097d9174d55407d2fcf1a@varnish-cache.org> #1262: raw values are not updating when using varnishstat -1, while varnishstat continues to work --------------------+---------------------- Reporter: yves | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: invalid Keywords: | --------------------+---------------------- Changes (by tfheen): * status: new => closed * resolution: => invalid Comment: We had something similar happen with this 
recently, and I suspect it's due to OSX changing the host name to what's given in DNS, so if you get a different IP from DHCP, this will happen. I don't think it's reasonable to fix this in Varnish, so closing as invalid. -- Ticket URL: Varnish The Varnish HTTP Accelerator From wxz19861013 at gmail.com Mon Apr 1 07:04:28 2013 From: wxz19861013 at gmail.com (Xianzhe Wang) Date: Mon, 01 Apr 2013 07:04:28 -0000 Subject: varnish-3.0.2-streaming crash issue (revision cd0ccbf) Message-ID: I use varnish-3.0.2-streaming for my application. I found that the object with "Cache-Control:max-age=31536000" will miss in couple days sometimes. And then I notice that varnish child process crash sometimes. This is panic log: panic.show 200 Last panic at: Sun, 31 Mar 2013 22:16:07 GMT Assert error in Tcheck(), cache.h line 1004: Condition(t.b <= t.e) not true. thread = (cache-worker) ident = Linux,2.6.32.59-0.7-xen,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42ffb3: pan_ic+d3 0x42cd35: http_IsHdr+65 0x42d311: http_FilterFields+3e1 0x43417c: RES_BuildHttp+9c 0x417248: cnt_prepresp+218 0x41998d: CNT_Session+4ad 0x4327c3: wrk_do_cnt_sess+93 0x431a0a: wrk_thread_real+3ea 0x7f0a266a56a6: _end+7f0a2602938e 0x7f0a26414f7d: _end+7f0a25d98c65 sp = 0x7f0a1369a008 { fd = 260, id = 260, xid = 752636266, client = xx.xxx.xxx.xxx xxxx, step = STP_PREPRESP, handling = hit_for_pass, err_code = 200, err_reason = (null), restarts = 0, esi_level = 0 flags = bodystatus = 3 ws = 0x7f0a1369a080 { id = "sess", {s,f,r,e} = {0x7f0a1369c628,+608,(nil),+65536}, }, http[req] = { ws = 0x7f0a1369a080[sess] "GET", " http://g.example.com/fcg-bin/cgi_emotion_list.fcg?uin=xxxx&loginUin=x&s=xxxx&num=xx&noflower=xx&g_tk=xxx ", "HTTP/1.1", "Accept: */*", "Referer: http://user.example.com/xxxxx", "Accept-Language: Zh-cn", "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; SE 2.X MetaSr 1.0", "Host: 
g.example.com", "Connection: close", "Cache-Control: no-cache", "Cookie: randomSeed=774464;", "X-Forwarded-For: xx.xxx.xxx.xxx", }, worker = 0x7f0a037e5a90 { ws = 0x7f0a037e5ce0 { id = "wrk", {s,f,r,e} = {0x7f0a037d3a20,0x7f0a037d3a20,(nil),+65536}, }, http[resp] = { ws = 0x7f0a037e5ce0[wrk] "HTTP/1.1", "OK", "Server: QZHTTP-2.35.2", "Via: 1.1 localhost", "X-Accelerate: 2.20", "Vary: Accept-Encoding", }, }, vcl = { srcname = { "input", "Default", }, }, obj = 0x7f0a0ddf6000 { xid = 752636266, ws = 0x7f0a0ddf6018 { id = "obj", {s,f,r,e} = {0x7f0a0ddf6200,+240,(nil),+272}, }, http[obj] = { ws = 0x7f0a0ddf6018[obj] "HTTP/1.1", "OK", "Date: Sun, 31 Mar 2013 22:16:07 GMT", "Server: QZHTTP-2.35.2", "Via: 1.1 localhost", "X-Accelerate: 2.20", "Vary: Accept-Encoding", "Content-Type: text/html", "Content-Length: 211", }, len = 211, store = { 211 { 76 69 73 69 74 43 6f 75 6e 74 43 61 6c 6c 42 61 |visitCountCallBa| 63 6b 28 7b 22 72 65 74 63 6f 64 65 22 3a 31 2c |ck({"retcode":1,| 22 76 69 73 69 74 63 6f 75 6e 74 22 3a 30 2c 22 |"visitcount":0,"| 64 61 79 76 69 73 69 74 22 3a 30 2c 22 73 70 61 |dayvisit":0,"spa| [147 more] }, }, }, }, This is my starup order: ./varnishd -f varnish-stream.vcl -s malloc,16G -T 127.0.0.1:2000 -a 0.0.0.0:8080 -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p session_linger=100 -p thread_pools=2 -p http_req_hdr_len=32768 -p http_resp_hdr_len=32768 -p http_max_hdr=256 This is my varnish-stream.vcl : # This is a basic VCL configuration file for varnish. See the vcl(7) # man page for details on VCL syntax and semantics. # # Default backend definition. Set this to point to your content # server. 
# probe healthcheck { .url = "/"; .interval = 30s; .timeout = 0.5 s; .window = 8; .threshold = 3; .initial = 3; } backend proxy1 { .host = "x.x.x.x1"; .port = "8080"; .probe = healthcheck; } backend proxy2 { .host = "x.x.x.x2"; .port = "8080"; .probe = healthcheck; } backend proxy3 { .host = "x.x.x.x3"; .port = "8080"; .probe = healthcheck; } backend proxy3 { .host = "x.x.x.x4"; .port = "8080"; .probe = healthcheck; } director proxy client { { .backend = proxy1; .weight = 1; } { .backend = proxy2; .weight = 1; } { .backend = proxy3; .weight = 1; } } acl refresh { "x.x.x.x5"; } # Below is a commented-out copy of the default VCL logic. If you # redefine any of these subroutines, the built-in logic will be # appended to your code. sub vcl_recv { if(req.http.X-Real-IP){ set client.identity = req.http.X-Real-IP; }else if (req.http.referer) { set client.identity = req.http.referer; }else{ set client.identity = req.url; } # set req.backend = proxy; if(client.ip == "x.x.x.x6"){ set req.backend = proxy; }else{ set req.backend = proxy4; if (client.ip ~ refresh) { set req.hash_always_miss = true; } } #set grace if (req.backend.healthy) { set req.grace = 30s; } else { set req.grace = 30m; } if (req.restarts == 0) { if (req.http.x-forwarded-for) { set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip; } else { set req.http.X-Forwarded-For = client.ip; } } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { # /* Non-RFC2616 or CONNECT which is weird. 
*/ return (pipe); } if (req.http.x-pipe && req.restarts > 0) { remove req.http.x-pipe; return (pipe); } if(req.request != "GET" && req.request != "HEAD") { # /* We only deal with GET and HEAD by default */ return (pass); } if (req.http.Cache-Control ~ "no-cache") { return (pass); } if (req.http.Accept-Encoding) { if (req.url ~ "\.(webp|jpeg|png|mid|mp3|gif|sql|jpg|nth|thm|utz|mtf|sdt|hme|tsk|zip|rar|sx|pxl|cab|mbm|app|exe|apk)$") { # No point in compressing these remove req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } elsif (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { # unknown algorithm remove req.http.Accept-Encoding; } } if (req.http.Authorization) { return (pass); } return (lookup); } sub vcl_pipe { set bereq.http.connection = "close"; return (pipe); } sub vcl_pass { return (pass); } sub vcl_hash { if (req.url ~ ".(jpeg|jpg|png|gif|ico|js|css)\?.*") { hash_data(regsub(req.url, "\?[^\?]*$", "")); } else{ hash_data(req.url); } if (req.http.host) { hash_data(req.http.host); } else { hash_data(server.ip); } return (hash); } sub vcl_hit { return (deliver); } sub vcl_miss { return (fetch); } sub vcl_fetch { set beresp.grace = 30m; set beresp.do_stream = true; if (beresp.http.Content-Length && beresp.http.Content-Length ~ "[0-9]{8,}") { set req.http.x-pipe = "1"; return (restart); } if (beresp.http.Pragma ~ "no-cache" || beresp.http.Cache-Control ~ "no-cache" || beresp.http.Cache-Control ~ "private"){ return (hit_for_pass); } if (beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") { set beresp.ttl = 120 s; return (hit_for_pass); } return (deliver); } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT from varnish"; set resp.http.X-Hits = obj.hits; } else { set resp.http.X-Cache = "MISS from varnish"; } remove resp.http.Via; remove resp.http.X-Varnish; return (deliver); } sub vcl_error { set obj.http.Content-Type = 
"text/html; charset=utf-8"; set obj.http.Retry-After = "5"; synthetic {" "} + obj.status + " " + obj.response + {"

Error "} + obj.status + " " + obj.response + {"

"} + obj.response + {"

Guru Meditation:

XID: "} + req.xid + {"


Varnish cache server

varnish

"}; return (deliver); } sub vcl_init { return (ok); } sub vcl_fini { return (ok); } This is varnishstat in 0401: ./varnishstat -1 client_conn 3769680 342.17 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 3881633 352.33 Client requests received cache_hit 26574 2.41 Cache hits cache_hitpass 6072 0.55 Cache hits for pass cache_miss 155003 14.07 Cache misses backend_conn 304379 27.63 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 41 0.00 Backend conn. failures backend_reuse 3551465 322.36 Backend conn. reuses backend_toolate 1104 0.10 Backend conn. was closed backend_recycle 3552586 322.46 Backend conn. recycles backend_retry 780 0.07 Backend conn. retry fetch_head 0 0.00 Fetch head fetch_length 2308273 209.52 Fetch with Length fetch_chunked 1285891 116.72 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_streamed 3620982 328.67 Fetch streamed fetch_bad 0 0.00 Fetch had bad headers fetch_close 26761 2.43 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 62 0.01 Fetch failed fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 49 0.00 Fetch no body (204) fetch_304 10 0.00 Fetch no body (304) n_sess_mem 3103 . N struct sess_mem n_sess 317 . N struct sess n_object 16518 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 16878 . N struct objectcore n_objecthead 13535 . N struct objecthead n_waitinglist 3049 . N struct waitinglist n_vbc 68 . N struct vbc n_wrk 400 . N worker threads n_wrk_create 618 0.06 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_lqueue 0 0.00 work request queue length n_wrk_queued 2081 0.19 N queued work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 3 . N backends n_expired 138436 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_moved 21504 . 
N LRU moved objects losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 25702 2.33 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 3769636 342.17 Total Sessions s_req 3881633 352.33 Total Requests s_pipe 233231 21.17 Total pipe s_pass 3466827 314.68 Total pass s_fetch 0 0.00 Total fetch s_stream 3620813 328.66 Total streamed requests s_hdrbytes 986265856 89522.18 Total header bytes s_bodybytes 28556419 2592.03 Total body bytes sess_closed 3657484 331.99 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 262709 23.85 Session Linger sess_herd 262383 23.82 Session herd shm_records 304070226 27600.09 SHM records shm_writes 26571905 2411.90 SHM writes shm_flushes 2 0.00 SHM flushes due to overflow shm_cont 49165 4.46 SHM MTX contention shm_cycles 130 0.01 SHM cycles through buffer sms_nreq 846 0.08 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 384930 . SMS bytes allocated sms_bfree 384930 . SMS bytes freed backend_req 3621062 328.68 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_ban 1 . 
N total active bans n_ban_add 1 0.00 N new bans added n_ban_retire 0 0.00 N old bans deleted n_ban_obj_test 0 0.00 N objects tested n_ban_re_test 0 0.00 N regexps tested against n_ban_dups 0 0.00 N duplicate bans removed hcb_nolock 187649 17.03 HCB Lookups without lock hcb_lock 80799 7.33 HCB Lookups with lock hcb_insert 80799 7.33 HCB Inserts esi_errors 0 0.00 ESI parse errors (unlock) esi_warnings 0 0.00 ESI parse warnings (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 11017 1.00 Client uptime dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache vmods 0 . Loaded VMODs n_gzip 0 0.00 Gzip operations n_gunzip 271725 24.66 Gunzip operations LCK.sms.creat 4 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 8550 0.78 Lock Operations LCK.sms.colls 0 0.00 Collisions LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.smp.colls 0 0.00 Collisions LCK.sma.creat 8 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 38262423 3473.03 Lock Operations LCK.sma.colls 0 0.00 Collisions LCK.smf.creat 0 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 0 0.00 Lock Operations LCK.smf.colls 0 0.00 Collisions LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hsl.colls 0 0.00 Collisions LCK.hcb.creat 4 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 418452 37.98 Lock Operations LCK.hcb.colls 0 0.00 Collisions LCK.hcl.creat 0 0.00 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 0 0.00 Lock Operations LCK.hcl.colls 0 0.00 Collisions LCK.vcl.creat 4 0.00 Created locks LCK.vcl.destroy 0 0.00 Destroyed locks LCK.vcl.locks 9462038 858.86 Lock Operations LCK.vcl.colls 0 
0.00 Collisions LCK.stat.creat 4 0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 9840 0.89 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 4 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 28114307 2551.90 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 4 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 212775 19.31 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 4 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 1209 0.11 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 8 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 38953507 3535.76 Lock Operations LCK.wq.colls 0 0.00 Collisions LCK.objhdr.creat 270862 24.59 Created locks LCK.objhdr.destroy 141466 12.84 Destroyed locks LCK.objhdr.locks 4746962 430.88 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 4 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 856264 77.72 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 8 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 480694 43.63 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 4 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 22068 2.00 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 4 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 857486 77.83 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 4 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 6642 0.60 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 4 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 1723928 156.48 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 12 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 20863568 1893.76 Lock Operations 
LCK.backend.colls 0 0.00 Collisions LCK.busyobj.creat 2153 0.20 Created locks LCK.busyobj.destroy 658 0.06 Destroyed locks LCK.busyobj.locks 51597325 4683.43 Lock Operations LCK.busyobj.colls 0 0.00 Collisions SMA.s0.c_req 240044 21.79 Allocator requests SMA.s0.c_fail 0 0.00 Allocator failures SMA.s0.c_bytes 14874337772 1350125.97 Bytes allocated SMA.s0.c_freed 14770447168 1340695.94 Bytes freed SMA.s0.g_alloc 30924 . Allocations outstanding SMA.s0.g_bytes 103890604 . Bytes outstanding SMA.s0.g_space 17075978580 . Bytes available SMA.Transient.c_req 6620156 600.90 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 173497152178 15748130.36 Bytes allocated SMA.Transient.c_freed 173487087827 15747216.83 Bytes freed SMA.Transient.g_alloc 1383 . Allocations outstanding SMA.Transient.g_bytes 10064351 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.proxy1(xx.xx.xx.xx1,,8080).vcls 4 . VCL references VBE.proxy1(xx.xx.xx.xx1,,8080).happy18446744073709551615 . Happy health probes VBE.proxy2(xx.xx.xx.xx2,,8080).vcls 4 . VCL references VBE.proxy2(xx.xx.xx.xx2,,8080).happy18446744073709551615 . Happy health probes VBE.proxy3(xx.xx.xx.xx3,,8080).vcls 4 . VCL references VBE.proxy3(xx.xx.xx.xx3,,8080).happy18446744073709551615 . Happy health probes VBE.proxy4(xx.xx.xx.xx4,,8081).vcls 4 . VCL references VBE.proxy4(xx.xx.xx.xx4,,8081).happy18446744073709551615 . 
Happy health probes This is my parameter: param.show 200 acceptor_sleep_decay 0.900000 [] acceptor_sleep_incr 0.001000 [s] acceptor_sleep_max 0.050000 [s] auto_restart on [bool] ban_dups on [bool] ban_lurker_sleep 0.010000 [s] between_bytes_timeout 60.000000 [s] cc_command "exec gcc -std=gnu99 -pthread -fpic -shared -Wl,-x -o %o %s" cli_buffer 8192 [bytes] cli_timeout 10 [seconds] clock_skew 10 [s] connect_timeout 0.700000 [s] critbit_cooloff 180.000000 [s] default_grace 10.000000 [seconds] default_keep 0.000000 [seconds] default_ttl 120.000000 [seconds] diag_bitmap 0x0 [bitmap] esi_syntax 0 [bitmap] expiry_sleep 1.000000 [seconds] fetch_chunksize 128 [kilobytes] fetch_maxchunksize 262144 [kilobytes] first_byte_timeout 60.000000 [s] group nobody (65533) gzip_level 6 [] gzip_memlevel 8 [] gzip_stack_buffer 32768 [Bytes] gzip_tmp_space 0 [] gzip_window 15 [] http_gzip_support on [bool] http_max_hdr 256 [header lines] http_range_support on [bool] http_req_hdr_len 32768 [bytes] http_req_size 32768 [bytes] http_resp_hdr_len 32768 [bytes] http_resp_size 32768 [bytes] listen_address 0.0.0.0:8080 listen_depth 1024 [connections] log_hashstring on [bool] log_local_address off [bool] lru_interval 2 [seconds] max_esi_depth 5 [levels] max_restarts 4 [restarts] nuke_limit 50 [allocations] ping_interval 3 [seconds] pipe_timeout 60 [seconds] prefer_ipv6 off [bool] queue_max 100 [%] rush_exponent 3 [requests per request] saintmode_threshold 10 [objects] send_timeout 60 [seconds] sess_timeout 5 [seconds] sess_workspace 65536 [bytes] session_linger 100 [ms] session_max 100000 [sessions] shm_reclen 255 [bytes] shm_workspace 8192 [bytes] shortlived 10.000000 [s] stream_maxchunksize 256 [kilobytes] stream_tokens 10 [] syslog_cli_traffic on [bool] thread_pool_add_delay 2 [milliseconds] thread_pool_add_threshold 2 [requests] thread_pool_fail_delay 200 [milliseconds] thread_pool_max 4000 [threads] thread_pool_min 200 [threads] thread_pool_purge_delay 1000 [milliseconds] thread_pool_stack 
unlimited [bytes] thread_pool_timeout 300 [seconds] thread_pool_workspace 65536 [bytes] thread_pools 2 [pools] thread_stats_rate 10 [requests] user nobody (65534) vcc_err_unref on [bool] vcl_dir /opt/varnish-3.0.2-streaming/etc/varnish vcl_trace off [bool] vmod_dir /opt/varnish-3.0.2-streaming/lib/varnish/vmods waiter default (epoll, poll) -------------- next part -------------- An HTML attachment was scrubbed... URL: From wxz19861013 at gmail.com Mon Apr 1 07:08:08 2013 From: wxz19861013 at gmail.com (Xianzhe Wang) Date: Mon, 01 Apr 2013 07:08:08 -0000 Subject: varnish-3.0.2-streaming crash issue (revision cd0ccbf) Message-ID:
0.00 Collisions LCK.stat.creat 4 0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 9840 0.89 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 4 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 28114307 2551.90 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 4 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 212775 19.31 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 4 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 1209 0.11 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 8 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 38953507 3535.76 Lock Operations LCK.wq.colls 0 0.00 Collisions LCK.objhdr.creat 270862 24.59 Created locks LCK.objhdr.destroy 141466 12.84 Destroyed locks LCK.objhdr.locks 4746962 430.88 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 4 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 856264 77.72 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 8 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 480694 43.63 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 4 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 22068 2.00 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 4 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 857486 77.83 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 4 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 6642 0.60 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 4 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 1723928 156.48 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 12 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 20863568 1893.76 Lock Operations 
LCK.backend.colls 0 0.00 Collisions LCK.busyobj.creat 2153 0.20 Created locks LCK.busyobj.destroy 658 0.06 Destroyed locks LCK.busyobj.locks 51597325 4683.43 Lock Operations LCK.busyobj.colls 0 0.00 Collisions SMA.s0.c_req 240044 21.79 Allocator requests SMA.s0.c_fail 0 0.00 Allocator failures SMA.s0.c_bytes 14874337772 1350125.97 Bytes allocated SMA.s0.c_freed 14770447168 1340695.94 Bytes freed SMA.s0.g_alloc 30924 . Allocations outstanding SMA.s0.g_bytes 103890604 . Bytes outstanding SMA.s0.g_space 17075978580 . Bytes available SMA.Transient.c_req 6620156 600.90 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 173497152178 15748130.36 Bytes allocated SMA.Transient.c_freed 173487087827 15747216.83 Bytes freed SMA.Transient.g_alloc 1383 . Allocations outstanding SMA.Transient.g_bytes 10064351 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.proxy1(xx.xx.xx.xx1,,8080).vcls 4 . VCL references VBE.proxy1(xx.xx.xx.xx1,,8080).happy18446744073709551615 . Happy health probes VBE.proxy2(xx.xx.xx.xx2,,8080).vcls 4 . VCL references VBE.proxy2(xx.xx.xx.xx2,,8080).happy18446744073709551615 . Happy health probes VBE.proxy3(xx.xx.xx.xx3,,8080).vcls 4 . VCL references VBE.proxy3(xx.xx.xx.xx3,,8080).happy18446744073709551615 . Happy health probes VBE.proxy4(xx.xx.xx.xx4,,8081).vcls 4 . VCL references VBE.proxy4(xx.xx.xx.xx4,,8081).happy18446744073709551615 . 
Happy health probes This is my parameter: param.show 200 acceptor_sleep_decay 0.900000 [] acceptor_sleep_incr 0.001000 [s] acceptor_sleep_max 0.050000 [s] auto_restart on [bool] ban_dups on [bool] ban_lurker_sleep 0.010000 [s] between_bytes_timeout 60.000000 [s] cc_command "exec gcc -std=gnu99 -pthread -fpic -shared -Wl,-x -o %o %s" cli_buffer 8192 [bytes] cli_timeout 10 [seconds] clock_skew 10 [s] connect_timeout 0.700000 [s] critbit_cooloff 180.000000 [s] default_grace 10.000000 [seconds] default_keep 0.000000 [seconds] default_ttl 120.000000 [seconds] diag_bitmap 0x0 [bitmap] esi_syntax 0 [bitmap] expiry_sleep 1.000000 [seconds] fetch_chunksize 128 [kilobytes] fetch_maxchunksize 262144 [kilobytes] first_byte_timeout 60.000000 [s] group nobody (65533) gzip_level 6 [] gzip_memlevel 8 [] gzip_stack_buffer 32768 [Bytes] gzip_tmp_space 0 [] gzip_window 15 [] http_gzip_support on [bool] http_max_hdr 256 [header lines] http_range_support on [bool] http_req_hdr_len 32768 [bytes] http_req_size 32768 [bytes] http_resp_hdr_len 32768 [bytes] http_resp_size 32768 [bytes] listen_address 0.0.0.0:8080 listen_depth 1024 [connections] log_hashstring on [bool] log_local_address off [bool] lru_interval 2 [seconds] max_esi_depth 5 [levels] max_restarts 4 [restarts] nuke_limit 50 [allocations] ping_interval 3 [seconds] pipe_timeout 60 [seconds] prefer_ipv6 off [bool] queue_max 100 [%] rush_exponent 3 [requests per request] saintmode_threshold 10 [objects] send_timeout 60 [seconds] sess_timeout 5 [seconds] sess_workspace 65536 [bytes] session_linger 100 [ms] session_max 100000 [sessions] shm_reclen 255 [bytes] shm_workspace 8192 [bytes] shortlived 10.000000 [s] stream_maxchunksize 256 [kilobytes] stream_tokens 10 [] syslog_cli_traffic on [bool] thread_pool_add_delay 2 [milliseconds] thread_pool_add_threshold 2 [requests] thread_pool_fail_delay 200 [milliseconds] thread_pool_max 4000 [threads] thread_pool_min 200 [threads] thread_pool_purge_delay 1000 [milliseconds] thread_pool_stack 
From wxz19861013 at gmail.com Tue Apr 2 12:42:07 2013
From: wxz19861013 at gmail.com (Xianzhe Wang)
Date: Tue, 02 Apr 2013 12:42:07 -0000
Subject: varnish-3.0.2-streaming crash issue (revision cd0ccbf)
Message-ID: 

I use varnish-3.0.2-streaming for my application. I found that objects with "Cache-Control: max-age=31536000" sometimes go missing from the cache after a couple of days, and I have noticed that the varnish child process crashes occasionally. This is the panic log:

panic.show 200
Last panic at: Sun, 31 Mar 2013 22:16:07 GMT
Assert error in Tcheck(), cache.h line 1004:
  Condition(t.b <= t.e) not true.
thread = (cache-worker)
ident = Linux,2.6.32.59-0.7-xen,x86_64,-smalloc,-smalloc,-hcritbit,epoll
Backtrace:
  0x42ffb3: pan_ic+d3
  0x42cd35: http_IsHdr+65
  0x42d311: http_FilterFields+3e1
  0x43417c: RES_BuildHttp+9c
  0x417248: cnt_prepresp+218
  0x41998d: CNT_Session+4ad
  0x4327c3: wrk_do_cnt_sess+93
  0x431a0a: wrk_thread_real+3ea
  0x7f0a266a56a6: _end+7f0a2602938e
  0x7f0a26414f7d: _end+7f0a25d98c65
sp = 0x7f0a1369a008 {
  fd = 260, id = 260, xid = 752636266, client = xx.xxx.xxx.xxx xxxx,
  step = STP_PREPRESP, handling = hit_for_pass,
  err_code = 200, err_reason = (null),
  restarts = 0, esi_level = 0
  flags =
  bodystatus = 3
  ws = 0x7f0a1369a080 {
    id = "sess",
    {s,f,r,e} = {0x7f0a1369c628,+608,(nil),+65536},
  },
  http[req] = {
    ws = 0x7f0a1369a080[sess]
    "GET",
    "http://g.example.com/fcg-bin/cgi_emotion_list.fcg?uin=xxxx&loginUin=x&s=xxxx&num=xx&noflower=xx&g_tk=xxx",
    "HTTP/1.1",
    "Accept: */*",
    "Referer: http://user.example.com/xxxxx",
    "Accept-Language: Zh-cn",
    "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; SE 2.X MetaSr 1.0",
    "Host: g.example.com",
    "Connection: close",
    "Cache-Control: no-cache",
    "Cookie: randomSeed=774464;",
    "X-Forwarded-For: xx.xxx.xxx.xxx",
  },
  worker = 0x7f0a037e5a90 {
    ws = 0x7f0a037e5ce0 {
      id = "wrk",
      {s,f,r,e} = {0x7f0a037d3a20,0x7f0a037d3a20,(nil),+65536},
    },
    http[resp] = {
      ws = 0x7f0a037e5ce0[wrk]
      "HTTP/1.1",
      "OK",
      "Server: QZHTTP-2.35.2",
      "Via: 1.1 localhost",
      "X-Accelerate: 2.20",
      "Vary: Accept-Encoding",
    },
  },
  vcl = {
    srcname = {
      "input",
      "Default",
    },
  },
  obj = 0x7f0a0ddf6000 {
    xid = 752636266,
    ws = 0x7f0a0ddf6018 {
      id = "obj",
      {s,f,r,e} = {0x7f0a0ddf6200,+240,(nil),+272},
    },
    http[obj] = {
      ws = 0x7f0a0ddf6018[obj]
      "HTTP/1.1",
      "OK",
      "Date: Sun, 31 Mar 2013 22:16:07 GMT",
      "Server: QZHTTP-2.35.2",
      "Via: 1.1 localhost",
      "X-Accelerate: 2.20",
      "Vary: Accept-Encoding",
      "Content-Type: text/html",
      "Content-Length: 211",
    },
    len = 211,
    store = {
      211 {
        76 69 73 69 74 43 6f 75 6e 74 43 61 6c 6c 42 61 |visitCountCallBa|
        63 6b 28 7b 22 72 65 74 63 6f 64 65 22 3a 31 2c |ck({"retcode":1,|
        22 76 69 73 69 74 63 6f 75 6e 74 22 3a 30 2c 22 |"visitcount":0,"|
        64 61 79 76 69 73 69 74 22 3a 30 2c 22 73 70 61 |dayvisit":0,"spa|
        [147 more]
      },
    },
  },
},

This is my startup order:

./varnishd -f varnish-stream.vcl -s malloc,16G -T 127.0.0.1:2000 -a 0.0.0.0:8080 \
    -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 \
    -p session_linger=100 -p thread_pools=2 -p http_req_hdr_len=32768 \
    -p http_resp_hdr_len=32768 -p http_max_hdr=256

This is my varnish-stream.vcl:

# This is a basic VCL configuration file for varnish. See the vcl(7)
# man page for details on VCL syntax and semantics.
#
# Default backend definition. Set this to point to your content
# server.
#
probe healthcheck {
    .url = "/";
    .interval = 30s;
    .timeout = 0.5 s;
    .window = 8;
    .threshold = 3;
    .initial = 3;
}
backend proxy1 { .host = "x.x.x.x1"; .port = "8080"; .probe = healthcheck; }
backend proxy2 { .host = "x.x.x.x2"; .port = "8080"; .probe = healthcheck; }
backend proxy3 { .host = "x.x.x.x3"; .port = "8080"; .probe = healthcheck; }
backend proxy4 { .host = "x.x.x.x4"; .port = "8080"; .probe = healthcheck; }
director proxy client {
    { .backend = proxy1; .weight = 1; }
    { .backend = proxy2; .weight = 1; }
    { .backend = proxy3; .weight = 1; }
}
acl refresh {
    "x.x.x.x5";
}
# Below is a commented-out copy of the default VCL logic. If you
# redefine any of these subroutines, the built-in logic will be
# appended to your code.
sub vcl_recv {
    if (req.http.X-Real-IP) {
        set client.identity = req.http.X-Real-IP;
    } else if (req.http.referer) {
        set client.identity = req.http.referer;
    } else {
        set client.identity = req.url;
    }
    # set req.backend = proxy;
    if (client.ip == "x.x.x.x6") {
        set req.backend = proxy;
    } else {
        set req.backend = proxy4;
        if (client.ip ~ refresh) {
            set req.hash_always_miss = true;
        }
    }
    # set grace
    if (req.backend.healthy) {
        set req.grace = 30s;
    } else {
        set req.grace = 30m;
    }
    if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }
    if (req.request != "GET" && req.request != "HEAD" &&
        req.request != "PUT" && req.request != "POST" &&
        req.request != "TRACE" && req.request != "OPTIONS" &&
        req.request != "DELETE") {
        # /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }
    if (req.http.x-pipe && req.restarts > 0) {
        remove req.http.x-pipe;
        return (pipe);
    }
    if (req.request != "GET" && req.request != "HEAD") {
        # /* We only deal with GET and HEAD by default */
        return (pass);
    }
    if (req.http.Cache-Control ~ "no-cache") {
        return (pass);
    }
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(webp|jpeg|png|mid|mp3|gif|sql|jpg|nth|thm|utz|mtf|sdt|hme|tsk|zip|rar|sx|pxl|cab|mbm|app|exe|apk)$") {
            # No point in compressing these
            remove req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithm
            remove req.http.Accept-Encoding;
        }
    }
    if (req.http.Authorization) {
        return (pass);
    }
    return (lookup);
}
sub vcl_pipe {
    set bereq.http.connection = "close";
    return (pipe);
}
sub vcl_pass {
    return (pass);
}
sub vcl_hash {
    if (req.url ~ ".(jpeg|jpg|png|gif|ico|js|css)\?.*") {
        hash_data(regsub(req.url, "\?[^\?]*$", ""));
    } else {
        hash_data(req.url);
    }
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (hash);
}
sub vcl_hit {
    return (deliver);
}
sub vcl_miss {
    return (fetch);
}
sub vcl_fetch {
    set beresp.grace = 30m;
    set beresp.do_stream = true;
    if (beresp.http.Content-Length && beresp.http.Content-Length ~ "[0-9]{8,}") {
        set req.http.x-pipe = "1";
        return (restart);
    }
    if (beresp.http.Pragma ~ "no-cache" ||
        beresp.http.Cache-Control ~ "no-cache" ||
        beresp.http.Cache-Control ~ "private") {
        return (hit_for_pass);
    }
    if (beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
        set beresp.ttl = 120 s;
        return (hit_for_pass);
    }
    return (deliver);
}
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT from varnish";
        set resp.http.X-Hits = obj.hits;
    } else {
        set resp.http.X-Cache = "MISS from varnish";
    }
    remove resp.http.Via;
    remove resp.http.X-Varnish;
    return (deliver);
}
sub vcl_error {
    set obj.http.Content-Type = "text/html; charset=utf-8";
    set obj.http.Retry-After = "5";
    synthetic {" "} + obj.status + " " + obj.response + {"

Error "} + obj.status + " " + obj.response + {"

"} + obj.response + {"

Guru Meditation:

XID: "} + req.xid + {"


Varnish cache server

varnish

"};
    return (deliver);
}
sub vcl_init {
    return (ok);
}
sub vcl_fini {
    return (ok);
}

This is the varnishstat output from 04-01:

./varnishstat -1
client_conn 3769680 342.17 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 3881633 352.33 Client requests received cache_hit 26574 2.41 Cache hits cache_hitpass 6072 0.55 Cache hits for pass cache_miss 155003 14.07 Cache misses backend_conn 304379 27.63 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 41 0.00 Backend conn. failures backend_reuse 3551465 322.36 Backend conn. reuses backend_toolate 1104 0.10 Backend conn. was closed backend_recycle 3552586 322.46 Backend conn. recycles backend_retry 780 0.07 Backend conn. retry fetch_head 0 0.00 Fetch head fetch_length 2308273 209.52 Fetch with Length fetch_chunked 1285891 116.72 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_streamed 3620982 328.67 Fetch streamed fetch_bad 0 0.00 Fetch had bad headers fetch_close 26761 2.43 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 62 0.01 Fetch failed fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 49 0.00 Fetch no body (204) fetch_304 10 0.00 Fetch no body (304) n_sess_mem 3103 . N struct sess_mem n_sess 317 . N struct sess n_object 16518 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 16878 . N struct objectcore n_objecthead 13535 . N struct objecthead n_waitinglist 3049 . N struct waitinglist n_vbc 68 . N struct vbc n_wrk 400 . N worker threads n_wrk_create 618 0.06 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_lqueue 0 0.00 work request queue length n_wrk_queued 2081 0.19 N queued work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 3 . N backends n_expired 138436 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_moved 21504 .
N LRU moved objects losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 25702 2.33 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 3769636 342.17 Total Sessions s_req 3881633 352.33 Total Requests s_pipe 233231 21.17 Total pipe s_pass 3466827 314.68 Total pass s_fetch 0 0.00 Total fetch s_stream 3620813 328.66 Total streamed requests s_hdrbytes 986265856 89522.18 Total header bytes s_bodybytes 28556419 2592.03 Total body bytes sess_closed 3657484 331.99 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 262709 23.85 Session Linger sess_herd 262383 23.82 Session herd shm_records 304070226 27600.09 SHM records shm_writes 26571905 2411.90 SHM writes shm_flushes 2 0.00 SHM flushes due to overflow shm_cont 49165 4.46 SHM MTX contention shm_cycles 130 0.01 SHM cycles through buffer sms_nreq 846 0.08 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 384930 . SMS bytes allocated sms_bfree 384930 . SMS bytes freed backend_req 3621062 328.68 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_ban 1 . 
N total active bans n_ban_add 1 0.00 N new bans added n_ban_retire 0 0.00 N old bans deleted n_ban_obj_test 0 0.00 N objects tested n_ban_re_test 0 0.00 N regexps tested against n_ban_dups 0 0.00 N duplicate bans removed hcb_nolock 187649 17.03 HCB Lookups without lock hcb_lock 80799 7.33 HCB Lookups with lock hcb_insert 80799 7.33 HCB Inserts esi_errors 0 0.00 ESI parse errors (unlock) esi_warnings 0 0.00 ESI parse warnings (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 11017 1.00 Client uptime dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache vmods 0 . Loaded VMODs n_gzip 0 0.00 Gzip operations n_gunzip 271725 24.66 Gunzip operations LCK.sms.creat 4 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 8550 0.78 Lock Operations LCK.sms.colls 0 0.00 Collisions LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.smp.colls 0 0.00 Collisions LCK.sma.creat 8 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 38262423 3473.03 Lock Operations LCK.sma.colls 0 0.00 Collisions LCK.smf.creat 0 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 0 0.00 Lock Operations LCK.smf.colls 0 0.00 Collisions LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hsl.colls 0 0.00 Collisions LCK.hcb.creat 4 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 418452 37.98 Lock Operations LCK.hcb.colls 0 0.00 Collisions LCK.hcl.creat 0 0.00 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 0 0.00 Lock Operations LCK.hcl.colls 0 0.00 Collisions LCK.vcl.creat 4 0.00 Created locks LCK.vcl.destroy 0 0.00 Destroyed locks LCK.vcl.locks 9462038 858.86 Lock Operations LCK.vcl.colls 0 
0.00 Collisions LCK.stat.creat 4 0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 9840 0.89 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 4 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 28114307 2551.90 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 4 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 212775 19.31 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 4 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 1209 0.11 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 8 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 38953507 3535.76 Lock Operations LCK.wq.colls 0 0.00 Collisions LCK.objhdr.creat 270862 24.59 Created locks LCK.objhdr.destroy 141466 12.84 Destroyed locks LCK.objhdr.locks 4746962 430.88 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 4 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 856264 77.72 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 8 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 480694 43.63 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 4 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 22068 2.00 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 4 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 857486 77.83 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 4 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 6642 0.60 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 4 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 1723928 156.48 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 12 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 20863568 1893.76 Lock Operations 
LCK.backend.colls 0 0.00 Collisions LCK.busyobj.creat 2153 0.20 Created locks LCK.busyobj.destroy 658 0.06 Destroyed locks LCK.busyobj.locks 51597325 4683.43 Lock Operations LCK.busyobj.colls 0 0.00 Collisions SMA.s0.c_req 240044 21.79 Allocator requests SMA.s0.c_fail 0 0.00 Allocator failures SMA.s0.c_bytes 14874337772 1350125.97 Bytes allocated SMA.s0.c_freed 14770447168 1340695.94 Bytes freed SMA.s0.g_alloc 30924 . Allocations outstanding SMA.s0.g_bytes 103890604 . Bytes outstanding SMA.s0.g_space 17075978580 . Bytes available SMA.Transient.c_req 6620156 600.90 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 173497152178 15748130.36 Bytes allocated SMA.Transient.c_freed 173487087827 15747216.83 Bytes freed SMA.Transient.g_alloc 1383 . Allocations outstanding SMA.Transient.g_bytes 10064351 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.proxy1(xx.xx.xx.xx1,,8080).vcls 4 . VCL references VBE.proxy1(xx.xx.xx.xx1,,8080).happy18446744073709551615 . Happy health probes VBE.proxy2(xx.xx.xx.xx2,,8080).vcls 4 . VCL references VBE.proxy2(xx.xx.xx.xx2,,8080).happy18446744073709551615 . Happy health probes VBE.proxy3(xx.xx.xx.xx3,,8080).vcls 4 . VCL references VBE.proxy3(xx.xx.xx.xx3,,8080).happy18446744073709551615 . Happy health probes VBE.proxy4(xx.xx.xx.xx4,,8081).vcls 4 . VCL references VBE.proxy4(xx.xx.xx.xx4,,8081).happy18446744073709551615 . 
Happy health probes This is my parameter: param.show 200 acceptor_sleep_decay 0.900000 [] acceptor_sleep_incr 0.001000 [s] acceptor_sleep_max 0.050000 [s] auto_restart on [bool] ban_dups on [bool] ban_lurker_sleep 0.010000 [s] between_bytes_timeout 60.000000 [s] cc_command "exec gcc -std=gnu99 -pthread -fpic -shared -Wl,-x -o %o %s" cli_buffer 8192 [bytes] cli_timeout 10 [seconds] clock_skew 10 [s] connect_timeout 0.700000 [s] critbit_cooloff 180.000000 [s] default_grace 10.000000 [seconds] default_keep 0.000000 [seconds] default_ttl 120.000000 [seconds] diag_bitmap 0x0 [bitmap] esi_syntax 0 [bitmap] expiry_sleep 1.000000 [seconds] fetch_chunksize 128 [kilobytes] fetch_maxchunksize 262144 [kilobytes] first_byte_timeout 60.000000 [s] group nobody (65533) gzip_level 6 [] gzip_memlevel 8 [] gzip_stack_buffer 32768 [Bytes] gzip_tmp_space 0 [] gzip_window 15 [] http_gzip_support on [bool] http_max_hdr 256 [header lines] http_range_support on [bool] http_req_hdr_len 32768 [bytes] http_req_size 32768 [bytes] http_resp_hdr_len 32768 [bytes] http_resp_size 32768 [bytes] listen_address 0.0.0.0:8080 listen_depth 1024 [connections] log_hashstring on [bool] log_local_address off [bool] lru_interval 2 [seconds] max_esi_depth 5 [levels] max_restarts 4 [restarts] nuke_limit 50 [allocations] ping_interval 3 [seconds] pipe_timeout 60 [seconds] prefer_ipv6 off [bool] queue_max 100 [%] rush_exponent 3 [requests per request] saintmode_threshold 10 [objects] send_timeout 60 [seconds] sess_timeout 5 [seconds] sess_workspace 65536 [bytes] session_linger 100 [ms] session_max 100000 [sessions] shm_reclen 255 [bytes] shm_workspace 8192 [bytes] shortlived 10.000000 [s] stream_maxchunksize 256 [kilobytes] stream_tokens 10 [] syslog_cli_traffic on [bool] thread_pool_add_delay 2 [milliseconds] thread_pool_add_threshold 2 [requests] thread_pool_fail_delay 200 [milliseconds] thread_pool_max 4000 [threads] thread_pool_min 200 [threads] thread_pool_purge_delay 1000 [milliseconds] thread_pool_stack 
unlimited [bytes] thread_pool_timeout 300 [seconds] thread_pool_workspace 65536 [bytes] thread_pools 2 [pools] thread_stats_rate 10 [requests] user nobody (65534) vcc_err_unref on [bool] vcl_dir /opt/varnish-3.0.2-streaming/etc/varnish vcl_trace off [bool] vmod_dir /opt/varnish-3.0.2-streaming/lib/varnish/vmods waiter default (epoll, poll)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From varnish-bugs at varnish-cache.org Mon Apr 15 10:51:06 2013
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 15 Apr 2013 10:51:06 -0000
Subject: [Varnish] #1293: varnishd: page allocation failure
In-Reply-To: <048.5c12d86511840ad8268d566798999de1@varnish-cache.org>
References: <048.5c12d86511840ad8268d566798999de1@varnish-cache.org>
Message-ID: <063.e8aa06bc4ffdd91c80b919ffab915d85@varnish-cache.org>

#1293: varnishd: page allocation failure
------------------------+--------------------
 Reporter:  msallen333  |       Owner:
     Type:  defect      |      Status:  new
 Priority:  normal      |   Milestone:
Component:  build       |     Version:  3.0.3
 Severity:  normal      |  Resolution:
 Keywords:              |
------------------------+--------------------

Description changed by martin:

Old description:

> Varnishd recently "hung" on my system with the below error in
> /var/log/messages and had to be restarted. Does this appear to be a
> varnish 3.0.3 defect?
>
> Apr 9 15:42:59 lx11 kernel: __ratelimit: 26 callbacks suppressed
> Apr 9 15:42:59 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20
> Apr 9 15:42:59 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1
> Apr 9 15:42:59 lx11 kernel: Call Trace:
> Apr 9 15:42:59 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0
> Apr 9 15:42:59 lx11 kernel: [] ? kmem_getpages+0x62/0x170
> Apr 9 15:42:59 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270
> Apr 9 15:42:59 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160
> Apr 9 15:42:59 lx11 kernel: [] ?
> kmem_cache_alloc_node_trace+0x90/0x200
> Apr 9 15:42:59 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60
> Apr 9 15:42:59 lx11 kernel: [] ? __alloc_skb+0x6d/0x190
> Apr 9 15:42:59 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0
> Apr 9 15:42:59 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360
> Apr 9 15:42:59 lx11 kernel: [] ? tcp_data_queue+0x1ab/0xc70
> Apr 9 15:42:59 lx11 kernel: [] ? tick_program_event+0x2a/0x30
> Apr 9 15:42:59 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800
> Apr 9 15:42:59 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430
> Apr 9 15:42:59 lx11 kernel: [] ? invalidate_interrupt7+0x13/0x20
> Apr 9 15:42:59 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0
> Apr 9 15:42:59 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0
> Apr 9 15:42:59 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0
> Apr 9 15:42:59 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0
> Apr 9 15:42:59 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440
> Apr 9 15:42:59 lx11 kernel: [] ? ip_rcv+0x275/0x350
> Apr 9 15:42:59 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750
> Apr 9 15:42:59 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0
> Apr 9 15:42:59 lx11 kernel: [] ? netif_receive_skb+0x58/0x60
> Apr 9 15:42:59 lx11 kernel: [] ? napi_skb_finish+0x50/0x70
> Apr 9 15:42:59 lx11 kernel: [] ? napi_gro_receive+0x39/0x50
> Apr 9 15:42:59 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2]
> Apr 9 15:42:59 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack]
> Apr 9 15:42:59 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100
> Apr 9 15:42:59 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2]
> Apr 9 15:42:59 lx11 kernel: [] ? net_rx_action+0x103/0x2f0
> Apr 9 15:42:59 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0
> Apr 9 15:42:59 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260
> Apr 9 15:42:59 lx11 kernel: [] ? call_softirq+0x1c/0x30
> Apr 9 15:42:59 lx11 kernel: [] ? do_softirq+0x65/0xa0
> Apr 9 15:42:59 lx11 kernel: [] ? irq_exit+0x85/0x90
> Apr 9 15:42:59 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b
> Apr 9 15:42:59 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20
> Apr 9 15:42:59 lx11 kernel:
> Apr 9 16:15:24 lx11 kernel: usb 6-1: USB disconnect, device number 2
> Apr 9 16:15:44 lx11 kernel: usb 6-1: new full speed USB device number 3 using uhci_hcd
> Apr 9 16:15:44 lx11 kernel: usb 6-1: New USB device found, idVendor=03f0, idProduct=1027
> Apr 9 16:15:44 lx11 kernel: usb 6-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
> Apr 9 16:15:44 lx11 kernel: usb 6-1: Product: Virtual Keyboard
> Apr 9 16:15:44 lx11 kernel: usb 6-1: Manufacturer: HP
> Apr 9 16:15:44 lx11 kernel: usb 6-1: configuration #1 chosen from 1 choice
> Apr 9 16:15:44 lx11 kernel: input: HP Virtual Keyboard as /devices/pci0000:00/0000:00:1e.0/0000:01:04.4/usb6/6-1/6-1:1.0/input/input6
> Apr 9 16:15:44 lx11 kernel: generic-usb 0003:03F0:1027.0003: input,hidraw0: USB HID v1.01 Keyboard [HP Virtual Keyboard] on usb-0000:01:04.4-1/input0
> Apr 9 16:15:44 lx11 kernel: input: HP Virtual Keyboard as /devices/pci0000:00/0000:00:1e.0/0000:01:04.4/usb6/6-1/6-1:1.1/input/input7
> Apr 9 16:15:44 lx11 kernel: generic-usb 0003:03F0:1027.0004: input,hidraw1: USB HID v1.01 Mouse [HP Virtual Keyboard] on usb-0000:01:04.4-1/input1
> Apr 9 17:50:33 lx11 varnishd[24950]: Manager got SIGINT
> Apr 9 17:50:42 lx11 varnishd[18973]: Platform:
> Linux,2.6.32-358.2.1.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit
> Apr 9 17:50:42 lx11 varnishd[18973]: child (18974) Started
> Apr 9 17:50:42 lx11 varnishd[18973]: Child (18974) said Child starts
> Apr 9 17:50:42 lx11 varnishd[18973]: Child (18974) said SMF.s0 mmap'ed

New description:

Varnishd recently "hung" on my system with the below error in /var/log/messages and had to be restarted. Does this appear to be a varnish 3.0.3 defect?

{{{
Apr 9 15:42:59 lx11 kernel: __ratelimit: 26 callbacks suppressed
Apr 9 15:42:59 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20
Apr 9 15:42:59 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1
Apr 9 15:42:59 lx11 kernel: Call Trace:
Apr 9 15:42:59 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0
Apr 9 15:42:59 lx11 kernel: [] ? kmem_getpages+0x62/0x170
Apr 9 15:42:59 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270
Apr 9 15:42:59 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160
Apr 9 15:42:59 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200
Apr 9 15:42:59 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60
Apr 9 15:42:59 lx11 kernel: [] ? __alloc_skb+0x6d/0x190
Apr 9 15:42:59 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0
Apr 9 15:42:59 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360
Apr 9 15:42:59 lx11 kernel: [] ? tcp_data_queue+0x1ab/0xc70
Apr 9 15:42:59 lx11 kernel: [] ? tick_program_event+0x2a/0x30
Apr 9 15:42:59 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800
Apr 9 15:42:59 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430
Apr 9 15:42:59 lx11 kernel: [] ? invalidate_interrupt7+0x13/0x20
Apr 9 15:42:59 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0
Apr 9 15:42:59 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0
Apr 9 15:42:59 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0
Apr 9 15:42:59 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0
Apr 9 15:42:59 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440
Apr 9 15:42:59 lx11 kernel: [] ? ip_rcv+0x275/0x350
Apr 9 15:42:59 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750
Apr 9 15:42:59 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0
Apr 9 15:42:59 lx11 kernel: [] ? netif_receive_skb+0x58/0x60
Apr 9 15:42:59 lx11 kernel: [] ? napi_skb_finish+0x50/0x70
Apr 9 15:42:59 lx11 kernel: [] ? napi_gro_receive+0x39/0x50
Apr 9 15:42:59 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2]
Apr 9 15:42:59 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack]
Apr 9 15:42:59 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100
Apr 9 15:42:59 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2]
Apr 9 15:42:59 lx11 kernel: [] ? net_rx_action+0x103/0x2f0
Apr 9 15:42:59 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0
Apr 9 15:42:59 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260
Apr 9 15:42:59 lx11 kernel: [] ? call_softirq+0x1c/0x30
Apr 9 15:42:59 lx11 kernel: [] ? do_softirq+0x65/0xa0
Apr 9 15:42:59 lx11 kernel: [] ? irq_exit+0x85/0x90
Apr 9 15:42:59 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b
Apr 9 15:42:59 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20
Apr 9 15:42:59 lx11 kernel:
Apr 9 15:43:02 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20
Apr 9 15:43:02 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1
Apr 9 15:43:02 lx11 kernel: Call Trace:
Apr 9 15:43:02 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0
Apr 9 15:43:02 lx11 kernel: [] ? kmem_getpages+0x62/0x170
Apr 9 15:43:02 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270
Apr 9 15:43:02 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160
Apr 9 15:43:02 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200
Apr 9 15:43:02 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60
Apr 9 15:43:02 lx11 kernel: [] ? __alloc_skb+0x6d/0x190
Apr 9 15:43:02 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0
Apr 9 15:43:02 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360
Apr 9 15:43:02 lx11 kernel: [] ?
tcp_data_queue+0x1ab/0xc70 Apr 9 15:43:02 lx11 kernel: [] ? tcp_validate_incoming+0x30b/0x3a0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:02 lx11 kernel: [] ? smp_invalidate_interrupt+0x60/0xc0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:02 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:43:02 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:02 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:43:02 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:43:02 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:02 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:02 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:43:02 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:02 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:02 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:43:02 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:02 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:02 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:43:02 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:43:02 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:02 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:02 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:43:02 lx11 kernel: Apr 9 15:43:02 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:43:02 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:43:02 lx11 kernel: Call Trace: Apr 9 15:43:02 lx11 kernel: [] ? 
__alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:43:02 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:43:02 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:43:02 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:43:02 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:43:02 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:43:02 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:43:02 lx11 kernel: [] ? tcp_data_queue+0x1ab/0xc70 Apr 9 15:43:02 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:02 lx11 kernel: [] ? ipv4_confirm+0x87/0x1d0 [nf_conntrack_ipv4] Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:02 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:02 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:02 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:43:02 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:02 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:43:02 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:43:02 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:02 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:43:07 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:43:07 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:07 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:07 lx11 kernel: [] ? 
call_softirq+0x1c/0x30 Apr 9 15:43:07 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:07 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:07 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:43:07 lx11 kernel: Apr 9 15:43:07 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:43:07 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:43:07 lx11 kernel: Call Trace: Apr 9 15:43:07 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:43:07 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:43:07 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:43:07 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:43:07 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:43:07 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:43:07 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:43:07 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:43:07 lx11 kernel: [] ? tcp_data_queue+0x3ed/0xc70 Apr 9 15:43:07 lx11 kernel: [] ? tcp_validate_incoming+0x2c0/0x3a0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:07 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:07 lx11 kernel: [] ? ipv4_confirm+0x87/0x1d0 [nf_conntrack_ipv4] Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:07 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:07 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:43:07 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:07 lx11 kernel: [] ? 
netif_receive_skb+0x58/0x60 Apr 9 15:43:07 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:43:07 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:43:07 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:43:07 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:07 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:07 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:43:07 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:07 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:07 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:43:07 lx11 kernel: Apr 9 15:43:07 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:43:07 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:43:07 lx11 kernel: Call Trace: Apr 9 15:43:07 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:43:07 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:43:07 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:43:07 lx11 kernel: [] ? ____cache_alloc_node+0x99/0x160 Apr 9 15:43:07 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:43:07 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:43:07 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:43:07 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:43:07 lx11 kernel: [] ? tcp_data_queue+0x3ed/0xc70 Apr 9 15:43:07 lx11 kernel: [] ? tcp_validate_incoming+0x2c0/0x3a0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:07 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:07 lx11 kernel: [] ? 
ipv4_confirm+0x87/0x1d0 [nf_conntrack_ipv4] Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:07 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:07 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:43:07 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:07 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:43:07 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:43:07 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? death_by_timeout+0x0/0x160 [nf_conntrack] Apr 9 15:43:07 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:43:07 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:07 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:07 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:43:07 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:07 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:07 lx11 kernel: [] ? apic_timer_interrupt+0x13/0x20 Apr 9 15:43:07 lx11 kernel: Apr 9 15:43:07 lx11 kernel: varnishd: page allocation failure. order:0, mode:0x20 Apr 9 15:43:07 lx11 kernel: Pid: 16489, comm: varnishd Not tainted 2.6.32-358.2.1.el6.x86_64 #1 Apr 9 15:43:07 lx11 kernel: Call Trace: Apr 9 15:43:07 lx11 kernel: [] ? __alloc_pages_nodemask+0x757/0x8d0 Apr 9 15:43:07 lx11 kernel: [] ? kmem_getpages+0x62/0x170 Apr 9 15:43:07 lx11 kernel: [] ? fallback_alloc+0x1ba/0x270 Apr 9 15:43:07 lx11 kernel: [] ? 
____cache_alloc_node+0x99/0x160 Apr 9 15:43:07 lx11 kernel: [] ? kmem_cache_alloc_node_trace+0x90/0x200 Apr 9 15:43:07 lx11 kernel: [] ? __kmalloc_node+0x4d/0x60 Apr 9 15:43:07 lx11 kernel: [] ? __alloc_skb+0x6d/0x190 Apr 9 15:43:07 lx11 kernel: [] ? tcp_collapse+0x1a2/0x3f0 Apr 9 15:43:07 lx11 kernel: [] ? tcp_try_rmem_schedule+0x245/0x360 Apr 9 15:43:07 lx11 kernel: [] ? tcp_data_queue+0x1ab/0xc70 Apr 9 15:43:07 lx11 kernel: [] ? tcp_rcv_established+0x369/0x800 Apr 9 15:43:07 lx11 kernel: [] ? tcp_v4_do_rcv+0x2e3/0x430 Apr 9 15:43:07 lx11 kernel: [] ? ipv4_confirm+0x87/0x1d0 [nf_conntrack_ipv4] Apr 9 15:43:07 lx11 kernel: [] ? tcp_v4_rcv+0x4fe/0x8d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0x0/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver_finish+0xdd/0x2d0 Apr 9 15:43:07 lx11 kernel: [] ? ip_local_deliver+0x98/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? ip_rcv_finish+0x12d/0x440 Apr 9 15:43:07 lx11 kernel: [] ? ip_rcv+0x275/0x350 Apr 9 15:43:07 lx11 kernel: [] ? __netif_receive_skb+0x4ab/0x750 Apr 9 15:43:07 lx11 kernel: [] ? tcp4_gro_receive+0x5a/0xd0 Apr 9 15:43:07 lx11 kernel: [] ? netif_receive_skb+0x58/0x60 Apr 9 15:43:07 lx11 kernel: [] ? napi_skb_finish+0x50/0x70 Apr 9 15:43:07 lx11 kernel: [] ? napi_gro_receive+0x39/0x50 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll_work+0xd4f/0x1270 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? smp_invalidate_interrupt+0x60/0xc0 Apr 9 15:43:07 lx11 kernel: [] ? swiotlb_map_page+0x0/0x100 Apr 9 15:43:07 lx11 kernel: [] ? bnx2_poll+0x69/0x2d8 [bnx2] Apr 9 15:43:07 lx11 kernel: [] ? net_rx_action+0x103/0x2f0 Apr 9 15:43:07 lx11 kernel: [] ? __do_softirq+0xc1/0x1e0 Apr 9 15:43:07 lx11 kernel: [] ? hrtimer_interrupt+0x14b/0x260 Apr 9 15:43:07 lx11 kernel: [] ? call_softirq+0x1c/0x30 Apr 9 15:43:07 lx11 kernel: [] ? do_softirq+0x65/0xa0 Apr 9 15:43:07 lx11 kernel: [] ? irq_exit+0x85/0x90 Apr 9 15:43:07 lx11 kernel: [] ? smp_apic_timer_interrupt+0x70/0x9b Apr 9 15:43:07 lx11 kernel: [] ? 
apic_timer_interrupt+0x13/0x20 Apr 9 16:15:24 lx11 kernel: Apr 9 16:15:24 lx11 kernel: usb 6-1: USB disconnect, device number 2 Apr 9 16:15:44 lx11 kernel: usb 6-1: new full speed USB device number 3 using uhci_hcd Apr 9 16:15:44 lx11 kernel: usb 6-1: New USB device found, idVendor=03f0, idProduct=1027 Apr 9 16:15:44 lx11 kernel: usb 6-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 Apr 9 16:15:44 lx11 kernel: usb 6-1: Product: Virtual Keyboard Apr 9 16:15:44 lx11 kernel: usb 6-1: Manufacturer: HP Apr 9 16:15:44 lx11 kernel: usb 6-1: configuration #1 chosen from 1 choice Apr 9 16:15:44 lx11 kernel: input: HP Virtual Keyboard as /devices/pci0000:00/0000:00:1e.0/0000:01:04.4/usb6/6-1/6-1:1.0/input/input6 Apr 9 16:15:44 lx11 kernel: generic-usb 0003:03F0:1027.0003: input,hidraw0: USB HID v1.01 Keyboard [HP Virtual Keyboard] on usb-0000:01:04.4-1/input0 Apr 9 16:15:44 lx11 kernel: input: HP Virtual Keyboard as /devices/pci0000:00/0000:00:1e.0/0000:01:04.4/usb6/6-1/6-1:1.1/input/input7 Apr 9 16:15:44 lx11 kernel: generic-usb 0003:03F0:1027.0004: input,hidraw1: USB HID v1.01 Mouse [HP Virtual Keyboard] on usb-0000:01:04.4-1/input1 Apr 9 17:50:33 lx11 varnishd[24950]: Manager got SIGINT Apr 9 17:50:42 lx11 varnishd[18973]: Platform: Linux,2.6.32-358.2.1.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit Apr 9 17:50:42 lx11 varnishd[18973]: child (18974) Started Apr 9 17:50:42 lx11 varnishd[18973]: Child (18974) said Child starts Apr 9 17:50:42 lx11 varnishd[18973]: Child (18974) said SMF.s0 mmap'ed }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Apr 22 09:21:27 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 22 Apr 2013 09:21:27 -0000 Subject: [Varnish] #849: Session timeout while receiving POST data from client causes multiple broken backend requests In-Reply-To: <041.0e6f1a912317111f89bfe1014049adca@varnish-cache.org> References: <041.0e6f1a912317111f89bfe1014049adca@varnish-cache.org> 
Message-ID: <062.237afa917af2b7b7a3b96bef093e94b2@varnish-cache.org>

#849: Session timeout while receiving POST data from client causes multiple broken backend requests
-------------------------------------------------+-------------------------
 Reporter:  lew                                  |  Owner:       tfheen
     Type:  defect                               |  Status:      closed
 Priority:  normal                               |  Milestone:   Varnish 3.0 dev
Component:  varnishd                             |  Version:     2.1.4
 Severity:  normal                               |  Resolution:  invalid
 Keywords:  503, post, backend write error: 11   |
            (Resource temporarily unavailable)   |
-------------------------------------------------+-------------------------

Comment (by ruben):

Replying to [comment:5 phk]:
> See also #1041.
>
> There are a couple of issues in this:
>
> 1. We use sess_timeout also during client->varnish body transfer. It
> might be better with separate first/between timeouts like we have on the
> backend side.

It might also be important to update the documentation for this parameter:

{{{
param.show -l sess_timeout
sess_timeout               60 [seconds]
                           Default is 5
                           Idle timeout for persistent sessions. If a HTTP
                           request has not been received in this many
                           seconds, the session is closed.
}}}

This explanation is not entirely correct - it is also the maximum time spent waiting for a client to send its HTTP request to Varnish.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator
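The behaviour ruben describes - one idle timeout covering both the wait for the request line and every subsequent body read - can be illustrated with a generic socket sketch. This is not Varnish code; the names (`serve_once`, the byte counts) and the toy timeouts are illustrative assumptions, standing in for sess_timeout applying during the client->varnish body transfer:

```python
import socket
import threading
import time

def serve_once(srv, idle_timeout, result):
    # Accept one connection and read a fixed-size "request" under a single
    # per-read idle timeout that applies to the headers *and* the body,
    # analogous to sess_timeout covering the whole client transfer.
    conn, _ = srv.accept()
    conn.settimeout(idle_timeout)
    data = b""
    try:
        while len(data) < 49:          # 39 header bytes + 10 body bytes
            chunk = conn.recv(4096)
            if not chunk:
                break
            data += chunk
        result.append("complete")
    except socket.timeout:
        result.append("timed out mid-request")
    finally:
        conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))             # ephemeral port for the demo
srv.listen(1)
port = srv.getsockname()[1]

result = []
t = threading.Thread(target=serve_once, args=(srv, 0.5, result))
t.start()

# The client sends its headers promptly, then stalls mid-POST for longer
# than the server's idle timeout before sending the body.
c = socket.create_connection(("127.0.0.1", port))
c.sendall(b"POST / HTTP/1.1\r\nContent-Length: 10\r\n\r\n")
time.sleep(1.0)
try:
    c.sendall(b"0123456789")
except OSError:
    pass                               # server may already have closed
t.join()
c.close()
srv.close()
print(result[0])                       # -> timed out mid-request
```

With separate first/between timeouts, as phk suggests for parity with the backend side, the stall between body reads would be governed by its own (typically longer) limit instead of the request-arrival timeout.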