From varnish-bugs at projects.linpro.no Mon Dec 1 02:35:18 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Mon, 01 Dec 2008 02:35:18 -0000
Subject: [Varnish] #399: No virtual hosting support
Message-ID: <050.0ab5531c085b1edc3fe339a942ddcbf3@projects.linpro.no>

#399: No virtual hosting support
------------------+---------------------------------------------------------
 Reporter:  ehab  |       Type:  defect
   Status:  new   |   Priority:  normal
Milestone:        |  Component:  build
  Version:  2.0   |   Severity:  normal
 Keywords:        |
------------------+---------------------------------------------------------
I have two domains set up as virtual hosts on one server, and I want to use Varnish for one of them. I have a very simple VCL file:

{{{
backend default {
        .host = "www6.mashy.com";
        .port = "80";
}
}}}

What happens is that Varnish fetches files from www.ahlyegypt.com, which is the other domain on this server. It seems that the host is only used to look up the IP address, not to build the HTTP request.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Mon Dec 1 08:56:08 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Mon, 01 Dec 2008 08:56:08 -0000
Subject: [Varnish] #399: No virtual hosting support
In-Reply-To: <050.0ab5531c085b1edc3fe339a942ddcbf3@projects.linpro.no>
References: <050.0ab5531c085b1edc3fe339a942ddcbf3@projects.linpro.no>
Message-ID: <059.89429c807e23b15ed52acedfc0793f86@projects.linpro.no>

#399: No virtual hosting support
--------------------+-------------------------------------------------------
 Reporter:  ehab    |        Owner:
     Type:  defect  |       Status:  closed
 Priority:  normal  |    Milestone:
Component:  build   |      Version:  2.0
 Severity:  normal  |   Resolution:  invalid
 Keywords:          |
--------------------+-------------------------------------------------------
Changes (by perbu):

  * status:  new => closed
  * resolution:  => invalid

Comment:

Hi. The ticketing system is for confirmed bugs only.
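For the record, the behaviour described is how Varnish works: the backend's `.host` is used only to open the TCP connection, while the client's Host header is forwarded unchanged. A minimal VCL sketch of the usual workaround (the hostname is taken from the report above and is illustrative only):

{{{
sub vcl_recv {
    # Pin the Host header so the backend serves the intended virtual
    # host regardless of what the client sent.
    set req.http.host = "www6.mashy.com";
}
}}}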
Please post support requests to the mailing lists (after looking through the documentation and the wiki, of course). Good luck.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Tue Dec 2 12:11:34 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Tue, 02 Dec 2008 12:11:34 -0000
Subject: [Varnish] #400: HTTP/1.0 404 Not Found + no Content-Length => no content
Message-ID: <053.c18029fa6fb3d7c60156a7702c8fa2a5@projects.linpro.no>

#400: HTTP/1.0 404 Not Found + no Content-Length => no content
----------------------+-----------------------------------------------------
 Reporter:  jfbubus   |       Owner:  phk
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  trunk
 Severity:  normal    |    Keywords:
----------------------+-----------------------------------------------------
Setup: Varnish 2.0.2 + Apache + PHP.

The application returns a custom PHP error page with a status line of:
{{{
HTTP/1.0 404 Not Found
}}}
The HTTP response does not have a Content-Length header (more details below). When accessing a 404 page, clients get no content. Any other page works.

Other combinations work:
 * Direct access to the backend: OK
 * HTTP/1.1 200 w/o Content-Length: OK
 * HTTP/1.0 200 w/o Content-Length: OK
 * HTTP/1.1 404 w/o Content-Length: OK
 * HTTP/1.0 404 w/ Content-Length: OK

The recommended/documented way of generating a 404 error in PHP forces an HTTP/1.0 response: http://www.php.net/manual/en/function.header.php. (You should not have to force the HTTP version, but that is how PHP works/is documented.)

Varnish config:
{{{
varnishd -a :80 -s malloc,1G -P /var/run/varnishd.pid -b localhost:81 -n /var/run/varnish/
}}}
The default VCL is used.

Test code:
{{{
}}}

Access through Varnish:
{{{
curl -v http://localhost/test.php
* About to connect() to localhost port 80
*   Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 80
> GET /test.php HTTP/1.1
> User-Agent: curl/7.15.1 (x86_64-suse-linux) libcurl/7.15.1 OpenSSL/0.9.8a zlib/1.2.3 libidn/0.6.0
> Host: localhost
> Accept: */*
>
< HTTP/1.0 404 Not Found
< Server: Apache
< Cache-Control: max-age=0
< Expires: Tue, 02 Dec 2008 11:51:02 GMT
< Content-Type: text/html
< Date: Tue, 02 Dec 2008 11:51:02 GMT
< X-Varnish: 2027003591
< Age: 0
< Via: 1.1 varnish
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
* Connection #0 to host localhost left intact
* Closing connection #0
}}}

Same using HTTP/1.0:
{{{
curl -0 -v "http://localhost/test.php"
* About to connect() to localhost port 80
*   Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 80
> GET /test.php HTTP/1.0
> User-Agent: curl/7.15.1 (x86_64-suse-linux) libcurl/7.15.1 OpenSSL/0.9.8a zlib/1.2.3 libidn/0.6.0
> Host: localhost
> Accept: */*
>
< HTTP/1.0 404 Not Found
< Server: Apache
< Cache-Control: max-age=0
< Expires: Tue, 02 Dec 2008 11:56:59 GMT
< Content-Type: text/html
< Date: Tue, 02 Dec 2008 11:56:59 GMT
< X-Varnish: 2027003751
< Age: 0
< Via: 1.1 varnish
< Connection: close
* Closing connection #0
}}}

Direct access to the Apache backend:
{{{
curl -v http://localhost:81/test.php
* About to connect() to localhost port 81
*   Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 81
> GET /test.php HTTP/1.1
> User-Agent: curl/7.15.1 (x86_64-suse-linux) libcurl/7.15.1 OpenSSL/0.9.8a zlib/1.2.3 libidn/0.6.0
> Host: localhost:81
> Accept: */*
>
< HTTP/1.0 404 Not Found
< Date: Tue, 02 Dec 2008 11:47:26 GMT
< Server: Apache
< Cache-Control: max-age=0
< Expires: Tue, 02 Dec 2008 11:47:26 GMT
< Connection: close
< Content-Type: text/html

Works! Works! Works! Works! Works! Works [...]
}}}

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Tue Dec 2 12:20:10 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Tue, 02 Dec 2008 12:20:10 -0000
Subject: [Varnish] #400: HTTP/1.0 404 Not Found + no Content-Length => no content
In-Reply-To: <053.c18029fa6fb3d7c60156a7702c8fa2a5@projects.linpro.no>
References: <053.c18029fa6fb3d7c60156a7702c8fa2a5@projects.linpro.no>
Message-ID: <062.7498573ec2f3405406e2aff7eea85f9d@projects.linpro.no>

#400: HTTP/1.0 404 Not Found + no Content-Length => no content
----------------------+-----------------------------------------------------
 Reporter:  jfbubus   |       Owner:  phk
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  trunk
 Severity:  normal    |  Resolution:
 Keywords:            |
----------------------+-----------------------------------------------------
Comment (by jfbubus):

Erratum: my test for "HTTP/1.0 200 w/o Content-Length: OK" was wrong. I'm currently unable to test this case, and it might not work.
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Thu Dec 4 21:05:00 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Thu, 04 Dec 2008 21:05:00 -0000
Subject: [Varnish] #401: Post Gzip after composing page with esi:includes
Message-ID: <050.0a272c50f5c24cc022ae5d570f978e25@projects.linpro.no>

#401: Post Gzip after composing page with esi:includes
-------------------------+--------------------------------------------------
 Reporter:  xaos         |       Owner:  phk
     Type:  enhancement  |      Status:  new
 Priority:  normal       |   Milestone:  Varnish 2.1 release
Component:  varnishd     |     Version:  2.0
 Severity:  normal       |    Keywords:  esi gzip accept-encoding compress
-------------------------+--------------------------------------------------
Hello, I have the following scenario: a master page with two esi:includes, nearly the same as described in http://varnish.projects.linpro.no/ticket/352

To ease this scenario, I would like the following functionality: request the backend WITHOUT an Accept-Encoding header (it can be removed using VCL). Without the Accept-Encoding header, the backend response is not compressed, so the esi:includes can be parsed. After that, Varnish should be able to gzip/deflate the composed page in order to send a compressed version to the user.

For now, my only option is to send uncompressed output to the user, which is suboptimal.
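The first half of the requested workflow can already be expressed in VCL; only the re-compression step is missing from Varnish 2.0. A minimal sketch of the Accept-Encoding removal so that ESI parsing sees an uncompressed body:

{{{
sub vcl_recv {
    # Strip Accept-Encoding so the backend returns an uncompressed
    # response that the ESI parser can process. The page delivered to
    # the client is then also uncompressed, which is the limitation
    # this ticket asks to lift.
    remove req.http.Accept-Encoding;
}
}}}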
cheers xaos

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Fri Dec 5 09:08:25 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Fri, 05 Dec 2008 09:08:25 -0000
Subject: [Varnish] #402: send_timeout cause connections to be prematurely closed
Message-ID: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no>

#402: send_timeout cause connections to be prematurely closed
----------------------+-----------------------------------------------------
 Reporter:  havardf   |       Owner:  phk
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  2.0
 Severity:  normal    |    Keywords:  send_timeout connections
----------------------+-----------------------------------------------------
Active connections (that is, clients still downloading and accepting data) will be closed by Varnish if the download takes more than send_timeout seconds.

We run Varnish on Linux (Debian Etch, 64-bit). We have seen this problem in both v1.0.4 and 2.0~tp2-0. There seems to be a discrepancy between how send_timeout is defined in 'man varnishd' and what it actually does.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Fri Dec 5 11:32:26 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Fri, 05 Dec 2008 11:32:26 -0000
Subject: [Varnish] #403: Crash on VCL syntax error with Varnish 2.0.2
Message-ID: <050.849b7245d319acc6706a1df9f6ed409a@projects.linpro.no>

#403: Crash on VCL syntax error with Varnish 2.0.2
-------------------+--------------------------------------------------------
 Reporter:  olau   |       Type:  defect
   Status:  new    |   Priority:  normal
Milestone:         |  Component:  build
  Version:  trunk  |   Severity:  normal
 Keywords:         |
-------------------+--------------------------------------------------------
I just had a crash when I used ~= instead of the correct ~. When I changed ~= to ~, everything was fine.
It crashes both on vcl.load in the management interface and when the VCL file is given on the command line. Here's the complete (faulty) VCL:

{{{
backend default {
        .host = "127.0.0.1";
        .port = "8002";
}

backend imageserver {
        .host = "127.0.0.1";
        .port = "8001";
}

sub vcl_recv {
    if (req.http.host == "media.yayart.com" || req.url ~= "^/media/") {
        set req.backend = imageserver;
        pass;
    }

    if (req.request == "GET" && req.http.Cookie) {
        lookup;
    }
}
}}}

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Mon Dec 8 06:22:50 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Mon, 08 Dec 2008 06:22:50 -0000
Subject: [Varnish] #404: who use intel 82572EI PRO/1000 PT Desktop Adapter ? why it limit varnish capability??
Message-ID: <052.932400eb4d2450dd89eec4945a9faa59@projects.linpro.no>

#404: who use intel 82572EI PRO/1000 PT Desktop Adapter ? why it limit varnish capability??
--------------------+-------------------------------------------------------
 Reporter:  chenxy  |       Type:  defect
   Status:  new     |   Priority:  normal
Milestone:          |  Component:  build
  Version:  trunk   |   Severity:  normal
 Keywords:          |
--------------------+-------------------------------------------------------
With this adapter, Varnish only sustains 2500 requests/s, with the 8-core CPU 30% idle.

Environment: FreeBSD 7.0-RELEASE-p6, amd64, 16 GB memory

sysctl -a | grep dev.em

{{{
dev.em.0.%desc: Intel(R) PRO/1000 Network Connection 6.9.5
dev.em.0.%driver: em
dev.em.0.%location: slot=0 function=0
dev.em.0.%pnpinfo: vendor=0x8086 device=0x10b9 subvendor=0x103c subdevice=0x704a class=0x020000
dev.em.0.%parent: pci3
dev.em.0.debug: -1
dev.em.0.stats: -1
dev.em.0.rx_int_delay: 0
dev.em.0.tx_int_delay: 66
dev.em.0.rx_abs_int_delay: 66
dev.em.0.tx_abs_int_delay: 66
dev.em.0.rx_processing_limit: 100
dev.em.1.%desc: Intel(R) PRO/1000 Network Connection 6.9.5
dev.em.1.%driver: em
dev.em.1.%location: slot=0 function=0
dev.em.1.%pnpinfo: vendor=0x8086 device=0x105e subvendor=0x8086 subdevice=0x135e class=0x020000
dev.em.1.%parent: pci5
dev.em.1.debug: -1
dev.em.1.stats: -1
dev.em.1.rx_int_delay: 0
dev.em.1.tx_int_delay: 66
dev.em.1.rx_abs_int_delay: 66
dev.em.1.tx_abs_int_delay: 66
dev.em.1.rx_processing_limit: 100
dev.em.2.%desc: Intel(R) PRO/1000 Network Connection 6.9.5
dev.em.2.%driver: em
dev.em.2.%location: slot=0 function=1
dev.em.2.%pnpinfo: vendor=0x8086 device=0x105e subvendor=0x8086 subdevice=0x135e class=0x020000
dev.em.2.%parent: pci5
dev.em.2.debug: -1
dev.em.2.stats: -1
dev.em.2.rx_int_delay: 0
dev.em.2.tx_int_delay: 66
dev.em.2.rx_abs_int_delay: 66
dev.em.2.tx_abs_int_delay: 66
dev.em.2.rx_processing_limit: 100
}}}

But on another server with a Broadcom NetXtreme II BCM5708, Varnish can sustain 5000 requests/s, with the 4-core CPU 60% idle.

sysctl -a | grep bce
{{{
hw.bce.msi_enable: 1
hw.bce.tso_enable: 1
dev.bce.0.%desc: Broadcom NetXtreme II BCM5708 1000Base-T (B2)
dev.bce.0.%driver: bce
dev.bce.0.%location: slot=0 function=0
dev.bce.0.%pnpinfo: vendor=0x14e4 device=0x164c subvendor=0x103c subdevice=0x7038 class=0x020000
dev.bce.0.%parent: pci3
dev.bce.0.mbuf_alloc_failed: 0
dev.bce.0.tx_dma_map_failures: 0
dev.bce.0.stat_IfHcInOctets: 15909040362
dev.bce.0.stat_IfHCInBadOctets: 131667110
dev.bce.0.stat_IfHCOutOctets: 185212112207
dev.bce.0.stat_IfHCOutBadOctets: 0
dev.bce.0.stat_IfHCInUcastPkts: 161658777
dev.bce.0.stat_IfHCInMulticastPkts: 438938
dev.bce.0.stat_IfHCInBroadcastPkts: 477415
dev.bce.0.stat_IfHCOutUcastPkts: 184433486
dev.bce.0.stat_IfHCOutMulticastPkts: 0
dev.bce.0.stat_IfHCOutBroadcastPkts: 1420
dev.bce.0.stat_emac_tx_stat_dot3statsinternalmactransmiterrors: 0
dev.bce.0.stat_Dot3StatsCarrierSenseErrors: 0
dev.bce.0.stat_Dot3StatsFCSErrors: 0
dev.bce.0.stat_Dot3StatsAlignmentErrors: 0
dev.bce.0.stat_Dot3StatsSingleCollisionFrames: 0
dev.bce.0.stat_Dot3StatsMultipleCollisionFrames: 0
dev.bce.0.stat_Dot3StatsDeferredTransmissions: 0
dev.bce.0.stat_Dot3StatsExcessiveCollisions: 0
dev.bce.0.stat_Dot3StatsLateCollisions: 0
dev.bce.0.stat_EtherStatsCollisions: 0
dev.bce.0.stat_EtherStatsFragments: 0
dev.bce.0.stat_EtherStatsJabbers: 0
dev.bce.0.stat_EtherStatsUndersizePkts: 0
dev.bce.0.stat_EtherStatsOverrsizePkts: 0
dev.bce.0.stat_EtherStatsPktsRx64Octets: 2089470
dev.bce.0.stat_EtherStatsPktsRx65Octetsto127Octets: 144899332
dev.bce.0.stat_EtherStatsPktsRx128Octetsto255Octets: 13352557
dev.bce.0.stat_EtherStatsPktsRx256Octetsto511Octets: 491736
dev.bce.0.stat_EtherStatsPktsRx512Octetsto1023Octets: 129797
dev.bce.0.stat_EtherStatsPktsRx1024Octetsto1522Octets: 1612238
dev.bce.0.stat_EtherStatsPktsRx1523Octetsto9022Octets: 0
dev.bce.0.stat_EtherStatsPktsTx64Octets: 2884585
dev.bce.0.stat_EtherStatsPktsTx65Octetsto127Octets: 37504043
dev.bce.0.stat_EtherStatsPktsTx128Octetsto255Octets: 16146437
dev.bce.0.stat_EtherStatsPktsTx256Octetsto511Octets: 6290529
dev.bce.0.stat_EtherStatsPktsTx512Octetsto1023Octets: 8170588
dev.bce.0.stat_EtherStatsPktsTx1024Octetsto1522Octets: 113438724
dev.bce.0.stat_EtherStatsPktsTx1523Octetsto9022Octets: 0
dev.bce.0.stat_XonPauseFramesReceived: 0
dev.bce.0.stat_XoffPauseFramesReceived: 0
dev.bce.0.stat_OutXonSent: 0
dev.bce.0.stat_OutXoffSent: 0
dev.bce.0.stat_FlowControlDone: 0
dev.bce.0.stat_MacControlFramesReceived: 0
dev.bce.0.stat_XoffStateEntered: 0
dev.bce.0.stat_IfInFramesL2FilterDiscards: 1370958
dev.bce.0.stat_IfInRuleCheckerDiscards: 0
dev.bce.0.stat_IfInFTQDiscards: 0
dev.bce.0.stat_IfInMBUFDiscards: 0
dev.bce.0.stat_IfInRuleCheckerP4Hit: 916351
dev.bce.0.stat_CatchupInRuleCheckerDiscards: 0
dev.bce.0.stat_CatchupInFTQDiscards: 0
dev.bce.0.stat_CatchupInMBUFDiscards: 0
dev.bce.0.stat_CatchupInRuleCheckerP4Hit: 0
dev.bce.0.com_no_buffers: 0
dev.bce.1.%desc: Broadcom NetXtreme II BCM5708 1000Base-T (B2)
dev.bce.1.%driver: bce
dev.bce.1.%location: slot=0 function=0
dev.bce.1.%pnpinfo: vendor=0x14e4 device=0x164c subvendor=0x103c subdevice=0x7038 class=0x020000
dev.bce.1.%parent: pci5
dev.bce.1.mbuf_alloc_failed: 0
dev.bce.1.tx_dma_map_failures: 0
dev.bce.1.stat_IfHcInOctets: 4753538
dev.bce.1.stat_IfHCInBadOctets: 170783904
dev.bce.1.stat_IfHCOutOctets: 6144
dev.bce.1.stat_IfHCOutBadOctets: 0
dev.bce.1.stat_IfHCInUcastPkts: 1553
dev.bce.1.stat_IfHCInMulticastPkts: 86
dev.bce.1.stat_IfHCInBroadcastPkts: 71772
dev.bce.1.stat_IfHCOutUcastPkts: 95
dev.bce.1.stat_IfHCOutMulticastPkts: 0
dev.bce.1.stat_IfHCOutBroadcastPkts: 1
dev.bce.1.stat_emac_tx_stat_dot3statsinternalmactransmiterrors: 0
dev.bce.1.stat_Dot3StatsCarrierSenseErrors: 0
dev.bce.1.stat_Dot3StatsFCSErrors: 0
dev.bce.1.stat_Dot3StatsAlignmentErrors: 0
dev.bce.1.stat_Dot3StatsSingleCollisionFrames: 0
dev.bce.1.stat_Dot3StatsMultipleCollisionFrames: 0
dev.bce.1.stat_Dot3StatsDeferredTransmissions: 0
dev.bce.1.stat_Dot3StatsExcessiveCollisions: 0
dev.bce.1.stat_Dot3StatsLateCollisions: 0
dev.bce.1.stat_EtherStatsCollisions: 0
dev.bce.1.stat_EtherStatsFragments: 0
dev.bce.1.stat_EtherStatsJabbers: 0
dev.bce.1.stat_EtherStatsUndersizePkts: 0
dev.bce.1.stat_EtherStatsOverrsizePkts: 0
dev.bce.1.stat_EtherStatsPktsRx64Octets: 71912
dev.bce.1.stat_EtherStatsPktsRx65Octetsto127Octets: 1499
dev.bce.1.stat_EtherStatsPktsRx128Octetsto255Octets: 0
dev.bce.1.stat_EtherStatsPktsRx256Octetsto511Octets: 0
dev.bce.1.stat_EtherStatsPktsRx512Octetsto1023Octets: 0
dev.bce.1.stat_EtherStatsPktsRx1024Octetsto1522Octets: 0
dev.bce.1.stat_EtherStatsPktsRx1523Octetsto9022Octets: 0
dev.bce.1.stat_EtherStatsPktsTx64Octets: 96
dev.bce.1.stat_EtherStatsPktsTx65Octetsto127Octets: 0
dev.bce.1.stat_EtherStatsPktsTx128Octetsto255Octets: 0
dev.bce.1.stat_EtherStatsPktsTx256Octetsto511Octets: 0
dev.bce.1.stat_EtherStatsPktsTx512Octetsto1023Octets: 0
dev.bce.1.stat_EtherStatsPktsTx1024Octetsto1522Octets: 0
dev.bce.1.stat_EtherStatsPktsTx1523Octetsto9022Octets: 0
dev.bce.1.stat_XonPauseFramesReceived: 0
dev.bce.1.stat_XoffPauseFramesReceived: 0
dev.bce.1.stat_OutXonSent: 0
dev.bce.1.stat_OutXoffSent: 0
dev.bce.1.stat_FlowControlDone: 0
dev.bce.1.stat_MacControlFramesReceived: 0
dev.bce.1.stat_XoffStateEntered: 0
dev.bce.1.stat_IfInFramesL2FilterDiscards: 1299564
dev.bce.1.stat_IfInRuleCheckerDiscards: 0
dev.bce.1.stat_IfInFTQDiscards: 0
dev.bce.1.stat_IfInMBUFDiscards: 0
dev.bce.1.stat_IfInRuleCheckerP4Hit: 71858
dev.bce.1.stat_CatchupInRuleCheckerDiscards: 0
dev.bce.1.stat_CatchupInFTQDiscards: 0
dev.bce.1.stat_CatchupInMBUFDiscards: 0
dev.bce.1.stat_CatchupInRuleCheckerP4Hit: 0
dev.bce.1.com_no_buffers: 0
dev.miibus.0.%parent: bce0
dev.miibus.1.%parent: bce1
}}}

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Mon Dec 8 11:54:08 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Mon, 08 Dec 2008 11:54:08 -0000
Subject: [Varnish] #404: who use intel 82572EI PRO/1000 PT Desktop Adapter ? why it limit varnish capability??
In-Reply-To: <052.932400eb4d2450dd89eec4945a9faa59@projects.linpro.no>
References: <052.932400eb4d2450dd89eec4945a9faa59@projects.linpro.no>
Message-ID: <061.e26f5eea3904bee95f07f8a37035fdec@projects.linpro.no>

#404: who use intel 82572EI PRO/1000 PT Desktop Adapter ? why it limit varnish capability??
--------------------+-------------------------------------------------------
 Reporter:  chenxy  |        Owner:
     Type:  defect  |       Status:  closed
 Priority:  normal  |    Milestone:
Component:  build   |      Version:  trunk
 Severity:  normal  |   Resolution:  invalid
 Keywords:          |
--------------------+-------------------------------------------------------
Changes (by tfheen):

  * status:  new => closed
  * resolution:  => invalid

Comment:

If you have performance problems with your NIC, Varnish is probably not the right place to file a bug. We just use the BSD sockets API, and accept the performance we get from the kernel. You might have more luck asking for help on a FreeBSD-specific mailing list.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Dec 8 12:33:10 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 08 Dec 2008 12:33:10 -0000 Subject: [Varnish] #397: varnish panic and restarts every 3-5 minutes In-Reply-To: <060.27b3b11a1b62f3ca458585ac81a37f8c@projects.linpro.no> References: <060.27b3b11a1b62f3ca458585ac81a37f8c@projects.linpro.no> Message-ID: <069.e4bfec2ebdfe417f53bf98eb976f799e@projects.linpro.no> #397: varnish panic and restarts every 3-5 minutes ----------------------------+----------------------------------------------- Reporter: maheshollalwar | Owner: phk Type: defect | Status: new Priority: high | Milestone: Varnish 2.1 release Component: varnishd | Version: 2.0 Severity: major | Resolution: Keywords: | ----------------------------+----------------------------------------------- Comment (by maheshollalwar): Child (2520) said Closed fds: 4 6 10 11 13 14 Child (2520) said Child starts Child (2520) said managed to mmap 1610612736 bytes of 1610612736 Child (2520) said Ready Child (2520) died signal=6 Child (2520) Panic message: Assert error in WS_Reserve(), cache_ws.c line 156: Condition(ws->r == NULL) not true. 
thread = (cache-worker)sp = 0xafc52004 { fd = 14, id = 14, xid = 1859351212, client = 202.137.236.116:38679, step = STP_HIT, handling = 0x0, ws = 0xafc5204c { id = "sess", {s,f,r,e} = {0xafc524dc,,+218,(nil),+16384}, }, worker = 0x4a9f80e0 { }, vcl = { srcname = { "/etc/varnish/rediff.vcl", "Default", }, }, obj = 0x7e5ff000 { refcnt = 6, xid = 1859349194, ws = 0x7e5ff018 { id = "obj", {s,f,r,e} = {0x7e5ff1ec,,+7690,(nil),+7700}, }, http = { ws = 0x7e5ff018 { id = "obj", {s,f,r,e} = {0x7e5ff1ec,,+7690,(nil),+7700}, }, hd = { "Date: Mon, 08 Dec 2008 12:35:45 GMT", "Vary: *", "Last-Modified: Mon, 10 Nov 2008 12:36:44 GMT", "Content-Type: application/x-shockwave-flash", "Content-Length: 25719", "cache-control: max-age=604800", "Server: Rediff CDN", }, }, len = 25719, store = { 25719 { 43 57 53 06 be 84 00 00 78 9c 84 7c 05 5c 55 cb |CWS.....x..|.\U.| fa f6 da 9b 16 49 a5 bb bb a5 1b a4 bb bb 5b ba |.....I........[.| a5 b7 88 4a 23 dd a0 74 23 dd 02 12 12 d2 dd d2 |...J#..t#.......| 0d d2 a8 c4 b7 d1 73 ee f1 dc cf bf 77 e1 9e df |......s.....w...| [25655 more] }, }, }, }, Child cleanup complete child (6241) Started Child (6241) said Closed fds: 4 6 10 11 13 14 Child (6241) said Child starts Child (6241) said managed to mmap 1610612736 bytes of 1610612736 Child (6241) said Ready -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Dec 8 12:38:49 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 08 Dec 2008 12:38:49 -0000 Subject: [Varnish] #397: varnish panic and restarts every 3-5 minutes In-Reply-To: <060.27b3b11a1b62f3ca458585ac81a37f8c@projects.linpro.no> References: <060.27b3b11a1b62f3ca458585ac81a37f8c@projects.linpro.no> Message-ID: <069.77ac7af0b28749a3168cd7d6eb68c215@projects.linpro.no> #397: varnish panic and restarts every 3-5 minutes ----------------------------+----------------------------------------------- Reporter: maheshollalwar | Owner: phk Type: defect | Status: new Priority: high | 
Milestone: Varnish 2.1 release Component: varnishd | Version: 2.0 Severity: major | Resolution: Keywords: | ----------------------------+----------------------------------------------------- Comment (by phk): I think you're simply running out of workspace; try setting the sess_workspace parameter to 16384 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Dec 8 13:28:14 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 08 Dec 2008 13:28:14 -0000 Subject: [Varnish] #397: varnish panic and restarts every 3-5 minutes In-Reply-To: <060.27b3b11a1b62f3ca458585ac81a37f8c@projects.linpro.no> References: <060.27b3b11a1b62f3ca458585ac81a37f8c@projects.linpro.no> Message-ID: <069.2f4e06df5a174b3333f0c267788a3172@projects.linpro.no> #397: varnish panic and restarts every 3-5 minutes ----------------------------+----------------------------------------------- Reporter: maheshollalwar | Owner: phk Type: defect | Status: new Priority: high | Milestone: Varnish 2.1 release Component: varnishd | Version: 2.0 Severity: major | Resolution: Keywords: | ----------------------------+----------------------------------------------- Comment (by maheshollalwar): I have a 64-bit machine running CentOS 5.2. What is the optimal value that I can use? Also, I'm running two backends, and whenever I put load on both the backends, it crashes.
Below is my config :- DAEMON_OPTS="-a :80 \ -T localhost:6082 \ -f /etc/varnish/rediff.vcl \ -u varnish -g varnish \ -s file,/var/lib/varnish/varnish_storage.bin,60G \ -h classic,500009 \ -p listen_depth=4096 \ -p obj_workspace=32768 \ -p sess_workspace=32768" -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Dec 8 16:56:09 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 08 Dec 2008 16:56:09 -0000 Subject: [Varnish] #397: varnish panic and restarts every 3-5 minutes In-Reply-To: <060.27b3b11a1b62f3ca458585ac81a37f8c@projects.linpro.no> References: <060.27b3b11a1b62f3ca458585ac81a37f8c@projects.linpro.no> Message-ID: <069.08711b0c9b71b16a164921fa0a513aca@projects.linpro.no> #397: varnish panic and restarts every 3-5 minutes ----------------------------+----------------------------------------------- Reporter: maheshollalwar | Owner: phk Type: defect | Status: closed Priority: high | Milestone: Varnish 2.1 release Component: varnishd | Version: 2.0 Severity: major | Resolution: invalid Keywords: | ----------------------------+----------------------------------------------- Changes (by perbu): * status: new => closed * resolution: => invalid Comment: The ticket system is for confirmed bugs. Please use the mailing list. 
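For reference, the workspace parameters discussed in this thread can also be inspected and changed on a running instance through the management interface, without restarting varnishd (a sketch; the -T address matches the configs shown above, and the value is only illustrative — changed workspace sizes typically take effect only for new sessions/objects):

{{{
varnishadm -T localhost:6082 param.show sess_workspace
varnishadm -T localhost:6082 param.set sess_workspace 32768
}}}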
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Dec 12 11:07:17 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 12 Dec 2008 11:07:17 -0000 Subject: [Varnish] #405: Varnish problem with purge requests Message-ID: <052.cf256d302e48e7823f83c4eea4d95c5e@projects.linpro.no> #405: Varnish problem with purge requests ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Keywords: ----------------------+----------------------------------------------------- I use Varnish 2.0.2 on FreeBSD/amd64 7.0-RELEASE. I use a VCL like the one in http://varnish.projects.linpro.no/wiki/VCLExamplePurging. It works fine. But after a hiccup on one of my webservers (where it starts to send 404 and bogus content), my Varnish servers (often all of them) switch from handling PURGE to forwarding the PURGE requests to my web servers. Ngrep dump: {{{ T 32.32.32.241:49548 -> 32.32.33.219:80 [AP] PURGE / HTTP/1.0 Host: foo.bar.no Connection: close #### T 32.32.33.239:80 -> 32.32.33.219:52540 [AS] ....... PmPF.... ## T 32.32.33.219:52540 -> 32.32.33.239:80 [AP] PURGE / HTTP/1.0. Host: foo.bar.no. X-Varnish: 1526176926. X-Forwarded-For: 32.32.32.241. . # T 32.32.33.239:80 -> 32.32.33.219:52540 [AFP] HTTP/1.0 501 Not Supported. .

Not Supported Method

..t: */*. Referer: http://www.bt.no/. Accept-Language: no. UA-CPU: x86. Accept-Encoding: gzip, deflate. User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 2.0.50727; .NET CLR 1.1.4322 ### T 32.32.33.219:80 -> 32.32.32.241:49548 [AP] HTTP/1.0 501 Not Supported. X-Varnish-IP: 32.32.33.219. Cache-Control: max-age=1800. Date: Fri, 12 Dec 2008 10:40:10 GMT. X-Varnish: 1526176926. Age: 0. Via: 1.1 varnish. Connection: close. }}} In that dump, the IPs are: 32.32.32.241 Administrative server that actually sends the PURGE request 32.32.33.219 Cache server (VIP on a load balancer) 32.32.33.239 Web server (VIP on a load balancer) How can this be? Varnish is supposed to handle PURGE here, Apache is not supposed to get those requests at all. Varnishlog from the same event: {{{ 8 ReqStart c 32.32.32.241 49548 1526176926 8 RxRequest c PURGE 8 RxURL c / 8 RxProtocol c HTTP/1.0 8 RxHeader c Host: foo.bar.no 8 RxHeader c Connection: close 8 VCL_call c recv 8 VCL_acl c MATCH aipurge 32.32.32.241/32 8 VCL_return c lookup 8 VCL_call c hash 8 VCL_return c hash 8 HitPass c 1525573757 8 VCL_call c pass 8 VCL_return c pass 9 BackendClose - opnew 9 BackendOpen b opnew 32.32.33.219 52540 32.32.33.239 80 8 Backend c 9 opnew opnew 9 TxRequest b PURGE 9 TxURL b / 9 TxProtocol b HTTP/1.0 9 TxHeader b Host: foo.bar.no 9 TxHeader b X-Varnish: 1526176926 9 TxHeader b X-Forwarded-For: 32.32.32.241 9 RxProtocol b HTTP/1.0 9 RxStatus b 501 9 RxResponse b Not Supported 8 ObjProtocol c HTTP/1.0 8 ObjStatus c 501 8 ObjResponse c Not Supported 9 BackendReuse b opnew 8 TTL c 1526176926 RFC 1800 1229078410 0 0 0 0 8 VCL_call c fetch 8 TTL c 1526176926 VCL 1800 1229078410 8 VCL_return c pass 8 Length c 0 8 VCL_call c deliver 8 VCL_return c deliver 8 TxProtocol c HTTP/1.0 8 TxStatus c 501 8 TxResponse c Not Supported 8 TxHeader c X-Varnish-IP: 32.32.33.219 8 TxHeader c Cache-Control: max-age=1800 8 TxHeader c Date: Fri, 12 Dec 2008 10:40:10 GMT 8 TxHeader c X-Varnish: 1526176926 8 
TxHeader c Age: 0 8 TxHeader c Via: 1.1 varnish 8 TxHeader c Connection: close 8 ReqEnd c 1526176926 1229078410.067601204 1229078410.068562508 0.000068188 0.000908375 0.000052929 8 SessionClose c Connection: close 8 StatSess c 32.32.32.241 49548 0 1 1 0 1 1 192 0 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Dec 12 11:10:40 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 12 Dec 2008 11:10:40 -0000 Subject: [Varnish] #405: Varnish problem with purge requests In-Reply-To: <052.cf256d302e48e7823f83c4eea4d95c5e@projects.linpro.no> References: <052.cf256d302e48e7823f83c4eea4d95c5e@projects.linpro.no> Message-ID: <061.371750d5ef677613f977459293344262@projects.linpro.no> #405: Varnish problem with purge requests ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by anders): The problem seems to go away by itself after some minutes. No restart required. But it is still a problem I would like to see fixed. Purge handling is critical.
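For context, the usual VCLExamplePurging pattern answers PURGE entirely inside Varnish (a sketch reconstructed from that wiki pattern and the "aipurge" ACL name visible in the varnishlog above; the ACL entries are illustrative). Note that the varnishlog appears to show the PURGE matching a hit-for-pass object (the HitPass line) and then being passed to the backend, which would explain how the request bypasses the vcl_hit purge logic and reaches Apache:

{{{
acl aipurge {
    "localhost";
    "32.32.32.241";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ aipurge) {
            error 405 "Not allowed.";
        }
        lookup;
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        error 404 "Not in cache.";
    }
}
}}}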
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Dec 16 14:45:02 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 16 Dec 2008 14:45:02 -0000 Subject: [Varnish] #406: Add information about ESI documents needing to be XML docs Message-ID: <052.92730d808dedd28564a2c2af6d65094d@projects.linpro.no> #406: Add information about ESI documents needing to be XML docs ---------------------------------+------------------------------------------ Reporter: eugaia | Type: documentation Status: new | Priority: low Milestone: | Component: documentation Version: 2.0 | Severity: trivial Keywords: esi xml opening tag | ---------------------------------+------------------------------------------ Hi, One problem I found when using ESI was that it didn't work if I had text before the first tag and if the document itself doesn't have an opening XML tag. I'm sure that this is part of the ESI recommendation, and is my fault for not reading things thoroughly, but I felt it might be useful to mention it on this page: http://varnish.projects.linpro.no/wiki/ESIfeatures perhaps at the bottom in the things to remember, as others may be puzzled why things are not working as they expected. On a level not relating to the documentation, but which I'll include here: have you considered dropping the requirement for an opening XML tag with ESI processing? This restricts using ESI to XML-type documents, or ones that can be solely built from parts, unless you use something like at the beginning so that further ESI's are correctly processed. This seems a little unnecessary. One use, for example, might be to build CSS files 'on the fly' using Varnish - I may well do this, rather than doing CSS or HTML 'includes' - as they will be quicker to load in the browser.
It seems to me that you are gaining nothing by imposing the opening tag requirement, and you are creating an added overhead (both when you do something like including an empty file like above and I'm guessing under normal ESI processing, since are you not checking for opening tags?). Just a thought, anyway. Well done on a cracking product. Looking forward to future updates/improvements. -- Ticket URL: Varnish The Varnish HTTP Accelerator From herocjx at sohu.com Mon Dec 15 10:00:12 2008 From: herocjx at sohu.com (herocjx at sohu.com) Date: Mon, 15 Dec 2008 18:00:12 +0800 Subject: [Varnish] #84: High number of "dropped work requests" Message-ID: <90307BC99DC649F6A078EA8255B17536@200809051617> varnish Create worker thread failed 12 Cannot allocate memory -------------- next part -------------- An HTML attachment was scrubbed... URL: From varnish-bugs at projects.linpro.no Tue Dec 16 17:02:53 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 16 Dec 2008 17:02:53 -0000 Subject: [Varnish] #407: Documentation : the restart sample does not work if the backend is dead Message-ID: <053.4b0667720c2c2bf409f72fa6bd1366bb@projects.linpro.no> #407: Documentation : the restart sample does not work if the backend is dead ---------------------+------------------------------------------------------ Reporter: jfbubus | Type: documentation Status: new | Priority: normal Milestone: | Component: documentation Version: trunk | Severity: normal Keywords: | ---------------------+------------------------------------------------------ The VCLExampleRestarts sample does not work if the backend is away. 
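To make the constraint discussed in this ticket concrete, a document that Varnish's ESI processor accepts would look roughly like this (a sketch; the exact detection rules are the subject of the ticket, and the file names are invented for illustration):

{{{
<?xml version="1.0"?>
<html>
  <esi:include src="/header.html"/>
  ...page body...
  <esi:include src="/footer.html"/>
</html>
}}}

The reporter's point is that a plain CSS or text file has no such opening tag before the first esi:include, so it would not be processed.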
It works better using probes & health checking:

{{{
backend b1 {
    .host = "fs.freebsd.dk";
    .port = "82";
    .probe = {
        .url = "/probe.cgi";
        .timeout = 34 ms;
        .interval = 1s;
        .window = 10;
        .threshold = 8;
    }
}
backend b2 {
    .host = "fs.freebsd.dk";
    .port = "81";
    .probe = {
        .url = "/probe.cgi";
        .timeout = 34 ms;
        .interval = 1s;
        .window = 10;
        .threshold = 8;
    }
}
backend b3 {
    .host = "fs.freebsd.dk";
    .port = "80";
    .probe = {
        .url = "/probe.cgi";
        .timeout = 34 ms;
        .interval = 1s;
        .window = 10;
        .threshold = 8;
    }
}

sub vcl_recv {
    set req.backend = b1;
    if (req.restarts == 1 || !req.backend.healthy) {
        set req.backend = b2;
    }
    if (req.restarts == 2 || !req.backend.healthy) {
        set req.backend = b3;
    }
}

sub vcl_fetch {
    ## If the request to the backend returns a code other than 200, restart the loop.
    ## If the number of restarts reaches the value of the parameter max_restarts,
    ## the request will be error'ed. max_restarts defaults to 4. This prevents
    ## an eternal loop in the event that, e.g., the object does not exist at all.
if (obj.status != 200 && obj.status != 403 && obj.status != 404) { restart; } } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Dec 16 17:03:48 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 16 Dec 2008 17:03:48 -0000 Subject: [Varnish] #407: Documentation : the restart sample does not work if the backend is dead In-Reply-To: <053.4b0667720c2c2bf409f72fa6bd1366bb@projects.linpro.no> References: <053.4b0667720c2c2bf409f72fa6bd1366bb@projects.linpro.no> Message-ID: <062.0670f8b534c8195607b76c4cbff50636@projects.linpro.no> #407: Documentation : the restart sample does not work if the backend is dead ---------------------------+------------------------------------------------ Reporter: jfbubus | Owner: Type: documentation | Status: new Priority: normal | Milestone: Component: documentation | Version: trunk Severity: normal | Resolution: Keywords: | ---------------------------+------------------------------------------------ Comment (by jfbubus): Link to the wiki page [wiki:VCLExampleRestarts]. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Dec 16 20:58:56 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 16 Dec 2008 20:58:56 -0000 Subject: [Varnish] #408: Varnish crash with -h critbit Message-ID: <052.1dfd87ee48c6d695b0cbdaef82616f0a@projects.linpro.no> #408: Varnish crash with -h critbit ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- I am running Varnish trunk/3458 on FreeBSD/amd64 7.0-RELEASE. After running for several days with millions of objects in the cache, Varnish crashed. 
Backtrace: {{{ (gdb) bt #0 0x0000000000414807 in hsh_rush (oh=0x1350f7a040) at cache_hash.c:375 Cannot access memory at address 0x7fff97cbe978 }}} Unfortunately that is all I have for now. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Dec 16 22:38:25 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 16 Dec 2008 22:38:25 -0000 Subject: [Varnish] #359: regsub documented as using $1, $2 etc, but actually uses \1, \2 for replacement strings In-Reply-To: <052.caf5e0adaa1ed9af7ff10129b15d8a9c@projects.linpro.no> References: <052.caf5e0adaa1ed9af7ff10129b15d8a9c@projects.linpro.no> Message-ID: <061.a915bd5178db83622ed092fdfea9a4a6@projects.linpro.no> #359: regsub documented as using $1,$2 etc, but actually uses \1, \2 for replacement strings ---------------------------+------------------------------------------------ Reporter: eugaia | Owner: Type: documentation | Status: reopened Priority: low | Milestone: Component: documentation | Version: trunk Severity: minor | Resolution: Keywords: regsub | ---------------------------+------------------------------------------------ Changes (by hajile): * status: closed => reopened * resolution: fixed => Comment: I can reproduce this debian sid x64, varnish 2.0.2 maybe regex library you are using has some modes like posix/perl/awk and apparently uses awk-style regexps? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Dec 17 02:28:56 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 17 Dec 2008 02:28:56 -0000 Subject: [Varnish] #409: Segmentation fault with malformed regex ~! Message-ID: <052.44f7b40c9e8de9781e864a9037b0b1de@projects.linpro.no> #409: Segmentation fault with malformed regex ~! 
----------------------+----------------------------------------------------- Reporter: eugaia | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: major | Keywords: ----------------------+----------------------------------------------------- I have experienced a segmentation fault with a VCL entry like if ( req.url ~ ! "\.(png|jpg|gif|js|css)$" ) { ... } # incorrect regex check syntax which could (correctly) read if (! req.url ~ "\.(png|jpg|gif|js|css)$" ) { ... } When loading a VCL file with this in it, it caused Varnish to crash, indicating a segmentation fault. Also, when trying to restart Varnish, it indicated that there was a segmentation fault rather than simply saying that the VCL wasn't valid. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Dec 17 02:30:49 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 17 Dec 2008 02:30:49 -0000 Subject: [Varnish] #409: Segmentation fault with malformed regex ~! In-Reply-To: <052.44f7b40c9e8de9781e864a9037b0b1de@projects.linpro.no> References: <052.44f7b40c9e8de9781e864a9037b0b1de@projects.linpro.no> Message-ID: <061.6bd41f98ceda4db1cf4a6762e10e44b1@projects.linpro.no> #409: Segmentation fault with malformed regex ~! ----------------------+----------------------------------------------------- Reporter: eugaia | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: major | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by eugaia): I forgot to add that I'm using: Ubuntu 8.04 Varnish 2.0.2 in case that's of use. 
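For anyone hitting the same trap while the crash remains unfixed: VCL accepts negated matches either with the dedicated !~ operator or by negating the whole parenthesized expression, so the intended check from this ticket can be written as (a sketch of the two valid forms; the actions inside are illustrative):

{{{
sub vcl_recv {
    # Negated regex match using the !~ operator ...
    if (req.url !~ "\.(png|jpg|gif|js|css)$") {
        pass;
    }
    # ... or by negating the whole expression.
    if (!(req.url ~ "\.(png|jpg|gif|js|css)$")) {
        pass;
    }
}
}}}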
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Dec 17 07:32:14 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 17 Dec 2008 07:32:14 -0000 Subject: [Varnish] #410: Varnish uses up all system RAM on OpenVZ VPS Message-ID: <052.577f7cc74b7ad374f42f1810ecbe4620@projects.linpro.no> #410: Varnish uses up all system RAM on OpenVZ VPS --------------------+------------------------------------------------------- Reporter: eugaia | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 2.0 | Severity: blocker Keywords: | --------------------+------------------------------------------------------- Hi, I'm not sure whether this is a problem with the OpenVZ (VPS) platform, but I cannot get Varnish to run on it. I compile using standard options (I just used a --prefix), and Varnish compiles 'OK', but when I try to start Varnish, using the following daemon options: -a 0.0.0.0:80 -n simpl -f /simpl/conf/run/varnish/simpl.vcl -u varnish -w 50,1000,10 -s file,/tmp/varnish,100M -T 127.0.0.1:4100 I get the following error: Starting HTTPd accelerator: storage_file: filename: /tmp/varnish size 100 MB. Using old SHMFILE Notice: locking SHMFILE in core failed: Resource temporarily unavailable Note, I originally had the SHM file to be 50%, and then tried lower amounts. 100MB is well below the maximum amount of RAM I have available on my VPS (around 880MB), however, when Varnish is started, it uses up all (or almost all) the RAM. I have Varnish running perfectly happily on my laptop, however. My OpenVZ VPS runs Fedora8, and it uses i686 chips. Varnish version 2.0.2. If it would be helpful, I'm quite happy for you to log in to my server to do any testing - contact me privately through the email address I have registered should you wish to do so. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Dec 17 07:43:38 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 17 Dec 2008 07:43:38 -0000 Subject: [Varnish] #410: Varnish uses up all system RAM on OpenVZ VPS In-Reply-To: <052.577f7cc74b7ad374f42f1810ecbe4620@projects.linpro.no> References: <052.577f7cc74b7ad374f42f1810ecbe4620@projects.linpro.no> Message-ID: <061.bdf959cc6fb5837e1970a358c2f7155f@projects.linpro.no> #410: Varnish uses up all system RAM on OpenVZ VPS ---------------------+------------------------------------------------------ Reporter: eugaia | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 2.0 Severity: blocker | Resolution: Keywords: | ---------------------+------------------------------------------------------ Comment (by eugaia): Sorry, I should have put the component as varnishd. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Dec 17 12:31:04 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 17 Dec 2008 12:31:04 -0000 Subject: [Varnish] #411: restart not available in vcl_deliver Message-ID: <052.e821395e1393af362ec2cde4dc955c46@projects.linpro.no> #411: restart not available in vcl_deliver ----------------------+----------------------------------------------------- Reporter: hajile | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Reason for restart in vcl_deliver (not in fetch or hit) is to cache a redirect (in my case 303: See other) in fetch phase, then restart with url=Location right in varnish, without redirecting the client (thus optimizing internal network traffic). Both operations are perfectly cacheable and at the same time consistent (no need to purge two urls, purge only the primary one). 
This feature is very useful for proxying RESTful services:

/object/{id} -> return document [cached, purge when return document changes]
/object/?findByNonPrimaryButUniqueKey=bla -> HTTP 303 ( /object/{id} ) [cached, purge only when this unique key changes]
/object/?findByOtherNonPrimaryButUniqueKey=22895723658 -> HTTP 303 ( /object/{id} ) [cached, -""-]

{{{
sub vcl_hit {
    if (obj.http.X-Magic-Redirect == "1") {
        set req.url = obj.http.Location;
        restart;
    }
}

sub vcl_fetch {
    if (obj.status == 301 || obj.status == 302 || obj.status == 303 || obj.status == 307) {
        if (obj.status == 303) {
            set obj.http.X-Magic-Redirect = "1";
        }
        deliver;
    }
}

sub vcl_deliver {
    if (resp.http.X-Magic-Redirect == "1") {
        unset resp.http.X-Magic-Redirect;
        restart;
    }
    deliver;
}
}}}
After all, there's no reason to try to protect the config writer from design mistakes - if bad design is currently the only available option then it's better than having no feature at all. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Dec 17 19:42:26 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 17 Dec 2008 19:42:26 -0000 Subject: [Varnish] #412: make caching still possible in case of 'restart' from vcl_fetch and implement 'restart' from vcl_hit Message-ID: <052.a34c2c991e339001ae66cdaed7e49c57@projects.linpro.no> #412: make caching still possible in case of 'restart' from vcl_fetch and implement 'restart' from vcl_hit -------------------------+-------------------------------------------------- Reporter: hajile | Owner: phk Type: enhancement | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: -------------------------+-------------------------------------------------- This is useful for caching redirects and following them internally (to minimize network traffic), for example with this config (internally following 303: See Other):

{{{
backend test {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_hit {
    if (obj.http.X-Magic-Redirect == "1") {
        set req.url = obj.http.Location;
        restart;
    }
}

sub vcl_fetch {
    if (obj.status == 303) {
        set obj.cacheable = true;
        set obj.http.X-Magic-Redirect = "1";
        set req.url = obj.http.Location;
        restart;
    }
}
}}}

{{{
--- varnish-2.0.2-orig/bin/varnishd/cache_center.c	2008-12-17 19:26:58.000000000 +0300
+++ varnish-2.0.2/bin/varnishd/cache_center.c	2008-12-17 19:36:43.000000000 +0300
@@ -413,8 +413,13 @@
 	switch (sp->handling) {
 	case VCL_RET_RESTART:
-		sp->obj->ttl = 0;
-		sp->obj->cacheable = 0;
+		if (sp->obj->ttl == 0) {
+			sp->obj->cacheable = 0;
+		}
+		if (sp->obj->objhead != NULL && sp->obj->cacheable == 1) {
+			VRY_Create(sp);
+			EXP_Insert(sp->obj);
+		}
 		HSH_Unbusy(sp);
 		HSH_Deref(sp->obj);
 		sp->obj = NULL;
@@ -529,6 +534,14 @@
 	VCL_hit_method(sp);
+	if (sp->handling == VCL_RET_RESTART) {
+		sp->step = STP_RECV;
+		HSH_Deref(sp->obj);
+		sp->obj = NULL;
+		sp->director = NULL;
+		return (0);
+	}
+
 	if (sp->handling == VCL_RET_DELIVER) {
 		/* Dispose of any body part of the request */
 		FetchReqBody(sp);
}}}

-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 10:15:58 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 10:15:58 -0000 Subject: [Varnish] #412: make caching still possible in case of 'restart' from vcl_fetch and implement 'restart' from vcl_hit In-Reply-To: <052.a34c2c991e339001ae66cdaed7e49c57@projects.linpro.no> References: <052.a34c2c991e339001ae66cdaed7e49c57@projects.linpro.no> Message-ID: <061.0f9798b6e20807af5dd3767ab8a94d53@projects.linpro.no> #412: make caching still possible in case of 'restart' from vcl_fetch and implement 'restart' from vcl_hit -------------------------+-------------------------------------------------- Reporter: hajile | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: That is a very interesting use of VCL. I think it might be a good idea to explicitly think about and set the TTL in vcl_fetch{}. As of r3465 this works. Thanks!
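Following phk's advice about setting the TTL explicitly, the reporter's vcl_fetch could look roughly like this (a sketch; the one-hour TTL is only an illustrative choice, not a recommendation from the ticket):

{{{
sub vcl_fetch {
    if (obj.status == 303) {
        set obj.cacheable = true;
        set obj.ttl = 1h;    /* choose the cached redirect's lifetime explicitly */
        set obj.http.X-Magic-Redirect = "1";
        set req.url = obj.http.Location;
        restart;
    }
}
}}}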
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 10:20:21 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 10:20:21 -0000 Subject: [Varnish] #410: Varnish uses up all system RAM on OpenVZ VPS In-Reply-To: <052.577f7cc74b7ad374f42f1810ecbe4620@projects.linpro.no> References: <052.577f7cc74b7ad374f42f1810ecbe4620@projects.linpro.no> Message-ID: <061.e082c2a44d7b0208a1ecabdb0979370f@projects.linpro.no> #410: Varnish uses up all system RAM on OpenVZ VPS ---------------------+------------------------------------------------------ Reporter: eugaia | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 2.0 Severity: blocker | Resolution: worksforme Keywords: | ---------------------+------------------------------------------------------ Changes (by phk): * status: new => closed * resolution: => worksforme Comment: I'm slightly surprised that Varnish would use 8 times as much RAM as you have given it storage, but absent information about what amount of traffic you throw at it or any other details, it is awfully hard for me to say something intelligent. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 10:31:09 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 10:31:09 -0000 Subject: [Varnish] #409: Segmentation fault with malformed regex ~! In-Reply-To: <052.44f7b40c9e8de9781e864a9037b0b1de@projects.linpro.no> References: <052.44f7b40c9e8de9781e864a9037b0b1de@projects.linpro.no> Message-ID: <061.5d7134122db021699a706213d38ae50f@projects.linpro.no> #409: Segmentation fault with malformed regex ~! 
----------------------+----------------------------------------------------- Reporter: eugaia | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: major | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: (In [3466]) Add a missing error check. Fixes #409 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 10:34:56 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 10:34:56 -0000 Subject: [Varnish] #403: Crash on VCL syntax error with Varnish 2.0.2 In-Reply-To: <050.849b7245d319acc6706a1df9f6ed409a@projects.linpro.no> References: <050.849b7245d319acc6706a1df9f6ed409a@projects.linpro.no> Message-ID: <059.76e42a1995e0621b8edf16d87690ff92@projects.linpro.no> #403: Crash on VCL syntax error with Varnish 2.0.2 --------------------+------------------------------------------------------- Reporter: olau | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: This is the same issue as #409 and it is fixed in r3453 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 11:30:06 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 11:30:06 -0000 Subject: [Varnish] #400: HTTP/1.0 404 Not Found + no Content-Length => no content In-Reply-To: <053.c18029fa6fb3d7c60156a7702c8fa2a5@projects.linpro.no> References: <053.c18029fa6fb3d7c60156a7702c8fa2a5@projects.linpro.no> Message-ID: <062.58ec714afac13b8255c7dcea3ab2d69c@projects.linpro.no> #400: HTTP/1.0 404 Not Found + no Content-Length => no 
content ----------------------+----------------------------------------------------- Reporter: jfbubus | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: (In [3470]) Change the logic that decides when we attempt EOF fetches from the backend. The new logic is:

{{{
if (HEAD)                          /* happens only on pass */
    do not fetch body
else if (Content-Length)
    fetch body according to length
else if (chunked)
    fetch body as chunked
else if (other transfer-encoding)
    fail
else if (Connection: keep-alive)
    fetch no body, set Length = 0
else if (Connection: close)
    fetch body until EOF
else if (HTTP < 1.1)
    fetch body until EOF
else
    fetch no body, set Length = 0
}}}

let me know if this breaks anything that should work. Fixes #400 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 11:38:53 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 11:38:53 -0000 Subject: [Varnish] #401: Post Gzip after composing page with esi:includes In-Reply-To: <050.0a272c50f5c24cc022ae5d570f978e25@projects.linpro.no> References: <050.0a272c50f5c24cc022ae5d570f978e25@projects.linpro.no> Message-ID: <059.a2a9f6114ae7dd6c2e42c7af6cad5b97@projects.linpro.no> #401: Post Gzip after composing page with esi:includes -----------------------------------------------+---------------------------- Reporter: xaos | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: 2.0 Severity: normal | Resolution: invalid Keywords: esi gzip accept-encoding compress | -----------------------------------------------+---------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: This is already on the
"wishlist" http://varnish.projects.linpro.no/wiki/PostTwoShoppingList -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 11:50:16 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 11:50:16 -0000 Subject: [Varnish] #398: logging off phpbb always causes "Error 503 Service Unavailable" In-Reply-To: <053.fcc528fb5f49a4e239de8466a1c0a6d5@projects.linpro.no> References: <053.fcc528fb5f49a4e239de8466a1c0a6d5@projects.linpro.no> Message-ID: <062.6609a33551c9afbdb22ca8ff4c5878aa@projects.linpro.no> #398: logging off phpbb always causes "Error 503 Service Unavailable" -------------------------------------------+-------------------------------- Reporter: mmpower | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: Error 503 Service Unavailable | -------------------------------------------+-------------------------------- Old description: > I have the backend running phpbb (www.phpbb.com) forum software. > everytime user logs out from phpbb the browser shows this kind of error > message: > > "Error 503 Service Unavailable > > Service Unavailable > Guru Meditation: > > XID: 1569421074 > Varnish" > > there is no problem logging in to phpbb. no problme to login/off to other > web apps on the same host via varnishd... > > Varnish version is 2.02. 
> OS: linux (Fedora 6), 32 bit > amount of ram : 6GB > > varnishlog trace info for the failing request: > > 14 SessionOpen c 72.83.129.191 54449 *:80 > 14 ReqStart c 72.83.129.191 54449 1569432969 > 14 RxRequest c GET > 14 RxURL c > /forum/login.php?logout=true&sid=cab5706b5795a7586969328f9df81f8c > 14 RxProtocol c HTTP/1.1 > 14 RxHeader c Host: www.haiguinet.com > 14 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS > X 10.4; en-US; rv:1.9.0.4) Gecko/2008102920 Firefox/3.0.4 > 14 RxHeader c Accept: > text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > 14 RxHeader c Accept-Language: en-us,en;q=0.5 > 14 RxHeader c Accept-Encoding: gzip,deflate > 14 RxHeader c Accept-Charset: UTF-8,* > 14 RxHeader c Keep-Alive: 300 > 14 RxHeader c Connection: keep-alive > 14 RxHeader c Referer: > http://www.haiguinet.com/forum/index.php?sid=cab5706b5795a7586969328f9df81f8c > 14 RxHeader c Cookie: setframeview=0; > userstatusserver=http%3A//211.100.30.224%3A8080/flashIM/wdkstatus; > flashservers=rtmp%3A//211.100.30.224%3A80/flashIM%2Crtmp%3A//wdkchat1.vicp.net%3A80/freechat%2Chttp%3A//211.100.30.224%3A8080/flashgo%2Chttp%3A//wdkchat1.vicp.net%3 > 14 VCL_call c recv pass > 14 VCL_call c pass pass > 14 Backend c 10 default default > 14 VCL_call c error deliver > 14 Length c 453 > 14 VCL_call c deliver deliver > 14 TxProtocol c HTTP/1.1 > 14 TxStatus c 503 > 14 TxResponse c Service Unavailable > 14 TxHeader c Server: Varnish > 14 TxHeader c Retry-After: 0 > 14 TxHeader c Content-Type: text/html; charset=utf-8 > 14 TxHeader c Content-Length: 453 > 14 TxHeader c Date: Sun, 30 Nov 2008 06:16:44 GMT > 14 TxHeader c X-Varnish: 1569432969 > 14 TxHeader c Age: 0 > 14 TxHeader c Via: 1.1 varnish > 14 TxHeader c Connection: close > 14 ReqEnd c 1569432969 1228025804.806035519 1228025804.817787170 > 0.329633474 0.011698008 0.000053644 > > > my VCL file: > > backend default { > .host = "127.0.0.1"; > .port = "81"; > } > sub vcl_recv { > remove req.http.X-Forwarded-For; > 
set req.http.X-Forwarded-For = client.ip; > > if (req.request == "GET" && req.url ~ > "\.(jpg|jpeg|gif|ico|html|htm|css|js|pdf|xls|vsd|doc|ppt|iso)$") { > lookup; > } > if (req.request == "GET" && (req.url ~ "www.haiguinet.com/$" || req.url > ~ "www.haiguinet.com$" || req.url ~ "get_data.php\?")) { > lookup; > } > if (req.http.Expect) { > pipe; > } > if (req.http.Cache-Control ~ "no-cache") { > pass; > } > > } > > sub vcl_fetch { > > if (obj.http.Pragma ~ "no-cache" > || obj.http.Cache-Control ~ "no-cache" > || obj.http.Cache-Control ~ "private") { > pass; > } > if (req.url ~ "\.(png|gif|jpg|swf|css)$") { > unset obj.http.set-cookie; > } > > } New description: I have the backend running phpbb (www.phpbb.com) forum software. everytime user logs out from phpbb the browser shows this kind of error message: "Error 503 Service Unavailable Service Unavailable Guru Meditation: XID: 1569421074 Varnish" there is no problem logging in to phpbb. no problme to login/off to other web apps on the same host via varnishd... Varnish version is 2.02. 
OS: linux (Fedora 6), 32 bit amount of ram : 6GB varnishlog trace info for the failing request: {{{ 14 SessionOpen c 72.83.129.191 54449 *:80 14 ReqStart c 72.83.129.191 54449 1569432969 14 RxRequest c GET 14 RxURL c /forum/login.php?logout=true&sid=cab5706b5795a7586969328f9df81f8c 14 RxProtocol c HTTP/1.1 14 RxHeader c Host: www.haiguinet.com 14 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.4; en-US; rv:1.9.0.4) Gecko/2008102920 Firefox/3.0.4 14 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 14 RxHeader c Accept-Language: en-us,en;q=0.5 14 RxHeader c Accept-Encoding: gzip,deflate 14 RxHeader c Accept-Charset: UTF-8,* 14 RxHeader c Keep-Alive: 300 14 RxHeader c Connection: keep-alive 14 RxHeader c Referer: http://www.haiguinet.com/forum/index.php?sid=cab5706b5795a7586969328f9df81f8c 14 RxHeader c Cookie: setframeview=0; userstatusserver=http%3A//211.100.30.224%3A8080/flashIM/wdkstatus; flashservers=rtmp%3A//211.100.30.224%3A80/flashIM%2Crtmp%3A//wdkchat1.vicp.net%3A80/freechat%2Chttp%3A//211.100.30.224%3A8080/flashgo%2Chttp%3A//wdkchat1.vicp.net%3 14 VCL_call c recv pass 14 VCL_call c pass pass 14 Backend c 10 default default 14 VCL_call c error deliver 14 Length c 453 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 503 14 TxResponse c Service Unavailable 14 TxHeader c Server: Varnish 14 TxHeader c Retry-After: 0 14 TxHeader c Content-Type: text/html; charset=utf-8 14 TxHeader c Content-Length: 453 14 TxHeader c Date: Sun, 30 Nov 2008 06:16:44 GMT 14 TxHeader c X-Varnish: 1569432969 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 ReqEnd c 1569432969 1228025804.806035519 1228025804.817787170 0.329633474 0.011698008 0.000053644 my VCL file: backend default { .host = "127.0.0.1"; .port = "81"; } sub vcl_recv { remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; if (req.request == "GET" && req.url ~ 
"\.(jpg|jpeg|gif|ico|html|htm|css|js|pdf|xls|vsd|doc|ppt|iso)$") { lookup; } if (req.request == "GET" && (req.url ~ "www.haiguinet.com/$" || req.url ~ "www.haiguinet.com$" || req.url ~ "get_data.php\?")) { lookup; } if (req.http.Expect) { pipe; } if (req.http.Cache-Control ~ "no-cache") { pass; } } sub vcl_fetch { if (obj.http.Pragma ~ "no-cache" || obj.http.Cache-Control ~ "no-cache" || obj.http.Cache-Control ~ "private") { pass; } if (req.url ~ "\.(png|gif|jpg|swf|css)$") { unset obj.http.set-cookie; } } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 11:51:57 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 11:51:57 -0000 Subject: [Varnish] #398: logging off phpbb always causes "Error 503 Service Unavailable" In-Reply-To: <053.fcc528fb5f49a4e239de8466a1c0a6d5@projects.linpro.no> References: <053.fcc528fb5f49a4e239de8466a1c0a6d5@projects.linpro.no> Message-ID: <062.ce6bb73aac2c4fc96e436c21c2e0435e@projects.linpro.no> #398: logging off phpbb always causes "Error 503 Service Unavailable" -------------------------------------------+-------------------------------- Reporter: mmpower | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: Error 503 Service Unavailable | -------------------------------------------+-------------------------------- Comment (by phk): Did you run varnishlog with "-c" argument ? If so, could you leave that out so that we can also see the log entries for the backend communication ? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 11:52:31 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 11:52:31 -0000 Subject: [Varnish] #392: Varnish installs but doesn't start on suse 10.3 (X86-64) In-Reply-To: <052.3c6bf685f7cf98159b81e6983206c77b@projects.linpro.no> References: <052.3c6bf685f7cf98159b81e6983206c77b@projects.linpro.no> Message-ID: <061.828926257f5e0a3347743a68e84b5c02@projects.linpro.no> #392: Varnish installs but doesn't start on suse 10.3 (X86-64) --------------------+------------------------------------------------------- Reporter: plfgoa | Owner: Type: task | Status: new Priority: high | Milestone: Component: build | Version: 2.0 Severity: major | Resolution: Keywords: | --------------------+------------------------------------------------------- Old description: > Hi, > > When I try to start varnish 2.0.2 in debug mode , > > ./varnishd -d -a :9999 -f /home/accel/varnish/etc/varnish.vcl -s > file,/home/accel/varnish/var/cache,1G > > I get the following message > > storage_file: filename: /home/accel/varnish/var/cache size 1024 MB. > > Using old SHMFILE > Debugging mode, enter "start" to start child > start > child (28673) Started > Pushing vcls failed: Internal error: No VCL_conf symbol > > Child (28673) said Closed fds: 4 9 10 12 13 > Child (28673) said Child starts > Child (28673) said managed to mmap 1073741824 bytes of 1073741824 > Child (28673) said Ready > unlink ./vcl.1P9zoqAU.so > > OS: openSUSE 10.3 (X86-64) > Varnish : 2.0.2 (built from source) > gcc version 4.2.1 (SUSE Linux) > > What could be the issue here ? > > Thank you. > > -Paras New description: Hi, When I try to start varnish 2.0.2 in debug mode , {{{ ./varnishd -d -a :9999 -f /home/accel/varnish/etc/varnish.vcl -s file,/home/accel/varnish/var/cache,1G I get the following message storage_file: filename: /home/accel/varnish/var/cache size 1024 MB. 
Using old SHMFILE Debugging mode, enter "start" to start child start child (28673) Started Pushing vcls failed: Internal error: No VCL_conf symbol Child (28673) said Closed fds: 4 9 10 12 13 Child (28673) said Child starts Child (28673) said managed to mmap 1073741824 bytes of 1073741824 Child (28673) said Ready unlink ./vcl.1P9zoqAU.so OS: openSUSE 10.3 (X86-64) Varnish : 2.0.2 (built from source) gcc version 4.2.1 (SUSE Linux) }}} What could be the issue here? Thank you. -Paras -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 11:53:39 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 11:53:39 -0000 Subject: [Varnish] #392: Varnish installs but doesn't start on suse 10.3 (X86-64) In-Reply-To: <052.3c6bf685f7cf98159b81e6983206c77b@projects.linpro.no> References: <052.3c6bf685f7cf98159b81e6983206c77b@projects.linpro.no> Message-ID: <061.a70eca04fadf553944143ab4d646988e@projects.linpro.no> #392: Varnish installs but doesn't start on suse 10.3 (X86-64) --------------------+------------------------------------------------------- Reporter: plfgoa | Owner: Type: task | Status: new Priority: high | Milestone: Component: build | Version: 2.0 Severity: major | Resolution: Keywords: | --------------------+------------------------------------------------------- Comment (by phk): It sounds like a problem with the VCL compiler, but I would have expected messages about that. Can you try again with two "-d" arguments, to see if that gives more detail?
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 12:00:43 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 12:00:43 -0000 Subject: [Varnish] #97: Varnishd fails to start in FreeBSD if IPv6 support is missing in kernel and -T localhost: is used In-Reply-To: <061.520150cb6e2cd14d384b19671cf259ef@projects.linpro.no> References: <061.520150cb6e2cd14d384b19671cf259ef@projects.linpro.no> Message-ID: <070.a2fc07610557089bed369621b095653d@projects.linpro.no> #97: Varnishd fails to start in FreeBSD if IPv6 support is missing in kernel and -T localhost: is used -----------------------------+---------------------------------------------- Reporter: anders at fupp.net | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: fixed Keywords: | -----------------------------+---------------------------------------------- Changes (by phk): * status: reopened => closed * resolution: => fixed Comment: (In [3473]) Only fail the -T argument if none of the addresses it resolves to can be listened on.
Fixes #97 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 12:04:25 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 12:04:25 -0000 Subject: [Varnish] #383: Unexplained VCL behavior, possibly related to header modification In-Reply-To: <056.d969776f70d684566868c4d4abb7abaf@projects.linpro.no> References: <056.d969776f70d684566868c4d4abb7abaf@projects.linpro.no> Message-ID: <065.67f0580767c44ece47aa845606b221d7@projects.linpro.no> #383: Unexplained VCL behavior, possibly related to header modification ------------------------+--------------------------------------------------- Reporter: chrisrixon | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: vcl header | ------------------------+--------------------------------------------------- Comment (by phk): I need more information, in particular sample transactions that do the wrong thing, preferably caught in varnishlog. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 12:05:39 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 12:05:39 -0000 Subject: [Varnish] #379: Varnish has no response most of the time, is it a bug? In-Reply-To: <051.87655e50bc8fca878e4c18f54c9f6ca6@projects.linpro.no> References: <051.87655e50bc8fca878e4c18f54c9f6ca6@projects.linpro.no> Message-ID: <060.1d918fe5579c958e035c6a528f89dd65@projects.linpro.no> #379: Varnish has no response most of the time, is it a bug?
--------------------+------------------------------------------------------- Reporter: Ajian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+------------------------------------------------------- Comment (by phk): Please provide varnishlog output for the failing transaction(s). -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 12:07:10 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 12:07:10 -0000 Subject: [Varnish] #275: Varnish child stops responding In-Reply-To: <052.c197ec9345dac369784c70e85c4b85ec@projects.linpro.no> References: <052.c197ec9345dac369784c70e85c4b85ec@projects.linpro.no> Message-ID: <061.1dd3fcc123588ecdab53185b9a879ffc@projects.linpro.no> #275: Varnish child stops responding ---------------------------------+------------------------------------------ Reporter: chenxy | Owner: phk Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: worksforme Keywords: Varnish child stuck | ---------------------------------+------------------------------------------ Changes (by phk): * status: new => closed * resolution: => worksforme Comment: I don't see outstanding bugs in this ticket any more. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 12:07:51 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 12:07:51 -0000 Subject: [Varnish] #378: unreasonable varnish free smf number In-Reply-To: <052.618f7be89c19415305663059b7a0e23a@projects.linpro.no> References: <052.618f7be89c19415305663059b7a0e23a@projects.linpro.no> Message-ID: <061.afd8f9c864402390e4cfd7ee87e3dd4c@projects.linpro.no> #378: unreasonable varnish free smf number ----------------------+----------------------------------------------------- Reporter: 191919 | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 2.0 Severity: blocker | Resolution: Keywords: | ----------------------+----------------------------------------------------- Old description: > varnish stopped responding and a wget "localhost" timeouts and no reply > received. > > I noticed a value of the stats probe is strange: > > 18446744073709551573 N small free smf > > Is it reasonable? > > # telnet localhost 6000 > Trying 127.0.0.1... > Connected to localhost.localdomain (127.0.0.1). > Escape character is '^]'.
> stats > 200 2733 > 2633573 Client connections accepted > 2633564 Client requests received > 872254 Cache hits > 57 Cache hits for pass > 1460964 Cache misses > 1712598 Backend connections success > 0 Backend connections not attempted > 0 Backend connections too many > 80 Backend connections failures > 2 Backend connections reuses > 11310 Backend connections recycles > 0 Backend connections unused > 3 N struct srcaddr > 3 N active struct srcaddr > 131052 N struct sess_mem > 48034 N struct sess > 262 N struct object > 264 N struct objecthead > 61 N struct smf > 18446744073709551573 N small free smf > 1 N large free smf > 1 N struct vbe_conn > 74 N struct bereq > 40 N worker threads > 382 N worker threads created > 0 N worker threads not created > 316 N worker threads limited > 0 N queued work requests > 5757 N overflowed work requests > 0 N dropped work requests > 5 N backends > 1460570 N expired objects > 0 N LRU nuked objects > 0 N LRU saved objects > 0 N LRU moved objects > 0 N objects on deathrow > 0 HTTP header overflows > 0 Objects sent with sendfile > 2327561 Objects sent with write > 0 Objects overflowing workspace > 2633572 Total Sessions > 2633564 Total Requests > 251433 Total pipe > 389 Total pass > 1460960 Total fetch > 882556140 Total header bytes > 20790925758 Total body bytes > 2585539 Session Closed > 0 Session Pipeline > 0 Session Read Ahead > 0 Session Linger > 0 Session herd > 179969709 SHM records > 16041689 SHM writes > 216649 SHM flushes due to overflow > 2490 SHM MTX contention > 68 SHM cycles through buffer > 2913748 allocator requests > 9 outstanding allocations > 1261568 bytes allocated > 214747471872 bytes free > 0 SMA allocator requests > 0 SMA outstanding allocations > 0 SMA outstanding bytes > 0 SMA bytes allocated > 0 SMA bytes free > 884 SMS allocator requests > 0 SMS outstanding allocations > 0 SMS outstanding bytes > 343064 SMS bytes allocated > 343064 SMS bytes freed > 1461182 Backend requests made > 1 N vcl total > 1 N vcl 
available > 0 N vcl discarded > 1 N total active purges > 1 N new purges added > 0 N old purges deleted > 0 N objects tested > 0 N regexps tested against New description: varnish stopped responding and a wget "localhost" timeouts and no reply received. I noticed a value of the stats probe is strange: 18446744073709551573 N small free smf Is it reasonable? {{{ # telnet localhost 6000 Trying 127.0.0.1... Connected to localhost.localdomain (127.0.0.1). Escape character is '^]'. stats 200 2733 2633573 Client connections accepted 2633564 Client requests received 872254 Cache hits 57 Cache hits for pass 1460964 Cache misses 1712598 Backend connections success 0 Backend connections not attempted 0 Backend connections too many 80 Backend connections failures 2 Backend connections reuses 11310 Backend connections recycles 0 Backend connections unused 3 N struct srcaddr 3 N active struct srcaddr 131052 N struct sess_mem 48034 N struct sess 262 N struct object 264 N struct objecthead 61 N struct smf 18446744073709551573 N small free smf 1 N large free smf 1 N struct vbe_conn 74 N struct bereq 40 N worker threads 382 N worker threads created 0 N worker threads not created 316 N worker threads limited 0 N queued work requests 5757 N overflowed work requests 0 N dropped work requests 5 N backends 1460570 N expired objects 0 N LRU nuked objects 0 N LRU saved objects 0 N LRU moved objects 0 N objects on deathrow 0 HTTP header overflows 0 Objects sent with sendfile 2327561 Objects sent with write 0 Objects overflowing workspace 2633572 Total Sessions 2633564 Total Requests 251433 Total pipe 389 Total pass 1460960 Total fetch 882556140 Total header bytes 20790925758 Total body bytes 2585539 Session Closed 0 Session Pipeline 0 Session Read Ahead 0 Session Linger 0 Session herd 179969709 SHM records 16041689 SHM writes 216649 SHM flushes due to overflow 2490 SHM MTX contention 68 SHM cycles through buffer 2913748 allocator requests 9 outstanding allocations 1261568 bytes allocated 
214747471872 bytes free 0 SMA allocator requests 0 SMA outstanding allocations 0 SMA outstanding bytes 0 SMA bytes allocated 0 SMA bytes free 884 SMS allocator requests 0 SMS outstanding allocations 0 SMS outstanding bytes 343064 SMS bytes allocated 343064 SMS bytes freed 1461182 Backend requests made 1 N vcl total 1 N vcl available 0 N vcl discarded 1 N total active purges 1 N new purges added 0 N old purges deleted 0 N objects tested 0 N regexps tested against }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Dec 18 12:19:44 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Dec 2008 12:19:44 -0000 Subject: [Varnish] #376: Varnish core-dumps on ctrl-c In-Reply-To: <052.e6b196f75eb9cb191e0cd030b908f316@projects.linpro.no> References: <052.e6b196f75eb9cb191e0cd030b908f316@projects.linpro.no> Message-ID: <061.d63bdea8716cb137838330802600bbcc@projects.linpro.no> #376: Varnish core-dumps on ctrl-c ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by phk): I can not reproduce this, does it still happen for you in -trunk ? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Sat Dec 20 06:20:26 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sat, 20 Dec 2008 06:20:26 -0000 Subject: [Varnish] #413: Incorrect handling of escape in esi:include src attribute Message-ID: <057.f4223b8d94bfb88f09d673e93f736e06@projects.linpro.no> #413: Incorrect handling of escape in esi:include src attribute -------------------------+-------------------------------------------------- Reporter: andrewmcnnz | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Keywords: esi escape -------------------------+-------------------------------------------------- I'm seeing the following bug in Varnish 2.0.2, compiled from source on Ubuntu. The following syntax is incorrect as it is not well-formed XML: The problem is the '&', which should be escaped like so: http://www.htmlhelp.com/tools/validator/problems.html#amp However, it seems that Varnish does not unescape the HTML entity before interpreting the URL, and the wrong request arguments are sent, i.e. the CGI string as sent by varnishd is identical to how it appears in the esi:include src attribute text. Failure to escape ampersands in URLs embedded in HTML is an extremely common bug, and quite probably should be interpreted generously, but correct code must be allowed to work also. I'm working around this by using ';' delimiters, which is recommended practice, if not common. Shouldn't be necessary though.
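
The decoding step the ticket asks for can be sketched outside Varnish. The snippet below shows the intended behaviour: XML character entities in an esi:include src attribute are decoded before the URL is used for the backend request. The URL and function name are made-up examples, not taken from the ticket or the Varnish source.

```python
# Sketch of the entity decoding #413 asks for: before fetching an
# <esi:include> fragment, the src attribute must be XML-unescaped so
# that "&amp;" becomes a literal "&" in the request URL.
from xml.sax.saxutils import unescape


def decode_esi_src(src_attr: str) -> str:
    """Decode XML character entities in an esi:include src attribute.

    unescape() handles &amp;, &lt; and &gt; by default; &quot; and
    &apos; are supplied explicitly.
    """
    return unescape(src_attr, {"&quot;": '"', "&apos;": "'"})


escaped = "/fragment?user=42&amp;lang=en"   # as written in well-formed XML
print(decode_esi_src(escaped))              # -> /fragment?user=42&lang=en
```

With this decoding in place, "&amp;" in well-formed markup produces the same backend request URL as a literal "&" in sloppy markup, which matches the generous interpretation the reporter suggests.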
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Dec 22 10:36:55 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 22 Dec 2008 10:36:55 -0000 Subject: [Varnish] #387: Varnish crashes on Missing errorhandling code in fetch_chunked In-Reply-To: <052.22bfa1b1cfef87b231048e0ed3d9726b@projects.linpro.no> References: <052.22bfa1b1cfef87b231048e0ed3d9726b@projects.linpro.no> Message-ID: <061.531f80b0d6e7a7f8669dcf9061192261@projects.linpro.no> #387: Varnish crashes on Missing errorhandling code in fetch_chunked ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Changes (by anders): * version: trunk => 2.0 Comment: This still happens in Varnish 2.0.2, on FreeBSD/amd64 7.0-RELEASE: {{{ Dec 22 11:25:02 aicache6 kernel: varnishd(pid 8140 uid 0) aborted: Missing errorhandling code in fetch_chunked(), cache_fetch.c line 113: Condition(be > bp) not true. errno = 22 (Invalid argum Dec 22 10:25:02 aicache6 varnishd[8139]: Child (8140) Panic message: Missing errorhandling code in fetch_chunked(), cache_fetch.c line 113: Condition(be > bp) not true. 
errno = 22 (Invalid argument) thread = (cache-worker)sp = 0x826d77008 { fd = 279, id = 279, xid = 1235058958, client = 85.167.62.185:15826, step = STP_FETCH, handling = FETCH, ws = 0x826d77078 { id = "sess", {s,f,r,e} = {0x826d777b0,,+773,0x0,+8192}, }, worker = 0x7ffff8fc8ac0 { }, vcl = { srcname = { "/usr/local/etc/varnish.vcl", "Default", }, }, obj = 0x808a57000 { refcnt = 1, xid = 1235058958, ws = 0x808a57028 { id = "obj", {s,f,r,e} = {0x808a57358,,+240,0x0,+7336}, }, http = { ws = 0x808a57028 { id = "obj", {s,f,r,e} = {0x808a57358,,+240,0x0,+7336}, }, hd = { "Set-Cookie: AlteonP=aeb69f51aeb69f40baeeba3a; path=/", "Date: Mon, 22 Dec 2008 10:25:41 GMT", "Server: Apache", "Cache-Co Dec 22 11:25:02 aicache6 kernel: Dec 22 11:25:02 aicache6 kernel: "Cache-Co Dec 22 10:25:02 aicache6 varnishd[8139]: child (37137) Started Dec 22 10:25:02 aicache6 varnishd[8139]: Child (37137) said Closed fds: 3 4 5 8 9 11 12 Dec 22 10:25:02 aicache6 varnishd[8139]: Child (37137) said Child starts Dec 22 10:25:02 aicache6 varnishd[8139]: Child (37137) said Ready }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Dec 23 10:31:27 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 23 Dec 2008 10:31:27 -0000 Subject: [Varnish] #405: Varnish problem with purge requests In-Reply-To: <052.cf256d302e48e7823f83c4eea4d95c5e@projects.linpro.no> References: <052.cf256d302e48e7823f83c4eea4d95c5e@projects.linpro.no> Message-ID: <061.e511c1a881a027e2cc385dbc95f276bf@projects.linpro.no> #405: Varnish problem with purge requests ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by anders): Checking this problem more closely, I want to comment on a few things: - it 
seems to occur against one backend at a time only. I have two purge checks in my monitoring setup, which check purging against different backends, and only one of them fails. - it can take up to half an hour before the problem goes away by itself. In any case, I still believe this is a problem with Varnish. It should not send PURGE requests to the backend at all. I did not post my complete VCL here because it is getting big and complicated. I'll send it to the developer who will look at the problem, if necessary. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Dec 23 10:32:54 2008 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 23 Dec 2008 10:32:54 -0000 Subject: [Varnish] #405: Varnish problem with purge requests In-Reply-To: <052.cf256d302e48e7823f83c4eea4d95c5e@projects.linpro.no> References: <052.cf256d302e48e7823f83c4eea4d95c5e@projects.linpro.no> Message-ID: <061.23130f5c9b4613660edb8a62681994f4@projects.linpro.no> #405: Varnish problem with purge requests ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by anders): One more thing: I do not use Varnish's load balancing capabilities where this problem occurs. I do, however, use backends that are load-balanced addresses handled by Nortel hardware load balancers.
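
A common way to guarantee that PURGE never reaches the backend in 2.0-era VCL is to intercept it in vcl_recv and resolve it entirely inside the cache. The sketch below is a generic pattern, not the reporter's configuration; the ACL name and address are assumptions chosen for the example.

{{{
acl purgers {
    "127.0.0.1";              /* hypothetical monitoring host */
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed.";
        }
        lookup;               /* resolved in vcl_hit/vcl_miss, never sent to the backend */
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;     /* expire the cached object immediately */
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        error 404 "Not in cache.";
    }
}
}}}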
-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Tue Dec 23 15:19:52 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Tue, 23 Dec 2008 15:19:52 -0000
Subject: [Varnish] #100: Setting the TTL does not adjust Expires: header
In-Reply-To: <049.f779265ea82196cf4acaa40d47ee6ca5@projects.linpro.no>
References: <049.f779265ea82196cf4acaa40d47ee6ca5@projects.linpro.no>
Message-ID: <058.1dd1bdf1c28d5a4729a7355c1637986b@projects.linpro.no>

#100: Setting the TTL does not adjust Expires: header
----------------------+-----------------------------------------------------
Reporter: des         | Owner: phk
Type: defect          | Status: new
Priority: normal      | Milestone:
Component: varnishd   | Version: trunk
Severity: normal      | Resolution:
Keywords:             |
----------------------+-----------------------------------------------------
Comment (by andrewmcnnz):

I would have thought it could be harmful to link the Varnish cache expiry to the expiry in downstream caches, since those caches are beyond the reach of a forced expiration.

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Wed Dec 24 09:39:24 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Wed, 24 Dec 2008 09:39:24 -0000
Subject: [Varnish] #392: Varnish installs but doesn't start on suse 10.3 (X86-64)
In-Reply-To: <052.3c6bf685f7cf98159b81e6983206c77b@projects.linpro.no>
References: <052.3c6bf685f7cf98159b81e6983206c77b@projects.linpro.no>
Message-ID: <061.752ba42ab5616ff7e8d80b05983ecd11@projects.linpro.no>

#392: Varnish installs but doesn't start on suse 10.3 (X86-64)
--------------------+-------------------------------------------------------
Reporter: plfgoa    | Owner:
Type: task          | Status: new
Priority: high      | Milestone:
Component: build    | Version: 2.0
Severity: major     | Resolution:
Keywords:           |
--------------------+-------------------------------------------------------
Comment (by plfgoa):

I downloaded the latest code from trunk and it worked fine. Thanks.
-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Wed Dec 24 09:40:04 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Wed, 24 Dec 2008 09:40:04 -0000
Subject: [Varnish] #392: Varnish installs but doesn't start on suse 10.3 (X86-64)
In-Reply-To: <052.3c6bf685f7cf98159b81e6983206c77b@projects.linpro.no>
References: <052.3c6bf685f7cf98159b81e6983206c77b@projects.linpro.no>
Message-ID: <061.041d9be3843f1c598d27fee9692d5a8e@projects.linpro.no>

#392: Varnish installs but doesn't start on suse 10.3 (X86-64)
--------------------+-------------------------------------------------------
Reporter: plfgoa    | Owner:
Type: task          | Status: new
Priority: high      | Milestone:
Component: build    | Version: 2.0
Severity: major     | Resolution:
Keywords:           |
--------------------+-------------------------------------------------------
Comment (by plfgoa):

Replying to [comment:3 phk]:
> It sounds like a problem with the VCL compiler, but I would have expected messages about that.
>
> Can you try again with two "-d" arguments, to see if that gives more detail?

I downloaded the latest code from trunk and it worked fine. Thanks.
-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Thu Dec 25 23:25:54 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Thu, 25 Dec 2008 23:25:54 -0000
Subject: [Varnish] #414: Another Varnish crash with -h critbit
Message-ID: <052.d1b5f20b8f9b6ec243165098c2b9a912@projects.linpro.no>

#414: Another Varnish crash with -h critbit
----------------------+-----------------------------------------------------
Reporter: anders      | Owner: phk
Type: defect          | Status: new
Priority: high        | Milestone:
Component: varnishd   | Version: trunk
Severity: normal      | Keywords:
----------------------+-----------------------------------------------------

Running Varnish trunk/3458 with -h critbit on FreeBSD/amd64 7.0-RELEASE, I got this crash:

{{{
Child (15012) died signal=6
Child (15012) Panic message:
Assert error in HSH_Deref(), cache_hash.c line 440:
  Condition((oh)->magic == 0x1b96615d) not true.
thread = (cache-timeout)
}}}

Unfortunately, no core dump was written.

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Tue Dec 30 05:56:52 2008
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Tue, 30 Dec 2008 05:56:52 -0000
Subject: [Varnish] #415: Varnish Hangs
Message-ID: <052.1ae6aa888aa0b4fd1a42dba13f520e37@projects.linpro.no>

#415: Varnish Hangs
--------------------+-------------------------------------------------------
Reporter: plfgoa    | Type: defect
Status: new         | Priority: high
Milestone:          | Component: build
Version: trunk      | Severity: major
Keywords:           |
--------------------+-------------------------------------------------------

Hi,

I have encountered a strange problem with Varnish: it just hangs and stops responding to HTTP requests even though it is still in a running state. The backends run without issue, and one can still telnet to the Varnish console. What could be the issue? I also downloaded Varnish from trunk to check whether the issue goes away, but it tends to recur.
I get the following messages in varnishlog:

{{{
0 VCL_return - discard
0 ExpKill - 958922603 -30
0 ExpPick - 958922604 ttl
0 VCL_call - timeout
0 VCL_return - discard
0 ExpKill - 958922604 -30
0 ExpPick - 958922605 ttl
0 VCL_call - timeout
0 VCL_return - discard
0 ExpKill - 958922605 -30
0 ExpPick - 958930071 prefetch
0 VCL_call - prefetch
0 VCL_return - fetch
0 Debug - "Attempt Prefetch 958930071"
0 ExpPick - 958922606 ttl
0 VCL_call - timeout
0 VCL_return - discard
0 ExpKill - 958922606 -30
0 ExpPick - 958930072 prefetch
0 VCL_call - prefetch
0 VCL_return - fetch
0 Debug - "Attempt Prefetch 958930072"
0 ExpPick - 959145014 ttl
0 VCL_call - timeout
0 VCL_return - discard
}}}

Thanks in advance.

-Paras

-- Ticket URL: Varnish The Varnish HTTP Accelerator