How to configure multiple load balanced fastcgi back-ends with lighty

I tried to find a proper howto on this topic last week. I could not, so I decided to put one together as soon as time permitted.
Here it goes.

FastCGI is fast but still extremely CPU-intensive under high traffic. If you cannot afford external load-balancer units (the price or the added complexity can be prohibitive)
and you run lighttpd, you can still achieve much the same effect for relatively little investment of either time or money.

The fastcgi handler distributes incoming requests among multiple back-ends; the only thing you need to do is define those back-ends.
Unfortunately the documentation isn't very helpful there (either that, or I am not good enough at reading it :) ).

The example below has one virtual host with three PHP fastcgi back-ends defined.


$HTTP["host"] =~ "tst" {
   server.name = "tst" 
   server.document-root = "/var/www/" 
   fastcgi.server             =   ( ".php" =>
                                     (
                                     "R1" => ( "host" => "192.168.0.10",
                                        "port" => 1029 ),
                                     "R2" => ( "host" => "192.168.0.11",
                                        "port" => 1029 ),
                                     "S1" => ( "socket" => "/tmp/php-fastcgi.socket",
                                        "bin-path" => "/usr/bin/php5-cgi",
                                        "max-procs" => 1,
                                        "bin-environment" => (
                                           "PHP_FCGI_CHILDREN" => "4",
                                           "PHP_FCGI_MAX_REQUESTS" => "1000"))
                                     )
                                  )
   status.statistics-url      = "/server-counters"
   status.status-url          = "/server-status"
   status.config-url          = "/server-config"
}
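Before restarting lighty with a new fastcgi.server block, it is worth letting it check the configuration first, so that a typo does not take the site down. A minimal sketch, assuming the usual Debian-style config and init script paths (adjust CONF and the reload command for your system):

```shell
#!/bin/bash
# safe_reload: test the lighttpd config before reloading it.
# CONF and the init script path are assumptions; adjust for your setup.
CONF=${CONF:-/etc/lighttpd/lighttpd.conf}

safe_reload() {
    if ! command -v lighttpd >/dev/null 2>&1; then
        echo "lighttpd binary not found, skipping check" >&2
        return 0
    fi
    # "lighttpd -t" parses the config and exits non-zero on errors
    if lighttpd -t -f "$CONF"; then
        /etc/init.d/lighttpd reload
    else
        echo "config test failed, not reloading" >&2
        return 1
    fi
}
```

This way the reload only happens when the file actually parses.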

Note: if you happen to define a "host" for a back-end that uses the IPC method ("socket"), lighty will silently fail.

Two of the above back-ends are reached over TCP, one over IPC. IPC has slightly lower CPU overhead, so it is worth going that way if you want a local FastCGI instance.
The content must be duplicated on all hosts (actually only the PHP, Perl, Ruby, whatever scripts): the web server only uses the dynamic content files to determine whether the requested URI is a 404 or not (if you have no local FastCGI instance, that is). Also make absolutely certain that database access works from all of the machines.
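The duplication does not have to be done by hand; for a mostly-static script tree a periodic rsync push from the lighty box is enough. A sketch, where the host list, the document root, and passwordless SSH access are assumptions for this example:

```shell
#!/bin/bash
# sync_docroot: push the local document root to each TCP back-end.
# BACKENDS and DOCROOT are assumptions; adjust for your setup.
DOCROOT=/var/www/
BACKENDS="192.168.0.10 192.168.0.11"

sync_docroot() {
    local host
    for host in $BACKENDS; do
        # -a keeps permissions and times, --delete removes stale files
        rsync -a --delete "$DOCROOT" "$host:$DOCROOT"
    done
}
```

Run it from cron, or hook it into whatever deploys your scripts.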

Here is a short script that can be used to start the FastCGI processes on the back-end machines (ugly, lame, works):


#!/bin/bash

PHP_FCGI_CHILDREN=4
PHP_FCGI_MAX_REQUESTS=1000
FCGI_WEB_SERVER_ADDRS="127.0.0.1,192.168.0.9" 
USER=www-data
GROUP=www-data
PHP=/usr/bin/php5-cgi
PORT=1029
PIDF=/var/run/fcgi.pid

export PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS FCGI_WEB_SERVER_ADDRS

case "$1" in
 start)
      /usr/bin/spawn-fcgi -P "$PIDF" -p "$PORT" -C "$PHP_FCGI_CHILDREN" -u "$USER" -g "$GROUP" -f "$PHP" 2>&1
      ;;
 stop)
      kill "$(cat "$PIDF")"
      ;;
 *)
      echo "Usage: $0 {start|stop}" >&2
      exit 1
      ;;
esac
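Once the back-end processes are running, you can check that they actually accept connections before pointing lighty at them. A sketch using bash's built-in /dev/tcp redirection (the host:port pairs in the usage comment are the ones from the example config):

```shell
#!/bin/bash
# check_backends: try a TCP connect to each host:port argument and
# report which back-ends are reachable. Returns non-zero if any is down.
check_backends() {
    local hp rc=0
    for hp in "$@"; do
        if (exec 3<>"/dev/tcp/${hp%%:*}/${hp##*:}") 2>/dev/null; then
            echo "$hp up"
        else
            echo "$hp down"
            rc=1
        fi
    done
    return $rc
}

# e.g.: check_backends 192.168.0.10:1029 192.168.0.11:1029
```

Note that /dev/tcp is a bash feature, not a real device, so the script must run under bash.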

On the status pages you can see requests coming in and being distributed among the back-ends by lighty:


fastcgi.active-requests: 5
fastcgi.backend.R1.0.connected: 655
fastcgi.backend.R1.0.died: 0
fastcgi.backend.R1.0.disabled: 0
fastcgi.backend.R1.0.load: 5
fastcgi.backend.R1.0.overloaded: 0
fastcgi.backend.R1.load: 309
fastcgi.backend.R2.0.connected: 4361
fastcgi.backend.R2.0.died: 0
fastcgi.backend.R2.0.disabled: 0
fastcgi.backend.R2.0.load: 0
fastcgi.backend.R2.0.overloaded: 0
fastcgi.backend.R2.load: 53
fastcgi.backend.S1.0.connected: 4989
fastcgi.backend.S1.0.died: 0
fastcgi.backend.S1.0.disabled: 0
fastcgi.backend.S1.0.load: 0
fastcgi.backend.S1.0.overloaded: 6
fastcgi.backend.S1.load: 48
fastcgi.requests: 10005

The connected lines tell you how many requests each back-end has received so far.
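To watch the distribution without reading the whole counter dump, you can filter for just those lines. A small sketch (the host name "tst" in the usage comment is the example vhost from above, and curl is assumed to be installed):

```shell
#!/bin/bash
# connected_counts: print only the per-back-end "connected" counters
# from the mod_status counter output read on stdin.
connected_counts() {
    awk -F': ' '/\.connected:/ { print $1, $2 }'
}

# typical use:
# curl -s http://tst/server-counters | connected_counts
```

Run it in a watch loop to see how evenly the load spreads over time.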

That's all folks.

Notes:
  • All this won't help you a bit if you become I/O-bound while serving the static parts from the box running lighty.
  • The fastcgi parameters in the examples are not tuned for performance.
  • The number of requests is uneven because back-end R1 was behind a really slow line.
  • See this: http://trac.lighttpd.net/trac/ticket/596
