request-queue-limit option for mod_fastcgi
Ticket #399 (FastCGI performance on high load) does a good job of describing the problem. In short, lighttpd queues all requests going to a FastCGI server (or group of processes), and if the offered load is higher than the server can handle, new requests are delayed more and more while old requests, for which we may no longer be able to send a response, are still processed.
This patch gives a simple alternative solution to the problem: set the request-queue-limit option for that server, and when lighttpd receives a request that would be processed by that server, but that server already has more than request-queue-limit requests queued up, lighttpd itself simply returns a 503 instead of queuing the new request. This ensures that at least some requests will be satisfied by the FastCGI server, rather than letting the server get into a backlog situation where no requests are satisfied because the user has given up by the time the server gets around to processing his request.
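A hypothetical configuration using the proposed option might look like the following sketch. The option name follows the patch description, but the exact placement, value, and surrounding settings here are illustrative assumptions, not shipped syntax:

```
fastcgi.server = ( "/app.fcgi" =>
  (( "host" => "127.0.0.1",
     "port" => 9000,
     # proposed by the patch (illustrative value): once more than
     # 64 requests are queued for this backend, lighttpd answers
     # new requests with 503 instead of queuing them
     "request-queue-limit" => 64 ))
)
```

With such a limit in place, clients arriving during an overload burst get an immediate 503 they can retry, instead of waiting in a queue they will likely abandon.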
I'd suggest that this patch go into the next 1.4 release, since it's minimally invasive, and easily removed once a better solution is available.
-- Curt Sampson <cjs
Updated by Anonymous about 13 years ago
I modified the patch to work with 1.4.19 Debian package. Should also work with other distros / upstream lighttpd though.
Just copy it into debian/patches/20_request-queue-limit.patch and add the name to debian/patches/series
-- Tom Fernandes <anyaddress
Updated by gstrauss about 5 years ago
Very old ticket marked high priority.
At first glance, it still seems like a good idea to add an option limiting the outstanding load on a backend.
However, this solution is only accurate when there is a single frontend web server, since the load count is maintained locally. Using multiple server.max-workers, or separate frontend servers, makes the load counts for backend servers inaccurate because the counts are maintained independently in each frontend server process. A better solution would be at the backend FastCGI source, though I realize that a typical, simple backend FastCGI server processes requests serially, relying on the listen backlog queue for queue management.
This patch is still useful for the use case where there is a single frontend web server in front of a pod of FastCGI servers, including the case where there is a load balancer in front of many frontend web servers, with each frontend web server relaying requests back to a set of FastCGI backends exclusively dedicated to that one, frontend web server.
Would the patch be acceptable along with an update to the documentation with usage caveats mentioned above?
Updated by gstrauss about 5 years ago
- Priority changed from Normal to Low
Recent commits lower the priority of this specific feature request.
If the client disconnects, lighttpd now cancels FastCGI requests that have not yet been sent to the backend (https://github.com/lighttpd/lighttpd1.4/pull/53). Also, lighttpd now allows you to tune the socket listen backlog for backends separately from frontend connections (https://github.com/lighttpd/lighttpd1.4/pull/50). See also further discussion in #399, #2116, #1825.
As noted for mod_cgi in #2431:
(If looking to arbitrarily limit the number of outstanding requests to backends, perhaps a more generic solution would be a module that lets URI paths apply limits on the number of outstanding requests waiting for backends, returning 503 Service Unavailable for any new request to such a URI path that comes in while the limit has been reached.)
Updated by gstrauss almost 4 years ago
- Status changed from New to Fixed
- Target version set to 1.4.x
Since lighttpd 1.4.40, the "listen-backlog" attribute of each host in fastcgi.server will create a listening socket for each backend process with that socket connection queue (backlog) limit.
If lighttpd does not start the backend, then it is up to whatever starts the backend FastCGI process to set the connection backlog limit on the listening socket.
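For the case where lighttpd does start the backend, a minimal fastcgi.server entry using this attribute might look like the sketch below; the socket path, binary path, and numeric values are illustrative assumptions:

```
fastcgi.server = ( "/app" =>
  (( "socket" => "/run/lighttpd/app.sock",
     "bin-path" => "/usr/local/bin/app.fcgi",
     "max-procs" => 2,
     # kernel listen() backlog for the backend's listening socket;
     # connection attempts beyond this queue limit are refused,
     # bounding how far requests can back up behind the backend
     "listen-backlog" => 16 ))
)
```

Keeping this backlog small approximates the effect the original request-queue-limit patch aimed for: excess load is rejected early rather than queued indefinitely.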