Feature #967

request-queue-limit option for mod_fastcgi

Added by Anonymous over 10 years ago. Updated 12 months ago.


Missing in 1.5.x:


Ticket #399 (FastCGI performance on high load) does a good job of describing the problem. In short, lighttpd queues all requests going to a FastCGI server (or group of processes), and if the offered load is higher than the server can handle, new requests are delayed more and more while old requests are still being processed, even though their clients have long since given up waiting for a response.

This patch gives a simple alternative solution to the problem: set the request-queue-limit option for that server, and when lighttpd receives a request that would be processed by that server, but the server already has more than request-queue-limit requests queued up, lighttpd itself simply returns a 503 instead of queuing the new request. This ensures that at least some requests will be satisfied by the FastCGI server, rather than letting the server fall into a backlog situation where no requests are satisfied because the user has given up by the time the server gets around to processing his request.

I'd suggest that this patch go into the next 1.4 release, since it's minimally invasive, and easily removed once a better solution is available.

-- Curt Sampson <cjs

mod_fastcgi.patch.4424 - -- Curt Sampson <cjs (3 KB) Anonymous, 2007-01-07 10:01

request-queue-limit.patch View - -- Tom Fernandes <anyaddress (3 KB) Anonymous, 2008-03-19 21:32

mod_fastcgi.patch View - modified patch to work with 1.4.19 debian version -- Tom Fernandes <anyaddress (3 KB) Anonymous, 2008-03-19 21:38

Related issues

Related to Feature #2431: mod_cgi process limit Fixed 2012-08-06
Related to Feature #1530: cgi.max_processes addition New
Duplicated by Feature #1451: Queuing requests to FastCGI backed rather then sending them. Duplicate


#1 Updated by Anonymous about 9 years ago

I modified the patch to work with the 1.4.19 Debian package. It should also work with other distros and with upstream lighttpd, though.

Just copy it into debian/patches/20_request-queue-limit.patch and add the name to debian/patches/series.
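Under the assumptions above (an unpacked Debian source tree using a debian/patches series; file names taken from this comment), the steps might look like:

```shell
# Sketch of installing the patch into the Debian packaging, run from the
# unpacked lighttpd source tree. The touch is only a stand-in for the
# attachment downloaded from this ticket.
mkdir -p debian/patches
touch mod_fastcgi.patch   # stand-in for the downloaded attachment
cp mod_fastcgi.patch debian/patches/20_request-queue-limit.patch
echo 20_request-queue-limit.patch >> debian/patches/series
```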

-- Tom Fernandes <anyaddress

#2 Updated by gstrauss about 1 year ago

Very old ticket marked high priority.

At first glance, it still seems like a good idea to add an option limiting the outstanding load on a backend.

However, this solution is only accurate when there is a single frontend web server, since the load count is maintained locally. With multiple workers (server.max-workers) or with separate frontend servers, the load count for each backend becomes inaccurate, because each frontend process maintains its own independent count. A better solution would sit at the backend FastCGI source, though I realize that a typical, simple backend FastCGI server processes requests serially, relying on the listen backlog queue for queue management.

This patch is still useful for the case where a single frontend web server sits in front of a pool of FastCGI servers, including the case where a load balancer sits in front of many frontend web servers, with each frontend relaying requests to a set of FastCGI backends dedicated exclusively to that one frontend.

Would the patch be acceptable along with an update to the documentation with usage caveats mentioned above?

#3 Updated by stbuehler about 1 year ago

  • Description updated (diff)
  • Assignee deleted (jan)
  • Priority changed from High to Normal

@gstrauss: No promises on review time.

#4 Updated by gstrauss about 1 year ago

#5 Updated by gstrauss 12 months ago

  • Priority changed from Normal to Low

Recent commits lower the priority of this specific feature request.

With recent commits, if the client disconnects, lighttpd now cancels FastCGI requests that have not yet been sent to the backend. Also, lighttpd now allows you to tune the socket listen backlog for backends separately from frontend connections. See also further discussion in #399, #2116, and #1825.
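As a rough illustration of the separate backlog tuning mentioned above, a config might look like the following. This is a hedged sketch only: option names are as found in lighttpd 1.4.41 and later, the values are purely illustrative, and the paths are invented; check the current documentation before use.

```
# frontend: listen backlog for client connections (value illustrative)
server.listen-backlog = 1024

# backend: per-backend listen backlog, for a FastCGI process that
# lighttpd spawns itself (paths are hypothetical examples)
fastcgi.server = ( "/app.php" =>
  (( "socket"         => "/run/lighttpd/app.sock",
     "bin-path"       => "/usr/bin/php-cgi",
     "listen-backlog" => 16 ))
)
```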

As noted for mod_cgi in #2431:

If looking to arbitrarily limit the number of outstanding requests to backends, perhaps a more generic solution would be a module that lets URI paths carry limits on the number of requests waiting for backends, returning 503 Service Unavailable for any new request to such a URI path while the limit has been reached.

#6 Updated by gstrauss 3 months ago

  • Duplicated by Feature #1451: Queuing requests to FastCGI backed rather then sending them. added

#7 Updated by gstrauss 3 months ago
