Feature #967 (closed)

request-queue-limit option for mod_fastcgi

Added by Anonymous about 17 years ago. Updated over 6 years ago.

Status:
Fixed
Priority:
Low
Category:
mod_fastcgi
Target version:
1.4.x

Description

Ticket #399 (FastCGI performance on high load) does a good job of describing the problem. In short, Lighttpd queues all requests going to a fastcgi server (or group of processes), and if the offered load is higher than the server can handle, new requests are delayed more and more while old requests, for which a response can no longer be delivered, are still processed.

This patch gives a simple alternative solution to the problem: set the request-queue-limit option for that server, and when lighttpd receives a request that would be processed by that server while that server already has more than request-queue-limit requests queued up, lighttpd itself simply returns a 503 instead of queuing the new request. This ensures that at least some requests will be satisfied by the fastcgi server, rather than letting the server get into a backlog situation where no requests are satisfied because the user has given up by the time the server gets around to processing his request.
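For illustration only, a minimal sketch of how the proposed option might be configured, assuming it is set per host inside fastcgi.server; the address, port, and limit here are placeholders, and the request-queue-limit key only exists with this patch applied:

    fastcgi.server = ( "/app.php" =>
      ( ( "host" => "127.0.0.1",        # assumed backend address
          "port" => 9000,               # assumed backend port
          "request-queue-limit" => 64   # hypothetical limit; key added by this patch
      ) )
    )

With such a setting, once more than the configured number of requests are already queued for this backend, lighttpd answers new requests with 503 immediately instead of queuing them.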

I'd suggest that this patch go into the next 1.4 release, since it's minimally invasive, and easily removed once a better solution is available.

-- Curt Sampson <cjs


Files

mod_fastcgi.patch.4424 (3 KB) -- Curt Sampson <cjs Anonymous, 2007-01-07 10:01
request-queue-limit.patch (3 KB) -- Tom Fernandes <anyaddress Anonymous, 2008-03-19 21:32
mod_fastcgi.patch (3 KB) modified patch to work with the 1.4.19 Debian version -- Tom Fernandes <anyaddress Anonymous, 2008-03-19 21:38

Related issues: 3 (0 open, 3 closed)

Related to Feature #2431: mod_cgi process limit (Fixed, 2012-08-06)
Related to Feature #1530: cgi.max_processes addition (Fixed)
Has duplicate Feature #1451: Queuing requests to FastCGI backend rather than sending them. (Duplicate)
#1

Updated by Anonymous about 16 years ago

I modified the patch to work with the 1.4.19 Debian package. It should also work with other distros / upstream lighttpd though.

Just copy it into debian/patches/20_request-queue-limit.patch and add the name to debian/patches/series

-- Tom Fernandes <anyaddress

#2

Updated by gstrauss about 8 years ago

Very old ticket marked high priority.

At first glance, it still seems like a good idea to add an option limiting the outstanding load on a backend.

However, this solution is only accurate when there is a single frontend web server, since the load count is maintained locally. Running multiple server.max-workers, or separate frontend servers, makes the load count for backend servers inaccurate, because the counts are maintained independently in each frontend worker or server. A better solution would live at the backend FastCGI source, though I realize that a typical, simple backend FastCGI server processes requests serially, relying on the listen backlog queue for queue management.

This patch is still useful for the use case where there is a single frontend web server in front of a pod of FastCGI servers, including the case where there is a load balancer in front of many frontend web servers, with each frontend web server relaying requests back to a set of FastCGI backends exclusively dedicated to that one frontend web server.

Would the patch be acceptable along with an update to the documentation with usage caveats mentioned above?

#3

Updated by stbuehler about 8 years ago

  • Description updated (diff)
  • Assignee deleted (jan)
  • Priority changed from High to Normal

@gstrauss: No promises on review time.

#4

Updated by gstrauss almost 8 years ago

#5

Updated by gstrauss almost 8 years ago

  • Priority changed from Normal to Low

Recent commits lower the priority of this specific feature request.

With recent commits, if the client disconnects, lighttpd now cancels FastCGI requests that have not yet been sent to the backend (https://github.com/lighttpd/lighttpd1.4/pull/53). Also, lighttpd now allows the socket listen backlog for backends to be tuned separately from frontend connections (https://github.com/lighttpd/lighttpd1.4/pull/50). See also further discussion in #399, #2116, #1825.
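A rough sketch of the frontend half of that tuning, assuming the global server.listen-backlog directive from the same set of changes; the value is arbitrary, and the per-backend listen-backlog attribute is shown under #8 below:

    # backlog for the sockets accepting client (frontend) connections,
    # tuned independently of any backend listen-backlog settings
    server.listen-backlog = 1024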

As noted for mod_cgi in #2431

(If looking to arbitrarily limit the number of outstanding requests to backends, perhaps a more generic solution would be a module that applies per-URI-path limits on the number of outstanding requests waiting for backends under that URI path, and returns 503 Service Unavailable for any new request to such a URI path while the limit has been reached.)

#6

Updated by gstrauss about 7 years ago

  • Has duplicate Feature #1451: Queuing requests to FastCGI backend rather than sending them. added

#7

Updated by gstrauss about 7 years ago

#8

Updated by gstrauss over 6 years ago

  • Status changed from New to Fixed
  • Target version set to 1.4.x

Since lighttpd 1.4.40, the "listen-backlog" attribute of each host in fastcgi.server sets the connection queue limit on the listening socket that lighttpd creates for each backend process it spawns.

If lighttpd does not start the backend, then it is up to whatever starts the backend FastCGI process to set the connection backlog limit on the listening socket.
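For reference, a minimal sketch of a fastcgi.server host using that attribute; the socket path, backend binary, and numbers are placeholder assumptions:

    fastcgi.server = ( "/app" =>
      ( ( "socket"         => "/run/lighttpd/app.sock",   # assumed socket path
          "bin-path"       => "/usr/local/bin/app.fcgi",  # assumed backend; lighttpd spawns it
          "max-procs"      => 4,                          # assumed number of backend processes
          "listen-backlog" => 16                          # connection queue limit per backend socket
      ) )
    )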
