Feature #1389

closed

Multiplexing Support in FastCGI

Added by Anonymous over 16 years ago. Updated over 7 years ago.

Status:
Missing Feedback
Priority:
Low
Category:
mod_fastcgi
Target version:
-
Description

Lighty was built as a classic solution to the 10K problem: a server under high load.
It implements event-driven logic; however, FastCGI applications cannot benefit from it.

Classic example:

1. Three requests come in: a, b, c
a. Requires access to the DB
b. Cached in memory
c. Cached in memory

2. The FastCGI application receives request a and sends a query to the DB. Meanwhile it continues to read input and answers b and c from the cache.

3. The answer from the DB is received and sent to the client (a).

The FastCGI protocol supports multiplexing of requests over a single
socket (example 4 in the FastCGI documentation), but lighty
does not implement this, so requests b and c in the example
above have to be sent to other processes. This makes
it impossible to write an event-driven FastCGI application that
works with lighty.

This is very important for a proper solution for high-load sites.
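For illustration, here is a minimal sketch (not lighty's code) of how the protocol's requestId header field is what makes multiplexing possible: records belonging to different requests can be interleaved on one socket, so the cached answers for b and c can go out before the DB-bound answer for a. The record layout and type constant follow the FastCGI specification; the payloads are made up.

```python
import struct

# FastCGI record header: version, type, requestId, contentLength,
# paddingLength, reserved (FastCGI spec, section 3.3)
FCGI_HEADER = struct.Struct("!BBHHBB")
FCGI_VERSION_1 = 1
FCGI_STDOUT = 6

def pack_record(rec_type, request_id, content=b""):
    """Build one FastCGI record for the given request id."""
    return FCGI_HEADER.pack(FCGI_VERSION_1, rec_type, request_id,
                            len(content), 0, 0) + content

# Responses for requests b (id=2) and c (id=3) are sent on the same
# connection before the DB answer for request a (id=1) is ready:
wire = (pack_record(FCGI_STDOUT, 2, b"cached answer b") +
        pack_record(FCGI_STDOUT, 3, b"cached answer c") +
        pack_record(FCGI_STDOUT, 1, b"db answer a"))

def unpack_records(data):
    """Yield (type, request_id, content) for each record on the wire."""
    off = 0
    while off < len(data):
        ver, typ, rid, clen, plen, _ = FCGI_HEADER.unpack_from(data, off)
        off += FCGI_HEADER.size
        yield typ, rid, data[off:off + clen]
        off += clen + plen

for typ, rid, content in unpack_records(wire):
    print(rid, content)
```

The web server demultiplexes by requestId on receipt, which is exactly the bookkeeping lighty would need to add on its side.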

Some links:
http://cryp.to/publications/fastcgi/

Actions #1

Updated by darix over 16 years ago

your fastcgi application could bind multiple sockets and you could configure those sockets in the lighttpd config file. though this requires external spawning.
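darix's workaround can be sketched in lighttpd's own config syntax; the URL prefix and socket paths below are made-up examples, and each socket must be bound by an externally spawned backend process:

```
# one externally spawned backend per socket; lighttpd load-balances
# requests for /app across them (paths are illustrative only)
fastcgi.server = ( "/app" => (
  ( "socket" => "/tmp/app-1.sock" ),
  ( "socket" => "/tmp/app-2.sock" ),
  ( "socket" => "/tmp/app-3.sock" )
))
```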

Actions #2

Updated by darix over 16 years ago

anyway. do you have a sample FCGI application that implements multiplexing?

Actions #3

Updated by Anonymous over 16 years ago

Replying to darix:

anyway. do you have a sample FCGI application that implements multiplexing?

There are several frameworks that do this for you, for example:
http://jonpy.sourceforge.net/fcgi.html

Also there is a chicken-and-egg problem. There are almost no web
servers that support multiplexing over one socket (neither Apache, nor nginx, nor IIS), and thus there are almost no frameworks that do the job for you.

your fastcgi application could bind multiple sockets and you could configure those sockets in the lighttpd config file. though this requires external spawning.

How many sockets should I bind to? I mean, if we try to solve the 10K problem - 10K simultaneous connections - do I need to open 10K sockets?

Yes, I know, it is too much for real cases.

But anyway, why should I bind to many sockets and force lighty to open them all as well, instead of multiplexing over a single socket?

Anyway:
  1. I will still be limited to processing a certain number of simultaneous connections.
  2. Why should I bind to so many sockets when most of them are unused most of the time?

Actions #4

Updated by Anonymous over 15 years ago

Ok, I know this ticket is like 11 months old now, but I would like to add my vote for Multiplexing support in FastCGI.

Consider an AJAX webchat that uses one persistent connection to wait for data. Such a connection would be only one request but could easily take a few minutes. Without multiplexing, this would require a separate socket for every connection, and that could easily result in requiring 10K sockets for often-used long-running requests.

And whether there are existing applications using this technology really shouldn't have any influence on this decision; specific applications are developed for specific environments. It's not like the X-Sendfile feature was used by any existing applications before it was implemented...

Actions #5

Updated by over 14 years ago

I would like to add my vote as well.

Actions #6

Updated by stbuehler over 14 years ago

  • Assignee deleted (jan)
  • Priority changed from High to Normal
  • Missing in 1.5.x set to No

I hope you do realize that multiplexing is basically just the same as using multiple sockets, only you do it in userspace?

And you don't need to bind multiple sockets; you just have to use a real event loop in your FastCGI app to accept more than one connection from a listening socket. I guess it might be "nice to have", but I do not see a "high" priority here; keep-alive is way more important.
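stbuehler's point can be sketched with Python's standard selectors module (my example, not theirs): a single listening socket plus an event loop lets one process serve many concurrent connections, no FastCGI multiplexing required. Echoing stands in for real FastCGI record handling; host and port are arbitrary.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(lsock):
    """Register each new connection with the same event loop."""
    conn, _ = lsock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, serve)

def serve(conn):
    """Handle one readable connection; a real app would parse
    FastCGI records here instead of echoing."""
    data = conn.recv(4096)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()

def run(host="127.0.0.1", port=9000):
    lsock = socket.socket()
    lsock.bind((host, port))
    lsock.listen()
    lsock.setblocking(False)
    sel.register(lsock, selectors.EVENT_READ, accept)
    while True:  # one loop, many simultaneous connections
        for key, _ in sel.select():
            key.data(key.fileobj)

if __name__ == "__main__":
    run()
```

As stbuehler says, this is essentially what multiplexing would do anyway, just with kernel sockets instead of userspace request ids.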

And while we just read as many bytes as possible from FastCGI backends in 1.x, we support memory limits for the buffers in our next version; if the buffer is full, we simply stop reading from the backend. But this is not a good idea with multiplexing, as you cannot select which request id you want to block (I still plan to add optional multiplexing in 2.0 if I find the time and a good design for it).

Btw: implementing a feature is more helpful than "voting" for it :)
But I might add that I doubt we would accept a patch for 1.4, and that probably no one has time to review a patch for 1.5.

Actions #7

Updated by gstrauss over 7 years ago

  • Status changed from New to Need Feedback
  • Priority changed from Normal to Low
  • Target version deleted (1.5.0)
  • Missing in 1.5.x deleted (No)

As stbuehler mentioned, one criticism of FastCGI multiplexing is lack of flow control.

If your app needs multiplexing, then HTTP/2 is a better solution, though not one that is supported in lighttpd.

Is there a FastCGI multiplexing use case that you can describe where HTTP/2 would not be as good or better?

Actions #8

Updated by gstrauss over 7 years ago

  • Status changed from Need Feedback to Missing Feedback

Creating socket connections is very fast on modern OSes like Linux and FreeBSD, especially over unix domain sockets. Two decades ago that might not have been the case, especially on some older OSes of that era like AIX and Solaris. I do not know if that is still the case.

Repeating the question from above:

Is there a FastCGI multiplexing use case that you can describe where HTTP/2 would not be as good or better?

And can you demonstrate that connection creation and teardown is the bottleneck? If so, please re-open this feature request.

BTW, a future version of mod_fastcgi might support keeping the fastcgi socket connection open for reuse, even if multiplexing is not implemented.
