Bug #1999

fastcgi performance problem with large responses

Added by tomh about 10 years ago. Updated about 10 years ago.


When the fastcgi code reads a large chunk of response data from the network in one read, it creates a single chunk and then calls fastcgi_get_packet() repeatedly to extract packets from it.

The first thing that routine does is create a buffer and copy entire chunks into it until it holds at least 8 bytes (the size of the FastCGI packet header). It then reads the packet length from the header and, if necessary, copies more data from the remaining chunks until it has the whole packet, or discards data from the buffer if it had copied too much while locating the header.
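For reference, the 8-byte record header the routine is looking for is fixed by the FastCGI 1.0 specification: version, type, a big-endian request id, a big-endian content length (at most 65535), a padding length, and a reserved byte. A minimal decoder sketch (struct and function names here are illustrative, not lighttpd's actual identifiers):

```c
#include <stdint.h>
#include <stddef.h>

/* FastCGI record header: always 8 bytes on the wire (FastCGI spec 1.0). */
typedef struct {
    uint8_t  version;        /* protocol version, FCGI_VERSION_1 == 1 */
    uint8_t  type;           /* record type, e.g. FCGI_STDOUT == 6 */
    uint16_t request_id;     /* big-endian on the wire */
    uint16_t content_length; /* big-endian on the wire, max 65535 */
    uint8_t  padding_length;
    uint8_t  reserved;
} fcgi_header;

#define FCGI_HEADER_LEN 8

/* Decode the 8 raw header bytes; returns the total record size
 * (header + content + padding). */
static size_t fcgi_decode_header(const uint8_t *raw, fcgi_header *h) {
    h->version        = raw[0];
    h->type           = raw[1];
    h->request_id     = (uint16_t)((raw[2] << 8) | raw[3]);
    h->content_length = (uint16_t)((raw[4] << 8) | raw[5]);
    h->padding_length = raw[6];
    h->reserved       = raw[7];
    return FCGI_HEADER_LEN + h->content_length + h->padding_length;
}
```

Only these 8 bytes are needed to know how large the packet is, which is why copying whole chunks just to find them is unnecessary.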

This is wasteful, and the bigger the chunk the worse it gets. If a megabyte chunk came from the network in one read, the code keeps creating a buffer and copying large amounts of data into it even though most of it will be discarded, since the maximum FastCGI packet length is 64 KB.

The attached patch changes the first loop in fastcgi_get_packet() to copy only enough bytes from the chunk to allow the header to be decoded. The second loop then continues as before and extracts the rest of the packet.
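The idea of the fix can be sketched as a bounded copy over the chunk list: stop as soon as 8 bytes have been gathered, instead of appending whole chunks. This is a simplified stand-in, not lighttpd's actual chunkqueue API; the `chunk` struct and `chunks_peek` name are illustrative:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define FCGI_HEADER_LEN 8

/* Simplified stand-in for lighttpd's chunk queue: a singly linked
 * list of byte ranges. */
typedef struct chunk {
    const uint8_t *data;
    size_t len;
    struct chunk *next;
} chunk;

/* Copy at most `want` bytes from the chunk list into `dst`, returning
 * how many were copied.  The `want` bound is the essence of the patch:
 * the old first loop copied entire chunks (possibly a megabyte) into a
 * temporary buffer just to see the first 8 header bytes. */
static size_t chunks_peek(const chunk *c, uint8_t *dst, size_t want) {
    size_t copied = 0;
    for (; c != NULL && copied < want; c = c->next) {
        size_t n = c->len < want - copied ? c->len : want - copied;
        memcpy(dst + copied, c->data, n);
        copied += n;
    }
    return copied;
}
```

With the header bytes in hand, the packet length is `(dst[4] << 8) | dst[5]`, and the second loop can copy exactly that many content bytes (plus padding) rather than trimming an over-filled buffer.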

lighttpd-fastcgi.patch (827 Bytes) lighttpd-fastcgi.patch Patch to process large fastcgi responses more efficiently tomh, 2009-06-05 22:29

Associated revisions

Revision 2509 (diff)
Added by stbuehler about 10 years ago

Improve FastCGI performance (fixes #1999)

Revision b063f018 (diff)
Added by stbuehler about 10 years ago

Improve FastCGI performance (fixes #1999)

git-svn-id: svn:// 152afb58-edef-0310-8abb-c4023f1b3aa9



Updated by stbuehler about 10 years ago

  • Status changed from New to Fixed
  • % Done changed from 0 to 100

Applied in changeset r2509.


Updated by stbuehler about 10 years ago

  • Target version set to 1.4.23
