Bug #949

closed

fastcgi, cgi, flush, php5 problem.

Added by Anonymous about 17 years ago. Updated over 7 years ago.

Status:
Fixed
Priority:
High
Category:
core
Target version:
1.4.40

Description

Hi,

I seem to be having a problem with PHP 5.2.0 and lighttpd 1.4.13, both compiled from source using the tutorials provided by lighttpd. It seems to randomly exit, with no messages in the log files as to why.

A way to reproduce this is by creating the following file:

<?php
$c = 0;
while (true) {
    $c++;
    echo $c . ", ";
    flush();
}
?>

If you run this in your browser and press the stop and refresh buttons in quick succession, you will see a similar result.

Has anyone else had a similar problem and found a fix?

Mike

-- xxxmikey_bxxx


Files

All_Detail (16 KB), Anonymous, 2008-01-01 06:58

Related issues: 8 (0 open, 8 closed)

Related to Bug #758: memory fragmentation leads to high memory usage after peaks (Fixed)
Related to Bug #760: Random crashing on FreeBSD 6.1 (Fixed)
Related to Feature #933: lighty should buffer responses (after it grows above certain size) on disk (Fixed)
Related to Bug #881: memory usage when ssl.engine used and large data uploaded through CGI (Fixed)
Related to Bug #1265: SSL + file upload = lots of memory (Fixed)
Related to Bug #1283: Memory usage increases when proxy+ssl+large file (Fixed)
Related to Bug #1387: lighttpd+fastcgi memory problem (Fixed)
Has duplicate Bug #2083: Excessive Memory usage with streamed files from PHP (Fixed, 2009-10-13)
Actions #1

Updated by Anonymous about 17 years ago

If I run this with a similar configuration (PHP 5.2), lighty ends up using all available memory and crashing the server.

-- alex

Actions #2

Updated by Anonymous about 17 years ago

Of course it will crash your server if you have compiled your PHP without a memory limit. :-P

If you have compiled your PHP WITH memory_limit, you are most likely hitting this PHP bug:

http://bugs.php.net/bug.php?id=38274 (fixed in 5.2.1RC1)

-- judas.iscariote

Actions #3

Updated by Anonymous about 17 years ago

Nope, this is not the case.

a) PHP is compiled with memory_limit and is set to a moderate 64M, and
b) it's the lighttpd process that takes over the entire available memory, not PHP.

-- alex

Actions #4

Updated by stbuehler over 16 years ago

PHP does not need much memory for this loop - it just creates content.

Has anyone tested this and not had this problem?

For shared hosting this really is critical.

Actions #5

Updated by Anonymous over 16 years ago

I can confirm this behaviour on lighttpd 1.4.17. It seems that the webserver wants to cache the content generated by PHP (5.2.1). This is really critical for shared hosting.

-- stefan

Actions #6

Updated by stbuehler over 16 years ago

Yeah, I tested it again on 1.4.17 too; still broken.

Actions #7

Updated by jrabbit over 16 years ago

I can confirm this issue is NOT present in version 1.5.0 using the new proxy core architecture. Both PHP and lighttpd use constant memory whilst running the provided test script.

Actions #8

Updated by Anonymous about 16 years ago

Replying to the original report:

[original description quoted verbatim; see the Description above]

Actions #9

Updated by Anonymous about 16 years ago

This problem is found in mod_cgi too. The thing is that lighttpd is ignoring the 'flush' from PHP. The PHP documentation says that 'flush' is specifically meant to clear ALL caches, including the one in the web server, and to push the data to the browser.

So the crux of the problem is that the script's 'flush' call is not properly implemented in lighttpd. This is wreaking havoc with our system, where you cannot write a download script in PHP.
Thanks.
Actions #10

Updated by Anonymous about 16 years ago

Cleaned up the fields that the above spammer messed up.

As clarification from my analysis as well: lighty isn't necessarily ignoring flush(), it just isn't blocking until the data is completely flushed to the web browser (the intended effect). As a result, loops that rely on flush() to block execute immediately, and lighty ends up with a massive memory footprint. Try such an example with a 1GB file and you'll figure it out pretty fast.
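
The failure mode described above can be sketched as a chunked download script (a hypothetical example; the file path is made up). Each flush() is intended to block until the client has caught up, but on lighttpd 1.4.x it returns immediately, so the whole file accumulates in the server's buffers:

```php
<?php
// Hypothetical large file; adjust the path for your setup.
$path = '/srv/files/large.bin';

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));

$fp = fopen($path, 'rb');
while (!feof($fp)) {
    // Send the file in 64 KB chunks.
    echo fread($fp, 65536);
    // Intended to block until the chunk has reached the client;
    // on lighttpd 1.4.x this returns immediately, and the data piles
    // up in lighttpd's output buffers instead.
    flush();
}
fclose($fp);
```

With a 1GB file and a slow client, the loop finishes almost instantly while lighttpd holds the entire response in memory.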

Actions #11

Updated by stbuehler about 16 years ago

  1. As long as the connection is not closed, the output from the backend gets buffered, so if you send 1GB with fastcgi lighty will buffer 1GB.
  2. In 1.4 the memory will not be freed but reused later; 1.5 seems to free unused memory.
  3. If you want to send big files (> 50MB), just use X-Sendfile.
  4. The content is buffered because buffering is preferred over blocking the fastcgi backend.

That doesn't solve the problem for shared hosting, I know. Looks like an option is needed to set a limit on how much is cached before blocking the backend.

Actions #12

Updated by jrabbit about 16 years ago

I don't think blocking the backend is an ideal solution for shared hosting either, as it means you start having to grow your fastcgi thread pools to cater for slow clients, and therefore have to run so many that if you get a lot of hits from fast clients quickly, you can overwhelm the server. At the moment the size of the fastcgi pool acts as a nice brake: it queues up requests and only processes a limited number in parallel, ensuring the machine runs smoothly.

On a shared hosting environment, a more useful option would be to simply abandon the request and return error 500 if the size limit for fastcgi output is reached, i.e. force the hosted sites to change their code to be more efficient and use X-Sendfile rather than block the backend. That prevents a bad hosted site from impacting the rest.

Actions #13

Updated by stbuehler about 16 years ago

I don't think you want to allow X-Sendfile for shared hosting, as your users could then send any file the webserver can read (e.g. your SSL key file).

But an option to limit the output of a fastcgi backend wouldn't be bad either.

Actions #14

Updated by admin about 16 years ago

i.e. your ssl key file

Shouldn't that be unreadable for the user Lighttpd is running as?

Actions #15

Updated by admin about 16 years ago

Why can't X-Sendfile be made safe?

Actions #16

Updated by Anonymous about 16 years ago

Replying to Olaf van der Spek:

Why can't X-Sendfile be made safe?

So what do we do in the case of cgi? X-sendfile doesn't work for CGI. Isn't this ever going to be fixed? The correct behavior for the flush is to block the script. If it increases the load on the server, that's a separate problem, and that needs to be handled separately. Anyway, a script need not always be sending a file. Sometimes, it could be reading from a socket, so in such cases too x-sendfile is not usable. The attitude towards this bug makes me feel as if the entire lighttpd project is run by amateurs. Time to move to nginx.

Actions #17

Updated by admin about 16 years ago

So what do we do in the case of cgi? X-sendfile doesn't work for CGI.

One option would be to enable X-Sendfile for CGI as well.

Sometimes, it could be reading from a socket, so in such cases too x-sendfile is not usable.

True.

This is wreaking havoc with our system, where you cannot write a download script in php.

Why would a download script require flush?

The php documenation says that the 'flush' is specifically meant to clear ALL cache, including the one in the web server, and will push the data to the Browser.

Does it?

flush() has no effect on the buffering scheme of your web server or the browser on the client side.

Several servers, especially on Win32, will still buffer the output from your script until it terminates before transmitting the results to the browser.

Server modules for Apache like mod_gzip may do buffering of their own that will cause flush() to not result in data being sent immediately to the client.

Actions #18

Updated by Anonymous about 16 years ago

Replying to Olaf van der Spek:

This is wreaking havoc with our system, where you cannot write a download script in php.

Why would a download script require flush?

Yes, you are actually right. This should work even without flush. Lighty should set a particular memory limit for each script's output buffer, and when that limit is reached, the script should be blocked. I was being charitable by pushing the responsibility of flushing the output buffer to the script writer, but actually it is lighty's job to take care of this irrespective of whether flush is actually called.

But let us forget all the irrelevant details and focus on the problem: on a slow download, a runaway script will make lighty use 1GB of memory. How can you call that behavior sane? It is utterly brain damaged. This is a very serious and critical bug in lighttpd, and it should be fixed, and I am getting the feeling that the devs are not intelligent enough to find a solution to this.

The php documenation says that the 'flush' is specifically meant to clear ALL cache, including the one in the web server, and will push the data to the Browser.

Does it?

I think you ignored the actual first statement from the flush manual. Of course win32 servers will behave weirdly, but is that anything new? Again, I think flush shouldn't be brought into the picture at all. Lighty should have an internal MAX-output-buffer concept. When that limit is reached, the script should be blocked.

The current behavior is brain damaged even when you consider the core tenets of computer science. All drivers have a specific amount of output buffer. When that limit is reached, the print function is blocked till the buffer clears up. This is how terminal drivers work.

---------------------- Flush manual

Flushes the output buffers of PHP and whatever backend PHP is using (CGI, a web server, etc). This effectively tries to push all the output so far to the user's browser.


We should just forget 'flush' altogether.  Lighty should have a MAX-output-buffer concept. When this limit is reached, the backend script should be blocked, till the buffer is cleared.
Actions #19

Updated by admin about 16 years ago

How can you call that behavior sane?

I can't.

It is utterly brain damaged. This is a very serious and critical bug in lighttpd, and should be fixed,

I agree.

I think you ignored the actual first statement from the flush manual.

I kinda did, but the flush manual appears to contradict itself. I think CGI and FastCGI don't even have a mechanism to send a flush signal to the web server.

Lighty should set a particular memory limit for each script's output buffer, and when that limit is reached, the script should be blocked.

I agree. It might also be nice to have an option to kill the script in that case.

Actions #20

Updated by Anonymous about 16 years ago

OK, so now what?

I mean, isn't this a bit ugly, to say the least? You have a bug marked critical with priority high, and it has been left there for a full year. Would it be possible to pay someone to get this fixed?
Please... can Jan make some comment on this entire thing? This makes people who promote lighttpd look foolish. We are trying to make a case for lighty, and we are falling on our faces because critical bugs are ignored for entire years.
I would really, really appreciate some response from the people concerned.
Thanks.
Actions #21

Updated by jrabbit about 16 years ago

Unless I'm missing something, this is fixed - in 1.5.

It is not necessary to fix every bug reported in a 1.4.x patch - some of them require an architecture change, and so it is perfectly reasonable for the developers to fix them in the next version.

Actions #22

Updated by admin about 16 years ago

Would it be possible to pay someone to get this fixed?

Probably.

because critical bugs are ignored for entire years.

Apparently it's not an important issue for a lot of Lighttpd users.

Actions #23

Updated by Anonymous about 16 years ago

Apparently it's not an important issue for a lot of Lighttpd users.

I am not sure how you define 'a lot of lighttpd users'. This is a very critical problem in the core engine. There is indeed a lot of criticism all over--especially from the Ruby-on-Rails folks about lighty's memory leaks--to the extent that they explicitly discourage use of lighty with RoR, and I think this must be the actual root cause of all those complaints.
Actions #24

Updated by Anonymous about 16 years ago

Replying to jrabbit:

Unless I'm missing something, this is fixed - in 1.5.

I wouldn't have raised an issue if that were so. If you use the proxy_core module, it works fine. But what about mod_cgi? Is this fixed in mod_cgi for 1.5? If so, then there is no problem and I would gladly switch to 1.5, but I don't think that's the case.

Can anyone confirm this bug is fixed in mod_cgi for 1.5?
Thanks.
Actions #25

Updated by admin about 16 years ago

I am not sure how you define 'a lot of lighttpd users'.

All those users that are happily running this web server.

This is a very critical problem in the core engine. There is indeed a lot of criticism all over--especially from the Ruby-on-Rails folks about lighty's memory leaks--to the extent that they explicitly discourage use of lighty with RoR, and I think this must be the actual root cause of all those complaints.

I don't use RoR, so I don't really know. Why is it such an issue with RoR? Is it being used to send large files?
Where again X-Sendfile is not an option?

Actions #26

Updated by Anonymous about 16 years ago

Replying to Olaf van der Spek:

I am not sure how you define 'a lot of lighttpd users'.

All those users that are happily running this web server.

Yes, a lot of people who haven't yet hit this bug. I don't think lighty will survive with this kind of architecture. There is a consensus that lighty is buggier than, say, nginx.

I don't use RoR, so I don't really know. Why is it such an issue with RoR? Is it being used to send large files?
Where again X-Sendfile is not an option?

Sendfile will not work for mod_cgi. It also won't work if the script is reading from a socket. Let us not drag sendfile into this; it is not even a workaround.

Actions #27

Updated by admin about 16 years ago

I didn't say X-Sendfile is a solution that works for everyone.
Isn't RoR usually deployed with FastCGI?
Yes, you already mentioned it can't be used for data coming from a socket. I'm just wondering, in what use cases do you have tons of data coming from a socket?

Actions #28

Updated by Anonymous about 16 years ago

Dear Lighttpd,

I posted this bug over a year ago, and I'm amazed at the current flurry of activity on it again =]

I had waited a long time for a fix and, as far as I know, there was none. I believe there is another thread which describes a similar problem and symptoms; maybe a solution was given there?

Anyway, just to add my two pence.
I notice a lot of this thread is trying to find "when" you would want to do this. Well, I have a few cases. I used to use flush() a lot with a web ad server. I would send a large amount of data, say an XML feed or an image file. Then the server would need to update a database (stats, number of clicks, loads, etc.) after the flush(), as I wanted the user to receive the image without having to wait for the database update at the same time.

Another one is an AJAX application: when uploading a file to the server, I would use a flush() call to report the percentage of the file already received by the server.

However, that aside, rather than talk about whether it's needed, why not fix the problem in the first place?

How do other webservers handle this? And if this problem still exists, I'd say lighttpd really does need a good long rethink.

Mike

Actions #29

Updated by Anonymous about 16 years ago

Replying to Olaf van der Spek:

I didn't say X-Sendfile is a solution that works for everyone.
Isn't RoR usually deployed with FastCGI?
Yes, you already mentioned it can't be socket for data from a socket. I'm just wondering, in what use cases do you have tons of data from a socket?

It is a very complex clustered download program. If the data doesn't reside on the local disk, it transparently connects to another server on the cluster and gets the data, and sends it as download. The end user doesn't know if the file exists on the local machine or remote. The program will transparently handle it.

Anyway, the application is run as CGI, and there is no way it can be converted to FastCGI, since there would be a lot of blocked processes.
Actions #30

Updated by admin about 16 years ago

Fair enough.

Actions #31

Updated by Anonymous almost 16 years ago

The problem here is that lighty is violating one of the fundamental premises of software design itself: no physical resource is infinite. Every buffer everywhere has to have a particular limit. When that limit is reached, the program trying to use that buffer is blocked till more space is available. For instance, the 'printf' function will block neatly once the terminal buffer fills up.

This is bad design and, to be frank, an unpardonable bug. Please, please fix this thing. Lighty is actually unusable as a general-purpose backend webserver. I cannot overstress the critical nature of this bug. At least fix this in mod_cgi, and document clearly that for mod_fcgi you have to use X-Sendfile.

Actions #32

Updated by Anonymous over 15 years ago

This bug isn't just related to lighty. The same problem exists between Apache and mod_cgi / mod_fcgi, and there's no apparent fix. This is a major problem that does not seem to be getting resolved on any front.

Actions #33

Updated by stbuehler over 15 years ago

  • Target version changed from 1.4.20 to 1.4.21
Actions #34

Updated by icy about 15 years ago

  • Target version changed from 1.4.21 to 1.4.22
  • Patch available set to No
Actions #35

Updated by stbuehler about 15 years ago

  • Target version changed from 1.4.22 to 1.4.23
Actions #36

Updated by stbuehler almost 15 years ago

  • Target version changed from 1.4.23 to 1.4.24
Actions #37

Updated by stbuehler almost 15 years ago

  • Target version changed from 1.4.24 to 1.4.x
Actions #38

Updated by andrewsuth about 14 years ago

I think comment 18 hit the nail on the head, best summarised by: "Lighty should have a MAX-output-buffer concept. When this limit is reached, the backend script should be blocked till the buffer is cleared."

I'm glad to see that there is an issue about this, because I was tearing my hair out trying to get flush() working with gzip compression enabled. Without a max buffer size, it won't flush.

This feature is available in most modern HTTP servers, and I think it's about time it made it into lighttpd. So I'm putting my hand up as one of the many users who think this is a necessary addition.

Actions #39

Updated by Niek about 14 years ago

andrewsuth wrote:

I think comment 18 hit the nail on the head, best summarised by: "Lighty should have a MAX-output-buffer concept. When this limit is reached, the backend script should be blocked till the buffer is cleared."

I'm glad to see that there is an issue about this, because I was tearing my hair out trying to get flush() working with gzip compression enabled. Without a max buffer size, it won't flush.

This feature is available in most modern HTTP servers, and I think it's about time it made it into lighttpd. So I'm putting my hand up as one of the many users who think this is a necessary addition.

I agree with this comment; it's really annoying that lighty doesn't have an option to disable or limit buffering. I too have a couple of scripts that do a manual ob_flush(); this works great from the command line (php-cgi filename.php), but not when served through lighty. Nginx, for example, offers such a config parameter: http://wiki.nginx.org/NginxHttpFcgiModule#fastcgi_buffer_size
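
For reference, the manual-flush pattern being discussed usually looks something like this minimal sketch (whether the output actually reaches the client immediately still depends on the server-side buffering that this bug is about):

```php
<?php
// Drop any userland output buffers so flush() can reach the SAPI layer.
while (ob_get_level() > 0) {
    ob_end_flush();
}

for ($i = 1; $i <= 5; $i++) {
    echo "chunk $i\n";
    // Push PHP's own buffers toward the webserver; the server may
    // still buffer the data on its side before sending to the client.
    flush();
    sleep(1);
}
```

Run from the command line (php-cgi or php-cli) this emits one chunk per second; behind a fully buffering server, all five chunks arrive at once.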

Actions #40

Updated by stbuehler about 14 years ago

  • Category changed from mod_fastcgi to core
  • Assignee deleted (jan)
  • Missing in 1.5.x set to No

Oh, so you are sure it would be better if lighttpd supported buffer limits? Really?

Please stop posting here... We know it would be better, but we will not change it in 1.4.x as it is "stable" (and yes, lighttpd2 will support it; in fact it already does).
I am just keeping this bug open so people find it and know about it.

Actions #41

Updated by gstrauss almost 8 years ago

There are quite a few antagonistic statements and examples of deplorable behavior in these posts.
  • This is no place for baseless insults.
  • This is no place for entitled demands.
  • Most posts above are neither 100% in the right, nor are they 100% in the wrong.
    • Yes, there are bugs and issues and limitations which should be addressed.
    • Yes, there are workarounds for some use-cases even without fixing the above.
    • Even though many use-cases are valid, not every use-case is optimal or should necessarily be supported.
  • This is open source. Most/all of the time spent on this project is volunteered.
    That also includes money and time spent running and maintaining the servers.
    • Constructive participation is welcomed.
    • Informative bug reports, instructions to replicate, and patches encouraged.
    • Polite discussion and follow-up might help increase priority of issues affecting many people.

If you have an urgent need and are willing to pay someone to work on it, then by all means say so and sketch out an RFP (request for proposal).

Actions #42

Updated by gstrauss almost 8 years ago

Responding to some (very old) comments above with one potential workaround:
please be aware that in the upcoming lighttpd v1.4.40, X-Sendfile is available (if enabled) for all of CGI, FastCGI and SCGI, and there is a config option for each module (*cgi.x-sendfile-docroot) to configure the paths to files allowed to be sent via X-Sendfile. (Following symlinks is enabled by default and is controlled by a separate config option.)
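
As a sketch of how that might be configured for mod_cgi (hypothetical paths; option names follow the pattern described above, so double-check against the 1.4.40 docs):

```
# lighttpd.conf: let CGI scripts hand files back to lighttpd to serve
cgi.x-sendfile         = "enable"
cgi.x-sendfile-docroot = ( "/srv/files" )
```

A script would then emit only a header such as X-Sendfile: /srv/files/large.bin and no body, and lighttpd serves the file itself, with none of the per-request buffering discussed in this thread.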

Actions #43

Updated by gstrauss almost 8 years ago

  • Related to Bug #758: memory fragmentation leads to high memory usage after peaks added
Actions #44

Updated by gstrauss almost 8 years ago

  • Related to Bug #760: Random crashing on FreeBSD 6.1 added
Actions #45

Updated by gstrauss almost 8 years ago

  • Related to Feature #933: lighty should buffer responses (after it grows above certain size) on disk added
Actions #46

Updated by gstrauss almost 8 years ago

  • Related to Bug #881: memory usage when ssl.engine used and large data uploaded through CGI added
Actions #47

Updated by gstrauss almost 8 years ago

  • Related to Bug #1265: SSL + file upload = lots of memory added
Actions #48

Updated by gstrauss almost 8 years ago

  • Related to Bug #1283: Memory usage increases when proxy+ssl+large file added
Actions #49

Updated by gstrauss almost 8 years ago

  • Related to Bug #1387: lighttpd+fastcgi memory problem added
Actions #50

Updated by gstrauss almost 8 years ago

  • Status changed from New to Patch Pending
  • Target version changed from 1.4.x to 1.4.40

New: asynchronous, bidirectional streaming support for request and response
Submitted pull request: https://github.com/lighttpd/lighttpd1.4/pull/66

NOTE: streaming support is experimental (and must be enabled in config)
Interfaces and behavior, including defaults, may change, depending on feedback.

default behavior is the existing behavior: fully buffer request body before contacting backend, and fully buffer response body before sending to client
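
The streaming behavior referred to above is controlled in lighttpd 1.4.40 by config options along these lines (a sketch based on the 1.4.40 option names; 0 buffers fully, nonzero values enable streaming):

```
# lighttpd.conf: opt in to experimental streaming instead of full buffering
server.stream-request-body  = 1  # stream the request body to the backend
server.stream-response-body = 1  # stream the response body to the client
```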

The pull request is not as elegant as I would have liked it to be. There is way too much pre-existing code duplication (with slight modifications) between mod_cgi, mod_fastcgi, mod_scgi, mod_proxy. Buffer sizes are not configurable and use sloppy counters (not exact, but slightly fungible limits are applied). There is still no separate backend timeout. ...and I am sure there are additional limitations/missing features not listed here.

That said, the pull request is functional in my very limited testing.
Please be aware that much more testing is needed. Please help.

Bugs for broken behavior will be accepted and evaluated.
Feature requests filed as bugs (such as requesting more tunables) will be demoted to feature requests and prioritized accordingly.

Constructive feedback appreciated

Thank you.

Actions #51

Updated by gstrauss almost 8 years ago

FYI: although this may be obvious, it needs to be stated: streaming from client->lighttpd->backend or from backend->lighttpd->client needs all parties to support streaming or else the result will not appear to be streaming.

For those using libfcgi, please be aware that (at the time this is being written, in June 2016) libfcgi does not support non-blocking operations. You can approximate streaming output from FastCGI using libfcgi by forcing flushes after producing data. You can (inefficiently) approximate streaming input to FastCGI using libfcgi by reading one byte at a time.

Or, you can use a fastcgi framework that supports non-blocking I/O.

However, blocking behavior is more intuitive to many people than is non-blocking behavior, so please evaluate your application needs before coming to the conclusion that streaming is the solution to a problem or issue that you are having.

Actions #52

Updated by gstrauss over 7 years ago

  • Status changed from Patch Pending to Fixed
  • % Done changed from 0 to 100