Bug #1387

lighttpd+fastcgi memory problem

Added by Anonymous almost 10 years ago. Updated about 1 year ago.

Status:
Fixed
Priority:
Normal
Assignee:
-
Category:
mod_fastcgi
Target version:
Start date:
Due date:
% Done:

100%

Estimated time:
Missing in 1.5.x:

Description

I have a script that limits downloads and tracks bandwidth through PHP. When someone clicks the link to download a file, lighttpd ends up caching the whole file in RAM on the server, which usually drives the server into swap and eventually crashes it. I need a fix for this, because it is a huge problem: all it takes is four people downloading a 300-500 MB file and my server is completely gone.

Lighttpd Conf:


evasive.max-conns-per-ip = 5
server.document-root = "/home/boxstr/public_html/" 
$HTTP["host"] == "files.xxx.com" {
fastcgi.server = ( "file.php" => ((
                     "bin-path" => "/opt/php5/bin/php-cgi",
                     "socket" => "/tmp/fscgi.socket",
                     "max-procs" => 2,
                     "bin-environment" => (
                       "PHP_FCGI_CHILDREN" => "16",
                       "PHP_FCGI_MAX_REQUESTS" => "10000",
                       "allow-x-send-file" => "enable" 
                     ),
                     "bin-copy-environment" => (
                       "PATH", "SHELL", "USER" 
                     ),
                     "broken-scriptfilename" => "enable" 
                 )))
url.rewrite-final = (
"^/([0-9]+)/(.+)$" => "file.php?userid=$1&file=$2",

)

}
$HTTP["host"] == "www.files.xxx.com" {
fastcgi.server = ( "file.php" => (( 
                     "bin-path" => "/opt/php5/bin/php-cgi",
                     "socket" => "/tmp/fscgi.socket",
                     "max-procs" => 2,
                     "bin-environment" => ( 
                       "PHP_FCGI_CHILDREN" => "16",
                       "PHP_FCGI_MAX_REQUESTS" => "10000",
                       "allow-x-send-file" => "enable" 
                     ),
                     "bin-copy-environment" => (
                       "PATH", "SHELL", "USER" 
                     ),
                     "broken-scriptfilename" => "enable" 
                 )))
url.rewrite-final = (
"^/([0-9]+)/(.+)$" => "file.php?userid=$1&file=$2",
)
server.document-root = "/home/xxx/public_html/" 
}
$HTTP["host"] == "dev.boxstr.com" {
server.document-root = "/home/xx/public_html/dev/" 
}
server.bind = "216.240.146.62" 
server.port = 80
server.username = "xx" 
server.groupname = "xx" 
server.max-write-idle = 600
server.pid-file = "/var/run/lighttpd.pid" 
server.modules = (
                   "mod_fastcgi",
                   "mod_rewrite",
                   "mod_redirect",
                   "mod_status",
                   "mod_setenv",
                   "mod_secdownload",
                   "mod_evasive",
                 )
$SERVER["socket"] == "xxxx.com:80" {
fastcgi.server = ( ".php" => (( 
                     "bin-path" => "/opt/php5/bin/php-cgi",
                     "socket" => "/tmp/fscgi.socket",
                     "max-procs" => 2,
                     "bin-environment" => ( 
                       "PHP_FCGI_CHILDREN" => "16",
                       "PHP_FCGI_MAX_REQUESTS" => "10000" 
                     ),
                     "bin-copy-environment" => (
                       "PATH", "SHELL", "USER" 
                     ),
                     "broken-scriptfilename" => "enable" 
                 )))
status.statistics-url = "/server-counters" 
url.rewrite-final = ( 

"^/([0-9]+)/?$" => "index.php?r=$1",
"^/register/?$" => "account.php?action=register",
"^/login/?$" => "account.php?action=login",
"^/pupload/?$" => "public.php",
"^/pupload/browse(/([0-9]+))?/?$" => "public.php?action=browse&page=$2",
"^/pupload/view/([0-9]+)/?$" => "public.php?action=view&upload_id=$1",
"^/pupload/manage/([0-9]+)/([0-9a-z]+)/?$" => "public.php?action=manage&upload_id=$1&key=$2",
"^/myfiles(/(.*))?$" => "myfiles.php?folder=$1",
"^/members/?$" => "browse.php",
"^/members/([0-9]+)/?$" => "browse.php?page=$1",
"^/members/public/?([0-9]+)?/?$" => "browse.php?public=1&page=$1",
"^/members/info/([0-9]+)$" => "browse.php?action=info&userid=$1",
"^/members/browse/([0-9]+)/?(/.+)?$" => "browse.php?action=browse&userid=$1&folder=$2",
"^/upload(/(.*))?$" => "upload-multiple.php?upload_to=$1",
"^/gallery/([a-z0-9\_]+)/?$" => "/gallery/quickgo.php?a=$1",
"^/go/([a-z0-9\_]+)/?$" => "browse.php?action=browse&username=$1",
"^/files/([0-9]+)/(.+)$" => "file.php?userid=$1&file=$2",

)
}
mimetype.assign             = (
  ".pdf"          =>      "application/pdf",
  ".sig"          =>      "application/pgp-signature",
  ".spl"          =>      "application/futuresplash",
  ".class"        =>      "application/octet-stream",
  ".ps"           =>      "application/postscript",
  ".torrent"      =>      "application/x-bittorrent",
  ".dvi"          =>      "application/x-dvi",
  ".gz"           =>      "application/x-gzip",
  ".pac"          =>      "application/x-ns-proxy-autoconfig",
  ".swf"          =>      "application/x-shockwave-flash",
  ".tar.gz"       =>      "application/x-tgz",
  ".tgz"          =>      "application/x-tgz",
  ".tar"          =>      "application/x-tar",
  ".zip"          =>      "application/zip",
  ".mp3"          =>      "audio/mpeg",
  ".m3u"          =>      "audio/x-mpegurl",
  ".wma"          =>      "audio/x-ms-wma",
  ".wax"          =>      "audio/x-ms-wax",
  ".ogg"          =>      "application/ogg",
  ".wav"          =>      "audio/x-wav",
  ".gif"          =>      "image/gif",
  ".jpg"          =>      "image/jpeg",
  ".jpeg"         =>      "image/jpeg",
  ".png"          =>      "image/png",
  ".xbm"          =>      "image/x-xbitmap",
  ".xpm"          =>      "image/x-xpixmap",
  ".xwd"          =>      "image/x-xwindowdump",
  ".css"          =>      "text/css",
  ".html"         =>      "text/html",
  ".htm"          =>      "text/html",
  ".js"           =>      "text/javascript",
  ".asc"          =>      "text/plain",
  ".c"            =>      "text/plain",
  ".cpp"          =>      "text/plain",
  ".log"          =>      "text/plain",
  ".conf"         =>      "text/plain",
  ".text"         =>      "text/plain",
  ".txt"          =>      "text/plain",
  ".dtd"          =>      "text/xml",
  ".xml"          =>      "text/xml",
  ".mpeg"         =>      "video/mpeg",
  ".mpg"          =>      "video/mpeg",
  ".mov"          =>      "video/quicktime",
  ".qt"           =>      "video/quicktime",
  ".avi"          =>      "video/x-msvideo",
  ".asf"          =>      "video/x-ms-asf",
  ".asx"          =>      "video/x-ms-asf",
  ".wmv"          =>      "video/x-ms-wmv",
  ".bz2"          =>      "application/x-bzip",
  ".tbz"          =>      "application/x-bzip-compressed-tar",
  ".tar.bz2"      =>      "application/x-bzip-compressed-tar" 
 )
static-file.exclude-extensions = ( ".fcgi", ".php", ".rb", "~", ".inc" )
index-file.names = ( "index.html","index.php" )

file download script:


<?php
header('Cache-control: max-age=2592000');
header('Expires: '.gmdate('D, d M Y H:i:s \G\M\T',time()+2592000));
$chunk=20480; // bytes
@set_time_limit(0);
@ignore_user_abort(true);
@set_magic_quotes_runtime(0);
require'includes/db.class.php';
require'includes/functions_mime.inc.php';
require'includes/mysql.class.php';
require'includes/configs.inc.php';
extract($UPL['MYSQL'],EXTR_OVERWRITE);
$M=new mysqlDB($host,$username,$password,$database,0);
function out($f){header('Content-type: image/gif');@readfile($f);exit;}
$DB=new DB;if($DB->open('data/settings/upl_settings.php'))$UFD=$DB->get('userfiles_dir');else exit("Couldn't open ".UPLOADER_SETTINGS);$DB->close();
$userid=@$_GET['userid']?(int)$_GET['userid']:exit('No userid.');
$FILE=@$_GET['file']?$_GET['file']:exit('No file.');
$ACT=@$_GET['action'];
if(get_magic_quotes_gpc()){$FILE=stripslashes($FILE);}
if(strstr($FILE,'../'))exit('Access Denied');
$PATH="$UFD/$userid/$FILE";

if(isset($_SERVER['REQUEST_URI'])&&$ACT!='download')
{
    $fname=basename(rawurldecode($_SERVER['REQUEST_URI']));
    if(strstr($fname,'../'))exit('Access Denied');
    $PATH="$UFD/$userid/".dirname($FILE)."/$fname";
    $FILE=$fname;
    clearstatcache();
}
if(is_file($PATH))
{
    $size=filesize($PATH);
    if(!$M->query(sprintf("SELECT bw_reset_last,bw_reset_period,bw_reset_auto,bw_used,bw_max,bw_xfer_rate FROM uploader_users WHERE userid=%d LIMIT 1;", $userid)))exit($M->error());
    if($M->getRowCount())
    {
        $uinfo=$M->getAssoc();
        $M->free();
        $bw_used=$uinfo['bw_used'];
        $bw_max=$uinfo['bw_max']*1024;
        if($bw_max!=0&&$bw_used>$bw_max)
        {
            if($uinfo['bw_reset_auto'])
            {
                $lstrst=(time()-$uinfo['bw_reset_last'])/86400; // days
                if($lstrst>=$uinfo['bw_reset_period'])
                {
                    $M->query(sprintf("UPDATE uploader_users SET bw_reset_last='%s', bw_used=0 WHERE userid=%d;",time(),$userid));
                    $bw_used=0;
                }
                else out('data/bandwidth_exceeded.gif');
            }else out('data/bandwidth_exceeded.gif');
        }
        # Send & update
        $offset = 60 * 60 * 24 * 1;
        header('Pragma: public');
        header("Cache-Control: max-age=".$offset.", must-revalidate");
        $ExpStr = "Expires: " . gmdate("D, d M Y H:i:s", time() + $offset) . " GMT";
        header($ExpStr);
        header('Content-disposition: '.($ACT=='download'?'attachment;':'').'filename="'.(basename($FILE)).'";');
        header('Content-type: '.mime_type($PATH));
        header('Content-length: '.$size);
        $speed=$uinfo['bw_xfer_rate'];
        $sleep=$speed?floor(($chunk/($speed*1024))*1000000):0;
        $sent=0;
        if(false===($fp=fopen($PATH,'rb')))exit;
        do{$buf=fread($fp,$chunk);$sent+=strlen($buf);print$buf;flush();usleep($sleep);}while(!feof($fp)&&!connection_aborted());
        fclose($fp);                    
        $M->query(sprintf("UPDATE uploader_users SET bw_used=bw_used+%f WHERE userid=%d;",$sent/1024,$userid));
    }
    else exit('Could not open user data.');
}
else out('data/file_not_found.gif');
?>
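The throttling in the send loop above works by sleeping between fixed-size chunks. A minimal sketch of that math (Python here, purely illustrative, mirroring `$sleep = floor(($chunk/($speed*1024))*1000000)` from the PHP script):

```python
CHUNK = 20480  # bytes per fread()/print, matching $chunk in the script

def sleep_us(rate_kb_s: int) -> int:
    """Microseconds to sleep after each chunk so the average transfer
    rate approximates rate_kb_s kilobytes per second; 0 disables throttling."""
    if rate_kb_s == 0:
        return 0
    return int((CHUNK / (rate_kb_s * 1024)) * 1_000_000)

# At 20 KB/s, each 20 KB chunk is followed by a ~1 s sleep:
print(sleep_us(20))  # 1000000
```

Note that the sleep ignores the time spent actually writing the chunk, so the real rate lands somewhat below the target; and because the backend blocks in `usleep()` for the whole transfer, each throttled download occupies a PHP FastCGI child for its full duration.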


Related issues

Related to Bug #949: fastcgi, cgi, flush, php5 problem. (Fixed)

Associated revisions

Revision 5a91fd4b (diff)
Added by gstrauss about 1 year ago

[core] buffer large responses to tempfiles (fixes #758, fixes #760, fixes #933, fixes #1387, #1283, fixes #2083)

This replaces buffering entire response in memory which might lead to
huge memory footprint and possibly to memory exhaustion.

use tempfiles of fixed size so disk space is freed as each file sent

update callers of http_chunk_append_mem() and http_chunk_append_buffer()
to handle failures when writing to tempfile.

x-ref:
"memory fragmentation leads to high memory usage after peaks"
https://redmine.lighttpd.net/issues/758
"Random crashing on FreeBSD 6.1"
https://redmine.lighttpd.net/issues/760
"lighty should buffer responses (after it grows above certain size) on disk"
https://redmine.lighttpd.net/issues/933
"Memory usage increases when proxy+ssl+large file"
https://redmine.lighttpd.net/issues/1283
"lighttpd+fastcgi memory problem"
https://redmine.lighttpd.net/issues/1387
"Excessive Memory usage with streamed files from PHP"
https://redmine.lighttpd.net/issues/2083

Revision 18a7b2be (diff)
Added by gstrauss about 1 year ago

[core] option to stream response body to client (fixes #949, #760, #1283, #1387)

Set server.stream-response-body = 1 or server.stream-response-body = 2
to have lighttpd stream response body to client as it arrives from the
backend (CGI, FastCGI, SCGI, proxy).

default: buffer entire response body before sending response to client.
(This preserves existing behavior for now, but may in the future be
changed to stream response to client, which is the behavior more
commonly expected.)

x-ref:
"fastcgi, cgi, flush, php5 problem."
https://redmine.lighttpd.net/issues/949
"Random crashing on FreeBSD 6.1"
https://redmine.lighttpd.net/issues/760
"Memory usage increases when proxy+ssl+large file"
https://redmine.lighttpd.net/issues/1283
"lighttpd+fastcgi memory problem"
https://redmine.lighttpd.net/issues/1387
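Per the commit message above, the new behavior is opt-in; a sketch of enabling it in lighttpd.conf:

```
# default 0: buffer the entire response body before sending (old behavior)
# 1 or 2: stream the response body to the client as it arrives from the
# backend (CGI, FastCGI, SCGI, proxy), per the commit message above
server.stream-response-body = 2
```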

History

#1 Updated by Anonymous almost 10 years ago

Sorry, but I fail to see the purpose of that script.

Isn't that what x-lighttpd-send-file and connection.kbytes-per-second is for?
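For reference, a sketch of the X-Sendfile approach suggested here. Note that in the reporter's config above, `"allow-x-send-file" => "enable"` sits inside `bin-environment`, where it is merely passed to PHP as an environment variable; as a FastCGI option it belongs alongside `bin-path`:

```
fastcgi.server = ( "file.php" => ((
    "bin-path"          => "/opt/php5/bin/php-cgi",
    "socket"            => "/tmp/fscgi.socket",
    # fastcgi.server option, not an environment variable:
    "allow-x-send-file" => "enable"
)))
```

The PHP script would then do its bandwidth accounting and reply with just `header('X-LIGHTTPD-send-file: ' . $PATH);` instead of streaming the file itself, letting lighttpd serve the file efficiently, while `connection.kbytes-per-second` caps the per-connection rate.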

About the buffering, I'd sooner see it as a feature. I think it is important that a client cannot lock up the whole FCGI server just by opening PHP_FCGI_CHILDREN connections and reading from them at homeopathic rates.

#2 Updated by admin almost 10 years ago

About the buffering, I'd more see it as a feature.

Maybe, but buffering 100+ MB per request in memory doesn't sound like a smart solution.
It'd be better to block the FCGI backend than to make the entire server swap when a FCGI backend decides to generate such a big response.

#3 Updated by Anonymous almost 10 years ago

I have a similar problem.
In my case I have another lighty as a backend behind mod_proxy. When I try to download ~500 MB with a multithreaded downloader (FlashGet), I run out of memory and the oom-killer starts its work :(

no fastcgi, just plain download.

master lighttpd server (port 80)


server.event-handler = "linux-sysepoll" 
server.modules +=  ("mod_proxy")
proxy.server  = ( "" => ( ( "host" => "127.0.0.1", "port" => 81 ) ) )

slave lighttpd server (port 81)


server.event-handler = "linux-sysepoll" 
server.port          = 81
server.bind          = "127.0.0.1" 

#4 Updated by stbuehler over 9 years ago

- buffering is preferred over blocking. If you have a problem with that, bad luck for you.
- I don't think your script really limits the download rate; it only tracks started downloads and (possibly) used traffic. So why not use x-lighttpd-sendfile?
- the only possible option is for lighty to buffer the response on disk, like it does with requests in 1.5

#5 Updated by admin over 9 years ago

the buffering is preferred over blocking.

Why?

#6 Updated by stbuehler over 9 years ago

I think the answer is here:

About the buffering, I'd sooner see it as a feature. I think it is important that a client cannot lock up the whole FCGI server just by opening PHP_FCGI_CHILDREN connections and reading from them at homeopathic rates.

And there is no simple solution to balance between these two options.

And what you are trying to do is just the wrong way: running one blocking PHP backend just to limit traffic. If you really need this behaviour, then write your own lighty module.

But I really think you do not limit the download rate (or your lighty's memory would not be flooded), so why (again!) do you not use x-lighttpd-sendfile?

#7 Updated by admin over 9 years ago

so why (again!!!!) do you not use x-lighttpd-sendfile?

Note that I'm not the bug reporter.

I think it is important, that it's not possible for a client to lock the whole FCGI server by just creating PHP_FCGI_CHILDREN connections and reading at homeopathic rates from them.

Of course some buffering is required. But when considering other FastCGI servers, this behaviour could certainly be undesirable.

#8 Updated by Anonymous about 9 years ago

I agree that this script is a pretty bad approach (this should be done in the webserver), but lighttpd buffering that much data is obviously bad for any kind of large dynamic content. The general solution would be to buffer large responses to disk, like POST uploads are handled, to avoid the unreasonable memory usage. This should, of course, abort properly if the connection is closed (so potentially hundreds of megs aren't moved on the server when a client makes a request and closes the connection).

This should also optionally start sending these parts as chunked data as soon as they're available, to avoid waiting for the whole response to be written, which may take a long time; this precludes sending Content-Length, so should be optional (maybe specified by including Transfer-Encoding: chunked in the response headers).
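The chunked-transfer idea in this comment can be illustrated with a small sketch (Python, purely illustrative, not lighttpd code): each piece of the body is framed as `<hex length>\r\n<data>\r\n`, and a zero-length chunk terminates the stream, so no Content-Length is needed up front.

```python
def chunk_frame(data: bytes) -> bytes:
    """Frame one piece of a chunked response body."""
    return b"%x\r\n" % len(data) + data + b"\r\n"

def chunked_body(pieces) -> bytes:
    """Frame all pieces and append the terminating zero-length chunk."""
    return b"".join(chunk_frame(p) for p in pieces) + b"0\r\n\r\n"

print(chunked_body([b"Wiki", b"pedia"]))  # b'4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n'
```

This is exactly why chunked encoding fits the disk-buffering proposal: the server can forward each buffered piece as soon as it is available, without knowing the final size.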

#9 Updated by Anonymous about 9 years ago

What if one's fastcgi scripts sends large dynamic content, that cannot be sent using x-lighttpd-sendfile? I am running into exactly this issue, with a very similar script. Although in my case, the contents being sent are not only rate-limited, but also generated dynamically at request time. Lighttpd crashes and burns horribly in this case, eating all the memory in the system.

Why not leave the blocking vs buffering choice up to the users, instead of saying "too bad for you"?

-- Bram Avontuur

#10 Updated by stbuehler about 9 years ago

So... just code it. But don't expect us to include it in 1.4.x, as 1.4.x is the stable branch.

Or try 1.5.x and X-LIGHTTPD-send-tempfile.

#11 Updated by stbuehler almost 9 years ago

  • Target version changed from 1.4.20 to 1.4.21

#12 Updated by icy over 8 years ago

  • Status changed from New to Wontfix
  • Priority changed from Urgent to Normal
  • Target version deleted (1.4.21)
  • Patch available set to No

Won't fix in 1.4

#13 Updated by Scumpeter over 2 years ago

  • Status changed from Wontfix to Reopened

Hi,

I reopened this bug because I just encountered it, and I think it is a very serious issue: it enables your website's visitors to crash your webserver.

In my case it was an instance of OwnCloud that triggered this bug. A user tried to download a folder from OwnCloud, which is done by zipping the folder on the fly and streaming the zipped content to the user. While streaming the zip, lighttpd used up all my server's memory, and the oom-killer first killed PHP and then lighty.
The bug can also be triggered with (big) single files in OwnCloud when they are encrypted.

Even if you won't fix the issue itself, please implement something that stops lighttpd from using up all available memory.

Best regards,
Eike

#15 Updated by samjam over 2 years ago

You can't even work around this deficiency by having your dynamic page generator emit to a named pipe and then doing sendfile on the named pipe.

#16 Updated by gstrauss about 1 year ago

  • Related to Bug #949: fastcgi, cgi, flush, php5 problem. added

#17 Updated by gstrauss about 1 year ago

  • Status changed from Wontfix to Patch Pending
  • Assignee deleted (jan)
  • Target version set to 1.4.40

New: asynchronous, bidirectional streaming support for request and response
Submitted pull request: https://github.com/lighttpd/lighttpd1.4/pull/66

Included in the pull request is buffering of large responses to temporary files instead of keeping everything in memory.

#18 Updated by gstrauss about 1 year ago

  • Status changed from Patch Pending to Fixed
  • % Done changed from 0 to 100
