dealing with memory leaks when proxying large files
Added by twhaples about 14 years ago
So, I've got an application server. Two kinds of clients connect to it: a few thousand automatic processes, mostly involved in periodically sending light traffic, and users who access the system through a web browser. Due to constraints outside my control, both need to talk to the same server over HTTPS on port 443 (on the same IP address). The main application server is none too happy dealing with all these connections at once and setting up SSL connections for them repeatedly, so instead, lighttpd has been set up as a proxy in front of it; a simple rule routes the automatic processes one way and the users another. Yay.
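For concreteness, here's a minimal sketch of roughly what that setup looks like (the certificate path, backend ports, and the /agent/ URL prefix used to pick out the automatic processes are stand-ins, not the real values):

    server.modules += ( "mod_proxy" )

    $SERVER["socket"] == ":443" {
        ssl.engine  = "enable"
        ssl.pemfile = "/etc/lighttpd/server.pem"   # stand-in certificate path

        # default: browser users are proxied to the main application backend
        proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => 8080 ) ) )

        # assumed: the automatic processes hit a distinct URL prefix
        $HTTP["url"] =~ "^/agent/" {
            proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => 8081 ) ) )
        }
    }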
It works like a dream...
...until a user tries to download a large, dynamically generated file and runs into a memory leak that sounds suspiciously like this one: http://redmine.lighttpd.net/issues/1283
It seems this was closed as "WONTFIX", and the official advice is "Don't send big files via proxy, cgi, scgi, ..."
a) What constitutes a "large file"? At exactly how many bytes does a proxied/CGI'd/SCGI'd/etc. response stop being officially supported?
b) I'm sure there's been some discussion about why it's WONTFIX and it would be kind of lame to rehash it extensively. Could anyone refer me to the preexisting rationale for not fixing this?
c) I'm about to sit down in front of the lighttpd source and try to fix this, because it's kind of really important to my project (and probably marginally less painful than writing this sort of proxy from scratch). Is there any background information available on this particular bug, or any well-known landmines I should avoid?
Replies (1)
RE: dealing with memory leaks when proxying large files - Added by Olaf-van-der-Spek about 14 years ago
Can't you use a real proxy, like Varnish?
a) The entire file is buffered inside lighttpd before it is sent on to the client, so 'too large' just means larger than however much memory you want lighttpd to use. (A newer mitigation is sketched after these answers.)
b) This will get fixed in 2.0.
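(For anyone landing on this thread later: lighttpd 1.4.40 and later added a server.stream-response-body directive that lets proxy/CGI/FastCGI responses be streamed to the client rather than buffered whole in memory. A minimal sketch, assuming a 1.4.40+ lighttpd:

    # 0 = buffer the entire response before sending (the old behaviour, and the default)
    # 1 = stream the response to the client as it arrives from the backend
    # 2 = stream and also apply backpressure to the backend
    server.stream-response-body = 2

With = 2, lighttpd reads from the backend only about as fast as the client can consume, which bounds memory use for large downloads.)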