Memory leak in stat_cache.c on a 64bit server
Running r2000 of lighttpd 1.5.0 on a CentOS 4 64-bit machine, I am seeing a memory leak with no modules loaded, using a load test script that repeatedly downloads the same 7.5k flat file:
==12008== 7,680 (3,680 direct, 4,000 indirect) bytes in 20 blocks are definitely lost in loss record 15 of 15
==12008==    at 0x4905D27: calloc (vg_replace_malloc.c:279)
==12008==    by 0x41DD8B: stat_cache_entry_init (stat_cache.c:175)
==12008==    by 0x41DF7A: stat_cache_get_entry_internal (stat_cache.c:324)
==12008==    by 0x41E364: stat_cache_get_entry_async (stat_cache.c:484)
==12008==    by 0x417129: handle_get_backend (response.c:505)
==12008==    by 0x4132B7: connection_state_machine (connections.c:1074)
==12008==    by 0x408BAA: lighty_mainloop (server.c:1005)
==12008==    by 0x40A274: main (server.c:1739)
==12008==
==12008== LEAK SUMMARY:
==12008==    definitely lost: 3,680 bytes in 20 blocks.
==12008==    indirectly lost: 4,000 bytes in 100 blocks.
==12008==    possibly lost: 0 bytes in 0 blocks.
==12008==    still reachable: 6,588 bytes in 34 blocks.
==12008==    suppressed: 0 bytes in 0 blocks.
The number of direct blocks reported is always twice the number of HTTP requests processed. The config file settings used are:
server.groupname = "lighttpd"
server.errorlog = "/web/var/log/error.test.log"
server.pid-file = "/var/run/lighttpd/lighttpd.test.pid"
server.port = 12345
server.document-root = "/web/www/html/"
Updated by jrabbit over 12 years ago
Further investigation has revealed that I did not have the glib2-devel package on the server. Installing it sends the code down a different path inside stat_cache.c, as HAVE_GLIB_H is then defined. Built in this configuration, the server does not exhibit the problem. The 64-bit platform is probably not significant. I've reduced the severity as a workaround is available.
Updated by peto over 11 years ago
- Status changed from Fixed to Reopened
I'm pretty sure this one isn't fixed. When glib isn't installed, it hits the "still have to store the sce somewhere" code path, which doesn't have a hash to store it in and never frees it (on success).
The interface seems to assume that the caller never stores the result, so the return value can be freed later in stat_cache_trigger_cleanup once it is no longer in use. That leaves the lifetime of the return value loosely defined, but a fix would be to store all return values in a simple array and free every one of them each time stat_cache_trigger_cleanup is called.