Bug #607

closed

lighttpd + mod_ssl stalls on POST requests between 8317 and 16381 bytes long

Added by Anonymous over 18 years ago. Updated about 13 years ago.

Status:
Duplicate
Priority:
Normal
Category:
core
Target version:
-

Description


# My lighttpd.conf file
server.modules = ( "mod_access",
                   "mod_alias",
                   "mod_scgi",
                   "mod_accesslog",
                   "mod_rewrite",
                   "mod_staticfile" )

server.port             = 8003

server.document-root    = "/home/jtate/mercurial/tp/raa" 
var.logbase     = "/tmp" 

#Note that this is required if you wish to run multiple lighttpd processes
server.pid-file         = "/var/run/raa-lighttpd.pid" 
server.errorlog         = var.logbase + "/lighttpd.error.log" 
accesslog.filename      = var.logbase + "/lighttpd.access.log" 

mimetype.assign         = ( ".js"   => "text/javascript",
                            ".css"  => "text/css",
                            ".png"  => "image/png",
                            ".jpg"  => "image/jpeg",
                            ".gif"  => "image/gif",
                            ".ico"  => "image/x-icon" 
                          )

alias.url = (
            "/favicon.ico" => server.document-root + "/content/images/favicon.ico",
            "/static/" => server.document-root + "/content/")

$HTTP["url"] !~ "^/(static/|favicon.ico)" {
    scgi.server = (
                    "/" =>
                      ( "127.0.0.1" =>
                        (
                          "host" => "127.0.0.1",
                          "port" => 4000,
                          "check-local" => "disable" 
                        )
                      )
                  )
}

ssl.engine = "enable" 
ssl.pemfile = "/etc/ssl/pem/raa.pem" 

Note that the hang doesn't seem to happen from Firefox when posting this amount of data, but it does happen in both Perl (LWP) and Python (httplib) test programs we wrote (which we will upload soon). Turning off SSL also seems to stop the stall.
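
As a purely illustrative sketch of the kind of test we used (this is not the attached testlighttpd.py; it uses Python 3's http.client, the successor of httplib, and the host, port, path and body size are placeholders based on the config above):

import http.client
import ssl

# Placeholders matching the lighttpd.conf above; adjust as needed.
HOST, PORT, PATH = "127.0.0.1", 8003, "/"

# Body size chosen from inside the reported stall window (8317-16381 bytes).
body = b"x" * 9000

# The test certificate is self-signed, so verification is disabled here.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

conn = http.client.HTTPSConnection(HOST, PORT, context=ctx, timeout=120)
conn.request("POST", PATH, body,
             {"Content-Type": "application/octet-stream"})
resp = conn.getresponse()
print(resp.status, len(resp.read()))
conn.close()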

Also of note: when the client times out, a protocol error is reported, but if the client is killed, lighttpd finishes the request (it opens the connection to the SCGI server and processes it), as if the server were waiting for something more from the client before actually doing any work.

It doesn't matter which SCGI app is served, since an strace shows that no connection is even opened to the SCGI server for the failing request.

Another interesting point is that the same thing happens between 24701 and 32765 bytes, and another stall range starts at 41085. For every 16K of POST data there is an 8065-byte window where a request will stall lighttpd.
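
In other words, the stall windows appear to repeat with a 16384-byte period. A minimal sketch of that pattern (the repetition is extrapolated from the ranges above, not verified against the lighttpd source):

def in_stall_window(body_size, first=8317, width=8065, period=16384):
    # True if a POST body of this size falls into one of the observed
    # stall windows: 8317-16381, 24701-32765, 41085-..., and so on.
    if body_size < first:
        return False
    return (body_size - first) % period < width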

-- jtate


Files

testlighttpd.py (493 Bytes) testlighttpd.py test post using python jtate, 2006-04-04 15:30
lighttpd.ssl.patch (360 Bytes) lighttpd.ssl.patch Patch that fixes the 16K boundary condition. jtate, 2006-07-07 12:22
lighttpd-1.4.25_fix_ssl_connection_stall.patch (375 Bytes) lighttpd-1.4.25_fix_ssl_connection_stall.patch , 2010-02-10 05:32

Related issues: 1 (0 open, 1 closed)

Is duplicate of Bug #2197: lighttpd stalls when reading fragmented ssl requests (Fixed, 2010-05-11)
Actions #1

Updated by jtate over 18 years ago

I turned on debug.log-state-handling, and got the following output:
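
For reference, that option is a single line in lighttpd.conf:

debug.log-state-handling = "enable"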


# for a request of 8316 bytes:
2006-04-04 11:19:33: (connections.c.1311) state at start 8 req-start
2006-04-04 11:19:33: (connections.c.1324) state for fd 8 req-start
2006-04-04 11:19:33: (connections.c.1571) state for fd 8 read
2006-04-04 11:19:33: (connections.c.1685) state at exit: 8 read
2006-04-04 11:19:34: (connections.c.1311) state at start 8 read
2006-04-04 11:19:34: (connections.c.1571) state for fd 8 read
2006-04-04 11:19:34: (connections.c.1685) state at exit: 8 read
2006-04-04 11:19:34: (connections.c.1311) state at start 8 read
2006-04-04 11:19:34: (connections.c.1571) state for fd 8 read
2006-04-04 11:19:34: (connections.c.1685) state at exit: 8 read
2006-04-04 11:19:34: (connections.c.1311) state at start 8 req-end
2006-04-04 11:19:34: (connections.c.1339) state for fd 8 req-end
2006-04-04 11:19:34: (connections.c.1571) state for fd 8 readpost
2006-04-04 11:19:34: (connections.c.1685) state at exit: 8 readpost
2006-04-04 11:19:34: (connections.c.1311) state at start 8 readpost
2006-04-04 11:19:34: (connections.c.1571) state for fd 8 readpost
2006-04-04 11:19:34: (connections.c.1365) state for fd 8 handle-req
2006-04-04 11:19:34: (connections.c.1685) state at exit: 8 handle-req
2006-04-04 11:19:34: (connections.c.1311) state at start 8 handle-req
2006-04-04 11:19:34: (connections.c.1365) state for fd 8 handle-req
2006-04-04 11:19:34: (connections.c.1451) state for fd 8 resp-start
2006-04-04 11:19:34: (connections.c.1579) state for fd 8 write
2006-04-04 11:19:34: (connections.c.1685) state at exit: 8 write
2006-04-04 11:19:34: (connections.c.1311) state at start 8 write
2006-04-04 11:19:34: (connections.c.1579) state for fd 8 write
2006-04-04 11:19:34: (connections.c.1467) state for fd 8 resp-end
2006-04-04 11:19:34: (connections.c.1522) state for fd 8 connect
2006-04-04 11:19:34: (connections.c.1685) state at exit: 8 connect
2006-04-04 11:19:34: (connections.c.1311) state at start 8 connect
2006-04-04 11:19:34: (connections.c.1522) state for fd 8 connect
2006-04-04 11:19:34: (connections.c.1685) state at exit: 8 connect
2006-04-04 11:19:34: (connections.c.1311) state at start 8 connect
2006-04-04 11:19:34: (connections.c.1522) state for fd 8 connect
2006-04-04 11:19:34: (connections.c.1685) state at exit: 8 connect

# For a request of 8317 bytes
2006-04-04 11:19:52: (connections.c.1311) state at start 8 req-start
2006-04-04 11:19:52: (connections.c.1324) state for fd 8 req-start
2006-04-04 11:19:52: (connections.c.1571) state for fd 8 read
2006-04-04 11:19:52: (connections.c.1685) state at exit: 8 read
2006-04-04 11:19:52: (connections.c.1311) state at start 8 read
2006-04-04 11:19:52: (connections.c.1571) state for fd 8 read
2006-04-04 11:19:52: (connections.c.1685) state at exit: 8 read
2006-04-04 11:19:52: (connections.c.1311) state at start 8 read
2006-04-04 11:19:52: (connections.c.1571) state for fd 8 read
2006-04-04 11:19:52: (connections.c.1685) state at exit: 8 read
2006-04-04 11:19:52: (connections.c.1311) state at start 8 req-end
2006-04-04 11:19:52: (connections.c.1339) state for fd 8 req-end
2006-04-04 11:19:52: (connections.c.1571) state for fd 8 readpost
2006-04-04 11:19:52: (connections.c.1685) state at exit: 8 readpost
2006-04-04 11:19:52: (connections.c.1311) state at start 8 readpost
2006-04-04 11:19:52: (connections.c.1571) state for fd 8 readpost
2006-04-04 11:19:52: (connections.c.1685) state at exit: 8 readpost
2006-04-04 11:20:53: (connections.c.1311) state at start 8 error
2006-04-04 11:20:53: (connections.c.1658) shutdown for fd 8
2006-04-04 11:20:53: (connections.c.1533) state for fd 8 close
2006-04-04 11:20:53: (connections.c.1562) connection closed for fd 8
2006-04-04 11:20:53: (connections.c.1522) state for fd 8 connect
2006-04-04 11:20:53: (connections.c.1685) state at exit: 8 connect

Note that 60 seconds elapses before the error state kicks in.

Actions #2

Updated by jan over 18 years ago

  • Status changed from New to Fixed
  • Resolution set to fixed

fixed in changeset r1095

Actions #3

Updated by Anonymous over 18 years ago

  • Status changed from Fixed to Need Feedback
  • Resolution deleted (fixed)

I'm still seeing a problem after installing the latest code from svn, except that for me it is happening between 12476 and 16382 bytes inclusive, and in every 16K range after that. I'm using the same test program as above. If you aren't seeing the problem, I'll put together a simplified version of my conf file and debugging information.

-- brad

Actions #4

Updated by bgreenlee over 18 years ago

It looks like that fixed the problem. Thanks! I noticed that it hasn't been added to the svn trunk, though.

Actions #5

Updated by jan about 18 years ago

  • Status changed from Need Feedback to Fixed
  • Resolution set to fixed

fixed in 1.4.12

Actions #6

Updated by jtate about 18 years ago

confirmed fixed.

Actions #7

Updated by almost 15 years ago

The fix has NOT been applied in 1.4.25 yet; the unpatched version still suffers from the stall issue, and applying the patch resolves it. Since this bug has been closed for 3 years and 1.4.25 was released only a few months ago, it is quite likely that the latest version still suffers from the same issue.

The attachment is the same patch, retargeted for version 1.4.25.

Actions #8

Updated by almost 15 years ago

  • Target version set to 1.4.x
Actions #9

Updated by Olaf-van-der-Spek about 14 years ago

  • Target version changed from 1.4.x to 1.4.29
Actions #10

Updated by stbuehler almost 14 years ago

  • Status changed from Reopened to Duplicate
  • Priority changed from High to Normal
  • Target version deleted (1.4.29)
  • Missing in 1.5.x set to No
Actions #11

Updated by Olaf-van-der-Spek almost 14 years ago

Duplicate of what?

Actions #12

Updated by aron about 13 years ago

I think it is a duplicate of #2197, fixed in r2729.
