
Bug #1946

Enabling server.kbytes-per-second disables max-read/write-idle

Added by Delphy over 10 years ago. Updated over 10 years ago.

Status: Wontfix
Priority: Normal
Assignee: -
Category: core
Target version: -
Start date: 2009-03-22
Due date: -
% Done: 0%
Estimated time: -
Missing in 1.5.x: -

Description

Using the following in a config file:

server.max-keep-alive-requests = 0
server.max-keep-alive-idle = 0
server.max-read-idle = 60
server.max-write-idle = 120

evasive.max-conns-per-ip = 2

connection.kbytes-per-second = 200
server.kbytes-per-second = 3500

Setting server.kbytes-per-second effectively turns off max-write-idle: if a connection stalls and goes idle, it is not closed and instead stays in the list. As a result, server-status shows connection times in the hundreds or thousands of seconds (depending on how long the server has been running) even when zero bytes have been read from or written to the connection in the interim.

Disabling the server.kbytes-per-second value (but keeping the rest enabled) makes the server behave normally - idle connections are closed promptly - but the server is then not bandwidth-throttled and quickly eats up all available network bandwidth, which is not sustainable over the mid to long term.

Since this is one of the main features of lighttpd and the primary reason for choosing it, this setting is extremely important and should work out of the box.

A discussion of this issue can be found here: http://redmine.lighttpd.net/boards/2/topics/984

History

#1

Updated by icy over 10 years ago

  • Target version changed from 1.4.22 to 1.4.23
#2

Updated by Delphy over 10 years ago

  • Target version deleted (1.4.23)

Affected version is 1.4.22.

#3

Updated by icy over 10 years ago

  • Target version set to 1.4.23
#4

Updated by stbuehler over 10 years ago

The code itself looks basically fine; our guess is that the server load is too high to get back to normal, i.e. the limit is hit too early, and some connections are never allowed to send content again.

You could either try lowering the kernel's buffer limits (sysctl net.ipv4.tcp_wmem), so lighty cannot send too much data on one connection and more connections have the chance to send something before the limit is hit, or try lower per-connection limits, or use a lower max-connections limit.
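Concretely, the tcp_wmem suggestion could be sketched like this (the replacement values are illustrative assumptions, not tested recommendations for any particular workload):

```shell
# Inspect the kernel's current per-socket send-buffer sizing:
# three values - min, default, max - in bytes.
sysctl net.ipv4.tcp_wmem

# Lower the default and maximum so lighty cannot queue too much data on any
# single connection; smaller buffers mean more connections get a chance to
# send before the server-wide limit is hit. (Values are examples only.)
sysctl -w net.ipv4.tcp_wmem="4096 16384 65536"
```

Note this is a system-wide setting affecting all TCP sockets, so it would need to be weighed against throughput on fast, high-latency links.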

If the connections still don't vanish once the limit is no longer being hit, that would be a bug we may be able to fix - but I don't think that is the case.

#5

Updated by peto over 10 years ago

I've found that the rate limiting doesn't work terribly well: it's not very fair (it buffers up some connections a lot and gives little or nothing to others), and the 1-second interval makes it very chunky, so at the limit a connection that wants to download a 1k JSON response will often sit around for a full second while transmissions are disabled.

It's probably sufficient for quickly limiting a personal server that only a few people will use, but for an active website this is really the job of kernel (or router) QoS. The kernel has much more control, since it operates after TCP buffering instead of before, and will probably be much more responsive. Also, you want limits to be per-IP, not per-connection, so you don't encourage people to abuse parallel downloaders to get a bigger share (userspace could do this, but I'm pretty sure lighttpd doesn't) ...
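As a rough sketch of the kernel-QoS alternative described above, an HTB cap matching the report's server.kbytes-per-second = 3500 (3500 kbyte/s * 8 = 28 mbit/s) might look like the following. The interface name eth0 and all numbers are assumptions for illustration:

```shell
# Cap all outbound traffic on eth0 at ~28 mbit/s (~3500 kbyte/s) using an
# HTB qdisc; unclassified traffic falls into class 1:10 by default.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 28mbit ceil 28mbit

# Layer SFQ underneath for fairness between flows. SFQ hashes per flow
# rather than per IP, so a client running parallel downloads still gets
# multiple queue slots; true per-IP limits would need hashing filters or
# a per-IP class setup on top of this.
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
```

Because this operates after TCP buffering, stalled or slow connections back-pressure the sender naturally instead of being starved by a userspace token counter.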

#6

Updated by Delphy over 10 years ago

I think stbuehler is right - some other server load was affecting this. Since I dropped kbytes-per-second from 4000 to 3500 it's been a lot better - not perfect, but much better than before. So it evidently only works up to a certain server-load point.

You can go ahead and close this bug now.

#7

Updated by icy over 10 years ago

Sorry for abusing this ticket but I couldn't find any other way to message you Peto.
I always find your comments insightful and worthy.
Do you want to join us on IRC and maybe become a dev? I think the project could benefit from your contributions :)

#8

Updated by stbuehler over 10 years ago

  • Status changed from New to Wontfix

Hm, this bug is neither invalid nor fixed - so I'm closing it as wontfix, as I don't see how we could do this better without recoding lighty from scratch (we already do that anyway :) ).

I basically agree with peto: the kernel probably has better ways to do this. But there are some cases where you would want different limits/"pools" for different connections depending on the request, so it would be nice if lighty could do better here.

#9

Updated by peto over 10 years ago

If you want high-quality rate limiting and also want to tune the limiting based on aspects of the request that are only visible once the HTTP request is parsed (and are therefore lighttpd's job), it's tricky. In theory, you could set flags on the TCP connection to pass specific bits of information (e.g. priority) along to the kernel, though pipelining would make this a little fuzzy and I've never tried it at this level.

icy: Maybe I'll drop in, but I have too many projects, and with most of my patches sitting around indefinitely as is, I don't feel very motivated...
