Traffic Shaping » History » Revision 3
icy, 2010-09-19 20:02
Traffic Shaping¶
There are many reasons why one would want to throttle the rate at which connections can download data from your webserver.
These range from Quality of Service (QoS) to providing better speed for paying customers.
lighttpd offers three categories of throttling, which differ in how available bandwidth is shared:
- per connection
- per client IP address
- per "pool"
Throttling bandwidth per connection¶
io.throttle 100kbyte;
By using this method, each connection is limited to a certain rate at which it can download.
Different connections don't share available bandwidth quotas (other than the physical limit of your upstream pipe).
This is useful, for example, on a video streaming site where you want to limit the rate at which videos are streamed
to 1mbit/s, saving traffic when a client stops watching partway through and therefore does not need to preload the complete file.
Connection throttling also supports an initial burst,
which makes the limit apply only after the specified amount of traffic has been sent to the client.
io.throttle 1mbyte => 100kbyte;
The above example limits a connection to 100 kbyte/s after the first megabyte has been sent, which is especially useful for video streaming.
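The burst-then-limit behaviour above can be pictured as a token bucket whose bucket starts full with the burst amount. A minimal sketch in Python (class and method names are illustrative, not lighttpd internals; a real implementation would also cap how many tokens can accumulate):

```python
class ConnectionThrottle:
    """Token bucket: an initial burst goes out unthrottled, then
    traffic is limited to `rate` bytes per second."""

    def __init__(self, rate, burst=0):
        self.rate = rate      # sustained bytes/second
        self.tokens = burst   # bucket starts full with the burst
        self.last = 0.0       # timestamp of the last refill

    def sendable(self, now):
        """Return how many bytes may be sent at time `now`."""
        self.tokens += (now - self.last) * self.rate
        self.last = now
        return int(self.tokens)

    def consume(self, nbytes):
        """Account for `nbytes` actually sent."""
        self.tokens -= nbytes

# io.throttle 1mbyte => 100kbyte;  (burst, then sustained rate)
t = ConnectionThrottle(rate=100 * 1024, burst=1024 * 1024)
print(t.sendable(0.0))   # the full 1 mbyte burst is available immediately
t.consume(1024 * 1024)
print(t.sendable(1.0))   # one second later, only ~100 kbyte has refilled
```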
However, this technique doesn't make sense for pure file download services because a client can simply open multiple connections in order to circumvent the per-connection limit. And this is exactly where the next method comes into play.
Throttling bandwidth per client IP address (not implemented yet)¶
For download services that allow more than one connection per IP, limiting the download rate by IP is essential in order to prevent clients from getting around the limit by opening multiple connections.
All connections from the same IP address share the bandwidth assigned to them.
Throttling bandwidth per "pool"¶
if req.path =^ "/downloads/" { io.throttle_pool "downloads" => 90mbit; }
The third and last technique is not as self-explanatory as the previous two. It lets you group connections together which then get their fair share of bandwidth from a pool specific to this group. Pools have a name which can be any string.
An example illustrates this well: suppose you have a server with a 100mbit/s upstream connection, and your website has a popular download section consisting of many big files.
These downloads will by far eat up the most of your bandwidth but you don't want them to eat up all of it, making the main page load only very slowly.
To prevent this, you can group all requests for files in your download section and assign them to a pool which is limited to 90mbit/s, guaranteeing all other requests 10mbit/s of available bandwidth.
Connections cannot be part of more than one throttle pool at a time.
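Conceptually, a pool is one shared bucket: each connection gets a fair share of the pool's limit, and bandwidth that a slow connection does not use is redistributed among the others (max-min fairness). A rough sketch of that idea, with hypothetical names rather than lighttpd code:

```python
def pool_fair_share(pool_limit, demands):
    """Split `pool_limit` among connections with the given bandwidth
    demands. Slower connections keep only what they need, and the
    leftover is shared evenly among the rest (max-min fairness)."""
    share = {}
    remaining = pool_limit
    # Serve the least demanding connections first, so their unused
    # share flows to the remaining, hungrier connections.
    for i, demand in sorted(enumerate(demands), key=lambda p: p[1]):
        fair = remaining / (len(demands) - len(share))
        share[i] = min(demand, fair)
        remaining -= share[i]
    return [share[i] for i in range(len(demands))]

# A 90mbit/s pool: four downloads that would take all they can get,
# plus one slow client that only needs 5 mbit/s.
print(pool_fair_share(90, [100, 100, 100, 100, 5]))
```

The slow client keeps its 5 mbit/s, and the other four split the remaining 85 mbit/s evenly.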
Mixing methods¶
io.throttle_pool "server limit" => 10mbit; # global limit for all connections io.throttle 100kbyte => 1mbit; # per connection limit with innitial burst
The three methods of throttling explained above are not mutually exclusive, meaning you can combine them.
For example you can limit each connection to 1mbit/s and your whole server (by using a global throttle pool) to 50mbit/s.
Pools have the highest priority, then the per-IP limit is considered, and finally the per-connection one.
This means that if a connection belongs to a pool limited to 50mbit/s that currently contains 24 other connections, and each connection is itself limited to 10mbit/s, the active connection will get roughly 2mbit/s (provided the other connections in the pool are not too slow).
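The precedence above boils down to: a connection's effective rate is its fair share of the pool, capped by its own per-connection limit (and, once implemented, the per-IP limit). A back-of-the-envelope check of the 2mbit/s figure, as plain arithmetic rather than lighttpd code:

```python
def effective_rate(pool_limit, connections_in_pool, per_conn_limit):
    """Fair share of the pool, capped by the per-connection limit."""
    fair_share = pool_limit / connections_in_pool
    return min(fair_share, per_conn_limit)

# 50mbit/s pool, 25 connections (the one we watch plus 24 others),
# each connection limited to 10mbit/s:
print(effective_rate(50, 25, 10))  # → 2.0 mbit/s
```

With only two connections in the same pool, each would instead hit its own 10mbit/s cap long before exhausting the pool.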