Feature #3191


Evaluation of remote_addr for mod_maxminddb for multiplexed connections

Added by fstelzer almost 2 years ago. Updated almost 2 years ago.

Status:
Fixed
Priority:
Normal
Category:
mod_extforward
Target version:
1.4.70
ASK QUESTIONS IN Forums:
No

Description

It seems like I have hit two different but similar bugs within lighttpd modules.
I run lighttpd as a backend behind a load balancer that does connection pooling / multiplexing.

1.
When only doing HTTP/1 on the backend connections I get the correct remote_addr for every request, even though the connection is reused by the load balancer.
When enabling HTTP/2 for the backend, mod_extforward seems to only re-evaluate the X-Forwarded-For header once for every new connection.
I see multiple requests with different header values give me the same REMOTE_ADDR (usually the one from the first request / when the connection was set up).

2. mod_maxminddb seems to have a similar behaviour of not re-evaluating the remote_addr for multiple requests on the same pooled connection. In this case it happens for both HTTP/1 and HTTP/2.


Related issues: 1 (0 open, 1 closed)

Related to Feature #3192: RFE: mod_extforward and multiplexed requests via HTTP/2 (Fixed)
Actions #1

Updated by gstrauss almost 2 years ago

  • Status changed from New to Need Feedback

mod_extforward seems to only re-evaluate the X-Forwarded-For header once for every new connection.

Correct. That is by design and is intentional.

lighttpd expects a TCP connection per client. If the client authenticated using TLS client certificate authentication, then that is by connection, not by request. There can be many requests on a single connection. Similarly, if using a load balancer sending HAProxy PROXY protocol, the information provided by the load balancer is per-connection, not per-request.

mod_maxminddb seems to have a similar behaviour

Correct. That is by design and is intentional. On the first request that is evaluated by mod_maxminddb, the result is the same for all subsequent requests on the same connection. If the first request on the connection was not evaluated by mod_maxminddb, it is still possible that a subsequent request gets evaluated by mod_maxminddb. mod_maxminddb does not evaluate requests unless that data is used by another module which handles the request (e.g. a FastCGI backend). mod_maxminddb evaluation is deferred until required by another module handling the request.
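
For illustration, a minimal config along these lines (the database path and backend socket are placeholders) shows that deferred evaluation: the lookup result is exported to the request environment when a backend such as FastCGI needs it, not for e.g. static files.

server.modules += ("mod_maxminddb", "mod_fastcgi")

maxminddb.activate = "enable"
maxminddb.db = "/usr/share/GeoIP/GeoLite2-Country.mmdb"
maxminddb.env = ("GEOIP_COUNTRY_CODE" => "country/iso_code")

# GEOIP_COUNTRY_CODE is added to the request environment and passed to FastCGI
fastcgi.server = ( ".php" => (( "socket" => "/run/php-fpm.sock" )) )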


You seem to be suggesting that the above is incorrect behavior. On the contrary: as I have described, the behavior is by design and is intentional.

It sounds like your load balancer is reusing the same connection for different clients and that the difference matters to you; if the difference did not matter to you, then you would not have filed this issue. Since the difference does matter to you, please reconfigure your load balancer not to reuse the same connection for different clients.

Why do you think that lighttpd should re-evaluate this information per-request? That would be a performance regression to re-evaluate mod_extforward and mod_maxminddb per-request rather than per-connection for the expected scenario of one client per connection (and many requests from that one client on that same connection).

Actions #2

Updated by fstelzer almost 2 years ago

gstrauss wrote in #note-1:
Thanks for the quick response and the clarification of the design choice.

It sounds like your load balancer is reusing the same connection for different clients and that the difference matters to you; if the difference did not matter to you, then you would not have filed this issue. Since the difference does matter to you, please reconfigure your load balancer not to reuse the same connection for different clients.

Why do you think that lighttpd should re-evaluate this information per-request? That would be a performance regression to re-evaluate mod_extforward and mod_maxminddb per-request rather than per-connection for the expected scenario of one client per connection (and many requests from that one client on that same connection).

Many load balancers (we are using F5 hardware, Envoy proxy and Varnish) try to optimize backend connections by multiplexing many requests onto a pool of TCP connections. They do this either by simply using HTTP/1 keep-alive connections or, even more efficiently with HTTP/2, by using fewer TCP connections with multiple streams per connection.
This is a rather efficient way of reducing latency and overhead by completely eliminating the backend connection setup for most requests.
We have been using this for many years without issue, since we were parsing the X-Forwarded-For header within the FastCGI application and doing the GeoIP lookup there as well.
To make life easier for the app we tried to introduce the two lighttpd modules, but stumbled onto the effects mentioned above.

I agree that this could be a performance regression if redone for every request. But the re-evaluation would only need to happen when specific headers change and would have no effect otherwise. I'm not sure what other modules could be affected in a similar way, though.

As far as I understand the docs, mod_access will utilize the real client IP extracted from X-Forwarded-For. I'm not sure about a config conditional using $HTTP["remoteip"]. But both will grant access to a resource if a connection is reused for a new client when the first one had the correct IP. I think this way of connection pooling is fairly common in proxies and load balancers, so this could also lead to some security issues.

For mutual TLS authentication a load balancer will either forward the raw connection from the client or only authenticate itself against the backend, so for both cases this is expected behaviour.

Actions #3

Updated by gstrauss almost 2 years ago

  • Category deleted (mod_extforward)
  • Status changed from Need Feedback to Invalid
  • Target version deleted (1.4.xx)

Many load balancers (we are using F5 hardware, Envoy proxy and Varnish) try to optimize backend connections by multiplexing many requests onto a pool of TCP connections.

Your post appears to make numerous broad, sweeping assumptions, some of which are true only in specific scenarios.

Load balancers are often configurable. As I wrote in my original response above:

It sounds like your load balancer is reusing the same connection for different clients and that the difference matters to you; if the difference did not matter to you, then you would not have filed this issue. Since the difference does matter to you, please reconfigure your load balancer not to reuse the same connection for different clients.

An example: HAProxy connection pooling can be configured as "never", "safe", "aggressive", and "always". There are reasons one option is labelled "safe", and the others (besides "never") are not labelled "safe". The other options are valid in specific configurations, and might not be valid in other configurations. Those other options are not valid with lighttpd mod_extforward, since lighttpd mod_extforward was designed with different constraints.
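
For example, a minimal HAProxy backend section choosing the conservative option looks like this (server name and address are placeholders):

backend be_lighttpd
    http-reuse safe
    server web1 192.0.2.10:80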

I agree that this could be a performance regression if redone for every request. But the re-evaluation would only need to happen when specific headers change and would have no effect otherwise. I'm not sure what other modules could be affected in a similar way, though.

Seems like you are suggesting it is just a SMOP (a simple matter of programming)

No, changing this in lighttpd would not be trivial and would require more resources. (You might be fine with that, but many other people do not need it.) lighttpd would have to be modified to allocate additional memory for every request and to copy the IP for every request. lighttpd does not currently do this, as the IP is per connection, and I do not have any plans to change this in lighttpd.

If I had to guess, I'd guess that you are in netops and trying to optimize number of connections. That's fine. I encourage you to evaluate and look for other bottlenecks, too. If number of open connections (mostly idle) is your biggest bottleneck, then I must say congratulations on everything else!


In your case, a solution for you might be to use mod_magnet and a lua interface to maxminddb instead of using mod_extforward and mod_maxminddb.
Doing so may allow you to use aggressive load balancer connection pooling, while also allowing your simple lua script (perhaps with a simple cache) to check maxminddb for requests where the IP has changed and the backend serving the request needs the geoip information. You might elide the extra work for maxminddb lookup if serving static files. (If you want the geoip information for access logging, then I recommend post-processing the access log offline.)

Some alternatives for lua interfaces to maxminddb:
https://github.com/fabled/lua-maxminddb, or from LuaRocks https://luarocks.org/modules/wesen1/lua-maxminddb
https://github.com/anjia0532/lua-resty-maxminddb, also available from LuaRocks or OPM https://opm.openresty.org/package/anjia0532/lua-resty-maxminddb/
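
As a rough sketch only (assuming the fabled/lua-maxminddb binding with mmdb.open(), db:lookup() and result:get(); check the README of whichever binding you install; the database path, env variable name and X-Forwarded-For handling below are simplified placeholders), such a mod_magnet script might look like:

-- geoip.lua (sketch): per-request GeoIP lookup keyed on X-Forwarded-For
-- globals persist across requests in mod_magnet's per-worker lua state,
-- so open the database and create the cache only once
if not geoip_db then
  geoip_db = require("maxminddb").open("/usr/share/GeoIP/GeoLite2-Country.mmdb")
  geoip_cache = {}   -- tiny unbounded cache: ip -> country code
end

local r = lighty.r
-- simplified: take the first X-Forwarded-For entry; a real setup should only
-- trust entries appended by the load balancer (which is what mod_extforward does)
local xff = r.req_header["X-Forwarded-For"]
local ip = (xff and xff:match("^%s*([^,%s]+)")) or r.req_attr["request.remote-addr"]

local cc = geoip_cache[ip]
if cc == nil then
  local ok, v = pcall(function () return geoip_db:lookup(ip):get("country", "iso_code") end)
  cc = (ok and v) or ""
  geoip_cache[ip] = cc
end
r.req_env["GEOIP_COUNTRY_CODE"] = cc   -- visible to the backend handling the request

The per-IP cache avoids repeating the maxminddb lookup when the same client sends many requests over the pooled connection.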

Actions #4

Updated by gstrauss almost 2 years ago

As far as I understand the docs, mod_access will utilize the real client IP extracted from X-Forwarded-For. I'm not sure about a config conditional using $HTTP["remoteip"]. But both will grant access to a resource if a connection is reused for a new client when the first one had the correct IP. I think this way of connection pooling is fairly common in proxies and load balancers, so this could also lead to some security issues.

I plan to update the mod_extforward doc to highlight this.

lighttpd is flexible and can be configured. How you configure your system is up to you.

mod_extforward changes the perceived IP of the connection, and subsequent per-request modules use that perceived IP, including mod_access and $HTTP["remoteip"]. That behavior is consistent.

If someone violates the design constraint with the load balancer using the same connection for different clients and they fail to test their configuration, then that operator might introduce a security issue on their site. Since you did test, you discovered that the behavior did not match your expectation.

Actions #5

Updated by fstelzer almost 2 years ago

gstrauss wrote in #note-3:

Many load balancers (we are using F5 hardware, Envoy proxy and Varnish) try to optimize backend connections by multiplexing many requests onto a pool of TCP connections.

Your post appears to make numerous broad, sweeping assumptions, some of which are true only in specific scenarios.

Load balancers are often configurable. As I wrote in my original response above:

It sounds like your load balancer is reusing the same connection for different clients and that the difference matters to you; if the difference did not matter to you, then you would not have filed this issue. Since the difference does matter to you, please reconfigure your load balancer not to reuse the same connection for different clients.

True, and most solutions allow turning this off. I just wanted to point out that popular ones (like Varnish & Envoy) show this behaviour in their default config.

I agree that this could be a performance regression if redone for every request. But the re-evaluation would only need to happen when specific headers change and would have no effect otherwise. I'm not sure what other modules could be affected in a similar way, though.

Seems like you are suggesting it is just a SMOP (a simple matter of programming)

No, changing this in lighttpd would not be trivial and would require more resources. (You might be fine with that, but many other people do not need it.) lighttpd would have to be modified to allocate additional memory for every request and to copy the IP for every request. lighttpd does not currently do this, as the IP is per connection, and I do not have any plans to change this in lighttpd.

Sorry if my comment came across that way. I'm not suggesting this would be only a small change, and I have no idea how involved it would be; you are much better suited to judge this. All I meant was that the performance regression would possibly be limited to the described case. If that is not so, as you describe, then it's not a desirable one.

I plan to update the mod_extforward doc to highlight this.
lighttpd is flexible and can be configured. How you configure your system is up to you.

Very much appreciated.

Actions #6

Updated by gstrauss almost 2 years ago

Since I evaluated the code, here are my quick notes on what would need to be done to support changing X-Forwarded-For on an HTTP/2 connection:
  • con->dst_addr and con->dst_addr_buf pointers would need to be copied into new members r->dst_addr and r->dst_addr_buf for every request.
  • If the IP changes, those r members would need to be reallocated and regenerated (IP address is re-stringified to enforce normalization of IP string).
  • All use of con->dst_addr and con->dst_addr_buf would need to change to use r->dst_addr and r->dst_addr_buf.
  • At the end of every request, those r members would need to be checked if they match con, or else would need to be free'd.

If you are reusing HTTP/1.1 connections to lighttpd and using mod_extforward and mod_maxminddb, that might work the way you desire, as a single request at a time is in progress on that connection for HTTP/1.1. [Edit: not ok for mod_maxminddb; the database result is cached per connection through lighttpd 1.4.69]

In any case, if your load balancer is in the same data center as the lighttpd servers, then latency between load balancer and lighttpd servers is likely small compared to latency to client. That is why I suggested looking for other bottlenecks, too.

Actions #7

Updated by gstrauss almost 2 years ago

As you pointed out in your original post, and as I explained, mod_maxminddb looks up the IP once per connection.

If I can do so cleanly, I may adjust that behavior to work better with mod_extforward and HTTP/1.1. I plan to update the mod_maxminddb doc, too.

Actions #8

Updated by gstrauss almost 2 years ago

  • Tracker changed from Bug to Feature
  • Subject changed from Evaluation of remote_addr for mod_extforward and mod_maxminddb for multiplexed connections to Evaluation of remote_addr for mod_maxminddb for multiplexed connections
  • Category set to mod_extforward
  • Status changed from Invalid to Patch Pending
  • Target version set to 1.4.70

I have modified mod_maxminddb (for a future version: lighttpd 1.4.70) to check if the IP changes between requests.

Actions #9

Updated by gstrauss almost 2 years ago

lighttpd commit: 21987c86 (Oct 2020) documents the design choice for mod_extforward, made during development of HTTP/2 in lighttpd.

[mod_extforward] preserve changed addr for h2 con

Preserve changed addr for lifetime of h2 connection; upstream proxy
should not reuse same h2 connection for requests from different clients

I have revisited the code, and can limit most of the remote addr management overhead to those using mod_extforward, instead of affecting everybody for every request. The cost to everybody is 2 additional pointers in (request_st *), which is 16-bytes per request for 64-bit builds. Not free, but acceptable.

If you are able to build lighttpd using lighttpd source code and build instructions, I would appreciate some help testing and benchmarking the commits on my development branch: https://git.lighttpd.net/lighttpd/lighttpd1.4/src/branch/personal/gstrauss/master

Using this dev branch, are you able to use mod_extforward and mod_maxminddb along with your desired load balancer configuration to use HTTP/2 and reuse connections for multiple clients? Does doing so improve the overall performance of your stack?

Actions #10

Updated by fstelzer almost 2 years ago

gstrauss wrote in #note-9:

lighttpd commit: 21987c86 documents the design choice for mod_extforward, made during development of HTTP/2 in lighttpd.
[...]

I have revisited the code, and can limit most of the remote addr management overhead to those using mod_extforward, instead of affecting everybody for every request.

The cost to everybody is 2 additional pointers in (request_st *), which is 16-bytes per request for 64-bit builds. Not free, but acceptable.

If you are able to build lighttpd using lighttpd source code and build instructions, I would appreciate some help testing and benchmarking the commits on my development branch: https://git.lighttpd.net/lighttpd/lighttpd1.4/src/branch/personal/gstrauss/master

Using this dev branch, are you able to use mod_extforward and mod_maxminddb along with your desired load balancer configuration to use HTTP/2 and reuse connections for multiple clients? Does doing so improve the overall performance of your stack?

Thank you very much for looking into this. I've done some tests and benchmarks this morning using your latest dev build vs. a clean 1.4.69.
mod_extforward for pooled connections using HTTP/1.1 is looking correct now. I always get the correct IP in REMOTE_ADDR.
When using HTTP/2 the behaviour looks just like before. Depending on which connection I hit, I get the wrong IP.

mod_maxminddb still has the original behaviour showing a country code not matching the remote_addr both for http/1 & 2.

I can force both of these to always output the correct IP when setting keep-alive-idle = 0, keep-alive-requests = 1 for both HTTP/1.1 & HTTP/2.
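
For reference, the lighttpd directives behind those settings:

# disable connection reuse on the lighttpd side
server.max-keep-alive-idle = 0
server.max-keep-alive-requests = 1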

I also did some benchmarking using wrk2 for both versions. Please take these with a grain of salt; I don't have a perfect environment and had some outliers. The benchmarking tool is located on physical hardware ~1.5ms away from the load balancer. The load balancer & the webserver are both VMs, so there can always be some noisy neighbors. I ran the benchmark 3 times per variant with a lighty restart in between.

2 threads and 1000 connections, 60s duration, 1000 rq/s fixed rate,
run 3 consecutive times, lighttpd restart in between.
The difference for HTTP/1.1 is quite visible. Interestingly, for HTTP/2 not so much, even showing some better numbers in the original version without keep-alive? But these are so close that it might be just noise in the stats.
Do the keep-alive settings work differently for http/2 or are those applied at all?
I could not find any config settings regarding max HTTP/2 streams or similar, but since my remote_addr problem seems to go away I assume keep-alive is applied to HTTP/2 as well.
I know it's not quite fair to compare no keep-alive at all to the fully pooled connections, but my test load balancer (Envoy) does not offer the option to do pooling per client. This is basically testing the worst case now, where every client would only do a single request.
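
For reference, the invocation was along these lines (the URL here is a placeholder):

# wrk2 (the "wrk" binary built from giltene/wrk2); -R sets the fixed request rate
wrk --latency -t2 -c1000 -d60s -R1000 https://backend.example/geoip-test.php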

orig version http/1.1,keep-alive-idle = 0,keep-alive-requests = 1
Thread Stats Avg Stdev Max +/- Stdev
Latency 14.07ms 19.56ms 255.49ms 86.50%
Latency 12.81ms 17.92ms 237.44ms 86.10%
Latency 10.23ms 12.30ms 212.61ms 84.45%

orig version http/1.1,keep-alive-idle = 60,keep-alive-requests = 10000
Thread Stats Avg Stdev Max +/- Stdev
Latency 7.83ms 9.59ms 91.52ms 86.71%
Latency 8.72ms 11.28ms 104.96ms 86.89%
Latency 8.98ms 11.74ms 138.75ms 86.21%

orig version http/2,keep-alive-idle = 0,keep-alive-requests = 1
Thread Stats Avg Stdev Max +/- Stdev
Latency 5.70ms 5.17ms 54.69ms 87.03%
Latency 5.53ms 4.73ms 35.07ms 86.60%
Latency 5.37ms 4.49ms 50.40ms 86.75%

orig version http/2,keep-alive-idle = 60,keep-alive-requests = 10000
Thread Stats Avg Stdev Max +/- Stdev
Latency 8.78ms 11.98ms 123.78ms 88.47%
Latency 6.32ms 6.56ms 72.51ms 86.68%
Latency 5.88ms 5.38ms 66.11ms 86.25%

dev version http/1.1,keep-alive-idle = 0,keep-alive-requests = 1
Thread Stats Avg Stdev Max +/- Stdev
Latency 10.42ms 12.86ms 81.47ms 82.77%
Latency 11.36ms 16.19ms 224.38ms 87.51%
Latency 12.95ms 19.05ms 227.46ms 87.46%

dev version http/1.1,keep-alive-idle = 60,keep-alive-requests = 10000
Thread Stats Avg Stdev Max +/- Stdev
Latency 8.59ms 10.65ms 93.44ms 85.60%
Latency 7.37ms 8.06ms 63.55ms 85.73%
Latency 7.61ms 8.77ms 89.41ms 85.85%

dev version http/2,keep-alive-idle = 0,keep-alive-requests = 1
Thread Stats Avg Stdev Max +/- Stdev
Latency 7.40ms 7.86ms 57.66ms 85.67%
Latency 7.24ms 7.60ms 61.31ms 85.42%
Latency 8.17ms 10.04ms 120.58ms 87.21%

dev version http/2,keep-alive-idle = 60,keep-alive-requests = 10000
Thread Stats Avg Stdev Max +/- Stdev
Latency 5.78ms 5.94ms 79.23ms 87.82%
Latency 5.81ms 5.32ms 54.62ms 86.37%
Latency 6.55ms 6.81ms 78.27ms 86.29%

If I can help in any way to debug this further (adding some debug logs to the modules, or doing a request with gdb attached to lighty to show some internal state), let me know.
I've looked at your patch, but there are too many lighttpd internals I don't know anything about yet. I'll look into it more tomorrow morning.

Actions #11

Updated by gstrauss almost 2 years ago

Thanks for testing. I admit that I have not carefully tested various scenarios.

mod_extforward for pooled connections using HTTP/1.1 is looking correct now. I always get the correct IP in REMOTE_ADDR.
When using HTTP/2 the behaviour looks just like before. Depending on which connection I hit, I get the wrong IP.

I might have to make some changes in mod_proxy.c, if that is involved. Are you using mod_proxy? mod_fastcgi? other?

The new code in mod_extforward does not differentiate between HTTP/1.x and HTTP/2, so it is curious to me that REMOTE_ADDR is working for you with HTTP/1.x, but not with HTTP/2. There may be modifications I need to make elsewhere.

mod_maxminddb still has the original behaviour showing a country code not matching the remote_addr both for http/1 & 2.

Ok. Will look into it. Nothing immediately jumps out at me when looking at the code.

For benchmarking, how large are the pages returned from your backend? lighttpd on a single CPU should be able to push over 10k-100k req/s in your local network were you not limiting wrk to 1k req/s. (BTW, you're also "benchmarking" wrk, or I should say the benchmark is limited by wrk. I think that h2load can push more req/s over HTTP/2.) Therefore, unless your backend takes a few ms itself to generate the response, I would expect the latency to be slightly more than RTT when keep-alive is used (and so TCP connection has already been established, so no round-trips for that).
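
If you want to try h2load, an invocation along these lines (host is a placeholder) drives HTTP/2 with multiple concurrent streams per connection:

# 100k requests over 100 client connections, up to 10 concurrent streams each
h2load -n100000 -c100 -m10 https://backend.example/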

For kicks, did you try wrk directly at a single lighttpd server, bypassing the load balancer?

Do the keep-alive settings work differently for http/2 or are those applied at all?

The keep-alive setting works in a similar way to HTTP/1.1 and limits the number of HTTP/2 streams before lighttpd sends a graceful HTTP/2 GOAWAY with H2_E_NO_ERROR.

Actions #12

Updated by gstrauss almost 2 years ago

mod_maxminddb still has the original behaviour showing a country code not matching the remote_addr both for http/1 & 2.

If you get a chance before I do, please test with this additional patch. It disables the one-element cache in mod_maxminddb for each connection (which I preserved for the benefit of a single client sending multiple requests on the connection). The lighttpd overhead for handling your requests should be on the order of 10-20us, including the maxminddb lookup, assuming your VMs are reasonably powered.

--- a/src/mod_maxminddb.c
+++ b/src/mod_maxminddb.c
@@ -458,6 +458,7 @@ REQUEST_FUNC(mod_maxminddb_request_env_handler)

     handler_ctx ** const hctx = (handler_ctx **)&r->con->plugin_ctx[p->id];

+  #if 0
     if (*hctx && sock_addr_is_addr_eq((sock_addr *)&(*hctx)->addr, dst_addr)) {
         const array * const env = (*hctx)->env;
         for (uint32_t i = 0; i < env->used; ++i) {
@@ -469,6 +470,7 @@ REQUEST_FUNC(mod_maxminddb_request_env_handler)
         }
         return HANDLER_GO_ON;
     }
+  #endif

     array *env = NULL;
     if (*hctx && r->http_version <= HTTP_VERSION_1_1) {

BTW, please verify that you have set up your load balancer for your tests to only hit instances of lighttpd running the same dev code from my development branch.

Actions #13

Updated by fstelzer almost 2 years ago

gstrauss wrote in #note-11:

Thanks for testing. I admit that I have not carefully tested various scenarios.

mod_extforward for pooled connections using HTTP/1.1 is looking correct now. I always get the correct IP in REMOTE_ADDR.
When using HTTP/2 the behaviour looks just like before. Depending on which connection I hit, I get the wrong IP.

I might have to make some changes in mod_proxy.c, if that is involved. Are you using mod_proxy? mod_fastcgi? other?

No mod_proxy, but mod_fastcgi with a php-fpm backend. Nothing special besides this in the config, I think.

The new code in mod_extforward does not differentiate between HTTP/1.x and HTTP/2, so it is curious to me that REMOTE_ADDR is working for you with HTTP/1.x, but not with HTTP/2. There may be modifications I need to make elsewhere.

mod_maxminddb still has the original behaviour showing a country code not matching the remote_addr both for http/1 & 2.

Ok. Will look into it. Nothing immediately jumps out at me when looking at the code.

For benchmarking, how large are the pages returned from your backend? lighttpd on a single CPU should be able to push over 10k-100k req/s in your local network were you not limiting wrk to 1k req/s. (BTW, you're also "benchmarking" wrk, or I should say the benchmark is limited by wrk. I think that h2load can push more req/s over HTTP/2.) Therefore, unless your backend takes a few ms itself to generate the response, I would expect the latency to be slightly more than RTT when keep-alive is used (and so TCP connection has already been established, so no round-trips for that).

The 1k rq/s limit is a specific rate fixed by wrk2. For latency measurements I explicitly do not want to saturate the server, load balancer or the benchmarking tool. Therefore wrk2 sets a fixed request rate and then measures latency over all percentiles (I only posted the avg/max values). (https://github.com/giltene/wrk2 if you're interested.)
For my test case I used a very simple PHP var_dump() of the remote_addr & geoip_country_code values, so very small content size.
I think the 6-8ms is reasonable for an established connection, considering the PHP backend, the network link and the load balancer in between.

For kicks, did you try wrk directly at a single lighttpd server, bypassing the load balancer?

Unfortunately wrk is only able to do HTTP/1.1. And for direct testing without pooling I need to adjust the benchmark parameters to use fewer connections; otherwise any extra latency for TCP connects would be hidden when doing 1000 rq/s over 1000 connections. Now that I think about it, I should probably do the same for testing via the load balancer, even though the pooling it does should offset this effect. As long as I still get the 1k rq/s through, the number of connections between load balancer & client should not matter; those are only opened once.

But from within the webserver's local network (some benchmarking hardware against the VM webserver) I get:
with keep-alive:
Thread Stats Avg Stdev Max +/- Stdev
Latency 2.23ms 1.51ms 26.35ms 95.15%
Latency 1.76ms 752.22us 24.53ms 89.32%
Latency 2.00ms 0.92ms 19.30ms 90.01%

and with keep-alive turned off:
Thread Stats Avg Stdev Max +/- Stdev
Latency 2.25ms 1.70ms 28.93ms 95.81%
Latency 3.50ms 8.14ms 132.10ms 96.97%
Latency 2.33ms 1.69ms 26.69ms 94.27%

Not sure if this really is enough of a difference to consider, since the test scenario is far from perfect. Though any difference in overhead for a new connection would compound for systems in between the load balancer & webserver and the RTT in between.
In the ideal scenario the load balancer would keep a single TCP/HTTP/2 connection open basically forever, multiplexing all client requests over multiple HTTP/2 streams.

I'll try to find a better benchmarking endpoint right in front of the load balancer to get rid of any latency in connecting to it. The load balancer is a VM, but with an SR-IOV network card directly attached to it (so no virtual switches or other stuff in between).

To get reliable latency measurements I would probably need to set this all up on hardware directly connected to each other.

Obviously there are many more (and usually more significant) performance improvements to be had in the PHP code written by our devs ;)
The thing I'm trying to optimize on my end is latency and efficiency for the load balancers. We are seeing many (often small) requests from many different clients, so a lot of connection churn for the load balancers and conntrack/netfilter, which I'm trying to avoid or reduce, especially when I get huge spikes in new requests. You can only do so much with rate limiting when DDoS traffic is hard to differentiate from that new advertisement served on a big site ^^

Actions #14

Updated by fstelzer almost 2 years ago

gstrauss wrote in #note-12:

mod_maxminddb still has the original behaviour showing a country code not matching the remote_addr both for http/1 & 2.

If you get a chance before I do, please test with this additional patch. It disables the one-element cache in mod_maxminddb for each connection (which I preserved for the benefit of a single client sending multiple requests on the connection). The lighttpd overhead for handling your requests should be on the order of ~10us, including the maxminddb lookup, assuming your VMs are reasonably powered.

[...]

BTW, please verify that you have set up your load balancer for your tests to only hit instances of lighttpd running the same dev code from my development branch.

Yes. When trying to apply your patch I noticed that I had made a mistake with the build/deploy of the dev version :/ (I verified that I was hitting a dev build, but did not compare the commit hash for it.)
I have now really tried your dev branch (still without the additional maxminddb patch) and the remote_addr & maxminddb stuff seems to work. Sorry for the mistake. My bad.

I will do some more testing & a new benchmark to see if I can spot any difference from before. But at first glance this looks good!

Actions #15

Updated by gstrauss almost 2 years ago

I found in my testing that I needed to make an adjustment to mod_maxminddb for that one-element cache, specifically for HTTP/1.x.

I force-pushed a change to my dev branch. I believe that both mod_extforward and mod_maxminddb should now work with HTTP/1.x and HTTP/2, with load balancers which reuse the same connection for multiple different clients.

Actions #16

Updated by gstrauss almost 2 years ago

If you want to repeat my tests, I chose two random IPs and interleaved requests alternating the IPs using X-Forwarded-For on the same connection.
curl commands using HTTP/1.1 and HTTP/2 are at the bottom.

lighttpd -D -f /dev/shm/test.conf

/dev/shm/test.conf

server.port = 8080
server.document-root = "/dev/shm" 
server.max-keep-alive-requests = 65535
server.modules += ("mod_extforward", "mod_maxminddb", "mod_magnet")

extforward.forwarder = ( "127.0.0.1" => "trust")

maxminddb.activate = "enable" 
maxminddb.db = "/usr/share/GeoIP/GeoLite2-City.mmdb" 
maxminddb.env = ("GEOIP_CITY_NAME" => "city/names/en")

magnet.attract-raw-url-to = ("/dev/shm/geo.lua")

/dev/shm/geo.lua

local r = lighty.r
local addr = r.req_attr["request.remote-addr"]
local city = r.req_env["GEOIP_CITY_NAME"] or ''

r.resp_body.set({ addr, ' ', city, '\n' })

if addr == "71.71.71.71" and city == "Kernersville" then
  return 200
end

if addr == "72.72.72.72" and city == "Randolph" then
  return 200
end

return 500

In a different shell from the one running lighttpd in the foreground (lighttpd -D)

curl --http1.1 -H "X-Forwarded-For: 71.71.71.71" http://localhost:8080/ --next -H "X-Forwarded-For: 72.72.72.72" http://localhost:8080/ --next -H "X-Forwarded-For: 71.71.71.71" http://localhost:8080/ --next -H "X-Forwarded-For: 72.72.72.72" http://localhost:8080/ --next -H "X-Forwarded-For: 71.71.71.71" http://localhost:8080/ --next -H "X-Forwarded-For: 72.72.72.72" http://localhost:8080/ --next -H "X-Forwarded-For: 71.71.71.71" http://localhost:8080/ --next -H "X-Forwarded-For: 72.72.72.72" http://localhost:8080/ --next -H "X-Forwarded-For: 71.71.71.71" http://localhost:8080/ --next -H "X-Forwarded-For: 72.72.72.72" http://localhost:8080/

curl --http2 -H "X-Forwarded-For: 71.71.71.71" http://localhost:8080/ --next -H "X-Forwarded-For: 72.72.72.72" http://localhost:8080/ --next -H "X-Forwarded-For: 71.71.71.71" http://localhost:8080/ --next -H "X-Forwarded-For: 72.72.72.72" http://localhost:8080/ --next -H "X-Forwarded-For: 71.71.71.71" http://localhost:8080/ --next -H "X-Forwarded-For: 72.72.72.72" http://localhost:8080/ --next -H "X-Forwarded-For: 71.71.71.71" http://localhost:8080/ --next -H "X-Forwarded-For: 72.72.72.72" http://localhost:8080/ --next -H "X-Forwarded-For: 71.71.71.71" http://localhost:8080/ --next -H "X-Forwarded-For: 72.72.72.72" http://localhost:8080/

On localhost on my laptop, the requests on the same connection were served in under 100us average latency when using wrk with 1 thread and 5 connections.
wrk --latency -t 1 -c 5 -H "X-Forwarded-For: 71.71.71.71" http://localhost:8080/
wrk --latency -t 1 -c 5 -H "X-Forwarded-For: 72.72.72.72" http://localhost:8080/

That 100us average latency is without the network and without FastCGI and the backend PHP -- and with hitting the one-element cache in mod_maxminddb when the IP does not change -- but you should be able to extend this to check the latency of the load balancers if you were to, for example, run envoy on localhost and test load-balancing requests through envoy to multiple instances of lighttpd also running on localhost (and on different ports).

Actions #17

Updated by fstelzer almost 2 years ago

Yes, looks good. Thanks a lot.
Is the "thank you" PayPal link still correct (https://www.lighttpd.net/thank-you/)?
It's from 2007, so I just wanted to make sure it reaches the correct destination.

Actions #18

Updated by gstrauss almost 2 years ago

The link is correct and any donations go towards hosting costs. (I changed the link you posted to use https://www.lighttpd.net/thank-you/)

Thank you for your feedback and for your participation in testing.

Actions #19

Updated by gstrauss almost 2 years ago

  • Related to Feature #3192: RFE: mod_extforward and multiplexed requests via HTTP/2 added
Actions #20

Updated by gstrauss almost 2 years ago

  • Status changed from Patch Pending to Fixed