Bug #3146

closed

Missing last chunk with HTTP/2 in rare cases

Added by flynn about 2 years ago. Updated almost 2 years ago.

Status:
Fixed
Priority:
Normal
Category:
TLS
Target version:
-
ASK QUESTIONS IN Forums:
No

Description

This is a problem that is not reliably reproducible and not easy to trigger:

git checkouts over HTTPS fail with the following message:

error: RPC failed; curl 56 LibreSSL SSL_read: Connection reset by peer, errno 54
error: 1458 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output

Setup/conditions:
- the checkout (= response) must be big (>100MB); small checkouts never trigger this problem
- it's hard to reproduce on a Linux client, easier on OSX
- on Linux I can only trigger it inside a container, not on a host (timing issue?)
- it always happens at the end of the response, with varying remaining sizes < 8kB
- it happens only with HTTP/2 requests; forcing HTTP/1.1 on the client seems to solve the issue (see the sketch after this list)
- lighttpd version 1.4.64 is configured with the backend over mod_proxy (unix domain socket, not TCP), server.stream-response-body = 1, server.stream-request-body = 0
- mod_openssl for https/TLS
- no corresponding messages in error.log
- the bytes sent for the body (%b) in the access log are always 2-4kB smaller for HTTP/2 requests than for HTTP/1.1
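
One way to force HTTP/1.1 on the client side, as a rough illustration (assuming a reasonably recent git; the repository URL is a placeholder):

git -c http.version=HTTP/1.1 clone https://git.example.com/group/repo.git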

Maybe related to #3111

Actions #1

Updated by gstrauss about 2 years ago

  • Category changed from mod_proxy_backend_http to TLS

I'll try to repro. Might take some time.

Actions #2

Updated by gstrauss about 2 years ago

Trying to narrow this down. Is the problem reproducible (over time) if proxy.header += ( "force-http10" => "enable" )? Is the backend sending an HTTP/1.1 response with Transfer-Encoding: chunked?
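
For reference, a minimal sketch of how that could look in lighttpd.conf (the unix-socket path is a placeholder, not taken from this report):

proxy.server = ( "" => (( "host" => "/run/gitlab-workhorse.sock" )) )
proxy.header = ( "force-http10" => "enable" )   # have mod_proxy speak HTTP/1.0 to the backend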

From the above, it sounds like the client is git. Can you describe the backend?

Actions #3

Updated by flynn about 2 years ago

The client is git (recent versions, e.g. 2.35.1); the backend is GitLab (current stable version), connected over a unix domain socket.

Backend sends HTTP/1.1 200 OK with Transfer-Encoding: chunked and Connection: close.

The test with force-http10 has started, but needs some time.

Actions #4

Updated by flynn about 2 years ago

Test with force-http10 failed on the first attempt.

I have also enabled "upgrade" => "enable" in proxy.header.

Actions #5

Updated by gstrauss about 2 years ago

Hmm. I guess, as a separate issue, "force-http10" => "enable" should override and disable "upgrade".
(Edit: lighttpd mod_proxy already does not forward Upgrade to the backend if "force-http10" => "enable")

I think that gitlab might require HTTP/1.1.

I'm working on some low-impact instrumentation in lighttpd and hope to share it tomorrow.

Actions #6

Updated by gstrauss about 2 years ago

Test with force-http10 failed on the first attempt.

Failed how? With the issue in the original post (200 OK with truncated body) or did it fail with a 4xx or 5xx response?

Actions #7

Updated by flynn about 2 years ago

Same failure as in the original post, no change except the size of the expected bytes.

Actions #8

Updated by gstrauss about 2 years ago

Since the issue can happen for you both when the backend sends Transfer-Encoding: chunked and when it sends Content-Length, the problem is probably not related to lighttpd reading from the backend and de-chunking Transfer-Encoding: chunked.

I have been tracing other code paths, but so far have not tracked the issue down.

Actions #9

Updated by flynn about 2 years ago

To my knowledge, this issue only happens on large git checkouts with Transfer-Encoding: chunked, never with Content-Length.

To rule out a client issue, I'll try to set up haproxy as an alternative reverse proxy in front of gitlab.

Actions #10

Updated by gstrauss about 2 years ago

Test with force-http10 failed on the first attempt.

Failed how? With the issue in the original post (200 OK with truncated body) or did it fail with a 4xx or 5xx response?

Same failure as in the original post, no change except the size of the expected bytes.

I must have misunderstood your response. Is gitlab sending Transfer-Encoding: chunked in response to an HTTP/1.0 request (with "force-http10" => "enable")?

Actions #11

Updated by flynn about 2 years ago

I did not debug the HTTP/1.0 request with strace on the UNIX domain socket as I did for the HTTP/1.1 request, so I'm not sure.
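
For context, the kind of capture used for the HTTP/1.1 case could look roughly like this (hypothetical command; the exact flags and output handling may have differed):

strace -f -p $(pidof lighttpd) -e trace=read,write,sendto,recvfrom -s 256 -o /tmp/lighttpd.strace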

The term expected bytes referred to the second line of the git error message, e.g.

error: 1458 bytes of body are still expected

not a Content-Length header in the server response.

Actions #12

Updated by gstrauss about 2 years ago

I have been trying (unsuccessfully) to reproduce this using curl with TLS (https) and http2 to connect to lighttpd, which proxies back to a second lighttpd, which runs a CGI to send a response in blocks totalling a bit more than 100MB. Both lighttpd instances have server.stream-response-body = 1.
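
As a rough illustration of that setup (not the exact test harness), a CGI along these lines could stream a little over 100MB in blocks; the block size and count are arbitrary:

#!/bin/sh
# emit minimal CGI response headers, then stream ~105MB in 64kB blocks
printf 'Status: 200\r\nContent-Type: application/octet-stream\r\n\r\n'
dd if=/dev/zero bs=65536 count=1680 2>/dev/null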

- Inside lighttpd, r->resp_body_finished = 1 should not be set until the final chunk of a Transfer-Encoding: chunked response is received ("0\r\n\r\n", or with trailers).
- If r->resp_body_finished = 1 is not set, then lighttpd should not consider the response complete.
- If lighttpd receives POLLRDHUP from the backend lighttpd instance, lighttpd will read the socket buffers to EOF and will process all that data before ending the request.

Something must not be working as intended...

Actions #13

Updated by gstrauss about 2 years ago

Low impact patch for instrumentation. If this patch issues trace when the error occurs for you, then we might be on the right track to narrow down the issue. I have not managed to trigger this trace (except for the false positive when "force-http10" => "enable" to the backend, and EOF from backend is how end of response from backend is detected). If this patch does not issue trace when the error occurs for you, then the error must be elsewhere. (The error probably is elsewhere, but let's rule this area out).

--- a/src/http-header-glue.c
+++ b/src/http-header-glue.c
@@ -678,6 +678,13 @@ void http_response_backend_done (request_st * const r) {
                                                           CONST_STR_LEN("Expires"));
                        }
                  #endif
+                       else if (r->http_version == HTTP_VERSION_2) {
+                               log_error(r->conf.errh, __FILE__, __LINE__,
+                                         "h2 !resp_body_finished scratchpad:%lld dechunk_done:%d gw_chunked:%lld",
+                                         (long long)r->resp_body_scratchpad,
+                                         r->gw_dechunk ? r->gw_dechunk->done : -99,
+                                         r->gw_dechunk ? (long long)r->gw_dechunk->gw_chunked : -88);
+                       }
                        r->resp_body_finished = 1;
                }
        default:

Actions #14

Updated by flynn about 2 years ago

I added the patch on the affected server, but I cannot trigger the problem and have to wait until Monday.

But I already get messages like this on other requests:

(http-header-glue.c.682) h2 !resp_body_finished scratchpad:-1 dechunk_done:-99 gw_chunked:-88
Actions #15

Updated by gstrauss about 2 years ago

(http-header-glue.c.682) h2 !resp_body_finished scratchpad:-1 dechunk_done:-99 gw_chunked:-88

If too much noise results from the patch, I can extend the patch to omit trace for backends responding with neither Transfer-Encoding: chunked nor Content-Length. The trace should probably also be restricted to (r->http_status == 200).

Actions #16

Updated by gstrauss about 2 years ago

--- a/src/http-header-glue.c
+++ b/src/http-header-glue.c
@@ -678,6 +678,13 @@ void http_response_backend_done (request_st * const r) {
                                                           CONST_STR_LEN("Expires"));
                        }
                  #endif
+                       else if (r->http_version == HTTP_VERSION_2 && (r->resp_body_scratchpad != -1 || r->gw_dechunk) && r->http_status < 300) {
+                               log_error(r->conf.errh, __FILE__, __LINE__,
+                                         "h2 !resp_body_finished scratchpad:%lld dechunk_done:%d gw_chunked:%lld",
+                                         (long long)r->resp_body_scratchpad,
+                                         r->gw_dechunk ? r->gw_dechunk->done : -99,
+                                         r->gw_dechunk ? (long long)r->gw_dechunk->gw_chunked : -88);
+                       }
                        r->resp_body_finished = 1;
                }
        default:
Actions #18

Updated by gstrauss about 2 years ago

As an aside, and possibly a different issue, I had trouble with git clone of my test git repository (containing a 105MB object from a large commit of random data) using a cgit backend. The workaround was to disable the cgit cache in /etc/cgitrc with cache-size=0

Actions #19

Updated by flynn about 2 years ago

The problem also occurs with current git versions, e.g. 2.35.1.

Before writing this ticket, I had already found the complaints on Stack Overflow and tried some of the suggested solutions.

I wrote this ticket because I noticed the problem after updating lighttpd to version 1.4.64, but that is just a coincidence, not an explanation.

I'll try more tests on Monday and Tuesday.

Actions #20

Updated by gstrauss about 2 years ago

I wrote this ticket because I noticed the problem after updating lighttpd to version 1.4.64, but that is just a coincidence, not an explanation.

Upgrade from which version? I took a quick look at changes between lighttpd 1.4.63 and lighttpd 1.4.64 and did not immediately see any related changes.
I wonder if the large git object with which you are having trouble is relatively new in your git repository.

When you have a chance next week: As I wrote above, the issue might be related to configuration settings in how git handles objects received over HTTP. If you have the exact request sent by a git client in your lighttpd access log, what happens if you make the same request using curl -o /tmp/curl.out directly as the client?
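
As a hypothetical example of replaying such a request with curl (the URL and the saved request body are placeholders; the real values would come from the access log and GIT_CURL_VERBOSE output):

curl -v --http2 -o /tmp/curl.out \
  -H 'Content-Type: application/x-git-upload-pack-request' \
  --data-binary @upload-pack-request.bin \
  https://git.example.com/group/repo.git/git-upload-pack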

I haven't had a chance to look further into cgit, but was surprised how the cgit cache apparently mishandled the large git object.

Actions #21

Updated by flynn about 2 years ago

I found this curl issue: https://github.com/curl/curl/issues/6526

This fits my observations perfectly:
  • Linux has no issue, because Debian builds curl with GnuTLS (not openssl)
  • OSX has the issue, because curl is built with openssl

I'll try to verify this by building git with curl using openssl on Linux/Debian.

Actions #22

Updated by gstrauss about 2 years ago

FYI: lighttpd attempts to send a TLS "close notify" alert to cleanly end TLS connections; lighttpd does not just close() TLS connections (unless there is a network error).

If you have sufficient space in server.upload-dirs, you might test server.stream-response-body = 0 for the responses from the backend, as that setting will cause lighttpd to buffer the response from the backend and to send Content-Length with the response.
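
A minimal sketch of that configuration (the temporary-file directory is a placeholder):

server.upload-dirs          = ( "/var/cache/lighttpd/uploads" )
server.stream-response-body = 0   # buffer the backend response; lighttpd can then send Content-Length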

Actions #23

Updated by gstrauss about 2 years ago

I found this curl issue: https://github.com/curl/curl/issues/6526

To clarify the above, I do not think that issue applies to the case reported here because a) lighttpd sends a TLS close notify alert, and b) over HTTP/2, curl will know the end of data when it receives the h2 end-stream flag.
(The issue reported here is not an HTTP/1.x response missing Content-Length and Transfer-Encoding: chunked.)

Actions #24

Updated by gstrauss about 2 years ago

  • Status changed from New to Need Feedback

Given multiple different approaches, I have not been able to reproduce this issue.

I have traced through the lighttpd code and confirmed that lighttpd does not set r->resp_body_finished until lighttpd finishes receiving the response from the backend. When r->resp_body_finished is set, lighttpd makes sure to send everything in r->write_queue before considering the request complete. lighttpd does not consider a request complete until r->resp_body_finished is set and r->write_queue is empty. This seems pretty air-tight.

With #3111, the issue was that r->write_queue was not empty, but lighttpd was also not rescheduling the connection quickly enough for further writes. The response from lighttpd was still complete, but the final bytes were delayed.

In the issue you are reporting here, the client is reporting that it does not receive a complete response before the connection is closed due to an error.
error: RPC failed; curl 56 LibreSSL SSL_read: Connection reset by peer, errno 54
error: 1458 bytes of body are still expected

With HTTP/2, the DATA frames "frame" the response body, so the HTTP/2 protocol knows whether or not it has received the complete response body when it receives the DATA frame with the END_STREAM flag set (or a HEADERS frame with trailers). error: 1458 bytes of body are still expected suggests that the response was sent with an HTTP Content-Length response header, or that the git protocol for index-pack contains this information. (I have not dug into libcurl or git code.)

As I mentioned above, lighttpd attempts to properly end TLS connections by sending TLS close notify alert (and waiting a little while to receive close notify from the client) before close() of the connection. Yet, the client reporting error: RPC failed; curl 56 LibreSSL SSL_read: Connection reset by peer, errno 54 suggests that ECONNRESET is being returned by SSL_read() instead of something like EOF. ...maybe I'll trace through some lighttpd code looking for error conditions, and maybe write a debug patch to report when such conditions occur.

With the modified patch to http_response_backend_done() a few comments above, I do not think you'll get any trace. If you do not get any trace, then (when the issue occurs) it must be lower in the protocol stack, either HTTP/2 framing or network writes, or possibly on the client side. If you do manage to reproduce the issue, it would be interesting to see what is happening on the client side if you have set
export GIT_TRACE_PACKET=1
export GIT_TRACE=1
export GIT_CURL_VERBOSE=1

In lighttpd.conf, you can set debug.log-ssl-noise = "enable" to get error trace for things like lighttpd detecting ECONNRESET from the client.

Here is a small debug patch to issue trace from src/h2.c:h2_send_rst_stream_id() on errors resulting in lighttpd sending an RST_STREAM h2 frame.
It should not add much noise to the logs on a private network, but might add some noise for internet-facing servers which might see testing or fuzzing with invalid HTTP/2.

--- a/src/h2.c
+++ b/src/h2.c
@@ -313,6 +313,7 @@ h2_send_rst_stream_id (uint32_t h2id, connection * const con, const request_h2er
     rst_stream.u[3] = htonl(e);
     chunkqueue_append_mem(con->write_queue,  /*(+3 to skip over align padding)*/
                           (const char *)rst_stream.c+3, sizeof(rst_stream)-3);
+log_error(NULL, __FILE__, __LINE__, "(%s) sending h2 RST_STREAM on stream id:%u, error code:%d", con->dst_addr_buf.ptr, h2id, e);
 }

Actions #25

Updated by flynn about 2 years ago

Setting debug.log-ssl-noise = "enable" does not show any additional log messages, and the log message about the h2 RST_STREAM does not show up on failure.

Using the GIT_TRACE variables, the following messages are shown at the end:

09:32:05.770120 http.c:664              == Info: We are completely uploaded and fine
09:32:05.892899 http.c:611              <= Recv header, 0000000013 bytes (0x0000000d)
09:32:05.892974 http.c:623              <= Recv header: HTTP/2 200
09:32:05.893006 http.c:611              <= Recv header, 0000000025 bytes (0x00000019)
09:32:05.893028 http.c:623              <= Recv header: cache-control: no-cache
09:32:05.893050 http.c:611              <= Recv header, 0000000052 bytes (0x00000034)
09:32:05.893070 http.c:623              <= Recv header: content-type: application/x-git-upload-pack-result
09:32:05.893091 http.c:611              <= Recv header, 0000000023 bytes (0x00000017)
09:32:05.893111 http.c:623              <= Recv header: x-accel-buffering: no
09:32:05.893134 http.c:611              <= Recv header, 0000000037 bytes (0x00000025)
09:32:05.893161 http.c:623              <= Recv header: date: Thu, 10 Mar 2022 08:32:05 GMT
09:32:05.893185 http.c:611              <= Recv header, 0000000002 bytes (0x00000002)
09:32:05.893207 http.c:623              <= Recv header:
09:32:05.893386 pkt-line.c:80           packet:        clone< packfile
09:32:05.893560 run-command.c:654       trace: run_command: git index-pack --stdin --fix-thin '--keep=fetch-pack 32913 on yyyyyyy.' --check-self-contained-and-connected
09:32:05.893646 pkt-line.c:80           packet:     sideband< PACK ...
09:32:05.910032 git.c:458               trace: built-in: git index-pack --stdin --fix-thin '--keep=fetch-pack 32913 on yyyyyyy.' --check-self-contained-and-connected
09:32:57.226250 http.c:664              == Info: LibreSSL SSL_read: Connection reset by peer, errno 54
09:32:57.228054 http.c:664              == Info: Failed receiving HTTP2 data
eset by peer
09:32:57.228130 http.c:664              == Info: LibreSSL SSL_write: Broken pipe, errno 32
09:32:57.228145 http.c:664              == Info: Failed sending HTTP2 data
e
09:32:57.228168 http.c:664              == Info: Connection #0 to host xxxxxxxxx left intact
error: RPC failed; curl 56 LibreSSL SSL_read: Connection reset by peer, errno 54
error: 1886 bytes of body are still expected
09:32:57.228369 pkt-line.c:80           packet:          git> 0002
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output

Hostnames of the client and server are replaced by yyyyyyy and xxxxxxxxx.

Actions #26

Updated by flynn about 2 years ago

Replacing mod_openssl with mod_nss seems to solve the issue; git checkout succeeded three times without failure.

By the way: the option debug.log-ssl-noise = "enable" seems broken in mod_nss:

lighttpd -f /etc/lighttpd/lighttpd.conf -tt
2022-03-10 09:55:55: (configfile-glue.c.214) got a string but expected a short: debug.log-ssl-noise enable
2022-03-10 09:55:55: (server.c.1281) Initialization of plugins failed. Going down.
Actions #27

Updated by gstrauss about 2 years ago

Setting debug.log-ssl-noise = "enable" does not show any additional log messages, and the log message about the h2 RST_STREAM does not show up on failure.

This suggests that something else on the network is causing the ECONNRESET reported by the client, since lighttpd should be issuing trace (with those patches) if something occurs where lighttpd closes the connection without first sending a TLS close notify alert. Are there other application proxies between the client and the server? Are there firewalls (including the firewall on the lighttpd server) messing with things? Is there an MTU mismatch between networks?

You mentioned that the issue started occurring around the time that you upgraded to lighttpd 1.4.64. Did you upgrade the openssl libraries at that point, too? From which old version to which new version (of lighttpd and of the openssl libs)?

Replacing mod_openssl with mod_nss seems to solve the issue; git checkout succeeded three times without failure.

In these most recent tests, were there any changes to git or libcurl (and its TLS library dependencies) underneath git on the client?

FYI: I recommend lighttpd mod_gnutls over lighttpd mod_nss. The NSS libraries are extremely client-focused, and due to these limitations, lighttpd mod_nss does not support all the (TLS server-side) features in the other modules.
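
Roughly, switching TLS modules looks like this in lighttpd.conf (a sketch with a placeholder certificate path; the common ssl.* directives are shared across the TLS modules):

server.modules += ( "mod_gnutls" )   # instead of "mod_openssl" / "mod_nss"
ssl.engine  = "enable"
ssl.pemfile = "/etc/lighttpd/cert.pem"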

By the way: the option debug.log-ssl-noise = "enable" seems broken in mod_nss:

Yeah, mistaken initialization in mod_nss, mod_mbedtls, and mod_gnutls.

--- a/src/mod_nss.c
+++ b/src/mod_nss.c
@@ -1977,7 +1977,7 @@ SETDEFAULTS_FUNC(mod_nss_set_defaults)
         T_CONFIG_STRING,
         T_CONFIG_SCOPE_CONNECTION }
      ,{ CONST_STR_LEN("debug.log-ssl-noise"),
-        T_CONFIG_SHORT,
+        T_CONFIG_BOOL,
         T_CONFIG_SCOPE_CONNECTION }
      ,{ CONST_STR_LEN("ssl.verifyclient.ca-file"),
         T_CONFIG_STRING,

Actions #28

Updated by flynn about 2 years ago

All tests were made on the same client (Mac OSX) with the same git version 2.35.1, same binary. Tests were made without VPN. Network middleware like firewalls can be excluded, because just switching to mod_nss on the server solved the problem within a minute.

mod_gnutls is still not part of the official Debian package; I would have to create/modify my own Debian package.

I cannot correlate a timestamp/event to the first occurrence of the problem:
  • update from lighttpd 1.4.63 -> 1.4.64 in mid-January
  • no openssl updates before the beginning of March
  • first issue reported to me on the 10th of February
I guess it is more the git repository itself that triggers the problem:
  • only a few clients trigger the problem
  • only some (big) repositories trigger the problem; these repositories are new, so this is the best correlation I have found
Actions #29

Updated by gstrauss about 2 years ago

only some (big) repositories trigger the problem; these repositories are new, so this is the best correlation I have found

Sounds to me like those large repos correlate with when the problem was noticed, and so the issue could have been present for a while, but not triggered.

Musing aloud: Since you noted that the issue appears to be resolved with lighttpd mod_nss, this suggests that the issue is not in lighttpd code which receives the response from the backend (including removing Transfer-Encoding: chunked) and the issue is not in lighttpd HTTP/2 framing code, which is the same on top of lighttpd TLS modules (mod_openssl, mod_nss, mod_gnutls, etc). This points to the TLS layer, so mod_openssl, but not mod_nss. In all the lighttpd TLS modules, lighttpd takes pains to send the response and then send TLS close notify, and lighttpd reports network error conditions (with debug.log-ssl-noise = "enable"), so the client reporting ECONNRESET is still most curious to me. I think I'll sleep on it to see if I can think of next steps to approach this.

In the meantime, does the issue occur when using server.stream-response-body = 0 for those problem git requests? I have a feeling that it might.

Actions #30

Updated by gstrauss about 2 years ago

For the requests that succeed using lighttpd mod_nss, how many bytes (at the application layer) are reported in the access log?
For the same requests which fail using lighttpd mod_openssl, are the same number of bytes reported in the access log?

Actions #31

Updated by flynn about 2 years ago

No failures with GnuTLS; GnuTLS is the only module that always reports the same size in the access log.

Bytes reported in the access log:
  • mod_nss: between 135436364 and 135443093
  • mod_gnutls: exactly 135436364 in three attempts
  • mod_openssl: between 135436364 and 135442833
Actions #32

Updated by gstrauss about 2 years ago

Bytes reported in the access log:
  • mod_nss: between 135436364 and 135443093
  • mod_gnutls: exactly 135436364 in three attempts
  • mod_openssl: between 135436364 and 135442833

Is that for the request or for the connection ("PRI * HTTP/2.0")? Assuming that is for the request, it is interesting that the sizes of the response may differ, since the response in r->write_queue from the backend is expected (by us, and in this case for the git index-pack request) to be the same size once Transfer-Encoding: chunked is removed. On the other hand, the response size for the connection may change depending on streaming, e.g. when and how much data arrives and subsequently how many HTTP/2 DATA frames are constructed and sent.

Actions #33

Updated by flynn about 2 years ago

The bytes reported are taken from the line in the access log with the POST request (not PRI).

Actions #34

Updated by gstrauss about 2 years ago

I am still tracing through code. No luck yet in tracking this down.

If you have a chance, are you able to run with lighttpd 1.4.63 to see if the problem does not occur (during a length of time that you would expect it to occur at least once)? For a 64-bit build of lighttpd, I do not think there are any critical patches between lighttpd 1.4.63 and lighttpd 1.4.64.

Actions #35

Updated by gstrauss about 2 years ago

FYI: An issue has been identified in https://redmine.lighttpd.net/boards/2/topics/10325 relating to use of temporary files for large responses from backends, and it may or may not be related to this issue.

Actions #36

Updated by gstrauss about 2 years ago

The issue in https://redmine.lighttpd.net/boards/2/topics/10325 was somewhere with pwritev() on ancient eglibc and ancient kernel, but is probably not applicable to modern systems. lighttpd started using pwritev(), where available, in lighttpd 1.4.61.

If you have a chance, are you able to run with lighttpd 1.4.63 to see if the problem does not occur (during a length of time that you would expect it to occur at least once)? For a 64-bit build of lighttpd, I do not think there are any critical patches between lighttpd 1.4.63 and lighttpd 1.4.64.

Actions #37

Updated by flynn about 2 years ago

Because I did major updates (openssl, libc, ...) on the server, I will not be able to fully restore the situation with lighttpd version 1.4.63, but I will do the tests tomorrow ...

Actions #38

Updated by flynn about 2 years ago

Maybe off-topic/not related: on this website (redmine.lighttpd.net) I sometimes get connection problems:
- visit one URL
- do nothing for a long time, at least 30-60 min
- switch/click to another link; the browser reports a connection problem
- explicitly reloading the page solves the problem

Maybe a session timeout problem?

Direct connection, no proxy, no (known) middleware involved.

Actions #39

Updated by gstrauss about 2 years ago

Because I did major updates (openssl, libc, ...) on the server, I will not be able to fully restore the situation with lighttpd version 1.4.63, but I will do the tests tomorrow ...

I have not been able to reproduce this, so I appreciate any help in bisecting to narrow down when this issue might have been introduced.

Maybe off-topic/not related: on this website (redmine.lighttpd.net) I sometimes get connection problems:

Unrelated, since stbuehler runs lighttpd2 on this server. I'll pass along the feedback. Thanks.

Actions #40

Updated by flynn about 2 years ago

I can no longer reproduce the bug, neither with lighttpd version 1.4.63 nor with 1.4.64 using mod_openssl.

But: the user of the client machine got a big update (compared to the last tests more than 10 days ago), which includes a curl update from version 7.64.1 to 7.81.0. This time the test also went very smoothly and at the maximum download rate. The reported problems occurred at significantly lower download rates (~50% and less).

From my side, this ticket can be closed as not relevant / a client issue.

I apologize for the work created.

Actions #41

Updated by gstrauss about 2 years ago

  • Status changed from Need Feedback to Fixed
  • Target version changed from 1.4.65 to 1.4.xx

Thank you for following up. It is useful to have this info to recommend that people check and upgrade their curl libraries if having issues with dependent apps such as git.

I am marking this issue "Fixed" rather than "Invalid" since you have provided a solution, even though the solution does not appear to be due to an issue in lighttpd. Thanks!

Actions #42

Updated by gstrauss about 2 years ago

  • Target version deleted (1.4.xx)
Actions #43

Updated by gstrauss almost 2 years ago

@flynn: FYI: there is a newly analyzed issue in #3089 with libnghttp2, used by curl, which under certain scenarios could result in very slow uploads and high CPU utilization.

lighttpd 1.4.65 will include a workaround to reduce the occurrence of degenerative upload behavior by libnghttp2.
