Bug #3049
Too much content with HTTP/2.0 (closed)
Description
- HTTP/1.1 gives the correct size, in my example 16289403 bytes
- HTTP/2.0 gives a few bytes more, in my example 16289410 bytes
Tested with curl, with and without the option --http1.1.
The "additional" bytes in HTTP/2.0 case are linefeeds (0D0A0D0A ....)
Updated by gstrauss almost 4 years ago
I was not immediately able to reproduce the issue you reported when I used a small file, so I'll try with a larger file.
Would you describe your test file that produced the above? In #3046, my test sent "excess\n", which is 7 characters and happens to match your difference above, but you do say that the additional bytes are "\r\n\r\n"; however, that is only 4 bytes.
Did the resulting file have excess bytes, or did curl report those as excess? There is no Transfer-Encoding: chunked with HTTP/2, so I interpreted what you said above to mean that your backend is sending an HTTP/1.1 Transfer-Encoding: chunked response, and the client is either HTTP/1.1 or HTTP/2.
Related, but different: I found that for an HTTP/1.1 client and an HTTP/1.1 mod_proxy backend sending a Transfer-Encoding: chunked response, with server.stream-response-body = 1, some excess data sent by the backend might be passed through to the client. This is due to an optimization that does not modify the data from the backend when receiving chunked data from the backend and sending chunked data to the client. I might make an adjustment to handle this, as long as the adjustment is not expensive, but I do not consider this to be a high-impact problem.
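To illustrate what strict handling would look like, here is a minimal sketch (illustration only, not lighttpd's actual parser) of a chunked decoder that stops at the terminating zero-length chunk and flags any excess bytes instead of forwarding them:
#!/usr/bin/perl
# Illustration only -- not lighttpd's parser: decode a chunked body strictly,
# stop at the zero-length chunk, and report any excess bytes instead of
# passing them through.
use strict;
use warnings;

sub dechunk {
    my ($buf) = @_;
    my $out = '';
    while ($buf =~ s/\A([0-9A-Fa-f]+)(?:;[^\r\n]*)?\r\n//) {
        my $len = hex($1);
        if ($len == 0) {
            $buf =~ s/\A(?:[^\r\n]+\r\n)*\r\n//;   # consume optional trailers and the final CRLF
            warn length($buf) . " excess byte(s) after the final chunk\n" if length($buf);
            return $out;
        }
        die "truncated chunk\n" if length($buf) < $len + 2;
        $out .= substr($buf, 0, $len);
        substr($buf, 0, $len + 2, '');             # drop the chunk data and its trailing CRLF
    }
    die "malformed chunked body\n";
}

print dechunk("4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\nEXCESS"), "\n";
The optimization described above forwards the backend's chunked framing unmodified instead of re-framing it this way, which is how excess bytes from the backend could be passed through to the client.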
Updated by flynn almost 4 years ago
The test file is a ~16MB jar-file (= zip-file).
The Transfer-Encoding: chunked relates to the backend; there is no Content-Length in the response.
I downloaded the file twice with curl, with and without --http1.1; the files produced by curl differ in size by 7 bytes.
The additional bytes mentioned above have been verified with a hex editor: the full sequence is 000D0A0D0A0D0A (sorry, I missed the leading zero byte in the first message). A scripted version of this byte comparison is sketched below.
- the first and second request produce the original/correct file size
- requests 3, 4 and 5 produce the file that is 7 bytes larger
- request 6 produces the original/correct file size
- request 7 produces the file that is 7 bytes larger
So it takes several attempts to trigger the problem, but it happens in about 50% of the requests in my tests.
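(For reference, the hex-editor check can also be scripted. The following is only a sketch; out.http1 and out.http2 are assumed names for the two curl downloads. It reports where the files diverge and hex-dumps the first bytes of the larger file from that point.)
#!/usr/bin/perl
# Sketch only; 'out.http1' and 'out.http2' are assumed names for the two downloads.
use strict;
use warnings;

open(my $f1, '<:raw', 'out.http1') or die $!;
open(my $f2, '<:raw', 'out.http2') or die $!;
local $/;                                  # slurp both files whole
my ($d1, $d2) = (scalar <$f1>, scalar <$f2>);
my $min = length($d1) < length($d2) ? length($d1) : length($d2);
my $i = 0;
$i++ while $i < $min && substr($d1, $i, 1) eq substr($d2, $i, 1);
printf "sizes: %d vs %d, first difference at offset %d\n",
       length($d1), length($d2), $i;
printf "bytes from there in the larger file: %s\n",
       unpack('H*', substr((length($d1) > length($d2) ? $d1 : $d2), $i, 16));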
Updated by flynn almost 4 years ago
- either the file size is correct
- or it is exactly 7 bytes larger; I did NOT find any difference other than 7 bytes in about 50 tests
Updated by gstrauss almost 4 years ago
I have fixed the issue for HTTP/1.1 excess data that I noted above, and I fixed some additional edge cases parsing Transfer-Encoding: chunked from the backend. This patch has been pushed to my dev branch.
I haven't been able to reproduce the issue with HTTP/2 (testing before applying the patch mentioned above). I am using server.stream-response-body = 1 for both the front-end lighttpd running mod_proxy and the backend lighttpd running mod_cgi. I have tried different timings for the sleep in the Perl script below, and I have tried chunked encoding with 1 MB chunks instead of 4 KB chunks. It should not matter, but I am testing using cleartext requests, not TLS.
#!/usr/bin/perl
$| = 1;
print "Status: 200\r\nTransfer-Encoding: chunked\r\n\r\n";
for (1..4096) {
    print "1000\r\n", "a" x 0x1000, "\r\n";
    select(undef, undef, undef, 0.001);
}
print "0\r\n\r\n";
The following commands each produce /dev/shm/out with the exact same size of 16MB (16777216 bytes).
curl -s -o /dev/shm/out --http2-prior-knowledge "http://127.0.0.1:8080/cgi.pl?t"
curl -s -o /dev/shm/out --http2-prior-knowledge "http://127.0.0.1:8080/cgi.pl?t"
Are you able to reproduce the bug with the above?
Are you able to reproduce the bug with the patches at the tip of personal/gstrauss/master?
Updated by gstrauss almost 4 years ago
...Still a work in progress, as some of my tests are failing
[Edit] Actually, my tests are passing -- it was an accidentally commented-out line in the test script that I had been modifying.
Updated by gstrauss almost 4 years ago
- Category set to core
- Status changed from New to Patch Pending
Do you know if there are any trailers that are being sent with the response?
I have experimented with segmenting the final parts of the chunked encoding, but I still have not been able to reproduce the issue with HTTP/2.
(I do note the correlation that the end of the final non-zero data chunk, plus the final chunk, comes to 7 bytes: \r\n0\r\n\r\n, and that I believe my last patch for HTTP/1.1 chunked decoding addresses any potential duplication of those final 7 bytes.)
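A quick illustration of where that 7-byte figure comes from: the CRLF that terminates the last non-zero data chunk, the zero-length final chunk, and the CRLF that ends the (empty) trailer section add up to exactly 7 bytes.
#!/usr/bin/perl
# Purely illustrative: the chunked-encoding tail that accounts for 7 bytes.
my $tail = "\r\n"      # CRLF ending the last non-zero data chunk
         . "0\r\n"     # zero-length final chunk
         . "\r\n";     # CRLF ending the (empty) trailer section
printf "%d bytes: %s\n", length($tail), unpack("H*", $tail);
# prints: 7 bytes: 0d0a300d0a0d0a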
Updated by flynn almost 4 years ago
I applied your recent patches and can no longer trigger the problem in more than 20 attempts.
The original request/workflow, which I had reduced to this single request as the cause, also works now.
Big thanks for your efforts and the quick fix!
Updated by gstrauss almost 4 years ago
Many thanks to you for your thorough workout of lighttpd and for the effort in reducing test cases, reporting bugs, following up, and more!
Updated by gstrauss almost 4 years ago
- Status changed from Patch Pending to Fixed
Applied in changeset cabced1f9fd6028067a1a3507a306b5a7c76fda7.