Bug #2680 (closed)

regression: uploading large file when disk space is tight

Added by rgenoud about 9 years ago. Updated about 9 years ago.

Status:
Fixed
Priority:
Normal
Category:
core
Target version: 1.4.38

Description

Since this commit:

Revision 3010: "increase upload temporary chunk file size from 1MB to 16MB" (from Stefan Bühler)

uploading a file to the same partition as the lighttpd temp folder requires more free space than before (file size + 16MB instead of file size + 1MB).

The use case is:
an embedded board with a 60MiB tmpfs on /tmp and read-only flash memory.
Before this commit I could upload a 50MiB file into /tmp/; after this commit it fails, because the upload needs the file size plus the size of one chunk (thus 66MiB).

Maybe the solution is to use a config parameter.
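
For illustration, a minimal sketch (in C, since that is what lighttpd is written in) of what making the chunk size a run-time setting could look like. Every identifier here (upload_config, upload_temp_file_size, DEFAULT_UPLOAD_TEMPFILE_SIZE) is invented for this sketch and is not an actual lighttpd name:

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical sketch only: the names below are invented for this
     * report and are not lighttpd's real identifiers.  The idea is to
     * turn the hard-coded temp chunk file size (raised from 1MB to 16MB
     * in r3010) into a setting, defaulting to the old 1MB value. */
    #define DEFAULT_UPLOAD_TEMPFILE_SIZE ((size_t)1 * 1024 * 1024)

    struct upload_config {
        size_t upload_temp_file_size;   /* 0 means "not set by the admin" */
    };

    static size_t tempfile_size_limit(const struct upload_config *cfg) {
        /* honour the configured value, otherwise keep the old 1MB behaviour */
        return cfg->upload_temp_file_size ? cfg->upload_temp_file_size
                                          : DEFAULT_UPLOAD_TEMPFILE_SIZE;
    }

    int main(void) {
        struct upload_config tight = { .upload_temp_file_size = 1024 * 1024 };
        struct upload_config none  = { .upload_temp_file_size = 0 };
        printf("configured: %zu bytes per temp chunk\n", tempfile_size_limit(&tight));
        printf("default:    %zu bytes per temp chunk\n", tempfile_size_limit(&none));
        return 0;
    }

With the limit back at 1MB, the 50MiB upload described above would again peak around 51MiB on the temp partition and fit in the 60MiB tmpfs.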

#1

Updated by stbuehler about 9 years ago

We don't preallocate disk space, so I don't think it will require "file size + size of one chunk".

#2

Updated by stbuehler about 9 years ago

  • Target version set to 1.4.38

I think I now see what the problem is: the temp files allocated by lighttpd still add up to the same total size, but while streaming the upload to a backend the backend might require space too, and lighttpd only frees a temp file after it is done with it, i.e. after "chunk size".

An upload will then require file size + chunk size in total, as you said.
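
To make that accounting concrete with the numbers from the report (a 50MiB upload on a 60MiB tmpfs), here is a small standalone sketch in C; the 1MiB and 16MiB chunk sizes are the before/after values from r3010:

    #include <stdio.h>

    int main(void) {
        /* numbers from the report: 60MiB tmpfs, 50MiB upload */
        const double tmpfs_mib  = 60.0;
        const double upload_mib = 50.0;

        /* peak temp-partition usage as described above: file size plus
         * one chunk file, since lighttpd frees a temp file only after
         * it is done with it */
        const double chunk_mib[] = { 1.0, 16.0 };
        for (int i = 0; i < 2; ++i) {
            const double peak = upload_mib + chunk_mib[i];
            printf("chunk %4.1f MiB -> peak %4.1f MiB -> %s\n",
                   chunk_mib[i], peak,
                   peak <= tmpfs_mib ? "fits" : "does not fit");
        }
        return 0;
    }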

#3

Updated by stbuehler about 9 years ago

  • Status changed from New to Fixed
  • % Done changed from 0 to 100

Applied in changeset r3050.
