Feature #2824 (closed)

inetd/wait mode with auto-shutdown after idle timeout

Added by Guenther_Brunthaler over 6 years ago. Updated 4 months ago.

Status: Fixed
Priority: Low
Category: core
Target version: -
ASK QUESTIONS IN Forums: No

Description

I would like to be able to run lighttpd as an inetd/wait service, serving not just one request (like option -1) but multiple requests from stdin, and terminating after being idle for some time.

In other words, I would like to see the combination of the already-existing -1 and -i options.

Consider the advantages: infrequently used web services in resource-restricted environments could launch lighttpd as a "wait" service via inetd, just like option -1 already does.

But unlike -1, the new mode would serve any number of requests, making lighttpd run as efficiently as a normal daemon, except for the startup time for the very first request.

Admittedly, systemd's socket activation could already be used today to implement this.

But what systemd cannot do, and generally sucks at, is stopping auto-activated services once they are no longer used. It only ever automatically starts things, never stops them.

This is why the combination with an idle timeout value would be so helpful: after the initial launch via inetd, lighttpd could serve requests for minutes or even hours, not just for fractions of a second or single requests.

Nevertheless, once no one has accessed the web service for some time (perhaps at night?), lighttpd would shut down gracefully and automatically, releasing any resources, until inetd starts it again when the next period of activity begins.

In addition to the obvious advantages like resource savings during the time it is not running, this new mode would also help alleviate memory leaks that are said to plague long-running lighttpd services in some situations: every time lighttpd (and its associated FastCGI processes) shuts down, the whole process group terminates and any leaked memory is reclaimed as well.

I also hope it would not be too difficult to implement this new feature: the required basic building blocks (the -i and -1 options) already exist; they would only need to be combined in a new way.

Actions #1

Updated by Guenther_Brunthaler over 6 years ago

I realize that what I have requested is probably already possible with systemd by using the -i option and normal socket activation, i.e. not via inetd.

However, it would be nice to use lighttpd without systemd.

Many small installations run BusyBox, which provides inetd and could thus launch lighttpd in the described way, but cannot run systemd, which needs far more resources.

Also, systemd usage is a highly emotional topic, and not everyone likes to run systemd.

Furthermore, systemd does not run on platforms other than Linux (such as BSD).

Therefore it would be best if lighttpd could provide the requested functionality with or without systemd. Then everyone could be happy.

Actions #2

Updated by gstrauss over 6 years ago

  • Priority changed from Normal to Low
  • Target version deleted (1.4.x)

But unlike -1, the new mode would serve any number of requests, making lighttpd run as efficient as a normal daemon, except for the startup time for the very first request.

It sounds like you're trying to over-optimize. Can you describe how the current functionality is a real, measurable problem for you? You can use -1 -i "timeout" right now to do what you desire, and lighttpd will serve multiple keep-alive requests on the same connection. The next connection that comes in will start up a new instance, depending on how you have tuned inetd.

If you are on a small system running BusyBox, you probably do not have a complex lighttpd.conf connecting to many external resources (e.g. databases), and so lighttpd startup is expected to be reasonably fast. If lighttpd is starting a heavyweight backend service that takes a long time to start up, then that is probably the problem you want to solve, so that you can use lighttpd -1 -i "timeout" with a long server.max-keep-alive-idle timeout. (You could have lighttpd start up a simple service which checks if a longer-running backend daemon is running, and starts it up with its own idle timeout if that heavyweight backend server is not already running.)

As for your comment about "easy to implement": the -1 mode expects an already-accept()ed connection and so cannot be reused as you suggest. The code for -i could be used as-is, but for lighttpd to listen on a socket already created on stdin, lighttpd would have to save STDIN_FILENO at startup (instead of close()ing it), and would then have to skip the socket(), bind(), and listen() calls which it currently makes for any sockets on which it is configured to listen. I suppose it would be straightforward, but not trivial, to special-case server.bind = "/dev/stdin" in a few places and extend network_init() to take an extra arg of a fd dup()d from STDIN_FILENO before STDIN_FILENO is closed.

Anyway, I think I need more convincing (more empirical data, not idle musings) that this feature is actually worth implementing.

Actions #3

Updated by Guenther_Brunthaler over 6 years ago

and so lighttpd startup is expected to be reasonably fast

It is, but I would like to avoid the per-connection process-creation overhead of inetd/nowait.

Sometimes a service is not generally used frequently (or at all), but there are peaks (such as during nightly batch operations) where large quantities of connections need to be processed as efficiently as possible.

the -1 mode expects an already-accept()ed connection and so cannot be reused as you suggest

I was not aware of those implementation details, thank you for explaining. It would thus need more work than anticipated.

I suppose it would be straightforward, but not trivial, to
special-case

This sounds like slowing lighttpd down in the general case, which I certainly do not want at all! If it is not possible to implement the new feature efficiently, forget it!

I certainly do not want to be responsible for slowing down lighttpd during normal operation, because its raw speed and low resource usage are the primary reasons why I love lighttpd so much!

I need more convincing (more empirical data, not idle musings)
that this feature is actually worth implementing.

Unfortunately this is sort of a chicken-egg-problem:

To my knowledge, there is no web server out there which implements the feature that I have described. So it is hard to compare inetd/wait operation, inetd/nowait operation, and normal heavy-daemon operation.

Things to consider, though:

  • AFAIK, no one ever uses HTTP via inetd for high-volume connections, even in cases where long periods of inactivity are known. IMO this is exactly because of inetd/nowait's overhead.
  • Some people recommend nginx over lighttpd, because the latter supposedly "leaks memory like a sieve" [ https://serverfault.com/questions/114895/why-is-nginx-more-popular-than-lighttpd/114952 ]. I do not support such statements because I have never experienced similar troubles myself. But it is clear that any leaked memory would be freed once lighttpd shuts itself down after some period of inactivity. This is a general advantage for all inetd services, yet the startup overhead of inetd/nowait services is deemed too expensive by many people.
  • A lot of sites potentially (another thing where we are lacking statistics) already use lighttpd -i in combination with systemd's socket activation to achieve exactly what I want. This would make systemd a crutch for lighttpd, effectively extending its capabilities. Unfortunately, this would have to be paid for with the relatively high resource usage of systemd itself, possibly nullifying the resource-saving effects of socket activation in the first place.
  • lighttpd would be the first web server to support such a feature, making it not just the best but the only choice in such situations.

Actions #4

Updated by gstrauss over 6 years ago

Sometimes a service is not generally used frequently (or at all), but there are peaks (such as during nightly batch operations) where large quantities of connections need to be processed as efficiently as possible.

A contrived example gets a contrived solution: start up a dedicated lighttpd instance before the batch jobs begin, and shut down the dedicated lighttpd instance after the batch jobs end.

To my knowledge, there is no web server out there which implements the feature that I have described. So it is hard to compare inetd/wait operation, inetd/nowait operation, and normal heavy-daemon operation.

That nobody is doing something that is not terribly difficult to do is not an argument in favor of doing it. It may, in fact, support the opposite argument.

AFAIK, no one ever uses HTTP via inetd for high-volume connections, even in cases where long periods of inactivity are known. IMO this is exactly because of inetd/nowait's overhead.

lighttpd is generally lightweight. Sometimes backends (like some FastCGI servers) are much heavier. Look at lighttpd git master (the upcoming lighttpd 1.4.46), where lighttpd can scale up and scale down the number of running backend server instances based on activity. (This is an experimental new feature.)

Some people recommend nginx over lighttpd, because the latter supposedly "leaks memory like a sieve"

The page you linked to is from 2010. There are no known memory leaks in the current version of lighttpd. What some ill-informed people called memory leaks has been "fixed" in versions of lighttpd since lighttpd 1.4.40 as lighttpd was changed to send large generated output to disk rather than buffering it all in memory. See #2083 and issues linked to #949 for further details.

A lot of sites potentially (another thing where we are lacking statistics) already use lighttpd -i in combination with systemd's socket activation to achieve exactly what I want.

Citation needed. The -i feature to lighttpd was added in lighttpd 1.4.40.

Your justifications for this feature request remain "pie in the sky" with little backing evidence, and your argument that evidence does not exist does not help your case. If lighttpd startup is fast and inetd + lighttpd startup is too slow, then is it because there is too much traffic? Maybe inetd is not the right choice here. As I suggested above, try spinning up a dedicated instance of lighttpd for the time when known peak load will occur from a known other process.

Actions #5

Updated by Guenther_Brunthaler over 6 years ago

start up dedicated lighttpd instance before batch jobs begin,
and shut down lighttpd dedicated instance after batch jobs end.

This is actually what I have been doing so far. But it requires administrative access to the machine running the service, which would not be required with inetd/wait.

Another possibility would be using a cron job to start/stop the dedicated instance. But this is dangerous, because the run time of the batch jobs may vary, and they may also come in variable volume from varying sets of client machines.

There is also the danger that an unexpected batch job from a different machine tries to access the service after it has been shut down. The on-demand nature of inetd/wait would eliminate that danger.

Your justifications for this feature request remain "pie in the sky"
with little backing evidence

I will continue to look for any future implementation of the requested feature in any web server, and will report back here with comparison results as (hopefully!) backing evidence once I find such an implementation.

Actions #6

Updated by gstrauss over 6 years ago

I will continue to look for any future implementation of the requested feature in any web server, and will report back here with comparison results as (hopefully!) backing evidence once I find such an implementation.

Justification is not "hey look: it's faster". I already know that is likely to be the case.
Justification might be something along the lines of "I can only serve X requests per second using inetd and my slow backend, and I tested lighttpd git master with scaling up backends as needed, and I still could only reach Y requests per second. However, unless I can do Z requests per second, the services hitting the server get failures due to running out of resource or timeouts, and for some reason I am unable to have these services retry or use keep-alive or other mechanism to keep load steady and reduce load peaks"

Actions #7

Updated by Guenther_Brunthaler over 6 years ago

Justification is not "hey look: it's faster". I already know that is likely
to be the case.

Then I am not sure what sort of "evidence" you actually want me to provide.

It would be a new feature, allowing on-demand service setups which are currently not possible with the same efficiency on systems where systemd is not available (say BSD).

Compared to the "start/stop manually" method, the new feature would have the advantage that no administrative action is necessary, and the process scheduling the batch jobs would not require administrative privileges either.

It would thus be a "nice to have" feature, creating new use cases for lighttpd.

But it is certainly not a feature that one cannot live without.

Like most new features.

No one actually needs a new and better car if the old one is still working fine.

and I tested lighttpd git master with scaling up backends as needed

What does that even mean? If there are resource restrictions on some backend, how should they be "scaled up"? Aside from a hardware upgrade, there is usually not much one can do.

Anyway, all this is just a feature request.

You obviously are not convinced that the whole Internet superserver / on-demand approach (with the same performance as a dedicated server) is a useful feature.

So it seems likely that this request will remain low priority and, in effect, will never be implemented.

Which is your prerogative, of course, because YOU decide, and there is no obligation for any feature request to be picked for implementation.

So let's leave it at this: You know there is someone out there who thinks the whole superserver approach is not dead yet and would like lighttpd to support it.

You don't like the idea and therefore it will remain a wish not to be fulfilled.

But the ticket here has at least the advantage that other people who want the same thing can see it and will not create the same ticket again. So it saves you the trouble of dealing with this issue again in the future.

Otherwise, keep up the good work!

I am very glad that lighttpd exists, and it will remain my favorite web server. It is still the best of the available options.

Actions #8

Updated by gstrauss over 6 years ago

You have spent a lot of time responding, but hopefully you could spend a little more time listening.

We both agree that this is a feature request.

I have asked for some empirical data to help quantify the potential value, to see if this is worth the effort.

You have only responded with "I think this is a great idea", but not with anything like "I tried the suggestions you made above, but they are not quite sufficient, and here's the data comparing them".

and I tested lighttpd git master with scaling up backends as needed

What does that even mean? If there are resource restrictions on some backend, how should they be "scaled up"? Aside from a hardware upgrade, there is usually not much one can do.

It means you didn't read my entire response, which included "Look at lighttpd git master (the upcoming lighttpd 1.4.46) where lighttpd can scale up and scale down the number of running backend server instances based on activity. (experimental new feature)", did not try to understand it, and did not read the documentation. See Docs_ConfigurationOptions max-procs and min-procs, and #1162.

Actions #9

Updated by Guenther_Brunthaler over 6 years ago

but hopefully you could spend a little more time listening

I really tried hard, but obviously have failed at understanding:

  • You want empirical data comparing different approaches as evidence for the need to implement a new feature. This just does not make sense. If the different approaches can accomplish the same as the requested feature, then the new feature is obviously unnecessary. If they can't do what the new feature could do, there is no way to provide useful comparison data. It's like asking two blind people to explain the benefit of using color, when the requested feature is that very color.
  • I did already provide you with empirical data on a boolean basis, by explaining what the current implementation cannot do (namely, avoid the per-connection process-creation overhead without -1), and why it would be less efficient and use more resources. You even agreed to that:

Justification is not "hey look: it's faster".
I already know that is likely to be the case.

So you already know that the feature would be faster (and not just "likely": if the process-creation overhead is removed, it must be faster, even though the difference might only be noticeable when a larger number of requests is served).

  • You made the suggestion of manually starting/stopping a dedicated server before batch processing takes place. I explained that this is inappropriate for situations where the client machine requesting batch services does not have administrative permissions on the server machine for starting or stopping the dedicated server. But even where it is possible, it would be a mess: it would open the possibility of a number of race conditions, where one client machine is done with its batch processing and shuts the service down while a different client machine is still using the service. This means there would be a need to establish some sort of resource-reservation protocol between the client and server machines. All of this would be unnecessary with inetd+wait support, which would "just work".
  • I already acknowledged that systemd can be used to implement the new feature without help from lighttpd, but that there are reservations against using systemd in terms of both resource efficiency and cross-platform portability. There are also doubts about systemd's reliability, especially related to security, because of its huge code base and also empirically because of the large number of bugs already found so far.
  • I admit that I did not check out the git repository of lighttpd, because I do not have that kind of bandwidth where I am working right now (2G access only). Guilty as charged in this regard.
  • However, I did read #1162 (now) but still do not understand how that relates to my feature request. Obviously, "min-procs = 0" would solve my problem. But equally obviously, min-procs can never be 0, because then there would be no process left to start more instances on demand. So, implementing min-procs = 0 would require an additional dedicated "guardian" or "master" process, making it de facto min-procs = 1 again - which is exactly what I wanted to avoid with the new inetd+wait feature.

With inetd+wait, there would not be any dedicated guardian for lighttpd. Instead, a single inetd instance assumes that role - and not just for lighttpd, but for any number of similar "on-demand" services. This means inetd+wait can pool the resources of multiple dedicated "guardian" processes into a single process, thereby saving resources. It also eliminates the requirement to implement "guardian" processes for different kinds of services.

And contrary to inetd+nowait, it would not be less efficient than a dedicated server instance in times of peak load.

So, it has advantages which can be enumerated like I did above.

But for an actual comparison giving hard performance numbers as evidence, I would need an implementation first... which brings us back to the chicken-and-egg problem.

Actions #10

Updated by gstrauss over 6 years ago

I really tried hard, but obviously have failed at understanding

You have not tried really hard to produce any empirical data, or if you have, you have shared none of it here.

You want empirical data comparing different approaches as evidence for the need to implement a new feature. This just does not make sense.

I already acknowledged that systemd can be used to implement the new feature without help from lighttpd,

Please read those two statements of yours more carefully. Does one perhaps contradict the other in any way? Can the systemd alternate solution be used to model and approximate the feature requested?

You have requested a new feature, something which takes work, and when asked to perform a small amount of token work to help make a case that implementing this feature is worth my effort, you have not only balked, but spewed either repetition or nonsense in response.

Also, please stop trying to tell me what inetd does. Repeating yourself does not provide any empirical data. I am intimately aware of supervisors and have been using them for decades. (The mid 90's were two decades ago.) Here are some patches I published for daemontools: https://www.gluelogic.com/code/daemontools/

However, I did read #1162 (now) but still do not understand how that relates to my feature request. Obviously, "min-procs = 0" would solve my problem. But equally as obvious, min-procs can never be 0, because then there would be no process left starting more instances on demand. So, implementing min-procs = 0 would require an additional dedicated "guardian" or "master" process, making it de facto min-procs = 1 again - which is exactly what I wanted to avoid with the new inetd+wait feature.

You obviously have not tried implementing this (and said as much). min-procs = 0 is for the backend heavy-weight processes. lighttpd still runs. A 32-bit lighttpd might consume about 1.5 MB RSS (resident memory) including some useful lighttpd modules. While not nothing, that can in some cases be considerably less than keeping additional heavy-weight backends running idly. For some people, this may be a very good improvement to their lighttpd+backends memory footprint, and this solution is already implemented and can be benchmarked and compared to other solutions.

I did already provide you with empirical data on a boolean basis, by explaining what the current implementation cannot do (save the process creation overhead with -1), and why it would be less efficient and use more resources.

Please use your favorite search engine for "scientific method" and "empirical data". I have agreed that your hypothesis is promising, but you have provided no empirical data to back this up. Your "thought experiment" about fork/no-fork is not empirical data. I am not interested in the binary "better/worse". I am interested in "approximately how much better?" How much better than the base case? How much better than min-procs = 0 ? Is it worth my effort to implement this additional feature?

Actions #11

Updated by Guenther_Brunthaler over 6 years ago

Can the systemd alternate solution be used to model
and approximate the feature requested?

Unfortunately, systemd does not run on my box and so I cannot test. I already explained why systemd is not an option for everyone. This includes me.

Also, please stop trying to tell me what inetd does

I described only those parts of its operation which are necessary for providing the basis required to actually use the feature I requested. I have no idea how many developers are working on lighttpd, so it is quite possible that some of them know about inetd's operational details while others don't. I just wanted to be as clear as possible.

I am intimately aware of supervisors

I never doubted that at least one of lighttpd's developers knew, because otherwise they would have had a hard time implementing the -1 option.

Here are some patches I published for daemontools

So it seems you are a follower of the daemontools/runit/s6 school. This might explain your approach to min-procs = 0, which also requires a permanently running supervisory process.

Of the aforementioned init systems, I only have actual experience with BusyBox's minimalistic implementation of runit. But I assume the others are similar in spirit.

I liked many aspects of runit, especially that the supervisory processes are really lightweight.

What I liked less was the fact that they were there at all!

I always liked the inittab approach better: why run 2 * N processes to provide N services, or even 3 * N processes when svlogd instances have to capture the logging data produced by the services?

With the inittab approach a single supervisory process will do, and there is also a single logging process (syslogd) instead of a svlogd instance per service.

It gets even better if one of the processes in inittab is inetd: then the single init process monitors whether inetd is running and restarts it if it crashes.

And inetd itself forks service processes on demand.

But in order to run efficiently from inetd in high-load situations (and not just occasionally), inetd "wait" support in the service is mandatory.

min-procs = 0 is for the backend heavy-weight processes. lighttpd still runs.

Which is exactly what I want to avoid! I am not saying min-procs/max-procs is a bad feature; it absolutely makes sense (at least on multi-processor/multi-core machines).

But it is a different feature than inetd+wait because of the "lighttpd still runs"-part. Making lighttpd not run permanently is the whole point of my request, aside from being more efficient than inetd+nowait.

Your "thought experiment" about fork/no-fork is not empirical data.

Well, this is true and will remain true until I have a means of providing such data. As explained, systemd is not available to me for testing.

I therefore conclude the only way for me to provide the empirical data you want to see is to implement the inetd+wait feature myself somehow.

OK, I will try that, but it might take some time because I have no idea where to start. Plus I have zero experience with socket programming. Well, the BusyBox implementation of httpd seems to be small. Perhaps I can kludge inetd+wait support in there somehow.

Overall, it seems making feature requests for lighttpd is really hard.

Actions #12

Updated by gstrauss over 6 years ago

  • Status changed from New to Missing Feedback
  • Target version set to 1.4.46

Overall, it seems making feature requests for lighttpd is really hard.

You seem to have no problem producing words.

Please come back when you have produced some empirical data, instead of making a million excuses why you are unable to produce any.

Actions #13

Updated by stbuehler over 6 years ago

I somehow doubt this is going to be implemented in 1.4.46 :)

Anyway: I think it would be nice to support the systemd socket activation with the Accept=no variant, i.e. passing the listening file descriptors.

This activation uses a well-defined environment API, which could also be emulated by other systems (i.e. inetd).

So, my proposal:

When started with the oneshot option -1, lighttpd first tries to detect a systemd socket-activation environment. If it finds one, it will take all passed listening sockets and accept requests on them (but won't bind any other configured sockets); if not, the behavior stays as it is now.

@Guenther_Brunthaler: If you combine this with the idle option, it should do exactly what you want.

If we wanted to support systemd socket activation without the oneshot option, the question is whether we'd want to bind sockets not passed by systemd.

Actions #14

Updated by Guenther_Brunthaler over 6 years ago

@stbuehler: If I understand that environment protocol correctly, it would then suffice to launch a script via inetd which exec's lighttpd like this:

#! /bin/sh
SD_LISTEN_FDS_START=0
LISTEN_FDS=0
export SD_LISTEN_FDS_START LISTEN_FDS
exec /usr/sbin/lighttpd -i 1800 -f /etc/lighttpd/lighttpd.conf

and lighttpd would do inetd+wait.

This would be cool, indeed!

The script launching overhead would also be acceptable, because it is only done infrequently.

I only wonder what the other environment variables LISTEN_PID or LISTEN_FDNAMES should be set to when launching the thing from inetd rather than from systemd.

Actions #15

Updated by Guenther_Brunthaler over 6 years ago

@gstrauss

You seem to have no problem producing words.

That's what feature requests are usually made of.

And also see how far it got me! ;-)

Please come back when you have produced some empirical data

Will do.

instead of making a million excuses why you are unable to produce any.

"Tough crowd", like the entertainer used to say.

Actions #16

Updated by stbuehler over 6 years ago

SD_LISTEN_FDS_START is a constant from the header and is always 3; i.e. you need to dup your fd(s) to fd 3 (and following), like:

# dup fd 0 to fd 3
exec 3<&0
# close fd 0
exec <&-

$LISTEN_FDNAMES is optional; $LISTEN_PID should be the process id of lighttpd. As you're using exec, lighttpd will replace the shell and keep the shell's process id $$:

export LISTEN_PID=$$
exec /usr/sbin/lighttpd ...

Actions #17

Updated by Guenther_Brunthaler over 6 years ago

@stbuehler

Thank you for explaining! Now it makes sense.

And if implemented, it would indeed also solve my problem.

Actions #18

Updated by gstrauss 4 months ago

  • ASK QUESTIONS IN Forums set to No

The option server.systemd-socket-activation = "enable" was added in lighttpd 1.4.53.

Actions #19

Updated by gstrauss 4 months ago

  • Status changed from Missing Feedback to Fixed
  • Target version deleted (1.4.46)