Discussion:
systemd: Is it wrong?
Steve Dickson
2011-07-08 02:57:54 UTC
Permalink
Hello,

One of the maintainers of systemd and I have been working
together on trying to convert the NFS SysV init scripts
into systemd services. Here is the long trail...

https://bugzilla.redhat.com/show_bug.cgi?id=699040

The point is this: with a fairly complicated system,
some events need to happen, like loading modules,
before other events happen, like setting parameters of
those loaded modules.

Currently the ExecStart commands can start and end
before the ExecStartPre commands even start. This means
setting module parameters within the same service file
is impossible.

I suggested that a boundary be set that all ExecStartPre
commands finish before any ExecStart commands start,
which would allow complicated subsystems, like NFS,
to start in a very stable way...

So is it wrong? Shouldn't there be a way to allow certain
parts of a system to synchronously configure some
things so other parts will come up as expected?

tia,

steved.
Neil Horman
2011-07-08 10:47:46 UTC
Permalink
Post by Steve Dickson
Hello,
One of the maintainers of systemd and I have been working
together on trying to convert the NFS SysV init scripts
into systemd services. Here is the long trail...
https://bugzilla.redhat.com/show_bug.cgi?id=699040
The point is this: with a fairly complicated system,
some events need to happen, like loading modules,
before other events happen, like setting parameters of
those loaded modules.
Currently the ExecStart commands can start and end
before the ExecStartPre commands even start. This means
setting module parameters within the same service file
is impossible.
I would have thought intuitively that Pre commands would complete prior to
the ExecStart commands in the same service file. To not do so seems like a bug to
me.
Neil
"Jóhann B. Guðmundsson"
2011-07-08 11:57:04 UTC
Permalink
Post by Neil Horman
Post by Steve Dickson
Hello,
One of the maintainers of systemd and I have been working
together on trying to convert the NFS SysV init scripts
into systemd services. Here is the long trail...
https://bugzilla.redhat.com/show_bug.cgi?id=699040
The point is this: with a fairly complicated system,
some events need to happen, like loading modules,
before other events happen, like setting parameters of
those loaded modules.
Currently the ExecStart commands can start and end
before the ExecStartPre commands even start. This means
setting module parameters within the same service file
is impossible.
I would have thought intuitively that Pre commands would complete prior to
the ExecStart commands in the same service file. To not do so seems like a bug to
me.
Neil
What I think got him confused was the order of execution he saw in the
status output, which is reversed: the first command run is at the
bottom and the last command run is at the top..

A simple test case shows that the commands are correctly ordered and
executed sequentially, each one only after the previous one has finished...

[Unit]
Description=Test ordering + time of Exec

[Service]
Type=oneshot
ExecStartPre=/usr/bin/logger 1
ExecStartPre=/bin/sleep 1
ExecStartPre=/usr/bin/logger 2
ExecStartPre=/bin/sleep 2
ExecStart=/usr/bin/logger 3
ExecStart=/bin/sleep 3
ExecStart=/usr/bin/logger 4
ExecStart=/bin/sleep 4
ExecStartPost=/usr/bin/logger 5
ExecStartPost=/bin/sleep 5
ExecStartPost=/usr/bin/logger 6
ExecStartPost=/bin/sleep 6
ExecStartPost=/usr/bin/logger 7
RemainAfterExit=yes

[root@valhalla system]# grep logger /var/log/messages
Jul 8 11:37:30 valhalla logger: 1
Jul 8 11:37:31 valhalla logger: 2 <-- waited for 1s
Jul 8 11:37:33 valhalla logger: 3 <-- waited for 2s
Jul 8 11:37:36 valhalla logger: 4 <-- waited for 3s
Jul 8 11:37:40 valhalla logger: 5 <-- waited for 4s
Jul 8 11:37:45 valhalla logger: 6 <-- waited for 5s
Jul 8 11:37:51 valhalla logger: 7 <-- waited for 6s
[root@valhalla system]#

test.service
Loaded: loaded (/lib/systemd/system/test.service)
Active: active (exited) since Fri, 08 Jul 2011 11:54:13 +0000; 3s ago
Process: 8754 ExecStartPost=/usr/bin/logger 7 (code=exited, status=0/SUCCESS) <-- last
Process: 8752 ExecStartPost=/bin/sleep 6 (code=exited, status=0/SUCCESS)
Process: 8750 ExecStartPost=/usr/bin/logger 6 (code=exited, status=0/SUCCESS)
Process: 8748 ExecStartPost=/bin/sleep 5 (code=exited, status=0/SUCCESS)
Process: 8746 ExecStartPost=/usr/bin/logger 5 (code=exited, status=0/SUCCESS)
Process: 8743 ExecStart=/bin/sleep 4 (code=exited, status=0/SUCCESS)
Process: 8741 ExecStart=/usr/bin/logger 4 (code=exited, status=0/SUCCESS)
Process: 8739 ExecStart=/bin/sleep 3 (code=exited, status=0/SUCCESS)
Process: 8737 ExecStart=/usr/bin/logger 3 (code=exited, status=0/SUCCESS)
Process: 8735 ExecStartPre=/bin/sleep 2 (code=exited, status=0/SUCCESS)
Process: 8733 ExecStartPre=/usr/bin/logger 2 (code=exited, status=0/SUCCESS)
Process: 8731 ExecStartPre=/bin/sleep 1 (code=exited, status=0/SUCCESS)
Process: 8729 ExecStartPre=/usr/bin/logger 1 (code=exited, status=0/SUCCESS) <--- first
CGroup: name=systemd:/system/test.service
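
For completeness, the test unit above was simply dropped into
/lib/systemd/system/test.service (as the status output shows) and exercised
with nothing more than the standard commands:

systemctl daemon-reload
systemctl start test.service
systemctl status test.service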


JBG
Lennart Poettering
2011-07-08 12:24:54 UTC
Permalink
Post by Neil Horman
Post by Steve Dickson
Hello,
One of the maintainers of systemd and I have been working
together on trying to convert the NFS SysV init scripts
into systemd services. Here is the long trail...
https://bugzilla.redhat.com/show_bug.cgi?id=699040
The point is this: with a fairly complicated system,
some events need to happen, like loading modules,
before other events happen, like setting parameters of
those loaded modules.
Currently the ExecStart commands can start and end
before the ExecStartPre commands even start. This means
setting module parameters within the same service file
is impossible.
I would have thought intuitively that Pre commands would complete prior to
the ExecStart commands in the same service file. To not do so seems like a bug to
me.
Yes, correct. ExecStartPre= lines are executed in the same order as they
appear in the unit file, and one by one. There is never more than
one ExecStartPre= process running; if there ever were, that would be a bug.

Lennart

--
Lennart Poettering - Red Hat, Inc.
JB
2011-07-08 11:42:34 UTC
Permalink
...
Hi,

you are right about the synchronization problem within a service file exec
environment, at least as you showed it in that particular Bugzilla case.

Let's review it once again here (I am on F15), but I reserve the right to be
wrong as I have mostly ignored systemd for the time being :-)

Your nfslock.service file:

[Unit]
Description=NFS file locking service.
After=syslog.target network.target rpcbind.service
ConditionPathIsDirectory=/sys/module/sunrpc

[Service]
Type=forking
PIDFile=/var/run/sm-notify.pid
EnvironmentFile=/etc/sysconfig/nfsservices
ExecStartPre=/sbin/modprobe -q lockd $LOCKDARG
ExecStartPre=/sbin/rm -f /var/run/sm-notify.pid
ExecStart=/sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT
ExecStart=/sbin/sysctl -w fs.nfs.nlm_udpport=$LOCKD_UDPPORT
ExecStart=/sbin/rpc.statd $RPCSTATDARS
ExecStartPost=/sbin/sysctl -w fs.nfs.nlm_tcpport=0
ExecStartPost=/sbin/sysctl -w fs.nfs.nlm_udpport=0

[Install]
WantedBy=multi-user.target
Also=rpcbind.socket

or a variation of the above:

...
ExecStartPre=/sbin/modprobe -q lockd $LOCKDARG
...
ExecStartPre=/sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT
...

and the failure is:

$ systemctl status nfslock.service
nfslock.service - NFS file locking service.
Loaded: loaded (/lib/systemd/system/nfslock.service)
Active: failed since Thu, 07 Jul 2011 14:35:39 -0400; 28s ago
Process: 2186 ExecStartPre=/sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT (code=exited, status=255)
Process: 2184 ExecStartPre=/sbin/modprobe -q lockd $LOCKDARG (code=exited, status=0/SUCCESS)
$

Yes, there is a potential problem as jobs can be created for execution in one
order, but actually scheduled for execution or executed in a different order.

So, what are the solutions ?

- in this particular case
There is a service file for that purpose already ?
$ cat /lib/systemd/system/systemd-modules-load.service
...
[Unit]
Description=Load Kernel Modules
DefaultDependencies=no
Conflicts=shutdown.target
After=systemd-readahead-collect.service systemd-readahead-replay.service
Before=sysinit.target shutdown.target
ConditionDirectoryNotEmpty=/etc/modules-load.d

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/lib/systemd/systemd-modules-load
$

and this service file too ?
$ cat /lib/systemd/system/systemd-sysctl.service
...
[Unit]
Description=Apply Kernel Variables
DefaultDependencies=no
Conflicts=shutdown.target
After=systemd-readahead-collect.service systemd-readahead-replay.service
Before=sysinit.target shutdown.target
ConditionPathExists=|/etc/sysctl.conf
ConditionDirectoryNotEmpty=|/etc/sysctl.d

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/lib/systemd/systemd-sysctl
$

The sys admin would:
- add a required lockd.conf to /etc/modules-load.d/
- edit /etc/sysctl.conf (or add a file under /etc/sysctl.d/) with the required
parameter settings, for example:
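
(a sketch only; the module name is real, the port numbers are just placeholders
for whatever fixed ports the admin actually wants)

/etc/modules-load.d/lockd.conf:
lockd

/etc/sysctl.d/nfs-lockd.conf:
fs.nfs.nlm_tcpport = 32803
fs.nfs.nlm_udpport = 32769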

I guess you could modify the nfslock.service file like this:
...
Requires=systemd-modules-load.service systemd-sysctl.service
...
After=syslog.target network.target rpcbind.service systemd-modules-load.service systemd-sysctl.service
...

and without these:
...
ExecStartPre=/sbin/modprobe -q lockd $LOCKDARG
ExecStart=/sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT
ExecStart=/sbin/sysctl -w fs.nfs.nlm_udpport=$LOCKD_UDPPORT
...

I hope that the env variables used by modprobe and 'sysctl -w' here
($LOCKDARG, $LOCKD_TCPPORT, $LOCKD_UDPPORT) would be available in the
execution environments of those "required" "sub-services"
(see Requires=... and After=...) of nfslock.service, which provides
EnvironmentFile=/etc/sysconfig/nfsservices.

I hope that would work as desired, that is, synchronously.

- systemd has to manipulate the systemd.exec(5) scheduling environment to make
sure that sequential execution as written in a service file *means* synchronous
execution, with internal completion-code checking and error processing.
Warning:
manipulating the scheduling environment is always tricky and can cause
problems and complications, in particular in real-time environments, as
multithreaded programming shows.
- perhaps systemd needs some conditional constructs to check the completion
code of commands executed (sequentially) in a service file.
e.g.
ExecStartPre="/sbin/modprobe -q lockd $LOCKDARG",SetOnError=error1
ExecStartPre="/sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT",IfError=error1=0

This would be useful here, but it appears to duplicate options from
systemd.unit(5) and systemd.service(5).

Well, systemd was supposed to simplify those silly and primitive bash-based
SysV and LSB system init environments, cure the lack of parallelism, etc ...
Sorry, just having a bad hair day :-)

JB

Maria Callas - Bellini - Il Pirata - Part 2 (Final Scene)

"Jóhann B. Guðmundsson"
2011-07-08 12:12:40 UTC
Permalink
Post by JB
Hi,
you are right about the synchronization problem within a service file exec
environment, at least as you showed it in that particular Bugzilla case.
<snip>

No, he did not, and you are wrong: the problem has nothing to do with
ordering, timing or execution, but everything to do with this..
ExecStart=/sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT <----

Further explanation can be found on the bug report, and a correct working
native systemd nfslock service is attached to that report, along with the
necessary modification he needs to make to the sysconfig file he is using.

JBG
JB
2011-07-08 12:41:34 UTC
Permalink
Post by "Jóhann B. Guðmundsson"
Post by JB
Hi,
you are right about the synchronization problem within a service file exec
environment, at least as you showed it in that particular Bugzilla case.
<snip>
No, he did not, and you are wrong: the problem has nothing to do with
ordering, timing or execution, but everything to do with this..
ExecStart=/sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT <----
Further explanation can be found on the bug report, and a correct working
native systemd nfslock service is attached to that report, along with the
necessary modification he needs to make to the sysconfig file he is using.
JBG
Johann,
I think you are "fixing" it to work according to your world view :-)

$ man sysctl
...
SYNOPSIS
...
sysctl [-n] [-e] [-q] -w variable=value ...
...

So, if nfslock.service file contains:
...
ExecStart=/sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT
...

then this is the correct syntax.

If this does not work as processed by systemd, then that means a bug ...

JB
"Jóhann B. Guðmundsson"
2011-07-08 13:15:29 UTC
Permalink
Post by JB
Johann,
I think you are "fixing" it to work according to your world view :-)
Nope
Post by JB
$ man sysctl
...
SYNOPSIS
...
sysctl [-n] [-e] [-q] -w variable=value ...
...
...
ExecStart=/sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT
...
then this is the correct syntax.
If this does not work as processed by systemd, then that means a bug ...
Or more likely this means that the content of the $LOCKD_TCPPORT
variable is not being delivered to /sbin/sysctl -w fs.nfs.nlm_tcpport=.
For instance, if he left it hashed out in the sysconfig file, the
service would fail, since fs.nfs.nlm_tcpport= is expecting to get some
value, as I explained on the bug report in comment 43.
as I explained on the bug report in comment 43

I repeatedly asked to see that sysconfig file so I could diagnose the
problem further, but I got responses like..

"How does your /etc/sysconfig file look like basically
why does this matter? Its the default one that is installed..."

( Which eventually led me to lose my cool, because I really needed to
see what, if anything, he was passing to that command; something I'm not
particularly proud of )

So doesn't the default have the $LOCKD_TCPPORT line hashed out
(#$LOCKD_TCPPORT), which means you aren't passing anything to /sbin/sysctl
-w fs.nfs.nlm_tcpport=, which would then result in this..

[root@valhalla system]# /sbin/sysctl -w fs.nfs.nlm_tcpport=
error: Malformed setting "fs.nfs.nlm_tcpport="

And the service would fail to start..

Nothing to do with systemd, but everything to do with the command and/or
the sysconfig file.
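
In other words the sysconfig file has to actually set those variables,
something along these lines (just a sketch; the port numbers are placeholders,
not shipped defaults):

# /etc/sysconfig/nfsservices
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769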

Anyway, I fixed his service and it now works as he wanted, AFAICT, so
he should be happy and can ship the native systemd service file, which makes
me happy, and I can cross nfs off
https://fedoraproject.org/wiki/User:Johannbg/Features/SysVtoSystemd

JBG
JB
2011-07-08 14:06:10 UTC
Permalink
Post by "Jóhann B. Guðmundsson"
Post by JB
Johann,
I think you are "fixing" it to work according to your world view
Nope
Post by JB
$ man sysctl
...
SYNOPSIS
...
sysctl [-n] [-e] [-q] -w variable=value ...
...
...
ExecStart=/sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT
...
then this is the correct syntax.
If this does not work as processed by systemd, then that means a bug ...
Or more likely this means that the content of the $LOCKD_TCPPORT
variable is not being delivered to /sbin/sysctl -w fs.nfs.nlm_tcpport=.
For instance, if he left it hashed out in the sysconfig file, the
service would fail, since fs.nfs.nlm_tcpport= is expecting to get some
value, as I explained on the bug report in comment 43.
...
Johann,

we know that.

But the fact is that you do not understand how the systemd program should
work.

The nfslock.service file contains this:
EnvironmentFile=-/etc/sysconfig/nfsservices

Let's assume that LOCKD_TCPPORT is hashed out.

That means, in bash, you get "unset variable" value:
# echo $LOCKD_TCPPORT

#

Then this entry:
ExecStart=/sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT
is formatted as follows after substitution:
ExecStart=/sbin/sysctl -w fs.nfs.nlm_tcpport=

That's why you get, when executing manually in bash, this:
# /sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT
error: Malformed setting "fs.nfs.nlm_tcpport="

This entry is passed to systemd for execution, as is!
It is the responsibility of systemd to parse it, determine what kind of entry
it is (you could put any garbage in there, perhaps a virus, rootkit, etc), then,
if it is a valid executable entry, validate its syntax and arguments
(are you still here, Johann ... ?!), and if it is not valid the entry should not
be executed, that is, aborted or an error completion code returned to the calling env.

So, it is a systemd bug if it does not work ...

Btw, if you are able to pass your "version" of this entry to systemd's
execution environment and it is accepted despite a violation of sysctl syntax
and programming security, then God help us :-)

JB
Genes MailLists
2011-07-08 15:02:17 UTC
Permalink
Post by JB
This entry is passed to systemd for execution, as is!
It is the responsibility of systemd to parse it, determine what kind of entry
it is (you could put any garbage in there, perhaps a virus, rootkit, etc), then,
if it is a valid executable entry, validate its syntax and arguments
(are you still here, Johann ... ?!), and if it is not valid the entry should not
be executed, that is, aborted or an error completion code returned to the calling env.
That sounds extraordinarily unreasonable to me.

If the unit file is broken with bad input it is broken - it is not
systemd's job to detect/fix bad input to an application - it should do
what it is being asked to do and run it as per the inputs in the file.

If the application fails due to bad input then systemd should (and
presumably does) report the failure that the application itself reports
to systemd.

Systemd cannot possibly know what valid arguments and syntax are/might
be for every application existing and not yet written - and should not.
That's the job of each application itself - to exit with an error if the
inputs are bad and then systemd should report that error information.

systemd does enough - leave it alone :-)

gene
JB
2011-07-08 16:44:32 UTC
Permalink
...
Yes, it is a little bit harsh.

I was not precise in mentioning only systemd with regard to entry validation.
I should have said systemd (by its own code or that of any utilities) or
the called app itself.

Having said that, there is probably a need to pre-edit/pre-validate input to the
systemd execution environment with regard to what can be passed for execution
as a called app.
This stuff runs with root rights, so just passing WMD to it indiscriminately,
without any preconditions, would be dangerous.

JB
Lennart Poettering
2011-07-08 12:23:07 UTC
Permalink
Post by Steve Dickson
Hello,
One of the maintainers of systemd and I have been working
together on trying to convert the NFS SysV init scripts
into systemd services. Here is the long trail...
https://bugzilla.redhat.com/show_bug.cgi?id=699040
The point is this: with a fairly complicated system,
some events need to happen, like loading modules,
before other events happen, like setting parameters of
those loaded modules.
Currently the ExecStart commands can start and end
before the ExecStartPre commands even start. This means
setting module parameters within the same service file
is impossible.
I suggested that a boundary be set that all ExecStartPre
commands finish before any ExecStart commands start,
which would allow complicated subsystems, like NFS,
to start in a very stable way...
So is it wrong? Shouldn't there be a way to allow certain
parts of a system to synchronously configure some
things so other parts will come up as expected?
I am pretty sure systemd-devel is the better place to discuss this. But
here are a few comments after reading through the bug report:

Yes, we want people to place each service in an individual service
file. Only then can we supervise the services properly. It is possible
to spawn multiple high-level processes from a single service, but that
is mostly intended as a compat kludge to support SysV init scripts where
this is possible. In general, however, we want people to have a 1:1
mapping. Only then can we restart services if needed, catch
crashes, and show proper information about your service.

So, I'd suggest strongly not to try starting all services from a single
file. There's a reason why we explicitly forbid having more than one
ExecStart= in a unit file (except for Type=oneshot services).

Note that systemd unit files are not a programming language, and that
is for a reason. If you want shell, then use shell, but don't try to misuse
the purposefully simple service file syntax for that.

Also, you should not spawn forking processes in ExecStartPre=, that's
not what it is for. In fact I am pretty sure I will change systemd
to kill off all remaining processes after each ExecStartPre= command, now
that I am aware that people are misusing it like this.

ExecStartPre= is executed strictly in order, and in the order they
appear in the unit file.

I believe that services should be enabled/disabled at one place only,
and that is where you can use "systemctl enable" and "systemctl
disable". Adding a service-specific second-level of disabling in
/etc/sysconfig/ is confusing to the user, and not necessary. You'll do a
great service to your users if they can enable/disable all individual
services the same way. (And UI writers will be thankful for that too)

There's no point in ever unloading kernel modules, unless you do it for
debugging or testing reasons. No init script should ever include an
"rmmod" or "modprobe -r". And we try to make static module loading
unnecessary. There's nowadays auto-loading for most modules in one way
or another, using MODALIAS and similar constructs in the kernel
modules. If you really need to load a module statically, then please do
so via /etc/modules-load.d/ so that the user has centralized control on
this.

This is not going to work:

ExecStart=$FOO bar waldo

I.e. variable substitution for the binary path (it will work for the
arguments, just not for the binary path). This limitation is necessary
due to some SELinux inner workings in systemd. It's a limitation we
probably could fix if we wanted to, but tbh I find it quite questionable
if you spawn two completely different binaries and still call it by the
same service file.

In general if services use a lot of /etc/sysconfig/ settings then this
is probably an indication that the service code should be fixed and
should just get proper configuration files. If you need to interpret
these files, outside of the daemon, and the simple variable substitution
is not sufficient, and you need a programming language to interpret the
settings, then use a programming language, for example shell. You can
start shell scripts from systemd, like any other executable, and then
exec the real binary in the end. Of course, these solutions are somewhat
hacky, and I think in the long run binaries should be stand-alone and
should be able to read their own configuration themselves. But if you
really need a shell script, then go for it, stick it in
/usr/lib/<yourpackage>/scripts/ or so, and execute that from ExecStart=.
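
A rough sketch of that approach for the statd case (the script path, file name
and variable names below are purely illustrative, not anything that actually
ships today):

#!/bin/sh
# hypothetical /usr/lib/nfs-utils/scripts/start-statd
# read the legacy sysconfig file ourselves, then exec the real daemon
[ -f /etc/sysconfig/nfsservices ] && . /etc/sysconfig/nfsservices
[ -n "$LOCKD_TCPPORT" ] && /sbin/sysctl -w fs.nfs.nlm_tcpport="$LOCKD_TCPPORT"
[ -n "$LOCKD_UDPPORT" ] && /sbin/sysctl -w fs.nfs.nlm_udpport="$LOCKD_UDPPORT"
exec /sbin/rpc.statd $STATDARGS

and in the unit file:

ExecStart=/usr/lib/nfs-utils/scripts/start-statd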

I will probably blog about sysconfig in a systemd world soon.

Type=oneshot is for one-shot services, not continuously running
services. Type=oneshot is for stuff like fsck, which runs once at boot
and finishes, and only after it has finished will boot continue.

I am aware that some things I point out above are not how people used to
do things on SysV, but well, we want to get things right, and if you use
systemd natively, then we ask you to clean up things and not just
translate things 1:1.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Steve Dickson
2011-07-08 13:57:57 UTC
Permalink
Post by Lennart Poettering
Post by Steve Dickson
Hello,
One of the maintainers of systemd and I have been working
together on trying to convert the NFS SysV init scripts
into systemd services. Here is the long trail...
https://bugzilla.redhat.com/show_bug.cgi?id=699040
The point is this: with a fairly complicated system,
some events need to happen, like loading modules,
before other events happen, like setting parameters of
those loaded modules.
Currently the ExecStart commands can start and end
before the ExecStartPre commands even start. This means
setting module parameters within the same service file
is impossible.
I suggested that a boundary be set that all ExecStartPre
commands finish before any ExecStart commands start,
which would allow complicated subsystems, like NFS,
to start in a very stable way...
So is it wrong? Shouldn't there be a way to allow certain
parts of a system to synchronously configure some
things so other parts will come up as expected?
I am pretty sure systemd-devel is the better place to discuss this. But
I didn't know it existed...
Post by Lennart Poettering
Yes, we want people to place each service in an individual service
file. Only then can we supervise the services properly. It is possible
to spawn multiple high-level processes from a single service, but that
is mostly intended as a compat kludge to support SysV init scripts where
this is possible. In general, however, we want people to have a 1:1
mapping. Only then can we restart services if needed, catch
crashes, and show proper information about your service.
So, I'd suggest strongly not to try starting all services from a single
file. There's a reason why we explicitly forbid having more than one
ExecStart= in a unit file (except for Type=oneshot services).
Thank you for this explanation. It appears your definition of a
service might be a bit too simple for many subsystems. You seem
to think that one service will only start one system daemon, which
is not the case in the more complex subsystems.
Post by Lennart Poettering
Note that systemd unit files are not a programming language. And that
for a reason. If you want shell, then use shell, but don't try to misuse
the purposefully simple service file syntax for that.
Boy... What I would give for a shell!! 8-)
Post by Lennart Poettering
Also, you should not spawn forking processes in ExecStartPre=, that's
not what it is for. In fact I am pretty sure I will change systemd now
to kill off all remaining processes after each ExecStartPre= command now
that I am aware that people are misusing it like this.
If they are not for forking off processes, what are they for? It seems
quite logical that one would use a number of ExecStartPre= commands
to do some setup and then use ExecStart= to start the daemon.
Post by Lennart Poettering
ExecStartPre= is executed strictly in order, and in the order they
appear in the unit file.
True, but there is no synchronization. Meaning the first process can
end after the second process starts, which I think is a problem.
Post by Lennart Poettering
I believe that services should be enabled/disabled at one place only,
and that is where you can use "systemctl enable" and "systemctl
disable". Adding a service-specific second-level of disabling in
/etc/sysconfig/ is confusing to the user, and not necessary. You'll do a
great service to your users if they can enable/disable all individual
services the same way. (And UI writers will be thankful for that too)
In a simple subsystem maybe, but many subsystems have a large number
of configuration knobs that are needed so the subsystem can function
in a large number of different environments. So in the past it's
been very handy and straightforward to be able to tweak one file
to set configurations on different, but related, subsystems.
Post by Lennart Poettering
There's no point in ever unloading kernel modules, unless you do it for
debugging or testing reasons. No init script should ever include an
"rmmod" or "modprobe -r". And we try to make static module loading
unnecessary. There's nowadays auto-loading for most modules in one way
or another, using MODALIAS and similar constructs in the kernel
modules. If you really need to load a module statically, then please do
so via /etc/modules-load.d/ so that the user has centralized control on
this.
I agree unloading modules is unnecessary, but, IMHO, it's much
simpler to manage one startup file that loads multiple modules
than one startup script plus multiple modules-load.d files..
Post by Lennart Poettering
ExecStart=$FOO bar waldo
I.e. variable substitution for the binary path (it will work for the
arguments, just not for the binary path). This limitation is necessary
due to some SELinux innerworkings in systemd. It's a limitation we
probably could fix if we wanted to, but tbh I find it quite questionable
if you spawn two completely different binaries and still call it by the
same service file.
Spawning different binaries to do setup, like exporting directories
before a system daemon is started, seems like a very reasonable and expected
practice.
Post by Lennart Poettering
In general if services use a lot of /etc/sysconfig/ settings then this
is probably an indication that the service code should be fixed and
should just get proper configuration files.
I don't understand this generalization. For a very long time subsystems
have used /etc/sysconfig to store their configuration files, and now they
are broken because they do? Plus they are not "proper" configuration files?

Post by Lennart Poettering
If you need to interpret
these files, outside of the daemon, and the simple variable substitution
is not sufficient, and you need a programming language to interpret the
settings, then use a programming language, for example shell. You can
start shell scripts from systemd, like any other executable, and then
exec the real binary in the end. Of course, these solutions are somewhat
hacky, and I think in the long run binaries should be stand-alone and
should be able to read their own configuration themselves. But if you
really need a shell script, then go for it, stick it in
/usr/lib/<yourpackage>/scripts/ or so, and execute that from ExecStart=.
Perfect... I will use this "hacky" approach to do all the system setup and
configuration that is needed...

Thanks for taking the time....

steved.
Post by Lennart Poettering
I will probably blog about sysconfig in a systemd world soon.
Type=oneshot is for one shot services, not continously running
services. Type=oneshot is for stuff like fsck, that runs once at boot
and finishes, and only after it finished boot will continue.
I am aware that some things I point out above are not how people used to
do things on SysV, but well, we want to get things right, and if you use
systemd natively, then we ask you to clean up things and not just
translate things 1:1.
Lennart
Michal Schmidt
2011-07-08 14:57:31 UTC
Permalink
Post by Steve Dickson
Post by Lennart Poettering
So, I'd suggest strongly not to try starting all services from a single
file. There's a reason why we explicitly forbid having more than one
ExecStart= in a unit file (except for Type=oneshot services).
Thank you for this explanation. It appears your definition of a
service might be a bit too simple for many subsystems. You seem
to think that one service will only start one system daemon, which
is not the case in the more complex subsystems.
Each of the daemons can have its own unit file. We don't have to map the
old initscripts to systemd units 1:1.
Post by Steve Dickson
Post by Lennart Poettering
Also, you should not spawn forking processes in ExecStartPre=, that's
not what it is for. In fact I am pretty sure I will change systemd now
to kill off all remaining processes after each ExecStartPre= command now
that I am aware that people are misusing it like this.
If they are not for forking off processes, what are they for?
ExecStartPre= commands are for setup commands that do not leave forked
processes behind when they exit.
Post by Steve Dickson
It seems quite logical that one would use a number of ExecStartPre= commands
to do some set up and then use the ExecStart= to start the daemon.
Well, that's exactly how they're used. Do some preparation in
ExecStartPre and run the daemon in ExecStart.
Post by Steve Dickson
Post by Lennart Poettering
ExecStartPre= is executed strictly in order, and in the order they
appear in the unit file.
True, but there is no synchronization. Meaning the first process can
end after the second process starts, which I think is a problem.
This must be some misunderstanding, or you're seeing an unusual bug.
It just cannot happen. The second ExecStartPre of the unit is run after
the first one exits, not earlier.
Or do you mean synchronization among several units? Then you want to
specify ordering dependencies (Before, After).
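For example (a rough sketch only; the unit names and port value are invented
for illustration), the module/sysctl preparation could live in its own oneshot
unit that is ordered before the daemon's unit:

# nfs-lock-setup.service (hypothetical)
[Unit]
Description=Prepare lockd for statd
Before=nfslock.service

[Service]
Type=oneshot
ExecStart=/sbin/modprobe -q lockd
ExecStart=/sbin/sysctl -w fs.nfs.nlm_tcpport=32803

# and in nfslock.service:
[Unit]
Requires=nfs-lock-setup.service
After=nfs-lock-setup.service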

Michal
Steve Dickson
2011-07-10 03:31:35 UTC
Permalink
Post by Michal Schmidt
Post by Steve Dickson
Post by Lennart Poettering
So, I'd suggest strongly not to try starting all services from a single
file. There's a reason why we explicitly forbid having more than one
ExecStart= in a unit file (except for Type=oneshot services).
Thank you for this explanation. It appears your definition of a
service might be a bit too simple for many subsystems. You seem
to think that one service will only start one system daemon, which
is not the case in the more complex subsystems.
Each of the daemons can have its own unit file. We don't have to map the
old initscripts to systemd units 1:1.
So one service can not have multiple daemons?
Post by Michal Schmidt
Post by Steve Dickson
Post by Lennart Poettering
Also, you should not spawn forking processes in ExecStartPre=, that's
not what it is for. In fact I am pretty sure I will change systemd now
to kill off all remaining processes after each ExecStartPre= command now
that I am aware that people are misusing it like this.
If they are not for forking off processes, what are they for?
ExecStartPre= commands are for setup commands that do not leave forked
processes behind when they exit.
Meaning they are not for daemon processes, which does make sense...
Post by Michal Schmidt
Post by Steve Dickson
It seems quite logical that one would use a number of ExecStartPre= commands
to do some set up and then use the ExecStart= to start the daemon.
Well, that's exactly how they're used. Do some preparation in
ExecStartPre and run the daemon in ExecStart.
Post by Steve Dickson
Post by Lennart Poettering
ExecStartPre= is executed strictly in order, and in the order they
appear in the unit file.
True, but there is no synchronization. Meaning the first process can
end after the second process starts, which I think is a problem.
This must be some misunderstanding, or you're seeing an unusual bug.
It just cannot happen. The second ExecStartPre of the unit is run after
the first one exits, not earlier.
Or do you mean synchronization among several units? Then you want to
specify ordering dependencies (Before, After).
Please take a look at https://bugzilla.redhat.com/show_bug.cgi?id=699040#c35
It sure looks like one process is being started before another one ends...

steved.
Lennart Poettering
2011-07-10 22:27:57 UTC
Permalink
Post by Steve Dickson
Post by Michal Schmidt
Post by Steve Dickson
Post by Lennart Poettering
So, I'd suggest strongly not to try starting all services from a single
file. There's a reason why we explicitly forbid having more than one
ExecStart= in a unit file (except for Type=oneshot services).
Thank you for this explanation. It appears your definition of a
service might be a bit too simple for many subsystems. You seem
to think that one service will only start one system daemon, which
is not the case in the more complex subsystems.
Each of the daemons can have its own unit file. We don't have to map the
old initscripts to systemd units 1:1.
So one service can not have multiple daemons?
As mentioned earlier, it can, but we strongly advise you not to do this,
since it makes it hard to supervise and monitor a service, to restart it
when it crashes, to collect exit statuses, to run other services on
failure, to match up log messages, to even inform the user about most
basic service status. We support this for compat with SysV, not as a
feature you should actually use.
Post by Steve Dickson
Post by Michal Schmidt
This must be some misunderstanding, or you're seeing an unusual bug.
It just cannot happen. The second ExecStartPre of the unit is run after
the first one exits, not earlier.
Or do you mean synchronization among several units? Then you want to
specify ordering dependencies (Before, After).
Please take a look at https://bugzilla.redhat.com/show_bug.cgi?id=699040#c35
It sure looks like one process is being started before another one ends...
If you want to know the precise runtime of a process systemd started,
then use "systemctl show" on the service, which will show you a lot of
additional low-level data for the service; timestamps for each
process are included.
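
For instance (the exact property names depend on the systemd version, so
treat this only as a sketch):

# dump the low-level properties of the unit; the Exec* entries carry
# per-process start/exit information
systemctl show nfslock.service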

Lennart
--
Lennart Poettering - Red Hat, Inc.
JB
2011-07-10 23:31:56 UTC
Permalink
Post by Lennart Poettering
...
Post by Steve Dickson
So one service can not have multiple daemons?
As mentioned earlier, it can, but we strongly advise you not to do this,
since it makes it hard to supervise and monitor a service, to restart it
when it crashes, to collect exit statuses, to run other services on
failure, to match up log messages, to even inform the user about most
basic service status. We support this for compat with SysV, not as a
feature you should actually use.
...
That's strange. Perhaps I misunderstand you ...

Linux.FR has an interview with Lennart Poettering ...
http://linuxfr.org/nodes/86687/comments/1249943

"LinuxFr.org : Could explain a little bit your sentence: "Systemd is the first
Linux init system which allows you to properly kill a service" ?

Lennart : A service like Apache might have a number of subprocesses running,
like CGI scripts, which might have been written by 3rd parties and hence are
very lightly controlled only. This gives them the power to detach themselves
almost completely from the main Apache service, and this actually happens in
real life.
In systemd this is not possible anymore, and processes can no longer escape the
supervision. That enables us -- for the first time -- to fully kill a service
and all its helper processes, in a way that we can be sure no CGI script can
escape us.
For details see this blog story I posted a while back.
It's kinda ironic that the job of killing a service which appears to simple at
first is actually quite hard to get right, and only now we could fix this
properly. One might have hoped this issue would have been fixed much earlier on
Linux."

Then I would assume that systemd could do it, i.e. control multiple services
of any type (daemons, master/slave, or multithreaded) easily in the
environment it creates and fully controls ...

JB
Lennart Poettering
2011-07-11 01:20:56 UTC
Permalink
Post by JB
Post by Lennart Poettering
...
Post by Steve Dickson
So one service can not have multiple daemons?
As mentioned earlier, it can, but we strongly advise you not to do this,
since it makes it hard to supervise and monitor a service, to restart it
when it crashes, to collect exit statuses, to run other services on
failure, to match up log messages, to even inform the user about most
basic service status. We support this for compat with SysV, not as a
feature you should actually use.
...
That's strange. Perhaps I misunderstand you ...
Linux.FR has an interview with Lennart Poettering ...
http://linuxfr.org/nodes/86687/comments/1249943
"LinuxFr.org : Could explain a little bit your sentence: "Systemd is the first
Linux init system which allows you to properly kill a service" ?
Lennart : A service like Apache might have a number of subprocesses running,
like CGI scripts, which might have been written by 3rd parties and hence are
very lightly controlled only. This gives them the power to detach themselves
almost completely from the main Apache service, and this actually happens in
real life.
In systemd this is not possible anymore, and processes can no longer escape the
supervision. That enables us -- for the first time -- to fully kill a service
and all its helper processes, in a way that we can be sure no CGI script can
escape us.
For details see this blog story I posted a while back.
It's kinda ironic that the job of killing a service which appears to simple at
first is actually quite hard to get right, and only now we could fix this
properly. One might have hoped this issue would have been fixed much earlier on
Linux."
Then I would assume that systemd could do it, i.e. control multiple services
of any type (daemons, master/slave, or multithreaded) easily in the
environment it creates and fully controls ...
A service/daemon consists of one or more processes, with one designated
as main, which practically defines the runtime of the service. It's the
one traditionally storing its PID in the PID file.

In a systemd world we only want one PID file writing daemon per
systemd service. That daemon can spawn as many helper processes as it
wants of course, for example to implement CGI or to do worker processes
like udev, or anything like that.

Lennart
--
Lennart Poettering - Red Hat, Inc.
JB
2011-07-11 08:44:16 UTC
Permalink
...
I would like to verify systemd's parallelism (or should we say
concurrency, in systemd's case ?) claim.

Is any parallelization possible to achieve through configuration of service files ?
What would it look like ?

Let me take a shot at 2 examples of service files below.
Are they correct setup-wise ?
Will both examples be executed sequentially only ?

1.

main-service-1.service:
[Unit]
Description=Main service 1
Requires= ... sub-service-1.service sub-service-2.service
After= ... sub-service-1.service sub-service-2.service
...
[Service]
Type=forking <---------------------- any other type too ?
ExecStartPre=
ExecStartPre=
ExecStart=
ExecStart= /usr/sbin/some-service
ExecStartPost=
ExecStartPost=
...

sub-service-1.service:
[Unit]
Description=Sub service 1
...
[Service]
...

sub-service-2.service:
[Unit]
Description=Sub service 2
...
[Service]
...

2.

main-service-2.service:
[Unit]
Description=Main service 2
After= ...
...
[Service]
Type=forking <---------------------- any other type too ?
ExecStartPre= sub-service-1.service
ExecStartPre= sub-service-2.service
ExecStart=
ExecStart= /usr/sbin/some-service
ExecStartPost=
ExecStartPost=
...

sub-service-1.service:
[Unit]
Description=Sub service 1
...
[Service]
...

sub-service-2.service:
[Unit]
Description=Sub service 2
...
[Service]
...

What if sysadmin wants to execute them in parallel because she knows they
can be executed this way (no conflict and no races) ?
How, if by definition, systemd executes them sequentially only ?
Can they be grouped and execution-parallelized in the whole service file, or
at least in subgroups Pre-, regular, and Post- ?

The /etc/init.d/nfs script under the current SysV init system can start the
main service and include calling sub-services (each of which is a separate
service as well). Yes, they are executed sequentially.

Can that be done under systemd as well ?
Are the examples given above good for that ?
If yes, they would be examples of a sequential execution as under SysV init ?

So, where would be the promised parallelism in there, in the way the main and
sub services are executed ?

Can you give us a working example of a services setup (or something else) in
systemd where execution-parallelism would be present or at least theoretically
exploitable ?

JB
Michal Schmidt
2011-07-11 09:10:45 UTC
Permalink
Post by JB
Let me take a shot at 2 examples of service files below.
Are they correct setup-wise ?
Will both examples be executed sequentially only ?
1.
[Unit]
Description=Main service 1
Requires= ... sub-service-1.service sub-service-2.service
After= ... sub-service-1.service sub-service-2.service
...
[Service]
Type=forking <---------------------- any other type too ?
ExecStartPre=
ExecStartPre=
ExecStart=
ExecStart= /usr/sbin/some-service
Nitpick: You can have only one ExecStart= in a forking unit. Only
oneshot units can have more.
Post by JB
ExecStartPost=
ExecStartPost=
...
[Unit]
Description=Sub service 1
...
[Service]
...
[Unit]
Description=Sub service 2
...
[Service]
...
First, sub-service-1.service and sub-service-2.service will be started
in parallel. When they're running, main-service-1.service will be
started by processing its ExecStart* commands sequentially.
Post by JB
2.
[Unit]
Description=Main service 2
After= ...
...
[Service]
Type=forking <---------------------- any other type too ?
ExecStartPre= sub-service-1.service
ExecStartPre= sub-service-2.service
This is incorrect. ExecStartPre= expects a command to run, not a unit
name.
Post by JB
What if sysadmin wants to execute them in parallel because she knows
they can be executed this way (no conflict and no races) ?
How, if by definition, systemd executes them sequentially only ?
Can they be grouped and execution-parallelized in the whole service
file, or at least in subgroups Pre-, regular, and Post- ?
Parallelism in systemd happens between multiple units, but never between
ExecStart* commands of one unit.
Requesting parallelism within one unit seems like over-engineering to
me. You can always split your unit to smaller ones if you want
parallelism.
Post by JB
Can you give us a working example of a services setup (or something
else) in systemd where execution-parallelism would be present or at
least theoretically exploitable ?
Take a look at 'systemd-analyze plot' where you can clearly see
services starting in parallel. This blog post of Lennart's has an example:
http://0pointer.de/blog/projects/blame-game.html

Michal
JB
2011-07-11 11:08:48 UTC
Permalink
Post by Michal Schmidt
...
Post by JB
1.
[Unit]
Description=Main service 1
Requires= ... sub-service-1.service sub-service-2.service
After= ... sub-service-1.service sub-service-2.service
...
[Service]
Type=forking <---------------------- any other type too ?
ExecStartPre=
ExecStartPre=
ExecStart=
ExecStart= /usr/sbin/some-service
Nitpick: You can have only one ExecStart= in a forking unit. Only
oneshot units can have more.
Post by JB
ExecStartPost=
ExecStartPost=
...
[Unit]
Description=Sub service 1
...
[Service]
...
[Unit]
Description=Sub service 2
...
[Service]
...
First, sub-service-1.service and sub-service-2.service will be started
in parallel. When they're running, main-service-1.service will be
started by processing its ExecStart* commands sequentially.
...
OK.
Q: The sub-service-1.service and sub-service-2.service will be run as stand-
alone processes (of whatever kind they are: daemon, master/slave,
multithreaded), with no back references or dependencies of any kind
(parent-child like, shared resources, etc) to main-service-1.service ?
Post by Michal Schmidt
Post by JB
2.
[Unit]
Description=Main service 2
After= ...
...
[Service]
Type=forking <---------------------- any other type too ?
ExecStartPre= sub-service-1.service
ExecStartPre= sub-service-2.service
This is incorrect. ExecStartPre= expects a command to run, not a unit
name.
OK. Yes, I goofed here ... I meant some executable ...
Let me repeat example 2:

2.
main-service-2.service:
[Unit]
Description=Main service 2
After= ...
...
[Service]
Type=forking <---------------------- any other type too ?
ExecStartPre= exec /etc/init.d/sub-service-1
ExecStartPre= exec /etc/init.d/sub-service-2
ExecStart= /usr/sbin/some-service
ExecStartPost=
ExecStartPost=
...

Would the above be correct setup-wise ?
Are there any restrictions on those Pre (and Post) commands ?
Post by Michal Schmidt
Post by JB
What if sysadmin wants to execute them in parallel because she knows
they can be executed this way (no conflict and no races) ?
How, if by definition, systemd executes them sequentially only ?
Can they be grouped and execution-parallelized in the whole service
file, or at least in subgroups Pre-, regular, and Post- ?
Parallelism in systemd happens between multiple units, but never between
ExecStart* commands of one unit.
Requesting parallelism within one unit seems like over-engineering to
me. You can always split your unit to smaller ones if you want
parallelism.
But this is what Steve, I believe, wants to do with nfs (to have a bunch of
services started from the main one, as under current SysV init system, so his
users are not confused by the startup of all these individual service files).
Post by Michal Schmidt
...
Post by JB
Can you give us a working example of a services setup (or something
else) in systemd where execution-parallelism would be present or at
least theoretically exploitable ?
Take a look at 'systemd-analyze plot' where you can clearly see
services starting in parallel.
Well, I wish I could (I am on F15) ...

$ systemd-analyze plot
<?xml version="1.0" encoding="UTF-8"?>
<svg xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink" width="1793pt" height="2290pt"
viewBox="0 0 1793 2290" version="1.1">
<defs>
<g>
<symbol overflow="visible" id="glyph0-0">
<path style="stroke:none;" d="M 3.8125 -7.96875 C 3.207031 -7.96875 2.746094 -7
...
<use xlink:href="#glyph0-32" x="493.898438" y="2211.25"/>
</g>
</g>
</svg>
$
$ systemd-analyze plot |less
Traceback (most recent call last):
File "/usr/bin/systemd-analyze", line 221, in <module>
surface.finish()
IOError: [Errno 32] Broken pipe
$
Post by Michal Schmidt
http://0pointer.de/blog/projects/blame-game.html
Michal
Thanks.
JB
Michal Schmidt
2011-07-11 12:11:07 UTC
Permalink
Post by JB
Post by Michal Schmidt
First, sub-service-1.service and sub-service-2.service will be started
in parallel. When they're running, main-service-1.service will be
started by processing its ExecStart* commands sequentially.
...
OK.
Q: The sub-service-1.service and sub-service-2.service will be run as stand-
alone processes (of whatever kind they are: daemon, master/slave,
multithreaded),
Yes.
Post by JB
with no back references or dependencies of any kind
(parent-child like, shared resources, etc) to main-service-1.service ?
There is no parent-child relationship between services. The parent of
all services is systemd (PID 1). Services do not spawn other services
directly. Only systemd spawns services.

About "dependencies of any kind": In the example there are requirement
and ordering dependencies between main-service-1.service and the
sub-services (that's what Requires= and After= directives specify).
systemd maintains the graph of these dependencies in its data structures.
Post by JB
2.
[Unit]
Description=Main service 2
After= ...
...
[Service]
Type=forking<---------------------- any other type too ?
ExecStartPre= exec /etc/init.d/sub-service-1
ExecStartPre= exec /etc/init.d/sub-service-2
ExecStart= /usr/sbin/some-service
ExecStartPost=
ExecStartPost=
...
Would the above be correct setup-wise ?
Synchronous starting of other services may lead to deadlocks in some
cases (e.g. https://bugzilla.redhat.com/show_bug.cgi?id=690177).
But even if you avoid the deadlock, this unit file is still horrible.
Using systemd's dependency mechanisms (Requires, Wants, After, ...) is
definitely cleaner.
Post by JB
Are there any restrictions on those Pre (and Post) commands ?
One limitation was already mentioned somewhere in this thread - these
commands must not fork off daemons.
Post by JB
Post by Michal Schmidt
Parallelism in systemd happens between multiple units, but never between
ExecStart* commands of one unit.
Requesting parallelism within one unit seems like over-engineering to
me. You can always split your unit to smaller ones if you want
parallelism.
But this is what Steve, I believe, wants to do with nfs (to have a bunch of
services started from the main one, as under current SysV init system, so his
users are not confused by the startup of all these individual service files).
I proposed a way to do this cleanly using systemd targets elsewhere in
this discussion.
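Roughly along these lines (a sketch only; the target and unit names are made
up for illustration):

# nfs.target (hypothetical)
[Unit]
Description=NFS services
Wants=nfslock.service nfs-server.service
After=nfslock.service nfs-server.service

The admin then still has a single thing to start:

systemctl start nfs.target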
Post by JB
Post by Michal Schmidt
Take a look at 'systemd-analyze plot' where you can clearly see
services starting in parallel.
Well, I wish I could (I am on F15) ...
$ systemd-analyze plot
<?xml version="1.0" encoding="UTF-8"?>
[...]

Store it to an SVG file and then view it.

Michal
JB
2011-07-11 12:56:57 UTC
Permalink
Post by Michal Schmidt
...
Post by JB
2.
[Unit]
Description=Main service 2
After= ...
...
[Service]
Type=forking<---------------------- any other type too ?
ExecStartPre= exec /etc/init.d/sub-service-1
ExecStartPre= exec /etc/init.d/sub-service-2
ExecStart= /usr/sbin/some-service
ExecStartPost=
ExecStartPost=
...
Are there any restrictions on those Pre (and Post) commands ?
One limitation was already mentioned somewhere in this thread - these
commands must not fork off daemons.
This is interesting. Or perhaps I read too much into your above statement ?
We know already that ExecStartPre must contain a command to be executed.
Post by Michal Schmidt
Post by JB
ExecStartPre= exec /etc/init.d/sub-service-1
Note the 'exec' command, which means "Replace the shell with the given
command." with immediate return.
How does systemd know what's in the "/etc/init.d/sub-service-1" process, to be
able to figure out if any daemon is to be forked off ?
Post by Michal Schmidt
...
Post by JB
Post by Michal Schmidt
Parallelism in systemd happens between multiple units, but never between
ExecStart* commands of one unit.
Requesting parallelism within one unit seems like over-engineering to
me. You can always split your unit to smaller ones if you want
parallelism.
But this is what Steve, I believe, wants to do with nfs (to have a bunch of
services started from the main one, as under current SysV init system, so
his users are not confused by the startup of all these individual service
files).
I proposed a way to do this cleanly using systemd targets elsewhere in
this discussion.
Or would my example 1 serve him too ?
Post by Michal Schmidt
...
JB
Michal Schmidt
2011-07-11 13:02:12 UTC
Permalink
Post by JB
Post by JB
ExecStartPre= exec /etc/init.d/sub-service-1
Note the 'exec' command, which means "Replace the shell with the given
command." with immediate return.
How does systemd know what's in the "/etc/init.d/sub-service-1" process, to be
able to figure out if any daemon is to be forked off ?
Of course it does not know that beforehand.
It just hates it when it happens.
Lennart Poettering
2011-07-08 14:57:40 UTC
Permalink
Post by Steve Dickson
Post by Lennart Poettering
I am pretty sure systemd-devel is the better place to discuss this. But
I didn't know it existed...
That would have been the first big free software project without any
mailing list, wouldn't it?
Post by Steve Dickson
Post by Lennart Poettering
Yes, we want people to place each service in an individual service
file. Only then can we supervise the services properly. It is possible
to spawn multiple high-level processes from a single service, but that
is mostly intended as a compat kludge to support SysV init scripts where
this is possible. In general, however, we want people to have a 1:1
mapping. Only then can we restart services if needed, catch
crashes, and show proper information about your service.
So, I'd suggest strongly not to try starting all services from a single
file. There's a reason why we explicitly forbid having more than one
ExecStart= in a unit file (except for Type=oneshot services).
Thank you for this explanation. It appears your definition of a
service might be a bit too simple for many subsystems. You seem
to think that one service will only start one system daemon, which
is not the case in the more complex subsystems.
Well, I have my doubts about the "many". In fact, I am aware of only one other
occasion where people were wondering about this. And often the problems
are only perceived problems, because people try to translate their sysv
scripts 1:1 and are unwilling to normalize their scripts while doing
so.
Post by Steve Dickson
Post by Lennart Poettering
Also, you should not spawn forking processes in ExecStartPre=, that's
not what it is for. In fact I am pretty sure I will change systemd now
to kill off all remaining processes after each ExecStartPre= command now
that I am aware that people are misusing it like this.
If they are not for forking off processes, what are they for? It seems
quite logical that one would use a number of ExecStartPre= commands
to do some set up and then use the ExecStart= to start the daemon.
This is a misunderstanding. What I tried to say is that they should not
be used to spawn off processes that fork and stay in the
background. Processes in ExecStartPre= are executed synchronously: we
wait for them to finish before we start the next one, and they should
not try to play games with that and spawn stuff in the
background. That's what I meant by saying "you should not spawn
*forking* processes in ExecStartPre=".
Post by Steve Dickson
Post by Lennart Poettering
ExecStartPre= is executed strictly in order, and in the order they
appear in the unit file.
True, but there is no synchronization. Meaning first process can
end after the second process, which think is a problem.
There is synchronization. As I made clear a couple of times, we never
start more than one ExecStartPre= process at a time. We start the next
one only after the previous one finished.
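In unit-file terms that means a layout like the following runs each setup
command to completion, in file order, before the daemon itself is spawned
(a minimal sketch; the module name, sysctl key and daemon path are only
placeholders):

[Service]
Type=forking
ExecStartPre=/sbin/modprobe -q somemodule
ExecStartPre=/sbin/sysctl -w some.module.parameter=1
ExecStart=/usr/sbin/somedaemon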
Post by Steve Dickson
Post by Lennart Poettering
I believe that services should be enabled/disabled at one place only,
and that is where you can use "systemctl enable" and "systemctl
disable". Adding a service-specific second-level of disabling in
/etc/sysconfig/ is confusing to the user, and not necessary. You'll do a
great service to your users if they can enable/disable all individual
services the same way. (And UI writers will be thankful for that too)
In a simple subsystem maybe, but many subsystems have a large number
of configuration knobs that are needed so the subsystem can function
in a large number of different environments. So in the past its
been very handy and straightforward to be able to tweak one file
to set configurations on different, but related, subsystems.
Well, nothing stops you from reading the same configuration file from
multiple services. We do that all the time, for example for
/etc/resolv.conf.
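A minimal sketch of that sharing, with two hypothetical units pulling their
settings from one file (EnvironmentFile= is the systemd directive for this;
the file and names are invented):

# /etc/sysconfig/foo
FOO_OPTS="-p 2049"

# foo-server.service and foo-helper.service both contain:
[Service]
EnvironmentFile=-/etc/sysconfig/foo
ExecStart=/usr/sbin/food $FOO_OPTS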
Post by Steve Dickson
Post by Lennart Poettering
ExecStart=$FOO bar waldo
I.e. variable substitution for the binary path (it will work for the
arguments, just not for the binary path). This limitation is necessary
due to some SELinux innerworkings in systemd. It's a limitation we
probably could fix if we wanted to, but tbh I find it quite questionable
if you spawn two completely different binaries and still call it by the
same service file.
Spawning different binaries to do set up, like exporting directories
before the a system daemon is started seems very reasonable and expected
practice.
Hmm? You can start as many binaries in ExecStartPre= as you wish, one
after the other, but we don't support that you can change the path of
them dynamically with an env var. env vars are only expanded for
arguments, not for the binary path itself.
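In other words (the variable and daemon names below are placeholders):

# works: the variable is expanded as arguments
ExecStart=/usr/sbin/mydaemon $DAEMON_OPTS

# not supported: the variable as the binary path itself
ExecStart=$DAEMON_BIN --foreground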
Post by Steve Dickson
Post by Lennart Poettering
In general if services use a lot of /etc/sysconfig/ settings then this
is probably an indication that the service code should be fixed and
should just get proper configuration files.
I don't understand this generalization. For a very long time subsystems
have used /etc/sysconfig to store there configuration files and now they
are broken because they do? Plus they are not "proper" configuration files?
People have done lots of things for a long time, that doesn't make it
the most elegant, best and simplest solution.

By proper configuration files I mean configuration files read by the
daemons themselves, instead of files that are actually a script that is
interpreted by a programming language and some more scripts interfacing
with that.

Or in other words: configuration via command line arguments or
environment variables sucks. Much nicer are proper configuration
files. And faking config files by parsing them in shell and then passing
them off to daemons via env vars and cmdline args is not ideal.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Steve Dickson
2011-07-10 03:32:15 UTC
Permalink
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
I am pretty sure systemd-devel is the better place to discuss this. But
I didn't know it existed...
That would have been the first big free software project without any
mailing list, wouldn't it?
Point! :-)
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
Yes, we want that people place each service in an individual service
file. Only then we can supervise the services properly. It is possible
to spawn multiple high-level processes from a single service, but that
is mostly intended as compat kludge to support SysV init scripts where
this is possible. In general however, we want people to do have a 1:1
mapping. Only then we can restart services if needed, we can catch
crashes, and show proper information about your service.
So, I'd suggest strongly not to try starting all services from a single
file. There's a reason why we explicitly forbid having more than one
ExecStart= in a unit file (except for Type=oneshot services).
Thank you for this explanation. It appears your definition of a
service might be a bit too simple for many subsystems. You seem
to think that on service will only start one system daemon, which
is not the case in the more complex subsystems.
Well, I doubt about the "many". In fact, I am aware of only one other
occasion where people were wondering about this. And often the problems
are only perceived problems, because people try to translate their sysv
scripts 1:1 and are unwilling to normalize their scripts while doing
so.
So basically what you are saying is a service can never consist of
more than one system daemon.
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
Also, you should not spawn forking processes in ExecStartPre=, that's
not what it is for. In fact I am pretty sure I will change systemd now
to kill off all remaining processes after each ExecStartPre= command now
that I am aware that people are misusing it like this.
If they are not for forking off process, what are the for? It seems
quite logical that one would use a number of ExecStartPre= commands
to do some set up and then use the ExecStart= to start the daemon.
This is a misunderstanding. What I tried to say is that they should not
be used to spawn off processes that fork and stay in the
background. Processes in ExecStartPre= are executed synchronously: we
wait for them to finish before we start the next one, and they should
not try to play games with that and spawn stuff in the
background. That's what I meant by saying "you should not spawn
*forking* processes in ExecStartPre=".
Maybe a better way to say it is that ExecStartPre= should not be used for
daemon processes?
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
ExecStartPre= is executed strictly in order, and in the order they
appear in the unit file.
True, but there is no synchronization. Meaning first process can
end after the second process, which think is a problem.
There is synchronization. As I made clear a couple of times, we never
start more than one ExecStartPre= process at a time. We start the next
one only after the previous one finished.
Looking at https://bugzilla.redhat.com/show_bug.cgi?id=699040#c35, it appears
this is not the case...
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
I believe that services should be enabled/disabled at one place only,
and that is where you can use "systemctl enable" and "systemctl
disable". Adding a service-specific second-level of disabling in
/etc/sysconfig/ is confusing to the user, and not necessary. You'll do a
great service to your users if they can enable/disable all individual
services the same way. (And UI writers will be thankful for that too)
In a simple subsystem maybe, but many subsystems have a large number
of configuration knobs that are needed so the subsystem can function
in a large number of different environments. So in the past its
been very handy and straightforward to be able to tweak one file
to set configurations on different, but related, subsystems.
Well, nothing stops you from reading the same configuration file from
multiple services. We do that all the time, for example for
/etc/resolv.conf.
True... but the point is that before systemd, an admin could tweak one
/etc/sysconfig file which defined which daemons were started and
how they were configured... Unless I'm missing something, that
is no longer the case... The admin will have to explicitly define
each and every daemon they need to run and how they are configured...
all by hand...
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
ExecStart=$FOO bar waldo
I.e. variable substitution for the binary path (it will work for the
arguments, just not for the binary path). This limitation is necessary
due to some SELinux innerworkings in systemd. It's a limitation we
probably could fix if we wanted to, but tbh I find it quite questionable
if you spawn two completely different binaries and still call it by the
same service file.
Spawning different binaries to do set up, like exporting directories
before the a system daemon is started seems very reasonable and expected
practice.
Hmm? You can start as many binaries in ExecStartPre= as you wish, one
after the other, but we don't support that you can change the path of
them dynamically with an env var. env vars are only expanded for
arguments, not for the binary path itself.
Ok... I did misunderstand what you were saying...
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
In general if services use a lot of /etc/sysconfig/ settings then this
is probably an indication that the service code should be fixed and
should just get proper configuration files.
I don't understand this generalization. For a very long time subsystems
have used /etc/sysconfig to store there configuration files and now they
are broken because they do? Plus they are not "proper" configuration files?
People have done lots of things for a long time, that doesn't make it
the most elegant, best and simplest solution.
By proper configuration files I mean configuration files read by the
daemons themselves, instead of files that are actually a script that is
interpreted by a programming language and some more scripts interfacing
with that.
Or in other words: configuration via command line arguments or
environment variables sucks. Much nicer are proper configuration
files. And faking config files by parsing them in shell and then passing
them off to daemons via env vars and cmdline args is not ideal.
So basically what you are saying is that the way system daemons have been
started for the last.. say... twenty years or so is completely
wrong and systemd is here to change that! 8-) The point being...

That is your opinion, which may or may not be held by the rest
of the community... So please recognize it as such and please
be willing to accept dissenting views....

steved.
Jon Masters
2011-07-10 09:46:18 UTC
Permalink
Post by Lennart Poettering
Or in other words: configuration via command line arguments or
environment variables sucks.
I disagree. It doesn't suck. It's the way UNIX and Linux have done this
for dozens of years, and it's the way countless sysadmins know and love.
"Sucks" might be true from the point of view of "hey look at this great
thing I just designed", but it's very much not true from the point of
view of the sysadmin working on the weekend who's just thinking "gee,
what the heck is going on, why won't this just work how it has done for
the past twenty years?". In other words "suck" depends on viewpoint.

Jon.
Matthew Garrett
2011-07-10 15:32:58 UTC
Permalink
Post by Jon Masters
I disagree. It doesn't suck. It's the way UNIX and Linux have done this
for dozens of years, and it's the way countless sysadmins know and love.
"Sucks" might be true from the point of view of "hey look at this great
thing I just designed", but it's very much not true from the point of
view of the sysadmin working on the weekend who's just thinking "gee,
what the heck is going on, why won't this just work how it has done for
the past twenty years?". In other words "suck" depends on viewpoint.
The big kernel lock doesn't suck. It's the way SMP UNIX did things for
dozens of years, and it's the way countless kernel hackers know and
love. "Sucks" might be true from the point of view of "hey look at this
great fine-grained locking I just designed", but it's very much not true
from the point of view of the driver author working on the weekend who's just
thinking "gee, what the heck is going on, why won't this just work how
it has done for the past twenty years?". In other words "suck" depends
on viewpoint.

Improvement means change, and change will inevitably upset some people
who would prefer to do things in exactly the same way that they always
have done. If we assert that all viewpoints are equally valid then every
single thing we've done in Fedora sucks. In this case there are sound
technical arguments against configuration by command line argument or
environment variable (just like there are against the BKL), and while we
should obviously attempt to make any transition as painless as possible
for administrators, that doesn't serve as a counter to those technical
arguments. They suck. Unarguably.
--
Matthew Garrett | mjg59 at srcf.ucam.org
Chris Adams
2011-07-10 16:49:19 UTC
Permalink
Post by Matthew Garrett
In this case there are sound
technical arguments against configuration by command line argument or
environment variable
I haven't seen any, just statements that they are somehow "bad" and the
new way is "better".

Command line arguments and/or environment variables allow script-based
startup to adapt to current conditions without having to edit a
configuration file. Now maybe you could argue that every program should
figure out relevant things for itself, but here in the real world, that
will never be the case.

The environment-variable type files in /etc/sysconfig also tend to be
much easier to modify from a script than having to parse and edit random
types of configuration files (since everybody seems to know how to do
config files better than anybody else and their way is just a little bit
different).
--
Chris Adams <cmadams at hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
Matthew Garrett
2011-07-10 17:49:21 UTC
Permalink
Post by Chris Adams
Command line arguments and/or environment variables allow script-based
startup to adapt to current conditions without having to edit a
configuration file. Now maybe you could argue that every program should
figure out relevant things for itself, but here in the real world, that
will never be the case.
The suggestion isn't that having the options is wrong, it's that having
them as the primary means of configuration is poor design. If your
entire configuration takes the form of a shell script that constructs a
set of command line options then you've increased fragility for no
benefit. Having a proper configuration file and allowing admins to
override specific aspects of that from the command line isn't a problem.
--
Matthew Garrett | mjg59 at srcf.ucam.org
Steve Clark
2011-07-10 20:33:35 UTC
Permalink
Post by Matthew Garrett
Post by Chris Adams
Command line arguments and/or environment variables allow script-based
startup to adapt to current conditions without having to edit a
configuration file. Now maybe you could argue that every program should
figure out relevant things for itself, but here in the real world, that
will never be the case.
The suggestion isn't that having the options is wrong, it's that having
them as the primary means of configuration is poor design. If your
entire configuration takes the form of a shell script that constructs a
set of command line options then you've increased fragility for no
benefit. Having a proper configuration file and allowing admins to
override specific aspects of that from the command line isn't a problem.
This is just your opinion - where else is this mantra preached?
--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.clark at netwolves.com
http://www.netwolves.com
Matthew Garrett
2011-07-10 21:13:59 UTC
Permalink
Post by Steve Clark
Post by Matthew Garrett
The suggestion isn't that having the options is wrong, it's that having
them as the primary means of configuration is poor design. If your
entire configuration takes the form of a shell script that constructs a
set of command line options then you've increased fragility for no
benefit. Having a proper configuration file and allowing admins to
override specific aspects of that from the command line isn't a problem.
This is just your opinion - where else is this mantra preached?
These scripts don't sanitise input beforehand. What happens if I'm
logged in as root, change IFS and then do /etc/init.d/nfs restart? Using
shell scripts for this is just a bad idea.
--
Matthew Garrett | mjg59 at srcf.ucam.org
Chris Adams
2011-07-10 21:19:05 UTC
Permalink
Post by Matthew Garrett
These scripts don't sanitise input beforehand. What happens if I'm
logged in as root, change IFS and then do /etc/init.d/nfs restart? Using
shell scripts for this is just a bad idea.
Please cite how many BZs there have been about root doing this. Oh
wait, there aren't any, are there?

If root decides to shoot themselves in the foot, they're going to find a
way. That is again an argument at the extreme theoretical end of
possibility and not the common case. You are just making !$#@ up now;
you have yet to come up with legitimate technical arguments.
--
Chris Adams <cmadams at hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
Matthew Garrett
2011-07-10 21:37:08 UTC
Permalink
Post by Chris Adams
If root decides to shoot themselves in the foot, they're going to find a
way. That is again an argument at the extreme theoretical end of
you have yet to come up with legitimate technical arguments.
Solutions that make it difficult for root to shoot themselves in the
foot are better than solutions that don't. Make everything possible, but
make dangerous things harder than safe things. That's entirely in line
with Unix philosophy.
--
Matthew Garrett | mjg59 at srcf.ucam.org
Chris Adams
2011-07-10 22:19:03 UTC
Permalink
Post by Matthew Garrett
Solutions that make it difficult for root to shoot themselves in the
foot are better than solutions that don't. Make everything possible, but
make dangerous things harder than safe things. That's entirely in line
with Unix philosophy.
When such "solutions" impact ease of config, scriptability,
customizability (script bits are customizable to handle configs that the
init-script writer didn't consider), etc., they are not better than
before.
--
Chris Adams <cmadams at hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
Andreas Schwab
2011-07-11 06:55:40 UTC
Permalink
Post by Matthew Garrett
These scripts don't sanitise input beforehand. What happens if I'm
logged in as root, change IFS and then do /etc/init.d/nfs restart?
Nothing, because the shell always sets IFS to the default on startup.

Andreas.
--
Andreas Schwab, schwab at redhat.com
GPG Key fingerprint = D4E8 DBE3 3813 BB5D FA84 5EC7 45C6 250E 6F00 984E
"And now for something completely different."
Matthew Garrett
2011-07-11 11:47:03 UTC
Permalink
Post by Andreas Schwab
Post by Matthew Garrett
These scripts don't sanitise input beforehand. What happens if I'm
logged in as root, change IFS and then do /etc/init.d/nfs restart?
Nothing, because the shell always sets IFS to the default on startup.
Yes, that does turn out to be a spectacularly poor example. Sorry!
--
Matthew Garrett | mjg59 at srcf.ucam.org
Chris Adams
2011-07-10 20:56:25 UTC
Permalink
Post by Matthew Garrett
The suggestion isn't that having the options is wrong
Well, that's what you said before (conveniently snipped from your
reply). You compared CLI args/env vars to the BKL as something to be
Post by Matthew Garrett
In this case there are sound
technical arguments against configuration by command line argument or
environment variable
You have still failed to enumerate even one of the "sound technical
arguments".
Post by Matthew Garrett
it's that having
them as the primary means of configuration is poor design. If your
entire configuration takes the form of a shell script that constructs a
set of command line options then you've increased fragility for no
benefit. Having a proper configuration file and allowing admins to
override specific aspects of that from the command line isn't a problem.
You are moving the target (to a worst-case example) and still not
winning your argument.

This is more than theoretical to me; a small package I maintain is one
example of a command-line configured daemon. The shmpps daemon is a
tiny little daemon that reads a timing pulse-per-second and updates a
shared memory segment. It uses a few command line arguments to set the
source port/type and shared memory segment destination; right now, that
is done for the init script by a file in /etc/sysconfig.
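The pattern in question looks roughly like this (the variable and option
names are invented for illustration, not shmpps's actual interface):

# /etc/sysconfig/shmpps
SHMPPS_OPTIONS="-d /dev/ttyS0 -u 2"

# in the init script
. /etc/sysconfig/shmpps
daemon /usr/sbin/shmpps $SHMPPS_OPTIONS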

This is a small, light-weight daemon, and doesn't need a configuration
file parser. This is a valid way that Unix daemons have run for
decades, and you are saying that should be removed. I guess every small
daemon now needs to include its own config file parser, replacing the
already-existing getopt() call? How is this "better"?
--
Chris Adams <cmadams at hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
Matthew Garrett
2011-07-10 21:35:58 UTC
Permalink
Post by Chris Adams
Post by Matthew Garrett
The suggestion isn't that having the options is wrong
Well, that's what you said before (conveniently snipped from your
reply). You compared CLI args/env vars to the BKL as something to be
Post by Matthew Garrett
In this case there are sound
technical arguments against configuration by command line argument or
environment variable
You have still failed to enumerate even one of the "sound technical
arguments".
"Configuration by", not overriding configuration. It's a mistake to have
your daemon's configuration be handled by a shell script that's sourced
into the existing environment. It's reasonable for an admin to override
configuration on an as-needed basis.
Post by Chris Adams
Post by Matthew Garrett
it's that having
them as the primary means of configuration is poor design. If your
entire configuration takes the form of a shell script that constructs a
set of command line options then you've increased fragility for no
benefit. Having a proper configuration file and allowing admins to
override specific aspects of that from the command line isn't a problem.
You are moving the target (to a worst-case example) and still not
winning your argument.
The discussion was about having significant quantities of configuration
in /etc/sysconfig in the form of shell fragments.
Post by Chris Adams
This is more than theoretical to me; a small package I maintain is one
example of a command-line configured daemon. The shmpps daemon is a
tiny little daemon that reads a timing pulse-per-second and updates a
shared memory segment. It uses a few command line arguments to set the
source port/type and shared memory segment destination; right now, that
is done for the init script by a file in /etc/sysconfig.
And that's a bad thing to do. You're sourcing your configuration in an
unsanitised environment. There's a huge number of ways that this can go
wrong depending on the admin's local configuration, which is clearly
undesirable.
Post by Chris Adams
This is a small, light-weight daemon, and doesn't need a configuration
file parser. This is a valid way that Unix daemons have run for
decades, and you are saying that should be removed. I guess every small
daemon now needs to include its own config file parser, replacing the
already-existing getopt() call? How is this "better"?
Nobody's said it should be removed. Lennart's said that it sucks, and I
agree. But all of this would still be better with a simple config parser
that's shared between any daemons that want it.
--
Matthew Garrett | mjg59 at srcf.ucam.org
Steve Clark
2011-07-10 21:51:28 UTC
Permalink
Post by Matthew Garrett
Post by Chris Adams
Post by Matthew Garrett
The suggestion isn't that having the options is wrong
Well, that's what you said before (conveniently snipped from your
reply). You compared CLI args/env vars to the BKL as something to be
Post by Matthew Garrett
In this case there are sound
technical arguments against configuration by command line argument or
environment variable
You have still failed to enumerate even one of the "sound technical
arguments".
"Configuration by", not overriding configuration. It's a mistake to have
Says you. It has seemed to work OK for the last 25+ years. I don't ever remember having a problem
in my 25+ years of working with UNIX/LINUX with the existing initscripts. Where are the BZ reports
that we are fixing with systemd?
Post by Matthew Garrett
your daemon's configuration be handled by a shell script that's sourced
into existing environment. It's reasonable for an admin to override
configuration on an as-needed basis.
Post by Chris Adams
Post by Matthew Garrett
it's that having
them as the primary means of configuration is poor design. If your
entire configuration takes the form of a shell script that constructs a
set of command line options then you've increased fragility for no
benefit. Having a proper configuration file and allowing admins to
override specific aspects of that from the command line isn't a problem.
You are moving the target (to a worst-case example) and still not
winning your argument.
The discussion was about having significant quantities of configuration
in /etc/sysconf in the form of shell fragments.
Post by Chris Adams
This is more than theoretical to me; a small package I maintain is one
example of a command-line configured daemon. The shmpps daemon is a
tiny little daemon that reads a timing pulse-per-second and updates a
shared memory segment. It uses a few command line arguments to set the
source port/type and shared memory segment destination; right now, that
is done for the init script by a file in /etc/sysconfig.
And that's a bad thing to do. You're sourcing your configuration in an
unsanitised environment. There's a huge number of ways that this can go
wrong depending on the admin's local configuration, which is clearly
undesirable.
Post by Chris Adams
This is a small, light-weight daemon, and doesn't need a configuration
file parser. This is a valid way that Unix daemons have run for
decades, and you are saying that should be removed. I guess every small
daemon now needs to include its own config file parser, replacing the
already-existing getopt() call? How is this "better"?
Nobody's said it should be removed. Lennart's said that it sucks, and I
agree. But all of this would still be better with a simple config parser
that's shared between any daemons that want it.
--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.clark at netwolves.com
http://www.netwolves.com
drago01
2011-07-11 09:04:21 UTC
Permalink
Post by Matthew Garrett
The suggestion isn't that having the options is wrong
Well, that's what you said before (conveniently snipped from your
reply). You compared CLI args/env vars to the BKL as something to be
In this case there are sound
technical arguments against configuration by command line argument or
environment variable
You have still failed to enumerate even one of the "sound technical
arguments".
"Configuration by", not overriding configuration. It's a mistake to have
Says you. It has seemed to work OK for the last 25+ years.
Yet another "it has been done for $years so it must be right" kind of
argument ...

Do you configure your webbrowser by passing command line arguments
and/or editing its source code?
Do you configure your word processor by passing command line arguments
and/or editing its source code?
Do you configure $app by passing command line arguments and/or editing
its source code?

No and you wouldn't want to change to such a scheme simply because it
hasn't been like that for "25+ years".
The same thing applies to system daemons: just because it has been done
like this does not make it right ... and really, asking the user to
edit the source code (yes, shell scripts are source code, not
configuration files) is just wrong. It is kind of odd seeing people
arguing in favor of that ... with the only reason being "it has been like
that for $years".

Yes, people got used to that simply because there was no alternative
(people adding new scripts simply followed the lead of existing ones).
But now we have a chance to clean up this mess so we should do that
and not try to stick to the past forever.
Chris Adams
2011-07-11 12:43:50 UTC
Permalink
Post by drago01
Do you configure your webbrowser by passing command line arguments
and/or editing its source code?
As a matter of fact, I do use command line arguments to my web browser.
I have several different Firefox profiles set up (regular use, web dev,
clean for testing, etc.), so I modify the normal icon to add "-no-remote
-P regular". I then start the others as needed from a command line
(changing "regular" to "dev" or "clean").

Anyway, comparing GUI applications to system daemons is not a valid
comparison. GUI apps are generally designed to be started with a click,
not a command line. System daemons are generally designed to be started
from a script (since that's the way they've been started since day 1).
--
Chris Adams <cmadams at hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
Chris Adams
2011-07-10 22:17:49 UTC
Permalink
Post by Matthew Garrett
"Configuration by", not overriding configuration. It's a mistake to have
your daemon's configuration be handled by a shell script that's sourced
into existing environment.
You still have yet to cite your "sound technical arguments" for this.
All I have seen so far is your opinion.
Post by Matthew Garrett
And that's a bad thing to do. You're sourcing your configuration in an
unsanitised environment. There's a huge number of ways that this can go
wrong depending on the admin's local configuration, which is clearly
undesirable.
And an admin can break a config file. What is the difference? Please
enumerate some of the "huge number of ways that this can go wrong" in
real world examples (not made-up things like overriding IFS).
--
Chris Adams <cmadams at hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
Matthew Garrett
2011-07-10 22:33:08 UTC
Permalink
Post by Chris Adams
Post by Matthew Garrett
And that's a bad thing to do. You're sourcing your configuration in an
unsanitised environment. There's a huge number of ways that this can go
wrong depending on the admin's local configuration, which is clearly
undesirable.
And an admin can break a config file. What is the difference? Please
enumerate some of the "huge number of ways that this can go wrong" in
real world examples (not made-up things like overriding IFS).
A malformed configuration file will cause a parse error. A malformed
shell script may execute arbitrary code depending on a wide range of
factors that are outside the control of the author. You're obviously
right that this usually won't be a problem, but if you're writing a
configuration file it's also trivially obvious that a restricted grammar
that restricts the behaviour to anything the daemon is designed to do is
technically preferable to one that allows anything to happen. Program
defensively, and do the same for packaging.
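A contrived illustration of that difference - the first file is sourced and
therefore executed, the second is only ever parsed as data (names invented):

# /etc/sysconfig/foo, sourced by an init script as root
FOO_PORT=2049
touch /root/oops        # any stray command here runs the next time foo starts

# /etc/foo.conf, read by the daemon's own parser
port = 2049
touch /root/oops        # rejected with a parse error, nothing executes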
--
Matthew Garrett | mjg59 at srcf.ucam.org
Lennart Poettering
2011-07-10 23:02:11 UTC
Permalink
Post by Chris Adams
Post by Matthew Garrett
In this case there are sound
technical arguments against configuration by command line argument or
environment variable
I haven't seen any, just statements that they are somehow "bad" and the
new way is "better".
Here are a number of reasons:

- You cannot really sensibly add comments to command lines
- Reading and writing shell scripts is much harder for UIs than
configuration files
- Shell scripts are very verbose and hard to read, and you need to
understand shell to do so, hence they are not user friendly, except
for seasoned Unix admins
- Shell scripts are slow
- You cannot just scp config files between hosts because you don't have any
- Configuration parsing errors are not helpful, not helpful at all, and
they traditionally don't end up in syslog
- Configuration options like "-f" or "-i" are not easily understandable
and especially not self-explanatory
- It's trivial to hide security holes in config parsing shell scripts
- IFS, error handling is difficult, and so on

And that's just the most obvious reasons why env vars, and cmdline args
and faked shell-based config files are not particularly nice. I came up
with this in 1min thinking. I am pretty sure I can come up with about 100 more
if you ask me nicely.

Anyway, I figure this is a religious thing, and you cannot argue with
religion, so I'll shut up.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Steve Dickson
2011-07-10 17:32:15 UTC
Permalink
Hey,
Post by Matthew Garrett
Improvement means change, and change will inevitably upset some people
who would prefer to do things in exactly the same way that they always
have done.
I will have to slightly disagree. If improvement does indeed come with
the change, then I believe the change will be warmly embraced by 99% of
community. But if the change introduces confusion, deteriorates "easy-of-use"
and possibly introduces regressions (as systemd just might possibly do,
esp in the early stages), then that change will not be strongly embraced.

Remember, the end game is about making the end user's job easier and
making them more productive, not imposing our philosophy on them just
because we think it's the right thing to do... IMHO...

steved.
Lennart Poettering
2011-07-10 23:06:30 UTC
Permalink
Post by Steve Dickson
Hey,
Post by Matthew Garrett
Improvement means change, and change will inevitably upset some people
who would prefer to do things in exactly the same way that they always
have done.
I will have to slightly disagree. If improvement does indeed come with
the change, then I believe the change will warmly embraced by 99% of
the community. But if the change introduces confusion, deteriorates
"easy-of-use" and possibly introduces regressions (as systemd just
might possibly do, esp in the early stages), then that change will not
be strongly embraced.
I am sorry, but I cannot help myself:

The "easy-of-use" of the current NFS stack is indeed legendary.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Steve Dickson
2011-07-11 01:22:01 UTC
Permalink
Post by Lennart Poettering
Post by Steve Dickson
Hey,
Post by Matthew Garrett
Improvement means change, and change will inevitably upset some people
who would prefer to do things in exactly the same way that they always
have done.
I will have to slightly disagree. If improvement does indeed come with
the change, then I believe the change will warmly embraced by 99% of
the community. But if the change introduces confusion, deteriorates
"easy-of-use" and possibly introduces regressions (as systemd just
might possibly do, esp in the early stages), then that change will not
be strongly embraced.
The "easy-of-use" of the current NFS stack is indeed legendary.
:-) You are right... NFS is legendary... It is one of the oldest
technologies in use today, but also one of the most used.

But you also make my point... For someone to come along (with absolutely
no NFS understanding whatsoever) and simply disregard all of that history
of how and how not to start things up, and simply say they know how
to do it better... is just a bit... well... <You fill in the blank> ;-)

steved.
Ric Wheeler
2011-07-11 07:32:20 UTC
Permalink
Post by Steve Dickson
Post by Lennart Poettering
Post by Steve Dickson
Hey,
Post by Matthew Garrett
Improvement means change, and change will inevitably upset some people
who would prefer to do things in exactly the same way that they always
have done.
I will have to slightly disagree. If improvement does indeed come with
the change, then I believe the change will warmly embraced by 99% of
the community. But if the change introduces confusion, deteriorates
"easy-of-use" and possibly introduces regressions (as systemd just
might possibly do, esp in the early stages), then that change will not
be strongly embraced.
The "easy-of-use" of the current NFS stack is indeed legendary.
:-) You are right... NFS is legendary... It is one of the oldest
technology today but also one of the most used technology today.
But you also make my point... For some one to come along (with absolutely
no NFS understand whatsoever) and simply disregard all of that history
of how and how not to start things up and simply say the know how
to do it better... is just a bit... well...<You fill in the blank> ;-)
steved.
I think that any change that leaves a major subsystem not working is simply not
ready for prime time.

If systemd is to become the default, we need to convince the upstream NFS team
to modify their configuration (and keep in mind that this is a pain since
systemd has not become the default in all distros, so this is effectively a fork).

It is pointless to argue style and design. Rather, let's work this as you would
with anything else. Let's see proposed, tested and working patches from the
change proponents that allows NFS to work fully.

Those patches will have to be acceptable to the upstream of the NFS world which
means Steve and others have to buy in here or they simply won't fly....

Thanks!

Ric
Jon Masters
2011-07-10 19:15:33 UTC
Permalink
Post by Matthew Garrett
Post by Jon Masters
I disagree. It doesn't suck. It's the way UNIX and Linux have done this
for dozens of years, and it's the way countless sysadmins know and love.
"Sucks" might be true from the point of view of "hey look at this great
thing I just designed", but it's very much not true from the point of
view of the sysadmin working on the weekend who's just thinking "gee,
what the heck is going on, why won't this just work how it has done for
the past twenty years?". In other words "suck" depends on viewpoint.
The big kernel lock doesn't suck. It's the way SMP UNIX did things for
dozens of years, and it's the way countless kernel hackers know and
love. "Sucks" might be true from the point of view of "hey look at this
great fine-grained locking I just designed", but it's very much not true
from the poit of the driver author working on the weekend who's just
thinking "gee, what the heck is going on, why won't this just work how
it has done for the past twenty years?". In other words "suck" depends
on viewpoint.
I get your analogy, and your point. But there's a key difference. In the
kernel community (which is relatively much smaller), there are
established well documented means by which people find out about things
like BKL removal and act upon it. There is LWN, there is LKML, there is
an expectation that those working on the kernel read these things.

There should not be, and there is not, an expectation that Linux users
and admins in the wider world follow distribution mailing lists, wiki
pages, and IRC obsessively. Or read blogs. That isn't how it's done.
It's done through slow, gradual change picked up over time, unless you
want the kind of pain that I believe is coming further down the line.

Jon.
Matthew Garrett
2011-07-10 20:20:24 UTC
Permalink
Post by Jon Masters
Post by Matthew Garrett
The big kernel lock doesn't suck. It's the way SMP UNIX did things for
dozens of years, and it's the way countless kernel hackers know and
love. "Sucks" might be true from the point of view of "hey look at this
great fine-grained locking I just designed", but it's very much not true
from the poit of the driver author working on the weekend who's just
thinking "gee, what the heck is going on, why won't this just work how
it has done for the past twenty years?". In other words "suck" depends
on viewpoint.
I get your analogy, and your point. But there's a key difference. In the
kernel community (which is relatively much smaller), there are
established well documented means by which people find out about things
like BKL removal and act upon it. There is LWN, there is LKML, there is
an expectation that those working on the kernel read these things.
We have documentation and we have release notes. There's an expectation
that admins pay attention to these things.
Post by Jon Masters
There should not be, and there is not, an expectation that Linux users
and admins in the wider world follow distribution mailing lists, wiki
pages, and IRC obsessively. Or read blogs. That isn't how it's done.
It's done through slow, gradual change picked up over time, unless you
want the kind of pain that I believe is coming further down the line.
The systemd transition hasn't been rapid, and what we're talking about
here is a change in best practices rather than a change in what's
possible. Your systemd service file can launch a shell script that execs
the daemon. You can stick with a SysV init file instead. But both
approaches change nothing regarding the intrinsic fragility of sourcing
a freeform shell script as application configuration.
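A minimal sketch of that escape hatch (the names are illustrative):

# foo.service
[Service]
ExecStart=/usr/libexec/foo-start

# /usr/libexec/foo-start
#!/bin/sh
. /etc/sysconfig/foo
exec /usr/sbin/food $FOO_OPTS

The exec keeps the daemon as the unit's main process, so supervision still
works, but the sourcing step is exactly as fragile as it was under SysV.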
--
Matthew Garrett | mjg59 at srcf.ucam.org
Steve Clark
2011-07-10 20:35:30 UTC
Permalink
Post by Matthew Garrett
Post by Jon Masters
Post by Matthew Garrett
The big kernel lock doesn't suck. It's the way SMP UNIX did things for
dozens of years, and it's the way countless kernel hackers know and
love. "Sucks" might be true from the point of view of "hey look at this
great fine-grained locking I just designed", but it's very much not true
from the poit of the driver author working on the weekend who's just
thinking "gee, what the heck is going on, why won't this just work how
it has done for the past twenty years?". In other words "suck" depends
on viewpoint.
I get your analogy, and your point. But there's a key difference. In the
kernel community (which is relatively much smaller), there are
established well documented means by which people find out about things
like BKL removal and act upon it. There is LWN, there is LKML, there is
an expectation that those working on the kernel read these things.
We have documentation and we have release notes. There's an expectation
that admins pay attention to these things.
Post by Jon Masters
There should not be, and there is not, an expectation that Linux users
and admins in the wider world follow distribution mailing lists, wiki
pages, and IRC obsessively. Or read blogs. That isn't how it's done.
It's done through slow, gradual change picked up over time, unless you
want the kind of pain that I believe is coming further down the line.
The systemd transition hasn't been rapid, and what we're talking about
here is a change in best practices rather than a change in what's
possible. Your systemd service file can launch a shell script that execs
the daemon. You can stick with a SysV init file instead. But both
approaches change nothing regarding the intrinsic fragility of sourcing
a freeform shell script as application configuration.
Again you say best practices - where is this written? Only in the minds of the people pushing systemd.
--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.clark at netwolves.com
http://www.netwolves.com
Alexander Boström
2011-07-10 21:38:24 UTC
Permalink
Hi,

I'm a sysadmin who likes when things change for the better. I also like
systemd. Sure, I read this list and stay informed, but my employer is a
RHEL subscriber so for non-hobby purposes I only need to deal with
change every few years, which is manageable. (SSSD is a "problem" of
this kind in RHEL6, but if you actually look at it you see that it
solves real issues and has the potential to solve even more. The
alternative is stagnation.)

That said, I'd rather have the old SysV scripts than unit files created
"in anger", because yes, the latter will be ugly and annoying to use.

But the goal has to be a set of nice and clean unit files, so please try
to work on that? I've yet to see any sound technical argument about why
it couldn't be done.

I'm sceptical of Jóhann's FOO="foo=4711" solution. (Nothing to do with
integers vs. strings, btw; non-set shell variables have always had a
default value of the empty string.)

Perhaps a better approach is either to use a little helper script that
calls sysctl and modprobe if the variables are set (exactly what the
SysV script does) or even better, move this logic into rpc.statd itself
(reimplement in C).
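A rough sketch of the helper-script variant, reusing the existing sysconfig
variable names (assuming LOCKD_TCPPORT/LOCKD_UDPPORT hold bare port numbers as
in the traditional sysconfig file; the lockd sysctl keys and the script path
are assumptions):

#!/bin/sh
# /usr/libexec/nfs-lockd-setup, to be run from ExecStartPre=
[ -f /etc/sysconfig/nfs ] && . /etc/sysconfig/nfs
/sbin/modprobe -q lockd $LOCKDARG
[ -n "$LOCKD_TCPPORT" ] && /sbin/sysctl -w fs.nfs.nlm_tcpport=$LOCKD_TCPPORT
[ -n "$LOCKD_UDPPORT" ] && /sbin/sysctl -w fs.nfs.nlm_udpport=$LOCKD_UDPPORT
exit 0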

Maybe if you move it into rpc.statd then module autoloading becomes
possible too, which simplifies things further?

To me, the thing about conditionally starting the GSSAPI stuff sounds
like a job for socket activation, but that's just a guess.

/abo
"Jóhann B. Guðmundsson"
2011-07-10 23:08:41 UTC
Permalink
Post by Alexander Boström
I'm sceptical of J?hann's FOO="foo=4711" solution. (Nothing to do with
integers vs. strings, btw, non-set shell variables has always had a
default value of the empty string.)
It achieves, afaict, the behavior the maintainer wanted. If it was up to me,
I would have done this (the whole nfs setup) completely differently....

Dropped

ExecStartPre=/sbin/modprobe -q lockd $LOCKDARG
ExecStartPre=/sbin/sysctl -w $LOCKD_TCPPORT
ExecStartPre=/sbin/sysctl -w $LOCKD_UDPPORT

Completely, and had administrators add and set these values
manually in /etc/sysctl.conf, as I mentioned in comment 30.
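For reference, what that looks like on the admin side (the key names are the
usual lockd sysctls; the port numbers are just examples):

# /etc/sysctl.conf
fs.nfs.nlm_tcpport = 32803
fs.nfs.nlm_udpport = 32769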

Anyway, I can't emphasize enough to the maintainers that have
submitted service files: package them and ship them as soon as
possible. We need them out there as early in the development process as
possible, to significantly increase the odds of catching
potential bugs/issues (like the before/after ordering) during the development cycle,
which in turn should help minimize the unfortunate user experience
Reindl Harald (and potentially many others) went through.

Please follow the packaging guidelines [1][2][3] when packaging and
shipping the unit files, and remember to either drop or subpackage the
legacy SysV init script. If you don't have the time to package the
submitted unit file(s), then please make a note of that on the bug and I will
see if I can't find a proven packager to assist you in packaging and
shipping those submitted unit files.

Those unit files may not be perfect, but that should be easily fixable
via an update later in the development process, and *perfected* once you
have had/found the time to familiarize yourself with systemd.

Thanks

JBG

1.https://fedoraproject.org/wiki/Packaging:Guidelines:Systemd
2.https://fedoraproject.org/wiki/Packaging:ScriptletSnippets#Systemd
3.https://fedoraproject.org/wiki/Packaging:Tmpfiles.d
Genes MailLists
2011-07-10 23:16:32 UTC
Permalink
On 07/10/2011 07:08 PM, "Jóhann B. Guðmundsson" wrote:
Post by "Jóhann B. Guðmundsson"
Post by Alexander Boström
non-set shell variables have always had a
default value of the empty string.)
It achieves afaict the behavior the maintainer wanted if it was up to me
I would have done this ( whole nfs ) completly differently....
Dropped
ExecStartPre=/sbin/modprobe -q lockd $LOCKDARG
ExecStartPre=/sbin/sysctl -w $LOCKD_TCPPORT
ExecStartPre=/sbin/sysctl -w $LOCKD_UDPPORT
Completely and having administrators add and to set these values
manually in /etc/sysctl.conf as I mentioned in comment 30.
I don't agree with this approach actually. Doing it this way means
that we now have dependencies for the daemon spread into non-obvious
places not directly associated with starting the daemon.

It is much better to do this in a single place that is associated with
the daemon in question - and not have a somewhat hidden dependency in a
generic config file.

gene/
"Jóhann B. Guðmundsson"
2011-07-10 23:31:29 UTC
Permalink
Post by Genes MailLists
Post by "Jóhann B. Guðmundsson"
Post by Alexander Boström
non-set shell variables have always had a
default value of the empty string.)
It achieves afaict the behavior the maintainer wanted if it was up to me
I would have done this ( whole nfs ) completly differently....
Dropped
ExecStartPre=/sbin/modprobe -q lockd $LOCKDARG
ExecStartPre=/sbin/sysctl -w $LOCKD_TCPPORT
ExecStartPre=/sbin/sysctl -w $LOCKD_UDPPORT
Completely and having administrators add and to set these values
manually in /etc/sysctl.conf as I mentioned in comment 30.
I don't agree with this approach actually. Doing it this way means
that we now have dependencies for the daemon spread into non-obvious
places not directly associated with starting the daemon.
I'm actually working under the assumption that the administrator
actually knows what he's doing, thus there are no such things to him as
non-obvious places...
Post by Genes MailLists
It is much better to do this in a single place that associated with
the daemon in question - and not have a somewhat hidden dependency in a
generic config file.
Let's just agree to disagree about this approach; anyway, the last
unit file I submitted does what Steve and you and perhaps many others
want it to do, afaik...

JBG
Genes MailLists
2011-07-11 00:02:38 UTC
Permalink
Post by "Jóhann B. Guðmundsson"
Let's just aggree on disagreeing about this approach anyway the last
unit file I submitted does what Steve and you and perhaps many others
want's it to do afaik...
To be clear - I have as yet no views on systemd unit files et al here
- just saying it's healthy to keep things coherent. So my comment is
limited to your specific suggestion of breaking things apart.

In fact I'd prefer a world where every app config file belonged
directly with the app and not elsewhere - for one thing this supports
having multiple versions of apps whereas if there is a single config
(/etc/app.conf) this does not cleanly support multiple app versions.

Compare with:

/usr/lib/app-v1/
    etc/app.conf
    bin/app
    ... etc
/usr/lib/app-v2/
    etc/app.conf

and one can even make a default version via soft links, much as the
alternatives scheme does:

/etc/app.conf -> /usr/lib/app-v1/etc/app.conf

etc
J. Randall Owens
2011-07-11 00:19:42 UTC
Permalink
Date: Sun, 10 Jul 2011 17:02:38
From: Genes MailLists <lists at sapience.com>
To: Development discussions related to Fedora <devel at lists.fedoraproject.org>
Subject: Re: systemd: Is it wrong?
Post by "Jóhann B. Guðmundsson"
Let's just aggree on disagreeing about this approach anyway the last
unit file I submitted does what Steve and you and perhaps many others
want's it to do afaik...
To be clear - I have as yet no views on systemd unit files et al here
- just saying its healthy to keep things coherent. So my comment is
limited to your specific suggestion of breaking things apart.
In fact I'd prefer a world where every app config file belonged
directly with the app and not elsewhere - for one thing this supports
having multiple versions of apps whereas if there is a single config
(/etc/app.conf) this does not cleanly support multiple app versions.
/usr/lib/app-v1/
etc/app.conf
bin/app
... etc
/usr/lib/app-v2/app.conf
etc/app.conf
and one can eve make a default versions via soft links much as the
/etc/app.conf -> /usr/lib/app-v1/etc/app.conf
etc
Which brings up the problem that's been bothering me about the
all-in-config suggestion: Where do you put an argument that tells it where
to find the config file, especially in cases where you might have
different instances of the same program running from different config
files, in parallel? Are you going to put it in the config file?

(I'm thinking specifically of the solution I came up with for dhcpd not
handling IPv4 and IPv6 in the same instance, where I ran it twice with
different -cf options, though I think that might have been fixed in dhcpd
since then, but I haven't tested that yet. Yes, it was broken-ish, but
sysconfig files letting you work around things like that is certainly
helpful.)
--
J. Randall Owens | http://www.ghiapet.net/
ProofReading Markup Language | http://prml.sourceforge.net/
"Jóhann B. Guðmundsson"
2011-07-11 00:34:03 UTC
Permalink
Post by Genes MailLists
To be clear - I have as yet no views on systemd unit files et al here
- just saying its healthy to keep things coherent. So my comment is
limited to your specific suggestion of breaking things apart.
Hmm, I guess we have a bit of miscommunication here...

My suggestion is limited to the nfs service

I agree with you on the approach of a daemon having a single
configuration file it reads on startup, but that is not the case with the
nfs service; these actions are taken before the daemon is started up...

JBG
Steve Dickson
2011-07-11 01:39:55 UTC
Permalink
Post by Genes MailLists
Post by "Jóhann B. Guðmundsson"
Post by Alexander Boström
non-set shell variables have always had a
default value of the empty string.)
It achieves afaict the behavior the maintainer wanted if it was up to me
I would have done this ( whole nfs ) completly differently....
Dropped
ExecStartPre=/sbin/modprobe -q lockd $LOCKDARG
ExecStartPre=/sbin/sysctl -w $LOCKD_TCPPORT
ExecStartPre=/sbin/sysctl -w $LOCKD_UDPPORT
Completely and having administrators add and to set these values
manually in /etc/sysctl.conf as I mentioned in comment 30.
I don't agree with this approach actually. Doing it this way means
that we now have dependencies for the daemon spread into non-obvious
places not directly associated with starting the daemon.
I, too, agree with this... Instead of simply editing one file,
/etc/sysconfig/nfs, they now have to edit multiple files,
which goes back to my "easy-of-use" argument... Something
that seems to be ignored...

steved.
Genes MailLists
2011-07-11 03:18:12 UTC
Permalink
Post by Steve Dickson
Post by Genes MailLists
Post by "Jóhann B. Guðmundsson"
Completely and having administrators add and to set these values
manually in /etc/sysctl.conf as I mentioned in comment 30.
I don't agree with this approach actually. Doing it this way means
that we now have dependencies for the daemon spread into non-obvious
places not directly associated with starting the daemon.
I do to agree with this... Instead of simply editing one file
/etc/sysconfig/nfs, they know have to edit multiple files
which goes back to my "easy-of-use" argument... Something
that seems to be ignored...
steved.
I think you're agreeing with me not disagreeing ... :-)

- I am saying separating things into multiple places is not ideal...
JB
2011-07-11 01:34:54 UTC
Permalink
Post by "Jóhann B. Guðmundsson"
...
Please follow the packaging guidelines [1][2][3] when packaging and
shipping the unit files and remember to either drop or subpackage the
legacy sysv init script ...
...
I would say that this should be formalized in Fedora guidelines.
I would suggest retaining the ability to handle packages and services in
the current SysV and LSB system init environment.

I do not yet see that systemd is an improvement over the current init system,
and it should be treated as beta software. There is a lot of noise and confusion
creating unit files, which should not be the case, as they are basic building blocks ...

What about the technical merits, the promised land of parallelization (which
bash lacks but systemd delivers ?), ease-of-use (bash delivers), ability to
quickly prototype/edit/set up a machine with system services (bash delivers),
etc ?
You mean "progress" (always linear ?), "improvement" ?

There are serious points (issue after issue) raised in that thread:
http://lists.fedoraproject.org/pipermail/devel/2011-June/152323.html
that were not satisfactorily answered at all.

I am very suspicious about the quick, ambush-like tactics - do you remember
the reception the systemd and GNOME joint attempt to roll over the rest of
the UNIX/Linux community received?

Easy, my fellow Linuxers :-)
Do it for your own interest !

JB
Lennart Poettering
2011-07-10 23:21:37 UTC
Permalink
Post by Jon Masters
Post by Matthew Garrett
The big kernel lock doesn't suck. It's the way SMP UNIX did things for
dozens of years, and it's the way countless kernel hackers know and
love. "Sucks" might be true from the point of view of "hey look at this
great fine-grained locking I just designed", but it's very much not true
from the point of view of the driver author working on the weekend who's just
thinking "gee, what the heck is going on, why won't this just work how
it has done for the past twenty years?". In other words "suck" depends
on viewpoint.
I get your analogy, and your point. But there's a key difference. In the
kernel community (which is relatively much smaller), there are
established well documented means by which people find out about things
like BKL removal and act upon it. There is LWN, there is LKML, there is
an expectation that those working on the kernel read these things.
There should not be, and there is not, an expectation that Linux users
and admins in the wider world follow distribution mailing lists, wiki
pages, and IRC obsessively. Or read blogs. That isn't how it's done.
It's done through slow, gradual change picked up over time, unless you
want the kind of pain that I believe is coming further down the line.
Jeez man. You see a communication issue here? If there is one, then it
is in your head.

Be honest for a minute here: we regularly blog about systemd, and the
changes it brings. It's highly technical and with lots of explanations
why we are doing things this way. The stories are usually federated via
LWN. Our documentation is really good and relatively comprehensive (as
many people will happily acknowledge I am sure). We present systemd
almost every month at a different conference. We write computer magazine
articles about it. We often give interviews about the systemd progress
and the background of it. We waste our time on mailing lists with
naysayers like you. We are very active in responding to fdo bz, rhbz,
irc, the mailing list, and try hard to get responses out for all
questions asked.

And yes, I do have the expectation that computer admins look into the
Internet from time to time or pick up a computer magazine once in a
while.

I am pretty sure there are next to zero other Linux system projects
which do as much as we do to get the message out.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Mathieu Bridon
2011-07-11 14:25:14 UTC
Permalink
Post by Lennart Poettering
Our documentation is really good and relatively comprehensive (as
many people will happily acknowledge I am sure).
I'm acknowledging that.

However that's also perhaps its weak point: the systemd documentation is
too comprehensive (is that even possible? :)

I had no idea where to find what I was looking for at first, so I had to
read it entirely. It took me an afternoon (good thing my work involves
understanding systemd), but now that I did, I have enough understanding
of how it vaguely works to know that if I'm searching for "foo", it will
be in page "bar".

If you could just add a simple search engine on those man pages, I'm
sure lots of people would be really happy. :)
--
Mathieu
Steve Clark
2011-07-10 20:32:01 UTC
Permalink
Post by Matthew Garrett
Post by Jon Masters
I disagree. It doesn't suck. It's the way UNIX and Linux have done this
for dozens of years, and it's the way countless sysadmins know and love.
"Sucks" might be true from the point of view of "hey look at this great
thing I just designed", but it's very much not true from the point of
view of the sysadmin working on the weekend who's just thinking "gee,
what the heck is going on, why won't this just work how it has done for
the past twenty years?". In other words "suck" depends on viewpoint.
The big kernel lock doesn't suck. It's the way SMP UNIX did things for
dozens of years, and it's the way countless kernel hackers know and
love. "Sucks" might be true from the point of view of "hey look at this
great fine-grained locking I just designed", but it's very much not true
from the point of view of the driver author working on the weekend who's just
thinking "gee, what the heck is going on, why won't this just work how
it has done for the past twenty years?". In other words "suck" depends
on viewpoint.
Improvement means change, and change will inevitably upset some people
who would prefer to do things in exactly the same way that they always
have done. If we assert that all viewpoints are equally valid then every
single thing we've done in Fedora sucks. In this case there are sound
technical arguments against configuration by command line argument or
environment variable (just like there are against the BKL), and while we
should obviously attempt to make any transition as painless as possible
for administrators, that doesn't serve as a counter to those technical
arguments. They suck. Unarguably.
What are the benefits of systemd - other than it is the new fantastic, wonderful latest gizmo!
--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.clark at netwolves.com
http://www.netwolves.com
Steve Dickson
2011-07-11 00:59:05 UTC
Permalink
Post by Steve Clark
Post by Matthew Garrett
Post by Jon Masters
I disagree. It doesn't suck. It's the way UNIX and Linux have done this
for dozens of years, and it's the way countless sysadmins know and love.
"Sucks" might be true from the point of view of "hey look at this great
thing I just designed", but it's very much not true from the point of
view of the sysadmin working on the weekend who's just thinking "gee,
what the heck is going on, why won't this just work how it has done for
the past twenty years?". In other words "suck" depends on viewpoint.
The big kernel lock doesn't suck. It's the way SMP UNIX did things for
dozens of years, and it's the way countless kernel hackers know and
love. "Sucks" might be true from the point of view of "hey look at this
great fine-grained locking I just designed", but it's very much not true
from the point of view of the driver author working on the weekend who's just
thinking "gee, what the heck is going on, why won't this just work how
it has done for the past twenty years?". In other words "suck" depends
on viewpoint.
Improvement means change, and change will inevitably upset some people
who would prefer to do things in exactly the same way that they always
have done. If we assert that all viewpoints are equally valid then every
single thing we've done in Fedora sucks. In this case there are sound
technical arguments against configuration by command line argument or
environment variable (just like there are against the BKL), and while we
should obviously attempt to make any transition as painless as possible
for administrators, that doesn't serve as a counter to those technical
arguments. They suck. Unarguably.
What are the benefits of systemd - other than it is the new fantastic, wonderful latest gizmo!
Lennart, could you please answer this question? Because if you can't, we should
drop systemd from Fedora... IMHO...

tia,

steved.
Lennart Poettering
2011-07-11 01:17:54 UTC
Permalink
Post by Steve Dickson
Post by Steve Clark
What are the benefits of systemd - other than it is the new
fantastic, wonderful latest gizmo!
Lennart, could you please answer this question? Because if you can't we should
drop systemd from Fedora... IMHO..
systemd has no benefits, and you are right, we should drop it
right away. [1]

Lennart

Footnotes:

[1] Unless of course you can use Google. Then you might find this:
http://0pointer.de/blog/projects/why.html -- Discussed widely in
LWN and elsewhere. What's the point of heating this up again?
--
Lennart Poettering - Red Hat, Inc.
Steve Dickson
2011-07-11 01:51:37 UTC
Permalink
Post by Lennart Poettering
Post by Steve Dickson
Post by Steve Clark
What are the benefits of systemd - other than it is the new
fantastic, wonderful latest gizmo!
Lennart, could you please answer this question? Because if you can't we should
drop systemd from Fedora... IMHO..
systemd has not benefits, and you are right we should drop it
right-away.[1]
Lennart
http://0pointer.de/blog/projects/why.html -- Discussed widely in
LWN and elswehere. What's the point of heating this up again?
No point... I was just curious as to what the answer was and you
seem to be ignoring him... Plus I was unaware of the above discussion.


steved.
Matthew Garrett
2011-07-11 01:27:17 UTC
Permalink
Post by Steve Dickson
Post by Steve Clark
What are the benefits of systemd - other than it is the new fantastic, wonderful latest gizmo!
Lennart, could you please answer this question? Because if you can't we should
drop systemd from Fedora... IMHO..
You have *got* to be joking.

Systemd has been through a far more elaborate process of acceptance than
pretty much any other feature in Fedora history. Fesco spent an extended
period of time discussing it. We even took the unusual decision of
requesting that it be reverted from being the default in F14 in order to
reduce the chances of it causing problems for users.

Every time the appropriate governance bodies have asked Lennart for
further information on systemd, he's provided useful feedback. He's
helped anyone who's asked for advice on converting their init scripts.
He's explained the advantages of systemd on multiple occasions. And,
because of this, I don't think anyone who was empowered to make this
decision in the first place has any qualms about it now.

It's not reasonable to demand that people spend time reiterating things
that they've said before, and insist that their code be reverted if
they're unwilling to give in to unreasonable requests. That's not how
the project works. It's not how any project works.

The discussion you've been having regarding the disconnect between your
idea of how init scripts should work and Lennart's opinions about how
systemd services should work is interesting and worthwhile. I'd expect
that, given time, it'll result in a reasonable outcome. But can we try
to keep it at that useful level rather than insisting that things be
reverted?
--
Matthew Garrett | mjg59 at srcf.ucam.org
Steve Dickson
2011-07-11 02:40:51 UTC
Permalink
Post by Matthew Garrett
Post by Steve Dickson
Post by Steve Clark
What are the benefits of systemd - other than it is the new fantastic, wonderful latest gizmo!
Lennart, could you please answer this question? Because if you can't we should
drop systemd from Fedora... IMHO..
You have *got* to be joking.
Systemd has been through a far more elaborate process of acceptance than
pretty much any other feature in Fedora history. Fesco spent an extended
period of time discussing it. We even took the unusual decision of
requesting that it be reverted from being the default in F14 in order to
reduce the chances of it causing problems for users.
So did that "elaborate process of acceptance" include the Fedora
community, or maybe the Linux community, or possibly the business
community? The point being: did the acceptance process include
people who, one, will be using systemd and, two, will be
supporting their packages using systemd?
Post by Matthew Garrett
Every time the appropriate governance bodies have asked Lennart for
further information on systemd, he's provided useful feedback. He's
helped anyone who's asked for advice on converting their init scripts.
He's explained the advantages of systemd on multiple occasions. And,
because of this, I don't think anyone who was empowered to make this
decision in the first place has any qualms about it now.
I personally find this funny... When I decided to change the
default version of NFS from 3 to 4 I didn't just have to answer to the
"appropriate governance bodies". I had to answer to the Fedora community,
the Linux community plus every mama and papa place that uses Fedora
for their company.

The point being... Change is hard, and that's fine as long as it's for the
better... If not... it's simply wrong...
Post by Matthew Garrett
It's not reasonable to demand that people spend time reiterating things
that they've said before, and inist that their code be reverted if
they're unwilling to give in to unreasonable requests. That's not how
the project works. It's not how any project works.
Yes it is... If you want to change the world, be prepared to justify it
every step of the way... That is exactly how projects work!
Post by Matthew Garrett
The discussion you've been having regarding the disconnect between your
idea of how init scripts should work and Lennart's opinions about how
systemd services should work is interesting and worthwhile. I'd expect
that, given time, it'll result in a reasonable outcome. But can we try
to keep it at that useful level rather than insisting that things be
reverted?
I truly truly truly hope so... but at the end of the day... I
simply can't allow a new, untested (in a business environment)
package to destabilize a technology that is used by a large number
of our community...

steved.
Matthew Garrett
2011-07-11 02:51:11 UTC
Permalink
Post by Steve Dickson
Post by Matthew Garrett
Systemd has been through a far more elaborate process of acceptance than
pretty much any other feature in Fedora history. Fesco spent an extended
period of time discussing it. We even took the unusual decision of
requesting that it be reverted from being the default in F14 in order to
reduce the chances of it causing problems for users.
So did that "elaborate process of acceptance" include the Fedora
community or maybe the Linux community or possibly the Business
community. The point being did the acceptance process include
people who one, will be using systemd and two people who will
be supported their packages using systemd?
It included the people who are elected by the Fedora community to make
these decisions, in open and publicised meetings where any other member
of the Fedora community was able to bring up any concerns. It included
open discussion in the project mailing lists and on Planet Fedora. It
went through the full feature process twice. I really don't know how
much clearer we could have made it short of calling everybody on their
FAS-registered phone numbers and leaving personal messages. Part of your
responsibility as a maintainer in the project is paying attention to
things that may affect you.
Post by Steve Dickson
Post by Matthew Garrett
Every time the appropriate governance bodies have asked Lennart for
further information on systemd, he's provided useful feedback. He's
helped anyone who's asked for advice on converting their init scripts.
He's explained the advantages of systemd on multiple occasions. And,
because of this, I don't think anyone who was empowered to make this
decision in the first place has any qualms about it now.
I personally find this funny... When I decided to change the
default version of NFS from 3 to 4 I didn't just have to answer to the
"appropriate governance bodies". I had to answer to the Fedora community,
the Linux community plus every mama and papa place that uses Fedora
for their company.
The point being... Change is hard, as long as its for the be better..
If not.. its simply wrong...
If we didn't think it was better, we wouldn't have accepted it as the
default. If you don't think we're competent to make that decision then
there'll be fesco elections in 5 months or so.

But you're right. We're all answerable to the community. And Lennart has
answered the community sufficiently often now that anyone implying that
he's unwilling to is clearly wrong.
Post by Steve Dickson
Post by Matthew Garrett
It's not reasonable to demand that people spend time reiterating things
that they've said before, and inist that their code be reverted if
they're unwilling to give in to unreasonable requests. That's not how
the project works. It's not how any project works.
Yes it is... If want to change the world be prepared to justify it
every step of way.. That is exactly how projects work!
So if I were to insist that NFS be reverted to v3 by default, you'd go
along with that?
Post by Steve Dickson
Post by Matthew Garrett
The discussion you've been having regarding the disconnect between your
idea of how init scripts should work and Lennart's opinions about how
systemd services should work is interesting and worthwhile. I'd expect
that, given time, it'll result in a reasonable outcome. But can we try
to keep it at that useful level rather than insisting that things be
reverted?
I truly truly truly hope so... but at the end of the day... I
simply can't allow a new, untested (in a business environment)
package destabilize a technology that is used by a large number
of our community...
If it's impossible to make NFS work sensibly with systemd then obviously
we'd revert it. But I don't believe that that's the case, and nothing
you've said in this thread has changed my mind there. It's clearly
possible to get NFS working. The question is whether it's possible to do
so in a way that matches your expectations of how users want NFS to
behave, and that's not an issue that results in any destabilisation.
--
Matthew Garrett | mjg59 at srcf.ucam.org
Reindl Harald
2011-07-11 08:57:09 UTC
Permalink
Post by Matthew Garrett
Post by Steve Dickson
I truly truly truly hope so... but at the end of the day... I
simply can't allow a new, untested (in a business environment)
package destabilize a technology that is used by a large number
of our community...
If it's impossible to make NFS work sensibly with systemd then obviously
we'd revert it. But I don't believe that that's the case, and nothing
you've said in this thread has changed my mind there. It's clearly
possible to get NFS working. The question is whether it's possible to do
so in a way that matches your expectations of how users want NFS to
behave, and that's not an issue that results in any destabalisation
my main criticism of systemd shipping as default with F15 is that
widely used services like NFS were not converted to systemd
BEFORE systemd replaced upstart

the acceptance and bug-free state could have been MUCH better
if all services had been converted before the switch, because
in that case some improvements would probably have been made
on the systemd side

what happened was:
* systemd is pushed
* most services are not converted
* many services have open questions about how to go forward with systemd
* what to do if some services CANNOT be converted fully?

this is simply bad from every point of view, and point 4 would be a reason
to improve systemd, as it is intended to replace SysV/LSB services, and
"in the systemd world" is not a good argument if things are not working
properly


Florian Müllner
2011-07-11 11:11:11 UTC
Permalink
2011/7/11 Reindl Harald <h.reindl at thelounge.net>
Post by Reindl Harald
my main critic on systemd shipped als default with F15 is that
widely used services like NFS are not converted to systemd
BEFORE systemd replaced upstart
Given that Fedora only used upstart with existing SysV scripts, upstart
should not have been included in the first place according to that argument.
Yet you want to stick with an init system which does not have a single
native service, because some services are used through systemd's SysV
compatibility? Sorry, but that's hardly a credible position, it just makes
you look biased against systemd.

Florian
Lennart Poettering
2011-07-11 14:03:46 UTC
Permalink
Post by Reindl Harald
Post by Matthew Garrett
Post by Steve Dickson
I truly truly truly hope so... but at the end of the day... I
simply can't allow a new, untested (in a business environment)
package destabilize a technology that is used by a large number
of our community...
If it's impossible to make NFS work sensibly with systemd then obviously
we'd revert it. But I don't believe that that's the case, and nothing
you've said in this thread has changed my mind there. It's clearly
possible to get NFS working. The question is whether it's possible to do
so in a way that matches your expectations of how users want NFS to
behave, and that's not an issue that results in any destabalisation
my main critic on systemd shipped als default with F15 is that
widely used services like NFS are not converted to systemd
BEFORE systemd replaced upstart
It's a bit of a chicken and egg problem.

I actually sent patches which cleaned up part of the NFS stuff to Steve
(for example, socket activation patches for rpcbind), but he declined to
apply them. With those patches at least some of the complexity would go
away, as rpcbind would simply be available, and started as soon as it is
needed, copying what MacOS has been doing in the area of NFS for a while.
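
For the curious, socket activation for rpcbind boils down to a unit roughly
like the one below. This is a sketch, not the actual patch; the listen
addresses are assumptions, and the daemon itself still needs the code to
accept sockets handed over by systemd (which is what the patches were about):

# rpcbind.socket
[Unit]
Description=RPCbind server activation socket

[Socket]
ListenStream=111
ListenDatagram=111

[Install]
WantedBy=sockets.target

With something like that installed, the daemon is only spawned the first
time somebody talks to port 111.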

If systemd isn't in, people won't wake up, it's that easy.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Steve Dickson
2011-07-11 16:02:08 UTC
Permalink
Post by Lennart Poettering
Post by Reindl Harald
Post by Matthew Garrett
Post by Steve Dickson
I truly truly truly hope so... but at the end of the day... I
simply can't allow a new, untested (in a business environment)
package destabilize a technology that is used by a large number
of our community...
If it's impossible to make NFS work sensibly with systemd then obviously
we'd revert it. But I don't believe that that's the case, and nothing
you've said in this thread has changed my mind there. It's clearly
possible to get NFS working. The question is whether it's possible to do
so in a way that matches your expectations of how users want NFS to
behave, and that's not an issue that results in any destabalisation
my main critic on systemd shipped als default with F15 is that
widely used services like NFS are not converted to systemd
BEFORE systemd replaced upstart
It's a bit of a chicken of egg problem.
I actually sent patches which cleaned up part of the NFS stuff to Steve
(for example, socket activation patches for rpcbind), but he declined to
apply them.
No. The community rejected them because
* They were too invasive, which made the code unmaintainable, esp.
WRT security fixes.
* You rejected the idea of putting the code in a standalone library.
* They were too Fedora-specific.
* Code stability was also a concern.

Here is the thread:
http://marc.info/?t=127950663200001&r=1&w=2

steved.

Garry T. Williams
2011-07-11 02:58:50 UTC
Permalink
Post by Steve Dickson
So did that "elaborate process of acceptance" include the Fedora
community or maybe the Linux community or possibly the Business
community.
You seem to be very late to the party.
--
Garry Williams
Steve Dickson
2011-07-11 03:14:26 UTC
Permalink
Post by Garry T. Williams
Post by Steve Dickson
So did that "elaborate process of acceptance" include the Fedora
community or maybe the Linux community or possibly the Business
community.
You seem to be very late to the party.
Good call... Unfortunately I live in multiple worlds
and yes, I have most definitely come to this party
late... So are we out of beer? 8-)

steved.
Lennart Poettering
2011-07-11 14:06:24 UTC
Permalink
Post by Steve Dickson
Post by Garry T. Williams
Post by Steve Dickson
So did that "elaborate process of acceptance" include the Fedora
community or maybe the Linux community or possibly the Business
community.
You seem to be very late to the party.
Good call.. Unfortunately I live in multiple worlds
and yes I have most definitely came to this party
late... So are we out of beer? 8-)
You have been invited to the party a couple of times btw. For example I
sent you those rpcbind patches which you declined to merge. You have
been made aware of these changes in advance, and I did some of the
conversion work for you even. You cannot really claim that this all was
too fast and nobody told you.

Lennart
--
Lennart Poettering - Red Hat, Inc.
"Jóhann B. Guðmundsson"
2011-07-11 09:03:34 UTC
Permalink
Post by Steve Dickson
I truly truly truly hope so... but at the end of the day... I
simply can't allow a new, untested (in a business environment)
package destabilize a technology that is used by a large number
of our community...
The longer it takes to get native systemd unit files out there, the
more "untested" they will be.

The faster they are out there, the more experience/exposure they will
receive.

No rocket science behind that logic; some would even go so far as to call
that common knowledge. So if you are truly worried about it, convert
those legacy sysv init files and put the native systemd unit files out
there, the sooner the better...

I think it's time that we hear from you, explaining to us how systemd has
brought doom and chaos to the nfs world, how it has destabilized nfs and
left the *major subsystem* not working and its citizens running for
their lives, committing mass suicide on the sidewalks, and the cities
being overrun by zombies and the nfs world going down in flames...

What do you say?

How about explaining that to us so we can catch and fix bugs (if any)
in a timely manner on either side ;)

JBG
drago01
2011-07-10 19:59:10 UTC
Permalink
Post by Jon Masters
Post by Lennart Poettering
Or in other words: configuration via command line arguments or
environment variables sucks.
I disagree. It doesn't suck. It's the way UNIX and Linux have done this
for dozens of years, and it's the way countless sysadmins know and love.
"Sucks" might be true from the point of view of "hey look at this great
thing I just designed", but it's very much not true from the point of
view of the sysadmin working on the weekend who's just thinking "gee,
what the heck is going on, why won't this just work how it has done for
the past twenty years?". In other words "suck" depends on viewpoint.
Well really "it is perfect because it has been like that for $num
years, how dare you change it" is a very weak argument. It has no
meaning at all from a technical pov. It is just "I am too lazy to learn
something new" ... but one should really expect sysadmins to be able
to keep up with changes like this.
There is a point where you have to break up with the past (and learn
from the mistakes made there) and move on.
It is called "progress".
Jon Masters
2011-07-10 21:20:27 UTC
Permalink
Post by drago01
Post by Jon Masters
Post by Lennart Poettering
Or in other words: configuration via command line arguments or
environment variables sucks.
I disagree. It doesn't suck. It's the way UNIX and Linux have done this
for dozens of years, and it's the way countless sysadmins know and love.
"Sucks" might be true from the point of view of "hey look at this great
thing I just designed", but it's very much not true from the point of
view of the sysadmin working on the weekend who's just thinking "gee,
what the heck is going on, why won't this just work how it has done for
the past twenty years?". In other words "suck" depends on viewpoint.
Well really "it is perfect because it has been like that for $num
years, how dare you change it" is a very weak argument.
Did I say it was perfect? No. Therein lies a confusion. There are many
things that, were they designed now, would be done differently (and I'm
sure if you tracked down the original inventors of many things they
would have horror stories about how assumptions outlived designs). But
once you have several decades of something deployed in the field, you
can't just come overnight (where that is relative - compared to several
decades, 6 months or even 2 years is overnight) and say "what we have is
better, deal" because that's damned expensive from a successful product
point of view. If we want people to use Fedora, and other Linux
distributions in general, they can't have to throw out their books every
couple of years and re-learn the very basics. Conversely, you can change
this stuff, but you have to expect it to take *many years* to get there.
Post by drago01
It has no
meaning at all from a technical pov. It is just "I am to lazy to learn
something new" ... but one should really expect sysadmins to be able
to keep up with changes like this.
This point goes to the heart of why Windows is still so popular and
successful in the marketplace. It's unfortunate, but the real world
doesn't move as quickly as we would like it to do so. If it were just
about technical arguments, everyone, everywhere would have been running
Linux for the past decade or more, and everyone would jump at the
awesomeness (technically) of some of these things. But the reality is
that people are slow to change, organizations are slow to adapt, and
people told "hey, what you've been doing for several decades is wrong"
don't react well. That's what leads to rocking chairs on front porches.
It's not that they're against you, it's that they're faced with having
to use something that's fundamentally changed on them "overnight".

You don't have to take my advice or opinion, it's just that.

Jon.
Steve Clark
2011-07-10 20:29:32 UTC
Permalink
Post by Jon Masters
Post by Lennart Poettering
Or in other words: configuration via command line arguments or
environment variables sucks.
I disagree. It doesn't suck. It's the way UNIX and Linux have done this
for dozens of years, and it's the way countless sysadmins know and love.
"Sucks" might be true from the point of view of "hey look at this great
thing I just designed", but it's very much not true from the point of
view of the sysadmin working on the weekend who's just thinking "gee,
what the heck is going on, why won't this just work how it has done for
the past twenty years?". In other words "suck" depends on viewpoint.
Jon.
This is so true - every admin knows the shell - now we have to learn another new bunch of
stuff - and for what benefits?
--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.clark at netwolves.com
http://www.netwolves.com
Lennart Poettering
2011-07-10 22:53:57 UTC
Permalink
Post by Jon Masters
Post by Lennart Poettering
Or in other words: configuration via command line arguments or
environment variables sucks.
I disagree. It doesn't suck. It's the way UNIX and Linux have done this
for dozens of years, and it's the way countless sysadmins know and love.
"Sucks" might be true from the point of view of "hey look at this great
thing I just designed", but it's very much not true from the point of
view of the sysadmin working on the weekend who's just thinking "gee,
what the heck is going on, why won't this just work how it has done for
the past twenty years?". In other words "suck" depends on viewpoint.
Oh man, you are getting boring.

I never said that people should stop using env vars or cmdline args. But
they suck as the primary path of configuration for a daemon.

And don't tell me that the primary way how Linux services have always
been configured was via cmdline arguments. That's simply nonsense. And
even if it was true -- which it isn't -- that doesn't make it a good
idea.

Anyway, I don't think this is a discussion worth having, and this
appears to be mostly an attempt to create flame war out of nothing, so
let's agree to disagree on this, and leave it at that.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Lennart Poettering
2011-07-10 22:47:53 UTC
Permalink
Post by Steve Dickson
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
So, I'd suggest strongly not to try starting all services from a single
file. There's a reason why we explicitly forbid having more than one
ExecStart= in a unit file (except for Type=oneshot services).
Thank you for this explanation. It appears your definition of a
service might be a bit too simple for many subsystems. You seem
to think that one service will only start one system daemon, which
is not the case in the more complex subsystems.
Well, I doubt about the "many". In fact, I am aware of only one other
occasion where people were wondering about this. And often the problems
are only perceived problems, because people try to translate their sysv
scripts 1:1 and are unwilling to normalize their scripts while doing
so.
So basically what you are saying is a service can never consist of
more than one system daemon.
Well, it can, but we encourage you not to do this.

The word "service" is mostly used synonymously with "daemon" in the
systemd context, and we prefer if people write one service file for each
daemon, as this is the easiest to understand for admins and users and
makes sure supervision/monitoring works properly.
Post by Steve Dickson
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
Also, you should not spawn forking processes in ExecStartPre=, that's
not what it is for. In fact I am pretty sure I will change systemd now
to kill off all remaining processes after each ExecStartPre= command now
that I am aware that people are misusing it like this.
If they are not for forking off process, what are the for? It seems
quite logical that one would use a number of ExecStartPre= commands
to do some set up and then use the ExecStart= to start the daemon.
This is a misunderstanding. What I tried to say is that they should not
be used to spawn off processes that fork and stay in the
background. Processes in ExecStartPre= are executed synchronously: we
wait for them to finish before we start the next one, and they should
not try to play games with that and spawn stuff in the
background. That's what I meant by saying "you should not spawn
*forking* processes in ExecStartPre=".
Maybe a better way to say it is that ExecStartPre= should not be used for
daemon processes?
Well, technically, not only daemon processes do this, but that's
nitpicking, so let's not discuss this part further.
Post by Steve Dickson
Post by Lennart Poettering
There is synchronization. As I made clear a couple of times, we never
start more than one ExecStartPre= process at a time. We start the next
one only after the previous one finished.
Looking at https://bugzilla.redhat.com/show_bug.cgi?id=699040#c35 appears
this is not the case...
I think this is a misunderstanding. "systemctl show foo.service" will show you the
actual timestamps when we started/joined a process.
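
For example -- "foo.service" is just a placeholder here:

systemctl show foo.service -p ExecStartPre -p ExecStart
systemctl show foo.service | grep -i timestamp

The first should print each configured command together with its recorded
start/stop times, the second the unit-level timestamps.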
Post by Steve Dickson
Post by Lennart Poettering
Well, nothing stops you from reading the same configuration file from
multiple services. We do that all the time, for example for
/etc/resolv.conf.
True... but the point is before systemd, an admin could tweak one
/etc/sysconfig file which defined which daemons were started and
how they were configured... Unless I'm missing something that
is no longer the case... The admin will have to explicitly define
each and every daemon they need to run and how they are configured..
all by hand...
Well, I think it is much easier for admins if services can be
enabled/disabled all the same way, instead of adding arbitrary numbers
of service-specific ways to enable/disable them.

Simplify things, have as many levels of disabling as necessary but as
few as possible, and unify that across the different services. This is
what we want to ensure by getting people to use "systemctl enable"
instead of having service-specific sysconfig files for
disabling/enabling services.
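
In other words, the goal is that the same pair of commands works for every
service, instead of a service-specific sysconfig switch -- the unit name
below is only an example:

systemctl enable nfs-server.service
systemctl disable nfs-server.service

while the daemon's actual options stay in the daemon's own configuration
file.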
Post by Steve Dickson
Post by Lennart Poettering
By proper configuration files I mean configuration files read by the
daemons themselves, instead of files that are actually a script that is
interpreted by a programming language and some more scripts interfacing
with that.
Or in other words: configuration via command line arguments or
environment variables sucks. Much nicer are proper configuration
files. And faking config files by parsing them in shell and then passing
them off to daemons via env vars and cmdline args is not ideal.
^^^^^^^^^^^^
Post by Steve Dickson
So basically what you are saying is the way system daemons have been
started for the last.. say... twenty years or so is completely
wrong and systemd is here to change that! 8-) The point being...
I used the word "not ideal", not "completely wrong". Amazing that you
can misquote me like this even though these were the last three words
of the paragraph you are responding to here! (Jon Masters is much
better at reducing what I say to what he wants to believe I said... he
just silently drops everything I write except the words that are handy
to make his point. There's something to learn from him... ;-))
Post by Steve Dickson
That is your opinion which may or may not be held by the rest
of community... So please recognize it as such and please
be willing to accept dissenting views....
Oh, sure, everything I do and say just reflects my opinions. What else
should it reflect? I am not the pope, nor the pope's spokesperson.

But I like to believe I have good reasons for what I propose, and I
think particularly in this case I am not alone with this opinion.

I mean, feel free to ignore me. But I'd be delighted if you didn't and
we could get this normalized and make the NFS subsystem a bit nicer to
use and more similar to the rest of the daemons we ship. There's really
no need to make NFS as complex to use as it is.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Steve Dickson
2011-07-11 00:49:56 UTC
Permalink
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
So, I'd suggest strongly not to try starting all services from a single
file. There's a reason why we explicitly forbid having more than one
ExecStart= in a unit file (except for Type=oneshot services).
Thank you for this explanation. It appears your definition of a
service might be a bit too simple for many subsystems. You seem
to think that one service will only start one system daemon, which
is not the case in the more complex subsystems.
Well, I doubt about the "many". In fact, I am aware of only one other
occasion where people were wondering about this. And often the problems
are only perceived problems, because people try to translate their sysv
scripts 1:1 and are unwilling to normalize their scripts while doing
so.
So basically what you are saying is a service can never consist of
more than one system daemon.
Well, it can, but we encourage you not to do this.
The word "service" is mostly used synonymously with "daemon" in the
systemd context, and we prefer if people write one service file for each
daemon, as this is the easiest to understand for admins and users and
makes sure supervision/monitoring works properly.
Ok... Now I understand where my confusion is... Currently, when one
wants to start the nfs server, they type 'service nfs start', which
calls a number of binaries and ultimately a system daemon.

Now if they want to enable secure nfs, they edit a file in /etc/sysconfig
and simply type 'service nfs restart', which again runs a number
of binaries and starts a couple of system daemons.

My point is this. You are changing the meaning of 'service'. People
expect a service to be just that, a service. When one starts a
service, all the needed daemons are started and all the configuration
is done once the service is started.

Unless I'm misunderstanding, for an admin to do the same thing as above,
they will have to type a string of commands enabling all the needed
daemons... People are not expecting, or even wanting, to know which daemons
are needed to start up each service... oops... a service is no longer
a service, it's a daemon... hmm... let's use subsystem... see my point?
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
Also, you should not spawn forking processes in ExecStartPre=, that's
not what it is for. In fact I am pretty sure I will change systemd now
to kill off all remaining processes after each ExecStartPre= command now
that I am aware that people are misusing it like this.
If they are not for forking off process, what are the for? It seems
quite logical that one would use a number of ExecStartPre= commands
to do some set up and then use the ExecStart= to start the daemon.
This is a misunderstanding. What I tried to say is that they should not
be used to spawn off processes that fork and stay in the
background. Processes in ExecStartPre= are executed synchronously: we
wait for them to finish before we start the next one, and they should
not try to play games with that and spawn stuff in the
background. That's what I meant by saying "you should not spawn
*forking* processes in ExecStartPre=".
Maybe a better way to say it is that ExecStartPre= should not be used for
daemon processes?
Well, technically, not only daemon processes do this, but that's
nitpicking, so let's not discuss this part further.
Right... Not only daemon processes spawn off processes that fork and stay in the
background. How are we supposed to know if the flux capacitor command will or
will not leave a process in the background?
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
There is synchronization. As I made clear a couple of times, we never
start more than one ExecStartPre= process at a time. We start the next
one only after the previous one finished.
Looking at https://bugzilla.redhat.com/show_bug.cgi?id=699040#c35 appears
this is not the case...
I think this is a misunderstanding. "systemctl show foo.service" will show you the
actual timestamps when we started/joined a process.
Fine... I didn't know about the show command...
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
Well, nothing stops you from reading the same configuration file from
multiple services. We do that all the time, for example for
/etc/resolv.conf.
True... but the point is before systemd, an admin could tweak one
/etc/sysconfig file which defined which daemons were started and
how they were configured... Unless I'm missing something that
is no longer the case... The admin will have to explicitly define
each and every daemon they need to run and how they are configured..
all by hand...
Well, I think it is much easier for admins if services can be
enabled/disabled all the same way, instead of adding arbitrary numbers
of service-specific ways to enable/disable them.
They have that today... it's called chkconfig, which enables/disables
services... not just daemons.
Post by Lennart Poettering
Simplify things, have as many levels of disabling as necessary but as
few as possible, and unify that across the different services. This is
what we want to ensure by getting people to use "systemctl enable"
instead of having service-specific sysconfig files for
disabling/enabling services.
Hang on... you say that 'systemctl enable' will also configure a daemon
like the sysconfig files do? Could you please give me an example of
this?
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
By proper configuration files I mean configuration files read by the
daemons themselves, instead of files that are actually a script that is
interpreted by a programming language and some more scripts interfacing
with that.
Or in other words: configuration via command line arguments or
environment variables sucks. Much nicer are proper configuration
files. And faking config files by parsing them in shell and then passing
them off to daemons via env vars and cmdline args is not ideal.
^^^^^^^^^^^^
Post by Steve Dickson
So basically what you are saying is the way system daemons have been
started for the last.. say... twenty years or so is completely
wrong and systemd is here to change that! 8-) The point being...
I used the word "not ideal", not "completely wrong". Amazing that you
can misquote me like this even though these were the last three words
of the paragraph you are responding to here!
My apologies for misquoting you... I did indeed miss the "not ideal" part.


steved.
Michal Schmidt
2011-07-11 08:42:10 UTC
Permalink
Post by Steve Dickson
Ok.. Now understand where my confusion is... Currently when one
want to start the nfs server they type 'service nfs start' which
calls a number of binaries and ultimately a system daemon.
We could achieve something similar with systemd by providing a target
unit 'nfs.target'. The unit would pull the daemons using requirement
dependencies.
Post by Steve Dickson
Now if they enable want secure nfs, they edit a file in /etc/systconf
and simply type 'service nfs restart' which again runs a number
of binaries and start a couple of system daemons.
This could be another target 'nfs-secure.target'. It would pull
'nfs.target' + more daemons.

The users would start and enable these target units instead of
the units of the individual daemons.
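
Sketched out, that could look something like the following. Unit and daemon
names are illustrative only, not an actual proposal:

# nfs.target
[Unit]
Description=NFS server
Requires=rpcbind.service nfs-mountd.service nfs-server.service
After=rpcbind.service nfs-mountd.service nfs-server.service

# nfs-secure.target
[Unit]
Description=NFS server with Kerberos security
Requires=nfs.target rpc-gssd.service rpc-svcgssd.service
After=nfs.target

Add [Install] sections (WantedBy=multi-user.target) and 'systemctl start
nfs.target' or 'systemctl start nfs-secure.target' would behave much like
the old 'service nfs start'.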

Michal
"Jóhann B. Guðmundsson"
2011-07-11 09:05:53 UTC
Permalink
Post by Michal Schmidt
Post by Steve Dickson
Ok.. Now understand where my confusion is... Currently when one
want to start the nfs server they type 'service nfs start' which
calls a number of binaries and ultimately a system daemon.
We could achieve something similar with systemd by providing a target
unit 'nfs.target'. The unit would pull the daemons using requirement
dependencies.
Post by Steve Dickson
Now if they enable want secure nfs, they edit a file in /etc/systconf
and simply type 'service nfs restart' which again runs a number
of binaries and start a couple of system daemons.
This could be another target 'nfs-secure.target'. It would pull
'nfs.target' + more daemons.
The users would start and enable these target units instead of
the units of the individual daemons.
I was thinking along the same line...

JBG
Reindl Harald
2011-07-11 08:46:22 UTC
Permalink
Post by Steve Dickson
My point it this. You are changing the meaning of 'service'. People
expect a service to be just that as service. When one starts a
service all the needed daemons are started and all the configuration
is done once the service is started.
Unless I'm misunderstanding, for an admin to the same thing as above
the will have to type of string of commands enabling all the needed
daemons... People are not expecting or even what to know which daemon
is need to start up each service... oops.. a service is not longer
as service its a daemon... hmm... let use subsystem... see my point?
the same problem exists with the logic "if there is a svcname.socket":
you have to do "systemctl stop svcname.socket svcname.service"
or systemd will fire it up again if you only do
"systemctl stop svcname"

https://bugzilla.redhat.com/show_bug.cgi?id=714525

nobody would expect this, and even if someone knows about
it, this is nothing which makes usability better

lennart, please think about such input and try to implement
logic so that your tool does what admins normally expect

according to my bug report (yes, the topic was wrong because of the
unexpected behavior) "systemctl stop mysqld.service" should
implicitly always stop the corresponding "svcname.socket" as long
as we do not restart - STOP means STOP and not restart :-)


Michal Schmidt
2011-07-11 12:29:59 UTC
Permalink
Post by Reindl Harald
the same problem with the logic "if there is a svcname.socket"
you have to do "systemctl stop svcname.socket svcname.service"
or systemd wil fire up it again if you do only
"systemctl stop svcname"
https://bugzilla.redhat.com/show_bug.cgi?id=714525
nobody would expect this and even if someone knows about
this is nothing which makes usabilit?y better
lennart please think about such inputs and try to implement
logic that your tool does what admins normally expect
according to my bugreport (yes the topic was wrong because
unexpected bahvior) "systemctl stop mysqld.service" should
implicitly always stop an according "svcname.socket" as long
we do not restart - STOP means STOP and not restart :-)
'systemctl stop ...' means: stop the currently running instance. Nothing
more. It says nothing about what should or should not happen in the
future. Socket activation is a future event.

Using 'BindTo=' it may be possible to stop the socket when the service
is stopped. I have not tried it.
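
A sketch of that idea -- untested, as said, and note that BindTo= also pulls
the service in much like Requires= does, so it might defeat on-demand
activation and would need testing:

# mysqld.socket
[Unit]
BindTo=mysqld.service

[Socket]
ListenStream=3306

[Install]
WantedBy=sockets.target

In practice one may still end up stopping both units explicitly, as Harald
does.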

Michal
Lennart Poettering
2011-07-11 13:58:06 UTC
Permalink
Post by Steve Dickson
Post by Lennart Poettering
Post by Steve Dickson
So basically what you are saying is a service can never consist of
more than one system daemon.
Well, it can, but we encourage you not to do this.
The word "service" is mostly used synonymously with "daemon" in the
systemd context, and we prefer if people write one service file for each
daemon, as this is the easiest to understand for admins and users and
makes sure supervision/monitoring works properly.
Ok.. Now understand where my confusion is... Currently when one
want to start the nfs server they type 'service nfs start' which
calls a number of binaries and ultimately a system daemon.
Now if they enable want secure nfs, they edit a file in /etc/systconf
and simply type 'service nfs restart' which again runs a number
of binaries and start a couple of system daemons.
My point it this. You are changing the meaning of 'service'. People
expect a service to be just that as service. When one starts a
service all the needed daemons are started and all the configuration
is done once the service is started.
I think most people actually expected that one service file would start
one service.
Post by Steve Dickson
Unless I'm misunderstanding, for an admin to the same thing as above
the will have to type of string of commands enabling all the needed
daemons... People are not expecting or even what to know which daemon
is need to start up each service... oops.. a service is not longer
as service its a daemon... hmm... let use subsystem... see my point?
No I don't.

Note that you can have dependencies between services: you can say that
when service A is started service B must be started too and finished
before A. These are *runtime* dependencies. You can also say that when A
is enabled B should be enabled too. These are *install time*
dependencies. The former are supported via Wants= and Requires= in the
[Unit] section, the latter via Also= in the [Install] section of
unit files.
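
Roughly, with made-up unit names:

# a.service
[Unit]
# runtime dependency: starting A pulls in B and orders A after it
Requires=b.service
After=b.service

[Service]
ExecStart=/usr/bin/a-daemon

[Install]
WantedBy=multi-user.target
# install-time dependency: "systemctl enable a.service" enables b.service too
Also=b.service

Wants= is the weaker, non-fatal variant of Requires=.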
Post by Steve Dickson
Post by Lennart Poettering
Post by Steve Dickson
Maybe a better way to say it is that ExecStartPre= should not be used for
daemon processes?
Well, technically, not only daemon processes do this, but that's
nitpicking, so let's not discuss this part further.
Right.. Not only daemon processes spawn off processes that fork and stay in the
background. How are we suppose to know if he flux capacitor command will or
will leave a process background?
Hopefully the folks writing a unit file know the software they are
writing it for. It is our intention after all to get these unit files
shipped upstream, i.e. they are written by the upstream developers.
Post by Steve Dickson
Post by Lennart Poettering
Simplify things, have as many levels of disabling as necessary but as
few as possible, and unify that across the different services. This is
what we want to ensure by getting people to use "systemctl enable"
instead of having service-specific sysconfig files for
disabling/enabling services.
Hang on... you say that 'systemctl enable' will also configure a daemon
like the sysconfig files do? Could you please give me an example of this
this?
No, this is about enabling/disabling daemons. systemctl enable/disable
is not about configuring arbitrary parameters to daemons.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Miloslav Trmač
2011-07-11 14:32:02 UTC
Permalink
On Mon, Jul 11, 2011 at 3:58 PM, Lennart Poettering
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
The word "service" is mostly used synonymously with "daemon" in the
systemd context, and we prefer if people write one service file for each
daemon, as this is the easiest to understand for admins and users and
makes sure supervision/monitoring works properly.
Ok.. Now understand where my confusion is... Currently when one
want to start the nfs server they type 'service nfs start' which
calls a number of binaries and ultimately a system daemon.
Now if they enable want secure nfs, they edit a file in /etc/systconf
and simply type 'service nfs restart' which again runs a number
of binaries and start a couple of system daemons.
My point it this. You are changing the meaning of 'service'. People
expect a service to be just that as service. When one starts a
service all the needed daemons are started and all the configuration
is done once the service is started.
I think most people actually expected that one service file would start
one service.
I think most "people" do not equate "service" with a process;
"service" is a _service_ being provided - it may be atd with one
process (service is "makes at(1) work"), httpd with 10 similar
children (service is "answers on port 80"), mailman with 5 or so
different interoperating processes (service is "runs mailing lists").
At least as long as the "service" works, nobody really cares about the
processes. If you will, "service" is "a product that could be bought
and installed".

Just follow the use cases:
- Install, configure and start mailman
- Set up mailman so that it starts at boot
- A huge security hole has been reported in mailman, take mailman down
immediately
- My mailing lists don't work, what's the status of mailman? Is
mailman running? Did mailman report an error?

In every single one of these use cases the sysadmin wants to treat
"mailman" as a single unit, not as five different units (I bet most
mailman sysadmins couldn't even list names of the processes).
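
In systemd terms those use cases all map onto the same per-unit commands --
"mailman.service" here is hypothetical:

systemctl start mailman.service
systemctl enable mailman.service
systemctl stop mailman.service
systemctl status mailman.service

which is essentially the single-handle behaviour being asked for, however
many processes sit behind it.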
Mirek
Lennart Poettering
2011-07-11 14:44:17 UTC
Permalink
Post by Miloslav Trmač
On Mon, Jul 11, 2011 at 3:58 PM, Lennart Poettering
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
The word "service" is mostly used synonymously with "daemon" in the
systemd context, and we prefer if people write one service file for each
daemon, as this is the easiest to understand for admins and users and
makes sure supervision/monitoring works properly.
Ok.. Now understand where my confusion is... Currently when one
want to start the nfs server they type 'service nfs start' which
calls a number of binaries and ultimately a system daemon.
Now if they enable want secure nfs, they edit a file in /etc/systconf
and simply type 'service nfs restart' which again runs a number
of binaries and start a couple of system daemons.
My point it this. You are changing the meaning of 'service'. People
expect a service to be just that as service. When one starts a
service all the needed daemons are started and all the configuration
is done once the service is started.
I think most people actually expected that one service file would start
one service.
I think most "people" do not equate "service" with a process;
Neither does systemd.

A service/daemon can consist of multiple processes. Examples for this
are Apache (main + worker processes + cgi scripts), Avahi (main + chroot
helper process), udev (main + worker processes + callouts), and a lot of
other stuff.

In systemd "daemon" is synonymous to "service". And a daemon/service can
consist of multiple processes, but one of those is the main one, which
defines the runtime (and traditionally is the one whose PID was stored in
the PID file). If that one exits/crashes we consider the service down,
and optionally will restart it.
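
That "main" process is what directives like Type=forking and PIDFile= point
systemd at. A generic sketch, not a real unit:

[Service]
Type=forking
ExecStart=/usr/sbin/somed
# the process named in the PID file is treated as the main process;
# when it exits, the service is considered stopped
PIDFile=/var/run/somed.pid
Restart=on-failure

systemd then watches that PID rather than any of the helper processes.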

Steve otoh wants one service to consist of multiple daemons, each of
which can consist of multiple processes. I find that unnecessarily
complex and this is not implemented in systemd, and I have pointed out a
number of times why.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Lennart Poettering
2011-07-11 13:58:06 UTC
Permalink
Post by Steve Dickson
Post by Lennart Poettering
Post by Steve Dickson
So basically what you are saying is a service can never consist of
more than one system daemon.
Well, it can, but we encourage you not to do this.
The word "service" is mostly used synonymously with "daemon" in the
systemd context, and we prefer if people write one service file for each
daemon, as this is the easiest to understand for admins and users and
makes sure supervision/monitoring works properly.
Ok.. Now I understand where my confusion is... Currently when one
wants to start the nfs server they type 'service nfs start' which
calls a number of binaries and ultimately starts a system daemon.
Now if they want to enable secure nfs, they edit a file in /etc/sysconfig
and simply type 'service nfs restart' which again runs a number
of binaries and starts a couple of system daemons.
My point is this. You are changing the meaning of 'service'. People
expect a service to be just that, a service. When one starts a
service all the needed daemons are started and all the configuration
is done once the service is started.
I think most people actually expected that one service file would start
one service.
Post by Steve Dickson
Unless I'm misunderstanding, for an admin to do the same thing as above
they will have to type a string of commands enabling all the needed
daemons... People are not expecting, or even wanting, to know which daemons
are needed to start up each service... oops.. a service is no longer
a service, it's a daemon... hmm... let's use subsystem... see my point?
No I don't.

Note that you can have dependencies between services: you can say that
when service A is started service B must be started too and finish
starting before A. These are *runtime* dependencies. You can also say that when A
is enabled B should be enabled too. These are *install time*
dependencies. The former are supported via Wants= and Requires= in the
[Unit] section, the latter via Also= in the [Install] section of
unit files.
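A minimal sketch of both kinds of dependency (unit names are invented,
not the real NFS units):

[Unit]
Description=Example NFS server (illustration only)
# Runtime dependency: pull in the helper and order it before us
Requires=example-rpc-helper.service
After=example-rpc-helper.service

[Service]
ExecStart=/usr/sbin/example-nfsd

[Install]
WantedBy=multi-user.target
# Install-time dependency: "systemctl enable" on this unit also
# enables the helper
Also=example-rpc-helper.service

With that, starting the example unit also starts the helper, and
enabling it also enables the helper.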
Post by Steve Dickson
Post by Lennart Poettering
Post by Steve Dickson
Maybe a better way to say it is ExecStartPre= should not be used for
daemon processes?
Well, technically, not only daemon processes do this, but that's
nitpicking, so let's not discuss this part further.
Right.. Not only daemon processes spawn off processes that fork and stay in the
background. How are we supposed to know if the flux capacitor command will or
will not leave a process in the background?
Hopefully the folks writing a unit file know the software they are
writing it for. It is our intention after all to get these unit files
shipped upstream, i.e. they are written by the upstream developers.
Post by Steve Dickson
Post by Lennart Poettering
Simplify things, have as many levels of disabling as necessary but as
few as possible, and unify that across the different services. This is
what we want to ensure by getting people to use "systemctl enable"
instead of having service-specific sysconfig files for
disabling/enabling services.
Hang on... you say that 'systemctl enable' will also configure a daemon
like the sysconfig files do? Could you please give me an example of
this?
No, this is about enabling/disabling daemons. systemctl enable/disable
is not about configuring arbitrary parameters to daemons.
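In other words (unit and file names below are hypothetical):

# enabling/disabling is done uniformly through systemd:
systemctl enable example-nfs.service     # start at boot
systemctl disable example-nfs.service    # don't start at boot

# daemon parameters are not set here; they belong in the daemon's own
# configuration file, or for now in a file the unit reads via
# EnvironmentFile=/etc/sysconfig/example-nfs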

Lennart
--
Lennart Poettering - Red Hat, Inc.
Lennart Poettering
2011-07-10 22:47:53 UTC
Permalink
Post by Steve Dickson
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
So, I'd suggest strongly not to try starting all services from a single
file. There's a reason why we explicitly forbid having more than one
ExecStart= in a unit file (except for Type=oneshot services).
Thank you for this explanation. It appears your definition of a
service might be a bit too simple for many subsystems. You seem
to think that a service will only start one system daemon, which
is not the case in the more complex subsystems.
Well, I doubt the "many". In fact, I am aware of only one other
occasion where people were wondering about this. And often the problems
are only perceived problems, because people try to translate their sysv
scripts 1:1 and are unwilling to normalize their scripts while doing
so.
So basically what you are saying is a service can never consist of
more than one system daemon.
Well, it can, but we encourage you not to do this.

The word "service" is mostly used synonymously with "daemon" in the
systemd context, and we prefer if people write one service file for each
daemon, as this is the easiest to understand for admins and users and
makes sure supervision/monitoring works properly.
Post by Steve Dickson
Post by Lennart Poettering
Post by Steve Dickson
Post by Lennart Poettering
Also, you should not spawn forking processes in ExecStartPre=, that's
not what it is for. In fact I am pretty sure I will change systemd
to kill off all remaining processes after each ExecStartPre= command, now
that I am aware that people are misusing it like this.
If they are not for forking off processes, what are they for? It seems
quite logical that one would use a number of ExecStartPre= commands
to do some setup and then use ExecStart= to start the daemon.
This is a misunderstanding. What I tried to say is that they should not
be used to spawn off processes that fork and stay in the
background. Processes in ExecStartPre= are executed synchronously: we
wait for them to finish before we start the next one, and they should
not try to play games with that and spawn stuff in the
background. That's what I meant by saying "you should not spawn
*forking* processes in ExecStartPre=".
Maybe a better way to say it is ExecStartPre= should not be used for
daemon processes?
Well, technically, not only daemon processes do this, but that's
nitpicking, so let's not discuss this part further.
Post by Steve Dickson
Post by Lennart Poettering
There is synchronization. As I made clear a couple of times, we never
start more than one ExecStartPre= process at a time. We start the next
one only after the previous one finished.
Looking at https://bugzilla.redhat.com/show_bug.cgi?id=699040#c35 it appears
this is not the case...
I think this is a misunderstanding. "systemctl show foo.service" will show you the
actual timestamps when we started/joined a process.
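For example (exact property names and output format may vary between
systemd versions):

systemctl show foo.service | grep -i timestamp
# shows when the main process was started/stopped
systemctl show -p ExecStartPre foo.service
# lists each ExecStartPre= command with its recorded start and stop times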
Post by Steve Dickson
Post by Lennart Poettering
Well, nothing stops you from reading the same configuration file from
multiple services. We do that all the time, for example for
/etc/resolv.conf.
True... but the point is before systemd, an admin could tweak one
/etc/sysconfig file which defined which daemons were started and
how they were configured... Unless I'm missing something, that
is no longer the case... The admin will have to explicitly define
each and every daemon they need to run and how they are configured..
all by hand...
Well, I think it is much easier for admins if services can be
enabled/disabled all the same way, instead of adding arbitrary numbers
of service-specific ways to enable/disable them.

Simplify things, have as many levels of disabling as necessary but as
few as possible, and unify that across the different services. This is
what we want to ensure by getting people to use "systemctl enable"
instead of having service-specific sysconfig files for
disabling/enabling services.
Post by Steve Dickson
Post by Lennart Poettering
By proper configuration files I mean configuration files read by the
daemons themselves, instead of files that are actually a script that is
interpreted by a programming language and some more scripts interfacing
with that.
Or in other words: configuration via command line arguments or
environment variables sucks. Much nicer are proper configuration
files. And faking config files by parsing them in shell and then passing
them off to daemons via env vars and cmdline args is not ideal.
^^^^^^^^^^^^
Post by Steve Dickson
So basically what you are saying is the way system daemons have been
started for the last.. say... twenty years or so is completely
wrong and systemd is here to change that! 8-) The point being...
I used the word "not ideal", not "completely wrong". Amazing that you
can misquote me like this even though these were the last three words
of the paragraph you are responding to here! (Jon Masters is much
better at reducing what I say to what he wants to believe I said... he
just silently drops everything I write except the words that are handy
to make his point. There's something to learn from him... ;-))
Post by Steve Dickson
That is your opinion which may or may not be held by the rest
of the community... So please recognize it as such and please
be willing to accept dissenting views....
Oh, sure, everything I do and say just reflects my opinions. What else
should it reflect? I am not the pope, nor the pope's spokesperson.

But I like to believe I have good reasons for what I propose, and I
think particularly in this case I am not alone with this opinion.

I mean, feel free to ignore me. But I'd be delighted if you didn't and
we could get this normalized and make the NFS subsystem a bit nicer to
use and more similar to the rest of the daemons we ship. There's really
no need to make NFS as complex to use as it is.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Lennart Poettering
2011-07-08 14:57:40 UTC
Permalink
Post by Steve Dickson
Post by Lennart Poettering
I am pretty sure systemd-devel is the better place to discuss this. But
I didn't know it existed...
That would have been the first big free software project without any
mailing list, wouldn't it?
Post by Steve Dickson
Post by Lennart Poettering
Yes, we want people to place each service in an individual service
file. Only then we can supervise the services properly. It is possible
to spawn multiple high-level processes from a single service, but that
is mostly intended as compat kludge to support SysV init scripts where
this is possible. In general, however, we want people to have a 1:1
mapping. Only then we can restart services if needed, we can catch
crashes, and show proper information about your service.
So, I'd suggest strongly not to try starting all services from a single
file. There's a reason why we explicitly forbid having more than one
ExecStart= in a unit file (except for Type=oneshot services).
Thank you for this explanation. It appears your definition of a
service might be a bit too simple for many subsystems. You seem
to think that a service will only start one system daemon, which
is not the case in the more complex subsystems.
Well, I doubt the "many". In fact, I am aware of only one other
occasion where people were wondering about this. And often the problems
are only perceived problems, because people try to translate their sysv
scripts 1:1 and are unwilling to normalize their scripts while doing
so.
Post by Steve Dickson
Post by Lennart Poettering
Also, you should not spawn forking processes in ExecStartPre=, that's
not what it is for. In fact I am pretty sure I will change systemd
to kill off all remaining processes after each ExecStartPre= command, now
that I am aware that people are misusing it like this.
If they are not for forking off processes, what are they for? It seems
quite logical that one would use a number of ExecStartPre= commands
to do some setup and then use ExecStart= to start the daemon.
This is a misunderstanding. What I tried to say is that they should not
be used to spawn off processes that fork and stay in the
background. Processes in ExecStartPre= are executed synchronously: we
wait for them to finish before we start the next one, and they should
not try to play games with that and spawn stuff in the
background. That's what I meant by saying "you should not spawn
*forking* processes in ExecStartPre=".
Post by Steve Dickson
Post by Lennart Poettering
ExecStartPre= is executed strictly in order, and in the order they
appear in the unit file.
True, but there is no synchronization. Meaning the first process can
end after the second process, which I think is a problem.
There is synchronization. As I made clear a couple of times, we never
start more than one ExecStartPre= process at a time. We start the next
one only after the previous one finished.
Post by Steve Dickson
Post by Lennart Poettering
I believe that services should be enabled/disabled at one place only,
and that is where you can use "systemctl enable" and "systemctl
disable". Adding a service-specific second-level of disabling in
/etc/sysconfig/ is confusing to the user, and not necessary. You'll do a
great service to your users if they can enable/disable all individual
services the same way. (And UI writers will be thankful for that too)
In a simple subsystem maybe, but many subsystems have a large number
of configuration knobs that are needed so the subsystem can function
in a large number of different environments. So in the past it's
been very handy and straightforward to be able to tweak one file
to set configurations on different, but related, subsystems.
Well, nothing stops you from reading the same configuration file from
multiple services. We do that all the time, for example for
/etc/resolv.conf.
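A sketch of that with two hypothetical units sharing one sysconfig-style
file (all names invented):

# example-nfsd.service
[Service]
EnvironmentFile=-/etc/sysconfig/example-nfs
ExecStart=/usr/sbin/example-nfsd $EXAMPLE_NFSD_ARGS

# example-mountd.service
[Service]
EnvironmentFile=-/etc/sysconfig/example-nfs
ExecStart=/usr/sbin/example-mountd $EXAMPLE_MOUNTD_ARGS

Both daemons read their arguments from the same file; the "-" prefix just
makes the file optional.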
Post by Steve Dickson
Post by Lennart Poettering
ExecStart=$FOO bar waldo
I.e. variable substitution for the binary path (it will work for the
arguments, just not for the binary path). This limitation is necessary
due to some SELinux innerworkings in systemd. It's a limitation we
probably could fix if we wanted to, but tbh I find it quite questionable
if you spawn two completely different binaries and still call it by the
same service file.
Spawning different binaries to do setup, like exporting directories
before a system daemon is started, seems like very reasonable and expected
practice.
Hmm? You can start as many binaries in ExecStartPre= as you wish, one
after the other, but we don't support changing the path of the binary
dynamically with an env var. Env vars are only expanded for
arguments, not for the binary path itself.
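That is (variable and binary names invented for illustration):

[Service]
EnvironmentFile=/etc/sysconfig/example
# works: the variable is expanded in the argument list
ExecStart=/usr/sbin/exampled $EXAMPLED_OPTS
# does NOT work: the binary path itself cannot come from a variable
#ExecStart=$EXAMPLED_BIN --foreground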
Post by Steve Dickson
Post by Lennart Poettering
In general if services use a lot of /etc/sysconfig/ settings then this
is probably an indication that the service code should be fixed and
should just get proper configuration files.
I don't understand this generalization. For a very long time subsystems
have used /etc/sysconfig to store their configuration files and now they
are broken because they do? Plus they are not "proper" configuration files?
People have done lots of things for a long time; that doesn't make it
the most elegant, best and simplest solution.

By proper configuration files I mean configuration files read by the
daemons themselves, instead of files that are actually a script that is
interpreted by a programming language and some more scripts interfacing
with that.

Or in other words: configuration via command line arguments or
environment variables sucks. Much nicer are proper configuration
files. And faking config files by parsing them in shell and then passing
them off to daemons via env vars and cmdline args is not ideal.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Jon Masters
2011-07-08 18:52:52 UTC
Permalink
[ removing extraneous copy of old Fedora devel list ]
Post by Steve Dickson
Post by Lennart Poettering
I am pretty sure systemd-devel is the better place to discuss this. But
I didn't know it existed...
My $0.02 on this is that this conversation *explicitly* *does not* belong
on systemd-devel. I subscribed to that list to monitor for such
conversations, but most people aren't going to do that. Problems with
Fedora packaging in the Fedora distribution should be discussed here.

Jon.
Lennart Poettering
2011-07-08 19:54:30 UTC
Permalink
Post by Jon Masters
Post by Steve Dickson
Post by Lennart Poettering
I am pretty sure systemd-devel is the better place to discuss this. But
I didn't know it existed...
My $0.02 on this is that this conversation *explicitly* *does not* belong
on systemd-devel. I subscribed to that list to monitor for such
conversations, but most people aren't going to do that. Problems with
Fedora packaging in the Fedora distribution should be discussed here.
This is not really about packaging, more about writing unit files. Since
unit files are intended to be included upstream this is better discussed
on systemd-devel.

Lennart
--
Lennart Poettering - Red Hat, Inc.
Jeff Spaleta
2011-07-08 23:59:56 UTC
Permalink
On Fri, Jul 8, 2011 at 11:54 AM, Lennart Poettering
Post by Lennart Poettering
This is not really about packaging, more about writing unit files. Since
unit files are intended to be included upstream this is better discussed
on systemd-devel.
There is some benefit in making it possible for distribution packagers
to be aware of these sorts of unit-file-writing pitfall
conversations. I'm not saying the discussion needs to be here. But
cherry-picking some salient points to make us aware of, once in a while, with
some good meaty examples of SysV-style pitfalls wouldn't hurt. In
fact it could be quite instructive for many of us. I think a lot of
us firmly entrenched at the distribution level have a lot of shell
scripting unlearning to do in how we approach service init. As hard
as learning is, unlearning is that much harder. It can be difficult to
pick these conversations out of the chatter here or on the systemd
list (can't keep up with every conversation). That being said, you might
consider organizing a compendium of such pitfalls and adding it to
your growing body of tutorial documentation as a handy reference, if
you haven't already begun doing that (and you may have and
I've missed it).


-jef
Lennart Poettering
2011-07-08 12:23:07 UTC
Permalink
Post by Steve Dickson
Hello,
One of the maintainers of systemd and I have been working
together on trying to convert the NFS SysV init scripts
into systemd services. Here is the long trail...
https://bugzilla.redhat.com/show_bug.cgi?id=699040
The point is this, with fairly complicated system,
some events need to have happen, like loading modules,
before other events happen, like setting parameters of
those loaded modules.
Currently the ExecStart commands can be started and end
before the ExecStartPre even start. This means setting
modules parameters within the same service file are
impossible.
I suggested that a boundary be set that all ExecStartPre
commands finish before any ExecStart commands start,
which would allow complicated subsystems, like NFS,
to start in a very stable way...
So is it wrong? Shouldn't there away to allow certain
parts of a system to synchronously configure some
things so other parts will come up as expected?
I am pretty sure systemd-devel is the better place to discuss this. But
here are a few comments after reading through the bug report:

Yes, we want people to place each service in an individual service
file. Only then we can supervise the services properly. It is possible
to spawn multiple high-level processes from a single service, but that
is mostly intended as compat kludge to support SysV init scripts where
this is possible. In general, however, we want people to have a 1:1
mapping. Only then we can restart services if needed, we can catch
crashes, and show proper information about your service.

So, I'd suggest strongly not to try starting all services from a single
file. There's a reason why we explicitly forbid having more than one
ExecStart= in a unit file (except for Type=oneshot services).

Note that systemd unit files are not a programming language, and that
is for a reason. If you want shell, then use shell, but don't try to misuse
the purposefully simple service file syntax for that.

Also, you should not spawn forking processes in ExecStartPre=, that's
not what it is for. In fact I am pretty sure I will change systemd
to kill off all remaining processes after each ExecStartPre= command, now
that I am aware that people are misusing it like this.

ExecStartPre= is executed strictly in order, and in the order they
appear in the unit file.

I believe that services should be enabled/disabled at one place only,
and that is where you can use "systemctl enable" and "systemctl
disable". Adding a service-specific second-level of disabling in
/etc/sysconfig/ is confusing to the user, and not necessary. You'll do a
great service to your users if they can enable/disable all individual
services the same way. (And UI writers will be thankful for that too)

There's no point in ever unloading kernel modules, unless you do it for
debugging or testing reasons. No init script should ever include an
"rmmod" or "modprobe -r". And we try to make static module loading
unnecessary. There's nowadays auto-loading for most modules in one way
or another, using MODALIAS and similar constructs in the kernel
modules. If you really need to load a module statically, then please do
so via /etc/modules-load.d/ so that the user has centralized control on
this.
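For example, a drop-in like this (file name and module chosen purely for
illustration):

# /etc/modules-load.d/example-nfs.conf
# one module name per line, loaded statically at boot
sunrpc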

This is not going to work:

ExecStart=$FOO bar waldo

I.e. variable substitution for the binary path (it will work for the
arguments, just not for the binary path). This limitation is necessary
due to some SELinux innerworkings in systemd. It's a limitation we
probably could fix if we wanted to, but tbh I find it quite questionable
if you spawn two completely different binaries and still call it by the
same service file.

In general if services use a lot of /etc/sysconfig/ settings then this
is probably an indication that the service code should be fixed and
should just get proper configuration files. If you need to interpret
these files, outside of the daemon, and the simple variable substitution
is not sufficient, and you need a programming language to interpret the
settings, then use a programming language, for example shell. You can
start shell scripts from systemd, like any other executable, and then
exec the real binary in the end. Of course, these solutions are somewhat
hacky, and I think in the long run binaries should be stand-alone and
should be able to read their own configuration themselves. But if you
really need a shell script, then go for it, stick it in
/usr/lib/<yourpackage>/scripts/ or so, and execute that from ExecStart=.
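A minimal sketch of that pattern (package name, paths and variables are
invented):

#!/bin/sh
# installed as /usr/lib/example-nfs/scripts/start-nfsd.sh
# interpret the legacy sysconfig file, then become the real daemon
[ -f /etc/sysconfig/example-nfs ] && . /etc/sysconfig/example-nfs
exec /usr/sbin/example-nfsd $EXAMPLE_NFSD_ARGS

and in the unit file:

ExecStart=/usr/lib/example-nfs/scripts/start-nfsd.sh

Since the script ends with exec, the daemon replaces the shell and systemd
still supervises the right process (assuming the daemon stays in the
foreground for Type=simple).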

I will probably blog about sysconfig in a systemd world soon.

Type=oneshot is for one-shot services, not continuously running
services. Type=oneshot is for stuff like fsck, that runs once at boot
and finishes, and only after it has finished will boot continue.

I am aware that some things I point out above are not how people used to
do things on SysV, but well, we want to get things right, and if you use
systemd natively, then we ask you to clean up things and not just
translate things 1:1.

Lennart
--
Lennart Poettering - Red Hat, Inc.