Choosing a short term supported kernel release for a short term supported distribution release makes perfect sense. Doing so for an LTS release doesn't. This is not an assessment of the work that the Ubuntu kernel maintainers will be doing, but rather of the actual difficulty of the task, considering no other distribution will be helping.
Update (2015-10-05): Someone pointed out that Ubuntu has Stable Kernel Updates. Here is an external opinion on the topic.
This Ubuntu release includes Upstart as its init process and service manager. They chose to stay with Upstart even though most distributions had already made the switch to systemd and Debian was discussing it.
The first issue I see here is that Upstart is now considered a dead project but will still need support for five years. Some people (including Mark Shuttleworth) are claiming that Upstart is mature and well supported because it was included in a lot of distributions (before systemd took over) such as RHEL 6 which will still be supported after the five years of this Ubuntu LTS.
I strongly disagree with those claims. Most of Upstart's features are not used in RHEL 6, for example. Upstart is just an intermediary that launches a massive SysVinit-like script which does all the work:
/etc/rc.d/rc.sysinit. There are almost no native Upstart jobs in RHEL 6 (tty, graphical login manager, control-alt-delete handler and that's mostly it). Thus everything is just plain init scripts, so the dependency feature is not used, nor is the event feature, nor is the job life cycle feature… The only ones using Upstart are Ubuntu users, and the only support comes from the Ubuntu developers.
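For contrast, here is a sketch of what a minimal native Upstart job looks like, using the event and life cycle features that RHEL 6 mostly leaves unused (the daemon name is hypothetical):

```
# /etc/init/mydaemon.conf — hypothetical minimal native Upstart job
description "my daemon"

# Event-based start/stop conditions (the event feature)
start on runlevel [2345]
stop on runlevel [016]

# Job life cycle management: restart the daemon if it dies
respawn

exec /usr/sbin/mydaemon
```

Almost none of the services shipped in RHEL 6 are expressed this way; they remain plain init scripts launched through compatibility glue.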
Update (2015-04-30): Here is a recap of the bugs currently affecting Upstart features. Those are unfixed bug reports that were accepted by the Upstart main developer at the time. This is not just some random list of bugs taken from Launchpad and cannot be compared to a similar random list of systemd bugs taken from Bugzilla / Github issues. Those are feature specific accepted bugs. In my opinion, it is very unlikely that those will ever be fixed in the 14.04 life cycle:
From the comment linked above:
So lets recap. the event handling is racy. The pre-stop scripts design is broken. The event conditional logic is broken, the start/stop command logic is racy/broken, the expect command is broken. What’s left. Oh yeah… Upstart’s backwards compatibility for existing SysVinit scripts is good enough to rely on. But if you want to rely on anything upstart purports to bring to the table, get ready for dentist visit like fun time.
So with this misconception out of the way, let’s talk about logind, or systemd pieces used without systemd.
Session management used to be handled by ConsoleKit but the development stopped a few years ago. The newly developed alternative is the logind daemon which is part of the systemd project. But as systemd is not used in Ubuntu, they had to include a stripped down version of the logind daemon to make it work without systemd. This is of course completely unsupported by the upstream systemd project and thus only tested by Ubuntu developers.
Thus you cannot rely on Upstart features, you're missing systemd's extensive features, and you are using unsupported modifications which will be worked on only by Ubuntu developers.
This is actually a Debian heritage: services are always added to the boot process and started upon installation, before the administrator even has a chance to configure them. This is a security issue, as any mistake in a default service configuration file could expose the system.
Conscientious administrators have to stop the services as soon as possible after installation and configure them (and possibly remove them from the boot process). This nullifies the supposed advantage of starting and enabling them by default in the first place.
Answers on Serverfault, AskUbuntu and AskDebian do not provide satisfactory solutions, as they involve one-time hacks. You have to be careful to set up those hacks before every
apt-get call and remove them right after. This is also hardly supported, and you can't use plain
apt-get install commands anymore, as you would have to make sure that none of the installed dependencies starts a service automatically.
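One such hack is the well-known policy-rc.d mechanism, sketched below (invoke-rc.d treats exit code 101 as "action forbidden by policy"; the package name is a placeholder):

```shell
# Temporarily forbid invoke-rc.d from starting any service:
# exit code 101 means "action forbidden by policy".
cat > /usr/sbin/policy-rc.d <<'EOF'
#!/bin/sh
exit 101
EOF
chmod +x /usr/sbin/policy-rc.d

apt-get install some-package   # newly installed daemons are not started

# The hack must be removed right after, or nothing will ever start again.
rm /usr/sbin/policy-rc.d
```

Forgetting either step leaves the system in a state you did not intend, which is exactly why a one-time hack is not a satisfactory answer.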
Update (2015-10-05): Someone raised the valid point here that a default blocking firewall should be used. This would certainly reduce the impact of this issue. But you would have to set up, at install time, a firewall blocking incoming and outgoing traffic by default. Apart from the fact that this is not done by default, there is simply no option to do it in the standard Debian / Ubuntu installer. And this is definitely not a simple task when you have services accessing the network for various reasons. The standard Debian / Ubuntu installer also suggests additional packages for installation, and firewall rules for those packages would have to be provided as well. Remedying a poor initial security choice by relying on an additional layer of security is not a good trade-off for such a small convenience gain.
Having services started by default also leads to another issue: prompts are needed to ask the administrator for defaults for some services, where sensible defaults can't really be figured out automatically.
Finally, services will be automatically restarted upon updates, and there is no clear way to disable this either. While I agree that services should be restarted as soon as possible after an update, this is a decision that should be left to the administrator's discretion.
Update (2014-05-01): It was pointed out to me that you could use the DPkg::Pre-Invoke and
DPkg::Post-Invoke options to run a script before and after a package is installed. This would allow you to automate the "start/restart disabling trick".
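A sketch of what this automation could look like as an apt configuration drop-in (the file name is arbitrary; DPkg::Pre-Invoke and DPkg::Post-Invoke run shell commands before and after each dpkg invocation, and the policy-rc.d script is the usual "exit 101" stub):

```
# /etc/apt/apt.conf.d/99disable-service-start (arbitrary file name)
DPkg::Pre-Invoke  { "printf '#!/bin/sh\nexit 101\n' > /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d"; };
DPkg::Post-Invoke { "rm -f /usr/sbin/policy-rc.d"; };
```

This at least removes the risk of forgetting to clean up the hack, though it remains a workaround for a default behavior that should not exist in the first place.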
All those choices mean that administrators will have to be extremely careful with package installations and updates. An unfortunate side effect for features supposed to make their lives easier.
The MySQL version available in the Ubuntu 14.04 repositories will be the one from Oracle. Mark Shuttleworth backed this choice with the following arguments:
It’s very potent when we are able to give an upstream the ability to deliver their best bits directly to Ubuntu users using the awesome immediacy of the packaging system - we can only do that when we have established a shared set of values, and this is a great example.
As for phobias, the real pitchforks have been those agitating against Oracle. I think Oracle have been an excellent steward of MySQL, with real investment and great quality.
No doubt the MariaDB developers, who are the original MySQL developers, have a different view on the subject, just like all the other distributions that chose MariaDB over MySQL. They have also published a feature comparison and recently announced the latest version of their MySQL fork.
Update (2014-04-30): I missed the fact that the MariaDB package is available in the Ubuntu 14.04 universe repository.
Update (2015-05-21): As Vincent Bernat correctly pointed out, the universe repository is community maintained and thus not guaranteed to include the latest security fixes (See Repositories: Universe). This is still an issue.
Pollinate is a new "security feature" introduced in Ubuntu 14.04 LTS cloud images. This is a script which fetches random seeds from a pool of "entropy servers" to seed the Linux PRNG. There are several issues with this approach (make sure to read the comments too), and I'll try to summarize them here.
First, the main goal of this script is to improve the quality of seeds on Ubuntu cloud instances where no random sources are available (no virtio-rng virtual driver, no unpredictable input, no per instance pre-launch seeding). To do this, it fetches random input from a pool of other servers, which by default are the Canonical hosted instances of Pollinate, and they deliver chunks of random data in return.
One "nice" effect of this "security feature" for Canonical is that they will receive a request for entropy each time a new Ubuntu instance is booted in a cloud that has not changed the default server pool for the pollen script. Like most default configuration options, it is unlikely people will ever bother changing it. Thus Canonical will have a number to brag about, even though it will hardly amount to anything real, as there could be more instances in use (private clouds) or fewer (what if I do each build of my software in a fresh Ubuntu virtual machine for testing?).
As explained in the comments on the post mentioned above, the right way to seed virtual machines in the cloud is to either provide a virtio-rng device backed by a host random source, or to inject a unique random seed into each instance before it is launched.
Note: Pollinate/Pollen may actually be an effective way to seed Ubuntu cloud instances on bad cloud providers, but the question here is: Are benefits worth the trouble and the risks? I’m not so sure.
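As a concrete illustration of the virtio-rng approach mentioned above: on a libvirt-based host, exposing a paravirtualized RNG to the guest takes a single element in the domain XML (a sketch; the backend path depends on the host setup):

```
<rng model='virtio'>
  <!-- feed the guest's virtual hardware RNG from the host's entropy pool -->
  <backend model='random'>/dev/urandom</backend>
</rng>
```

With such a device, the guest kernel gets entropy locally from its hypervisor, without phoning any external "entropy server".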
Note: I’m biased here as I mostly studied SELinux.
I recently had a look at the AppArmor support in libvirt (which is implemented as a sVirt driver).
When a new virtual machine is launched using libvirtd, a helper process (/usr/lib/libvirt/virt-aa-helper) generates an AppArmor profile from the virtual machine's libvirt XML configuration. This profile allows (AppArmor) access for the Qemu process to each and every element defined in the virtual machine configuration. Among those available elements, there is folder sharing from the host to the virtual machine using file system passthrough with VirtFS (Plan 9 folder sharing over VirtIO).
The virt-aa-helper will happily translate any path chosen for sharing in the virtual machine XML configuration into an AppArmor rule allowing read/write access. This disables a good part of the potential protection offered by AppArmor. By itself this will not lead to root access on the system, as DAC (Discretionary Access Control, i.e. the classic Unix RWX permissions) is still enforced (and the Qemu processes run as a restricted libvirt-qemu user). But it removes the "Mandatory" part of Mandatory Access Control, as a simple user allowed to configure virtual machines can disable parts of AppArmor for some virtual machines.
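To illustrate with a hypothetical path: a VirtFS share declared in the domain XML like this

```
<!-- hypothetical VirtFS (9p) share in the domain XML -->
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/srv/shared'/>
  <target dir='shared'/>
</filesystem>
```

causes virt-aa-helper to emit a profile rule granting read/write access to that directory tree (roughly `"/srv/shared/**" rw,`), whatever path the user picked, with no policy-side restriction on which paths are acceptable.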
A full exploit would first require an escape from the virtual machine to the host, followed by a classic local root exploit, which will not be hindered by AppArmor as much as it should be.
The main issue I see here is that this model allows "untrusted" users to significantly influence the AppArmor profile applied to a virtual machine.
Note 1: I used the VirtFS feature here as it will not prevent Qemu from starting if access to a particular file or directory is denied by DAC checks, but any device entry in the virtual machine XML configuration could be used to achieve the same result.
Note 2: This is a case where MAC checks should prevent users (or even administrators) from doing stupid things. The most important part in the “Mandatory Access Control” model is that the only person making access control decisions at runtime should be the kernel (the reference monitor), based on a policy that has been written in a trusted environment and extensively verified. Partially generating policies at runtime and under user control goes against this principle.
Dustin Kirkland shared the slides from a talk about Ubuntu security he gave recently. What's important here is that most of what's mentioned in the talk is not specific to Ubuntu and is also available in other Linux distributions. The Ubuntu-specific items are the pollen "feature" discussed above (Update (2014-05-01): and AppArmor as the default).
“OK, so Ubuntu is not OK on servers, I get it. But I like it on my desktop and laptop, so I could surely keep it there?”
Well, again, you should not. The main difference between a server and a desktop is the graphical stack: the graphics card drivers, the display server, the OpenGL libraries… Ubuntu relies on unsupported software for this part too, as it still uses Compiz as its compositor even though Compiz development stopped more than two years ago. Unity is the only desktop environment still using Compiz, so the Ubuntu developers are the only ones maintaining it.
They also chose to work on their own display server and compositor: Mir. This decision has already been heavily criticized:
Although Mir isn't the default display server in Ubuntu 14.04, the code to run it has been added to Mesa and other graphics-related packages. This means that those packages carry custom patches which are not supported upstream and are developed only by the Mir developers.
#12 – Ubuntu is the majority of public cloud workloads: So Windows is on the majority of desktops, thus I should use Windows on all my desktops? I know I'm being a bit unfair here, but numbers hardly ever tell us anything about the quality of software. The "everyone else is doing it so it must be good" mindset is not sound.
#11 – Ubuntu is the #1 platform for production OpenStack deployments: Here we go again with the "more is better" argument, this time with intentionally partial numbers from a survey on operating systems used for production OpenStack deployments. Let's run the numbers again from the source:
| Distributions | Deployments | Share |
| --- | --- | --- |
| CentOS + RHEL + Scientific Linux | 49 + 21 + 2 = 72 | 34.4% |
| openSUSE + SUSE Linux | 3 + 3 = 6 | 2.8% |
| Non-Linux + Other | 9 + 1 + 1 = 11 | 5.2% |
Not that impressive anymore. I’m even a bit worried for Debian :).
#10 – Ubuntu is built on IaaS for IaaS users: Buzzword mania, no actual feature pointed out.
Ubuntu 14.04 is a Long Term Support release, yet the default server and desktop installations feature software that is unsupported outside of Ubuntu, some of which has already been unsupported for more than a year.
I don't understand how people can accept such an oxymoron.