Ubuntu 14.04 LTS: Why you should not use it, at all


Ubuntu 14.04 LTS (Trusty Tahr) was released on April 17th, 2014, so this Long Term Support (LTS) version is brand new. So why am I already telling you not to use it?

Well, there are a couple of reasons, so read on!

TL;DR: Or what distribution should I use?

Update (2016-04-20): Finally found the time to add updates from comments. Updates from 2015-10-05 were written at that time but are only published now.

Update (2015-10-05): Current personal recommendations:

  • Want support? Get RHEL 7. Can’t afford? Get CentOS 7;
  • Server, cloud instance: RHEL / CentOS 7 or Debian 8;
  • Desktop: RHEL / CentOS 7, Fedora, Arch Linux, or Debian 8;
  • Want to run the latest and greatest in a Cloud? Try CoreOS or Project Atomic;
  • Have a specific project requiring a lot of control? Try Gentoo, NixOS.

Your favorite distribution is not listed here? Too bad. Make your constructive case in the comments.

Update (2015-10-05): You may send me an email directly now that the comments are closed.

The major advantage of Ubuntu LTS releases is that they are supported for five years with updates. They are also supposed to feature stable and mature software fitting into the current Linux ecosystem. Here are the reasons this is not the case today and will certainly never be the case with Ubuntu 14.04 LTS.

Update (2014-04-30): I’m definitely not against Ubuntu. I’m only against this particular release. I used Ubuntu for a while back at the beginning. I loved it and used to recommend it to everyone. But the recent choices prevent me from doing so anymore. I’m not criticizing Ubuntu developers or their work at all. I’m criticizing the choices made for this release, and I try to explain why they are a bad fit for an LTS release. Taken one by one, those issues are almost OK, and could turn out not to be issues at all. But combining that many of them in a release supposed to be sold to anyone as THE default distribution to run for the next two years should raise concerns.

Reason 1: Non-LTS Linux kernel

This release includes the 3.13 version of the Linux kernel. Apparently, the 3.13 release was chosen to ensure broader hardware support for this release. This is indeed an important criterion when choosing a desktop kernel, but not for server setups, where custom-built kernels may be required, or cloud computing scenarios, where hardware does not matter.

Thus, to ensure a pleasant desktop experience, they chose a kernel release that is not an upstream longterm supported kernel. This means that the Ubuntu kernel maintainers will have to do all the backporting work without any help from the community, for five years. I don’t know the people maintaining kernel patches for Ubuntu, but I’m skeptical about their ability to properly maintain and backport fixes to a kernel that no other distribution will use. As a comparison, Red Hat chose the 3.10 longterm supported kernel for RHEL 7, which is yet to be released.

The sane choice here would have been to ship two kernels:

  • one based on a longterm release for servers and cloud deployments (3.10 or 3.12 for example);
  • one based on an up-to-date version for the desktop version.

This would also have resulted in less work for the kernel maintainer team, as:

  • for the server version, part of the backporting and bug fixing would have been done by the community;
  • for the desktop version, the kernel packages could even be shared with future non LTS Ubuntu releases.

One may argue that this would increase complexity for administrators. However, the cloud/server versus desktop split is clearly shown on their download page, and this would only impact the default kernel used upon installation. Both kernel versions could be kept in the repositories, enabling users or administrators to switch if any issue were to occur.
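To illustrate, switching between kernel flavours on an installed system is already just a matter of installing a different meta-package and picking it in the boot loader. A minimal sketch, using the existing linux-virtual and linux-generic meta-packages purely as an illustration of what a server/cloud versus desktop split could look like:

```
# Illustration only: install an alternative kernel flavour next to the default one.
sudo apt-get install linux-virtual   # minimal kernel meta-package for virtual machines / cloud
sudo apt-get install linux-generic   # full-featured kernel meta-package for desktops and servers
# Both remain available in GRUB; remove the flavour you do not want once satisfied.
```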

Update (2014-04-30): Here is a quick recap detailing the kernel versions some distributions chose in the last few years.

Linux kernel version | Distributions using it
---------------------|-----------------------------------
2.6.32 (LTR)         | RHEL 6, Debian 6, Ubuntu 10.04 LTS
2.6.33               | Fedora 13
2.6.35               | Fedora 14, Ubuntu 10.10
2.6.38               | Fedora 15, Ubuntu 11.04
3.0                  | Ubuntu 11.10
3.1                  | Fedora 16
3.2 (LTR)            | Debian 7, Ubuntu 12.04 LTS
3.3                  | Fedora 17
3.4 (LTR)            | (none)
3.5                  | Ubuntu 12.10
3.6                  | Fedora 18
3.8                  | Ubuntu 13.04
3.9                  | Fedora 19
3.10 (LTR)           | RHEL 7
3.11                 | Fedora 20, Ubuntu 13.10
3.12 (LTR)           | (none)
3.13                 | Ubuntu 14.04 LTS


Choosing a short term supported kernel release for a short term supported release makes perfect sense. Doing so for a LTS release doesn’t. This is not an assessment of the work that the Ubuntu kernel maintainers will be doing, but rather an assessment of the actual difficulty of the task considering no other distribution will be helping.

Update (2015-10-05): Someone pointed out that Ubuntu has Stable Kernel Updates. Here is an external opinion on the topic.

Reason 2: Upstart

This Ubuntu release includes Upstart as its init process and service manager. They chose to stay with Upstart even though most distributions had already made the switch to systemd and Debian was discussing it.

Support

The first issue I see here is that Upstart is now considered a dead project but will still need support for five years. Some people (including Mark Shuttleworth) are claiming that Upstart is mature and well supported because it was included in a lot of distributions (before systemd took over), such as RHEL 6, which will still be supported after the five years of this Ubuntu LTS.

I strongly disagree with those claims. Most of Upstart’s features are not used in RHEL 6, for example. Upstart is just an intermediary to launch a massive SysVinit-like script which does all the work: /etc/rc.d/rc.sysinit. There are almost no native Upstart jobs in RHEL 6 (tty, graphical login manager, control-alt-delete handler, and that’s mostly it). Thus everything is just plain init scripts. Therefore the dependency feature is not used, nor is the event feature, nor is the job life cycle feature… The only ones really using Upstart are Ubuntu users, and the only support comes from the Ubuntu developers.
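For context, a native Upstart job is a small declarative file relying on events, dependencies and life-cycle management, which is exactly what RHEL 6 barely uses. A minimal sketch (job name and daemon path are made up):

```
# /etc/init/exampled.conf: minimal native Upstart job (name and paths made up)
description "example daemon"

start on started networking   # event-based dependency: start once networking is up
stop on runlevel [016]        # stop on halt, single-user mode and reboot

respawn                       # job life cycle: restart the daemon if it dies
exec /usr/sbin/exampled --foreground
```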

Update (2015-04-30): Here is a recap of the bugs currently affecting Upstart features. Those are unfixed bug reports that were accepted by the Upstart main developer (at the time). This is not just some random list of bugs taken from Launchpad and cannot be compared to a similar random list of systemd bugs taken from Bugzilla / GitHub issues. Those are feature-specific accepted bugs. In my opinion, it is very unlikely that those will ever be fixed in the 14.04 life cycle.

From the comment linked above:

So lets recap. the event handling is racy. The pre-stop scripts design is broken. The event conditional logic is broken, the start/stop command logic is racy/broken, the expect command is broken. What’s left. Oh yeah… Upstart’s backwards compatibility for existing SysVinit scripts is good enough to rely on. But if you want to rely on anything upstart purports to bring to the table, get ready for dentist visit like fun time.

logind

So with this misconception out of the way, let’s talk about logind, or systemd pieces used without systemd.

Session management used to be handled by ConsoleKit, but its development stopped a few years ago. The newly developed alternative is the logind daemon, which is part of the systemd project. But as systemd is not used in Ubuntu, they had to include a stripped-down version of the logind daemon to make it work without systemd. This is of course completely unsupported by the upstream systemd project and thus only tested by Ubuntu developers.

Thus you cannot rely on Upstart features, you are missing systemd’s extensive features, and you are using unsupported modifications which will be worked on only by Ubuntu developers.

Reason 3: Services enabled and started by default upon installation

This is actually a Debian heritage: services are always added to the boot process and started upon installation, before the administrator has even had a chance to configure them. This is a security issue, as any mistake in a default service configuration file could expose the system.

Conscientious administrators have to stop the services as soon as possible after installation, configure them, and possibly remove them from the boot process. This nullifies the supposed advantage of starting and enabling them by default in the first place.

Answers on Server Fault, Ask Ubuntu and Ask Debian do not provide satisfactory solutions, as they involve one-time hacks. You will have to be careful to set up those hacks before every apt-get call and remove them right after. This is also hardly supported, and you can’t use plain apt-get install commands anymore, as you would have to make sure none of the installed dependencies installs a service that gets started automatically.
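For reference, the hack those answers usually boil down to looks like this (a sketch only; the package name is a placeholder):

```
# invoke-rc.d consults /usr/sbin/policy-rc.d; exit code 101 means
# "action forbidden", so maintainer scripts will not start the service.
printf '#!/bin/sh\nexit 101\n' | sudo tee /usr/sbin/policy-rc.d
sudo chmod +x /usr/sbin/policy-rc.d

sudo apt-get install some-daemon   # placeholder package: installed but not started

sudo rm /usr/sbin/policy-rc.d      # and do not forget to remove the hack afterwards
```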

Update (2015-10-05): Someone raised the valid point here that a default blocking firewall should be used. This would certainly reduce the impact of this issue. But you will have to set up, at install time, a firewall blocking incoming and outgoing traffic by default. Apart from the fact that this is not done by default, there is simply no option to do this in the standard Debian / Ubuntu installer. And this is definitely not a simple task when you have services accessing the network for various reasons. The default Debian / Ubuntu installer also suggests additional packages for installation, and firewall rules for those packages would have to be provided. Remedying a poor initial security choice by relying on an additional layer of security is not a good trade-off for such a small convenience gain.
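For completeness, here is roughly what such a default-deny setup looks like with ufw, which ships with Ubuntu; note that it still has to be done by hand after installation, and every legitimate flow then has to be re-allowed explicitly (the rules below are examples only):

```
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw allow out 53/udp   # DNS has to be explicitly re-allowed…
sudo ufw allow out 80/tcp   # …as does HTTP access to the package mirrors
sudo ufw enable
```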

Having services started by default also leads to another issue: you need prompts asking the administrator for defaults for some services whose defaults can’t really be figured out automatically.

Finally, services will be automatically restarted upon updates, and there is no clear way to disable this either. While I agree that services should be restarted as soon as possible after an update, this is a decision that should be left at the administrator’s discretion.

Update (2014-05-01): It was pointed out to me that you could use the DPkg::Pre-Invoke and DPkg::Post-Invoke options in /etc/apt/apt.conf to run a script before and after a package is installed. This would allow you to automate the “start/restart disabling” trick, as sketched below.
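A minimal sketch of that automation, reusing the policy-rc.d mechanism shown earlier (the file name is arbitrary, and the small “deny” script is assumed to have been prepared beforehand):

```
# /etc/apt/apt.conf.d/99-no-auto-start (arbitrary name, sketch only)
# /usr/local/share/policy-rc.d-deny is assumed to be a two-line script:
# "#!/bin/sh" followed by "exit 101".
DPkg::Pre-Invoke  { "install -m 0755 /usr/local/share/policy-rc.d-deny /usr/sbin/policy-rc.d"; };
DPkg::Post-Invoke { "rm -f /usr/sbin/policy-rc.d"; };
```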

All those choices mean that administrators will have to be extremely careful with package installations and updates. An unfortunate side effect for features supposed to make them less cumbersome.

Reason 4: MySQL (even though MariaDB is also available)

The MySQL version available in the Ubuntu 14.04 repositories will be the one from Oracle. Mark Shuttleworth backed this choice with the following arguments:

It’s very potent when we are able to give an upstream the ability to deliver their best bits directly to Ubuntu users using the awesome immediacy of the packaging system - we can only do that when we have established a shared set of values, and this is a great example.

As for phobias, the real pitchforks have been those agitating against Oracle. I think Oracle have been an excellent steward of MySQL, with real investment and great quality.

No doubt the MariaDB developers, the original MySQL developers, will have a different view of the subject at hand, just like all the other distributions that chose MariaDB over MySQL. They also made a comparison of features and recently announced the latest version of their MySQL fork.

Update (2014-04-30): I missed the fact that the MariaDB package is available in the Ubuntu 14.04 universe repository.

Update (2015-05-21): As Vincent Bernat correctly pointed out, the universe repository is community-maintained and thus not guaranteed to include the latest security fixes (see Repositories: Universe). This is still an issue.
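You can check which component a package comes from before installing it; a quick sketch (output details vary):

```
# The policy output lists the archive a candidate version comes from;
# for MariaDB on 14.04 the origin should mention the universe component.
apt-cache policy mariadb-server
```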

Reason 5: Questionable security choices

Pollinate

Pollinate is a new “security feature” introduced in Ubuntu 14.04 LTS cloud images. It is a script which fetches a random seed from a pool of “entropy servers” to seed the Linux PRNG. There are several issues with this approach (make sure to read the comments too), and I’ll try to summarize them here.

First, the main goal of this script is to improve the quality of seeds on Ubuntu cloud instances where no random sources are available (no virtio-rng virtual driver, no unpredictable input, no per-instance pre-launch seeding). To do this, it fetches random input from a pool of other servers; by default those are the instances hosted by Canonical, which deliver chunks of random data in return.

  • Obvious issue: you have to trust that their servers are OK (well, you’re using their software already, so you’d better trust them anyway);
  • The initial TLS session will be made with very little entropy available, thus leaving the possibility that an attacker could either completely hijack the session or simply decipher the content to gain notable insight into the seeding state of the cloud instance PRNG.

One “nice” effect of this “security feature” for Canonical is that they will receive a request for entropy each time a new Ubuntu instance is booted in a cloud that has not changed the default server pool for the pollinate script. Like most default configuration options, it is unlikely people will ever bother changing it. Thus Canonical will have a number to brag about, even though it will hardly amount to anything real, as there could be more instances in use (private clouds) or fewer (what if I do each build of my software in a fresh Ubuntu virtual machine for testing?).

As explained in the comments on the post mentioned above, the right way to seed virtual machines in the cloud is to either:

  • use the host to generate a file with random content and put it in the virtual machine disk image before starting it (virt-builder is able to do this, for example). This is hypervisor-independent;
  • enable the VirtIO RNG driver in the virtual machine kernel and use the Qemu switch to enable it for virtual machines (documentation). This is obviously Qemu/KVM-dependent (a sketch follows below).
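As an illustration of the second option, the QEMU side looks roughly like this (sketch only; disk, network and the other usual options are omitted, and libvirt can generate the equivalent <rng> element for you):

```
# Expose the host's /dev/urandom to the guest as a rate-limited virtio-rng
# device; the guest's virtio-rng driver then feeds it into the guest entropy pool.
qemu-system-x86_64 \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000
```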

Note: Pollinate/Pollen may actually be an effective way to seed Ubuntu cloud instances on bad cloud providers, but the question here is: are the benefits worth the trouble and the risks? I’m not so sure.

AppArmor

Note: I’m biased here as I mostly studied SELinux.

I recently had a look at the AppArmor support in libvirt (which is implemented as a sVirt driver).

When a new virtual machine is launched using libvirtd, a helper process (/usr/lib/libvirt/virt-aa-helper) will generate an AppArmor profile from the virtual machine’s libvirt XML configuration. This profile will allow (AppArmor) access for the Qemu process to each and every element defined in the virtual machine configuration. Among those elements is folder sharing from the host to the virtual machine using file system passthrough with VirtFS (Plan 9 folder sharing over VirtIO).

The virt-aa-helper will happily translate any path chosen for sharing in the virtual machine XML configuration into an AppArmor rule allowing read/write access. This disables a good part of the potential protection offered by AppArmor. By itself this will not lead to root access on a system, as DAC (Discretionary Access Control, or classic Unix RWX permissions) is still enforced (and the Qemu processes run as a restricted libvirt-qemu user). But it removes the “Mandatory” part of Mandatory Access Control, as a simple user allowed to configure virtual machines can disable parts of AppArmor for some of them.
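To make this concrete, here is the kind of guest XML fragment involved (paths and tag are made up). The per-VM profile generated by virt-aa-helper then contains a rule granting the Qemu process read/write access to whatever directory was put in source:

```
<!-- Illustrative VirtFS / 9p share in a guest definition (paths made up) -->
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/srv/shared'/>  <!-- any host path the VM configuration author picks
                                    ends up read/write allowed in the generated profile -->
  <target dir='shared'/>
</filesystem>
```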

A full exploit here would first require an exploit to escape from the virtual machine to the host, and then a classic local root exploit, which would not be hindered by AppArmor as much as it should have been.

The main issue I see here is that this model allows “untrusted” users to significantly influence the AppArmor profile applied to a virtual machine.

Note 1: I used the VirtFS feature here as it will not prevent Qemu from starting if access is denied to a particular file or directory due to DAC checks, but any device entry in the virtual machine XML configuration could be used to achieve the same results.

Note 2: This is a case where MAC checks should prevent users (or even administrators) from doing stupid things. The most important part of the “Mandatory Access Control” model is that the only party making access control decisions at runtime should be the kernel (the reference monitor), based on a policy that has been written in a trusted environment and extensively verified. Partially generating policies at runtime and under user control goes against this principle.

Nothing more than the others

Dustin Kirkland shared the slides from a talk about Ubuntu security he gave recently. What’s important here is that most of what’s mentioned in the talk is not specific to Ubuntu and is also available in other Linux-based distributions. The ones that are Ubuntu-specific are the Pollinate “feature” discussed above (Update (2014-05-01): and AppArmor as the default).

Reason 6: Compiz & Mir

“OK, so Ubuntu is not OK on servers, I get it. But I like it on my desktop and laptop, so I could surely keep it there?”

Well, again, you should not. The main difference between a server and a desktop is the graphical stack, which includes the graphics card drivers, the display server, the OpenGL libraries… Ubuntu will rely on unsupported software for this part too, as they are still using Compiz as a compositor, even though its development stopped more than two years ago. Unity is the only desktop environment still using Compiz, thus the Ubuntu developers are the only ones maintaining it.

They also chose to work on their own display server and compositor: Mir. This decision has already been heavily criticized.

Although Mir isn’t the default display server in Ubuntu 14.04, the code to run it has been added to Mesa and other graphics-related packages. This means that those packages carry custom patches that are not supported upstream and are worked on only by Mir developers.

Other supposed reasons you should choose Ubuntu

Conclusion

Ubuntu 14.04 is a Long Term Support release. Yet the default server and desktop installations feature software that is unsupported outside of Ubuntu, and some of it has already been unsupported for more than a year.

I don’t understand how people can accept such an oxymoron.
