Mike on Tuesday April 29 2014 11:02:08 said:

Let's see the French counter-arguments:

https://linuxfr.org/users/siosm/journaux/ubuntu-14-04-lts-pourquoi-il-vaudrait-mieux-ne-pas-du-tout-s-en-servir#comment-1535423

Sjors on Tuesday April 29 2014 18:01:51 said:

"I don’t know the people maintaining kernel patches for Ubuntu, but I’m skeptical with regards to their capability to properly maintain and backport fixes to a kernel that no other distribution will use."

Any specific reasons to be skeptical about that?

lordbaco on Tuesday April 29 2014 18:20:19 said:

Did Red Hat pay you for this article?

First of all, let's compare the number of packages in Ubuntu/Debian vs. Red Hat/CentOS:

Red Hat/CentOS contains roughly 3000 packages.
Debian/Ubuntu contains well over 38000 packages.

It's important to have a good repository with the software you need... otherwise you have to compile it yourself or install it from strange RPMfind mirrors or unmaintained sources...

My point of view regarding Ubuntu 14.04 LTS server:

Pro
* Juju Charms are really really cool!
* Ubuntu is the reference OS for the OpenStack project: http://www.zdnet.com/openstacks-top-operating-system-ubuntu-linux-7000027360/
* The number of packages: 38k

Con
* MySQL selected over MariaDB (Oracle probably paid Ubuntu...)
* The Upstart vs. systemd vs. System V init mess; it will take months or years to get init-level stability for all services... but it's important to have faster boot, like Mac OS X's launchd
* Online search in Unity (but you can disable it in 10 seconds; they also received money from Amazon, etc.), or use https://fixubuntu.com/

Corbin on Tuesday April 29 2014 18:52:34 said:

Let me respond to these one by one. 

Reason 1:

The Ubuntu Kernel team has a known history of supporting Linux kernels on their own and their track record speaks for itself. There is no reason they should have any trouble continuing to do so.

https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Reason 2:

The systemd vs. Upstart argument needed to be settled at the Debian level. The simple fact is neither system is "superior", as the Debian group clearly pointed out. The fact is that there was no time to switch to systemd in 14.04, so the only real question was whether Ubuntu would continue supporting only Upstart or would support both to allow a move. Would you have rather they just stuck with Upstart?

Reason 3:

How does this have anything to do with Ubuntu 14.04? This just makes it clear you are writing this purely because you don't like the Debian ecosystem. If you wanted to have a discussion about that, then write a blog about Debian.

Reason 4:

Oh yes, how horrible to provide direct installs for the most popular open-source database. The simple fact is that the only people who care about MariaDB vs. MySQL know how to add a repo. Those who just want to install WordPress/Drupal/etc. really fast just want Oracle MySQL. The bigger question here is: can somebody please tell me why the MariaDB people don't take the tiny effort to set up a PPA?

Reason 5:

I'm just going to point out your own closing statement. Because yes, nothing more than others, but also nothing less. Considering how much of the rest of this article you bitch about Ubuntu having their own projects instead of using the general ones, it seems a little strange that you would be angry they choose to invest most of their security resources into shared projects.

Reason 6:

You might want to check out this thing called a "Smart Phone"; you see, Ubuntu is planning to run their OS on it. Something that Wayland and X were never designed for. It has already shown far greater performance than X and is far further along than Wayland. Was Ubuntu supposed to wait 2 years for Wayland to get finished while Microsoft continues to secure its 3rd-place status? As for Compiz, would you have rather they moved right to Unity 8 without a short-term release buffer?

Reason 7:

You're right, popularity is not always the best way to pick a system. Though it does make it a lot easier to find help. The simple fact is that this really shows why YOU shouldn't use Ubuntu 14.04. It isn't made for you. You are from the old guard where software is written once to spec and then installed and left until it either breaks or needs to be replaced. Cloud/IaaS/PaaS, as you point out, are buzzwords: not meant to mean a new feature but a new way of thinking about and approaching software. Ubuntu is for those of us who continually update and evolve our software, who no longer cling to some antiquated idea of big monolithic software.

Reason Mine:

Use Ubuntu 14.04. It may not be totally LTS... but if you don't, you're gonna get left behind.

About Me:

I am a Software Engineer. My server software runs on Fedora/Ubuntu/Mac OS X/Windows. I chose Ubuntu for my cloud server because it gives me the best stability and takes the least amount of time to set up out of any of those. As for the software, my Ubuntu 14.04 servers currently run, among others: MySQL, PHP 5.5, Node.js, Nginx, Couchbase, Elasticsearch and Docker.io

chrisv on Tuesday April 29 2014 19:15:04 said:

@Corbin, Reason #4:

No need for a PPA, MariaDB is already available in the official repo: http://packages.ubuntu.com/search?suite=trusty&searchon=names&keywords=mariadb

Corbin on Tuesday April 29 2014 19:26:50 said:

@chrisv

Thanks, I meant the latest fork, as was pointed out in the article, which is MariaDB 10:

https://downloads.mariadb.org/mariadb/repositories/#mirror=jmu&distro=Ubuntu&version=10.0&distro_release=trusty

But it is a good point that Ubuntu does have MariaDB as an alternative, so complaining that they updated their MySQL to the newest Oracle version seems very strange.

Siosm on Wednesday April 30 2014 10:20:04 said:

@lordbaco:

> Did Red Hat pay you for this article?

No thanks, those opinions are my own.

> It's important to have good repository of packages of the software you need...

I'd rather have a few well-supported packages than a ton of unsupported ones. As said in the article, numbers hardly tell us anything about quality.

> Juju Charms are really really cool!

Interesting technical argument.

> Ubuntu is the reference OS for the OpenStack project
> The number of packages: 38k

Debunked in the article.

@Corbin:

> The Ubuntu Kernel team has a known history of supporting Linux kernels on their own and their track record speaks for itself. There is no reason they should have any trouble continuing to do so.

Those kernels are supported for short-term-support Ubuntu releases, which are rarely used on production servers.

> The systemd vs. Upstart argument needed to be settled at the Debian level

Was the switch to Upstart (vs SysVinit) resolved in Debian before Ubuntu chose to use it?

> The simple fact is neither system is "superior" as the Debian group clearly pointed out.

This collection of bugs suggests the opposite: https://lwn.net/Articles/582585/

> How does this have anything to do with Ubuntu 14.04?

If the behaviour is the same in Ubuntu then it does have to do with Ubuntu.

> MySQL

I made a mistake here indeed. I'll fix it shortly.

> You might want to check out this thing called a "Smart Phone"; you see, Ubuntu is planning to run their OS on it. Something that Wayland and X were never designed for.

Unfortunately this is false as the Jolla is already shipping with Wayland.

> It has already shown far greater performance than X and is far further along than Wayland.

I'd really like to see those benchmarks.

> Was Ubuntu supposed to wait 2 years for Wayland to get finished while Microsoft continues to secure its 3rd-place status?

Wayland is already shipping.

> As for Compiz would you have rather they moved right to Unity 8 without a short term release buffer?

Compiz has been dead for more than two years, almost since the last LTS. Alternatives should have been sought earlier.

> Though it does make it a lot easier to find help.

Most of the tutorials available on the web are wrong at some point. Having a lot of people write about how they insecurely installed some software isn't a plus.

> You are from the old guard where software is written once to spec and then installed and left.

This blog is hosted on an Arch Linux server. I'm not really an "install & forget" kind of guy.

Reven on Wednesday April 30 2014 12:16:57 said:

Great article. I think the same things.

I will wait for CentOS 7. I hope it arrives shortly.

Frank on Thursday May 01 2014 04:30:14 said:

Cool article. I wasn't sure what to decide for my company's IT private cloud infrastructure (around 800 VMs for labs), and I think not choosing Ubuntu is the better choice. We will go for Red Hat (since money is not an issue for us).

indolering on Saturday May 10 2014 09:11:39 said:

I don't think highly of your piece, but I will correct you on pollinate: it mixes with the existing randomness. It's the same reason Linus didn't need to mitigate the backdoored hardware random number generators: a nonrandom seed hashed with a truly random seed is still a random seed.
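
The mixing argument is easy to demonstrate. Here is a minimal sketch (this is not Pollinate's actual code; the `mix_seeds` function and the variable names are made up for illustration) of combining a potentially attacker-known seed with locally gathered entropy:

```python
import hashlib
import os

def mix_seeds(known_seed: bytes, secret_seed: bytes) -> bytes:
    """Mix two entropy sources by hashing them together.

    If either input is unpredictable to an attacker, the output is too:
    predicting the hash would require knowing both inputs.
    """
    return hashlib.sha256(known_seed + secret_seed).digest()

# A fully predictable seed (e.g. data an attacker could observe on the wire)...
known = b"pollinate response, possibly observed by an attacker"
# ...mixed with locally gathered randomness.
secret = os.urandom(32)

mixed = mix_seeds(known, secret)

# Deterministic given both inputs, but useless without the secret part:
assert mixed == mix_seeds(known, secret)
assert mixed != mix_seeds(known, os.urandom(32))
```

The point of the sketch: adding known data to the hash input never reduces the entropy already contributed by the secret part.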

indolering on Saturday May 10 2014 09:17:40 said:

Also, what's stopping you from enabling SELinux?  I'm not familiar with either system, but I know that Ubuntu supports it.  

Siosm on Saturday May 10 2014 09:48:44 said:

@indolering:

The issue here is that the existing randomness isn't truly random when running in a virtual machine with no pre-seeded entropy. Thus adding more potentially known data doesn't really help, nor does it make things worse. Pollinate by itself isn't really a horrible thing, but there are other solutions which give much better results and aren't as controversial.

Good luck with SELinux on Ubuntu (https://wiki.ubuntu.com/SELinux), as it is officially not being worked on, and good luck with it on Debian: the last time a friend tried it, he gave up. I'll admit it's been a while since I last tried. Supporting SELinux (especially the policy) in a distribution is time-consuming (I co-maintain the SELinux packages for Arch Linux).

indolering on Friday May 16 2014 22:33:51 said:

RE: Pollinate

Note: Pollinate/Pollen may actually be an effective way to seed Ubuntu cloud instances on bad cloud providers, but the question here is: Are benefits worth the trouble and the risks? I’m not so sure.

Worst-case scenario is that this does not add any entropy to the system. Best-case scenario is that it protects Ubuntu users from lazy admins at cheap hosting facilities. That's a positive balance in my book :)


RE: SELinux

Yeah, like I said, I have no history with SELinux, so I'm willing to believe that it's not a pleasant experience. What are your thoughts on how difficult it is to run on Arch Linux and Red Hat-based distros? Is it enabled by default on any major distro, like AppArmor is for Ubuntu?

I'm a firm believer in usability == security. OpenBSD may be very secure, but take all that security and multiply it by the usage and you get an infinitesimally small bump in overall security. My hunch is that SELinux is too cumbersome to run by default and that AppArmor provides a more usable alternative. If you are keeping score, then the very security practices you call questionable become much more reasonable.

Siosm on Wednesday May 21 2014 03:23:14 said:

@indolering:

> Worst-case scenario is that this does not add any entropy to the system

In the worst-case scenario, it does not add entropy to your system, leaks the fact that your system does not have entropy, delays first boot, informs Canonical you're booting a new Ubuntu instance, and opens a new first-boot attack surface.

> I'm willing to believe that it's not a pleasant experience.

It's not a pleasant experience on distributions that do not maintain it. On Fedora, it's enabled by default and I like it a lot. On Arch Linux, I'm working on it in my free time, so it's going slower than I'd like.

> I'm a firm believer in usability == security.

And I must agree here. And SELinux is completely usable. Let me explain:

Very few people know how to properly configure a Linux kernel, yet everyone using Android uses it and never thinks about it. Very few people know how to properly write SELinux policies, yet a lot of people using Fedora, CentOS and RHEL use them.

Try Fedora 20, you'll see how usable SELinux has become and how powerful it actually is.

> OpenBSD may be very secure, but take all that security and multiply it by the usage and you get an infinitesimally small bump in overall security.

I'm distantly following the LibreSSL development and I must say I am concerned about how development happens in the OpenBSD project. But this is for another post.

> My hunch is that SELinux is too cumbersome to run by default and that AppArmor provides a more usable alternative.

Again, try Fedora 20 before saying that SELinux is unusable.

There is a fundamental issue with AppArmor: it does not cover all cases. This is related to another argument often raised by SELinux opponents: SELinux rules may appear to be too complex. But the rules can only be more complex than the Linux kernel interface; otherwise you would be in one of the following cases, which are hardly acceptable:

* Less complex rules covering the whole interface: some cases cannot be expressed in the policy and thus are left unconfined;
* Less complex rules covering part of the interface: some cases are not taken into account in the policy;
* Rules only as complex as the interface: interesting properties not covered by standard access control mechanisms cannot be expressed in the policy.
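
To make the complexity point concrete, here is what an illustrative refpolicy-style SELinux rule looks like (the `httpd_t`/`httpd_sys_content_t` types are the standard Apache ones from the reference policy, but the exact permission sets are just an example, not a complete or recommended policy):

```
# Let the Apache domain read its content files, and nothing more:
allow httpd_t httpd_sys_content_t:file { open read getattr };

# The same decision has to be made again for directories, symlinks,
# sockets... because the kernel distinguishes those object classes:
allow httpd_t httpd_sys_content_t:dir { open read search getattr };
```

Each object class and permission mirrors a distinct kernel-level operation, which is why the policy language cannot be much simpler than the kernel interface it mediates.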

mik on Thursday May 22 2014 09:31:17 said:

Other reasons:

* On launchpad.net they start by helping you as if you're a noob, but when they realize you're facing a real bug that didn't exist in the previous release, they let you down without a word. So you're alone to find a workaround (in my case, downgrading the kernel)... unless I just give up on Ubuntu.

They're shooting themselves in the foot & that's a shame: with this shitty LTS release, who is going to tell their friends & family to give up on Windows, knowing there will be a ton of workarounds to dig through before getting a decent set-up?

* In summary, 14.04 is not working out of the box & that's sufficient reason not to use it.

hadrons123 on Friday June 06 2014 15:49:58 said:

Tim,

This has been a very good read since I was feeling the same about most of the issues surrounding Ubuntu, specifically the LTS version.

The worst thing Canonical could do is remove systemd from their repos.

I tried installing Lubuntu and found that an LTS release cannot use XFS for the root partition; the installer crashed instantly when I tried to install.

The nm-applet doesn't appear on the lxpanel post-installation, even though this issue was documented in their release notes. These are all basic bugs which should have been sorted out for an LTS release.

The quality is appalling. I was using Xubuntu for a few days and I ended up moving back to my F20. Some of the packages are years old or totally unsupported by upstream. The graphics stack is up to date, though.

Aditya on Sunday June 15 2014 10:23:33 said:

I won't comment on the post, but I will give my opinion on which distro to use:

Ubuntu - want looks, ease of use (for newbies)

Arch - for those who want a lightweight distro and the latest updates

Fedora - for those who want a taste of enterprise for free

RHEL - for those who want to take the tested, safe enterprise path

Debian - for those who want a stable OS

I haven't gone through CentOS so I'll not speak of it.

lordbaco on Sunday June 15 2014 13:16:53 said:

Users of each Linux distro

https://mobile.twitter.com/lordbaco/status/436143366187651074

Debian: Stable for Production servers, free, released when ready, best distro for upgrade without reinstall, DEB reference & best packaging, The Universal OS? (Google switched prod from Red Hat to Debian)

Ubuntu: OpenStack reference, Juju Charms, MAAS, some innovations but not so stable. Unity for dummies (Mint alternative), better supported by third-party vendors than Debian (desktop forks: Mint, Pear OS, elementary OS, etc.)

Red Hat: Stable for Production Enterprise servers, $$$, innovations (libvirt, spice, etc.), RPM reference

CentOS, Scientific Linux: enterprise for the rest who still need RPM distro

Gentoo: nice for tuning software compilation but requires CPU power or distcc to compile the latest software. Had one of the best Linux wikis before they lost it...

Arch: Best Wiki for Linux, smart distro for desktop but not as good for servers

Fedora: sandbox of RedHat

ChromeOS & CoreOS: KISS for Desktop & Server (Docker)

No Redhat on Monday June 16 2014 12:12:51 said:

Don't use Red Hat, except when you want to use SELinux. Even then I question its value.

Red Hat makes its own modifications to the sources to suit itself. But more often that breaks other open-source software if you want to build it from source.

I'd recommend Debian for its stability, and also when you want all the features a GNU/Linux platform has to offer: the testing and unstable branches. Furthermore, it has a huge fan base.

The best vanilla GNU/Linux systems, in my opinion, are Slackware and Arch Linux.

And as a novelty distribution one can use Gentoo Linux, which also has potential as a valuable training tool, because it gives the user insight into the process of creating one's own GNU/Linux operating system, with comfortable build scripts and tooling.

And if you really like to be creative, use Linux From Scratch.

William D on Wednesday June 18 2014 10:44:16 said:

On the comments by Mik and buntupissed above - I have been using 14.04 since it was released, with cairo-dock as the DE and, for the last couple of weeks, with systemd as init. I have had no problems whatsoever with either Ubuntu or systemd. Admittedly, systemd has not been completely ported (is that the right term?), so I'll wait and see what develops.

Siosm on Thursday June 19 2014 06:33:57 said:

@ Aditya, lordbaco, No Redhat:

This article is not about which distribution is better for each kind of user. It's about the default choice that should be recommended.

> Ubuntu - want looks, ease of use (for newbies)

Ubuntu is hardly easier to use by itself when compared to Fedora, for example. Its reputation only comes from the fact that a lot of howtos are available on the net (although most of them are full of errors) and that a lot of people have learned how to use it.

> CentOS, Scientific Linux: enterprise for the rest who still need RPM distro

Remember that CentOS is Red Hat Enterprise Linux without the branding. The main difference is the lack of support.

> Fedora: sandbox of RedHat

That's really funny, as Arch Linux integrates a lot of what's added to Fedora (sometimes going even further; the /usr merge for example), but it has never been considered a RHEL sandbox...

And if Debian is released when ready, what about Fedora?

> ChromeOS & CoreOS: KISS for Desktop & Server (Docker)

Those projects are unrelated to one another and very specific in their use cases. I'd definitely not recommend them for general usage.

> Arch: Best Wiki for Linux, smart distro for desktop but not as good for servers

Why? This server runs Arch Linux and everything is fine. The fact that you may not want to use it for a server does not make it unsuitable for servers.


@ No Redhat:
So you're criticizing Red Hat for patching software and recommending Debian instead, when Debian is well known for carrying a lot of patches in-house and making not that many efforts to upstream them, whereas Red Hat makes tons of efforts to ensure patches go upstream. Don't you see any problem here?

Debian is also known for breaking software because it changes names and code as it wishes, to fit its naming scheme.

As far as I remember, Slackware does not have a proper package manager.

Linux From Scratch is a learning tool. I fail to see the creativity here.

Guy Baconniere on Thursday June 19 2014 20:18:25 said:

Arch for Server

http://fomori.org/blog/?p=468

<< The problem is that you need to pacman -Syu regularly, because if you don't, you don't have any chance of getting your remotely installed server back in shape. This is due to the many changes Arch Linux goes through to stay the cutting-edge distro it is. >>

Debian for Server

http://w3techs.com/technologies/details/os-linux/all/all

http://www.computerweekly.com/blogs/open-source-insider/2013/05/international-space-station-adopts-debian-linux-drop-windows-red-hat-into-airlock.html

https://www.usenix.org/conference/lisa13/technical-sessions/presentation/merlin

http://www.informationweek.com/cloud/infrastructure-as-a-service/googles-cloud-drops-custom-linux-for-debian/d/d-id/1109911?


Let's take top 10 distros on distro watch: http://distrowatch.com/

5 distros based on .DEB (Debian...); the top 3 are all DEB-based

3 distros based on .RPM (Red Hat...)

1 distro based on .PKG.TAR.XZ (Arch's pacman)

1 distro based on .PET (Puppy)


Let's compare Debian / Red Hat / Arch by the number of distros based on them...

http://en.wikipedia.org/wiki/List_of_Linux_distributions

Luis Costa on Tuesday July 01 2014 10:41:12 said:

Every distribution has its choices. If you don't like them, change them; that's the best thing about every Linux.

I started using Fedora a long time ago, but now I use Ubuntu everywhere; for me it is the OS I spend the least time on.

About the cloud: Red Hat started to switch to OpenStack last year; Ubuntu/Canonical has been on OpenStack for longer than that.

But the things I have written are no reason for choosing an OS, and neither are your reasons. That's why we, the open source community, have several flavors: we can choose, we have the option.

Try to be constructive and not destructive.

The best for you all.

Daniel on Wednesday July 16 2014 16:49:39 said:

Reason 1: Non LTS Linux kernel

The community can patch the Ubuntu kernel; Ubuntu is open source. Linux 3.10 LTS in RHEL 7 is (and will be) heavily patched by Red Hat; it is not very similar to the original 3.10.

Reason 2: Upstart

There is a new Upstart version for Ubuntu 14.10, so it is still maintained.

Reason 6: Compiz & Mir

"Unity is the only desktop environment still using Compiz" - GNOME is the only desktop environment still using Mutter, KDE is the only desktop environment still using KWin...
Compiz is still maintained. Canonical was the main developer of Compiz, so it is the most experienced at maintaining it.
Mir is not used in Ubuntu 14.04.

Daniel on Wednesday July 16 2014 16:57:57 said:

RHEL 6

1) The extremely heavily patched kernel 2.6.32 has its own bugs which are not present in upstream 2.6.32

2) Very slow start, so higher downtime during the restart after a kernel update.

3) Very old libraries, so many new programs are not installable. For example HipHop PHP (server side), Chrome (desktop side), Skype (desktop side)

4) Many missing programs. A distribution is about having software prepared for easy installation and maintenance of security fixes. RHEL often needs 3rd-party software or compilation, which can be a security or stability risk.

Siosm on Wednesday July 16 2014 18:38:02 said:

@Guy Baconniere:

Comparing Distrowatch rankings is hardly a useful metric (http://distrowatch.com/dwres.php?resource=popularity), nor is comparing the number of distributions based on another.

I've updated several Arch Linux installations that had not been updated for a long time (including remote installations) and I did not encounter any issue not already referenced in the Arch Linux news. If you want perfect uptime, you need to avoid any single point of failure and test updates before rolling them into production, whether the distribution you're using is Debian, RHEL or Arch Linux.

@Daniel:

Did you even read the things I wrote between the titles? Because most (if not all) of what you're saying is debunked/explained there.

Philipp on Tuesday August 12 2014 15:04:58 said:

You mentioned many good points. Some time ago, I wondered if I was the only person who feels totally uncomfortable with Debian's stupid behaviour of starting services directly after installation - good to know that I'm not the only one. However, CentOS also has some weak points, if not as many as Ubuntu. For example, you recommend CentOS as a desktop OS - why the fuck are the fonts so fucking ugly in CentOS? As a solution, many people recommend installing some third-party font renderer called "infinality" or whatever. As a desktop user, I don't care about laws, I just want good-looking fonts. And Ubuntu has good-looking fonts. Also, some people already mentioned that there are simply not as many packages available in CentOS as there are in Ubuntu. That's so true. Try to install XBMC in CentOS 7 - in Ubuntu it's just adding a PPA and running the apt-get install command; in CentOS the solution is: compile it yourself. Are you fucking kidding me? No, I won't compile that shit by myself, because I don't want to install tons of dev libs and I also don't want to recompile XBMC every time there is a new release.

What's also funny - or rather sad - is the CentOS installer. It starts with some basic usability fails, like: they placed the button for going to the next dialog (also called the "next" button) in the upper LEFT area of the screen. WTF, how stupid is that? Can't Red Hat afford at least ONE UX employee? The other thing is, for example, if I want to do only a minimal install (e.g. for a server), I would probably go with the minimal install CD. So far so good. It boots, and when it comes to the package selection, it asks me to ENTER a mirror manually. WTF? Why can't they just provide a list of mirrors?

It's always the same shit with Linux - if you ask me, both distributions suck in their own special way.

loser on Wednesday August 13 2014 23:09:07 said:

CentOS apparently lacks EC2 support. CentOS 7 images for EC2 don't seem to be forthcoming, and the older ones seem to be lacking attention. The wiki states:

One key benefit of going down the route of using the Market Place is that you will then be subscribed to receive important security and bugfix announcements about these images, issued by the CentOS Project via Amazon's services.

But they all are marked "Includes Updates: No."

Using the AMI's directly is currently deprecated. We are working to resolve issues and establish automation as well as monitoring around resources and process that will allow us to re-enable direct AMI instantiation. In the mean time, the AMI's are listed at the bottom of this document for legacy / reference purposes.

Amazon puts limitations on the boot volume of instances created from Market Place listings.

mvaldez on Sunday August 24 2014 09:17:41 said:

Hi. Thanks for sharing your opinion. We use a mix of Ubuntu and CentOS/Red Hat for public-facing servers, but we are stopping using Ubuntu because of some of the choices made for 14.04. (For internal servers we use Debian and Slackware.) Yes, with CentOS we usually have to compile some packages, but that's OK (we build once, then automate the deployment on multiple servers).

For desktops we still use Ubuntu, but the 12.04 version (with GNOME). Ubuntu 14.04 had some usability problems in our tests (with our users).

My only two complaints about CentOS (so far) are the choice of XFS as the default filesystem and some missing packages (available in older releases).

So, we will test the next Ubuntu server LTS in a couple of years. In the meantime we are using CentOS 7 for all servers and Ubuntu 12.04 on desktops.

Regards, MV.

Vincent Bernat on Thursday August 28 2014 13:11:45 said:

Just a quick note about MariaDB. Being in universe is quite different from being in main. In universe, you have absolutely no guarantee of getting any fix (including security ones). Only packages in main carry the LTS label. See https://help.ubuntu.com/community/Repositories#Universe

Elsac on Saturday August 30 2014 12:38:25 said:

I still can't understand why or how someone can use Fedora for a server. I have experience with Fedora since Fedora 14. The last 3 times I installed Fedora 20, the system crashed within 2-3 days.

For some reason Arch, despite being bleeding edge, never crashed on me. That was really impressive.

The lack of proper systemd support in Ubuntu 14.04 was disappointing, but not enough of a reason not to use it. I still hate having to write custom SysV init scripts, but still not enough to switch to another distro completely.

SELinux vs AppArmor is a good point, but, as you don't secure your system with just that, it finally comes down to the admin. If you are used to AppArmor, the best thing to do would be to stay with it. Trying to write SELinux policies without mastering them will end up in disaster (from the experiences of some colleagues; Fedora 20 might have made it easier, but I couldn't enjoy my Fedora 20 systems long enough to find out).

Package management - I personally prefer apt, but pacman was impressive. However, being tied to one way of doing things is not recommendable: if you are a system admin, you might have to manage systems with various distros somewhere along the line. For desktop users: experiment with various systems until you find one you like. All of these are Linux systems; if you find something about your favourite distro that you don't like, you can change just that. Most of the things being discussed here won't even affect an average desktop user.

So, my advice to desktop users - forget all these discussions, and use whatever you like.

Foo on Tuesday September 02 2014 16:24:27 said:

I surely hope Red Hat paid for this hit piece, because otherwise this makes little sense.

"Software not having support outside of Ubuntu" is a ridiculous argument. The Ubuntu kernel team maintains the kernels that Ubuntu uses, and they do not need the help of other parties (although that would be nice).

So, if your argument boils down to Ubuntu needing to run the kernel, init system, windowing system and whatnot of RHEL for it to pass your test, then I think you are, indeed, better off running RHEL. Oh, and you have not studied AppArmor, so that's also an issue. COME ON.

The starting of services post-install is the only valid argument you have. It is a minor problem of the Debian system... and RHEL has its own minor glitches.

This is sad.  Nothing more to see here.

-Foo

Chris Smith on Tuesday September 16 2014 16:18:18 said:

Thank you for writing this up.

Unlike the majority of the respondents here, who are desktop users, zealots and hobbyists, I'm actually responsible for a fair number of business-critical production 10.04 LTS machines, and we had to think hard about the upgrade cycle on these. They have been nothing but EXCRUCIATING PAIN from day one: various bugs resulting in us building our own packages, and constant kernel problems with CIFS/SMB.

In our testing with 14.04 LTS, half our test suite didn't run due to package bugs. Not to mention that the python3 that ships TO THIS DAY is broken on 14.04 LTS: venv is shafted, as are the PERC RAID drivers. Total disaster.

Even paid-up Canonical support is crap. No solutions. Better luck on public Launchpad, and even that is a veritable wasteland of negligence.

We're off to CentOS 7 this time and will buy in a single RHEL 7 license for the support entitlement.

Don't let the Ubuntu zealots and idiots kick you down; you're 100% right on all counts, particularly on the service management and proprietary-direction things.

JDS on Wednesday September 17 2014 15:24:47 said:

The takeaway here is probably that the truth is somewhere in the middle. We've been very successful running a SaaS-based software business using Ubuntu, and, while there are occasional Ubuntu-specific problems, the truth is that RHEL has its own set of slightly different but similar problems. To take too strong a stance on either side is just silly, IMO.

najjar on Sunday September 21 2014 22:36:35 said:

I totally agree with this article, and do not agree with the direction Canonical is leading Ubuntu in. Very nice comparison. I have experience with both Debian and CentOS, and I can see the advantages each has that the other doesn't. In the company I work in we use Debian stable for servers and unstable for desktops, and we are very satisfied. For anyone who would like to choose a server OS, I would recommend either CentOS or Debian stable, and for desktops choose whatever you like as long as it does not conflict with the basic philosophical idea behind the GNU/Linux project. Personally I see that Ubuntu/Canonical are shifting away from there; I would add that to the reasons not to go for Ubuntu, neither on a server nor a desktop. Keep up the good talk.

Oerthling on Saturday October 11 2014 13:36:27 said:

You have a preference for RH - that's fine. IMHO all of the major distros (and some of the smaller ones) have their strengths and weaknesses.

But many of your arguments look a bit weird to me.

From the very beginning, Ubuntu has been Debian plus patches. Long before Upstart, Software Center and Unity, Canonical maintained various patched packages particular to Ubuntu (naturally - otherwise it would just be Debian with a 6-month/2-year release cycle). This has escalated a lot with Unity and Mir.

I don't see why having to maintain something like Upstart - a project they created themselves - should pose a major problem. They won't add more features over the next few years and just have to patch security issues. Sounds doable to me. Regular stuff for a long-term release - not unusual at all.

It's similar for the kernel. On a server, admins won't look for new features every few weeks. As long as the RAID is supported and the network card runs at near capacity, the only thing you need from kernel upgrades is security fixes for newly found flaws. I don't see how that is so very difficult to manage. Extra work - yes. But not at all undoable.

With regard to software having bugs that are not fixed - welcome to reality. That's the case for each and every complex software environment out there. There are always bugs and everybody has them (RH et al. included). It boils down to whether the most critical ones get handled soon enough, and how many people are seriously affected.

And most of your arguments apply mostly to servers. Security and stability concerns are very different from those of desktop systems. Red Hat tends to be very conservative. That makes total sense for a distro that's mainly used in enterprises and on servers, where few people attach random hardware every day.

For desktop users the trade-offs are different. If compiz crashes every few days but that hardly affects me beyond a message box, while Ubuntu has bells and whistles and can use almost any hardware I connect to a USB port, then that's a reasonable trade-off for many people.

Also the groupings in your statistics table are weird. Why combine OpenSuse and Suse, but not Fedora and RH?

But most of all - self-reported numbers are USELESS. Utterly unreliable.

And in what alternate universe is > 50% not impressive? Sure - it's much less than 90% - but it's still a majority, and more than 20 points greater than RH/CentOS/Scientific combined.

But as the source is self-reported data we don't actually know whether Ubuntu has 85% of that "market" or RH has actually 70%.

With regard to your "quantity is not everything" remarks - yes, true. Having something on a lot of machines doesn't guarantee quality. But it is a factor (mindshare, documentation, a higher chance of somebody having noticed and fixed a problem, etc.). It is a very valid point in favour of using something. Other things might be more important - but it's an important part of making a decision for or against something.

In short - if you had said that admins should prefer more stable alternatives for a server environment, I would not have bothered to comment. Not all your points are strong, but I could see why you'd argue that on the whole.

I have been running Ubuntu 14.04 on several laptops/Netbooks since March/April - best Linux Desktop experience ever (IMHO, in my experience and given my preferences, etc..).

But on a professional server I'd run Debian (stable and familiar) or perhaps RH (the "IBM" of Linux for enterprise systems).


Harri on Tuesday October 14 2014 20:09:24 said:

Well,

lots of talk, and I waded through all the flames in there. But the point here: an LTS is something you can rely on and build on. Ubuntu is sadly showing more and more symptoms of the Windows world. Defending the easiness of Ubuntu life is understandable, but it is not accepted in Linux/POSIX/Unix land. Linux is not a free-to-use open-source Window$ replacement but a different world to live in.

On the topic of LTS: as it should be, like in the RHEL/C/SL world, an LTS is a rock-solid platform. I can state this since 1996. Linux is no longer a hobby or home "thing". We have run our business on Linux since 2000; we do process manufacturing of high-performance polymers. Linux is like a sports car: you have to know what you are doing or you are lost... User-friendliness is the cream topping on the cake, so that normal people will accept it and the services built on it. Like the VW Beetle: built in the millions for the people, but based on the Carrera, an extreme track weapon and street GT. In capable hands, even more...

So where do we stand? There is no free lunch. The OS is free to grab, but you have to invest your time and intellectual capital to make it work for you. You have a short, easy way with Ubuntu to get things working, but they will fail and crash in the not-so-distant future with the upgrades.

...or you take the *nix way, and you have to invest some in hardware and in knowledge to build the systems... but in the end you will have systems with only one problem: you forget the passwords, because you do not have to work on the servers.

I have to say that off-the-shelf laptops and workstations are easier to get working (with knowledge) with Fedora than with Ubuntu (Mint... derivatives). With servers, the RHEL derivatives C/SL are worth every second of your time. You deploy and you trust... and finally forget. Debian is easy for quick deployment, but if you know how to compile a program you do not need external package archives... etc.

So my point is: do not bring opinions from the Windows world to Linux land... they do not apply, and do not work.

hamoxen on Friday October 31 2014 17:25:41 said:

Linux is a fine server environment.  In my opinion, all the effort to make it a desktop OS is a huge waste of time.  When you have SSH and SFTP clients for all of the major OSes, and the desktops of those other OSes have superior developer toolsets available (hands-down), all of the fancy GUI stuff for Linux is a huge crapshoot, with most of it being completely unusable.

Linux needs to focus on its strengths:  Performance and stability in a server environment.  Desktop Linux, while an interesting tinker toy, still has no business being in a business/enterprise setting - which is my view based on almost 15 years of watching and then using Linux.  If the GUI architecture went away, I wouldn't miss it.

Ubuntu is my Linux OS of choice for its ease-of-use and wide hardware support.  The 14.04 LTS update was a rough experience due to a lot of misinformation floating around about when the upgrades would actually be available.  Ubuntu was also the first Linux distro that I was able to successfully install and use.  When it first came out, it was a huge game-changer in the Linux world because it far outpaced the other distros by actually working on most systems.  Before Ubuntu, I had failed installation attempt after failed installation attempt on whatever systems I tried to install it on.

Ubuntu also has a huge support system.  If you have a problem, it's either a Google search away or a forum post with fairly rapid response times.

Other distros have since caught up, so the differences are a matter of degree and look more or less the same to me.  Once you know how to navigate one distro (from the command line), others are similar; each has its own quirks.  I couldn't care less about the GUI battles that still rage on.  The GUI in Linux is meaningless - it is a huge waste of developer resources because it causes developers to scatter in a billion different directions with no specific goals in mind.  Windows, Mac/iOS, and Android have heavy oversight of the directions their GUIs take, naturally reining in the developers in the process.

Jonathan on Thursday November 13 2014 08:10:32 said:

Interesting article, thanks.

Regarding point #4: MariaDB is only available in the "universe" repository which is essentially unsupported.
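For readers who want to verify that themselves, `apt-cache policy` shows which archive component a package comes from. The sketch below runs the extraction against a sample of that output (the sample is illustrative, including the version string - it was not captured from a real 14.04 box):

```shell
# Illustrative apt-cache policy output for mariadb-server on trusty;
# the version number here is a hypothetical example.
cat <<'EOF' > /tmp/policy.txt
mariadb-server:
  Installed: (none)
  Candidate: 5.5.39-0ubuntu0.14.04.1
  Version table:
     5.5.39-0ubuntu0.14.04.1 0
        500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe amd64 Packages
EOF
# pull the component (main vs universe) out of the "<suite>/<component>" field
grep -o 'trusty[^ ]*/[a-z]*' /tmp/policy.txt | cut -d/ -f2 | head -n1
```

On a real system you would simply run `apt-cache policy mariadb-server` and read the component off the repository line.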

Regarding compiz, there are other desktops still using it - MATE at least - but I'm not sure to what extent they are contributing towards its upkeep.

Phil on Sunday November 16 2014 11:00:28 said:

Thanks for this article.  It's interesting how real criticism like this is so absent from software development discussions -- and how most respond to it in highly emotional tones.  A recipe for empty self-satisfaction, not progress, methinks.

But let me ask you this: has Ubuntu ever NOT been a mess?  I'm a long-time Ubuntu user, and I can't remember a time when I haven't been stumbling over the OS to get things done.  One of Ubuntu's primary values -- obviously -- is 'coolness'.  The subjective experience matters too.  Ubuntu never guaranteed an error-free experience, so why demand it of them?  Your criticism might be better spent elsewhere, in communities more closely aligned with your values.

At any rate, there are certainly major problems with Ubuntu's evolution, but these won't be fixed by doing small things like adopting MariaDB.  Especially not while Unity still lives.  This is a very thought-provoking article, but I don't think you are aware of the full context, and I don't think it will actually change anything.

fatpugsley on Thursday November 27 2014 12:14:15 said:

As an inexperienced user, the popularity of a distro matters a lot to me, because

a) when things go south, chances are my problem won't be unique and someone else will probably have already tackled it,

b) I can't really manage to compile a big project with lots of dependencies, e.g. ROS. If a precompiled package is offered for any Linux flavour, it will almost always include Ubuntu, and

c) I've had a pretty poor experience with non-Latin internationalization support in Fedora the times I tried it (F18 and back).

ARedhatANDUbuntuUser on Saturday December 13 2014 05:06:53 said:

I would just like to point out that reason 1 is not valid: even Ubuntu 12.04, as of release 12.04.5, is using kernel 3.13 (unless running under a VM; see Kernel/LTSEnablementStack), not the kernel 3.2 it was released with. Ubuntu has been updating the kernels in the LTSes after testing them to be stable, instead of using old kernels for all 5 years of support. They do this through their point-release system, so LTSes get an extra ".X" at the end every few months, where you do a "distribution upgrade" which includes a new kernel. Somewhat like RHEL releasing a 6.5 or 6.6 every six months, but with a new kernel as opposed to an updated (via backports) version of the release kernel.
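Opting into one of these newer kernel series is a short apt transaction; a hedged sketch, assuming a 12.04.x machine (the metapackages follow the `linux-generic-lts-<codename>` naming pattern):

```shell
# Sketch: pull the Trusty (14.04) hardware-enablement kernel onto a
# 12.04 LTS box; the matching X stack uses the same -lts-trusty suffix.
sudo apt-get update
sudo apt-get install linux-generic-lts-trusty xserver-xorg-lts-trusty
# after a reboot, `uname -r` should report a 3.13.x kernel
```

This is a command fragment, not something to paste blindly: check the current Kernel/LTSEnablementStack wiki page for which enablement stack is actually supported before installing one.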


Having used CentOS 6 for the last three years while also using Ubuntu 11.04+, it has been nice to have new kernel features and updated binaries throughout the process in Ubuntu, instead of being unable to fully use new hardware (thank you, Intel Fusion!!!!!) in CentOS 6. That being said, CentOS 6 was not really aimed at the laptop market and couldn't have known that graphics card producers would start caring about power on gaming laptops.


Reason 6 (compiz + Mir): Fedora also supports compiz in its MATE spin, which is not dying tomorrow. Granted, they don't have to support that for 5 years, but there is RHEL's research playground (Fedora) still supporting it currently.

Kalebe on Sunday December 14 2014 20:30:11 said:

Very nice arguments and cool article.

For me, the thing that puts me off Ubuntu and Debian the most is apt-get enabling at boot and starting every service as it installs it. I like my servers built from scratch, with nothing more than necessary - the main reason why I love Arch and Gentoo - and with Ubuntu you'd have many more steps to get there.
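For what it's worth, the usual Debian/Ubuntu escape hatch for that behaviour is a `policy-rc.d` script: package maintainer scripts go through `invoke-rc.d`, which consults `/usr/sbin/policy-rc.d` and skips the service start when it exits 101. A minimal sketch (staged under /tmp so it can run anywhere; on a real box you would write `/usr/sbin/policy-rc.d` itself):

```shell
# Create a policy-rc.d that forbids all service actions requested by
# maintainer scripts. Exit status 101 means "action forbidden by policy".
mkdir -p /tmp/sbin
cat <<'EOF' > /tmp/sbin/policy-rc.d
#!/bin/sh
# deny every start/stop/restart request from package install scripts
exit 101
EOF
chmod +x /tmp/sbin/policy-rc.d

# Simulate what invoke-rc.d does: ask the policy script about an action.
/tmp/sbin/policy-rc.d apache2 start; echo "policy exit code: $?"
```

With the real file in place, `apt-get install` still enables the service for the next boot, but no longer starts it immediately; remove the file when you want normal behaviour back.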

Followed by the Pollinate system which, as you said, is full of controversies, along with AppArmor itself.

For desktops, I have an "extreme allergic reaction" to Unity and compiz. And I know, there are the (k, x, l, ...)ubuntu mixes - I even used Kubuntu and Lubuntu for many years - but it's still Ubuntu we're talking about here.

The only problem I've had with systemd is the binary log; there is a workaround for that too, but before I found it I got some corruption in mine.

I like CentOS for production servers a lot because of package freshness: packages are generally newer than Debian's (comparing the latest stable release of both distros) and still extremely stable and secure. Yes, I know about the lag between RHEL and CentOS updates, but I would rather wait a couple of weeks for an update than wait months to get newer versions.

Kalebe on Sunday December 14 2014 21:47:49 said:

@UbuntuLover

You're probably not used to dealing with enterprise environments, or with GNU/Linux OSes either.

If a huge corp, or even a medium one, has a problem with its systems, it would rather call a company that has tons of full-time, dedicated, great professionals to solve problems with its OS and services than search forums, or hire some guy and wait for him to become available, figure out the problem and fix it.

Not saying it is like that with every company or corp in the world, but it is a very common scenario. Also, a vendor's support, like Red Hat's, is not always ideal for every problem, but it is something you can rely on 24/7 with a good level of certainty for most of them. Many companies have their own sysadmins and other professionals to take care of their systems, and even then I've seen lots of these companies still buy RHEL support.

Besides that, Red Hat is one of the most successful companies in that kind of service, and its system is nowadays one of the most secure and stable.

Some places you may find using the "illogical" system choice you referred to before:

Not saying it is the best choice, but it is a damn pretty good one.

Bird on Tuesday January 13 2015 17:59:56 said:

I just look at these comments and do not understand them.

Maybe all English-speaking people have become stupid over the last few years, because most of the comments I've read (not only on this site) show clearly that nobody has read the article.

It is clearly written that the author does not recommend using 14.04 for servers.

Most critical comments say that popularity and other factors make Ubuntu easier to support.

That is very nice for a desktop. But for servers, security and stability are more important; those are systems whose purpose is to have server software deployed on them.

You can try out and train your skills on Ubuntu, but we are talking about the final deployment of software.

I'm not even talking about the idiotic repository comparisons. How is that related to server software availability at all?

salvatore on Friday January 23 2015 15:04:21 said:

Reason 1: Non LTS Linux kernel

Ubuntu LTS has the "Hardware Enablement Stack" (HWE), which installs the kernel and X from the 3 following non-LTS releases. This mechanism has existed since at least Ubuntu 12.04 LTS. The "kernel packages could be shared with future non-LTS releases" solution is already there.

Another reason not to use or encourage usage of Mir is the CLA; see http://mjg59.dreamwidth.org/25376.html Tl;dr: it is "free software" provided you do not step on Canonical's toes.

Sam on Sunday April 05 2015 01:55:20 said:

Most of these arguments are against using Ubuntu 14.04 LTS as a long-term strategy on servers.  One is against using it for workstations.

But what if I want to run it on my server, and plan to upgrade my server more than once every 5 years?  If I'm running my own servers, I'm probably using a system like Chef/Puppet/Docker so I can re-deploy with a new OS version with only a few keystrokes.  When Ubuntu 14.10 rolls around, I'll just upgrade to that.  Or maybe I'll wait for 15.04.

Just because I'm deploying Ubuntu 14.04 today doesn't mean I'm committing to run it until 2020.  I just want the latest security releases, and a distribution that's compatible with the other tools I'm using.  You'd have to be doing some pretty specific work to have your server tied to Linux 3.13, or Upstart, or the version of MySQL that ships with the OS.


Aaron on Monday May 18 2015 04:46:11 said:

Based on my experience, 14.04 is indeed the most unstable distribution of the last few years. I have been using Ubuntu since 2008.

Just do a quick Google search for how many problem reports there are for 14.04 with respect to wifi issues, graphics card issues, etc. To me, 12.04 seems to be the best so far.

bearmatt on Friday June 12 2015 06:00:11 said:

Thanks for the great read. Personally, I'm worried about the claims of long-term support for CentOS. It is why CentOS has become so popular; I fear it will also be its undoing.

Jon Felosi on Tuesday June 16 2015 21:45:51 said:

Oh man, I could go on all day about Ubuntu turning into a shit OS for servers, but you have nailed it. I have stuck my foot in my mouth with very important clients I set up on Ubuntu, all for some update to come in and screw up everything. The last freaking update deleted the freaking boot partition! I shit you not! On MULTIPLE servers. I don't know if they are trying to be all bleeding edge or what, but screw that: I need stability, and I have better things to do than fix servers that did nothing more than simply update. And for Christ's sake, no one should be scared to update a server - that's how stuff gets hacked. I'm done with Ubuntu; I'm trying to convince a client now to migrate off of it, but they keep throwing it back at me that I recommended it in the first place. Well, 12 was fine for the most part. And now anyone from the company who logs into Ubuntu 12 servers is immediately hit with do-release-upgrade notices and warnings about such and such, so why even say 12 is still a supported version?

I think they are focusing too much on the desktop or something, but one thing is for sure: it is NOT a stable operating system!