Nat Friedman spouts nonsense, or – Another Novell Open Audio review

Late as usual (but starting to catch up), I listened to Novell Open Audio’s Ted Haeger interviewing Nat Friedman, co-founder of the former Ximian (the GNOME desktop company – bought by Novell, hence the link does a funny redirect) and currently Novell’s VP of engineering for the Linux desktop (or something).

The interview was mostly interesting, and apart from a higher than average amount of self back-patting and cheap shots at the competition, I rather enjoyed it. That is, until it came to the Q&A part, in which Mr. Friedman was asked to field some “tough questions” from the local crowd (the interview was done on the LinuxWorld Expo floor) – at which point he fouled up big time:

“I think we also have a better security story than Windows, because we have fewer users than Windows, and so there’s no viruses or worms … for Linux yet. That would change in the future, [and] we’ve got to be ready for that, but I think in the mean time … the security story is undeniably many many orders of magnitude better on Linux today.”

Which is utter bullshit! Besides being FUD and a misrepresentation of the facts, it’s just plain wrong, and any technically inclined person with some experience with Linux – let alone the VP of engineering at one of the world’s largest Linux companies – should know that. The reason there are no viruses or worms for Linux “yet” is not that it has fewer users than Windows – it’s not even correct to say that there aren’t any viruses or worms for Linux, because there are several; they are just never found “in the wild”. By the same token you could say that since Mac OS X has more users than Linux, there should be more viruses for the Mac (there aren’t), or better yet: since there are many, many more users of the Apache web server than of Microsoft’s Internet Information Server (IIS), there should be many more worms targeting Apache – and there aren’t any at all!

The reason there are no viruses or worms for Linux (in the wild) is that Linux has a much better “security story”, due to a much better separation between user interactions and system applications – under normal circumstances, a user process cannot change, damage, remove or modify system files. So even if you had the misfortune to download an infected binary (like some Korean Mozilla installers), unless you are both stupid enough and technically savvy enough to run it as root, the most a virus can do is infect its own carrier – it can’t touch any system applications, and thus can’t infect other users.

But Mr. Friedman was not done misrepresenting facts: answering another question on why Novell chose to develop XGL to support 3D desktop effects on Linux, instead of working with the AIGLX project (which was recently – after this interview took place – merged back into the main X.org server), he correctly noted that XGL was actually developed some time before AIGLX was conceived (AIGLX was actually started because of XGL), but he then had these things to say about XGL vs. AIGLX (my comments follow each quote):

“AIGLX will only work if you have open source drivers or if you get the various proprietary driver authors to do some fairly significant modifications to the drivers which ATI and Nvidia have not indicated a willingness to do.”

Nvidia have shown a willingness to support the required driver extensions, and indeed their latest beta drivers have the required features. The features AIGLX requires are only some of those already available in the same manufacturers’ proprietary MS-Windows drivers but currently missing from their Linux drivers – so it would generally be a good idea if video chipset vendors got a swift kick in the butt and started upgrading their Linux driver offerings. And if that also fuels development of better open source drivers for these cards – even better. And ATI suck.

“Plus XGL is probably a much [better] long term architecture for graphics, because we are able to really accelerate all the drawing primitives, because we basically run a GL backend for the X server, whereas AIGLX are basically saying ‘let’s take the existing driver model and extend its life time for as long as we can’, so you’re going to be hobbling along on crutches for another 10 years using the existing driver model basically”.

Describing XGL as a long-term architecture is probably the saddest joke I’ve ever heard. XGL is a hack on top of a kludge – it’s nothing even remotely resembling an OpenGL backend for the X server. XGL is an X server implemented as an X client running on top of another X server (which uses the same driver model, with the same problems, and without even the modifications needed to bring it up to date in terms of available OpenGL calls). The XGL client grabs an OpenGL context from the underlying server and uses it to render its own clients to the entire screen of the real backend server. XGL is more of a middle-frontend than a backend. As a result, XGL has a lot of problems – most notably, the OpenGL performance of applications is really poor, as the nested XGL hogs the OpenGL resources. The long-term architecture for XGL is the XEGL server, which will require a major driver rewrite (much more than what AIGLX requires) and is currently not even in active development.

AIGLX, on the other hand, is a much saner architecture, allowing 3D applications to run with proper acceleration, and even letting remote OpenGL applications run on another computer and display on the local screen. It doesn’t use a kludgy double-server method (which even XEGL would require) and is available now. It has been integrated into the X.org X server distribution (and even though that happened only recently, anyone who keeps track of these things could have guessed that this is where AIGLX was going), which makes it de facto the current standard; and when the driver model changes (again – as it has done every few years) to kdrive or glucose, AIGLX will be there for the ride.

Nat Friedman then goes on to comment that he wants XGL to do dual screen and that he thinks it will be very hard to do with AIGLX – which again proves that he has no clue what he’s talking about: due to the hackish way XGL is layered, it still cannot be made to run on dual screens (except under very specific circumstances and with a specific ATI model), while AIGLX can run wherever Xinerama does, with no changes whatsoever.

Anyway – done ranting for now.

2 Responses to “Nat Friedman spouts nonsense, or – Another Novell Open Audio review”

  1. Nat Friedman:

    Hi, I’m Nat Friedman, and I thought I’d respond to some of your thoughts here.

    First, security. I don’t remember exactly what I said during this interview, but people ask me about Linux desktop security pretty frequently and I usually tell them approximately the same story. Linux’s security benefits over Windows on the desktop are one of the main reasons that businesses are currently replacing Windows desktops with Linux desktops on a large scale and so this comes up all the time.

    Indeed, ONE of the reasons that fewer viruses, worms and other bits of malware target Linux is that the Linux user base is a smaller, less juicy target than the vast masses of non-tech-savvy Windows users over whom virus authors like to demonstrate their superiority. I’m sure I mentioned that in the interview because it’s an important factor. But that is only one reason; there are several other layers of security that protect Linux desktop users.

    First, at the user interface level, we tend to make it more difficult on Linux for people to accidentally execute scripts or unverified applications by requiring people to save attachments to disk and make them executable before running them. In Evolution and generally in Linux, you can’t just double-click on an executable or a script in your mailer, mistaking it for a document, and run some malicious script. These extra steps protect a lot of people from mistakenly running a trojan.
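    This execute-bit behaviour is easy to demonstrate in a few lines of Python (the “attachment” file here is invented for illustration – the kernel refuses to exec any file, even for root, until at least one execute bit is set):

    ```python
    import os
    import subprocess
    import tempfile

    # Simulate a script "attachment" saved to disk by a mailer.
    path = os.path.join(tempfile.mkdtemp(), "attachment.sh")
    with open(path, "w") as f:
        f.write("#!/bin/sh\necho ran\n")
    os.chmod(path, 0o644)  # freshly saved files carry no execute bit

    try:
        subprocess.run([path], check=True)
    except PermissionError:
        print("refused: no execute bit")  # the kernel won't exec it

    os.chmod(path, 0o755)  # the user must deliberately mark it executable
    subprocess.run([path], check=True)  # only now does it print "ran"
    ```

    That deliberate chmod step is exactly the extra friction that stops a double-click trojan.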

    Second, with AppArmor (in SUSE) or SELinux (in SUSE and other Linux distros), you can very easily sandbox an application and specify exactly what types of system calls it is allowed to call and what parameters it can pass to them. So you can, for example, prevent your browser from reading or writing your email or any of your personal directories; or you can restrict an app to only connecting to certain IP addresses. You can create multiple profiles for various applications. This is all very flexible. Both SELinux and AppArmor allow you to do this, but AppArmor has some nice UI to make this easy. All of this is based on kernel-level instrumentation and the LSM (Linux Security Modules) framework.
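    To give a flavour of what such a confinement rule set looks like, here is a hypothetical AppArmor profile sketch – the program path and every rule in it are illustrative, not taken from a real SUSE profile:

    ```
    # Hypothetical AppArmor profile for an imaginary /usr/bin/browser.
    # AppArmor denies everything not explicitly granted; the deny rules
    # below just make the intent explicit.
    /usr/bin/browser {
      #include <abstractions/base>

      network inet stream,           # may open TCP sockets

      /home/*/Downloads/** rw,       # may read/write downloads
      deny /home/*/Mail/** rwx,      # may NOT touch the user's mail
      deny /home/*/.ssh/** rwx,      # nor SSH keys

      /usr/lib/browser/** mr,        # may map its own libraries
    }
    ```

    A compromised browser confined like this simply cannot read the mailbox, no matter what the exploit payload tries.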

    Third, because the Linux kernel can use the XD/NX functionality in modern x86 processors, we can systematically prevent buffer overruns, even in poorly written software. These can be prevented at the level of the chipset/kernel; basically you tag certain pages in memory as being data and not code, and prevent someone from running code out of data pages.
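    This page-level code/data split can be observed from user space. A small sketch (Linux-specific, assuming an NX/XD-capable machine) maps a page without PROT_EXEC and tries to jump into it, forking first so the parent survives to report what happened:

    ```python
    import ctypes
    import mmap
    import os
    import signal

    def exec_data_page():
        # Map a page that is readable and writable but NOT executable.
        page = mmap.mmap(-1, mmap.PAGESIZE,
                         flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS,
                         prot=mmap.PROT_READ | mmap.PROT_WRITE)
        page.write(b"\xc3")  # contents don't matter: the fetch itself faults
        addr = ctypes.addressof(ctypes.c_char.from_buffer(page))
        ctypes.CFUNCTYPE(None)(addr)()  # jump into the data page

    pid = os.fork()
    if pid == 0:
        exec_data_page()
        os._exit(0)  # only reached if data pages are executable
    _, status = os.waitpid(pid, 0)
    if os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGSEGV:
        print("child got SIGSEGV: data pages are not executable")
    else:
        print("child ran code from a data page")
    ```

    An injected shellcode payload sitting in a buffer is in exactly the child’s position: the moment it tries to run, the processor kills it.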

    Fourth, as you point out, there is a strong separation of user and superuser capabilities in Linux. This means that if a user account is compromised, the entire system isn’t compromised.

    And finally, Linux was written by people collaborating on the internet; people who grew up on the internet, and who know instinctively to treat network data as potentially hostile. So Linux desktop code tends to be written in a more secure way; it’s part of our culture.

    I don’t remember whether I had time to say all of that during the few minutes Ted interviewed me on the floor at Linux World (and I don’t really feel like listening to my own interview right now), but I do usually try to say all those things.

    The rest of your blog post seems to focus on the current way people are running Xgl, which involves running two X servers — one on top of the other. This is definitely a weird setup and suboptimal; it requires us to proxy through a lot of X extensions from one server to the other, and it’s a bit fragile. It’s not where we want to be with Xgl. Ideally there will be one X(gl) server which runs on top of kernel-space EGL drivers. This is the ultimate architecture David is headed for. XEGL would not require two X servers as you describe, but it would give us some other benefits, like being able to run the X server as a non-root user.

    In the end, though, it doesn’t matter to our users or me or, I think, anyone else, if the entire world uses aiglx or Xgl or Xegl or some new thing (Xaieglx??) that we haven’t invented yet. What’s great is that Linux is getting lots of great fancy effects which make the desktop sexier and in some cases easier to use. It’s also been nice to see Red Hat (the primary developers of aiglx) coming down in support of Compiz, too.

    I’m sorry I didn’t respond to your blog entry till now. I would have replied earlier but I didn’t see it till recently.

  2. Guss:

    Hi Nat. Thanks for replying (and I hope I didn’t annoy you too much with my post above – which I actually never expected you, or anyone else of consequence, to read; geez, you can’t even dis someone on the internet these days 😉 – and I hope you’ll read this reply).

    I don’t want to make a large discussion out of this (blog comments are the wrong place, IMO, for serious discussions), but a few points nonetheless:

    * Regarding security – all the points you discussed here are good and valid, except – again, in my opinion – the first one, which is valid but not a good one. I think the small user base should never be raised as a security benefit of Linux, at least not by anyone who hopes for the Linux user base to one day be large – and definitely not raised as the first point. In your interview you described it as the first and foremost benefit of Linux’s security profile, with AppArmor/SELinux/whatever being only secondary issues which aren’t as important. IMO most people listening would take from your answer that “Linux is only marginally more secure than competing OSs, and only as long as it has a small user base. I’d better not hop on, then”.

    * Graphics effects are good! Yes, well. They were, until compiz/mesa/X was broken in one of the updates I installed 🙁 . Anyway, my current main gripe with XGL is that it’s complex and has no chance of making it into the main X.org tree, but is still “Good Enough(tm)”, which means it’s not likely to be replaced soon. The way I understand the open source development scenario, the more something becomes good enough and grabs a significant mind share (which XGL has, thanks to Novell), the less likely it is to be replaced with something which is “The Right Thing(tm)”, as more people will invest in making the “Good Enough” work better despite its many failings, instead of taking the lessons learned and readdressing the problem anew. Case in point – Apache. Case in point – Tomcat. Case in point – the Linux kernel.

    Anyway – thanks for reading, and as you can see, I don’t update my blog very often, so it doesn’t matter that you were a bit late. I want to make clear that I’m generally very happy with what Novell is doing for and with the open source community, and with “desktop Linux” specifically – keep up the good work!
