Why Wayland is a Bad Idea™

Most people who have been involved in Linux development/administration/advocacy already know that Linux operating systems are in the middle of a big shift (or actually, almost at the end of it) in the graphics stack used in Linux desktops – from the old and gnarly X11 protocol to the new hotness: Wayland compositing. And it sucks.

Yes, X11 is bad – I don’t argue that fact: it is slow, archaic in its API conventions, and its requirements and specifications are stuck 30 years in the past (at best) – and don’t even get me started on XKB bug 865 and how horrible multi-language support is on X11. The old protocol has its benefits – mostly around network transparency (it is ridiculously easy to open a secure network connection and access a single application that is installed and running on another computer – no remote desktop required) – but its time has definitely come, and gone.

So Wayland – a set of protocols, processes, and software that has been 15 years in the making – is geared to replace it, with everything that is needed for the modern desktop experience: flashy graphics, performance and graphical acuity, and most of all: security.

The focus on security kind of makes sense: software deployment options on Linux have diversified a lot since the time XFree86 hit the Linux scene (when it was mostly “get the source code and build it yourself”). Now you can get software from your operating system’s curated repositories, non-curated automatic build services (such as Canonical’s Launchpad or SUSE’s OBS), cross-OS software stores (such as Flathub), “download from the web and run it” AppImages, or even Windows software installers. In this environment it is hard to make sure that the applications you run are trustworthy, so making sure the protocol is secure and does not allow too many shenanigans would be a “Good Thing”, right?

Well – therein lies the problem: security vs. convenience is more or less a zero-sum game – by increasing security you reduce convenience, and Wayland puts a heavy hand on the scales, tipping them to the side of security.

As far as I understand, the Wayland desktop model is very simple: applications get rectangular areas on the screen where they can draw and receive input from the user, and are not allowed any other kind of interaction with the desktop environment – because that would be insecure. The other side of the equation is the Wayland compositor, which is responsible for everything else: putting the application windows on the screen, adding whatever decorations are needed (drop shadows are a big thing in a desktop experience these days), and drawing everything else – the background, the panels, and so on. (A Wayland compositor isn’t a specific piece of software: each Linux desktop environment has its own.)

The problems start to appear when an application developer wants to do anything outside drawing in their own walled rectangular area. Want to share a copy-and-paste clipboard with another application? That took a while to add to the protocol. Screenshots? They ironed that out and you can do it now, though screen-casting is still a work in progress. Causing another application to handle an action (such as opening a file, starting an email, or loading a link)? There’s a new protocol for that, but the implementations are still a bit immature.
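To make that last mechanism concrete: delegation of actions like “open this link” goes through the XDG Desktop Portal D-Bus service rather than directly to another application. Here is a minimal sketch, assuming the `gdbus` command-line tool; the code only builds and prints the invocation – actually running it requires a session bus and a portal backend, and the URL is just a placeholder:

```python
# Sketch: asking the XDG Desktop Portal to open a URI on the user's behalf.
# The portal (not our app) decides which program handles it -- this is the
# delegation protocol mentioned above. Bus name, object path, and method
# are the documented portal endpoints; the URL is a placeholder.
import subprocess  # only needed if you uncomment the run() call below

def portal_open_uri_cmd(uri: str) -> list[str]:
    """Build a gdbus invocation of org.freedesktop.portal.OpenURI.OpenUri.
    Returned as an argument list so callers can inspect or run it."""
    return [
        "gdbus", "call", "--session",
        "--dest", "org.freedesktop.portal.Desktop",
        "--object-path", "/org/freedesktop/portal/desktop",
        "--method", "org.freedesktop.portal.OpenURI.OpenUri",
        "",      # parent window handle (empty string: no parent)
        uri,     # the URI to open
        "{}",    # options dict (a{sv}), empty here
    ]

cmd = portal_open_uri_cmd("https://example.com")
print(" ".join(cmd))
# To actually perform the call (needs a running portal):
# subprocess.run(cmd, check=True)
```

The point of the indirection is exactly the security trade-off discussed above: the application never learns which browser is configured or touches another process directly; it can only ask the portal to act.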

And if you want other kinds of interaction with the desktop environment – the kind that lets application developers build useful utilities? How about creating windows that don’t appear in the task panel? Windows that stay on top of other windows even when not active? Applications that raise themselves in response to internal events? Nope – you can’t do that, because of security… Sometimes you can get this functionality through a desktop-specific API, but often not – for example, under KDE’s Plasma you can ask to skip the task bar, but not on other compositors. Then either the application misbehaves on “foreign desktops”, or the app developer has to start worrying about all the weird little desktop environments available on Linux (and there are tons) and write custom code to handle each; or – again – functionality that existed on X11 and is supported on Windows and macOS may simply not be allowed under Wayland, “because it is not secure” or some such.

In my day-to-day job, I like to use a nifty tool called “Yakuake” – a “pull-down terminal”, i.e. a small terminal that you can call up with a shortcut; it slides in, accepts a few commands, and then you dismiss it. It works great under X11, but it seems to hit all the pain points of the missing windowing features on Wayland:

  • If it is open, but you switched to another window, you can’t call it up because it is not allowed to raise itself.
  • If it is open, it will show up in the application-switching shortcut, even though this is inconvenient and is not how it behaves on other windowing environments.
  • It can’t ask to stay above other windows when switching to other applications or other workspaces (a feature that is readily available on X11, Windows and macOS).
  • And a few other small, weird deficiencies.
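For contrast, the first bullet is trivial on X11: under the EWMH spec, an application raises itself by sending a `_NET_ACTIVE_WINDOW` ClientMessage to the root window, and the window manager obeys. The sketch below packs just the message’s data words (the IDs and atoms a real client would fetch from the X server are omitted), to show how small the mechanism is that Wayland has no client-facing equivalent for:

```python
# Sketch: the payload of an EWMH _NET_ACTIVE_WINDOW client message --
# what an X11 app like Yakuake sends to raise its own window.
# A real client wraps this in a ClientMessage event (format 32) addressed
# to the root window; that wrapping is omitted here.
import struct

def net_active_window_payload(source: int, timestamp: int, current: int) -> bytes:
    """Pack the five 32-bit data words defined by EWMH:
    data[0] = source indication (1 = normal application, 2 = pager),
    data[1] = timestamp of the user event that triggered the request,
    data[2] = the requestor's currently active window (0 if none);
    the last two words are unused and set to 0."""
    return struct.pack("<5I", source, timestamp, current, 0, 0)

payload = net_active_window_payload(source=1, timestamp=0, current=0)
print(len(payload))  # 20 bytes: five 32-bit words
```

On Wayland there is simply no request in the core or xdg-shell protocols that lets an ordinary client send the equivalent of this message; activation is the compositor’s decision.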

What frustrates me the most is that it is very hard to start a discussion about this: when I approach developers about these issues, I often get a response along the lines of “this is the new way and it is better because it is more secure; if you want your application to do that, then you are wrong and you should stop doing that”. This is very frustrating.

One Response to “Why Wayland is a Bad Idea™”

  1. Posterdati:

    Yes, I see the frustrating issue with Wayland. In the modern era (post-1990) we are living under a dictatorship of standard rules. Look at programming patterns, which have become a mantra; even in engineering there are a lot of rules from IEEE, CENELEC, CIGRE, etc. Those rules are not truly wrong, in the sense that they are useful in a world made by manufacturers, so that their products can be interchangeable. But the reality is that solving a problem does not mean doing it by strictly following those rules; it means solving it using physical and mathematical rules – or, if you like, algorithms. What will happen when quantum computers get the upper hand (we have yet to see this) – will the patterns still be useful, or should we add more to the list?
