Make Linux safer… or die trying


Part 1: Some Linux veterans are irritated by some of the new tech – Snap, Flatpak, Btrfs, ZFS, and so forth. Doesn’t the old stuff work? Well, yes, it does – but not well enough.

Why is Canonical pushing Snap so hard? Does Red Hat really need all these different versions of Fedora? Why are some distros experimenting with ZFS if its licence is incompatible with the GPL? Is the already bewildering array of packaging tools and file systems not enough?

No, it isn’t. There are good justifications for all these efforts, and the reasons are simple and fairly clear. The snag is that the motivations behind some of them are connected with certain companies’ histories, attitudes, and ways of doing business. If you don’t know those histories, the reasoning that led to major technological decisions is often obscure or even invisible.

The economics of the computer software industry has changed massively since some now-widespread tools were originally invented. Techniques and methods that made good commercial sense decades ago don’t any more, and some of this applies to Linux more than it does to Windows. Modern Windows is based on Windows NT, the first version of which was released in July 1993 and was a modern, hi-tech OS from the start. Its developers had already learned lessons from its forerunners: less from DOS and 16-bit Windows, more from OS/2 1.x and Digital Equipment Corporation’s VAX/VMS.

Linux is quite a different beast. Although many Unix fans haven’t really registered this yet, it’s a fact: Linux is a Unix now. In fact, arguably, today Linux is Unix.

As a project in its own right, Linux is roughly the same age as Windows NT. Linux 0.01, the first public version, appeared in late 1991; it went GPL with version 0.99 in late 1992; and version 1.0 was released in March 1994. FreeBSD is about the same age, and so is NetBSD. All of them are fairly traditional, monolithic, Unix-like OSes in design. This means that they inherit many of their design choices from earlier, mostly proprietary Unix OSes.

The thing is, solid, carefully made decisions that worked for commercial Unix in its heyday may not be such a good fit any more. In the 1970s and 1980s, proprietary Unix boxes cost lots of money. The companies that bought them – and it was a big-business level of expenditure – could afford to pay for highly trained specialist staff to tend and nurture those machines.

Windows NT came out 30 years ago and created a lively commercial market of relatively inexpensive 32-bit PCs, with x86 processors, fast open-standard expansion buses, and low-priced mass storage. Cheap mass-produced PCs were just about good enough, and so were cheap mass-developed OSes for them.

Since then, Windows has been good enough, and it runs on commodity kit. So the commercial mainstream, always looking for savings, moved to Windows. The result is that Windows tech staff became cheap and plentiful – which implies fungible – while Unix techies remained more expensive.

This cheap, mass-market hardware in turn has aided the evolution of open source Unixes. Linux has done well partly because its native platform is the same cheap kit that was built to run Windows. This is a huge and vastly diverse market and, as we recently described, software is a gas: it expands to fill the hardware. The result is that, to support the most diverse computer platform ever, Linux is big and complicated.

Yes, it’s a Unix-like OS, and Unix has been around for over 50 years. But Linux isn’t just another Unix. It’s free for everyone, and the same kernel runs on everything from $5 SBCs to $50 million supercomputers. Proprietary Unix was expensive, exclusive, and mostly ran on expensive, high-quality hardware that was designed for it, while Linux mostly runs on relatively cheap devices that were designed to run Windows.

When Unix ruled the datacenter, computer resources were limited, and proprietary platforms strictly controlled what was on offer. Now that disk and memory are cheap, the PC hardware is uncontrolled and proliferates as wildly as kudzu. Linux supports most of it, meaning that it’s much bigger and more complex than any proprietary Unix ever was… and to a good approximation, nobody fully understands the entire Linux stack: it’s just too big. Real experts are scarce, and that means that they command top dollar.

But the mass adoption of Linux has changed the economics somewhat. While the top-tier gurus remain pricey, ordinary mortal techies aren’t: smart, curious folks who can work out how to stack some components together like construction toys and get the result more or less working, then push it out into someone else’s datacenter and bolt on some tools to make it scale out – if you’re lucky enough to need that. Which implies that the building blocks of that stack need to be tough, to match the levels expected over in Windows land, and they need to just plug together.

The flipside of this is the famed DevOps model: treat servers as cattle, not as pets. It’s not all about servers – but it’s server distros that pay. So desktop distros use lots of tools designed for servers, and phone distros are being built from the same components.

When the software and the hardware are cheap, but the skills are expensive, the cost centers become support and maintenance – which is a large part of why the big enterprise Linux vendors sell support, not software. The software is free, and if you don’t mind compiling it yourself, you can have the source code for nothing. To get the ready-to-use version, though, you have to buy a support contract.

What that means is that the evolutionary selective pressure is to reduce the cost of providing that support in order to maximize the profitability of the support contracts. That requires making the OSes as robust as possible: to prevent faults from occurring, so you don’t have to pay someone to fix them. If possible, to prevent whole categories of system failures. Better still, to make the OS able to recover from certain types of fault automatically, without human intervention.
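
To make the self-healing idea concrete, here’s a minimal sketch – ours, not any particular vendor’s prescription – of one small piece of it: telling systemd to bring a crashed service back up on its own. The unit name example-daemon.service is hypothetical.

  sudo systemctl edit example-daemon.service
  # ...then, in the drop-in override file that opens, add:
  #   [Service]
  #   Restart=on-failure
  #   RestartSec=5s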

If you want to deploy a lot of a cheap or free OS, without hiring a lot of expensive gray-bearded gurus, a core part of the economic proposition is to build Linux distros that can cope, even thrive, without constant nurture – for example, by fetching and installing their own updates. The goal is systems that can handle their own problems and heal their own injuries, just as farm animals must in their short, miserable lives.
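
For instance, on Debian or Ubuntu, the unattended-upgrades package handles the fetch-and-install part. Something like the following is enough to switch it on – a sketch assuming a stock apt setup, not a tuning guide:

  sudo apt install unattended-upgrades
  sudo dpkg-reconfigure -plow unattended-upgrades
  # the second command writes /etc/apt/apt.conf.d/20auto-upgrades, enabling
  # periodic package-list refreshes and automatic security upgrades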

One aspect of this is visible as multiple parallel efforts to contain and manage the vast and ever-growing complexity of modern Linux: to encapsulate it, and if possible, even eliminate parts of it. This shows up in several places. It appeared first in file system design, although that initial round of changes was relatively minor and caused little disruption; another round of modernization is now being worked on. There are also major changes in how software is packaged: how packages are built, how they’re distributed, and how they’re stored, installed, and upgraded. A further aspect is how they are uninstalled again, or how upgrades are reverted.
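
Snap is one concrete example of packaging that treats rollback as a first-class operation: each refresh keeps the previous revision on disk, so an upgrade can be undone with a single command. A quick sketch, assuming a Snap-packaged Firefox:

  sudo snap refresh firefox         # upgrade to the latest revision in the tracked channel
  sudo snap revert firefox          # roll back to the previously installed revision
  sudo snap remove --purge firefox  # uninstall, discarding the saved data snapshot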

This is a complex, interlocking set of problems, and not only is there no single best way to tackle it, but the approach each company takes is guided by the tools it already has or favors. For various reasons, not all vendors are spending their R&D money in the same directions. Some are working on file systems, some on packaging, some on distribution, some on more than one of these at once.

In the second half of this feature, we’ll offer an executive briefing on the different efforts, and why different distro vendors are addressing the problems in different ways. ®


