RHEL stays fresh with 9.4 while CentOS 7 gets a Rocky retirement plan


Good news for users of RHEL versions old and new – and for the freebie CentOS Linux 7, which is approaching its end of life next month.

Red Hat Enterprise Linux 9.4 is out, and the IBM subsidiary is also offering an extension of support for the aging RHEL 7, released way back in 2014.

AlmaLinux 9.4 is still in beta, but if you want to know what's new in RHEL 9.4, we covered what was coming when we looked at the 9.4 beta last month. Aside from AlmaLinux 9.4 still supporting some aging hardware that the official RHEL 9.4 drops, the new features are much the same in both. Matching upstream is the main point of the RHELatives, after all.

RHEL 9 appeared two years ago, and that is what Red Hat really wants you to upgrade to. If you are still running RHEL 7, which is now approaching a decade old, there's good news: Red Hat is offering four more years of support for RHEL 7.9, which it terms Extended Life Cycle Support, or ELS.

If you are running the free version, CentOS Linux 7, that hits end of life on the same date that RHEL 7's regular maintenance support ends: June 30, 2024. CIQ, the company behind CentOS Linux replacement Rocky Linux, has a life cycle extension for that too, which it calls CIQ Bridge.

CIQ told The Reg it believes there's a substantial market for this, pointing to research from Enlyft that suggests hundreds of thousands of users are still running CentOS Linux 7.

Executive summary

If you are a RHEL 7 customer, you can pay the Big Purple Hat some more money and keep getting updates for another four years – and of course the company will be happy to help you migrate to a newer paid version.

If you’re using the freebie CentOS 7, you can pay CIQ some money instead, and keep getting updates for another three years, while you migrate to Rocky Linux.

Meanwhile, over in the land of AlmaLinux…

The AlmaLinux project has yet to throw a hat into this particular ring, but it does have documentation on how to migrate from CentOS 7 to AlmaLinux 8.

As we reported a couple of years ago, remarkably enough, in-place version-to-version upgrades are not a given in Red Hat country, as they are for users of most other distribution families: for Debian, Ubuntu, SUSE, and openSUSE users, this kind of thing is relatively routine. That is why the AlmaLinux folks developed their ELevate tool to simplify in-place version-to-version upgrades.
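For the curious, the broad shape of an ELevate migration looks something like the following. This is only a sketch based on the project's published quickstart; the exact repository URL, package names, and any pre-upgrade blockers will vary with your setup, so check the current AlmaLinux ELevate documentation before trying anything like this on a machine you care about.

```bash
# Rough sketch of an in-place CentOS 7 to AlmaLinux 8 upgrade with ELevate.
# Commands follow the project's quickstart; verify against the current docs.

# Enable the ELevate repository and install the leapp tooling plus AlmaLinux data
sudo yum install -y https://repo.almalinux.org/elevate/elevate-release-latest-el7.noarch.rpm
sudo yum install -y leapp-upgrade leapp-data-almalinux

# Dry run: writes a report of anything that would block the upgrade
sudo leapp preupgrade
less /var/log/leapp/leapp-report.txt

# Once the report is clean, stage the upgrade and reboot into it
sudo leapp upgrade
sudo reboot
```

The reboot takes the box through a temporary upgrade environment before landing it in AlmaLinux 8, so budget for some downtime.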

In the meantime, AlmaLinux announced that it is establishing a Special Interest Group (SIG) for High-Performance Computing. This is about running Linux on supercomputers.

This tends to mean relatively loosely coupled clusters [PDF]: many thousands of separate Linux boxes onto which you can automatically push software – for example, with something like Spack – and then automatically split your data into lots of chunks and farm those chunks out to all the machines in the cluster, for instance with Slurm.
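To make that division of labour concrete, here is a minimal, hypothetical sketch: Spack builds the software stack, and a Slurm batch script asks the scheduler for a few nodes and launches tasks across them. The package, script, and file names are invented for illustration and are not from either project's docs.

```bash
# Hypothetical sketch: Spack provides the software, Slurm farms out the work.
# Package, script, and file names below are illustrative only.

# Build an MPI stack (and anything else the job needs) with Spack
spack install openmpi

# A minimal Slurm batch script: request four nodes with eight tasks each,
# then let srun launch the solver across all 32 tasks
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
srun ./my_solver input.dat
EOF

# Hand the script to the scheduler; Slurm queues it until nodes are free
sbatch job.sh
```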

The biggest clusters are ranked in a list called the Top500, and this is the sort of tech that Reg sister site The Next Platform covers.

This kind of computing is a key interest of the enterprise players: Red Hat has HPC offerings, there's a special edition of SUSE, and of course Canonical is there too. Licensing a paid-for Linux for five- and six-figure numbers of nodes can obviously get very expensive, fast, so this is an area of opportunity for free distros.

But why now? Well, since doing the same sort of numerical computation and transforms on large blocks of data is what modern GPU hardware is made for, this is a key sales area for GPU vendors. As big players bin their blockchain efforts and back away from NFTs, what are they to do with all those racks full of GPUs? Luckily they are also perfect for running large language model bots, the inaccurately named “AI” of which everyone is now suddenly so fond. If your pesky liveware has proved fond of working from home and doesn’t want to return to the unpleasantly expensive office buildings you rent, why not jettison the humans and replace them with server farms full of LLM bots? ®


