May 11

The kernel column #100 with Jon Masters – 100 issues of kernel updates

by Jon Masters

To help celebrate Linux User’s landmark 100th issue, which goes on sale tomorrow, celebrated Linux kernel contributor Jon Masters recounts some of the biggest developments in the Linux kernel over the magazine’s last 100 issues…

I remember the first article I wrote for Linux User &amp; Developer, way back in issue number one. It was a review of the first OpenOffice.org release, following the announcement by Sun Microsystems (now Oracle) that it was open-sourcing its Star Office product. Times have certainly changed. Sun is no more, and indeed OpenOffice.org has itself been forked (with little enthusiasm from Oracle) into LibreOffice. In that same time period, untold changes have occurred within the Linux kernel community, which has grown in both size and complexity, along with its body of code…

A tale of two kernels
Since the first issue we’ve seen the release of both the 2.4 and 2.6 kernels. The former (2.4) moved Linux into the mainstream consciousness with many features required of a general purpose ‘enterprise grade’ operating system. These features included the first support for USB and Bluetooth, and robustness in the form of RAID, logical volume management (LVM) and journaled file systems (ext3, since deprecated and replaced with ext4 as well as other alternatives). I remember watching as 2.4 matured from daily crashes on a test file server to such a point that it is still used in some embedded systems being produced today. Early dramas such as instability in the virtual memory subsystem (which led to Alan Cox’s ‘ac’ series kernels, as used in earlier Red Hat Linux releases, containing a wholesale replacement VM) resulted in many useful lessons being learned about the introduction of new features that carried over into 2.6.

The release of Linux 2.6 at the end of 2003 was a complete game changer. Whereas 2.4 had introduced features necessary for Linux to be considered for mainstream adoption, 2.6 actually delivered many of the features required for Linux to become a first-choice candidate. These game-changing features included massive scalability, from the smallest embedded devices (now often running Android, itself a Linux derivative) to most of the TOP500 supercomputers today. Linux 2.6 introduced support for huge file systems and terabytes of memory, and allowed PC-class hardware utilising AMD and Intel’s 64-bit extensions to gain dominance in the server space. Along the way, 2.6 changed the development model from one in which an unstable 2.3 and 2.5 development cycle led to a stable 2.4 and 2.6 release, to the model we see today in which development is more fluid and occurs on three-month cycles.

Linux 2.6 was accompanied by the release of a ‘post-halloween’ document from Dave Jones (perhaps a reference to the Microsoft ‘halloween’ documents of the same era – leaked confidential Microsoft memos detailing strategies to counter the rise of Linux), which is still very good reading on the many changes that took Linux from the 2.4 days into the modern age. And although there is no sign of an impending 2.7, 2.8 or even 3.0, development of the 2.6 series Linux kernel has continued to introduce innovation after innovation. Most recently, this included the final elimination of the Big Kernel Lock (BKL) – a coarse-grained global kernel lock that hindered truly massive scalability, but which is now finally completely gone in 2.6.39. This combines with many other efforts to make Linux 2.6 the most scalable and flexible operating system kernel available today, bar none.

Since the first issue of this magazine, we’ve also seen the rise of the Enterprise Linux concept, and the associated development of vendor kernels filled with various patches and backports (code pulled in to older vendor kernels from subsequent official kernel releases). While not always loved by the community, such Enterprise products have helped to drive and shape development of highly innovative kernel features. The non-vendor community has also responded to the desire to have more stable, long-term-supportable kernel releases with the creation of ‘long-term’ kernels (which were formerly known as ‘stable’ kernels). These vendor and community kernel releases complement the official ‘mainline’ or ‘upstream’ kernel and allow it to proceed unimpeded while giving end-users a product that has improved testing and durability for use in production environments.

More women needed
Every story has some downsides. Although we’ve seen a general growth in the size of the kernel community, there have been some high-profile exits. Some of these happened due to burnout or natural changes in career choice, but many can be attributed to the nature of dialogue and community interaction on mailing lists and elsewhere – flame wars, abrasive attitudes and so on – problems that still exist as a consequence of the many personalities involved. That said, the Linux kernel community, like many other free and open source software communities, has generally become much more welcoming over time. There are people from many different cultural backgrounds involved, though still relatively few women. Those women who are involved tend to be extremely skilful and almost overcompensate for the statistics with their productivity. Still, there’s room for improvement, and it is to be hoped that efforts like the new Ada Initiative (started by a well-known female kernel engineer) can help attract more women to such open source projects in the future.

We’ve seen some amazing changes in the past 100 issues of this magazine. But these are just the beginning. When we started, Linux was still the domain of enthusiasts and early ‘dot com’ adopters. No more. Today, the Linux ecosystem is dominated by all of the big, well-known companies. Every few months another large project comes along with tremendous commercial backing. The latest is the Yocto Project, which finally aims to standardise and fully commoditise the area of embedded Linux kernels, tools and other bits. It’s hard to say where we’ll be in another 100 issues from now, but one thing is for sure: the rise of Linux will continue for some time yet to come.

Ongoing development
In last month’s issue, we covered the release of the 2.6.38 kernel and the opening of the merge window (the period of time during which new feature additions are allowed into the next kernel under development) for 2.6.39. Work on that kernel release remained ongoing as of this writing. Among the many exciting new features coming will be support for ‘pstore’, the persistent storage file system. This takes advantage of the growing trend for modern systems (servers, but perhaps also embedded devices) to have small amounts of flash or other persistent storage set aside for the purpose of logging faults and diagnosing crashes. Pstore exposes this storage through a new /dev/pstore device and can be used to log successful boots, kernel panics and other events of note without touching regular file system storage.

Having a separate area of storage set aside for important event logging is especially useful when a system crashes, since the kernel may not be able to trust that its regular storage subsystems have not been compromised – that is to say that the kernel cannot know whether the crash has caused some code in memory to become corrupted, possibly leading to unsafe writes to disk if the regular logging infrastructure were to be used. For this reason, most ‘Enterprise’-grade kernels will immediately halt on certain errors and force the system to ‘panic’ in order to avoid the more serious risk of any such data corruption. By setting aside a pstore device, the kernel can ensure that the worst that can possibly happen is that the pstore device data is somehow corrupted, which is only a minor inconvenience to tracking down a fault. Look out for pstore in 2.6.39.
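
As a rough illustration, the short C listing below simply walks a pstore mount point and prints whatever record files it finds there. It is a minimal sketch only: it assumes pstore has already been mounted at /dev/pstore as described above, and the mount location and record names will vary between platforms and kernel builds.

/* list_pstore.c - print any records preserved by pstore.
 * Minimal sketch: assumes pstore is mounted at /dev/pstore
 * (the mount point and record names vary by platform and kernel). */
#include <stdio.h>
#include <limits.h>
#include <dirent.h>
#include <sys/stat.h>

#define PSTORE_MOUNT "/dev/pstore"

int main(void)
{
        DIR *dir = opendir(PSTORE_MOUNT);
        struct dirent *entry;
        char path[PATH_MAX];
        struct stat st;

        if (!dir) {
                perror("opendir " PSTORE_MOUNT);
                return 1;
        }

        while ((entry = readdir(dir)) != NULL) {
                if (entry->d_name[0] == '.')
                        continue;        /* skip "." and ".." */
                snprintf(path, sizeof(path), PSTORE_MOUNT "/%s",
                         entry->d_name);
                if (stat(path, &st) == 0)
                        printf("%-32s %lld bytes\n", entry->d_name,
                               (long long)st.st_size);
        }
        closedir(dir);
        return 0;
}

Each file listed corresponds to a single preserved event – an oops or panic trace, for example – recovered from the platform’s persistent store on the following boot.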

Power management
Further out, it has been announced that support for the legacy ‘APM’ (Advanced Power Management) interface is going away in 2.6.40. APM (which is not really all that ‘advanced’ these days) was implemented on older PC systems and required the BIOS to perform much of the heavy lifting involved in suspending, resuming and other power management tasks. By contrast, the replacement for APM, ACPI, yields most of the functions involved in power management decisions to the operating system. ACPI systems use a combination of an in-kernel interpreter for a special kind of ACPI Machine Language (AML) and some limited callbacks from the OS into the BIOS to do a much more comprehensive job. Any PC system available today uses ACPI.
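
For the curious, a quick and deliberately rough way to see which of the two interfaces a running kernel has exposed is sketched below. It assumes the conventional locations: legacy APM kernels publish /proc/apm, while ACPI kernels publish their firmware tables under /sys/firmware/acpi.

/* pm_check.c - rough check of which firmware power-management
 * interface the running kernel exposes. Assumes the conventional
 * paths: /proc/apm for legacy APM, /sys/firmware/acpi for ACPI. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        if (access("/sys/firmware/acpi", F_OK) == 0)
                puts("ACPI: the OS drives power management via AML");
        else if (access("/proc/apm", F_OK) == 0)
                puts("APM: power management delegated to the BIOS");
        else
                puts("Neither interface found");
        return 0;
}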

This month saw a flare-up of the debate over ARM platform support and standardisation. In the embedded space, it is common to hack up a Linux kernel to support a specific product being developed, release the product and then move on to the next big thing. This ‘embedded dilemma’ has gotten a little better in recent years, with many parties contributing patches into the mainstream Linux kernel, but there remains too much churn and reinvention of the wheel to solve short-term problems. Work on such projects as Device Tree aims to make embedded systems more generically supportable through the use of one-size-fits-all kernels that can determine system device information and configuration at runtime, much as PC-class systems do using ACPI.
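
You can glimpse this from userspace with the small sketch below: on a board that boots from a device tree the kernel conventionally exposes it under /proc/device-tree, so reading the ‘model’ property shows what the kernel discovered at runtime, while an ACPI-configured PC will simply have no such directory. This is a minimal illustration only, and the exposed paths differ between architectures and kernel configurations.

/* dt_model.c - print the board model from the device tree, if any.
 * Minimal sketch: device-tree systems conventionally expose the tree
 * under /proc/device-tree; ACPI systems will not have this path. */
#include <stdio.h>

int main(void)
{
        char model[256];
        FILE *f = fopen("/proc/device-tree/model", "r");

        if (!f) {
                puts("No device tree exposed (likely an ACPI system)");
                return 0;
        }
        if (fgets(model, sizeof(model), f))
                printf("Board model from device tree: %s\n", model);
        fclose(f);
        return 0;
}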

Finally, this month saw the revival of this author’s kernel podcast, which is a free semi-weekly audio account of certain development activities. Find out more and subscribe to the feed at www.kernelpodcast.org.
