The other day I installed Ubuntu 15.04 on one of my boxes. I just needed something where I could throw in a DVD, hit install and be done. I didn’t care about customization or choice, I just needed a working Linux system from which I could do chroot work. Thousands of people around the world install Ubuntu this way and when they’re done, they have a stock system like any other Ubuntu installation, all identical like frames in an Andy Warhol lithograph. Replication as a form of art.
In contrast, when I install a Gentoo system, I enjoy the anxiety of choice. Should I use syslog-ng, metalog, or skip a system logger altogether? If I choose syslog-ng, then I have a choice of 14 USE flags, for 2^14 = 16384 possible configurations of just that package. And that’s just one of some 850+ packages that are going to make up my desktop. In contrast to Ubuntu where every installation is identical (whatever “idem” means in this context), the sheer space of possibilities makes no two Gentoo systems the same unless there is some concerted effort to make them so. In fact, Gentoo doesn’t even have a notion of a “stock” system unless you count the stage3s, which are really bare bones. There is no “stock” Gentoo desktop.
With the work I am doing with uClibc and musl, I needed a release tool that would build identical desktops repeatedly and predictably, where all the choices of packages and USE flags were laid out a priori in some specifications. I considered catalyst stage4, but catalyst didn’t provide the flexibility I wanted. I initially wrote some bash scripts to build an XFCE4 desktop from uClibc stage3 tarballs (what I dubbed “Lilblue Linux”), but this was very much ad hoc code and I needed something that could be generalized so I could do the same for a musl-based desktop, or indeed any Gentoo system I could dream up.
This led me to formulate the notion of what I call a “Gentoo Reference System” or GRS for short — maybe we could make stock Gentoo systems available. The idea here is that one should be able to define some specs for a particular Gentoo system that will unambiguously define all the choices that go into building that system. Then all instances built according to those particular GRS specs would be identical in much the same way that all Ubuntu systems are the same. In a Warholian turn, the artistic choices in designing the system would be pushed back into the specs and become part of the automation. You draw one frame of the lithograph and you magically have a million.
The idea of these systems being “references” was also important for my work because, with uClibc or musl, there are a lot of package breakages — remember, you’re pushing up against actual implementations of C functions, and nearly everything in your system is written in C. So, in the space of all possible Gentoo systems, I needed some reference points that worked. I needed those magical combinations of flags and packages that would build and yield useful systems. It was also important that these references be easily kept working over time, since Gentoo systems evolve as the main tree, or overlays, are modified. Since on some successive build something might break, I needed to quickly identify the delta and address it. The metaphor that came up in my head from my physics background is that of phase space. In the swirling mass of evolving dynamical systems, I pictured these “Gentoo Reference Systems” as markers etching out a well-defined path over time.
Enough with the metaphors, how does GRS work? There are two main utilities, grsrun and grsup. The first is run on a build machine and generates the GRS release as well as any extra packages and updates. These are delivered as binpkgs. In contrast, grsup is run on an installed GRS instance and it’s used for package management. Since we’re working in a world of identical systems, grsup prefers working with binpkgs that are downloaded from some build machine, but it can revert to building locally as well.
The GRS specs for some system are found on a branch of a git repository. Currently the repo at https://gitweb.gentoo.org/proj/grs.git/ has four branches, one for each of the four GRS specs housed there. grsrun is then directed to sync the remote repo locally, check out the branch of the GRS system we want to build, and begin reading a script file called build which directs grsrun on what steps to take. The scripting language is very simple and contains only a handful of different directives. After a stage tarball is unpacked, build can direct grsrun to do any of the following (a sketch of such a script follows the list):
mount and umount – Do a bind mount of /dev/, /dev/pts/ and other directories that are required to get a chroot ready, and tear those mounts down again when done.
populate – Selectively copy files from the local repo to the chroot. Any files can be copied in, so, for example, you can prepare a pristine home directory for some user with a pre-configured desktop. Or you can add customized configuration files to /etc for services you plan to run.
runscript – This will run some bash or python script in the chroot. The scripts are copied from the local repo to /tmp of the chroot and executed there. These scripts can be like the ones that catalyst runs during stage1/2/3, but can also be scripts to add users and groups, to add services to runlevels, etc. Think of anything you would do when growing a stage3 into the system you want, script it up, and GRS will automate it for you.
kernel – This looks for a kernel config file in the local repo, parses it for the version, builds the kernel, and both bundles it as a package called linux-image-<version>.tar.xz for later distribution and installs it into the chroot. grsup knows how to work with these linux-image-<version>.tar.xz files and can treat them like binpkgs.
tarit and hashit – These directives create a release tarball of the entire chroot and generate the digests.
pivot – If you built a chroot within a chroot, like catalyst does during stage1, then this pivots the inner chroot out so that further building can make use of it.
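To give a feel for how these directives fit together, here is an illustrative build script. The directive names are the ones described above, but everything else — the ordering details, the script name, even whether comments are allowed — is my assumption; consult the spec branches in the git repo for real examples.

    # hypothetical build script: directive names are real, the rest is assumed
    mount                   # bind mount /dev/, /dev/pts/ etc. into the chroot
    populate                # copy pre-configured files from the local repo
    runscript desktop.sh    # hypothetical script that grows the stage3 into a desktop
    kernel                  # build and bundle linux-image-<version>.tar.xz
    umount                  # tear down the bind mounts
    tarit                   # roll the chroot into a release tarball
    hashit                  # generate the digests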
From an implementation point of view, the GRS suite is written in python and each of the above directives is backed by a simple python class. It’s easy, for instance, to implement more directives this way. E.g. if you want to build a bootable CD image, you can include a directive called isoit, write a python class for what’s required to construct the iso image and glue this new class into the grs module.
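As a rough illustration, such a directive class might look something like the following. This is a minimal sketch: the class name, constructor arguments and run() method are my assumptions, not the actual grs module API, and grub-mkrescue is just one of several tools that can produce a bootable image from a directory tree.

    import subprocess

    class IsoIt:
        """Hypothetical 'isoit' directive: build a bootable ISO from a chroot.
        The real grs module would dictate the actual base class and hooks."""

        def __init__(self, chroot_dir, iso_path):
            self.chroot_dir = chroot_dir
            self.iso_path = iso_path

        def run(self):
            # grub-mkrescue wraps xorriso and produces a bootable image from
            # a directory tree; any other ISO tool could be swapped in here
            subprocess.check_call(
                ['grub-mkrescue', '-o', self.iso_path, self.chroot_dir])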
If you’re familiar with catalyst, at this point you might be wondering what’s the difference? Can’t you do all of this with catalyst? There is a lot of overlap, but the emphasis is different. For example, I wanted to be able to drop in a pre-configured desktop for a user. How would I do that with catalyst? I guess I could create an overlay with packages for some pre-built home directory, but that’s a perversion of what ebuilds are for — we should never be installing into /home. Rather, with grsrun I can just populate the chroot with whatever files I like anywhere in the filesystem. More importantly, I want to be able to control what USE flags are set and, in general, manage all of /etc/portage/. catalyst does provide portage_confdir, which populates /etc/portage/ when building stages, but it’s pretty static. Instead, grsup and two other utilities, install-worldconf and clean-worldconf, can dynamically manage files under /etc/portage/ according to a configuration file called world.conf.
Lapsing back into metaphor, I see catalyst as rigid and frozen whereas grsrun is loose and fluid. You can use grsrun to build stage1/2/3 tarballs which are identical to those built with catalyst, and in fact I’ve done so for hardened amd64 multilib stages so I could compare. But with grsrun you have too much freedom in writing the scripts and files that go into the GRS specs, and chances are you’ll get something wrong, whereas with catalyst the build is pretty regimented and you’re guaranteed to get uniformity across arches and profiles. So while you can do the same things with each tool, it’s not recommended that you use grsrun to do catalyst stage builds — there’s too much freedom. When building desktops or servers, though, you might welcome that freedom.
Finally, let me close with how grsup works. As mentioned above, the GRS specs for some system include a file called world.conf. It’s in configparser format and it specifies files and their contents in the /etc/portage/ directory. An example section in the file looks like:
    [app-crypt/gpgme:1]
    package.use : app-crypt/gpgme:1 -common-lisp static-libs
    package.env : app-crypt/gpgme:1 app-crypt_gpgme_1
    env : LDFLAGS=-largp
This says: for package app-crypt/gpgme:1, drop a file called app-crypt_gpgme_1 into /etc/portage/package.use/ that contains the line “app-crypt/gpgme:1 -common-lisp static-libs”, drop another file by the same name into /etc/portage/package.env/ with the line “app-crypt/gpgme:1 app-crypt_gpgme_1”, and finally drop a third file by the same name into /etc/portage/env/ with the line “LDFLAGS=-largp”. grsup is basically a wrapper around emerge which first populates /etc/portage/ according to the world.conf file, then emerges the requested package(s), preferring the use of binpkgs over building locally as stated above, and finally cleans up /etc/portage/. install-worldconf and clean-worldconf isolate the populate and clean-up steps so they can be used in scripts run by grsrun when building the release. To be clear, you don’t have to use grsup to maintain a GRS system. You can maintain it just like any other Gentoo system, but if you manage your own /etc/portage/, then you are no longer tracking the GRS specs. grsup is meant to make sure you update, install or remove packages in a manner that keeps the local installation in compliance with the GRS specs for that system.
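Concretely, after install-worldconf processes the section above, the chroot should contain three files like these (the layout is reconstructed from the description; the contents are exactly those given in world.conf):

    /etc/portage/package.use/app-crypt_gpgme_1:
        app-crypt/gpgme:1 -common-lisp static-libs

    /etc/portage/package.env/app-crypt_gpgme_1:
        app-crypt/gpgme:1 app-crypt_gpgme_1

    /etc/portage/env/app-crypt_gpgme_1:
        LDFLAGS=-largp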
All this is pretty alpha stuff, so I’d appreciate comments on design and implementation before things begin to solidify. I am using GRS to build three desktop systems which I’ll blog about next. I’ve dubbed these systems Lilblue, which is a hardened amd64 XFCE4 desktop with uClibc as its standard libc, Bluedragon, which uses musl, and finally Bluemoon, which uses good old glibc. (Lilblue is actually a few years old, but the latest release is the first built using GRS.) All three desktops are identical with respect to the choice of packages and USE flags, and differ only in their libcs, so one can compare the three. Lilblue and Bluedragon are on the mirrors, or you can get all three from my dev space at http://dev.gentoo.org/~blueness/theblues/. I didn’t push out Bluemoon on the mirrors because a glibc-based desktop is nothing special. But since building with GRS is as simple as cloning a git branch and tweaking, and since the comparison is useful, why not?
The GRS home page is at https://wiki.gentoo.org/wiki/Project:RelEng_GRS.
That looks pretty sweet, I’m going to try this out for rolling out system images for older laptops.
So far I’ve been using a 32-bit chroot with a binpkg repo to this effect, since I could not build a catalyst image because of various circular USE dependencies I encountered.
So it’s nice to hear of a tool that can mitigate this issue (as indicated in the docs), and even do more.
Great stuff!
You might be interested in work I’ve done to build the kernel using an ebuild:
http://git.meleeweb.net/gentoo/portage.git/tree/sys-kernel
Also, see bug https://bugs.gentoo.org/show_bug.cgi?id=472906
If it gets into the tree, I could use it. Right now I’m bundling the kernel and initrd as just a plain tarball. This would make it a proper .tbz2 package that could be managed like any other package. I don’t know what Calculate Linux does, but they bundle a pre-compiled kernel. You might want to look at what they do as well.
I’m using metro (http://www.funtoo.org/Metro). It requires some time to customize, but it works pretty well.