Eclasses, Portage and PMS

Recently I had a little IRC bikeshed with bonsaikitten on the topic of .la file removal. As one of the maintainers of autotools-utils.eclass, I tend to like people actually using that eclass and keeping the .la removal algorithm there; bonsaikitten would like to see it in Portage instead.

To prove my point, let’s take a look at the process of making a change in an eclass:

  1. getting eclass maintainers’ approval,
  2. sending patches to gentoo-dev for review,
  3. PROFIT!

The whole process usually takes about a week, and the change becomes effective as soon as the user syncs the tree. This means that if a particular change aims to fix an issue with an ebuild yet to be committed, that ebuild’s commit may be delayed a little. But since it is committed after the eclass change, users won’t even notice the breakage.

Although getting a change into a single PM can usually be done faster, it only becomes effective when the user upgrades that PM. This usually means that the ebuild author has to either work around the problem or delay the commit until the fix goes stable, and even then a number of users could still be hit by the bug.

For PMS, I think the situation is clear. It takes a lot of time to get a change into PMS, get a new EAPI approved, get it implemented, get it stabilized and finally have it blessed for use in the tree.

Although it may sound like it, I’m not dismissing PMS. PMS has its scope, but I really don’t see a reason to put everything into it just for the fun of it. In my opinion, PMS should cover the most fundamental functions, which are either very simple (and thus unlikely to introduce bugs) or highly relevant to the PM internals.

For example, emake in Portage reuses the MAKEOPTS variable, which can be considered private/internal. elog relies on the PM-specific output system, and doins… well, one could say it could reuse some magic to optimize the merge process (OTOH, ebuilds already assume it does not).
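
For illustration, Portage’s emake boils down to more or less the following (a simplified sketch, not the exact helper code):

    emake() {
        # MAKEOPTS comes from make.conf and EXTRA_EMAKE from the user;
        # both are Portage conventions rather than something PMS defines
        ${MAKE:-make} ${MAKEOPTS} ${EXTRA_EMAKE} "$@"
    }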

econf, on the other hand, although pretty common, doesn’t fully fit into PMS. The only magic it uses is for libdir, and that magic is very specific to autoconf. Yet the same magic needs to be reimplemented in multilib.eclass to let non-autoconf build systems handle libdir correctly.
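
For the curious, the libdir magic amounts to roughly this (an illustrative sketch; the variable names follow the multilib profiles, and the real econf is more careful than that):

    # pick the libdir matching the current ABI, falling back to 'lib'
    libdir_var="LIBDIR_${ABI}"
    libdir="${!libdir_var:-lib}"

    ./configure \
        --prefix=/usr \
        --libdir="/usr/${libdir}" \
        "$@"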

Returning to the topic: .la removal is not suitable for either PMS or PM because:

  1. it is very specific to autotools and libtool,
  2. it requires either a smart algo or some magic to determine which files to remove and which ones to keep,
  3. and for those kept, more magic could be required.

A quite sane algo is implemented in autotools-utils right now. As further packages are migrated to it, maintainers can give us feedback and help us improve it. And if it fails on a new package, we can commit a fix before the package reaches end users.
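
To give an idea, the logic amounts to something like the following (a simplified approximation, not the actual eclass code):

    # walk the installation image and drop .la files which carry no
    # useful information; keep the ones a static archive actually needs
    while IFS= read -r -d '' f; do
        if [[ -f ${f%.la}.a ]] \
                && grep -q "^dependency_libs='..*'" "${f}"; then
            continue    # static lib installed, non-empty deps: keep it
        fi
        rm -f "${f}"
    done < <(find "${D}" -name '*.la' -print0)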

If Portage started removing .la files on its own, we’d end up with either:

  1. having a really smart algo which always works right, or else random breakages for a number of users, with the only solution being ‘upgrade Portage and rebuild the offending packages’;
  2. implementing some kind of Portage-specific variables to control the .la removal better.

I really don’t like either of those. So, just migrate your ebuilds! Distutils packages use distutils.eclass, CMake packages use cmake-utils.eclass. There’s no reason not to inherit a dedicated eclass for autotools, with its autotools-specific quirks.
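
Migrating usually means little more than inheriting the eclass and letting its default phases do the work; a minimal, made-up skeleton could look like this:

    EAPI=4

    inherit autotools-utils

    DESCRIPTION="An example autotools-based package"
    HOMEPAGE="http://example.org/"
    SRC_URI="http://example.org/${P}.tar.bz2"

    LICENSE="GPL-2"
    SLOT="0"
    KEYWORDS="~amd64 ~x86"

    # src_configure, src_compile and src_install come from the eclass,
    # and the .la files are cleaned up in src_install for us.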

Ah, and please finally stop pushing everything into PMS just because some devs break eclass APIs. If someone breaks policy in a particular instance, that person should be punished. Locking all devs in cages is no solution.

Building Mozilla plugins without Mozilla

As of Firefox 6.0, Mozilla no longer supports building it against a shared xulrunner. You may or may not have noticed that already. For some users, this simply means that your next --depclean will unmerge the last installed copy of xulrunner. For others, it means you will be building two copies of the same thing: one for Firefox, and another for the few packages depending on xulrunner.

One especially painful case here is browser plugins. Right now, their ebuilds mostly depend on the whole of xulrunner being built, while that’s not what the packages themselves actually require. In fact, building Netscape plugins requires only the headers to exist; no linkage is necessary, as all the necessary symbols are provided by the browser itself (and if they aren’t, the plugin probably won’t work anyway).
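
This is easy to verify on any installed plugin: it only has to export the NP_* entry points, and it should not pull in any xulrunner libraries. For example (the path and the plugin name are just placeholders):

    # no libxul/xulrunner entries should show up here
    objdump -p /usr/lib/nsbrowser/plugins/libexampleplugin.so | grep NEEDED

    # the entry points the browser looks up at load time
    objdump -T /usr/lib/nsbrowser/plugins/libexampleplugin.so | grep ' NP_'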

Given all that, I really don’t see a reason to waste an hour or so compiling an awfully large package just to use its headers for a few minutes and then have no real use for it. That’s why I decided to try establishing a tiny package containing the headers and pkg-config files necessary to build plugins.

How do packages build Mozilla plugins?

Most browser plugins are actually pure NPAPI plugins. In the simplest terms, that means they need four standard, well-established np* headers to be built. These can be found, for example, in the npapi-sdk package.
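
If I recall correctly, npapi-sdk installs those headers (npapi.h, npfunctions.h, npruntime.h and nptypes.h) along with a pkg-config file, so a well-behaved build system could simply ask for them:

    # prints the include path for the NPAPI headers
    pkg-config --cflags npapi-sdk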

And in fact, some projects actually bundle those four headers. That’s the best case, because it means the particular plugin has no xulrunner/mozilla dependency at all: it simply builds against the bundled headers and we don’t have to worry about supplying them.

When packages rely on external NPAPI headers, the pain begins. In all these years, Mozilla upstream never really established a clear way of finding them. Each package has its own autoconf check for them, more or less complex, and more or less wrong.

A good example here is VLC. It provides a quite painless check, looking either for libxul or for one of the *-plugin pkg-config packages. I’m not sure whether most of the names used there ever really existed, but mozilla-plugin is the one most commonly used.

Either way, that test always succeeds nicely with xulrunner installed and lets VLC find its headers. If we establish a tiny NPAPI header package and just ship a mozilla-plugin pkg-config file in it, VLC will build fine against it. Sadly, if libxul is present in the system, VLC will use it and link the plugin against it, for no reason.
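
In plain shell, the detection amounts to more or less this (a rough rendition, not VLC’s actual configure code):

    # try libxul first, then fall back to the generic plugin package
    for pkg in libxul mozilla-plugin; do
        if pkg-config --exists "${pkg}"; then
            CPPFLAGS="${CPPFLAGS} $(pkg-config --cflags "${pkg}")"
            # with libxul installed this drags in real libraries,
            # even though the plugin does not need any of them
            LIBS="${LIBS} $(pkg-config --libs "${pkg}")"
            break
        fi
    done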

gnash is another good example here. Although the code may look a little scary, it uses the mozilla-plugin pkg-config package and doesn’t link against anything.

On the other hand, gecko-mediaplayer is an awful example here. It uses a lot of random pkg-config checks and enables features based on the xulrunner pkg-config version. That is mostly impossible to handle cleanly; we need to inject an additional, hacky pkg-config file to satisfy its checks, probably for no good reason.

IcedTea-web is a totally different case. Unlike the packages mentioned before, this one uses a larger set of xulrunner headers, though it still requires no linkage. After building it against a pile of xulrunner headers, the plugin works fine in Opera.

Creating the header package

Considering the above, a plain npapi-sdk package is not enough. We at least need to install mozilla-plugin.pc; installing libxul.pc as well satisfies the configure checks of more packages but breaks VLC (as it then tries to link with the non-existent xulrunner libraries). If we hack the latter and remove the Libs: line from it, VLC builds fine.
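
The hacked file then contains little more than the include path; something along these lines (an illustrative stub with made-up paths and version, not the file actually shipped):

    prefix=/usr
    includedir=${prefix}/include/mozilla-plugin-sdk

    Name: libxul
    Description: Stub for building NPAPI plugins without xulrunner
    Version: 6.0
    Cflags: -I${includedir}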

Right now, I’m testing a simple mozilla-plugin-sdk package. It installs the complete set of xulrunner headers along with the two aforementioned pkg-config files, satisfying all the Mozilla plugins I’ve tried. Sadly, due to the number of headers, the package is awfully large.

The next step will probably be stripping unnecessary headers out of the package. I have already started using makedepend to check which headers are actually used by Netscape plugins. Any further tips would be appreciated.
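
For reference, this is the kind of check I mean (paths and file names are only examples):

    # list the headers a plugin source file actually pulls in;
    # -f- makes makedepend print to stdout instead of editing a Makefile
    makedepend -f- -I/usr/include/mozilla-plugin-sdk plugin.c

    # gcc can produce a similar list
    gcc -MM -I/usr/include/mozilla-plugin-sdk plugin.c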