One of the comments on The modern packager’s security nightmare post posed a very important question:
why is it bad to depend on the app developer to address security issues? In fact, I believe it is important enough to justify a whole post discussing the problem. To clarify, the wider context is bundling dependencies, i.e. relying on the application developer to ensure that all the dependencies included with the application are free of vulnerabilities.
In my opinion, the root of security in open source software is widely understood auditing. Since the code is public, anyone can read it, analyze it, test it. However, with a typical system install including thousands of packages from hundreds of different upstreams, it is practically impossible even for large companies (not to mention individuals) to audit all that code. Instead, we assume that with a large enough number of eyes looking at the code, all vulnerabilities will eventually be found and published.
On top of auditing we add trust. Today, CVE authorities are at the root of our vulnerability trust. We trust them to reliably publish reports of vulnerabilities found in various packages. However, once again we can’t expect users to manually verify that the huge number of packages they are running are free of vulnerabilities. Instead, trust is delegated down the hierarchy to software authors and distributions.
Both software authors and distribution packagers share a common goal — ensuring that their end users are running working, secure software. Why do I believe then that the user’s trust is better placed in distribution packagers than in software authors? I am going to explain this in three points.
How many entities do you trust?
The following graph depicts a fragment of the dependency chain between a few C libraries and end-user programs. The right-hand side depicts leaf packages, those intended for the user. On the left side, you can see some of the libraries they use, or that their dependent libraries use. All of this stems from OpenSSL.
Now imagine that a vulnerability is found in OpenSSL. Let’s discuss what would happen in two worlds: one where the distribution is responsible for ensuring the security of all the packages, and another one where upstreams bundle dependencies and are expected to handle it.
In the distribution world, the distribution maintainers are expected to ensure that every node of that dependency graph is secure. If a vulnerability is found in OpenSSL, it is their responsibility to realize that and update the vulnerable package. The end users effectively have to trust distribution maintainers to keep their systems secure.
In the bundled dependency world, the maintainer of every successive node needs to ensure the security of its dependencies. First, the maintainers of cURL, Transmission, Tor, Synergy and Qt5 need to realize that OpenSSL has a new vulnerability, update their bundled versions and make new releases of their software. Then, the maintainers of poppler, cmake, qemu and Transmission (again) need to update their bundled versions of cURL to transitively avoid the vulnerable OpenSSL version. Same goes for the maintainers of Synergy (again) and KeepAssXC that have to update their bundled Qt5. Finally, the maintainers of Inkscape and XeTeX need to update their bundled poppler.
Even if we disregard the amount of work involved and the resulting slowdown in deploying the fix, the end user needs to trust 11 different entities to update their respective software packages so that they do not (transitively) ship a vulnerable version of OpenSSL. And the most absurd thing is, they will nevertheless need to trust their distribution vendor to actually ship all these updated packages.
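The cascade above is just the transitive closure over reverse bundling relationships. Here is a toy sketch in Python; the edge list is reconstructed from the example graph in this post (the real-world graph is, of course, far larger):

```python
from collections import deque

# Bundling graph from the example: package -> packages that bundle it.
# Edges are only those named in the post; real graphs are much bigger.
bundled_by = {
    "openssl": ["curl", "transmission", "tor", "synergy", "qt5"],
    "curl": ["poppler", "cmake", "qemu", "transmission"],
    "qt5": ["synergy", "keepassxc"],
    "poppler": ["inkscape", "xetex"],
}

def entities_to_update(vulnerable: str) -> set[str]:
    """Every upstream that must cut a new release once `vulnerable` is fixed."""
    seen, queue = set(), deque([vulnerable])
    while queue:
        pkg = queue.popleft()
        for dependent in bundled_by.get(pkg, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(len(entities_to_update("openssl")))  # 11 distinct upstreams
```

With dynamic linking, the same fix is one node in this graph; with bundling, every node reachable from it must act independently.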
I think that we can mostly agree that trusting a single entity provides much smaller attack surface than trusting tens or hundreds of different entities.
The bus factor
The second part of the problem is the bus factor.
A typical ‘original’ Linux distribution involves a few hundred developers working together on software. There are much smaller distributions (or forks) but they are generally building on something bigger. These distributions generally have dedicated teams in place to handle security, as well as mechanisms to help them with their work.
For example, in Gentoo security issues can be usually tackled from two ends. On one end, there’s the package maintainer who takes care of daily maintenance tasks and usually notices vulnerability fixes through new releases. On the other end, there’s a dedicated Security team whose members are responsible for monitoring new CVEs. Most of the time, these people work together to resolve security issues, with the maintainer having specific knowledge of the package in question and the Security team sharing general security experience and monitoring the process.
Should any maintainer be away or otherwise unable to fix the vulnerability quickly, the Security team can chime in and take care of whatever needs to be done. These people are specifically focused on that one job, and this means that the chances of things going sideways are comparatively small. Even if the whole distribution were to suddenly disappear, the users have a good chance of noticing that.
Aside from a few very large software packages, most software projects are small. It is not uncommon for a package to be maintained by a single person. Now, how many dependencies can a single person or a small team effectively maintain? Even with the best intentions, a software developer whose primary focus is the code of their own project cannot reasonably be expected to bring the same level of dedication and experience to the project’s dependencies as dedicated distribution maintainers who do full-time maintenance work on them.
Even if we could reasonably assume that we can trust all upstreams to do their best to ensure that their dependencies are not vulnerable, it is inevitable that some of them will not be able to handle this in a timely manner. In fact, some projects are suddenly abandoned, and then vulnerabilities are not handled at all. Now, the problem is not only that it might happen. The problem is how to detect it early, and how to deal with it. Can you reasonably be expected to monitor hundreds of upstreams for activity? Again, the responsibility falls on distribution developers, who would have to resolve these issues independently.
How much testing can you perform?
The third point is somewhat less focused on security, and more on bugs in general. Bundling dependencies not only defers handling security in packages to the application developers but also all other upgrades. The comment author argues:
“I want to ship software that I know works, and I can only do that if I know what my dependencies actually are so I can test against them.” That’s a valid point, but there’s a catch: how many real-life scenarios can you actually test?
Let’s start with the most basic stuff. Your CI most likely runs on a 64-bit x86 system. Some projects test more, but it’s still a very limited set of hardware. If one of your dependencies is broken on a non-x86 architecture, your testing is unlikely to catch that. Even if the authors of that dependency are informed about the problem and release a fixed version, you won’t know that the upgrade is necessary unless someone reports the problem to you (and you may have trouble attributing the issue to that particular dependency).
In reality, things aren’t always this good. Not all upstreams release fixes quickly. Distribution packagers sometimes have to backport or even apply custom patches to make things work. If packages bundle dependencies, it is not sufficient to apply the fix at the root — it needs to be applied to all packages bundling the dependency. In the end, it is even possible that different upstreams will start applying different patches to the same dependencies to resolve the same problem independently reported to all of them. This means more maintenance work for you, and a maintenance nightmare for distributions.
There are also other kinds of issues that CIs often don’t catch: an ‘unexpected’ system setup, a different locale, additional packages installed (Python is sometimes quite sensitive to that). Your testing can’t really predict all possible scenarios and protect against them. Pinning to a dependency that you know to be good for you does not guarantee that it will be good for all your end users. By blocking upgrades, you may actually expose them to bugs that are already known and fixed.
Bundling dependencies is bad. You can reasonably assume that you’re going to do a good job at maintaining your package and keeping its bundled dependencies secure. However, you can’t guarantee that the maintainers of the other packages involved will do the same. And you can’t reasonably expect the end user to entrust the security of their system to hundreds of different people. The stakes are high and the risk is huge.
The number of entities involved is just too great. You can’t expect anyone to reasonably monitor them all, and with many projects having no more than a single developer, you can’t guarantee that fixes will be handled promptly, or that the package in question will not be abandoned. A single package in the middle of a dependency chain can effectively render all of its reverse dependencies vulnerable, and multiply the work of their maintainers, who each have to fix the problem locally.
In the end, the vast majority of Linux users need to trust their distribution to ensure that the packages shipped to them are not vulnerable. While you might think that letting you handle security makes things easier for us, it doesn’t. We still need to monitor all the packages and their dependencies. The main difference is that we can fix it in one place, while upstreams have to fix it everywhere. And while they do, we need to ensure that all of them actually did that, and often it is hard to even find all the bundled dependencies (including inline code copied from other projects) as there are no widely followed standards for doing this.
So even if we ignored all the other technical downsides of bundled dependencies, the total work needed to keep packages that bundle them secure is much greater than the cost of unbundling the dependencies.
You should ask yourself the following question: do you really want to be responsible for all these dependencies? As a software developer, you want to focus on writing code. You want to make sure that your application works. There are other people who are ready and willing to take care of the ecosystem your software is going to run in. Fit your program into the environment, instead of building an entirely new world for it. When you do that, you’re effectively replicating a subset of a distribution. Expecting every single application developer to do the necessary work (and to have the necessary knowledge) does not scale.
8 thoughts on “Why not rely on app developer to handle security?”
lol KeepAssXC :D
> How many entities do you trust?
I think this question is missing the point.
Trust is how humans sense risks of a relationship they are in.
Risk can be described as a set of potential events, characterised by amounts of damage and their respective probabilities.
What matters is the total risk they have running a given system. Like, “what is the total amount of damage from security breaches per year, which covers 90% of the cases”?
When we take this probabilistic, economy-centric view, it becomes obvious that the actual performance of the suppliers is what matters. By performance I mean defect resolution and upgrade timeliness and reliability, and don’t even get me started on the Total Cost of Ownership. This is why the arguments from the Linux distributions side, including your fine articles, are confronted by so much rage: the actual performance of distributions in delivering many valuable, modern, fast-developing software tools is just not competitive in a large fraction of cases. You’re never going to dissuade people from short-circuiting distros. Not going to happen unless it becomes an economically sensible choice for them.
Which is why, personally, I recommend (half-)rolling release distros such as partially keyworded Gentoo, or Manjaro.
You are right that this easier and faster security comes at the expense of a slightly slower rollout of new versions.
However, from a single developer’s TCO standpoint, nothing beats the linux distro, precisely because of the CVE dependency graph, and because I don’t have to think about distribution other than tarring the source.
If you move this discussion a layer down, into the kernel, you realise the real mess this leads to: the ecosystem of unsupported Android phone hardware. By bundling everything together on one vendor-approved kernel version, the kernel security lifetime of devices is shortened to two years, if not less.
Yes, from a company’s perspective it sucks, because I can’t guarantee the entire chain. But guaranteeing the chain is what brings this mess in the first place.
I agree with the analysis, but not with the conclusion. You explain in great detail why we want distribution maintainers to do QA work on packages and their dependencies, which I expect is uncontroversial.
But your conclusion assumes that static linking prevents this QA work from being done. This is mostly true for C and C++ packages, which have byzantine build systems and opaque binaries, but false for languages with a modern ubiquitous deps manager like Go or Rust (and somewhere in between for other languages).
Dependencies can be tracked distribution-wide, and there are even tools to audit a compiled binary for CVEs. The workflow is different than the “update libfoo.so system-wide” one that has been refined over the years with C and C++ in mind, but there’s no reason it must result in inferior QA. There’s a bit of work left to write tools to make that workflow easy, but it’s not insurmountable (I’m contributing on the Gentoo/Rust side of things) and has to be done anyway. Like it or not, static linking is becoming more common (in part because distributions sometimes can’t keep up with upstream needs); I don’t think distributions can afford to say “not our problem, ask upstream to use dynamic linking”.
Another point where I think your analysis is flawed is that you’re framing this as a “trust the distribution vs trust upstream” dichotomy. But it’s not an either/or situation : as an end user I benefit from both the distribution and upstream’s watchful eyes. In all cases I expect the distribution to do the QA work, but I *also* benefit from upstream doing their due diligence, which is more likely to happen if upstream is using static linking.
> static linking prevents this QA work from being done
It does prevent it. As a packager you then need tools (as you mentioned yourself) to inspect the dependencies to rule out outdated/vulnerable dependencies, and maintain a separate database. (The “original” database being the package manager itself)
> tools to audit a compiled binary for CVEs
Please introduce me to one… I honestly never looked for one. This really isn’t a sarcastic comment.
Also, does this look for known vulnerable bytecode? What if it gets optimized because of static linking? Factor in multiple architectures. I’m eager to learn about such tools.
> In all cases I expect the distribution to do the QA work, but I *also* benefit from upstream doing their due diligence, which is more likely to happen if upstream is using static linking.
No. Can’t happen. Won’t happen. I write closed source stuff for a living, and our dependencies get added, and then frozen until a CVE in one of the libs hits us. And then a shitstorm happens if it was frozen too long, and the API changed. All people I know work the same way. You can argue “bad upstream” but this is the world. Same as you argue distros are slow. That is the world too. Out of 2 bad options IMO we get both.
> tools to audit a compiled binary for CVEs
Actually it can audit based on lock file:
cargo audit is the basic building block I had in mind. It currently works at build-time, but there is also a Rust RFC under way to embed the necessary info in the compiled binary, so that auditing can be done after the fact, even on third-party binaries. Until that becomes the Rust default, if upstream doesn’t embed the info itself, it can be done by the ebuild. I’m hoping to add that feature to cargo-ebuild at some stage. Go already embeds the relevant info by default, although its cargo-audit equivalent doesn’t seem as mature. Once that’s in place, we can have a distribution-agnostic tool to scan go/rust binaries for CVEs, and fix/rebuild them. No need to wait for upstream: an automated one line patch to Cargo.toml does the job.
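The core of lock-file auditing is just matching pinned (name, version) pairs against an advisory database. A toy sketch of the idea in Python; this is not cargo-audit itself, and the advisory entry, its identifier, and the lock-file contents below are invented for the example (real tools pull from the RustSec advisory database):

```python
import re

# Invented advisory data for illustration only; real auditors query
# the RustSec advisory database, keyed by crate name and version range.
ADVISORIES = {
    ("smallvec", "0.6.9"): "RUSTSEC-XXXX-YYYY (example id)",
}

# A minimal fake Cargo.lock fragment.
LOCKFILE = """\
[[package]]
name = "smallvec"
version = "0.6.9"

[[package]]
name = "serde"
version = "1.0.130"
"""

def audit(lock_text: str):
    """Return (name, version, advisory) for every pinned package with a known advisory."""
    pkgs = re.findall(
        r'\[\[package\]\]\s*name = "([^"]+)"\s*version = "([^"]+)"', lock_text)
    return [(n, v, ADVISORIES[(n, v)]) for n, v in pkgs if (n, v) in ADVISORIES]

for name, version, advisory in audit(LOCKFILE):
    print(f"{name} {version}: {advisory}")
```

Embedding the dependency list in the binary, as described above, simply moves the same lookup from build time to after the fact.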
If you prefer a traditional package-manager database approach, all we need are source-only multi-slot packages of the dependencies that are normally fetched via the CRATES ebuild variable. Those packages would not install anything, just be there for dependency management by portage, including masking vulnerable versions.
Concerning “upstream due diligence”, you missed my point: it’s a bonus check on top of the baseline distribution checks. Not something to rely on, but something that increases confidence. And it is much more likely to happen in an ecosystem that encourages the practice (Rust) than in one that doesn’t (C++).
Falling behind when a dep changes its API is equally painful with statically vs dynamically linked languages. But again: it’s much less likely to go unnoticed in the Rust ecosystem, where running `cargo outdated` is trivial and common practice.
Before you reply again that most upstreams are careless: I’m only arguing that the situation is better in those ecosystems, not that it is a sufficient line of defense.
I’m not sure what to make of your last few sentences. We’re both trying to be pragmatic about the state of the world. Security is not a one-stop shop, it needs to be attacked from multiple angles. We can’t fix behaviors, but we can put incentives in place. To me, “static linking is bad” is an outdated dogma, trying to fix behaviors instead of working with what we have. We can have the best of both worlds, but distributions need to catch up.
> When you do that, you’ve effectively replicating a subset of a distribution. Expecting every single application developer to do the necessary work (and to have the necessary knowledge) does not scale.
*puts on developer hat while still holding their packager hat*
True, but there is always the issue of some (major) distros shipping too-old or broken packages, and so you run into the problem of dropping users, or becoming a bit of a distro yourself for some users, or ending up having to do QA on an infinity of distros (getting your software packaged helps, but it’s not always easy or even possible).
And there are times when you use a library and need patches applied to it.
I’ve seen numerous ways of doing this, but so far I think the only good way is to maintain a patchset/branch/fork/… of upstream. This keeps app developers close to upstream and encourages them to get their modifications merged, reducing their maintenance burden; and it helps packagers, because even if they end up maintaining two packages instead of one, that is much easier to track than bundling.