The road to MVC

In the past month or so I started helping Vittorio add one of the important missing features to our h264 decoder: Multi View support.

MVC

The basic idea behind this feature is quite simple: you are shooting a movie from multiple angles, something that is bound to become fairly common, and you’d like to ensure frame precision.

So what about encoding all the simultaneously captured frames in the same elementary stream, sharing as much as possible across the different layers, and then letting the decoder output the frames somehow?

Since we know that all containers have problems, it might not be a completely bogus idea to have the codec take care of it. Even better if the resulting aggregated bitstream is more compact than the sum of the single ones.

High level structure

What’s different in h264-mvc compared to normal h264?

Random bystander

Not a lot; in fact, the main layer is exactly the same and a normal decoder can just skip over the additional bits (3 NALs, more or less) and decode as usual.

Basically there is a NAL unit to signal which layer we are currently working on, a NAL to store the per-layer SPS and a NAL to keep the actual frame data.

Besides that, everything is exactly the same.
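
To make that concrete, here is a minimal sketch (not the actual Libav code) of how a decoder might dispatch on those NAL unit types; the numeric values come from the H.264 spec (Annex H), everything else is purely illustrative.

    #include <stdio.h>

    /* NAL unit types added by the MVC extension (H.264 Annex H); the values
     * come from the spec, the function below is purely illustrative. */
    enum {
        NAL_PREFIX    = 14, /* signals which view/layer the following VCL NAL belongs to */
        NAL_SUB_SPS   = 15, /* subset SPS: the per-layer sequence parameter set */
        NAL_SLICE_EXT = 20, /* coded slice extension: the non-base-view frame data */
    };

    /* A base-layer-only decoder can simply skip the three MVC types and keep
     * decoding the rest of the stream as plain H.264. */
    void handle_nal(int nal_type)
    {
        switch (nal_type) {
        case NAL_PREFIX:
        case NAL_SUB_SPS:
        case NAL_SLICE_EXT:
            printf("MVC NAL %d: parse it, or skip it for base-view-only decoding\n", nal_type);
            break;
        default:
            printf("NAL %d: decode as usual\n", nal_type);
            break;
        }
    }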

Implementation

So why isn’t it already available? You made it look easy!

Random jb

Sadly, it would be easy if the decoder we have weren’t _that_ convoluted, with many components entangled in a monolithic entity and code that grew over the years to adapt to different needs.

Architectural pain points

Per-slice multithreaded decoding made the code quite hard to follow, since you have a master context h that in certain functions is actually h0, and a slice-specific copy hx that sometimes becomes h, and so on.

Per-frame multithreaded decoding luckily doesn’t get in the way too much for now.

Having to touch a single file of about 4k lines of code isn’t _so_ nice in itself: split the view as you like for editing, you still end up waiting on a single core of your CPU doing the work.

Community constraints

h264-mvc is a fringe feature for many, and if you care about speed you don’t want all that cruft around slowing things down. What is a feature for you is just cruft for many others.

  • MVC support must be completely optional, or at least not slow down normal decoding at all.
  • MVC support must not make the code harder to follow than it is now, so hacking your way through is not an option.
  • MVC should give me a pony, purple

The plan

First take the low-hanging fruit while you think about the best route to achieve your goal.

Random wise person

Refactor

The first step is always to refactor and clean up. Just as you, hopefully, do not cook in a dirty kitchen, people shouldn’t write code on top of crufty code.

Split the monster

In Libav everything compiles quite fast, except for vc1 (vc1dec.c is 6k LOC) and h264 (h264.c was around 6k LOC). New codecs such as vp9 or hevc already landed split into smaller chunks.

Shuffling the code around should be simple enough, so we had h264.c split into h264_slice.c, h264_mb.c and such. That helps keep the (re)build time shorter and makes it easier to focus.

Untangle it

Vittorio worked on removing the dependency on the mpeg12 context in order to make the code easier to follow; it had been one of the pending issues for years. Now h264 doesn’t require mpeg12 in order to build, which will probably make our friends working on Chrome happier, along with everybody else needing _just_ a few selected features in their build.

Pave the road

Once you have divided the problem into smaller sub-problems (parsing the new NALs, storing the information in an appropriate data structure, doing the actual decoding and storing the results somewhere accessible), you can start working on adapting the code to fit. That means reordering some code, splitting functions that will be shared, and maybe slaying some bugs hidden in the weeds while at it.

So far

We are halfway!

Random optimist

Done

We have the frame splitting and NAL parsing pretty much in working shape; it hasn’t been sent for review only because it isn’t useful on its own.

Doing

The frame data decoding is pending some patches from me that try to simplify the slice header parsing so that enough of it can be shared without adding more branches. I hacked it together once and I know the approach works.

The code to store multiple views in a single frame has a whole blueprint being evaluated.

To Do

Test the actual decoding and hopefully make sure the frame reference code behaves as expected; this will probably be the most annoying and time-consuming task if we are unlucky. That code bites.

Libav 10 – release

New release

After several months spent finalizing, we are now pleased to announce the release of Libav 10.

One of the main features of this release is the addition of reference-counted data buffers to Libav and their use in various structures. Specifically, the data buffers used by AVPacket and AVFrame can now be reference counted, which should significantly simplify many use cases. In addition, reference-counted AVFrames can now be used in libavfilter, avoiding the need for a separate libavfilter-specific frame structure. Frames can now be passed straight from the decoders into filters or from filters to encoders.
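
As a rough illustration (a sketch, not code taken from the release itself), sharing a decoded frame with another consumer now boils down to adding and dropping references:

    #include <libavutil/error.h>
    #include <libavutil/frame.h>

    /* Share a decoded frame with another consumer without copying pixels. */
    int forward_frame(AVFrame *decoded)
    {
        AVFrame *copy = av_frame_alloc();
        int ret;

        if (!copy)
            return AVERROR(ENOMEM);

        /* av_frame_ref() only adds a reference to the underlying buffers:
         * no pixel data is copied, both frames point at the same memory. */
        ret = av_frame_ref(copy, decoded);
        if (ret < 0) {
            av_frame_free(&copy);
            return ret;
        }

        /* ... hand `copy` to a filter or an encoder ... */

        /* Drop our reference; the data is freed once the last owner lets go. */
        av_frame_free(&copy);
        return 0;
    }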

These additions made it necessary to bump the major versions of libavcodec, libavformat, libavdevice, libavfilter, and libavutil, which was accompanied by dropping some old deprecated APIs. These libraries are thus not ABI- or API-compatible with the previous release. All the other libraries (libavresample and libswscale) remain ABI- and API-compatible.

Another major point is the inclusion of the HEVC (AKA H.265, the successor of H.264) decoder in the main codebase. It was started in 2012 as a Libav Google Summer of Code project by Guillaume Martres and subsequently completed with the assistance of the OpenHEVC project and several Libav developers.

As usual, this release also contains support for other new formats, many smaller new features and countless bug fixes. We can highlight a native VP9 decoder, with encoding provided through libvpx, native decoders for WebP, JPEG 2000, and AIC, as well as improved WavPack support with encoding through libwavpack, support for more AAC flavors (LD – low delay, ELD – enhanced low delay), slice multithreading in libavfilter, or muxing chapters in ASF. Furthermore a few new filters have been introduced, namely compand, to change audio dynamics, framepack, to create stereoscopic videos, asetpts, to set audio pts, and interlace, to convert progressive video to interlaced. Finally there is more fine-grained detection of host and target libc, which should allow better portability to various cross compilation scenarios.

See the Changelog file for a fuller list of significant changes.

You can download the new release, as usual, from our download page.

Release 10 took a lot of time to get out, mostly due to the fact that we spent lots of time helping downstream projects adapt to the new API, and we tried our best to provide patches to most of the projects we were aware of.

Now we have settled on having migration guides, so the next API-breaking releases won’t require that much effort.

Thanks

I want to thank everybody in the Libav team for spending so much time on the annoying, depressing and unrewarding tasks of coping with the release process, fixing fringe bugs, baking patches for projects not really used and helping clean up the documentation.

Special thanks also go to the people from VideoLan and mpv, since they helped us a lot in many different ways (testing, giving feedback on the new APIs and also providing patches), and to the Google security team that provided me on short notice with a large batch of samples for HEVC and VP9 that I used to validate the new decoders.

Future releases

Release process update

This is the plan for the next 4 releases (spanning more or less from spring till winter); it is the result of all the feedback and requests regarding our release process.

Enough people, mostly mpv, vlc and other downstreams tracking us by git commit, would like to have quicker major releases. After all, the API changes we introduce are mostly there to satisfy their needs.

On the other hand, a good number of people, distribution managers/packagers and those tending to orphaned packages that are used but not really developed further, have quite a problem keeping up with the changes if the API becomes incompatible too often.

In order to help them we have already opened a dedicated section in our bugzilla and started writing migration guides, but they would really prefer not having to patch old packages that often anyway.

Trying to satisfy those two apparently conflicting requirements, this is what we aim for:

  • Every odd major release should not break the API; it must happen quickly once enough features are available and should only augment the API. ABI breaks are still possible, thus the version bumps.
  • Major releases removing old APIs, and thus normally source-incompatible with downstreams not tracking git, should happen at most once per season or twice per year.
  • Every API change will get an entry in the migration guide when it is committed.
  • We remain committed to backporting security-impacting bugfixes across a window of API-breaking releases, thus not leaving out in the cold those who couldn’t or didn’t update often enough.

I hope ~8 feature improvements and ~4 API cleanups per year will make most people happy.

Next releases

Libav 11

It will just provide new features: more optimizations for the usual platforms and the new ones, and support for a good number of fringe codecs, such as the elusive vp7.

As stated above no API breakages are to be expected.

Libav 12

This release will contain major changes, possibly including a new scaling library. The wiki has a Blueprint section tracking the most prominent ones; you are welcome to discuss them with us.

What I’ll be working on

I’m personally involved in the following items:

  • Extend MXF support: the format is quite byzantine and has been extended even further over time. [libav11]
  • Hwaccel2: because the current situation is far from easy to use. [libav11]
  • Mime-type support in input formats: since we support it on output, I don’t see why we should not leverage it on input to speed up format probing. [libav11]
  • AVScale: a replacement for swscale, trying to be more rational and not pointlessly lose information through needless intermediate conversions to YUV. Incidentally, it should also support hardware scalers when available. [libav12]
  • libmfx: Intel tried its best to provide a uniform interface that spans Linux, Windows and possibly MacOSX; I have working decoder and encoder wrappers, and soon also hwaccel1.2 support. [libav11]
  • MVC support: multiview support is nice to have if you want to watch your Blu-ray disks. [libav11]
  • Apple VDA and VT hwaccel: since the introduction of hwaccel1.2, supporting them properly should be easier. [libav11]

If some of them are important to you, actual help or even sponsorship is welcome.

Libav10 Release Progress

We are working on getting a few remaining bits into the tree before we can eventually branch release/10; lots of low-hanging fruit is being reaped right now. Soon we’ll get the first beta out, thus freezing the feature set. If something you need doesn’t make it, don’t be afraid: release/11 will come much quicker and appear in spring.

Missing from the next beta

There are a number of speed improvements people would like to land before we open the branch, and those are being polished this weekend.

The results of the preliminary work Vittorio and I are doing on MVC are getting merged already, since they just make the h264 decoder nicer to read.

Some less famous codecs are getting their slice of attention, even if some fringe ones requiring additional data structures might not make 10.

Missing from the actual release

Downstream updates cross-distribution

We are doing our best to track which downstream projects need an API update; help is more than welcome. Soon we’ll have a tinderbox run to check the status in Gentoo, Debian already has some reports, and hopefully we’ll get some feedback from our friends at OpenSuse as well.

Fuzz cleansing some pending bugs

I still have some on my list and I’m fixing them when I’m not merging the pending patchsets deemed useful for this release. Luckily that part only has to be finished by the last beta, not the first one.

Documentation

We are getting more data into the wiki, our doxygen is getting polished and the examples section is getting richer and more structured (eventually). By the time we hit this release, all the pending work on this side will be available for consumption.

Sprint in March

I decided to postpone the first Libav sprint/meeting by one month, and the focus will probably change a bit; the wiki will be updated accordingly.

Libav Meeting at Fosdem

Fosdem has been great. The Libav and VLC teams had long meetings, so I managed to attend only the nouveau presentation (and missed the Gentoo activities again =_=, sorry guys).

Here is a summary of what was discussed and the outcome.

Releases

We have the next two releases more or less planned.

Libav 10 End of February

The API is almost frozen; we’ll set the branch point as soon as Tim and probably Vittorio land their respective audio and interlacing patches. We’ll get a few betas out during the next 2 weeks and release right after that.

I’ll take care to make sure downstream requests regarding small features get addressed, so please drop me a line.

Expect the first beta by this weekend.

Libav 11 Early summer

Due to the apparently conflicting requests from downstreams, who want more major releases, and distributors, who would like API deprecations to happen less often, this release will have just API extensions and should remain backwards compatible with 10. That means that 12, due this autumn, will contain all the planned deprecations.

The two items I’ll be working on will be hwaccel2 and avscale. Help in the form of sponsorship and code is welcome.

The other items being worked on are: timestamps, scalable HEVC, generic data structures to support certain codec features, multi-view frames, a complete dsputil overhaul resulting in smaller code for focused downstreams (e.g. Chrome), and much more.

Libav 12 Mid Autumn

This release will contain some deprecations, more internal overhauls to make it even simpler to use just portions of Libav without requiring the rest of it, and further improvements to codecs and container formats.

Documentation

We have a wiki! And we are not afraid to use it!

Migration Paths

Derek pointed out that we should make it easier to migrate from the old APIs to the newer ones; we already have some unfinished migration documents. From release 10 onwards they should be enough to move to the next release without having to dig into git in order to figure out what to do.

Blueprints

New features get some spotlight and preliminary documentation as well. This makes it easier to follow the development process, since most of the time sifting through IRC and the mailing list is the only way to get the whole picture.

(In)formal Specifications

Some parts of our codebase implement not-really-documented formats. I’ll move my notes about NUT to the wiki soon, and we’ll probably try to extract a slightly more human-readable document from the libvpx sources.

Help and sponsorship are more than welcome, since it will be a major chore.

Sprint

Real-life meetings seem quite productive, so this year we’ll have some short meetings focused on fixing some of the long-standing annoyances. The first one is possibly in 2 weeks; more details about it and the following ones will appear soon.

Conferences

We plan to be present at other conferences around the world (a list will appear later); the next one for me will probably be LinuxTag.

If you are an organizer and you want somebody to participate, we’ll be glad to talk about multimedia and open source.

Chocolate

This Fosdem, instead of t-shirts, I brought some special chocolate.

It is also for sale now.

Libav gets a cut of the sales, so if you want to try it we’ll be grateful as well. =)

Fosdem!

About 26h before Fosdem (yes, the beer event is the glorious start of the conference)!

What

I’ll be around bearing chocolate for friends and fellow members of the communities I belong to (no beer this time, sorry guys!); hopefully we’ll find some space to discuss anything you’d like to discuss with me.

Topics

  • Libav (we should also have a room to discuss the planned Libav10 and Libav11 releases some more)
  • VLC (probably most of the discussion will happen during the meeting, where Felix will stab me for not having done hwaccel2)
  • Gentoo/Sabayon (Complaints and rants welcome only during the beer event)
  • Any of my other many projects (contributions welcome btw!)
  • Anything else.

Where

There might be a room to discuss Libav10 for about an hour on Sunday; I’ll be around the Gentoo BoF on Saturday and obviously I’ll be around attending some of the events.

See you there! (hopefully)

Welcome Kvazaar HEVC encoder!

I stumbled upon this promising encoder yesterday.

The purpose of this academic open-source project is to develop a video encoder for the emerging High Efficiency Video Coding (HEVC) standard. This Kvazaar HEVC encoder is being developed towards the following goals:

  1. Coding efficiency close to HEVC reference encoder (HM)
  2. Modular encoder structure to simplify its data flow modeling
  3. Efficient support for different parallelization approaches
  4. Easy portability to different platforms
  5. Optimized encoding speed without sacrificing its coding efficiency, modularity, or portability
  6. Reduced computation and memory resources without sacrificing its coding efficiency, modularity, or portability
  7. Excellent software readability and implementation documentation

Achieving these objectives requires an encoder with design decisions that make this open-source encoder unique:

  1. The encoder is developed from scratch (HM is used as a reference)
  2. The implementation language is platform-independent C

The source code of the Kvazaar HEVC encoder, its latest version, and the issue tracker are available on GitHub (https://github.com/ultravideo) under the GNU GPLv2 license. The features of the latest encoder version and upcoming milestones are listed in the feature roadmap below. Currently, the supported platforms are x86 and x64 on Windows and Linux, but we might add other platforms in the future.

Statistics of the code repository can be found on Ohloh.

New contributors

New ambitious developers from academia, industry, and other sectors are warmly invited to make contributions, report bugs, and give feedback. We do not ask contributors to give up copyright to their work. Active contributors will also be considered when filling open positions in Ultra Video group.

You may contact us by email (ultravideo at cs dot tut dot fi), GitHub, or via IRC at #kvazaar_hevc in FreeNode IRC network.

It looks promising: the code is mostly clean (even if I’m not fond of 2-space indentation) and from the early interactions on IRC the people seem nice.

They use git and they code in plain C + YASM to boot (I decided to let others look at x265, since they use Mercurial, which I dislike, and C++, which I loathe, as do quite a number of other people I happen to know).

The project is at an early stage, but they have a good roadmap and hopefully they’ll mold their API so it gets supported by other projects (why is x264 widely used and libvpx a little less? Because the codecs implemented are less good? Not at all! Just because the API is much worse to use!).

Btw: Dibs on libav integration!

Security & Fuzzing

New year, new bugs and, since apparently lots of people are interested,
new posts about security.

The main topic is obviously Libav and the bugs we are fixing here and there, thanks to Mateusz and Gynvael kindly providing us with fuzzed samples.

Fuzz testing

Many programs expect a certain input and provide a certain output; most of the time you miss a corner case and it leads to unexpected situations. Fuzzing is one of the most effective black-box testing techniques, and in the case of complex input (such as multimedia protocols and codecs) it does wonders at spotting unhandled or mishandled conditions.

We are keeping a page about the tools useful to track bugs, since, unluckily for us, most bugs are security issues.

Fuzz testing is tersely explained there and the tools useful for the task are all listed. We had a Google Code-In mostly devoted to spotting crashes using zzuf.

Sadly, fuzzing with zzuf is time consuming and requires a decent amount of CPU, since even AddressSanitizer is relatively slow and in many cases you want to use valgrind: memory leaks are a security issue as well.
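
For the curious, the core idea behind this kind of dumb fuzzing fits in a few lines of C. This is just an illustrative sketch, not one of the tools mentioned above; the file handling and the number of flipped bits are arbitrary.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Read a sample, flip a few random bits, write a mutated copy to feed
     * to the decoder under valgrind or asan. */
    int main(int argc, char **argv)
    {
        FILE *in, *out;
        unsigned char *buf;
        long size;
        int i;

        if (argc != 3) {
            fprintf(stderr, "usage: %s <input> <output>\n", argv[0]);
            return 1;
        }

        in = fopen(argv[1], "rb");
        if (!in)
            return 1;
        fseek(in, 0, SEEK_END);
        size = ftell(in);
        rewind(in);
        if (size <= 0)
            return 1;

        buf = malloc(size);
        if (!buf || fread(buf, 1, size, in) != (size_t)size)
            return 1;
        fclose(in);

        srand(time(NULL));
        for (i = 0; i < 16; i++)                  /* flip 16 random bits */
            buf[rand() % size] ^= 1u << (rand() % 8);

        out = fopen(argv[2], "wb");
        if (!out)
            return 1;
        fwrite(buf, 1, size, out);
        fclose(out);
        free(buf);
        return 0;
    }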

Google

Google uses Libavformat and Libavcodec in many projects, and last year they started to share with us the results of their huge fuzzing system; what would probably take me years takes them a few days at most.

Before that, outside of Google developers, only Michael Niedermayer had access to the samples, and since we usually do not agree on how to solve problems, it had been quite hard to figure out the real issue from his patches and fix it for real.

Now things are quite a bit better and I had the chance to get some feedback about new code (such as vp9) before having it land in the main tree. We could spot a couple of issues during review, and with zzuf we could spot some more; Google’s fuzzing found twice as many. That gives you an idea of how useful this kind of activity is for our code. Thinking about corner cases in complex code is HARD.

Fixing security issues

Initially it was painful: you get a huge amount of samples and you have to run them through avconv instrumented with valgrind or similar tools (drmemory, asan, msan), then figure out where the problem is and hopefully fix it. Doing it manually can be tedious.

You need some form of coordination so people can work on different issues and not stomp on each other’s feet.

Automation

Currently our setup is a bit more organized: we have a central place with some nice scripts to triage and categorize the samples and produce a nice report with a per-codec and per-format breakdown. Martin, Diego, Anton, other interested parties and I have access to the samples and the scripts, so we can work together and the time-consuming, boring half of the job is done once for everybody. Probably soon I’ll extend it to be even smarter and add some bug-aggregation heuristics.

Integration

Valgrind integrates with gdb quite well; AddressSanitizer is more or less on the same level, with a few lines of .gdbinit to make the whole experience smooth. Currently I’m mostly using asan with gcc-4.8* and I’m looking forward to new drmemory releases, since it seems quite promising.

Valgrind is used mostly to make sure no memory leaks have been left around once all the asan-reported issues are fixed.

Fixes and Reviews

One of the annoying problems in fixing security issues is that you first see where the code breaks, but the reason why may be FAR from there.

Usually you might rush and just fix the damn bug where it breaks; that can be as wrong as using duct tape to plug a hole in the ceiling: sure, it won’t drip on you from there, but if you don’t follow the plumbing or check the roof, you never know what will happen next.

You might have already spent an hour sifting through gdb and error logs without spotting a better place to fix it, and since it isn’t your day job you devote only just enough time to it.

This is where reviews usually shine: having more than one pair of (tired) eyes helps a lot, and getting people to take over from where you left off often turns something passable into something quite good.

Releases

One of the nice perks of the current automated system is that it is quite easy to check whether the problems are also present in our currently supported release branches. Backporting patches is yet another time-consuming task, and Reinhard, our release manager, couldn’t do that for the past point releases, so Sean, Martin and I took over in the interim.

So far

The total number of samples received is over 1600, of which 240+ are new samples triggering issues in hevc (luckily, patches fixing all of them are already under review).

There are fewer than 300 samples still waiting for a fix; lots of them involve some of the ugliest and oldest lines of our codebase.

Luckily I’m not alone, and hopefully in the process we’ll also freshen up code untouched for ages and see how naive we were when we wrote it.

Trades

The code must be properly formatted and nice to read. You must have test cases.

  • Always start with simple, even dumb code. (e.g. write it down as it is written in the spec, no matter how stupid or inefficient it looks)
  • Trade simplicity/clarity for speed, if the gain is big. (e.g. move from the letter-of-the-spec code to something actually faster; sometimes clarity and simplicity gain from it too)
  • Trade space for speed, if the gain is big enough; see the sketch below. (e.g. a lookup table is usually a nice solution, sometimes to a different problem)
  • Trade precision for speed if you must, but always leave a code path that isn’t imprecise.
  • Never trade portability, but you may trade slower generic code for faster specialized code on all the platforms implementing it. (e.g. implement in asm a function that was inlined before: the plain C code will be slower since you pay a call overhead, while the asm-optimized version may be 16-fold faster than C)

As Kostya pointed out, sometimes you start with a binary specification, so point 1 is moot.
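
To illustrate the space-for-speed trade mentioned above (a toy sketch, not code from any particular codec), here is bit reversal of a byte done with a 256-byte table instead of a loop:

    #include <stdint.h>

    /* Dumb, spec-like version: clear and obviously correct. */
    uint8_t reverse8_slow(uint8_t x)
    {
        uint8_t r = 0;
        int i;
        for (i = 0; i < 8; i++)
            r |= ((x >> i) & 1) << (7 - i);
        return r;
    }

    /* Lookup-table version: 256 bytes of memory buy a single load per call. */
    static uint8_t reverse_tab[256];

    void init_reverse_tab(void)
    {
        int i;
        for (i = 0; i < 256; i++)
            reverse_tab[i] = reverse8_slow((uint8_t)i);
    }

    uint8_t reverse8_fast(uint8_t x)
    {
        return reverse_tab[x];
    }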

Security fun – what’s security?

Since I eventually got access to a batch of broken samples from Google, I spent the past months volunteering time to fix the issues uncovered in Libav (the whole set is over 3000 samples); you probably noticed by the number of releases.

You can consider pretty much any kind of bug a “security” issue:

  • A segfault is a security issue.
  • A read or write from unallocated memory is a security issue.
  • A triggered assert IS a security issue, not a way to fix one.
  • A memory leak is a security issue, and in most cases the worst kind.

Your security concern is not the same as mine!

Libav has a large attack surface, since it has decoders for every kind of multimedia format; it is a library used in many different situations, and what is a security concern for somebody is just a nuisance for somebody else.

If VLC breaks when you are trying to decode some incomplete movie you got from BitTorrent because a 0 or a 1 got misinterpreted, it is not such an issue. If your transcoding pipeline gets stalled because the same movie got uploaded to Youtube, somebody might be screaming at the idiot who forgot to bound-check that array deep in the code.

If some buffer overflow could lead to code execution, most of the people using avconv to mass transcode won’t care that much: the process is fully sandboxed and they expect that. The people making players are the ones mostly afraid of a buffer overflow being exploitable, since their users would feel the pain.

So for us, Libav developers, no bug is more or less important than another. We have to fix all of them, and possibly fix them correctly the first time (if you move from a buffer overflow to an assert, you just trade possible code execution for a denial of service). That takes time and resources.

The source of all pain

Most of the bugs are naive assumptions and oversights piling up over the years; the most common are the following:

Off by one
You loop over something and read one element too many.
Corner cases
What happens when your frame has dimension 0? What if it is as large as the maximum representable value?
Faulty assumption
If you think that malloc cannot fail, think again; if you think realloc will never return NULL, so you can forget about the old pointer and just overwrite it, please DO think again. It can happen, even on Linux (see the sketch after this list).
Sloppy coding practices
Some bad practices tend to stick, and bad patterns such as not forwarding return values will lead to problems later, usually making the process of tracking back to the root issue HARD.
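
Here is a minimal sketch of the realloc faulty assumption above, using plain libc calls for illustration (inside Libav you would go through the av_* allocation helpers instead):

    #include <stdint.h>
    #include <stdlib.h>

    /* Wrong: if realloc() fails the old buffer leaks and *buf is gone. */
    int grow_wrong(uint8_t **buf, size_t new_size)
    {
        *buf = realloc(*buf, new_size);
        return *buf ? 0 : -1;
    }

    /* Right: keep the old pointer until realloc() is known to have succeeded. */
    int grow_right(uint8_t **buf, size_t new_size)
    {
        uint8_t *tmp = realloc(*buf, new_size);
        if (!tmp)
            return -1; /* *buf still points at valid memory, nothing leaked */
        *buf = tmp;
        return 0;
    }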

Even if you are writing something non-critical, such as a fire-and-forget command-line app, you should be a little careful; if you plan to write something more involved, such as a library that could be used in MANY ways by LOTS of people, you MUST be careful.

Tools of the trade

Tracking bugs is usually annoying and time-consuming; if they are crashes they are at least apparent, but memory leaks and faulty reads/writes may not trigger an apparent crash, making the whole thing more daunting. Luckily there are good tools to help you.

Valgrind

The whole toolset is really valuable; massif and memcheck are the best ways to figure out where the memory went and whose fault it is.

AddressSanitizer

Asan is a boon since it is much faster than memcheck, but also a pain since you have to instrument your code by using a certain compiler (clang, or gcc 4.8 and later) and certain flags (-fsanitize=address). You can leverage it from gdb so you can inspect memory while debugging; that has been a huge timesaver most of the time. You can in theory do that with memcheck as well by adding some lines of code; I’ll probably provide snippets later.
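
As a toy example (not taken from the Libav codebase), this is the kind of off-by-one heap overflow asan reports at the offending line when the file is built with -fsanitize=address:

    /* Build with: gcc -g -fsanitize=address offbyone.c */
    #include <stdlib.h>

    int main(void)
    {
        int *v = calloc(8, sizeof(*v));
        int i, sum = 0;

        if (!v)
            return 1;
        for (i = 0; i <= 8; i++)   /* '<=' reads one element past the end */
            sum += v[i];           /* asan aborts here with heap-buffer-overflow */
        free(v);
        return sum;
    }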

drmemory

If your problem is on something that is neither Linux nor Mac, you cannot use Asan or Valgrind; the new and coming tool to save you is drmemory. It is the youngest of the set and you can see how green it is from the lack of best practices… so no source releases, a naive build system and a bad version control system. If you try to build it, better use the latest svn and hope.

Yet if you have to figure out what’s wrong on Windows, it is already a huge boon. People with time and will could try to help them fix their build system and convince them to move to git.

Automation

Never, ever, ever start hunting this kind of bug without automating as much as possible. Currently I have written a fair number of lines of bash to automatically triage and check the samples, get the code to build in at least 2-3 flavours (clang and gcc with asan, vanilla gcc for valgrind) and eventually generate additional fate targets, so I can run make fate-sec -C .gcc-asan and see whether something that was fixed broke while we weren’t looking.

In closing

I still have 200 samples to fix and hopefully I’ll rally more people into helping. If you aren’t running routine tests and making sure your projects are at least valgrind-clean (the easiest check to do), you should.

If you are writing code that is a little more critical, you had better use all the tools I briefly described and fix what you overlooked.

The case of defaults (Libav vs FFmpeg)

I tried not to get into this discussion, mostly because it would degenerate into a mud-slinging contest.

Alexis did not take well the fact that Tomáš changed the default provider for libavcodec and related libraries.

Before we start, one point:

I am as biased as Alexis, as we are both involved in the projects themselves. The same goes for Diego, but it does not apply to Tomáš: he is just a downstream by transition (LibreOffice uses GStreamer, which uses *only* Libav).

Now the question at hand: which should be the default? FFmpeg or Libav?

How to decide?

- Libav has a strict review policy: every patch goes through review and has to be polished enough before landing in the tree.

- FFmpeg merges daily what has been done in Libav and has a more lax approach to what goes in the tree and how.

- Libav has fate running on most architectures, many of them running Gentoo, usually on real hardware.

- FFmpeg has an older fate with fewer architectures, many of them qemu emulations.

- Libav defines the API.

- FFmpeg follows, adding bits here and there to “diversify”.

- Libav has a major release per season, and minor releases when needed.

- FFmpeg releases a lot, touting a lot of *Security*Fixes* (usually old code from ancient times eventually fixed).

- Libav does care about crashes and fixes them, but does not claim every crash is a security issue.

- FFmpeg goes by leaps and bounds to add MORE features, no matter what (including picking wip branches from my personal github and merging them before they are ready…).

- Libav is more careful, thus having fewer fringe features and focusing more on polishing before landing new stuff.

So if you are a downstream you can pick what you want, but if you want something working everywhere you should target Libav.

If you are missing a feature from Libav that is in FFmpeg, feel free to point me to it and I’ll try my best to get it to you.