Pre-made Builds

I have been maintaining the win32 builds for years. About 2 weeks ago I had the n-th failure with the box hosting them, and since I was busy with some work stuff I could not fix it till this week.

Top-IX graciously provided a better-sized system and I’m almost done reconfiguring it. Sadly setting up the host and reconfiguring it is quite a time-consuming task and not many (if any) show appreciation for it. (Please do cheer from time to time for the other people taking care of our other pieces of infrastructure.)

The new host is builds.libav.org, since it will host builds that are slightly more annoying to get. It will probably start with just builds for the releases and then, if there is interest (and volunteers), it will be extended to nightly builds.

Changes

More Platforms

The first and most apparent change is that we’ll try to cover more platforms: soon I’ll start baking some Android builds and then hopefully Apple-oriented stuff will appear in some form.

Building Libav in itself is quite simple and hopefully documented well enough, and our build system is quite easy to use for cross building.

Getting some of the external dependencies built, on the other hand, is quite daunting. gnutls/nettle and x265 are currently missing since their build systems are terrible for cross compiling and my spare time didn’t allow me to get that done within the deadline I set for myself.

Possibly in a few weeks we will get at least the framework packaging for iOS and Android. Volunteers to help are more than welcome.

New theme

The new theme is due to switching to nginx, so now, thanks to fancy_index, it is arguably nicer.

More builds

The original builds tried to include almost everything that was deemed useful, and thus the whole thing was distributed under the GPL. Since I noticed some people might not really need all of that or might just want less functionality, I added an LGPL-distributable set. If somebody would find a version without any dependencies useful, please drop me a line.

Thanks

Thanks again to Top-IX for the support and to Gabriele in particular for setting up the new system while he was at a conference in London.

Thanks to Sean and Reinhart for helping with the continuous integration system.

Enjoy the new builds!

Post Scriptum: tokens of appreciation in the form of drinks or just a thank you are welcome: writing code is fun, doing sysadmin tasks is not.

Making a new demuxer

Maxim asked me to check a stream from a security camera that he could not decode with avconv without forcing the format to mjpeg.

Mysterious stream

Since it is served over HTTP, the first step was checking the MIME type. Time to use curl -I.

# curl -I "http://host/some.cgi?user=admin&pwd=pwd" | grep Content-Type

Interestingly enough, it is multipart/x-mixed-replace:

Content-Type: multipart/x-mixed-replace;boundary=object-ipcamera

Basically the cgi sends jpeg images one after the other; we even have an (old and ugly) muxer for it!
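
To give an idea, the body of the reply looks roughly like this (boundary taken from the Content-Type above, lengths purely illustrative):

--object-ipcamera
Content-Type: image/jpeg
Content-Length: 32768

<jpeg data>
--object-ipcamera
Content-Type: image/jpeg
Content-Length: 32768

<jpeg data>
...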

Time to write a demuxer.

Libav demuxers

We already have some documentation on how to write a demuxer, but it is not complete so this blogpost will provide an example.

Basics

Libav code is quite object oriented: every component is a C structure containing a description of it and pointers to a set of functions, and there are fixed patterns that make it easier to fit new code in.

Every major library has an all${components}.c in which the components are registered to be used. In our case we talk about libavformat so we have allformats.c.

The components are built according to the CONFIG_${name}_${component} variables generated by configure. The actual code resides in the component’s library directory (libavformat in our case) with a naming pattern such as ${name}.c, or ${name}dec.c/${name}enc.c if both demuxer and muxer are available.

The code can be split into multiple files if it starts growing beyond 500-1000 LOCs.

Registration

We have some REGISTER_ macros that abstract the logic to make every component selectable at configure time, since in Libav you can enable/disable every muxer, demuxer, codec and I/O protocol from configure.

We already had a muxer for the format:

    REGISTER_MUXER   (MPJPEG,           mpjpeg);

Now we register both in a single line:

    REGISTER_MUXDEMUX(MPJPEG,           mpjpeg);

The all${components} files are parsed by configure to generate the appropriate Makefile and C definitions. On the next run we’ll get a new
CONFIG_MPJPEG_DEMUXER variable in config.mak and config.h.
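
For reference, the generated definitions look roughly like this (a sketch; the exact output is produced by configure):

# config.mak
CONFIG_MPJPEG_DEMUXER=yes

/* config.h */
#define CONFIG_MPJPEG_DEMUXER 1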

Now we can add to libavformat/Makefile a line like

OBJS-$(CONFIG_MPJPEG_DEMUXER)            += mpjpegdec.o

and put our mpjpegdec.c in libavformat and we are ready to write some code!

Demuxer structure

Usually I start putting down a skeleton file with the bare minimum:

The AVInputFormat and the core _read_probe, _read_header and _read_packet callbacks.

#include "avformat.h"

static int ${name}_read_probe(AVProbeData *p)
{
    return 0;
}

static int ${name}_read_header(AVFormatContext *s)
{
    return AVERROR(ENOSYS);
}

static int ${name}_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    return AVERROR(ENOSYS);
}

AVInputFormat ff_${name}_demuxer = {
    .name           = "${name}",
    .long_name      = NULL_IF_CONFIG_SMALL("Longer ${name} description"),
    .read_probe     = ${name}_read_probe,
    .read_header    = ${name}_read_header,
    .read_packet    = ${name}_read_packet,
};

I make all the functions return a no-op value.

_read_probe

This function will be called by the av_probe_input functions; it receives some probe information in the form of a buffer. The function returns a score between 0 and 100; AVPROBE_SCORE_MAX, AVPROBE_SCORE_MIME and AVPROBE_SCORE_EXTENSION are provided to make the expected confidence more evident. 0 means that we are sure that the probed stream is not parsable by this demuxer.
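
As an illustration (not the mpjpeg case, which is shown further below), a format starting with a fixed four-byte magic could be probed like this; the magic value is hypothetical:

static int ${name}_read_probe(AVProbeData *p)
{
    /* hypothetical format starting with the fixed 4-byte magic "MAGC" */
    if (p->buf_size >= 4 && !memcmp(p->buf, "MAGC", 4))
        return AVPROBE_SCORE_MAX;

    return 0;
}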

_read_header

This function will be called by avformat_open_input. It reads the initial format information (e.g. number and kind of streams) when available; in this function the initial set of streams should be mapped with avformat_new_stream. It must return 0 on success. The skeleton is made to return ENOSYS so it can be run and just exit cleanly.

_read_packet

This function will be called by av_read_frame. It should return an AVPacket containing the demuxed data as contained in the bytestream. It will be parsed and collated (or split) into a frame’s worth of data by the optional parsers. It must return 0 on success. The skeleton again returns ENOSYS.

Implementation

Now let’s implement the mpjpeg support! The format in itself is quite simple:
– a boundary line starting with --
– a Content-Type line stating image/jpeg.
– a Content-Length line with the actual buffer length.
– the jpeg data

Probe function

Basically we just want to check that the Content-Type is what we expect, so we
go over the lines (\r\n-separated) and check if there is a Content-Type tag with the value image/jpeg.

static int get_line(AVIOContext *pb, char *line, int line_size)
{
    int i, ch;
    char *q = line;

    for (i = 0; !pb->eof_reached; i++) {
        ch = avio_r8(pb);
        if (ch == '\n') {
            if (q > line && q[-1] == '\r')
                q--;
            *q = '\0';

            return 0;
        } else {
            if ((q - line) < line_size - 1)
                *q++ = ch;
        }
    }

    if (pb->error)
        return pb->error;
    return AVERROR_EOF;
}

static int split_tag_value(char **tag, char **value, char *line)
{
    char *p = line;

    while (*p != '\0' && *p != ':')
        p++;
    if (*p != ':')
        return AVERROR_INVALIDDATA;

    *p   = '\0';
    *tag = line;

    p++;

    while (av_isspace(*p))
        p++;

    *value = p;

    return 0;
}

static int check_content_type(char *line)
{
    char *tag, *value;
    int ret = split_tag_value(&tag, &value, line);

    if (ret < 0)
        return ret;

    if (av_strcasecmp(tag, "Content-type") ||
        av_strcasecmp(value, "image/jpeg"))
        return AVERROR_INVALIDDATA;

    return 0;
}

static int mpjpeg_read_probe(AVProbeData *p)
{
    AVIOContext *pb;
    char line[128] = { 0 };
    int ret;

    pb = avio_alloc_context(p->buf, p->buf_size, 0, NULL, NULL, NULL, NULL);
    if (!pb)
        return AVERROR(ENOMEM);

    while (!pb->eof_reached) {
        ret = get_line(pb, line, sizeof(line));
        if (ret < 0)
            break;

        ret = check_content_type(line);
        if (!ret) {
            /* only the context is ours, the probe buffer belongs to the caller */
            av_free(pb);
            return AVPROBE_SCORE_MAX;
        }
    }

    av_free(pb);

    return 0;
}

Here we are using avio to be able to reuse get_line later.

Reading the header

The format is pretty much header-less: for now we just check for the boundary and
set up the minimum amount of information regarding the stream: media type, codec id and frame rate. The boundary, per the specification, is less than 70 characters long with -- as initial marker.

static int mpjpeg_read_header(AVFormatContext *s)
{
    AVStream *st;
    char boundary[70 + 2 + 1];
    int ret;

    ret = get_line(s->pb, boundary, sizeof(boundary));
    if (ret < 0)
        return ret;

    if (strncmp(boundary, "--", 2))
        return AVERROR_INVALIDDATA;

    st = avformat_new_stream(s, NULL);
    if (!st)
        return AVERROR(ENOMEM);

    st->codec->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codec->codec_id   = AV_CODEC_ID_MJPEG;

    avpriv_set_pts_info(st, 60, 1, 25);

    return 0;
}

Reading packets

Even this function is quite simple; please note that AVFormatContext provides an
AVIOContext. The bulk of the function boils down to reading the size of the frame,
allocating a packet using av_new_packet and filling it using avio_read.

static int parse_content_length(char *line)
{
    char *tag, *value;
    int ret = split_tag_value(&tag, &value, line);
    long int val;

    if (ret < 0)
        return ret;

    if (av_strcasecmp(tag, "Content-Length"))
        return AVERROR_INVALIDDATA;

    val = strtol(value, NULL, 10);
    if (val == LONG_MIN || val == LONG_MAX)
        return AVERROR(errno);
    if (val > INT_MAX)
        return AVERROR(ERANGE);
    return val;
}

static int mpjpeg_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    char line[128];
    int ret, size;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        return ret;

    ret = check_content_type(line);
    if (ret < 0)
        return ret;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        return ret;

    size = parse_content_length(line);
    if (size < 0)
        return size;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        return ret; /* nothing allocated yet, nothing to free */

    ret = av_new_packet(pkt, size);
    if (ret < 0)
        return ret;

    ret = avio_read(s->pb, pkt->data, size);
    if (ret < 0)
        goto fail;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        goto fail;

    // Consume the boundary marker
    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        goto fail;

    return ret;

fail:
    av_free_packet(pkt);
    return ret;
}

What next

For now I walked you through the fundamentals; hopefully next week I’ll show you some additional features I’ll need to implement in this simple demuxer to make it land in Libav: AVOptions to make it possible to override the framerate and some additional code to be able to do without Content-Length and just use the boundary line.

PS: WordPress support for syntax highlighting is quite subpar; if somebody knows a blog engine that can use pygments or equivalent please tell me and I’ll switch to it.

Fix ALL the BUGS!

Vittorio started (with some help from me) to fix all the issues pointed out by Coverity.

Static analysis

Coverity (and scan-build) are quite useful to spot mistakes even if their false-positive ratio tends to be quite high. Even the false positives are usually interesting since they spot code that is unnecessarily convoluted. The code should be as simple as possible, but not simpler.

The basic idea behind those tools is to try to follow the code paths while compiling them and spot what could go wrong (e.g. you are feeding a NULL to a function that would dereference it).

The problems with this approach are usually two: false positives due to the limited scope of the analyzer and false negatives due to shadowing.

False Positives

Coverity might assume certain inputs are valid even if they are made impossible by some initial checks up in the codeflow.

In those cases you should spend enough time to make sure Coverity is not right and those faulty inputs aren’t slipping in somewhere. NEVER just add some checks to the code it points at as a first move: you might hide real issues (e.g. if Coverity complains about an uninitialized variable, do not just initialize it to something; check why it happens and whether the logic behind it is wrong).

If Coverity is confused, your compiler is confused as well and will produce suboptimal executables.
Properly fixing those issues can result in useful speedups. Simpler code is usually faster.
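
A toy example of the pattern (not actual Libav code): blindly adding an initializer silences the report, while the real fix is handling the missing case.

int block_size(int mode)
{
    int size; /* "may be used uninitialized" according to the analyzer */

    if (mode == 0)
        size = 16;
    else if (mode == 1)
        size = 32;
    /* mode == 2 can happen too: the bug is the missing branch,
     * not the missing "= 0" initializer. */

    return size;
}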

Ever increasing issue count

While fixing issues using those tools you might notice to your surprise that every time you fix something, something new appears out of thin air.

This is not magic: the static analyzers usually put a limit on how deep they go, depending on the issues already present and how much time has been spent already.

That surprise has been fun since apparently some of the time limit is per compilation unit, so splitting large files into smaller chunks gets us more results (while speeding up the build process thanks to better parallelism).

Usually fixing a high-impact issue gets us 3 to 5 new small-impact issues.

I like solving puzzles so I do not mind having more fun, sadly I did not have much spare time to play this game lately.

Merge ALL the FIXES

Properly fixing all the issues is a lofty goal and, as usual, having a patch is just 1/2 of the work. Usually two sets of eyes work better than one and an additional brain with different expertise can prevent a good chunk of mistakes. The review process is the other, sometimes neglected, half of solving issues.

So far 100+ patches have piled up over the past weeks and now they are being sent in small batches to ease the work of reviewing. (I have something brewing to make reviewing simpler, as you might know.)

During the review probably about 1/10 of the patches will be rejected and the relative Coverity report updated with enough information to explain why it is a false positive or why the dangerous or strange behaviour pointed at is intentional.

The fixes will land in the next point releases for our 4 maintained major releases: 0.8, 9, 10 and 11. Many thanks to the volunteers that spend their free time keeping all the branches up to date!

Tracking patches

You need good tools to do a good job.

Even the best tool in the hand of a novice is a club.

I’m quite fond of improving the tools I use, and that’s why I started getting involved in Gentoo, Libav, VLC and plenty of other projects.

I already discussed lldb and asan/valgrind; now my current focus is on patch trackers, in part due to the current effort to improve the Libav one.

Contributors

Before talking about patches and their tracking I’ll digress a little on who produces them: the mythical Contributor. Without contributions an opensource project would not exist.

You might have recurring contributions and unique/occasional contributions. Both are quite important.
In general you should try to turn occasional contributors into recurring contributors.

A recurring contributor can accept spending some additional time to set up the environment to actually provide their contribution back to the community; a sporadic contributor can easily be put off if the effort required to send the patch is larger than writing the patch itself.

The project maintainers should make the life of contributors as simple as possible.

Patches and Revision Control

Lately most opensource projects saw the light and started to use decentralized revision control systems. Thanks to GitHub and many others, the concept of issuing pull requests is becoming part of our culture, and with it hopefully comes a wider acceptance of the fact that code should be reviewed before it is merged.

Pull Request

In a decentralized development scenario new code is usually developed in topic branches, routinely rebased against master until the set is ready; then the set of changes (called a series or patchset) is reviewed and, after some rounds of fixes, eventually merged. Thanks to Bitbucket we now have forking, spooning and knifing as part of the jargon.

The review (and merge) step, quite properly, is called knifing (or stabbing): you have to dice, slice and polish the code before merging it.

Reviewing code

During a review bugs are usually spotted and ways to improve the code are suggested. Patches might be split or merged together and the series reworked and improved a lot.

The process is usually time consuming, even more so for an organization made of volunteers: writing code is fun, addressing the issues spotted is not so much, and reviewing someone else’s code is even less so.

Sadly it is a necessary annoyance, since otherwise the errors (and horrors) that would slip through would be much bigger and probably much more numerous. If you do not care about code quality and what you are writing is not used by other people you can probably skip it; otherwise you should feel somewhat concerned that what you wrote might turn some people’s lives into a sea of pain. (On the other hand some gratitude for such a daunting effort is usually welcome.)

Pull request management

The old-fashioned way to issue a pull request is either to poke somebody, telling them that your branch is ready for merge, or to just make a set of patches and mail them to whoever is in charge of integrating code into the main branch.

git provides a nifty tool to do that, called git send-email, and it is quite common to send sets of patches (usually called series) to a mailing list. You get feedback by email and you can update the set using the --in-reply-to option and the message id.
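
For example, sending the last three commits as a v2 threaded under the original submission could look like this (the list address and the message id obviously depend on the project and the thread):

git send-email --to libav-devel@libav.org \
               --in-reply-to "<original-message-id>" \
               --subject-prefix "PATCH v2" -3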

Platforms such as GitHub and similar are more web-centric and require you to use the web interface to issue and review the request. No additional tools are required besides git and a browser.

Gerrit and ReviewBoard provide custom scripts to set up ephemeral branches in some staging area, then the review process requires a browser again. Every commit gets some tool-specific metadata to ease tracking changes across series revisions. This approach is the most setup-intensive.

Pro and cons

Mailing list approach

Testing patches from the mailing list is quite simple thanks to git am, and if the reply-to field is used properly updates appear sorted in a sensible way.
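
Applying a series saved from the mail client boils down to a single command (the mbox file name is just an example):

git am --3way series.mbox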

This method is the simplest for people used to having the email client and a console always open (if they are using a well-configured emacs or vim they literally do not move away from the editor).

On the other hand, people using webmail or a basic email client might find the approach more cumbersome than a web-based one.

If your only method to track contributions is just a mailing list, it gets quite easy to forget what the status of a set is. Patches can be neglected and even the people who wrote them might forget about them for a long time.

Patchwork approach

Patchwork tracks the patches that hit a mailing list and tries to automatically figure out whether they eventually get merged.

It is quite basic: it provides a web interface to check the status and a means to update the patch status. The review must happen on the mailing list and there is no concept of series.

Basic as it is, it works as a reminder about pending patches, but it tends to get cluttered easily and keeping it clean requires some effort.

Github approach

The web interface makes it much easier to spot what is pending and what its status is; people used to having everything in the browser (Chrome and Mozilla can be made to work as a decent IDE lately) might like it much better.

Reviewing small series or single patches is usually nicer but the current UIs do not scale for larger (5+) patchsets.

People not living in a browser find it quite annoying to switch context, and it requires additional effort to contribute since you have to register on a website and the process of issuing a patch requires many additional steps, while the email approach just requires typing git send-email -1.

Gerrit approach

The Gerrit interfaces tend to be richer than their GitHub counterparts. That can be good or bad, since they aren’t as immediate and tend to overwhelm new contributors.

You need to make an additional effort to set up your environment since you need some custom scripts.

The series are tracked with additional precision, but for all practical purposes the usage is the same as GitHub, with an additional burden on the contributor.

Introducing plaid

Plaid is my attempt to tackle the problem. It is currently unfinished and in dire need of more hands working on it.

Its basic concept is to be as non-intrusive as possible, retaining all the pros of the simple git+email workflow, like Patchwork does.

It already provides additional features such as the ability to manage series of patches and to track updates to them. It sports a view to get a breakdown of which series require a review and which have been pending for a long time waiting for an update.

What’s pending is adding the ability to review directly in the browser, sending the review email from the web to the mailing list, and some more.

I might complete it within the year or by next spring; if you like Flask or Python, contributions are warmly welcome!

VDD14 Discussions: HWAccel2

I took part in the VideoLAN Dev Days 14 some weeks ago; sadly I have been too busy, so the posts about it will appear in scattered order and somewhat delayed.

Hardware acceleration

In multimedia, video is basically crunching numbers to get pixels or crunching pixels to get numbers. Most of the operations are quite time consuming on a general-purpose CPU and orders of magnitude faster if done using a DSP or hardware designed for that purpose.

Availability

Most of the commonly used systems have video decoding and encoding capabilities either embedded in the GPU or in separate hardware. Leveraging them spares lots of CPU cycles, and lots of battery if we are thinking about mobile.

Capabilities

Specialized hardware usually has the issue of being inflexible, and that clashes with the fact that most codecs evolve quite quickly, with additional profiles to extend their capabilities, support different color spaces, use additional encoding strategies and such. Software decoders and encoders are still needed, and needed badly.

Hardware acceleration support in Libav

HWAccel 1

The hardware acceleration support in Libav grew (like other eldritch-horror tentacular code lurking from our dark past) without much direction, addressing short-term problems and not really documenting how to use it.

As a result, all the people that dared to use it had to guess, usually ended up using internal symbols that they shouldn’t have had to use, and all in all had to spend lots of time and got plenty of grief when such internals changed.

Usage

Every backend required quite a large amount of boilerplate code to initialize the backend-specific context and to render the hardware surface wrapped in the AVFrame.

The Libav backend interface was quite vague in itself, requiring the user to override get_format and get_buffer in particular ways.

Overall, to get the whole thing working, the library user was supposed to do about 75% of the work. Not really nice, considering people use libraries to abstract complexity and avoid repetition.

Backend support

As that support was written with just slice-based decoders in mind, it expects that every backend requires the software decoder to parse the bitstream, prepare slices of the frame and feed the backend with them.

Sadly new backends appeared that take either the bitstream or full frames directly; the approach had been just to take the slices, add back the bitstream markers the backend library expects and be done with it.

Initial HWAccel 2 discussion

Last year, since the backends I wanted to support were all bitstream-oriented and did not fit the model at all, I started thinking about it and the topic got discussed a bit during VDD 13. Some people that had spent their dear time getting hwaccel1 working with their software were quite wary of radical changes, so a path of incremental improvements got more or less put down.

HWAccel 1.2

  • default functions to allocate and free the backend context, and making the struct interfacing between Libav and the backend extensible without causing breakage.
  • avconv can now use some hwaccels, providing at least an example of how to use them and a means to test without having to gut VLC or mpv to experiment.
  • better documentation for the old-style hwaccels, so at least some mistakes can be avoided (and code that happens to work by sheer luck won’t break once the faulty assumptions cease to exist).

The new VDA backend and the updated VDPAU backend are examples of it.

HWAccel 1.3

  • extend the callback system to decently fit bitstream-oriented backends.
  • provide an example of a backend directly providing normal AVFrames.

The Intel QSV backend is used as a testbed for hwaccel 1.3.

The future of HWAccel2

Another year, another meeting. We sat down again to figure out how to get closer to the end result: casual users should not have to write boilerplate code to use hwaccel and get at least some performance boost, yet power users should have full access to the underpinnings so they can get the most out of it without having to write everything from scratch.

Simplified usage, hopefully really simple

The user just needs to use AVOptions to set specific keys such as hwaccel and optionally hwaccel-device, and the library will take care of everything. The frames returned by avcodec_decode_video2 will be in normal system memory with commonly used pixel formats. No further special code will be needed.
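
A sketch of what this could look like from the application side, assuming the option names mentioned above (this is the proposed interface, not something usable today, and the backend/device values are purely illustrative):

AVCodecContext *avctx = avcodec_alloc_context3(codec);

av_opt_set(avctx, "hwaccel", "vdpau", 0);      /* pick a backend by name   */
av_opt_set(avctx, "hwaccel-device", ":0", 0);  /* optionally pick a device */

avcodec_open2(avctx, codec, NULL);

/* avcodec_decode_video2() then returns frames in system memory,
 * in a commonly used pixel format, just like a software decoder. */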

Advanced usage, now properly abstracted

All the default initialization, memory/surface allocation and such will remain overridable, with the difference that an additional callback called get_hw_surface will be introduced to completely separate the hwaccel path from the software path, and specific functions to hand over the ownership of backend contexts and surfaces will be provided.

The software fallback won’t be automagic anymore in this case; instead a specific AVERROR_INPUT_CHANGED will be returned, so it would be cleaner for the user to reset the decoder without losing the display that might be sharing the same context. This leads the way to a simpler means of supporting multiple hwaccel backends and falling back from one to the other and eventually to software decoding.

Migration path

We try our best to help people move to the new APIs.

Moving from HWAccel1 to HWAccel2 would in general result in fewer lines of code in the application; the people wanting to keep their callbacks just need to set them after avcodec_open2 and move the pixel-specific get_buffer to get_hw_surface. The presence of av_hwaccel_hand_over_frame and av_hwaccel_hand_over_context will make managing the backend-specific resources much simpler.

Expected Time of Arrival

Right now the review is on HWAccel 1.3; I hope to complete this step and add a few new backends to test how good or bad that API is before adding the other steps. HWAccel2 will probably take at least another 6 months.

Help in the form of code or just moral support is always welcome!

Outreach Program for Women

Libav participated in the summer edition of the OPW. We had three interns: Alexandra, Katerina and Nidhi.

Projects

The three interns had different starting skills so the projects picked had a different breadth and scope.

Small tasks

Everybody has to start from a simple task and they did as well. Polishing crufty code is one of the best ways to start learning how it works. In the Libav case we have plenty of spots that require extra care and usually hidden bugs get uncovered that way.

Not so small tasks

Katerina decided to do something radical from the start: she tried to use coccinelle to fix a whole class of issues in a single swoop. I’m still reviewing the patch and splitting it into smaller chunks to single out false positives. The patch itself shone a spotlight on some of the most horrible code still lingering around; hopefully we’ll get to fix those parts soon =)

Demuxer rewrite

Alexandra and Katerina showed interest in specific targeted tasks: they honed their skills by reimplementing the ASF and RealMedia demuxers respectively. They even participated in the first Libav Summer Sprint in Torino and worked together with their mentor in person.

They had to dig through the specifications and figure out why some sample files behave in unexpected ways.

They are almost there and hopefully our next release will see brand new demuxers!

Jack of all trades

Libav has plenty of crufty code that requires some love, plenty of overly long files, and lots of small quirks that should be ironed out. Libav (like any other big project) needs some refactoring here and there.

Nidhi’s task mainly focused on fixing some of those and helping others do the same by testing patches. She had to juggle many different tasks and learn about many different parts of the codebase and the toolset we use.

It might not sound as extreme as replacing ancient code with something completely new (and making it work at least as well as the former), but both kinds of tasks are fundamental to keeping the project healthy!

In closing

All the projects have been a success and we are looking forward to seeing further contributions from our new members!

PowerPC is back (and little endian)

Yesterday I fixed a PowerPC issue for the first time in ages; it is an endianness issue, and (funnily enough) it is on the little-endian flavour of the architecture.

PowerPC

I have some ties with this architecture, since my interest in it (and Altivec/VMX in particular) is what made me start contributing to MPlayer while fixing issues on Gentoo, and from there hack on the FFmpeg of the time, meet the VLC people and, after deciding to part ways with Michael Niedermayer, create Libav together with the other main contributors of FFmpeg. Quite a long way back in time.

Big endian, Little Endian

It is a bit surprising that IBM decided to use little endian (since big endian is MUCH nicer for I/O processing such as networking) but they might have their reasons.

PowerPC has traditionally been bi-endian, with the ability to switch on the fly between the two (this made having foreign-endian simulators slightly less annoying to manage), but the main endianness had always been big.

This brings us to a quite interesting problem: some if not most of the PowerPC code had been written with big-endian in mind. Luckily, since most of the code written was using C intrinsics (bless whoever made the Altivec intrinsics not as terrible as the other ones around), it won’t be that hard to recycle most of the code.
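
A trivial illustration of the kind of assumption that breaks (plain C, no Altivec needed): code that indexes the bytes of a word directly bakes one endianness in.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t v = 0x11223344;
    const uint8_t *b = (const uint8_t *)&v;

    /* prints 11 on big-endian PowerPC and 44 on the little-endian flavour */
    printf("%02x\n", b[0]);
    return 0;
}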

More will follow.

Libav Release Process

Since the release document is lacking, here are a few notes on how it works; it will be updated soon =).

Versioning

Libav has a separate version for each library provided. As usual a major version bump signifies an ABI-incompatible change and a minor version bump marks a specific feature introduction or removal.
It is made this way to let users leverage pkg-config checks to require features instead of using a compile+link check.
The APIchanges document details which version corresponds to which feature.
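
This means a downstream can require the library version that introduced a given feature directly through pkg-config; the version number below is just an example:

pkg-config --exists "libavcodec >= 55.34.1" && echo "feature available"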

The Libav global version number, e.g. 9.16, mainly provides the following information:

  • If the major number is updated, the libraries have ABI differences.
    • If the major number is even, API-incompatible changes should be expected; downstreams should follow the migration guide to update their code.
    • If the major number is odd, no API-incompatible changes happened and a simple rebuild must be enough to use the new library.
  • If the minor number is updated, enough bugfixes piled up during the month/2-weeks period and a new point release is available.

Major releases

All major releases start with a major version bump of all the libraries. This automatically enables new ABI-incompatible code and disables old deprecated code. Later, or within the same patch, the preprocessor guards and the deprecated code get removed.

Alpha

Once the major bump is committed the first alpha is tagged. Alphas live within the master branch; the codebase can still accept feature updates (e.g. small new decoders or new demuxers) but the API and ABI cannot have incompatible changes till the next one or two major releases.

Beta

The first beta tag also marks the start of the new release branch.
From this point all the bugfixes that hit master will be backported; no feature changes are accepted in the branch.

Release

The release is not different from a beta: it is still a tag in the release branch. The level of confidence that nothing breaks is much higher though.

Point releases

Point releases are bugfix-only releases and they aim to provide seamless security updates.

Since most bugs in Libav are security concerns, users should update as soon as the new release is out. We keep our continuous integration system monitoring all the release branches in addition to the master branch, to be confident that backported bugfixes do not cause unexpected issues.

Libav 11

The first beta for release 11 should appear in the next two days; please help us by testing and reporting bugs.

Releases!

Recently we made a huge effort to make a release for every supported branch (and even one that is supposed not to be supported anymore). Lots of patches fixing old bugs got backported. I hope you appreciate the dedication.

Libav 0.8.15

We made an extra effort: this branch is supposed to be closed and the code is really ancient!
I went the extra mile and had to go over the whole codebase to fix a security issue properly: you might crash if your get_buffer callback doesn’t validate the frame dimensions. That code is provided by the library user (e.g. VLC), so the solution is to wrap the get_buffer callback in a function ff_get_buffer and do the check there. For Libav 9 and following we had already done that for unrelated reasons; for Libav 0.8 I (actually we, since the first patch didn’t cover all usages) had to sift through the code and replace all the avctx->get_buffer() calls with ff_get_buffer().
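
The idea, in a very reduced sketch (not the actual Libav implementation), is to validate the dimensions in one internal place before handing over to the user callback:

/* sketch: check the frame dimensions once, then call the user-provided callback */
int ff_get_buffer(AVCodecContext *avctx, AVFrame *frame)
{
    int ret = av_image_check_size(avctx->width, avctx->height, 0, avctx);
    if (ret < 0)
        return ret;

    return avctx->get_buffer(avctx, frame);
}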

Libav 9.16

This is a standard security release. Backporting from Libav 10 might require some manual retouching, since the code got cleaned up a lot and some internals are different, but it is still less painful than backporting from 11 to 0.8.

Libav 10.3

This is quite an easy release: backporting fixes is nearly immediate since Libav 11 doesn’t have radical changes in the core internals and the cleanups can be applied to release/10.

Libav 11 alpha1

Libav 11 is a major release, API-compatible with Libav 10, which makes transitioning as smooth as possible: you automatically enjoy some under-the-hood changes that required an ABI bump (such as the input MIME support used to speed up AAC webradio startup time), and if you want you can start using the new API features (such as the avresample AVFrame API, av_packet_rescale_ts(), AVColor in AVFrame and so on).

You can help!

Libav 11 will be out within the month and help is welcome to polish it and make sure we do not have rough edges.

Update a downstream project you are using

Many downstreams are still using (and sometimes misusing) the old (Libav 9) and ancient (Libav 0.8) APIs. We started writing migration guides to help, we have contributed many patches already, and the Debian packagers did a great job taking care of their side.

Some patches are just waiting to be forwarded to the downstream or, if the package is orphaned, to your favourite distribution packagers.

Triage our bugzilla

Most of the Libav development happens on the mailing lists, and sometimes
bugs reported on Bugzilla do not get updated in a timely fashion. Triaging bugs sometimes takes a little time and helps a lot.