AVScale – part1

swscale is one of the most annoying parts of Libav; a couple of years after the initial blueprint we have something almost functional you can play with.

Colorspace conversion and Scaling

Before delving into the library architecture and the outer API, it is probably good to give an extra-quick summary of what this library is about.

Most multimedia concepts are more or less intuitive:
  • encoding is taking some data (e.g. video frames, audio samples) and compressing it by leaving out unimportant details
  • muxing is the act of storing such compressed data and timestamps so that audio and video can play back in sync
  • demuxing is getting back the compressed data with the timing information stored in the container format
  • decoding somehow inflates the data so that video frames can be rendered on screen and the audio played on the speakers

After the decoding step it would seem that all the hard work is done, but since there isn’t a single way to store video pixels or audio samples, you need to process them so they work with your output devices.

That process is usually called resampling for audio; for video we have colorspace conversion, to change the pixel information, and scaling, to change the number of pixels in the image.

Today I’ll introduce you to the new library for colorspace conversion and scaling we are working on.

AVScale

The library aims to be as simple as possible and to hide all the gory details from the user: you won’t need to make heads or tails of functions with a large number of arguments, nor of special-purpose functions.

The API itself is modelled after avresample and approaches the problem of conversion and scaling in a way quite different from swscale, following the same design of NAScale.

Everything is a Kernel

One of the key concepts of AVScale is that the conversion chain is assembled out of different components, separating the concerns.

Those components are called kernels.

The kernels can be conceptually divided in two kinds:
  • Conversion kernels, taking an input in a certain format and providing an output in another (e.g. rgb2yuv) without changing any other property.
  • Process kernels, modifying the data while keeping the format itself unchanged (e.g. scale)

This pipeline approach provides great flexibility and helps code reuse.

The most common use-cases (such as scaling without conversion or conversion without scaling) can be faster than solutions trying to merge scaling and conversion into a single step.

API

AVScale works with two kinds of structures:
  • AVPixelFormaton: a full description of the pixel format
  • AVFrame: the frame data, its dimensions and a reference to its format details (aka AVPixelFormaton)

The library will have an AVOption-based system to tune specific options (e.g. selecting the scaling algorithm).
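
Once that system is in place, tuning a context might look like the sketch below; the option name and value are purely hypothetical, only av_opt_set() itself is the existing AVOption API:

// Hypothetical option: pick the scaling algorithm through AVOptions
ret = av_opt_set(ctx, "scaler", "bilinear", 0);
if (ret < 0)
    ...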

For now only avscale_config and avscale_convert_frame are implemented.

So if the input and output are pre-determined the context can be configured like this:

AVScaleContext *ctx = avscale_alloc_context();

if (!ctx)
    ...

ret = avscale_config(ctx, out, in);
if (ret < 0)
    ...

But you can skip it and scale and/or convert from an input to an output directly like this:

AVScaleContext *ctx = avscale_alloc_context();

if (!ctx)
    ...

ret = avscale_convert_frame(ctx, out, in);
if (ret < 0)
    ...

avscale_free(&ctx);

The context gets lazily configured on the first call.

Notice that avscale_free() takes a pointer to a pointer, to make sure the context pointer does not stay dangling.

As said, the API is really simple and essential.
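
To make the snippets above a little more concrete, here is a sketch of how the destination frame could be prepared before the conversion call. Only standard AVFrame fields are used; whether the caller or the library allocates the destination buffers is an assumption on my side, since the API is still a draft:

AVFrame *out = av_frame_alloc();

// Describe the target: AVScale infers the conversion from these fields
out->width  = 1280;
out->height = 720;
out->format = AV_PIX_FMT_YUV420P;

// Assumption: the caller allocates the destination buffers
ret = av_frame_get_buffer(out, 32);
if (ret < 0)
    ...

ret = avscale_convert_frame(ctx, out, in);
if (ret < 0)
    ...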

Help welcome!

Kostya kindly provided an initial proof of concept, and Vittorio, Anton and I prepared this preview in our spare time. There is plenty left to do; if you like the idea (many kept telling us they would love a swscale replacement) we even have a fundraiser.

New AVCodec API

Another week, another API landed in the tree, and since I spent some time drafting it, I guess I should describe how to use it now that it is implemented. This is part I.

What is here now

Between theory and practice there is a bit of discussion and obviously the (lack of) time to implement, so here is what is different from what I drafted originally:

  • Function Names: push got renamed to send and pull got renamed to receive.
  • No separate functions to probe the process state: need_data and have_data are not here.
  • No codecs ported to use the new API yet, so no actual asynchronicity for now.
  • Subtitles aren’t supported yet.

New API

There are just 4 new functions replacing both audio-specific and video-specific ones:

// Decode
int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt);
int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);

// Encode
int avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame);
int avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt);

The workflow is sort of simple (a compact sketch follows the list):
– You set up the decoder or the encoder as usual.
– You feed data using the avcodec_send_* functions until you get an AVERROR(EAGAIN), which signals that the internal input buffer is full.
– You get the data back using the matching avcodec_receive_* function until you get an AVERROR(EAGAIN), signalling that the internal output buffer is empty.
– Once you are done feeding data you have to pass a NULL to signal the end of stream.
– You can keep calling the avcodec_receive_* function until you get AVERROR_EOF.
– You free the contexts as usual.

Decoding examples

Setup

The setup uses the usual avcodec_open2.

    ...

    c = avcodec_alloc_context3(codec);

    ret = avcodec_open2(c, codec, &opts);
    if (ret < 0)
        ...

Simple decoding loop

People using the old API usually have some kind of simple loop like

while (get_packet(pkt)) {
    ret = avcodec_decode_video2(c, picture, &got_picture, pkt);
    if (ret < 0) {
        ...
    }
    if (got_picture) {
        ...
    }
}

The old functions can be replaced by calling something like the following.

// The flush packet is a non-NULL packet with size 0 and data NULL
int decode(AVCodecContext *avctx, AVFrame *frame, int *got_frame, AVPacket *pkt)
{
    int ret;

    *got_frame = 0;

    if (pkt) {
        ret = avcodec_send_packet(avctx, pkt);
        // In particular, we don't expect AVERROR(EAGAIN), because we read all
        // decoded frames with avcodec_receive_frame() until done.
        if (ret < 0)
            return ret == AVERROR_EOF ? 0 : ret;
    }

    ret = avcodec_receive_frame(avctx, frame);
    if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
        return ret;
    if (ret >= 0)
        *got_frame = 1;

    return 0;
}
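
With this helper the old loop shown above needs almost no changes:

while (get_packet(pkt)) {
    ret = decode(c, picture, &got_picture, pkt);
    if (ret < 0) {
        ...
    }
    if (got_picture) {
        ...
    }
}

One way to drain at the end of the stream is to pass the flush packet once and then keep calling decode() with a NULL pkt until got_picture stays 0.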

Callback approach

Since the new API will output multiple frames in certain situations, it would be better to process them as they are produced.

// return 0 on success, negative on error
typedef int (*process_frame_cb)(void *ctx, AVFrame *frame);

int decode(AVCodecContext *avctx, AVPacket *pkt,
           process_frame_cb cb, void *priv)
{
    AVFrame *frame = av_frame_alloc();
    int ret;

    ret = avcodec_send_packet(avctx, pkt);
    // Again EAGAIN is not expected
    if (ret < 0)
        goto out;

    while (!ret) {
        ret = avcodec_receive_frame(avctx, frame);
        if (!ret)
            ret = cb(priv, frame);
    }

out:
    av_frame_free(&frame);
    if (ret == AVERROR(EAGAIN))
        return 0;
    return ret;
}
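
A minimal callback, as a sketch, could just log what was decoded (any real consumer would take its place):

static int print_frame(void *priv, AVFrame *frame)
{
    av_log(NULL, AV_LOG_INFO, "decoded a %dx%d frame in format %d\n",
           frame->width, frame->height, frame->format);
    return 0;
}

...

ret = decode(avctx, pkt, print_frame, NULL);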

Separated threads

The new API makes it sort of easy to split the workload in two separate threads.

// Assume we have a context holding a mutex (lock), a condition variable (cond) and the AVCodecContext


// Feeding loop
{
    AVPacket *pkt = NULL;

    while ((ret = get_packet(ctx, pkt)) >= 0) {
        pthread_mutex_lock(&ctx->lock);

        ret = avcodec_send_packet(avctx, pkt);
        if (!ret) {
            pthread_cond_signal(&ctx->cond);
        } else if (ret == AVERROR(EAGAIN)) {
            // Signal the draining loop
            pthread_cond_signal(&ctx->cond);
            // Wait here
            pthread_cond_wait(&ctx->cond, &ctx->lock);
        } else if (ret < 0)
            goto out;

        pthread_mutex_unlock(&ctx->lock);
    }

    pthread_mutex_lock(&ctx->lock);
    ret = avcodec_send_packet(avctx, NULL);

    pthread_cond_signal(&ctx->cond);

out:
    pthread_mutex_unlock(&ctx->lock);
    return ret;
}

// Draining loop
{
    AVFrame *frame = av_frame_alloc();

    while (!done) {
        pthread_mutex_lock(&ctx->lock);

        ret = avcodec_receive_frame(avctx, frame);
        if (!ret) {
            pthread_cond_signal(&ctx->cond);
        } else if (ret == AVERROR(EAGAIN)) {
            // Signal the feeding loop
            pthread_cond_signal(&ctx->cond);
            // Wait
            pthread_cond_wait(&ctx->cond, &ctx->lock);
        } else if (ret < 0)
            goto out;

        pthread_mutex_unlock(&ctx->lock);

        if (!ret) {
            do_something(frame);
        }
    }

out:
    pthread_mutex_unlock(&ctx->lock);
    return ret;
}

It isn’t as neat as having all this abstracted away, but it is mostly workable.

Encoding Examples

Simple encoding loop

Some compatibility with the old API can be achieved using something along the lines of:

int encode(AVCodecContext *avctx, AVPacket *pkt, int *got_packet, AVFrame *frame)
{
    int ret;

    *got_packet = 0;

    ret = avcodec_send_frame(avctx, frame);
    if (ret < 0)
        return ret;

    ret = avcodec_receive_packet(avctx, pkt);
    if (!ret)
        *got_packet = 1;
    if (ret == AVERROR(EAGAIN))
        return 0;

    return ret;
}

Callback approach

Since multiple packets of output could be produced for each input frame, it is better to loop over the output as soon as possible.

// return 0 on success, negative on error
typedef int (*process_packet_cb)(void *ctx, AVPacket *pkt);

int encode(AVCodecContext *avctx, AVFrame *frame,
           process_packet_cb cb, void *priv)
{
    AVPacket *pkt = av_packet_alloc();
    int ret;

    ret = avcodec_send_frame(avctx, frame);
    if (ret < 0)
        goto out;

    while (!ret) {
        ret = avcodec_receive_packet(avctx, pkt);
        if (!ret)
            ret = cb(priv, pkt);
    }

out:
    av_packet_free(&pkt);
    if (ret == AVERROR(EAGAIN))
        return 0;
    return ret;
}

The I/O should happen in a different thread when possible, so the callback should just enqueue the packets.
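
As a sketch of that, the callback below just takes a new reference to the packet and hands it over to a thread-safe queue; PacketQueue and queue_push() are hypothetical, only av_packet_clone() is the real API:

static int enqueue_packet(void *priv, AVPacket *pkt)
{
    PacketQueue *q = priv;                  // hypothetical queue type
    AVPacket *copy = av_packet_clone(pkt);  // keep the data alive after we return

    if (!copy)
        return AVERROR(ENOMEM);

    return queue_push(q, copy);             // the I/O thread pops and writes it
}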

Coming Next

This post is long enough so the next one might involve converting a codec to the new API.

Bitstream Filtering

Last weekend, after a few months of work, the new bitstream filter API finally landed.

Bitstream filters

In Libav it is possible to manipulate raw and encoded data in many ways, the most common being:

  • Demuxing: extracting single data packets and their timing information
  • Decoding: converting the compressed data packets into raw video or audio frames
  • Encoding: converting the raw multimedia information into a compressed form
  • Muxing: storing the compressed information along with timing and additional information.

Bitstream filtering is somewhat less considered, even if bitstream filters are widely used under the hood to demux and mux many widely used formats.

It could be considered an optional final demuxing or muxing step, since it works on encoded data and its main purpose is to reformat the data so it can be accepted by decoders that consume only a specific serialization among the many supported (e.g. the HEVC QSV decoder), or so it can be correctly muxed into a container format that stores only a specific kind.

In Libav this kind of reformatting normally happens automatically, with the annoying exception of MPEGTS muxing.

New API

The new API is modeled after the pull/push paradigm I described for AVCodec before; it works on AVPackets and has the following concrete implementation:

// Query
const AVBitStreamFilter *av_bsf_next(void **opaque);
const AVBitStreamFilter *av_bsf_get_by_name(const char *name);

// Setup
int av_bsf_alloc(const AVBitStreamFilter *filter, AVBSFContext **ctx);
int av_bsf_init(AVBSFContext *ctx);

// Usage
int av_bsf_send_packet(AVBSFContext *ctx, AVPacket *pkt);
int av_bsf_receive_packet(AVBSFContext *ctx, AVPacket *pkt);

// Cleanup
void av_bsf_free(AVBSFContext **ctx);

In order to use a bsf you need to:

  • Look up its definition AVBitStreamFilter using a query function.
  • Set up a specific context AVBSFContext, by allocating, configuring and then initializing it.
  • Feed the input using the av_bsf_send_packet function and get the processed output once it is ready using av_bsf_receive_packet.
  • Once you are done, av_bsf_free cleans up the memory used for the context and the internal buffers.

Query

You can enumerate the available filters

void *state = NULL;

const AVBitStreamFilter *bsf;

while ((bsf = av_bsf_next(&state))) {
    av_log(NULL, AV_LOG_INFO, "%s\n", bsf->name);
}

or directly pick the one you need by name:

const AVBitStreamFilter *bsf = av_bsf_get_by_name("hevc_mp4toannexb");

Setup

A bsf may use some codec parameters and time_base and provide updated ones.

AVBSFContext *ctx;

ret = av_bsf_alloc(bsf, &ctx);
if (ret < 0)
    return ret;

ret = avcodec_parameters_copy(ctx->par_in, in->codecpar);
if (ret < 0)
    goto fail;

ctx->time_base_in = in->time_base;

ret = av_bsf_init(ctx);
if (ret < 0)
    goto fail;

ret = avcodec_parameters_copy(out->codecpar, ctx->par_out);
if (ret < 0)
    goto fail;

out->time_base = ctx->time_base_out;

Usage

Multiple AVPackets may be consumed before an AVPacket is emitted or multiple AVPackets may be produced out of a single input one.

AVPacket *pkt;

while (got_new_packet(&pkt)) {
    ret = av_bsf_send_packet(ctx, pkt);
    if (ret < 0)
        goto fail;

    while ((ret = av_bsf_receive_packet(ctx, pkt)) == 0) {
        yield_packet(pkt);
    }

    if (ret == AVERROR(EAGAIN))
        continue;
    if (ret == AVERROR_EOF)
        goto end;
    if (ret < 0)
        goto fail;
}

// Flush
ret = av_bsf_send_packet(ctx, NULL);
if (ret < 0)
    goto fail;

while ((ret = av_bsf_receive_packet(ctx, pkt)) == 0) {
    yield_packet(pkt);
}

if (ret != AVERROR_EOF)
    goto fail;

In order to signal the end of stream a NULL pkt should be fed to send_packet.

Cleanup

The cleanup function matches the av_freep signature so it takes the address of the AVBSFContext pointer.

    av_bsf_free(&ctx);

All the memory is freed and the ctx pointer is set to NULL.

Coming Soon

Hopefully next I’ll document the new HWAccel layer that already landed and some other APIs that I discussed with Kostya before.
Sadly my blog-time (and spare time in general) shrunk a lot in the past months, so he rightfully blamed me a lot.

lxc, ipv6 and iproute2

Not so recently I got a soyoustart system since it is provided with an option to install Gentoo out of the box.

The machine comes with a single ipv4 address and a /64 of ipv6 addresses.

LXC

I want to use the box to host some of my flask applications (plaid mainly), keep some continuous integration instances for Libav and run some other experiments with compilers and libraries (such as musl, cparser and others).

Since Diego was telling me about lxc I picked it. It is simple, does not require much effort and in Gentoo we have at least some documentation.

Setting up

I followed the documentation provided and it worked quite well up to a point. The btrfs integration works as explained, creating new Gentoo instances just worked, setting up the network… required some effort.

Network woes

I have just one single ipv4 address and some ipv6 ones, so why not leverage them? I decided to partition my /64 and use some of it, configured the bridge to take ::::1::1 and set up the container configuration like this:

lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.1.4/16
lxc.network.ipv4.gateway = auto
lxc.network.ipv6 = ::::1::4/80
lxc.network.ipv6.gateway = auto
lxc.network.hwaddr = 02:00:ee:cb:8a:04

But the route to my container wasn’t advertised.

Having no idea why I just kept poking around sysctl and iproute2 until I got:

  • sysctl.conf:
  net.ipv6.conf.all.forwarding = 1
  net.ipv6.conf.eth0.proxy_ndp = 1

and this in my container runner script:

ip -6 neigh add proxy ::::1::4 dev eth0

I know that at least some other people had this problem, hence this mini-post.

Code and Conduct

This is a sort of short list of checklists and a few ramblings in the wake of Fosdem’s Code of Conduct discussions and of the not exactly welcoming statements about how to perceive a Code of Conduct such as this one.

Code of Conduct and OpenSource projects

A Code of Conduct is generally considered a means to get rid of problematic people (and thus avoid toxic situations). I prefer to consider it a means to welcome people and provide good guidelines to newcomers.

Communities without a code of conduct tend to reject the idea of having one, thinking that it is only needed to solve the above-mentioned issue and that adding more bureaucracy would just give more leeway to Machiavellian ploys.

Sadly, no matter how good the environment is, it takes just a few poisonous people to end up in an unbearable situation, and in a few selected cases you just need one.

If you consider the CoC a shackle or a stick to beat “bad guys” with, so that you do not need it until you see a bad guy, that is naive and utterly wrong: you will end up writing something that excludes people due to a quite understandable, but wrong, knee-jerk reaction.

A Code of Conduct should do exactly the opposite: it should embrace people and make it easier to join and fit in. It should be the social equivalent of the developer handbook or the coding style guidelines.

Just as everybody can make a little effort and send code with spaces between operators, everybody can make an effort and not use colorful language. Likewise, just as people are happier to contribute when the codebase they are hacking on is readable, they are more confident about joining the community when the environment is pleasant.

Making a useful Code of Conduct

The Code of Conduct should be a guideline for people that have no idea what the expected behavior is.
It should be written thinking about how to help people get along, not about how to punish those who do not.

  • It should be short. It is pointless to enumerate ALL the possible ways to make people uncomfortable, you are bound to miss a few.
  • It should be understanding and inclusive. Always assume cultural biases and not ill will.
  • It should be enforced. It gets quite depressing when you have a 100+ line code of conduct but then nobody cares about it and nobody really enforces it. And I’m not talking about having specifically designated people to enforce it: your WHOLE community should agree on what is acceptable behavior and act accordingly on breaches.

People joining the community should consider the Code of Conduct first as a request (and not a demand) to make an effort to get along with the others.

Pitfalls

Since I saw quite some long and convoluted walls of text being suggested as THE CODE OF CONDUCT everybody MUST ABIDE BY, here are some suggestions on what NOT to do.

  • It should not be a political statement: this is a strong cultural bias that would make potential contributors just stay away. No matter how good and great you think your ideas are, those are unrelated to a project that should gather all the people that enjoy writing code in their spare time. The Open Source movement is already an ideology in itself, overloading it with more is just a recipe for a disaster.
  • Do not try to make a long list of definitions, you just dilute the content and give even more ammo to lawyer-type arguers.
  • Do not think much about making draconian punishments: this is a community on the internet, even nowadays nobody really knows if you are actually a dog or not, and you cannot really enforce anything if the other party really wants to be a pest.

Good examples

Some CoCs I consider good are obviously the ones used in the communities I belong to, Gentoo and Libav: they are really short and to the point.

Enforcing

As I said before, no matter how well written a code of conduct is, the only way to really make it useful is if the community as a whole helps new (and not so new) people to get along.

The rule of thumb “if anybody feels uncomfortable in a non-technical discussion, once they say they are, drop it immediately” is OK as long as:

  • The person who is uncomfortable speaks up. If you are shy you might ask somebody else to speak up for you, but do not stay quiet when it happens and then file a complaint much later, that is NOT OK.
  • The rule is not abused to derail technical discussions. See my post about reviews to at least avoid this pitfall.
  • People agree to drop at least some of their cultural biases, otherwise it would end up like walking on eggshells every moment.

Letting situations go unchecked is probably the main issue: newcomers can think it is OK to behave in a certain way if people are behaving that way and nobody stops them. Again, not just specific enforcers of some kind, everybody should behave and tell clearly to those not behaving that they are being problematic.

Gentoo is a big community, so having a swift reaction gets problematic: lots of people prefer not to speak up when something happens, so people unwillingly causing the problem are not made aware of it immediately.

Libav is a much smaller community and in general nobody has qualms in saying “please stop” (that is also partially due to how the community evolved).

Hopefully this post will help avoid some mistakes and help people get along better.

Trusting the context

This mini-post spurred from this bug.

AVFrame and AVCodecContext

In Libav there are a number of patterns shared across most of the components.
It does not matter whether it models a codec, a demuxer or a resampler: you interact with it using a Context and you get data in or out of the module using some kind of abstraction that wraps the data and useful information such as the timestamp. Today’s post is about AVFrames and AVCodecContext.

AVFrame

The most used abstraction in Libav by far is the AVFrame. It wraps some kind of raw data that can be produced by decoders and fed to encoders, passed through filters, scalers and resamplers.

It is quite flexible and contains the data and all the information needed to understand it, e.g.:

  • format: Used to describe either the pixel format for video and the sample format for audio.
  • width and height: The dimension of a video frame.
  • channel_layout, nb_samples and sample_rate for audio frames.

AVCodecContext

This context contains all the information useful to describe a codec and to configure an encoder or a decoder (the generic, common features, there are private options for specific features).

Being shared between the encoder, the decoder and (until Anton’s plan to avoid it is deployed) the container streams, this context is fairly large, and a good deal of its fields are a little confusing, either because they seem to replicate what is present in the AVFrame or because they aren’t marked as write-only, since they might be read in a few situations.

In the bug mentioned above channel_layout was the confusing one, but width and height also caused problems to people thinking that the values of those fields in the AVCodecContext would represent what is in the AVFrame (then you’d wonder why you should have them in two different places…).

As a rule of thumb, everything that is set in a context is just the starting configuration and is bound to change in the future.

Video decoders can reconfigure themselves and output video frames with completely different geometries, audio decoders can report a completely different number of channels or variations in their layout and so on.

Some encoders are able to reconfigure on the fly as well, but usually with more strict constraints.

Why their information is not the same

The fields in the AVCodecContext are used internally and updated as needed by the decoder. The decoder can be multithreaded so the AVFrame you are getting from one of the avcodec_decode_something() functions is not the last frame decoded.

Do not expect any of the fields with names similar to the ones provided by AVFrame to stay immutable or to match the values provided by the AVFrame.

Common pitfalls

Allocating video surfaces

A quite common mistake is to use the AVCodecContext coded_width and coded_height to allocate the surfaces used to present the decoded frames.

As said, the frame geometry can change mid-stream, so if you do that, in the best case you get some lovely green surrounding your picture and in the worst case a bad crash.

I suggest always checking that the AVFrame dimensions fit and being ready to reconfigure your video output when they change.
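
A sketch of that check; the display struct and reconfigure_display() are hypothetical placeholders for your own video output code:

if (frame->width  != display->width  ||
    frame->height != display->height ||
    frame->format != display->format) {
    // The stream geometry changed: reallocate the surfaces before drawing
    ret = reconfigure_display(display, frame->width, frame->height, frame->format);
    if (ret < 0)
        ...
}

render_frame(display, frame);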

Resampling audio

If you are using a current version of Libav you have avresample_convert_frame() doing most of the work for you; if you are not, you need to check that format, channel_layout and sample_rate do not change and reconfigure manually.
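
The manual check is along the same lines; cfg and reconfigure_resampler() are hypothetical placeholders for however you track the resampler configuration:

if (frame->format         != cfg->sample_fmt     ||
    frame->channel_layout != cfg->channel_layout ||
    frame->sample_rate    != cfg->sample_rate) {
    // The stream parameters changed: tear down and set up the resampler again
    ret = reconfigure_resampler(cfg, frame);
    if (ret < 0)
        ...
}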

Rescaling video

Similarly you can misconfigure swscale, so you should manually check that format, width and height do not change and reconfigure as well. The AVScale draft API provides an avscale_process_frame().

In closing

Be extra careful, think twice and beware of the examples you might find on the internet: they might work until they won’t.

Reviews

This spurred from some events happening in Gentoo: since the move to git we eventually have more reviews, and obviously comments on patches can be acceptable (and accepted) depending on a number of factors.

This short post is about communicating effectively.

When reviewing patches

No point in pepper coating

Do not disparage code or, even worse, people. There is no point in being insulting, you add noise to the signal:

You are a moron! This shit has no place here, do not do something this stupid again.

This is not OK: most people will focus on the insult and the technical argument will be totally lost.

Keep in mind that you want people doing stuff for the project not run away crying.

No point in sugar coating

Do not downplay stupid mistakes that would crash your application (or wipe an operating system) because you think it would hurt the feelings of the contributor.

    rm -fR /usr /local/foo

It is as silly as you like but the impact is HUGE.

This is a tiny mistake, you should not do that again.

No, it isn’t tiny, it is quite a problem.

Mistakes happen, the review is there to avoid them hitting people, but a modicum of care is needed:
wasting other people’s time is still bad.

Point the mistake directly by quoting the line

And use at most 2-3 lines to explain why it is a problem.
If you can’t, it is better to fix that part yourself or to move the discussion to a more direct medium, e.g. IRC.

Be specific

This kind of change is not portable, obscures the code and does not fix the overflow issue at hand:
The expression as whole could still overflow.

Hopefully even the most busy person juggling over 5 different tasks will get it.

Be direct

Do not suggest the use of those non-portable functions again anyway.

No room for interpretation, do not do that.

Avoid clashes

If you and another reviewer disagree, move the discussion to another medium, there is NO point in spamming
the review system with countless comments.

When receiving reviews (or waiting for them)

Everybody makes mistakes

YOU included, if the reviewer (or more than one) tells you that your changes are not right, there are good odds you are wrong.

Conversely, the reviewer can make mistakes. Usually it is better to move away from the review system and discuss over email or IRC.

Be nice

There is no point in being confrontational. If you think the reviewer is making a mistake, politely point it out.

If the reviewer is not nice, do not use the same tone to fit in. Even more if you do not like that kind of tone to begin with.

Wait before answering

Do not update your patch or write a reply as soon as you get a notification of a review, more changes might be needed and maybe other reviewers have additional opinions.

Be patient

If a patch is unanswered, ping it maybe once a week, possibly rebasing it if the world changed meanwhile.

Keep in mind that most of your interaction is with other people volunteering their free time and not getting anything out of it either; sometimes real life takes priority =)

Tags in git

A mini-post about using tags in git commits.

Tags

In git a commit message is structured as a subject line, an empty line and more text forming the body of the message.

The subject can be split into two components: tags and the actual subject.

tag1: tag2: Commit Subject

A body with more information spanning
multiple lines.

The tags can be used to pin the general area the patch is impacting, e.g.:

ui: Change widget foo

Usage

When you are looking at the history using git log, having tags helps a lot in digging out old commits. For example: you remember some commit added some timeout system in something related to the component foo.

git log --oneline | grep foo:

would help figure out the commit.

This usage works best with a not-so-well-structured codebase, since otherwise you could just do

git log --oneline module/component

if you use separate directories for each module and component within the module.

PS: This is one of the reasons plaid focuses a lot on tags and I complain a lot when tags are not used.

Nobody hears you being subtle on twitter

You might be subtle like this or just work on your stuff like that, but then nobody will know that you are the one who did something (and they will praise somebody else completely unrelated for your work, e.g. Anton not being praised much for the HEVC threaded decoding, the huge work on ref-counted AVFrame and many other things).

Blogging is boring

Once you have written something in code, talking about it gets sort of boring: the code is there, it works, and maybe you spent enough time discussing it on the mailing list and IRC that once it is done you wouldn’t want to think about it for at least a week.

The people at xiph got it right and they wrote awesome articles about what they are doing.

Blogging is important

JB got it right by writing posts about what happened every week. Now journalists can pick from there what’s cool and coming from VLC, and do not have to try to extract useful information from git log, scattered mailing lists and conversations on IRC.
I’m not sure I’ll have the time to do the same, but surely I’ll prod at least Alexandra and the others to write more.

Deprecating AVPicture

In Libav we try to clean up the API and make it more regular; this is one of the possibly many articles I will write about APIs, this time about deprecating a relic from the past and why we are doing it.

AVPicture

This struct used to store image data using data pointers and linesizes. It comes from the far past and it looks like this:

typedef struct AVPicture {
    uint8_t *data[AV_NUM_DATA_POINTERS];
    int linesize[AV_NUM_DATA_POINTERS];
} AVPicture;

Once the AVFrame was introduced, AVPicture was made to alias it, and for some time the two structures were actually defined sharing the common initial fields through a macro.

The AVFrame then evolved to store both audio and image data, to use AVBuffer to avoid needless copies and to provide more useful information (e.g. the actual data format); now it looks like:

typedef struct AVFrame {
    uint8_t *data[AV_NUM_DATA_POINTERS];
    int linesize[AV_NUM_DATA_POINTERS];

    uint8_t **extended_data;

    int width, height;

    int nb_samples;

    int format;

    int key_frame;

    enum AVPictureType pict_type;

    AVRational sample_aspect_ratio;

    int64_t pts;

    ...
} AVFrame;

The image-data manipulation functions moved to the av_image namespace and use data and linesize pointers directly, while the equivalent avpicture functions became wrappers over them.

int avpicture_fill(AVPicture *picture, uint8_t *ptr,
                   enum AVPixelFormat pix_fmt, int width, int height)
{
    return av_image_fill_arrays(picture->data, picture->linesize,
                                ptr, pix_fmt, width, height, 1);
}

int avpicture_layout(const AVPicture* src, enum AVPixelFormat pix_fmt,
                     int width, int height,
                     unsigned char *dest, int dest_size)
{
    return av_image_copy_to_buffer(dest, dest_size,
                                   src->data, src->linesize,
                                   pix_fmt, width, height, 1);
}

...

It is also used in the subtitle abstraction:

typedef struct AVSubtitleRect {
    int x, y, w, h;
    int nb_colors;

    AVPicture pict;
    enum AVSubtitleType type;

    char *text;
    char *ass;
    int flags;
} AVSubtitleRect;

It is also used to crudely pass an AVFrame from the decoder level to the muxer level for certain rawvideo muxers, by doing something such as:

    pkt.data   = (uint8_t *)frame;
    pkt.size   =  sizeof(AVPicture);

AVPicture problems

In the codebase its remaining usage is dubious at best:

AVFrame as AVPicture

In some codecs the AVFrames produced or consumed are cast to AVPicture and passed to avpicture functions instead of using the av_image functions directly.

AVSubtitleRect

For the subtitle codecs, accessing the Rect data requires a pointless indirection, usually something like:

    wrap3 = rect->pict.linesize[0];
    p = rect->pict.data[0];
    pal = (const uint32_t *)rect->pict.data[1];  /* Now in YCrCb! */

AVFMT_RAWPICTURE

Copying memory from one buffer to another when it can be avoided is considered a major sin (“memcpy is murder”), since it is a costly operation in itself and it usually invalidates the cache if we are talking about large buffers.

Certain muxers for rawvideo try to spare a memcpy and thus avoid a “murder” by not copying the AVFrame data to the AVPacket.

The idea in itself is simple enough: store the AVFrame pointer as if it pointed to a flat array, consider the data size to be the size of an AVPicture and hope that the data pointed to by the AVFrame remains valid while the AVPacket is consumed.

Simple and faulty: with the AVFrame ref-counted API, codecs may use a pool of AVFrames and reuse them.
It can lead to surprising results because the buffer gets updated before the AVPacket is actually written.
If the referenced frame changes dimensions or gets deallocated it could even lead to crashes.

Definitely not a great idea.

Solutions

Vittorio, wm4 and I worked together to fix the problems. Radically.

AVFrame as AVPicture

The av_image functions are now used when needed.
Some pointless copies got replaced by av_frame_ref, leading to less memory usage and simpler code.

No AVPictures are left in the video codecs.

AVSubtitle

The AVSubtitleRect is updated to have simple data and linesize fields and each codec is updated to keep the AVPicture and the new fields in sync during the deprecation window.
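
With the new fields the access shown above becomes direct, something like:

    wrap3 = rect->linesize[0];
    p     = rect->data[0];
    pal   = (const uint32_t *)rect->data[1];  /* Now in YCrCb! */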

The code is already a little easier to follow now.

AVFMT_RAWPICTURE

Just dropping the “feature” would be a problem since those muxers are widely used in FATE and the time the additional copy takes adds up to quite a lot. Your regression tests must be as quick as possible.

I wrote a safer wrapper pseudo-codec that leverages the fact that both AVPacket and AVFrame use a ref-counted system:

  • The AVPacket takes the AVFrame and increases its ref-count by 1.
  • The AVFrame is then stored in the data field and wrapped in a custom AVBuffer.
  • That AVBuffer destructor callback unrefs the frame.

This way the AVFrame data won’t change until the AVPacket gets destroyed.
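
A minimal sketch of the idea (not the actual Libav code) could look like this; it relies only on av_frame_clone() and av_buffer_create(), the rest is made up for illustration:

static void frame_buffer_free(void *opaque, uint8_t *data)
{
    AVFrame *frame = (AVFrame *)data;

    av_frame_free(&frame);  // drop the reference taken in wrap_frame_in_packet()
}

static int wrap_frame_in_packet(AVPacket *pkt, const AVFrame *src)
{
    AVFrame *frame = av_frame_clone(src);   // take a new reference to the data

    if (!frame)
        return AVERROR(ENOMEM);

    pkt->buf = av_buffer_create((uint8_t *)frame, sizeof(*frame),
                                frame_buffer_free, NULL,
                                AV_BUFFER_FLAG_READONLY);
    if (!pkt->buf) {
        av_frame_free(&frame);
        return AVERROR(ENOMEM);
    }

    pkt->data = (uint8_t *)frame;
    pkt->size = sizeof(*frame);

    return 0;
}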

Goodbye AVPicture

With release 14 the AVPicture struct will be removed completely from Libav; people using it outside Libav should consider moving to the full AVFrame (and leverage the additional features it provides) or to the av_image functions directly.