Reviews

This was spurred by some events happening in Gentoo: with the move to git we will eventually have more reviews, and obviously whether comments on patches are acceptable (and accepted) depends on a number of factors.

This short post is about communicating effectively.

When reviewing patches

No point in pepper coating

Do not disparage code or, even worse, people. There is no point in being insulting: it only adds noise to the signal:

You are a moron! This shit has no place here, do not do something this stupid again.

This is not OK: most people will focus on the insult and the technical argument will be totally lost.

Keep in mind that you want people contributing to the project, not running away crying.

No point in sugar coating

Do not downplay stupid mistakes that would crash your application (or wipe an operating system) just because you think pointing them out would hurt the feelings of the contributor.

    rm -fR /usr /local/foo

Is as silly a mistake as you like, but the impact is HUGE.

This is a tiny mistake, you should not do that again.

No, it isn’t tiny; it is quite a problem.

Mistakes happen; the review is there to stop them from hitting people, but a modicum of care is needed:
wasting other people’s time is still bad.

Point out the mistake directly by quoting the line

And use at most 2-3 lines to explain why it is a problem.
If you can’t, it is better to fix that part yourself or to move the discussion to a more direct medium, e.g. IRC.

Be specific

This kind of change is not portable, obscures the code and does not fix the overflow issue at hand:
the expression as a whole could still overflow.

Hopefully even the busiest person, juggling over 5 different tasks, will get it.

Be direct

Do not suggest the use of those non-portable functions again anyway.

It leaves no room for interpretation: do not do that.

Avoid clashes

If you and another reviewer disagree, move the discussion to another medium; there is NO point in spamming
the review system with countless comments.

When receiving reviews (or waiting for them)

Everybody makes mistakes

YOU included. If a reviewer (or more than one) tells you that your changes are not right, odds are good that you are wrong.

Conversely, the reviewer can make mistakes too. It is usually better to move away from the review system and discuss over email or IRC.

Be nice

There is no point in being confrontational. If you think the reviewer is making a mistake, politely point it out.

If the reviewer is not nice, do not adopt the same tone just to fit in. Even more so if you do not like that kind of tone to begin with.

Wait before answering

Do not update your patch or write a reply as soon as you get a notification of a review; more changes might be needed and other reviewers may have additional opinions.

Be patient

If a patch goes unanswered, ping it maybe once a week, possibly rebasing it if the world has changed in the meantime.

Keep in mind that most of your interactions are with other people volunteering their free time and not getting anything out of it either; sometimes real life takes priority =)

Tags in git

A mini-post about using tags in git commit messages.

Tags

In git a commit message is structured as a first subject line, an empty line and then more text making up the body of the message.

The subject can be split into two components: tags and the actual subject.

tag1: tag2: Commit Subject

A body with more information spanning
multiple lines.

The tags can be used to pin down the general area the patch impacts, e.g.:

ui: Change widget foo

Usage

When you are looking at the history using git log, having tags helps a lot when digging out old commits. For example: you remember that some commit added a timeout system to something related to the component foo.

git log --oneline | grep foo:

Would help you figure out the commit.

This usage shines when working with a codebase that is not well structured, since in a well structured one you can alternatively do

git log --oneline module/component

If you use separate directories for each module and for each component within the module.

PS: This is one of the reasons plaid focuses a lot on tags and I complain a lot when tags are not used.

Nobody hears you being subtle on twitter

You might be subtle like this, or just quietly work on your stuff like that, but then nobody will know that you are the one who did something (and people will praise somebody else completely unrelated for your work, e.g. Anton not being praised much for the HEVC threaded decoding, the huge work on the ref-counted AVFrame and many other things).

Blogging is boring

Once you have written something in code, talking about it gets sort of boring: the code is there, it works, and maybe you spent enough time discussing it on the mailing list and irc that once it is done you don’t want to think about it for at least a week.

The people at xiph got it right and they wrote awesome articles about what they are doing.

Blogging is important

JB got it right by writing posts about what happened every week. Now journalists can pick from there what’s cool and coming from VLC, instead of having to extract useful information from git log, scattered mailing lists and conversations on irc.
I’m not sure I’ll have the time to do the same, but I’ll surely prod at least Alexandra and the others to write more.

Deprecating AVPicture

In Libav we try to clean up the API and make it more regular; this is one of the possibly many articles I will write about APIs, this time about deprecating a relic from the past and why we are doing it.

AVPicture

This struct stores image data using data pointers and linesizes. It comes from the distant past and looks like this:

typedef struct AVPicture {
    uint8_t *data[AV_NUM_DATA_POINTERS];
    int linesize[AV_NUM_DATA_POINTERS];
} AVPicture;

Once AVFrame was introduced, AVPicture was made to alias it, and for some time the two structures were actually defined sharing the common initial fields through a macro.

AVFrame then evolved to store both audio and image data, to use AVBuffer so as to avoid needless copies, and to provide more useful information (e.g. the actual data format); now it looks like this:

typedef struct AVFrame {
    uint8_t *data[AV_NUM_DATA_POINTERS];
    int linesize[AV_NUM_DATA_POINTERS];

    uint8_t **extended_data;

    int width, height;

    int nb_samples;

    int format;

    int key_frame;

    enum AVPictureType pict_type;

    AVRational sample_aspect_ratio;

    int64_t pts;

    ...
} AVFrame;

The image-data manipulation functions moved to the av_image namespace and take data and linesize pointers directly, while the equivalent avpicture functions became wrappers over them.

int avpicture_fill(AVPicture *picture, uint8_t *ptr,
                   enum AVPixelFormat pix_fmt, int width, int height)
{
    return av_image_fill_arrays(picture->data, picture->linesize,
                                ptr, pix_fmt, width, height, 1);
}

int avpicture_layout(const AVPicture* src, enum AVPixelFormat pix_fmt,
                     int width, int height,
                     unsigned char *dest, int dest_size)
{
    return av_image_copy_to_buffer(dest, dest_size,
                                   src->data, src->linesize,
                                   pix_fmt, width, height, 1);
}

...

It is also used in the subtitle abstraction:

typedef struct AVSubtitleRect {
    int x, y, w, h;
    int nb_colors;

    AVPicture pict;
    enum AVSubtitleType type;

    char *text;
    char *ass;
    int flags;
} AVSubtitleRect;

And to crudely pass an AVFrame from the decoder level to the muxer level for certain rawvideo muxers, by doing something such as:

    pkt.data   = (uint8_t *)frame;
    pkt.size   =  sizeof(AVPicture);

AVPicture problems

In the codebase its remaining usage is dubious at best:

AVFrame as AVPicture

In some codecs the AVFrames produced or consumed are cast to AVPicture and passed to the avpicture functions instead
of using the av_image functions directly.

AVSubtitleRect

For the subtitle codecs, accessing the Rect data requires a pointless indirection, usually something like:

    wrap3 = rect->pict.linesize[0];
    p = rect->pict.data[0];
    pal = (const uint32_t *)rect->pict.data[1];  /* Now in YCrCb! */

AVFMT_RAWPICTURE

Copying memory from one buffer to another when it can be avoided is considered a major sin (“memcpy is murder”), since it is a costly operation in itself and, with large buffers, it usually trashes the cache.

Certain muxers for rawvideo try to spare a memcpy, and thus avoid a “murder”, by not copying the AVFrame data to the AVPacket.

The idea in itself is simple enough: store the AVFrame pointer as if it pointed to a flat array, consider the data size to be the AVPicture size and hope that the data pointed to by the AVFrame remains valid while the AVPacket is consumed.

Simple and faulty: with the ref-counted AVFrame API, codecs may use a pool of AVFrames and reuse them.
This can lead to surprising results, because the buffer gets updated before the AVPacket is actually written.
If the referenced frame changes dimensions or gets deallocated, it can even lead to crashes.

Definitely not a great idea.

Solutions

Vittorio, wm4 and I worked together to fix the problems. Radically.

AVFrame as AVPicture

The av_image functions are now used where needed.
Some pointless copies got replaced by av_frame_ref, leading to less memory usage and simpler code.
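As an example of the kind of change involved, a copy through the AVPicture alias such as the following (a sketch, with illustrative variable names) becomes a cheap reference:

    // before: full pixel copy through the AVPicture alias
    av_picture_copy((AVPicture *)dst, (const AVPicture *)src,
                    avctx->pix_fmt, avctx->width, avctx->height);

    // after: take another reference to the same ref-counted buffers
    ret = av_frame_ref(dst, src);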

No AVPictures are left in the video codecs.

AVSubtitle

The AVSubtitleRect is updated to have plain data and linesize fields, and each codec is updated to keep the AVPicture and the new fields in sync during the deprecation window.
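Assuming the new fields mirror the old AVPicture layout, the indirection shown earlier reduces to:

    wrap3 = rect->linesize[0];
    p = rect->data[0];
    pal = (const uint32_t *)rect->data[1];  /* Now in YCrCb! */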

The code is already a little easier to follow now.

AVFMT_RAWPICTURE

Just dropping the “feature” would be a problem, since those muxers are widely used in FATE and the time the additional copy takes adds up to quite a lot. Your regression tests must be as quick as possible.

I wrote a safer wrapper pseudo-codec that leverages the fact that both AVPacket and AVFrame use a ref-counted system:

  • The AVPacket takes the AVFrame and increases its ref-count by 1.
  • The AVFrame is then stored in the data field and wrapped in a custom AVBuffer.
  • That AVBuffer destructor callback unrefs the frame.

This way the AVFrame data won’t change until the AVPacket gets destroyed.
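A minimal sketch of how such a wrapper could look follows; wrap_frame_in_packet and wrapped_frame_free are hypothetical names, the reference juggling is the point:

#include <libavcodec/avcodec.h>

static void wrapped_frame_free(void *opaque, uint8_t *data)
{
    AVFrame *frame = (AVFrame *)data;
    av_frame_free(&frame);                  // drops the reference taken below
}

static int wrap_frame_in_packet(AVPacket *pkt, AVFrame *frame)
{
    AVFrame *ref = av_frame_clone(frame);   // take our own reference (+1)
    if (!ref)
        return AVERROR(ENOMEM);

    // wrap the frame in a custom AVBuffer whose destructor unrefs it
    pkt->buf = av_buffer_create((uint8_t *)ref, sizeof(*ref),
                                wrapped_frame_free, NULL,
                                AV_BUFFER_FLAG_READONLY);
    if (!pkt->buf) {
        av_frame_free(&ref);
        return AVERROR(ENOMEM);
    }

    pkt->data = (uint8_t *)ref;
    pkt->size = sizeof(*ref);

    return 0;
}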

Goodbye AVPicture

With release 14 the AVPicture struct will be removed completely from Libav; people using it outside Libav should consider moving to the full AVFrame (and leveraging the additional features it provides) or to the av_image functions directly.

Decoupling an API – Part II

During the VDD we had lots of discussions and I enjoyed reviewing the initial NihAV implementation. Kostya has already written some more about the decoupled API that I described at a high level here.

This article is about some possible implementation details, at least another will follow.

The new API requires some additional data structures: mainly something to hold the data being consumed/produced, additional implementation callbacks in AVCodec and possibly a means to skip the queuing completely.

Data Structures

AVPacketQueue and AVFrameQueue

In the previous post I took some kind of Queue as a given.

Ideally the API for it could be really simple:

typedef struct AVPacketQueue AVPacketQueue;

AVPacketQueue *av_packet_queue_alloc(int size);
int av_packet_queue_put(AVPacketQueue *q, AVPacket *pkt);
int av_packet_queue_get(AVPacketQueue *q, AVPacket *pkt);
int av_packet_queue_size(AVPacketQueue *q);
void av_packet_queue_free(AVPacketQueue **q);

typedef struct AVFrameQueue AVFrameQueue;

AVFrameQueue *av_frame_queue_alloc(int size);
int av_frame_queue_put(AVFrameQueue *q, AVFrame *frame);
int av_frame_queue_get(AVFrameQueue *q, AVFrame *frame);
int av_frame_queue_size(AVFrameQueue *q);
void av_frame_queue_free(AVFrameQueue **q);

Internally it leverages the ref-counted API (av_packet_move_ref and av_frame_move_ref) and any data structure that fits the queue usage. It will be used in a multi-threaded scenario, so a form of lock has to fit into it as well.
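To make the intent concrete, here is a minimal sketch of how put could look, assuming a fixed-size ring buffer guarded by pthread primitives; everything besides av_packet_move_ref is made up for illustration:

#include <pthread.h>
#include <libavcodec/avcodec.h>

struct AVPacketQueue {
    AVPacket *pkts;                          // ring buffer of 'size' packets
    int size, head, nb;
    pthread_mutex_t lock;
    pthread_cond_t cond;
};

int av_packet_queue_put(AVPacketQueue *q, AVPacket *pkt)
{
    pthread_mutex_lock(&q->lock);
    while (q->nb == q->size)                 // block while the queue is full
        pthread_cond_wait(&q->cond, &q->lock);

    // move the reference into the slot: ownership transfer, no data copy
    av_packet_move_ref(&q->pkts[(q->head + q->nb++) % q->size], pkt);

    pthread_cond_signal(&q->cond);           // wake a waiting consumer
    pthread_mutex_unlock(&q->lock);
    return 0;
}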

We already have something specific for avplay, using a simple linked list, and a FIFO for some other components that have a near-constant maximum number of items (e.g. avconv, NVENC, QSV).

Possibly a tree could also be used, to implement something such as av_packet_queue_insert_by_pts and have some form of reordering happen on the fly. I’m not a fan of it, but I’m sure someone will come up with the idea.

The Queues are part of AVCodecContext.

typedef struct AVCodecContext {
    // ...

    AVPacketQueue *packet_queue;
    AVFrameQueue *frame_queue;

    // ...
} AVCodecContext;

Implementation Callbacks

In Libav the AVCodec struct describes some specific codec features (such as the supported framerates) and holds the actual codec implementation through callbacks such as init, decode/encode2, flush and close.
The new model obviously requires additional callbacks.

Once the data is in a queue it is ready to be processed; the actual decoding or encoding can happen in multiple places, for example:

  • In avcodec_*_push or avcodec_*_pull, once there is enough data. In that case the remaining functions are glorified proxies for the matching queue function.
  • Somewhere else, such as a separate thread that is started on avcodec_open or on the first avcodec_decode_push, and is eventually stopped once the context related to it is freed by avcodec_close. This is what happens under the hood when you use certain hardware accelerations.

Common

typedef struct AVCodec {
    // ... previous fields
    int (*need_data)(AVCodecContext *avctx);
    int (*has_data)(AVCodecContext *avctx);
    // ...
} AVCodec;

Those are used by both the encoder and the decoder; some implementations, such as QSV, have functions that can be used to probe the internal state in this regard.

Decoding

typedef struct AVCodec {
    // ... previous fields
    int (*decode_push)(AVCodecContext *avctx, AVPacket *packet);
    int (*decode_pull)(AVCodecContext *avctx, AVFrame *frame);
    // ...
} AVCodec;

Those two functions can take over portions of the work the current decode callback does, for example:
– the initial parsing and dispatch to a worker thread can happen in _push;
– reordering and blocking until there is data to output can happen in _pull.

Assuming the reordering does not happen outside the pull callback in some generic code.
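For illustration, the generic queue-backed path could be a thin proxy along these lines (a sketch of the proposed API, not shipped code):

int avcodec_decode_push(AVCodecContext *avctx, AVPacket *pkt)
{
    const AVCodec *codec = avctx->codec;

    if (codec->need_data && codec->need_data(avctx) <= 0)
        return AVERROR(EAGAIN);      // the decoder cannot take more input now

    if (codec->decode_push)          // codec-specific fast path
        return codec->decode_push(avctx, pkt);

    // default: queue the packet and let a worker thread consume it
    return av_packet_queue_put(avctx->packet_queue, pkt);
}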

Encoding

typedef struct AVCodec {
    // ... previous fields
    int (*encode_push)(AVCodecContext *avctx, AVFrame *frame);
    int (*encode_pull)(AVCodecContext *avctx, AVPacket *packet);
} AVCodec;

As with the decoding callbacks, the encode2 workload is split: the _push function might just keep queuing frames up until there are enough to complete the initial analysis, while, for single-threaded encoding, the rest of the work happens in _pull.

Yielding data directly

So far the API mainly keeps some queues filled and lets some magic happen under the hood; let’s see some usage examples first.

Simple Usage

Let’s expand the last example from the previous post: register callbacks to pull/push the data and have some simple loops.

Decoding

typedef struct DecodeCallback {
    int (*pull_packet)(void *priv, AVPacket *pkt);
    int (*push_frame)(void *priv, AVFrame *frame);
    void *priv_data_pull, *priv_data_push;
} DecodeCallback;

Two priv pointers, since you pull from a demuxer+parser and you push to a splitter+muxer.

int decode_loop(AVCodecContext *avctx, DecodeCallback *cb)
{
    AVPacket *pkt  = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    int ret;
    while ((ret = avcodec_decode_need_data(avctx)) > 0) {
        ret = cb->pull_packet(cb->priv_data_pull, pkt);
        if (ret < 0)
            goto end;
        ret = avcodec_decode_push(avctx, pkt);
        if (ret < 0)
            goto end;
    }
    while ((ret = avcodec_decode_have_data(avctx)) > 0) {
        ret = avcodec_decode_pull(avctx, frame);
        if (ret < 0)
            goto end;
        ret = cb->push_frame(cb->priv_data_push, frame);
        if (ret < 0)
            goto end;
    }

end:
    av_frame_free(&frame);
    av_packet_free(&pkt);
    return ret;
}

Encoding

For encoding something quite similar can be done:

typedef struct EncodeCallback {
    int (*pull_frame)(void *priv, AVFrame *frame);
    int (*push_packet)(void *priv, AVPacket *packet);
    void *priv_data_push, *priv_data_pull;
} EncodeCallback;

The loop is exactly the same besides the data types being swapped.

int encode_loop(AVCodecContext *avctx, EncodeCallback *cb)
{
    AVPacket *pkt  = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    int ret;
    while ((ret = avcodec_encode_need_data(avctx)) > 0) {
        ret = cb->pull_frame(cb->priv_data_pull, frame);
        if (ret < 0)
            goto end;
        ret = avcodec_encode_push(avctx, frame);
        if (ret < 0)
            goto end;
    }
    while ((ret = avcodec_encode_have_data(avctx)) > 0) {
        ret = avcodec_encode_pull(avctx, pkt);
        if (ret < 0)
            goto end;
        ret = cb->push_packet(cb->priv_data_push, pkt);
        if (ret < 0)
            goto end;
    }

end:
    av_frame_free(&frame);
    av_packet_free(&pkt);
    return ret;
}

Transcoding

Transcoding, done the naive way, could be something such as:

int transcode(AVFormatContext *mux,
              AVFormatContext *dem,
              AVCodecContext *enc,
              AVCodecContext *dec)
{
    DecodeCallback dcb = {
        get_packet,
        av_frame_queue_put,
        dem, enc->frame_queue };
    EncodeCallback ecb = {
        av_frame_queue_get,
        push_packet,
        enc->frame_queue, mux };
    int ret;

    do {
        if ((ret = decode_loop(dec, &dcb)) > 0)
            ret = encode_loop(enc, &ecb);
    } while (ret > 0);

    return ret;
}

One loop feeds the other through the queue. get_packet and push_packet are demuxing and muxing functions; they might end up being two more queue functions once the AVFormat layer gets a similar overhaul.
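For reference, with the current avformat API the two glue functions could be as thin as this (sketched):

static int get_packet(void *priv, AVPacket *pkt)
{
    AVFormatContext *dem = priv;     // demuxer context
    return av_read_frame(dem, pkt);
}

static int push_packet(void *priv, AVPacket *pkt)
{
    AVFormatContext *mux = priv;     // muxer context
    return av_interleaved_write_frame(mux, pkt);
}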

Advanced usage

From the examples above you may notice that in some situations you could possibly do better;
all the loops pull data from one queue and push it immediately to another:

  • why not feed the right queue immediately once you have the data ready?
  • why not do some processing before feeding the decoded data to the encoder, such as converting the pixel format?

Here are some additional structures and functions to enable advanced users:

typedef struct AVFrameCallback {
    int (*yield)(void *priv, AVFrame *frame);
    void *priv_data;
} AVFrameCallback;

typedef struct AVPacketCallback {
    int (*yield)(void *priv, AVPacket *pkt);
    void *priv_data;
} AVPacketCallback;

typedef struct AVCodecContext {
    // ...

    AVFrameCallback *frame_cb;
    AVPacketCallback *packet_cb;

    // ...
} AVCodecContext;

int av_frame_yield(AVFrameCallback *cb, AVFrame *frame)
{
    return cb->yield(cb->priv_data, frame);
}

int av_packet_yield(AVPacketCallback *cb, AVPacket *packet)
{
    return cb->yield(cb->priv_data, packet);
}

Instead of using the Queue API directly, it would be possible to use the yield functions and give the user a means to override them.

Some API sugar could be something along the lines of this:

int avcodec_decode_yield(AVCodecContext *avctx, AVFrame *frame)
{
    int ret;

    if (avctx->frame_cb) {
        ret = av_frame_yield(avctx->frame_cb, frame);
    } else {
        ret = av_frame_queue_put(avctx->frame_queue, frame);
    }

    return ret;
}

Whenever a frame (or a packet) is ready it could be passed immediately to another function; depending on your threading model and cpu, it might be much more efficient to skip some enqueuing+dequeuing steps, such as feeding directly some user queue that uses different datatypes.

This approach might work well even internally to insert bitstream reformatters after the encoding or before the decoding.
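As a usage sketch, overriding the frame callback to convert the pixel format as soon as each frame is ready could look like this; ConvertContext and convert_yield are hypothetical user code, and sws, enc and dec are assumed to be set up already:

typedef struct ConvertContext {
    struct SwsContext *sws;          // pre-configured scaler
    AVFrameQueue *out;               // destination queue for the encoder
} ConvertContext;

static int convert_yield(void *priv, AVFrame *frame)
{
    ConvertContext *c = priv;
    // ... sws_scale() from frame into a pre-allocated converted frame,
    // then hand the result over to the encoder side ...
    return av_frame_queue_put(c->out, frame);
}

// hook it up before decoding starts
ConvertContext cc = { sws, enc->frame_queue };
AVFrameCallback cb = { convert_yield, &cc };
dec->frame_cb = &cb;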

Open problems

The callback system is quite powerful, but you have at least a couple of issues to take care of:
– Error reporting: when something goes wrong, how do you notify what broke?
– Error recovery: how much does the user have to undo to fall back properly?

Probably this part should be kept for later, since there is already a huge amount of work.

What’s next

Muxing and demuxing

Ideally the container format layer should receive the same kind of overhaul; I’m not even halfway through documenting what should
change, but from this blog post you can guess the kind of changes. Spoiler: the I/O layer gets spun off into a separate library.

Proof of Concept

Soon^WNot so late I’ll complete a POC out of this and possibly hack avplay so that it uses either QSV or videotoolbox as a test case (depending on which operating system I’m playing with when I start); I’ll probably see soon what the limitations of this approach are.

If you like the ideas posted above or you want to discuss them more, you can join the Libav irc channel or mailing list to discuss and help.

Libav: The project

This is a tiny introduction to Libav, the organization.

Libav

The project aims to provide useful tools, written in portable code that is readable, trustworthy and performant.

Libav is an opensource organization focused on developing libraries and tools to decode, manipulate and encode multimedia content.

Structure

The project tries to be as non-hierarchical as possible. Every contributor must abide by a well defined set of rules, no matter their role.

For decisions we strive to reach near-unanimous consensus. Discussions may happen on irc, on the mailing list or in real-life meetings.

If possible, conflicts should be avoided and otherwise resolved.

Join us!

We are always looking for enthusiastic new contributors and will help you get started. Below you can find a number of possible ways to contribute. Please contact us.

Roles

Even if the project is non-hierarchical, it is possible to define specific roles within it. Roles do not really give additional power but additional responsibilities.

Contributor

Contributing to Libav makes you a Contributor!
Anybody who reviews patches, writes patches, helps triaging bugs, writes documentation, helps people solve their problems, or keeps our infrastructure running is considered a contributor.

It does not matter how little you contribute. Any help is welcome.

On top of the standard great feats of contributing to an opensource project, special chocolate is always available during the events.

Reviewer

Many eyes might not make every bug shallow, but probably a second and a third pair might prevent some silly mistakes.

A reviewer is supposed to read the new patches and prevent mistakes (silly, tiny or huge) from landing in master.

Because of our workflow, spending time reading other people’s patches is quite common.

People with specific expertise might get nagged to give their opinion more often than others, but everybody might spot something that looks wrong and probably is.

Bugwrangler

Checking that bugs are fixed and asking for better reports is important.

Bug wrangling involves making sure reported issues have all the needed information to start fixing the problem and checking if old issues are still valid or had been fixed already.

Committer

Nobody can push a patch to master until it is reviewed, but somebody has to push it once it is.

Committers are the people who push code to the main repository after it has been reviewed.

Being a committer requires you to take newly submitted patches, make sure they work as expected, either locally or by pushing them through our continuous integration system, and possibly fix minor issues like typos.

Patches from a committer go through the normal review process as well.

Infrastructure Administrator

The regression test system, git repository, samples collection, website, patch trackers, wiki and issue tracker are all managed on dedicated hardware.

This infrastructure needs constant maintaining and improving.

Most of it comes from people devoting their time and (with few exceptions) their own hardware; this role definitely requires a huge amount of dedication.

Rules

The project strives to provide a pleasant environment for everybody.

Every contributor is considered a member of the team, regardless of whether they are a newcomer or a founder. Nobody has special rights or prerogatives.

Well defined rules have been adopted since the founding of the project to ensure fairness.

Code of Conduct

A quite simple code of conduct is in place in our project.

It boils down to respecting the other people and being pleasant to deal with.

It is commonly enforced with a friendly warning, followed by a request to leave if the person is unable to behave and, then, eventual removal if all else fails.

Contribution workflow

The project has a simple contribution workflow:

  • Every patch must be sent to the mailing-list
  • Every patch must get a review and an Ok before it lands in the master branch

Code Quality

We have plenty of documentation to make it easy for you to prepare patches.

The reviewers usually help newcomers by reformatting the first patches and pointing and fixing common pitfalls.

If some mistakes are not caught during review, there are a few additional means to prevent them from hitting a release.

Post Scriptum

This post tried to summarize the project and its structure as if the legends surrounding it did not exist and the project were just a clean slate. Shame on me for not having written this blog post 5 years ago.

Past and Present

I have already written about the past and the current situation of Libav; if you are curious please do read the previous posts. I will probably blog again about the social issues soon.

Future

Release 12 is in the ABI-break window now and soon the release branch will be spun off! After that, some of my plans to improve the API will see initial implementations and will hopefully be available as part of release 13 (and nihav).

I will discuss avframe_yield first since Kostya already posted about a better way to handle container formats.

Patches and Plaid

This is part of the better tools series.

Sometimes you should question the tools you are using and try to see if there is something better out there. Or build it yourself.

Juggling patches

When interacting with people, it is quite common to send changes back and forth to the shared codebase you are working on.

This post tries to analyze two commonly used models, explain why they can be improved, and look at which tools (existing or not) are good for the job.

The two models

The focus is on git, github-like web-mediated pull-requests and mailinglist-oriented workflows.

The tools in use are always:

  • a web browser
  • an editor
  • a shell
  • an email client

Some people might have all of them in one, in a way or another, making one of the two models already incredibly more effective. Below I assume you do not have such a tightly integrated environment.

Pull requests

Github made it quite easy to propose patches in the form of ephemeral branches that can be reviewed and merged with a single click in your browser.

The patchset can be part of your master tree or a brand new branch pushed to your repository for this purpose: first you push your changes to github, then you go to your browser to send the pull request (also known as a merge request or proposed changeset).

You can get an email notification that a pull request is available and then move to your browser to review it.

You might get a continuous integration report out of it, and if you trust it you may skip fetching the changes and testing them locally.

If something does not work exactly as it should, you can notify the proponents; they might get an email saying that there are comments, and they have to go to the browser to see them in detail.

Then the changes have to be pushed to the right branch and github helpfully updates it.

Then the reviewer has to get back to the browser and check again.

Once that is done you have your main tree with lots of merge artifacts and possibly some fun time if you want to bisect the history.

Mailing-list mediated

The mailing-list-mediated model is sort of popular because Linux uses it and git provides tools for it out of the box.

Once you have a set of patches (say 5) you are happy with, you can simply issue

git send-email --compose -5 --to the_mailing@list.org

And if you have a local mailer working that’s it.

If you do not, you end up having to configure one (e.g. configuring gmail with a specific access token so you do not have to type the password all the time is sort of easy).

The people on the mailing list then receive your set in their mailboxes as-is; they can use git am to test it locally (first saving the thread using their email client, then running git am over it) and push it to something like oracle if they like the set but aren’t completely sure it won’t break everything.
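With most email clients that boils down to something along these lines (the mbox file name is arbitrary):

git am -s set.mbox

And, if the set does not apply cleanly to the current branch, git am --abort gets you back to a clean state.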

If they have comments, they can just reply to the specific patch email (using the email Message-Id).

The proponent can then rework the set (maybe using git rebase -i) and send an update, adding some comments here and there.

git send-email --annotate -6 --to the_mailing@list.org

Updates to specific patches or rework from other people can happen by just sending the patch back.

git send-email --annotate -1 --in-reply-to patch-msgid

Once the set is good, it can be applied to the tree, resulting in a purely linear history that makes looking for regressions pretty easy.

Where to improve

Pull request based

The weak and the strong point of this method is its web-centricity.

It works quite nicely if you use webmail anyway: it is just a matter of switching from one tab to another to see exactly what’s going on and reply in detail.

Yet, if your browser isn’t your shell (and you didn’t configure custom actions to auto-fetch the pull requests) you still have lots of back and forth.

Having continuous integration hooks you can quickly configure is quite nice if the project already has a solid regression and code-coverage harness, since the reviewer’s burden of making sure the code doesn’t break is lighter.

Sending a link to a pull request is easy.

Sadly, if the new code does not come with tests, or with tests you can trust, the whole point above is half moot: you have to do the whole fetch&test dance.

Reworking sets isn’t exactly perfect either; it makes it quite hard for a third party to provide input in the form of an alternate patch over a set:

  • you have to fetch the code being discussed
  • prepare a new pull request
  • reference it in your comment to the old one

then

  • the initial proponent has to fetch it
  • rebase his branch on it
  • update the pull request accordingly

and so on.

There are desktop tools trying to bridge web and shell, but right now they aren’t an incredible improvement, and the churn during review can be higher on the other side.

Surely it is really HARD to forget an open pull request.

Mailing list based

The strong point of the approach is that you have less steps for the most common actions:

  • sending a set is a single command
  • fetching a set is two commands
  • doing a quick review does not require switching to another application, you just reply to the email you received.
  • sending an update or a different approach is always the same git send-email command

It is quite loose, so people can have various degrees of integration, but in general your experience as a reviewer is as good as your email client, and your experience as a proponent is as nice as your sendmail configuration.

People with a basic email client might even have problems referring to patches by their Message-Id.

The weakest point of the method is the chance of missing a patch, leaving it either unreviewed or uncommitted after the review.

Ideal situation

My ideal solution would include:

  • Not many compulsory steps, sending a patch for a habitual contributor should take the least amount of time.

  • A pre-screening of patches, ideally making sure the new code has tests and passes them in some testing environments.

  • Reviewing should take the least amount of time.

  • A means to track patches, making it easy to know whether a set is still pending review or has been committed.

Enters plaid

I enjoy the mailing-list approach more since it is much quicker for me: I have a decent email client (that could still improve) and I know how to configure my local smtp. If I want to contribute to a new project that uses this approach, it is just a matter of finding the email address and typing git send-email --annotate --to email; github gets unwieldy if I just want to send a couple of fixes.

That said, I do see that the mailing-list shortcomings are a limiting factor, and while I’m not much concerned with making the initial setup easier (since federico already has plans for it), I do want to not lose patches and to get some of the nice and nifty features github has, without losing the development speed I enjoy.

Plaid is my attempt to improve the situation; right now it is more or less just an easier-to-deploy patch tracker along the lines of patchwork, with a diverging focus.

It emphasizes the concepts of patch tags, to provide quick grouping, and patch series, to ease reviewing a set.

curl http://plaid.libav.org/project/libav/series/50/mbox | git am -s

Is all you need to get all the patches in your working tree.

Right now it works either as a stand-alone tracker (this test deployment is fed by fetching from the mailing-list archives) or as a mailbox hook (as patchwork does).

Coming soon

I plan to make it act as a postfix filter, so it can inject a useful link to the patch into the email. It will provide a means to send emails directly from it, so it can double as a nicer email client for those who are more web-centric and get annoyed because gmail and the like aren’t good for the purpose.

More views such as a per-submitter view and a search view will appear as well.

Cleaner API

We are getting closer to a new release and you can see it is an even release by the amount of old and crufty code we are dropping. This usually is welcomed by some people and hated by others. This post is trying to explain what we do and why we are doing it.

New API and old API

Since the start of Libav we have tried to address the painful shortcomings of the previous management; here is the short list:

  • No leaders or dictators, there are rules agreed by consensus and nobody bends them.
  • No territoriality, nobody “owns” a specific area of the codebase nor has special rights on it.
  • No unreviewed changes in the tree, all the patches must receive an Ok by somebody else before they can be pushed in the tree.
  • No “cvs is the release”, major releases at least twice per year, bugfix-only point releases as often as needed.
  • No flames and trollfests, some basic code of conduct is enforced.

One of the effects of this is that the APIs are discussed, proposals are documented, and little by little we are migrating to a hopefully more rational and less surprising API.

What’s so bad regarding the old API?

Many of the old APIs were not designed at all, but just randomly added because mplayer or ffmpeg.c happened to need some
feature at the time. The result was usually un(der)documented, hard to use correctly and often not well defined in some cases. Most users of the old API that I’ve seen actually used it wrong and would at best occasionally fail to work, at worst crash randomly.
– Anton

To expand a bit on that, you can break down the issues with the old API into three groups:

  • Unnamespaced common names (e.g. CODEC_ID_NONE); those may or may not clash with other libraries.
  • Previously exposed, now-internal-only fields that were expected to be something they really are not (e.g. AVCodecContext.width).
  • Functionality that does not really work well (e.g. the old audio resampler), for which a replacement eventually got provided (AVResample).

The worst result of API misuse could be a crash in specific situations (e.g. if you use the AVCodecContext dimensions when you should use the AVFrame dimensions to allocate your screen surface, you get quite an ugly crash, since the former represent the decode-time dimensions while the latter are the dimensions of the frame you are going to present, and they can vary a LOT).
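To make that concrete, here is a hedged sketch of the correct pattern, with a hypothetical screen_surface_alloc() helper on the application side:

static int present_frame(AVCodecContext *avctx, AVPacket *pkt)
{
    AVFrame *frame = av_frame_alloc();
    int got_frame  = 0;
    int ret = avcodec_decode_video2(avctx, frame, &got_frame, pkt);

    if (ret >= 0 && got_frame) {
        // avctx->width/height describe the decode-time dimensions and can
        // change mid-stream; frame->width/height describe this very frame.
        struct Surface *s = screen_surface_alloc(frame->width,
                                                 frame->height,
                                                 frame->format);
        // ... blit frame->data onto s and display it ...
    }

    av_frame_free(&frame);
    return ret;
}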

But Compatibility ?!

In Libav we try our best to provide migration paths, and in past years we even went the extra mile by providing patches for quite a bit of the software Debian was distributing at the time. (Since nobody even said thanks for the effort, I doubt the people involved would do that again…)

Keeping backwards compatibility forever is not really feasible:

  • You do want to remove a clashing symbol from your API
  • You do want applications to not crash because of wrong assumptions
  • You do want people to use the new API and not keep compatibility wrappers that might not work in certain
    corner cases.

The current consensus is to try to keep an API deprecated for about 2 major releases; with release 12 we are dropping code that was deprecated 2-3 years ago.

Next!

I have been busy with my day-job deadlines, so I couldn’t make progress on the new API for avformat and avcodec I described before; the next blog post will probably be longer and a bit more technical again.

My fun starts now

Debian decided to move to the new FFmpeg. What does it mean to me? Why should I care? This post won’t be technical for once; if you think “Libav is evil”, start reading from here.

Relationship between Libav and Debian

What was FFmpeg split into two projects, with Michael Niedermayer keeping the name, due to his ties with the legal owner of the trademark, and “merging” everything the group of 18 people was doing under the new Libav name.

For Gentoo I, maybe naively, decided to just have both and let whoever wanted maintain the other package. Gentoo is about choice, and whoever wants to shoot themselves in the foot has to be free to do so in the safest possible way.

For Debian, being binary-packaged, whoever was maintaining the package decided to stay with Libav. It wasn’t surprising, given that “lack of releases” was one of the sore points of the former FFmpeg, and he started to get involved with upstream to try to fix it.

Perceived Leverage and Real Shackles

Libav started with the idea of fixing everything that went wrong with the former FFmpeg:
– Consensus instead of idolatry for THE Leader
– Paced releases instead of “cvs is always a release”
– Maintained release branches for years
– git instead of svn
– Cleaner code instead of quick hacks to solve the problem of the moment
– Helping downstreams instead of giving them the finger.

Being in Debian, according to some people, was undeserved because “Libav is evil”, and since we wrongly thought that people would look at actions and not at random blog posts by people with more bias than anything, we just kept writing code. It was a huge mistake; this blog post and the previous one are my attempt to address that.

To me, being in Debian meant that I had to help fix stale versions of software, often even without an upstream.

The people at Debian, instead of helping (the number of patches coming from people @debian.org over the years amounts to 1 according to git), kept piling work up on us.

Fun requests such as “do remove a standard test image because its origin, according to them, is unclear” or “do maintain the ancient release branch that is 3 major releases behind” have been quite common.

For me Debian has been no help, just an additional burden.

The leverage that being in a distribution theoretically gives, according to those crying because the evil Libav was in Debian, amounts to none for me: their users complain because the version provided is stale, their developers do not help even with keeping the point releases up or updating the software using Libav because they are scared of being tainted, and downstreams such as Kubi (who are naive enough to praise FFmpeg for what happened in Libav, such as the HEVC multi-thread support Anton wrote) keep picking the implementation they prefer and using ffmpeg-only API whenever they can (debian will ask us to fix that for them anyway).

Is important being in Debian?

The last time they were discussing moving to FFmpeg I had the unpleasant experience of reading lots of lovely emails with passive-aggressive snide remarks such as “libav has just developers, not users”, and of seeing the fruits of the smear campaign, such as “is it true you stole the FFmpeg hardware”, on their mailing list (btw, at the past VDD the FFmpeg people said at least that would be addressed; well, it has not been yet, thank you).

At that time I was asked to present Libav; this time, after reading in the debian wiki the “case” presented with skewed git statistics (maybe purge the merge commits when you count them to compare project activity?) and other number dressing, I just got sick of it.

Personally I do not care. There are better ways to spend your free time than doing distro maintenance work for people who do not even thank you (because you are evil).

The smear campaign pays

I’m sure that now that the new FFmpeg gets to replace Libav it will get more contributions from people @debian.org, and maybe those who were crying about the “oh so unjust” treatment will be happy to do the maintenance churn.

Anyway, that’s not my problem anymore, and I guess I can spend more time writing about the “social issues” around the project, trying to defuse at least a little the oh-so-effective “Libav is evil” narrative, one post at a time.

Summer Sprint in Stockholm

Last weekend some libav developers met in the South Pole offices, with additional sponsorship from Inteno Broadband Technology (and from the people at Borgodoro, who gave us more chocolate to share with everybody).

Sprints

Since last year libav has been holding sprints to meet up, discuss in person topics that require a more direct medium than IRC or the mailing list, and usually write some code while asking for direct opinions and help.

Who attended

Benjamin was our host for the event. Andreas joined us for the first day only, while Anton, Vittorio, Kostya, Janne, Jan and Rémi stayed both days.

What we did

The focus was split among a number of areas of interest:

  • API: some interesting discussion between Rémi and Anton on how to clarify a tricky detail regarding AVCodecContext and AVFrame and whom to trust when.
  • Reverse Engineering: Vittorio and Kostya having fun unraveling codecs one after the other (I think they got 3 working).
  • Release 12 API and ABI break
    • What to remove and what to keep further
    • What to change so it is simpler to use
    • If there is enough time to add the decoupled API for avcodec
  • Release 12 wishlist:
    • HEVC speed improvements, since even the C code can be sped up.
    • HEVC extended range support, since there is YUV 422 content out now.
    • More optimizations for the newer architectures (aarch64 and power64le)
    • More hardware accelerator support (e.g. HEVC encoding and decoding support for Intel MediaSDK).
    • Some more filters, since enough people asked for them.
    • Merge some of the pending work (e.g. go2meeting3, the new asf demuxer).
    • Get more security fixes in (with ago kindly helping me on this).
    • … and more …
  • New website with markdown support to make easier for people to update.

During the sprint we managed to write a lot of code and even to push some of it.
Maybe a little too early in the case of asf, but better to have it in and fix it in time for the release.

Special mention to Jan for getting a quite exotic container almost ready (I’m looking forward to seeing it on the ml), and to Andreas for reminding me that AVScale is sorely needed, by sending me a patch that fixes a problem his PowerPC users are experiencing while uncovering some strange problem in swscale… I’ll need to figure out a good way to get a big-endian PowerPC running to look at it in detail.

Thank you

I want to especially thank all the people at South Pole who welcomed me when I arrived a day early, and all the people who participated and made the event possible. It was fun!

Post Scriptum

  • This post was delayed a week since I was horribly busy; sorry for the delay =)
  • During the sprint, legends such as kropping the sourdough monster and the burning teapot were born; references to them will probably appear in commits and code.
  • Anybody with experience with qemu-user for PowerPC is welcome to share their knowledge with me.