MXF Op-Atom files for Avid

THIS POST HAS BEEN PARTLY SUPERSEDED BY THE MUCH FASTER METHOD SHOWN HERE: http://johnwarburton.net/blog/?p=50731 BUT SOME METADATA IS OMITTED.

This post shows how to convert almost any kind of video and audio into native Avid Op-Atom MXF files, suitable for placement directly in Avid’s MXF media files directory. The method is fast, and uses only open source software. Crucially, conversion takes place on any machine, not just an Avid-equipped computer.

A side note regarding AMA: it is sometimes a little flaky when linking to files that aren’t from a small subset of QuickTime formats, or that don’t have their own manufacturer-tested plugins.

In this example, I am importing footage into a 25fps HD project. The Avid codec is its own DNxHD, running at 145 Mbit/s.

Use FFmpeg to convert your incoming footage into uncompressed audio files and into Avid’s native video format. Note that the video is not encapsulated beyond the raw DNxHD bitstream: that stream carries almost enough information about the file for import to take place, but not quite everything. Frame rate, for example, seems to be missing.

So, convert the incoming video into DNxHD and uncompressed audio with FFmpeg like this:

ffmpeg -i "bach.flv" -vcodec dnxhd -b:v 145M -an -sws_flags lanczos -vf "scale=1920:1080, smartblur=1.0:-1.0" bach-video.dnxhd -vn -ar 48000 -acodec pcm_s16le bach-audio.wav

I have scaled the video to the correct size using what I consider to be the best scaling algorithm (Lanczos), and have added a little crispness to avoid too much softening. Obviously, you will not want to do this to footage that is already the correct dimensions and does not need restoration.

Now, we must prepare these files for Avid, in the same way that Avid itself imports files. They must be encapsulated as Avid-flavour MXFs (Op-Atom). Here, the BBC- and EBU-supported raw2bmx utility, from bmxlib, comes into play. Again, this is open source software, and the command line is a very simple one. Much more metadata can be included, and you’ll need to think about this if you’re going to reconform the project at any stage.

On this command line, I instruct raw2bmx to wrap both the video file and the stereo audio file into MXF. The project name is given, as is a clip name. The output file location, together with the file prefix, is also given.

You will also need to specify the frame rate, using the ‘-f’ option, if your footage is not 25fps. The acceptable rates are: 23976, 24, 25, 2997, 30, 50, 5994 and 60. The incoming DNxHD is specified by “--vc3_1080p_1237”, naming the codec, picture size and flavour. All such flavours are listed in the help for raw2bmx.

raw2bmx -t avid -f 25 --project BACH --clip "BACH001" -o "I:\Avid MediaFiles\MXF\1\BACH001" --vc3_1080p_1237 bach-video.dnxhd --wave bach-audio.wav

In your Avid MediaFiles directory, a number of MXF files will appear: Avid’s Media Tool will pick these up as clips with combined video and audio (if that’s what you’re converting), and you can drag the clips to whichever bin you wish. Note that the raw2bmx tool is terse in its progress reporting: it prints nothing until the end of the wrapping process.
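If you have a whole folder of clips to bring in, the two commands chain together neatly in a loop. Here is a minimal sketch for a Windows batch file, assuming the sources are FLV files in the current directory and that the drive letter, project name and codec flavour match the example above; adjust all of these to suit your own material.

for %%F in (*.flv) do (
    ffmpeg -i "%%F" -vcodec dnxhd -b:v 145M -an -sws_flags lanczos -vf "scale=1920:1080, smartblur=1.0:-1.0" "%%~nF-video.dnxhd" -vn -ar 48000 -acodec pcm_s16le "%%~nF-audio.wav"
    raw2bmx -t avid -f 25 --project BACH --clip "%%~nF" -o "I:\Avid MediaFiles\MXF\1\%%~nF" --vc3_1080p_1237 "%%~nF-video.dnxhd" --wave "%%~nF-audio.wav"
)

The %%~nF expansion is the filename without its extension, so each clip gets its own intermediate files and its own MXF prefix.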

Recent builds of FFmpeg can be downloaded here, and the bmxlib project is on Sourceforge here.

A Lot To Learn

Today, Thursday 21st August, the GCSE exam results come out. In my schooldays, we went through the same results procedures for our O-levels and CSEs, although coursework generally wasn’t assessed. This was the first time we’d ever experienced result nerves, as the staff rifled through sealed envelopes until the correct name was found.

It was considered normal at my large, good, comprehensive school to take somewhere between four and ten exams. Today, teenagers regularly sit many more than this, and marvellous alternative qualifications are available for young people whose examination skills don’t match their real-world virtuosity.

We had most of the benefits that modern times bring: safe food and water, the National Health Service, easy transport with much cheaper petrol, luxuries spread around more classes than in our parents’ time, and lots of entertainment on record and cassette tape.

But we didn’t have the Internet with the immense, often anonymous, social pressures it brings to young minds.

A sixteen-year-old today can debate directly over Twitter with, for example, Richard Dawkins, Buzz Aldrin or Lily Allen; but he or she is also subject to anonymous and permanent criticism or attack on any aspect of their life, real or imagined, from any corner of the globe. In our day, by contrast, almost every media outlet was heavily edited: we had newspapers, radio and TV, but zines and self-published information were much scarcer than they are today. Blogs or instant social networks, outside radio hams and CBers, were just a dream. Now, teenagers must think editorially from their earliest exposure to the Internet, or be misled.

For sixteen-year-olds today, it seems to me that there’s much more to learn, and to refute, than there was for us in 1980, thirty-four years ago.

Timecode overlay with FFmpeg

This post describes how to use FFmpeg, a free and open-source program, to burn filename and timecode automatically into any number of video files, and then save them in a form suitable for network viewing.

In the olden days, video rushes would be burned to DVD through a VTR or, latterly, a timecode plugin in Avid, so as to give everyone timecoded copies.

Today, the free and open-source FFmpeg program can complete this task in the background on almost any modern computer. In this office, it’s making burnt-in timecode visible on three machines: a Mac PowerBook G4 made in 2004 (PowerPC processor) now running Debian Jessie GNU/Linux, a Seagate GoFlex Home caddy with an ARMv5-compatible processor running GNU/Linux, and a Windows PC.

Here is a command line for a Windows machine that downconverts all files of a certain extension in a directory, and burns timecode onto them, along with the filename. At the moment, the timecode just starts at zero for each clip: it is no trouble to write an extra routine to read any embedded timecode and use that instead. The command also slaps a simple autolevel on the soundtracks (because this is for off-line logging) and adds a slow-acting video AGC to make shots palatable if they need severe grading. The font I use looks clear on screen: it is a free font, downloadable from a number of sources. You could use arial.ttf instead, because it is already on all Windows machines.

In case you’re not familiar with command lines, the backslashes are escape characters that rob the following character of any special meaning. For example, a colon : has a special meaning to FFmpeg, but preceding it with a backslash, \:, causes the colon to be treated as an ordinary printable character.

This line is for 25fps material, reducing the footage to a size of 512×288.

ffmpeg -i <VIDEOFILE> -n -acodec libfdk_aac -b:a 40k ^
-profile:a aac_he_v2 -vcodec libx264 -crf 22 ^
-vf "yadif, colormatrix=bt709:bt601, pp=al, scale=512:288, smartblur=1.0:-1.0, drawtext=fontfile='c\:\\windows\\fonts\\LiberationSans-Bold.ttf':text='<VIDEOFILE>\ \ \ \ \ \ ':x=120:y=h-lh-1:fontsize=16:fontcolor=white:shadowcolor=black:shadowx=1:shadowy=1:timecode='00\:00\:00\:00':timecode_rate=25" ^
-x264opts colorprim=bt470bg:fullrange=off:transfer=bt470bg:colormatrix=bt470bg ^
-af "compand=0.0|0.0:0.8|0.8:-90/-40|0/0:6:0:-30:0" ^
<VIDEOFILE>.mp4

Here’s a step-by-step explanation of this command.

ffmpeg
FFmpeg command name
-i <VIDEOFILE>
Use your VIDEOFILE as input
-n
Do not overwrite existing output files; skip them without asking. I use this because the command is run from a FOR…DO loop in a batch file (a sketch of such a loop appears after this explanation), and many of the files may already have been converted on an earlier run.
-acodec libfdk_aac -b:a 40k -profile:a aac_he_v2
Choose Fraunhofer’s AAC codec for audio, instruct the coder to use the HE-AAC v2 flavour of the codec, and use 40kbit/s as the bitrate. AAC is the successor codec to MP3, used very widely in applications such as iTunes; HE means the “High Efficiency” version of the codec, which uses spectral band replication to code the upper frequencies of its input; and version 2 adds a more efficient form of representing the difference between the two stereo channels. The Fraunhofer codec is arguably the best AAC encoder on the market, and its source code has been released primarily for use in Android development. However, its licence is not compatible with FFmpeg’s licence, and so it must be compiled into FFmpeg by hand, as I do.
-vcodec libx264 -crf 22
Use x264 as the video codec. This open source project is widely regarded as the most accurate H.264 codec. The coder is instructed to use a constant quality, represented by the “constant rate factor” or crf parameter. 22 is a fair trade-off between bandwidth and quality for our purposes, and is suitable for distribution over a LAN or good ADSL line.
-vf "yadif, colormatrix=bt709:bt601, pp=al, scale=512:288, smartblur=1.0:-1.0, drawtext=fontfile='c\:\\windows\\fonts\\LiberationSans-Bold.ttf':text='<VIDEOFILE>\ \ \ \ \ \ ':x=120:y=h-lh-1:fontsize=16:fontcolor=white:shadowcolor=black:shadowx=1:shadowy=1:timecode='00\:00\:00\:00':timecode_rate=25"

This is a video filter chain. It does several jobs. In order, they are:

  1. De-interlace the video. My incoming source is interlaced, but the eventual film, and web, destinations demand progressive-scan video
  2. Change the colour matrix from the HD standard to the SD (and below) standard. Many external sources will confirm how YUV representations of real-world colour pictures are calculated, and how the international standards differ in this representation between high-definition and standard-definition pictures.
  3. Add the auto-level filter from the post-production-processing library (libpostproc). This is a simple, slow-acting automatic gain control for the video. It goes before the scaling process in case of overshoots caused by the change in size or any sharpening.
  4. Scaling then takes place: a raster size of 512 x 288 gives sufficient detail for logging, but does not eat up too much bandwidth after coding
  5. The smartblur filter is a semi-intelligent edge-detection algorithm that, in this case, is asked to work on neighbouring pixels, and sharpen them. The negative number literally means “the opposite of blur”
  6. The drawtext filter writes on the video. This command (with appropriate escape characters):
    1. chooses a font by pointing to its filename;
    2. colours it white;
    3. positions the line of text appropriately (“lh” means “line height”);
    4. adds a black shadow for clarity;
    5. includes the filename;
    6. tells the timecode counter where to start (timecode display is implicit when a start frame is given);
    7. instructs the counter to count 25 frames per second.
  7. The filter chain ends here. Its output is now fed to the x264 coder.

-x264opts colorprim=bt470bg:fullrange=off:transfer=bt470bg:colormatrix=bt470bg

These are private options for the x264 coder.

  • The three options specifying “bt470bg” instruct the coder exactly how to interpret, and tell the decoder how to interpret, the conversion from YUV to RGB for display. In this case, I have chosen “ITU Recommendation BT.470 Systems B and G”, the standard colour encoding for PAL and SECAM standard definition video. I specify this exactly because some displays assume colour matrices wrongly if they are not explicitly given instructions. Some others get it wrong anyway, but we must try.
  • The fullrange instruction tells the x264 coder that the incoming video is at studio levels (16 <= Y <= 235), and the coder includes this information in its instructions to the decoder and display. Again, it ought to be assumed that YUV-encoded video is already at studio levels, but sometimes decoders get this wrong.
-af "compand=0.0|0.0:0.8|0.8:-90/-40|0/0:6:0:-30:0"
An audio filter is described here.
compand
is a general audio level alteration process.
0.0|0.0
These two zeroes describe the channel attack times for the level-detection algorithm (0.0 seconds, instant)
0.8|0.8
These are the decay time constants for the stereo channels (0.8s)
-90/-40|0/0
A simple curve for gain amplification is described: raise sounds at -90dB up to -40dB, then alter levels proportionally until 0dB (full scale) remains at 0dB
6
This defines the softness of the knee down at -40dB: the curve is 6dB wide
0
This zero instructs the filter to apply 0dB gain make-up
-30
-30dB is the initial volume that the filter should assume, to avoid very loud audio at the start of each file
0
This final zero instructs the filter to act without delaying the side-chain, thus disabling its look-ahead function. This has been included for simplicity’s sake: I didn’t have much time to tweak this figure.
<VIDEOFILE>.mp4
The output filename: the same as the input filename, but with the extension .mp4 appended.
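For completeness, here is a minimal sketch of the FOR…DO batch wrapper mentioned above. It assumes the rushes are .mov files sitting in the current directory (substitute whatever extension yours use); the long FFmpeg line is exactly the one explained above, with %%F standing in for <VIDEOFILE>:

for %%F in (*.mov) do ffmpeg -i "%%F" -n -acodec libfdk_aac -b:a 40k ^
-profile:a aac_he_v2 -vcodec libx264 -crf 22 ^
-vf "yadif, colormatrix=bt709:bt601, pp=al, scale=512:288, smartblur=1.0:-1.0, drawtext=fontfile='c\:\\windows\\fonts\\LiberationSans-Bold.ttf':text='%%F\ \ \ \ \ \ ':x=120:y=h-lh-1:fontsize=16:fontcolor=white:shadowcolor=black:shadowx=1:shadowy=1:timecode='00\:00\:00\:00':timecode_rate=25" ^
-x264opts colorprim=bt470bg:fullrange=off:transfer=bt470bg:colormatrix=bt470bg ^
-af "compand=0.0|0.0:0.8|0.8:-90/-40|0/0:6:0:-30:0" ^
"%%F.mp4"

Because of the -n option, re-running the batch file over the same directory simply skips any clip that already has an .mp4 alongside it.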

The files this command line produces vary in bit-rate between about 500kbit/s and 2Mbit/s, suitable for low quality LAN or Wi-Fi use. A further shell batch command (this time run on my little ARMv5 Seagate GoFlex Home) further downconverts them to around 150kbit/s, suitable for ADSL streaming over my slow line.
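That second pass is little more than another FFmpeg re-encode at a smaller raster and lower rates. Here is a minimal sketch, as a shell loop for the GNU/Linux box; the raster size, CRF and audio bitrate are illustrative figures rather than the exact ones I use, and a reasonably recent FFmpeg build with its built-in aac encoder is assumed, because the little ARM box has no libfdk_aac:

for f in *.mp4; do
  ffmpeg -n -i "$f" -vcodec libx264 -crf 28 -vf "scale=384:216" \
         -acodec aac -b:a 32k "lowband-$f"
done

The -n option again means that any file already downconverted on a previous run is left alone.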

This may seem a complex command, but it does a lot of time-saving work in a single pass.