In the olden days, video rushes would be burned to DVD through a VTR or, latterly, a timecode plugin in Avid, so as to give everyone timecoded copies.
Today, the free and open-source FFmpeg program can complete this task in the background on almost any modern computer. In this office, it’s burning timecode into copies of rushes on three machines: a Mac PowerBook G4 made in 2004 (PowerPC processor), now running Debian Jessie GNU/Linux; a Seagate GoFlex Home caddy with an ARMv5-compatible processor, running GNU/Linux; and a Windows PC.
Here is a command line for a Windows machine that downconverts all files of a given extension in a directory, burning timecode onto them along with the filename. At the moment the timecode simply starts at zero for each clip; it would be no trouble to write an extra routine that reads any embedded timecode and uses that instead. The command also slaps a simple auto-level on the soundtracks (because this is for off-line logging) and adds a slow-acting video AGC to keep shots palatable even when they will need severe grading. The font I use looks clear on screen; it is a free font, downloadable from a number of sources. You could use arial.ttf instead, because it is already on every Windows machine.
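For what it’s worth, here is a hedged sketch of that extra routine: ffprobe, which ships alongside FFmpeg, can report a clip’s embedded timecode tag, and the value it prints would then replace the fixed zero start point in the drawtext filter further down. Whether the tag lives on the video stream or on the container depends on the wrapper, so both queries are shown.

ffprobe -v error -select_streams v:0 -show_entries stream_tags=timecode -of default=noprint_wrappers=1:nokey=1 <VIDEOFILE>
ffprobe -v error -show_entries format_tags=timecode -of default=noprint_wrappers=1:nokey=1 <VIDEOFILE>

If either command prints, say, 10:04:33:12, then timecode='00\:00\:00\:00' in the drawtext filter below becomes timecode='10\:04\:33\:12'.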
In case you’re not familiar with command lines, the backslashes are escape characters that rob the following character of any special meaning. For example, a colon : has a special meaning to FFmpeg, but preceding it with a backslash, \:, causes the colon to be treated as an ordinary printable character.
This line is for 25fps material, reducing the footage to a size of 512×288.
ffmpeg -i <VIDEOFILE> -n -acodec libfdk_aac -b:a 40k ^
 -profile:a aac_he_v2 -vcodec libx264 -crf 22 ^
 -vf "yadif, colormatrix=bt709:bt601, pp=al, scale=512:288, smartblur=1.0:-1.0, drawtext=fontfile='c\:\\windows\\fonts\\LiberationSans-Bold.ttf':text='<VIDEOFILE>\ \ \ \ \ \ ':x=120:y=h-lh-1:fontsize=16:fontcolor=white:shadowcolor=black:shadowx=1:shadowy=1:timecode='00\:00\:00\:00':timecode_rate=25" ^
 -x264opts colorprim=bt470bg:fullrange=off:transfer=bt470bg:colormatrix=bt470bg ^
 -af "compand=0.0|0.0:0.8|0.8:-90/-40|0/0:6:0:-30:0" ^
 <VIDEOFILE>.mp4
Here’s a step-by-step explanation of this command.
ffmpeg
- FFmpeg command name
-i <VIDEOFILE>
- Use your VIDEOFILE as input
-n
- Never overwrite existing files: if the output already exists, ffmpeg exits immediately instead of overwriting it. I use this because the command is run from a FOR…DO loop in a batch file (sketched below), and a re-run over the same directory should skip, rather than redo or clobber, files converted earlier.
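For completeness, here is a hedged sketch of that batch wrapper. The .mxf extension is an assumption (substitute whatever your rushes use), and the shortened ffmpeg call stands in for the full command line given above.

@echo off
rem Sketch only: the .mxf extension is an assumption, and the full command
rem line from this article belongs where this shortened ffmpeg call sits.
rem %%F is each matching file; "%%F.mp4" matches the output naming used above.
for %%F in (*.mxf) do ffmpeg -i "%%F" -n -vcodec libx264 -crf 22 ^
 -vf "yadif, scale=512:288" -acodec libfdk_aac -b:a 40k -profile:a aac_he_v2 "%%F.mp4"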
-acodec libfdk_aac -b:a 40k -profile:a aac_he_v2
- Choose Fraunhofer’s AAC codec for audio, instruct the coder to use the HE-AAC v2 flavour of the codec, and use 40kbit/s as the bitrate. AAC is the successor codec to MP3, used very widely in applications such as iTunes; HE means the “High Efficiency” version of the codec, which uses spectral band replication to code the upper frequencies of its input; and version 2 adds a more efficient way of representing the difference between the two stereo channels. The Fraunhofer encoder is widely regarded as the best AAC encoder on the market, and its source code has been released primarily for use in Android development. However, its licence is not compatible with FFmpeg’s, so it must be compiled into FFmpeg by hand, as I do.
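Incidentally, a quick way to check whether a particular FFmpeg build was compiled with the Fraunhofer library is to list its encoders and filter the output; on Windows, findstr stands in for grep:

ffmpeg -hide_banner -encoders | findstr fdk

If nothing comes back, that build has no libfdk_aac, and the stock aac encoder has to serve instead.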
-vcodec libx264 -crf 22
- Use x264 as the video encoder. This open-source project is widely regarded as the most accurate H.264 encoder. The coder is instructed to use constant quality, represented by the “constant rate factor” or crf parameter. A value of 22 is a fair trade-off between bandwidth and quality for our purposes, and is suitable for distribution over a LAN or a good ADSL line.
-vf "yadif, colormatrix=bt709:bt601, pp=al, scale=512:288, smartblur=1.0:-1.0, drawtext=fontfile='c\:\\windows\\fonts\\LiberationSans-Bold.ttf':text='<VIDEOFILE>\ \ \ \ \ \ ':x=120:y=h-lh-1:fontsize=16:fontcolor=white:shadowcolor=black:shadowx=1:shadowy=1:timecode='00\:00\:00\:00':timecode_rate=25"
This is a video filter chain. It does several jobs. In order, they are:
- De-interlace the video. My incoming source is interlaced, but the eventual film and web destinations demand progressive-scan video.
- Change the colour matrix from the HD standard to the SD (and below) standard. Many external sources will confirm how YUV representations of real-world colour pictures are calculated, and how the international standards differ in this representation between high-definition and standard-definition pictures.
- Add the auto-level filter from the postprocessing library (libpostproc). This is a simple, slow-acting automatic gain control for the video. It goes before the scaling process in case of overshoots caused by the change in size or any sharpening.
- Scaling then takes place: a raster size of 512×288 gives sufficient detail for logging, but does not eat up too much bandwidth after coding.
- The smartblur filter is a semi-intelligent edge-detection algorithm that, in this case, is asked to examine neighbouring pixels and sharpen them. The negative number literally means “the opposite of blur”.
- The drawtext filter writes on the video. This command (with appropriate escape characters):
- chooses a font by pointing to its filename;
- colours it white;
- positions the line of text appropriately (“lh” means “line height”);
- adds a black shadow for clarity;
- includes the filename;
- tells the timecode counter where to start (timecode display is implicit when a start frame is given);
- instructs the counter to count 25 frames per second.
- The filter chain ends here. Its output is now fed to the x264 coder.
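Before committing to a long overnight batch, the whole chain can be auditioned on a single clip with ffplay, the little player built alongside ffmpeg (assuming your build includes it); the filtergraph below is exactly the one above:

ffplay -vf "yadif, colormatrix=bt709:bt601, pp=al, scale=512:288, smartblur=1.0:-1.0, drawtext=fontfile='c\:\\windows\\fonts\\LiberationSans-Bold.ttf':text='<VIDEOFILE>\ \ \ \ \ \ ':x=120:y=h-lh-1:fontsize=16:fontcolor=white:shadowcolor=black:shadowx=1:shadowy=1:timecode='00\:00\:00\:00':timecode_rate=25" <VIDEOFILE>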
-x264opts colorprim=bt470bg:fullrange=off:transfer=bt470bg:colormatrix=bt470bg
These are private options for the x264 coder.
- The three options specifying “bt470bg” record in the coded stream exactly how the decoder and display should perform the conversion from YUV to RGB. In this case, I have chosen “ITU-R Recommendation BT.470, systems B and G”, the standard colour encoding for PAL and SECAM standard-definition video. I specify this exactly because some displays assume the wrong colour matrix if they are not explicitly given instructions. Some others get it wrong anyway, but we must try.
- The fullrange=off instruction tells the x264 coder that the incoming video is at studio levels (16 <= Y <= 235), and the coder includes this information in its instructions to the decoder and display. Again, it ought to be assumed that YUV-encoded video is already at studio levels, but sometimes decoders get this wrong.
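As an aside, newer FFmpeg builds can signal the same metadata, plus the range flag, with generic output options rather than x264-private ones. A hedged sketch of the equivalent flags (gamma28 is FFmpeg’s name for the BT.470 B/G transfer curve):

-color_primaries bt470bg -color_trc gamma28 -colorspace bt470bg -color_range tv

These would take the place of the -x264opts line; on the older builds described here, the x264-private options remain the safer route.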
-af "compand=0.0|0.0:0.8|0.8:-90/-40|0/0:6:0:-30:0"
compand
- is FFmpeg’s compressor/expander: a general-purpose audio level alteration process.
0.0|0.0
- These two zeroes describe the channel attack times for the level-detection algorithm (0.0 seconds, instant)
0.8|0.8
- These are the decay time constants for the stereo channels (0.8s)
-90/-40|0/0
- A simple transfer curve for the gain is described: sounds at -90dB are raised to -40dB, 0dB (full scale) stays at 0dB, and levels in between are altered proportionally (so, for example, a signal at -45dB comes out at -20dB)
6
- This defines the softness of the knee down at -40dB: the curve is 6dB wide
0
- This zero instructs the filter to apply 0dB gain make-up
-30
- -30dB is the initial volume that the filter should assume, to avoid very loud audio at the start of each file
0
- This final zero instructs the filter to act without delaying the side-chain, thus disabling its look-ahead function. This has been included for simplicity’s sake: I didn’t have much time to tweak this figure.
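These numbers are easier to judge by ear than on paper; assuming ffplay is present, the compand settings can be auditioned on their own, without encoding anything:

ffplay -af "compand=0.0|0.0:0.8|0.8:-90/-40|0/0:6:0:-30:0" <VIDEOFILE>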
<VIDEOFILE>.mp4
- The output filename: the same as the input filename, with .mp4 appended.
The files this command line produces vary in bit-rate between about 500kbit/s and 2Mbit/s, suitable for low-quality LAN or Wi-Fi use. A further shell loop (this time run on my little ARMv5 Seagate GoFlex Home) downconverts them again to around 150kbit/s, suitable for ADSL streaming over my slow line.
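That second pass is nothing exotic. Here is a hedged sketch of the sort of shell loop involved: the directories, the 384×216 raster and the 128kbit/s video target are assumptions, chosen so that video plus 24kbit/s HE-AAC audio lands near 150kbit/s.

#!/bin/sh
# Sketch only: paths, raster size and bitrates are assumptions.
# Assumes this ARM build also has libfdk_aac compiled in; if not,
# the stock aac encoder will do for logging copies.
for f in /srv/rushes/*.mp4; do
  ffmpeg -i "$f" -n -vf "scale=384:216" -vcodec libx264 -b:v 128k \
    -acodec libfdk_aac -b:a 24k -profile:a aac_he_v2 \
    "/srv/rushes/adsl/$(basename "$f" .mp4).mp4"
done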
This may seem a complex command, but it does a lot of time-saving work in a single pass.