Timecode overlay with FFmpeg

This post describes how to use FFmpeg, a free and open-source program, to burn the filename and timecode automatically into any number of video files, and then save them in a form suitable for network viewing.

In the olden days, video rushes would be burned to DVD through a VTR or, latterly, via a timecode plugin in Avid, so as to give everyone timecoded copies.

Today, the free and open-source FFmpeg program can complete this task in the background on almost any modern computer. In this office, it’s making burnt-in timecode visible on three machines: a Mac PowerBook G4 made in 2004 (PowerPC processor) now running Debian Jessie GNU/Linux, a Seagate GoFlex Home caddy with an ARMv5-compatible processor running GNU/Linux, and a Windows PC.

Here is a command line for a Windows machine that downconverts all files of a certain extension in a directory, and burns timecode onto them, along with the filename. At the moment, the timecode just starts at zero for each clip: it is no trouble to write an extra routine to read any embedded timecode and use that instead. The command also slaps a simple autolevel on the soundtracks (because this is for off-line logging) and also adds a slow-acting video AGC to make shots palatable if they need severe grading. The font I use looks clear on screen: it is a free font, downloadable from a number of sources. You could use arial.ttf instead, because it is already on all Windows machines.

In case you’re not familiar with command lines, the backslashes are escape characters that rob the following character of its special meaning. For example, a colon (:) has a special meaning to FFmpeg, but preceding it with a backslash, \:, causes the colon to be treated as an ordinary printable character.

This line is for 25fps material, reducing the footage to a size of 512×288.

ffmpeg -i <VIDEOFILE> -y -acodec libfdk_aac -b:a 40k ^
-profile:a aac_he_v2 -vcodec libx264 -crf 22 ^
-vf "yadif, colormatrix=bt709:bt601, pp=al, scale=512:288, smartblur=1.0:-1.0, drawtext=fontfile='c\:\\windows\\fonts\\LiberationSans-Bold.ttf':text='<VIDEOFILE>\ \ \ \ \ \ ':x=120:y=h-lh-1:fontsize=16:fontcolor=white:shadowcolor=black:shadowx=1:shadowy=1:timecode='00\:00\:00\:00':timecode_rate=25" ^
-x264opts colorprim=bt470bg:fullrange=off:transfer=bt470bg:colormatrix=bt470bg ^
-af "compand=0.0|0.0:0.8|0.8:-90/-40|0/0:6:0:-30:0" ^
"<VIDEOFILE>.mp4"

Here’s a step-by-step explanation of this command.

FFmpeg command name
Use your VIDEOFILE as input
Overwrite existing files without question. I use this because the command is run from a FOR…DO loop in a batch file, and may be making revisions of many earlier files.
-acodec libfdk_aac -b:a 40k -profile:a aac_he_v2
Choose Fraunhofer’s AAC codec for audio, instruct the coder to use the HE-AAC v2 flavour of the codec, and use 40kbit/s as the bitrate. AAC is the successor codec to MP3, used very widely in applications such as iTunes; HE means the “High Efficiency” version of the codec, which uses spectral band replication to code the upper frequencies of its input; and version 2 adds a more efficient way of representing the difference between the two stereo channels (parametric stereo). The Fraunhofer codec is the best AAC codec on the market, and its source code has been released primarily for use in Android development. However, its licence is not compatible with FFmpeg’s licence, and so it must be compiled into FFmpeg by hand, as I do.
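Because libfdk_aac must be compiled in by hand, it is worth checking whether your build has it before relying on this command. A small sketch (the fallback suggestion is mine, not part of the original command):

```shell
#!/bin/sh
# Report whether this ffmpeg build includes the Fraunhofer AAC
# encoder; suggests the native encoder as a fallback when it does not.
check_fdk() {
    if ! command -v ffmpeg >/dev/null 2>&1; then
        echo "ffmpeg not found on PATH"
    elif ffmpeg -hide_banner -encoders 2>/dev/null | grep -q libfdk_aac; then
        echo "libfdk_aac available"
    else
        echo "libfdk_aac NOT built in: use -acodec aac as a fallback"
    fi
}
check_fdk
```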
-vcodec libx264 -crf 22
Use x264 as the video codec. This open-source project is widely regarded as the most accurate H.264 encoder. The coder is instructed to use constant quality, represented by the “constant rate factor” or crf parameter. 22 is a fair trade-off between bandwidth and quality for our purposes, and is suitable for distribution over a LAN or a good ADSL line.
-vf "yadif, colormatrix=bt709:bt601, pp=al, scale=512:288, smartblur=1.0:-1.0, drawtext=fontfile='c\:\\windows\\fonts\\LiberationSans-Bold.ttf':text='<VIDEOFILE>\ \ \ \ \ \ ':x=120:y=h-lh-1:fontsize=16:fontcolor=white:shadowcolor=black:shadowx=1:shadowy=1:timecode='00\:00\:00\:00':timecode_rate=25"

This is a video filter chain. It does several jobs. In order, they are:

  1. De-interlace the video. My incoming source is interlaced, but the eventual film and web destinations demand progressive-scan video.
  2. Change the colour matrix from the HD standard to the SD (and below) standard. Many external sources will confirm how YUV representations of real-world colour pictures are calculated, and how the international standards differ in this representation between high-definition and standard-definition pictures.
  3. Add the auto-level filter from the post-production-processing library (libpostproc). This is a simple, slow-acting automatic gain control for the video. It goes before the scaling process in case of overshoots caused by the change in size or any sharpening.
  4. Scaling then takes place: a raster size of 512×288 gives sufficient detail for logging, but does not eat up too much bandwidth after coding.
  5. The smartblur filter is a semi-intelligent edge-detection algorithm that, in this case, is asked to work on neighbouring pixels and sharpen them. The negative number literally means “the opposite of blur”, i.e. sharpening.
  6. The drawtext filter writes on the video. This command (with appropriate escape characters):
    1. chooses a font by pointing to its filename;
    2. colours it white;
    3. positions the line of text appropriately (“lh” means “line height”);
    4. adds a black shadow for clarity;
    5. includes the filename;
    6. tells the timecode counter where to start (timecode display is implicit when a start frame is given);
    7. instructs the counter to count 25 frames per second.
  7. The filter chain ends here. Its output is now fed to the x264 coder.
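On the GNU/Linux machines, I find it convenient to think of this chain as a list of named stages. Here is a sketch that assembles the same -vf string from shell variables, so individual stages can be commented out while experimenting; the font path and clip name are placeholders for your own:

```shell
#!/bin/sh
# Assemble the -vf filter chain from named stages. The font path and
# "clip.mxf" are placeholders; substitute your own.
FONT='/usr/share/fonts/truetype/liberation/LiberationSans-Bold.ttf'
DEINTERLACE='yadif'
MATRIX='colormatrix=bt709:bt601'
AUTOLEVEL='pp=al'
SCALE='scale=512:288'
SHARPEN='smartblur=1.0:-1.0'
BURNIN="drawtext=fontfile='$FONT':text='clip.mxf':x=120:y=h-lh-1:fontsize=16:fontcolor=white:shadowcolor=black:shadowx=1:shadowy=1:timecode='00\\:00\\:00\\:00':timecode_rate=25"
VF="$DEINTERLACE, $MATRIX, $AUTOLEVEL, $SCALE, $SHARPEN, $BURNIN"
echo "$VF"    # pass this string to ffmpeg as: -vf "$VF"
```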

-x264opts colorprim=bt470bg:fullrange=off:transfer=bt470bg:colormatrix=bt470bg

These are private options for the x264 coder.

  • The three options specifying “bt470bg” record in the stream exactly how the decoder and display should perform the conversion from YUV to RGB. In this case, I have chosen “ITU Recommendation BT.470 systems B and G”, the standard colour encoding for PAL and SECAM standard-definition video. I specify this exactly because some displays assume the wrong colour matrix if they are not explicitly given instructions. Some others get it wrong anyway, but we must try.
  • The fullrange instruction tells the x264 coder that the incoming video is at studio levels (16 <= Y <= 235), and the coder includes this information in its instructions to the decoder and display. Again, it ought to be assumed that YUV-encoded video is already at studio levels, but sometimes decoders get this wrong.
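You can verify that this metadata actually made it into an encoded file by asking ffprobe for the colour fields. A guarded sketch (“out.mp4” is a placeholder for one of your encoded files):

```shell
#!/bin/sh
# Inspect the colour metadata stamped into the stream by the x264
# options. "out.mp4" is a placeholder; the call is guarded so the
# script is safe to run where ffprobe or the file is absent.
show_colour() {
    if command -v ffprobe >/dev/null 2>&1 && [ -f "$1" ]; then
        ffprobe -v quiet -show_entries \
            stream=color_primaries,color_transfer,color_space "$1"
    else
        echo "(ffprobe or $1 not available here)"
    fi
}
show_colour out.mp4
```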
-af "compand=0.0|0.0:0.8|0.8:-90/-40|0/0:6:0:-30:0"

An audio filter is described here. Taking its parameters in order:

  • compand is a general audio level alteration process (a combined compressor/expander).
  • 0.0|0.0 — the per-channel attack times for the level-detection algorithm (0.0 seconds, i.e. instant).
  • 0.8|0.8 — the decay time constants for the two stereo channels (0.8s).
  • -90/-40|0/0 — a simple curve for gain amplification: raise sounds at -90dB up to -40dB, then alter levels proportionally so that 0dB (full scale) remains at 0dB.
  • 6 — the softness of the knee down at -40dB: the curve is 6dB wide.
  • 0 — apply 0dB of gain make-up.
  • -30 — the initial volume that the filter should assume, to avoid very loud audio at the start of each file.
  • 0 — act without delaying the side-chain, thus disabling its look-ahead function. This has been included for simplicity’s sake: I didn’t have much time to tweak this figure.
The output filename: same as the input filename but with the extension .mp4
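On the GNU/Linux machines the Windows FOR…DO wrapper becomes a plain shell loop. A minimal sketch (the extension and the trimmed-down option set are assumptions; substitute the full command above, and libfdk_aac if your build has it):

```shell
#!/bin/sh
# Process every .mxf file in the current directory, deriving the .mp4
# output name from the input name. The ffmpeg call is guarded so the
# loop dry-runs safely on a machine without ffmpeg installed.
for f in *.mxf; do
    [ -e "$f" ] || continue          # glob matched nothing: skip
    out="${f%.*}.mp4"                # same name, .mp4 extension
    echo "$f -> $out"
    if command -v ffmpeg >/dev/null 2>&1; then
        ffmpeg -i "$f" -y -acodec aac -b:a 40k -vcodec libx264 -crf 22 "$out"
    fi
done
```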

The files this command line produces vary in bit-rate between 500kbit/s and 2Mbit/s, suitable for low-quality LAN or Wi-Fi use. A further shell batch command (this time run on my little ARMv5 Seagate GoFlex Home) downconverts them further to around 150kbit/s, suitable for ADSL streaming over my slow line.
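For the curious, one possible shape for that second pass is sketched below. This is a guess at the idea, not the script actually running on the GoFlex; the bitrates are chosen only to land near the 150kbit/s total mentioned above:

```shell
#!/bin/sh
# Build a second-pass downconvert command for an existing proxy file,
# targeting roughly 150 kbit/s in total. Hypothetical values, not the
# author's actual GoFlex script.
build_cmd() {
    printf 'ffmpeg -i %s -vcodec libx264 -b:v 120k -acodec aac -b:a 24k -y %s\n' \
        "$1" "${1%.*}_adsl.mp4"
}
build_cmd clip.mp4
```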

This may seem a complex command, but it does a lot of time-saving work in a single pass.

6 thoughts on “Timecode overlay with FFmpeg”

  1. Hi John,

    running into a problem here and I’m a bit lost.
    I get an error message.

    Reinstalled and updated ffmpeg thru brew, but no avail.
    Any clues?

    PS: Great blog, it’s been a very helpful resource to me.

    1. Error message doesn’t show up in the initial post, so here it is again:

      [AVFilterGraph @ 0x7f85f1d00440] No such filter: 'drawtext'
      Error opening filters!

  2. Hello Jay.

    Thank you for your question. I can see that the “drawtext” filter has not been compiled into the binary of FFmpeg that you are using. I am not familiar with the ‘brew’ system or installer. What is it, please?

    To compile “drawtext” into FFmpeg, please follow the guidelines in this FFmpeg document, or try my build system:


    My binaries, available here for Windows 10 (64-bit only), include the necessary libraries to use the “drawtext” filter.


  3. Hi John, Nice blog – thank you for sharing. I had one question pursuant to your mention of “At the moment, the timecode just starts at zero for each clip: it is no trouble to write an extra routine to read any embedded timecode and use that instead.” This is exactly what I am trying to do. I have written a Windows batch script that takes an HD MXF and burns the timecode into the video and outputs an H264 video file and folds down 4 mono audio tracks to one stereo and outputs a stereo wav. Currently I have to manually enter the timecode start value (it varies on each video). Any tips on how to have the drawtext filter automatically take the existing start timecode of the file without manual entry?
    Regards, Sean

    1. Hello Sean,

      To get timecode out of a container, and then substitute it in the starting timecode section for the drawtext filter, run FFprobe on your incoming file before you commence the FFmpeg encoding with overlay.

      What you’re looking for is WHERE the timecode is contained. I will show two examples here. The FFprobe documentation refers to other examples. But I think you’re using a Sony camera producing XAVC files, and their MXF containers can be probed in this way.

      In a Sony XAVC MXF file, the starting timecode is located in the container metadata.

      This example command retrieves the timecode in a format of your choice, which you can feed into any number of standard parsers; you can then use variable substitution (with escapes for the colons, as necessary) to make the retrieved value your starting timecode.

      Here is what my Sony MXF file gives, when FFprobe is asked to retrieve the container timecode. This is a real Sony XAVC camera file. The format specifier directs FFprobe to use ‘ini’ format, and you will see that the ‘timecode’ tag is already escaped for you to use in the ‘drawtext’ filter.

      ffprobe -v quiet -of ini -show_entries format_tags=timecode "i:\WORLD VOICE\TUESDAY\SONY\CARD2\PRIVATE\XDROOT\Clip\Clip0016.MXF"
      [format.tags]
      timecode=02\:42\:54\:06


      Here is an example using JSON output format, which could conceivably be imported directly into a Python or Javascript object (and many others):

      ffprobe -v quiet -of json -show_entries format_tags=timecode "i:\WORLD VOICE\TUESDAY\SONY\CARD2\PRIVATE\XDROOT\Clip\Clip0016.MXF"

      {
          "format": {
              "tags": {
                  "timecode": "02:42:54:06"
              }
          }
      }

      …or the default format:

      ffprobe -v quiet -show_entries format_tags=timecode "i:\WORLD VOICE\TUESDAY\SONY\CARD2\PRIVATE\XDROOT\Clip\Clip0016.MXF"

      [FORMAT]
      TAG:timecode=02:42:54:06
      [/FORMAT]

      …or the flat format:

      ffprobe -v quiet -of flat -show_entries format_tags=timecode "i:\WORLD VOICE\TUESDAY\SONY\CARD2\PRIVATE\XDROOT\Clip\Clip0016.MXF"

      format.tags.timecode="02:42:54:06"

      Now, other types of container store the timecode in different ways. For example, an Avid-ingested MXF file stores timecode as a metatag in the stream, not the container. So you need to instruct FFprobe to look for it there, instead.

      ffprobe -v quiet -of ini -show_entries stream_tags=timecode "00043V01.56FBBC2D_556FBBC2D.mxf"

      # ffprobe output


      The FFprobe documentation carries a few other hints. But the real meat comes when you look deeply into the stream and format tags if you don’t know where the timecode actually is. Use the -show_streams option. If the timecode is nonsense, look for the “start_pts” or “start_time” tags and calculate the starting timecode from your knowledge of the recording system’s timebase.
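      Gluing those pieces together, here is a sketch of the whole substitution (“Clip0016.MXF” is a placeholder, and the ffprobe call is guarded with a fallback so the sketch runs anywhere):

```shell
#!/bin/sh
# Read the container timecode with ffprobe and escape its colons for
# the drawtext filter. "Clip0016.MXF" is a placeholder file name.
escape_tc() {
    # e.g. 02:42:54:06 -> 02\:42\:54\:06
    printf '%s' "$1" | sed 's/:/\\:/g'
}
if command -v ffprobe >/dev/null 2>&1 && [ -f "Clip0016.MXF" ]; then
    TC=$(ffprobe -v quiet -of default=noprint_wrappers=1:nokey=1 \
             -show_entries format_tags=timecode "Clip0016.MXF")
else
    TC='00:00:00:00'    # fallback when ffprobe or the file is absent
fi
# The escaped value can now be substituted into drawtext as
# timecode='...':timecode_rate=25
echo "timecode for drawtext: $(escape_tc "$TC")"
```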

      1. Hi John,

        Thank you very much. That was enough to get me going in the right direction. I now have a working script that is doing the job nicely. Much appreciated.

        Regards, Sean
