“Today, the magic of film and television is half visual storytelling and half technical wizardry” (SMPTE, 2017). Major broadcasters such as Radio-Canada depend largely on proprietary, big-ticket professional digital broadcasting equipment whose standards are written and shaped by the Society of Motion Picture and Television Engineers® (SMPTE®), a body that includes both the broadcasters and the manufacturers of the equipment. In short, SMPTE’s main job is to help advance “moving-imagery engineering” across the broadcasting industry. These heavy-duty broadcasting machines are capable of transmitting raw data (i.e., uncompressed, unencrypted digital video signals) within television facilities. Recently, Savoir-faire Linux, a leading open source software service provider in Canada, embarked on a journey, financially supported by Radio-Canada, to test a technological possibility: can FFmpeg, running on a general purpose server, transmit raw data at 3.5 Gb/s without relying on specialized hardware broadcasting equipment?
Savoir-faire Linux Takes Up the Technological Challenge
The journey started this winter, which happened to be extremely cold and snowy in Montreal. A team of three from Savoir-faire Linux’s product engineering department decided to spend some time digging into FFmpeg’s internals, in the hope of getting a working TR-03/SDI pipeline that could process up to several HD streams on a contemporary server, while taking advantage of FFmpeg’s ability to easily derive lower-definition versions of the same video stream.
A few challenges arose from this endeavor. The first was the TR-03 format itself, which wasn’t supported in the upstream version of FFmpeg. The second was the volume of data to be processed: roughly 3 Gb/s of traffic, which according to the FFmpeg developers might not be possible to handle. Finally, transcoding had to be added to the pipeline, which meant an even greater CPU load.
Implementing TR-03 was not the biggest challenge: the video format was fairly straightforward, and the team quickly got a working implementation. As the sender for this volume of data, they used GStreamer, which was already very capable in terms of streaming.
Once the data began to flow in, the performance bottlenecks became evident. To locate them precisely, the team decided to write a few benchmark scenarios using a combination of unit tests and LTTng, an open source tracing framework for Linux. These benchmarks revealed where packets were being dropped (i.e., where data was being lost) in both the kernel socket buffers and the NIC buffers. With the data processing closely monitored, it was relatively easy to tweak the buffer sizes while keeping delays within an acceptable range. However, another issue arose: FFmpeg’s combined data gathering/decoding thread was hogging a single CPU core, causing occasional packet loss whenever it was not fast enough to dequeue data from the socket buffer. To work around this, the team decoupled the data gathering from the decoding. Following these steps carefully made it possible to obtain a packet-drop-free pipeline, an achievement celebrated with a fine cake and a few drinks in front of a movie they could finally watch. As the head of the engineering team put it, “I could never have imagined that Big Buck Bunny was so much fun”.
The Latest Update and the Road Ahead
On April 5, 2017, the contributions developed and refined by the team were integrated into FFmpeg, meaning that their proof of concept met the community’s standards and became part of the platform.
For Savoir-faire Linux’s product engineering team, this was a very encouraging and promising adventure. Against the odds, they empirically demonstrated that SDI processing pipelines can run on top of general purpose server equipment using open source software. The experiment was a success, and it revealed great potential for broadcasters. To continue in this direction and help shape the SMPTE 2110 standards, which are not yet frozen and still at an early stage, the team faces one more major challenge: FFmpeg does not support the synchronization described in SMPTE 2110, so they are now evaluating the possibility of contributing that support.
- Damien Riegel,
- Eloi Bail, and
- Stepan Salenikovitch.