Preprocessing in Hardware
Digital Rapids encoding systems are renowned for the quality of their compressed video output. The same original content, encoded with the same codec at the same data rate, can still vary in output quality depending on the encoding system used. The key to delivering consistently high-quality compressed video is properly preparing the source video before handing it off to the codecs for compression.
Our unique hardware-based video preprocessing technology is built on many years of experience in the broadcast and post-production markets. With our purpose-built hardware, we deliver video quality far superior to that of typical encoding platforms. The advanced video preprocessing features in our hardware provide multiple benefits. Beyond improving the quality of the video itself, hardware-based preprocessing reduces the work the software compression engine must do, so you can process more video and audio streams simultaneously. It also delivers the cleanest possible video to the codecs (up to 30% more efficient to compress), ensuring optimal quality in the output media stream and the most efficient use of bandwidth in the compressed result.
Motion Adaptive De-Interlacing
De-interlacing is a critical step in preprocessing interlaced source video for viewing on progressive displays. Properly converting the video from its native interlaced format into high-quality progressive-scan data is extremely important to the overall quality of the resulting image. Not only are de-interlacing artifacts visible on their own, but they also increase the work the codec must do to compress the image, resulting in lower quality at a given data rate. Digital Rapids' encoding systems, featuring the Flux capture and preprocessing hardware, incorporate advanced motion adaptive de-interlacing capabilities that set them apart from other systems.
Common forms of de-interlacing include linear temporal ('Weave', meshing two fields together to create a single frame) and linear spatial ('Bob', discarding one field and interpolating the other back to full resolution). The Weave method works well in scenes with little to no motion, but creates pronounced combing artifacts wherever there is movement. The Bob method avoids motion artifacts, but at the cost of considerable vertical detail. Another technique, Vertical Temporal (VT) filtering, discards one field (like Bob) but, rather than simply interpolating the remaining field back to full resolution, uses high-frequency information from the discarded field to recover missing edge detail. VT can adapt its processing (between Bob- and Weave-style methods) based on the content of the entire frame, i.e., whether it contains any motion. The disadvantage of VT may be visible as artifacts in areas of high motion, with an effect similar to trails (of the high-frequency data) or motion blur (if VT is applied to both fields).
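The Weave and Bob methods described above can be sketched in a few lines of NumPy. This is a minimal, illustrative implementation (the function names and the linear-interpolation choice for Bob are our own, not part of any Digital Rapids API), working on single-channel fields of shape (height, width):

```python
import numpy as np

def weave(top_field: np.ndarray, bottom_field: np.ndarray) -> np.ndarray:
    """Weave: mesh two fields into one frame, top field on even lines,
    bottom field on odd lines. Perfect for static scenes; combs on motion
    because the two fields were captured at different moments in time."""
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=np.float64)
    frame[0::2] = top_field
    frame[1::2] = bottom_field
    return frame

def bob(field: np.ndarray) -> np.ndarray:
    """Bob: discard the other field and interpolate the kept field back to
    full height (here, linear interpolation between adjacent field lines).
    No motion artifacts, but half the vertical detail is lost."""
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=np.float64)
    frame[0::2] = field
    # Each missing line is the average of the field lines above and below;
    # the final missing line simply repeats the line above it.
    frame[1:-1:2] = (field[:-1] + field[1:]) / 2.0
    frame[-1] = field[-1]
    return frame
```

For static content, weaving the two fields of a frame reconstructs the original exactly, which is why Weave is ideal when no motion is detected.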
Motion adaptive de-interlacing is one of the most advanced forms of de-interlacing technology available. Motion adaptive de-interlacing combines the best aspects of both Bob and Weave by isolating the de-interlacing compensation to the pixel level. Spatial and temporal comparisons are performed to decide whether or not an individual pixel has motion. Whereas the other methods affect the entire frame of video, motion adaptive de-interlacing (as implemented on the Flux hardware) processes each pixel independently, resulting in the highest quality image possible. Areas of no motion are statically meshed (Weave) and areas where motion is detected are treated with a proprietary filtering technique resulting in very high quality, progressive-scan images.
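The per-pixel decision logic described above can be sketched as follows. This is a simplified illustration of the general motion adaptive technique, not Digital Rapids' proprietary filter: it detects motion by a temporal comparison of same-parity fields against a threshold (both the threshold value and the simple Bob-style fallback are assumptions for the sketch), then chooses Weave or interpolation independently for each pixel:

```python
import numpy as np

def motion_adaptive_deinterlace(prev_top: np.ndarray,
                                top: np.ndarray,
                                bottom: np.ndarray,
                                threshold: float = 8.0) -> np.ndarray:
    """Per-pixel motion adaptive de-interlace (illustrative sketch).
    Static pixels keep the woven result (full vertical detail); pixels
    flagged as moving fall back to interpolation from a single field."""
    h, w = top.shape
    # Weave result: mesh both fields into a full frame.
    woven = np.empty((2 * h, w), dtype=np.float64)
    woven[0::2] = top
    woven[1::2] = bottom
    # Bob-style fallback: replace the bottom-field lines with the average
    # of the top-field lines above and below them.
    bobbed = woven.copy()
    bobbed[1:-1:2] = (top[:-1] + top[1:]) / 2.0
    bobbed[-1] = top[-1]
    # Temporal comparison of same-parity fields one frame apart: a large
    # per-pixel difference indicates motion at that pixel.
    motion = np.abs(top.astype(np.float64) - prev_top.astype(np.float64)) > threshold
    # Expand the field-resolution mask to frame resolution (each field
    # line governs itself and the line below it).
    mask = np.repeat(motion, 2, axis=0)
    # Choose per pixel: weave where static, interpolate where moving.
    return np.where(mask, bobbed, woven)
```

The key point the sketch demonstrates is the granularity: the mask is evaluated per pixel, so a static background keeps full woven detail even while a moving subject in the same frame is de-interlaced without combing.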
Check out the sample video in our Example Clips section to see the advantages of motion adaptive de-interlacing.
It's worth noting that the term "motion adaptive" is sometimes used loosely to describe algorithms (like VT) that adapt their processing based on whether an entire frame contains motion. The Flux hardware, by contrast, adapts dynamically right down to the pixel level: full motion adaptive de-interlacing with individual pixel analysis.