EETE FEBRUARY 2013

AUDIO & VIDEO ELECTRONICS

Designing low-power video image stabilisation IP for FPGAs

By Dr S. Parker and W. Cranwell

Image stabilisation is an important capability for many electro-optic sensors where an operator is required to view the output imagery. The technique can therefore enhance many practical viewing systems across a very broad range of applications, including those found in the defence and security sectors. Stabilisation reduces both image blur and unwanted frame-to-frame image shifts and rotations, thereby aiding image interpretation and reducing the operator's workload. For systems that require the operator to locate or classify features within the video stream (typically recognition and identification tasks), a stabilised image stream helps improve the accuracy of these tasks.

There are a number of techniques for stabilising an image, based either on mechanical correction or on image processing. Mechanical stabilisation techniques include those that gyroscopically stabilise the whole camera system, or that use elements within the camera to effectively move the lens or detector array. Mechanical techniques are well established, although they can have a limited rate of response. Furthermore, they tend to be more expensive, consume more power, and are physically larger and heavier.

Fig. 1: Unstabilised image set. The 5 frames have been false-coloured and superimposed to illustrate the movement effect.

Mechanical stabilisation techniques used within the camera housing are generally less expensive and physically more compact. However, they can have performance limitations, such as an inability to correct for roll, and may operate over a restricted range of unwanted camera movements. In addition, such integrated camera techniques are less well established for infrared cameras and for cameras that use interchangeable lenses.
Finally, it should be noted that mechanical stabilisation corrects for movement of the camera itself, but not for other effects such as atmospheric scintillation.

Digital video stabilisation techniques provide image correction using information from within the video stream, which captures camera movement, atmospheric effects, and movement within the scene itself. The approach offers a potentially significant performance gain with minimal impact on power, weight, and size. However, to realise these benefits, the stabilisation algorithm complexity can be high, which translates into a high computational load. Although electronic stabilisation can be achieved using a low-cost CPU architecture, the limited processing bandwidth restricts the maximum input image size and frame rate; consequently, the capability of the stabilisation algorithms has to be compromised to facilitate real-time operation. GPU architectures reduce the limitations of CPU-only devices and provide the higher processing bandwidth needed for more complex processing. However, GPU implementations consume more power and often still need an additional host system for designs based on commercial off-the-shelf products.

The approach taken by RFEL has been to specify a high-performance stabilisation system that readily supports high input resolutions and frame rates while maintaining low latency and power consumption. The solution was also required to be compatible with cameras operating over different spectral bands, with support for multiple camera interfaces. Physically, a flexible and compact hardware implementation was required, supporting both stand-alone and networked applications. Furthermore, the stabilisation solution should allow rapid integration into third-party hardware, including retro-fitting into in-service equipment.
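As a concrete illustration of the digital approach, the global frame-to-frame translation can be estimated directly from the pixel data and then compensated. The sketch below uses phase correlation in Python/NumPy; this is one common motion-estimation technique chosen here for brevity, not a description of RFEL's actual algorithm, which additionally handles rotation and runs in fixed-point FPGA logic:

```python
import numpy as np

def estimate_shift(ref, cur):
    """Estimate the (dy, dx) correction that aligns frame `cur` with
    frame `ref`, using phase correlation of the two frames."""
    # Normalised cross-power spectrum: the phase term encodes the shift.
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(cur))
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices so the shift is reported in the range [-N/2, N/2).
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, corr.shape))

def stabilise(ref, cur):
    """Shift the current frame so that it aligns with the reference."""
    dy, dx = estimate_shift(ref, cur)
    return np.roll(cur, (dy, dx), axis=(0, 1))
```

A real stabiliser would apply this per frame against a running reference and crop or feather the wrapped border pixels rather than rolling them around; the circular `np.roll` is used here only to keep the example short.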
To meet these challenging requirements, RFEL elected to base the implementation on the latest FPGA architectures with embedded ARM processors. Compared with a GPU-based approach, the primary drawback was the engineering development time, which is significantly higher than for a CPU/GPU software implementation. Fortunately, RFEL has been developing advanced signal and video processing modules for many years, which allowed substantial re-use of pre-existing functions and development tools.

Initially, functional requirements were captured by liaising with major customers in the military and security markets. The system was then designed and developed using RFEL's proven methodology of floating- and fixed-point modelling in Matlab, which allows efficient performance testing and rapid debugging, and substantially de-risks all aspects of system implementation.

Dr Steve Parker is Principal Digital Systems Engineer and Technical Project Lead at RF Engines Ltd – www.RFEL.com – he can be reached at Steve.parker@rfel.com

Wayne Cranwell is Technical Sales Engineer and Project Manager at RF Engines Ltd – he can be reached at Wayne.cranwell@rfel.com
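The floating- and fixed-point modelling flow mentioned above can be mimicked in a few lines: quantise the data path to a chosen word length and compare the result against the floating-point reference to bound the quantisation error before committing to hardware. The function name, fractional word length, and test signal below are illustrative assumptions, not details of RFEL's Matlab models:

```python
import numpy as np

def to_fixed(x, frac_bits=12):
    """Quantise a floating-point signal to a signed fixed-point grid with
    the given number of fractional bits (round-to-nearest). Illustrative
    only: word lengths in a real FPGA design are chosen node by node
    from a bit-true model."""
    scale = 1 << frac_bits
    return np.round(x * scale) / scale

# Compare floating- and fixed-point versions of one processing step
# (here a simple 3-tap blur) to bound the quantisation error.
rng = np.random.default_rng(1)
frame = rng.random(256)                     # one scan line of test data
kernel = np.array([0.25, 0.5, 0.25])

float_out = np.convolve(frame, kernel, mode="same")
fixed_out = np.convolve(to_fixed(frame), to_fixed(kernel), mode="same")
max_err = np.max(np.abs(float_out - fixed_out))
```

Running this over representative imagery for each stage of the pipeline gives an error budget per node, which is the essence of de-risking a fixed-point FPGA implementation against its floating-point reference.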
