AUTOMOTIVE SAFETY

Image sensing and recognition silicon optimise automotive safety

By Klaus Neuenhueskes

Safety systems that protect both vehicle passengers and pedestrians by increasing the chances of avoiding accidents are at the top of the design agenda for European automotive engineers. In Europe, the importance of implementing such systems is set to become even more critical thanks to initiatives such as the Euro NCAP Advanced reward system. This rewards and recognises car manufacturers who make new safety technologies available that demonstrate a scientifically proven safety benefit for consumers and society, and it will further accelerate the standard fitment of important safety equipment across a wide variety of model ranges.

Collectively known as ADAS, these Advanced Driver Assistance Systems encompass a wide range of functions such as forward and rear collision warning, pedestrian detection and lane departure warning. All of these capabilities demand high-performance, responsive systems that can capture data about objects in the near vicinity of a vehicle and then rapidly process that information in order to implement the appropriate course of action – be it an audible and/or visual warning to alert a driver to a potential hazard, an image on a console display on which key information is overlaid, or the active application of the brakes or other vehicle safety features.

Vision-based ADAS

Camera-based systems typically offer the most flexibility for ADAS implementations compared to other options, including radar approaches. Not only can camera systems capture the relevant information around the vehicle, they can also provide drivers with ‘real-world’ images on console displays that can be integrated with useful information and graphics. In addition, when those camera-based systems can capture colour images, traffic signal and sign recognition can be added to further enhance safety.

Such flexibility, however, presents a number of challenges, including how to ensure effective image recognition in a wide variety of conditions. Unlike their counterparts in mobile phones, for example, cameras in automotive and surveillance applications must be able to capture images in low-light conditions and in high-contrast ‘light-to-dark’ and ‘dark-to-light’ situations, such as when reversing into a garage on a bright, sunny day. Capturing the image is only one part of the story: cameras generate large volumes of data, and the sheer amount of information that must be processed in real time can be significant. What’s more, in contrast to products destined for the consumer market, automotive camera-based systems must continue to provide high levels of performance and responsiveness, without degradation, over many years of operation, often in harsh environments. Finally, engineers are increasingly expected to integrate their ADAS implementations with other advanced automotive ‘infotainment’ systems.

Fig. 1: Automotive vision system.

It is with these challenges in mind that semiconductor manufacturers have been developing new generations of image sensors and advanced image recognition processors dedicated to automotive safety applications. Increasingly, these products integrate dedicated functions for ADAS designs that simplify and speed design and implementation and improve overall system performance and responsiveness, while giving developers the flexibility to add their own vehicle-specific functions and features.
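To make this capture-process-respond chain concrete, the sketch below shows, in Python for readability, the kind of escalation logic a forward collision warning function might apply once the vision system has produced an object track. The time-to-collision thresholds, the TrackedObject fields and the function names are illustrative assumptions, not details taken from the article or from any specific ADAS product.

```python
# Illustrative sketch of forward-collision-warning escalation logic.
# Thresholds, class and function names are illustrative assumptions,
# not taken from the article or from any specific product.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float         # range to the object ahead, metres
    closing_speed_mps: float  # relative closing speed, m/s (positive = approaching)

def time_to_collision(obj: TrackedObject) -> float:
    """Simple TTC estimate; returns infinity if the object is not closing."""
    if obj.closing_speed_mps <= 0.0:
        return float("inf")
    return obj.distance_m / obj.closing_speed_mps

def choose_response(obj: TrackedObject) -> str:
    """Escalate from visual warning to audible warning to braking as TTC shrinks."""
    ttc = time_to_collision(obj)
    if ttc > 3.0:
        return "none"
    if ttc > 2.0:
        return "visual_warning"   # overlay an icon on the console display
    if ttc > 1.0:
        return "audible_warning"  # chime in addition to the visual warning
    return "brake_assist"         # request active braking from the vehicle

if __name__ == "__main__":
    lead_vehicle = TrackedObject(distance_m=18.0, closing_speed_mps=12.0)
    print(choose_response(lead_vehicle))  # TTC = 1.5 s -> "audible_warning"
```

In a production system this decision step would run in real time against tracks produced by the image recognition processor, with thresholds tuned to vehicle speed, braking capability and regulatory requirements.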
CMOS image sensors

A combination of cost, performance and the ability to reduce component count has made CMOS image sensing the technology of choice for the majority of automotive camera-based safety systems. The latest automotive CMOS sensors follow the trend towards increasing resolution, so that applications that may currently use VGA and 1.3-megapixel sensors will soon see higher resolutions up to full 1080p HD. In addition, sensors will also feature a growing number of built-in capabilities that significantly improve performance in ‘real-life’ conditions.

Take, for example, the TCM5114PL CMOS image sensor recently launched by Toshiba. This is designed as an SoC (System on Chip) sensor that includes video encoding capabilities. The sensor employs a large 5.6 x 5.6 μm pixel pitch that allows it to capture images in very low light conditions - something that is difficult to achieve with conventional sensors.

In addition, the sensor incorporates an embedded High Dynamic Range (HDR) algorithm, which enables the TCM5114PL to render dark-to-light gradation naturally, even under high-contrast lighting. Applied to RAW data, the single-frame HDR algorithm uses a double-line electronic shutter exposure method, with a short and a long exposure time for each line, and a synthesis process then produces images with a high dynamic range - equivalent to more than 100dB. This is useful, for instance, when a car reverses into a garage and the inside of the garage is hard to see because of the contrast between the dark interior and bright daylight: the HDR function allows the sensor to capture images both inside and outside the garage clearly. A single-frame HDR implementation with frame rates up to 60fps also improves the capture of fast-moving objects compared to conventional multi-frame HDR methods, and does
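The synthesis step inside the TCM5114PL is not described in detail, but the general principle of fusing a long and a short line exposure can be sketched as follows: keep the long exposure wherever it has not saturated, substitute the short exposure scaled by the exposure ratio where it has, and express dynamic range in dB as 20·log10 of the ratio between the largest and smallest resolvable signals. The 16:1 exposure ratio, 10-bit pixel depth, saturation threshold and noise-floor figure in this Python sketch are illustrative assumptions, not Toshiba parameters.

```python
# Generic sketch of single-frame dual-exposure HDR fusion on RAW line data.
# This is NOT the TCM5114PL's proprietary algorithm; the exposure ratio,
# saturation threshold and 10-bit pixel depth are illustrative assumptions.

import numpy as np

def fuse_hdr_lines(long_exp: np.ndarray,
                   short_exp: np.ndarray,
                   exposure_ratio: float = 16.0,
                   sat_level: int = 1000) -> np.ndarray:
    """Fuse long- and short-exposure RAW lines (10-bit, max 1023) into one
    linear HDR line: keep the long exposure where it is not saturated,
    otherwise substitute the short exposure scaled by the exposure ratio."""
    long_f = long_exp.astype(np.float32)
    short_f = short_exp.astype(np.float32)
    saturated = long_f >= sat_level
    return np.where(saturated, short_f * exposure_ratio, long_f)

def dynamic_range_db(max_signal: float, noise_floor: float) -> float:
    """Dynamic range in dB: 20*log10(largest / smallest resolvable signal)."""
    return 20.0 * np.log10(max_signal / noise_floor)

if __name__ == "__main__":
    long_line = np.array([120, 900, 1023, 1023], dtype=np.uint16)
    short_line = np.array([8, 56, 300, 800], dtype=np.uint16)
    print(fuse_hdr_lines(long_line, short_line))
    # For example, an assumed fused full scale of ~16368 (1023 x 16) over an
    # assumed noise floor of ~0.16 DN gives 20*log10(16368/0.16) ≈ 100 dB,
    # the order of magnitude quoted for the sensor.
    print(round(dynamic_range_db(16368.0, 0.16), 1))
```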

Klaus Neuenhueskes is Senior Marketing Engineer for Automotive System LSI IC Product Marketing at Toshiba Electronics Europe – www.toshiba.eu