
EETE MAY 2013

HAPTICS & USER INTERFACES

Fig. 3: People counting sensor positions above doors or gates.

Once a 3D skeleton model of all joints has been obtained, the user can accurately point at objects, providing a 3D vector that the system uses to trigger actions. Thanks to the sensor's high resolution and frame rate, efficient finger tracking can also be performed, enabling sophisticated gesture recognition as shown in figure 4.

The 3D data provided by the sensor is used to derive additional characteristics of tracked or counted objects. For example, an autonomous vehicle counter consists of an object presence detector, a vehicle classifier, a feature extractor, a counting application and the interface to an infrastructure management system. The sensor is typically mounted in a top-down view position several meters above a road; that is the standard mounting position for road traffic surveillance and traffic management applications. For toll collection systems or parking management systems, the sensor is mounted in a side-front-view position looking towards incoming vehicles, providing a good view of the license plate. The design and implementation of algorithms for object presence detection, vehicle classification and feature extraction all benefit from the depth image. The software is typically organized as a chain of image processing steps.

Object presence detection is the first system task in many such applications. Such systems are nowadays often based on conductor loops or ultrasonic sensors. Both technologies are well established and relatively cheap, but require considerable installation effort and cabling between the sensors and the processing logic. Installation has a significant impact on the cost of vehicle counting applications. The ToF sensor can replace those technologies, reducing the cabling effort and adding quality and new functionality to the system.
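The presence detection step described above can be sketched as a simple comparison of the live depth image against a background depth image. The function below is an illustrative sketch, not vendor code; the threshold values and array sizes are assumptions chosen for the example.

```python
import numpy as np

def detect_presence(background, frame, depth_delta=0.3, min_pixels=500):
    """Flag object presence when enough pixels deviate from the background depth.

    background, frame: 2-D arrays of depth values in metres.
    depth_delta: minimum depth change (m) for a pixel to count as foreground.
    min_pixels: number of changed pixels needed to report presence.
    All thresholds here are illustrative, not calibrated values.
    """
    foreground = np.abs(frame - background) > depth_delta
    return foreground, int(foreground.sum()) >= min_pixels

# Toy scene: a flat road surface 5 m below the sensor, then an object
# 1.5 m tall appears; the array matches the 160x120 sensor resolution.
road = np.full((120, 160), 5.0)
scene = road.copy()
scene[40:80, 50:110] = 3.5   # object surface is closer to the sensor
mask, present = detect_presence(road, scene)
```

The foreground mask produced here would then be segmented into regions and handed to the classification stage, as the article describes.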
The software identifies significant changes in the depth images as an indication of object flow and segments the indicated regions of consecutive depth images for further object classification. The goal of object classification is to identify different types of vehicles; this filter stage may drop any detected objects that do not classify as vehicles. The algorithm identifies the shape of the bonnet and the shape of the windshield, and also measures the width, height and length of each vehicle. These parameters are used to classify vehicles as trucks, vans, SUVs, sedans, motorcycles, etc.

Additional features of a vehicle are detected from the depth image, and notifications can be passed to the infrastructure when the vehicle enters a specific zone in the depth image. That feature can be used to trigger further activities in the infrastructure. Toll collection systems or parking management systems may need to read the license plate. The ToF sensor aids that task by identifying the location of the license plate: it identifies the plate as a prominent, cuboid structure in the depth image and passes the location information on to a license plate recognition system.

An integrated solution

The Bluetechnix Sentis M100, shown in figure 5, is an example of a commercially available 3D sensor using a PMD PhotonICs 19k-S3 Time-of-Flight imager with 160x120 pixels and a range of 3 m. The on-board processor, a dual-core Blackfin BF561, enables applications operating at frame rates of up to 40 fps. This smart sensor is tailored towards integration into existing housings and can be connected via Ethernet or an RS232/RS485 interface. Additional GPIOs can be used to trigger other devices and actions.

Fig. 5: The Sentis M100 integrated ToF 3D sensor solution by Bluetechnix.

Fig. 4: Using a depth image and a greyscale image for hand tracking.
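The dimension-based classification stage described above can be sketched as a set of threshold rules on the measured width, height and length. This is a minimal illustration of the idea, not the article's actual algorithm; the class boundaries below are assumed values, not calibrated ones.

```python
def classify_vehicle(width_m, height_m, length_m):
    """Classify a vehicle from its measured dimensions in metres.

    A threshold sketch of the dimension-based classification described
    in the article; the boundaries are illustrative guesses.
    """
    if width_m < 1.2 and length_m < 2.5:
        return "motorcycle"
    if height_m > 2.8 or length_m > 7.0:
        return "truck"
    if height_m > 1.9:
        return "van"
    if height_m > 1.6:
        return "SUV"
    return "sedan"

# Typical sedan dimensions: 1.85 m wide, 1.45 m high, 4.7 m long.
print(classify_vehicle(1.85, 1.45, 4.7))  # prints "sedan"
```

A real system would combine such dimension rules with the bonnet and windshield shape features the article mentions, which simple thresholds alone cannot capture.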

