Jürgen Valldorf · Wolfgang Gessner (Eds.)
Advanced Microsystems for Automotive Applications 2005
With 353 Figures
Dr. Jürgen Valldorf VDI/VDE Innovation + Technik GmbH Rheinstraße 10B D-14513 Teltow
[email protected]
Wolfgang Gessner VDI/VDE Innovation + Technik GmbH Rheinstraße 10B D-14513 Teltow
[email protected]
Library of Congress Control Number: 2005920594
ISBN 3-540-24410-7 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in other ways, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under German Copyright Law.

Springer is a part of Springer Science + Business Media
springeronline.com

© Springer-Verlag Berlin Heidelberg 2005
Printed in The Netherlands

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: Jasmin Mehrgan
Cover-Design: deblik, Berlin
Production: medionet AG, Berlin
Printed on acid-free paper
Preface

Since 1995 the annual international forum on Advanced Microsystems for Automotive Applications (AMAA) has been held in Berlin. The event offers a unique opportunity for microsystems component developers, system suppliers and car manufacturers to present and discuss competing technological approaches to microsystems-based solutions in vehicles. The event's design, character and intention have remained unchanged, although it has matured over the years. The AMAA facilitates technology transfer and co-operation along the automotive value chain. At the same time it is an antenna for the newest trends and most advanced industrial developments in the area of microsystems for automotive applications. The book accompanying the event has proven to be an efficient instrument for the diffusion of new concepts and technology results. The present volume, including the papers of the AMAA 2005, gives an overview of the state of the art and outlines imminent and mid-term R&D perspectives. The 2005 publication reflects – as in the past – the current state of discussions within industry. More than the previous publications, the AMAA 2005 "goes back" to the technological requirements and the developments indispensable for fulfilling market needs. The large number of contributions dealing with sensors as well as "sensor technologies and data fusion" exemplifies this tendency. In this context a paradigm shift can be observed. In the past, development focused predominantly on the detection and processing of single parameters originating from single sensors. Today, the challenge increasingly consists in extracting information about complex situations, with a series of variables from different sensors, and in evaluating that information. Smart integrated devices using the information derived from the various sensor sources will be able to describe and assess a traffic situation or behaviour much faster and more reliably than a human being could.
Systems integration in an enlarged sense becomes the key issue. In his keynote paper, Prof. Färber gives an excellent outline of how the understanding of biological systems can help to develop technical systems of cognitive perception and behaviour. We are particularly happy to present contributions from INTERSAFE on intersection safety issues. INTERSAFE is a subproject of the EU-funded Integrated Project PREVENT on active vehicle safety.
My explicit thanks go to the authors for their valuable contributions to this publication and to the members of the Honorary and Steering Committees for their commitment and support. Particular thanks are also due to the companies providing the demonstrator vehicles: A.D.C., Aglaia, Audi, Bosch, Continental, DaimlerChrysler, IBEO, University of Ulm and Toyota. I would like to thank the European Commission, the Senate of Berlin and the Ministry of Economics Brandenburg for their financial support through the Innovation Relay Centre Northern Germany, as well as the numerous organisations and individuals supporting the International Forum Advanced Microsystems for Automotive Applications 2005 for their material and immaterial help. Last but not least, I would like to express my sincere thanks to the Innovation Relay Centre team at VDI/VDE-IT, especially Jasmin Mehrgan for preparing this book for publication, not forgetting Jürgen Valldorf, the conference chairman and project manager of this initiative. Teltow/Berlin, March 2005 Wolfgang Gessner
Public Financers Berlin Senate for Economics and Technology European Commission Ministry for Economics Brandenburg
Supporting Organisations Investitionsbank Berlin (IBB) State Government of Victoria, Australia mstnews ZVEI - Zentralverband Elektrotechnik- und Elektronikindustrie e.V. Hanser automotive electronic systems Micronews - The Yole Développement Newsletter enablingMNT
Co-Organisers European Council for Automotive R&D (EUCAR) European Association of Automotive Suppliers (CLEPA) Advanced driver assistance systems in Europe (ADASE)
Honorary Committee

Domenico Bordone
President and CEO Magneti Marelli S.p.A., Italy
Günter Hertel
Vice President Research and Technology DaimlerChrysler AG, Germany
Rémi Kaiser
Director Technology and Quality Delphi Automotive Systems Europe, France
Gian C. Michellone
President and CEO Centro Ricerche FIAT, Italy
Karl-Thomas Neumann
CEO, Member of the Executive Board Continental Automotive Systems, Germany
Steering Committee Dr. Giancarlo Alessandretti Alexander Bodensohn Serge Boverie Geoff Callow Bernhard Fuchsbauer Wolfgang Gessner Roger Grace Henrik Jakobsen Horst Kornemann Hannu Laatikainen Dr. Peter Lidén Dr. Torsten Mehlhorn Dr. Roland Müller-Fiedler Paul Mulvanny Dr. Andy Noble Gloria Pellischek David B. Rich Dr. Detlef E. Ricken Jean-Paul Rouet Christian Rousseau Patric Salomon Ernst Schmidt John P. Schuster Bob Sulouff Berthold Ulmer Egon Vetter Hans-Christian von der Wense Arnold van Zyl
Centro Ricerche FIAT, Orbassano, Italy DaimlerChrysler AG, Frankfurt am Main, Germany Siemens VDO Automotive, Toulouse, France Technical & Engineering Consulting, London, UK Audi AG, Ingolstadt, Germany VDI/VDE-IT, Teltow, Germany Roger Grace Associates, San Francisco, USA SensoNor A.S., Horten, Norway Continental Automotive Systems, Frankfurt am Main, Germany VTI Technologies Oy, Vantaa, Finland AB Volvo, Göteborg, Sweden Investitionsbank Berlin, Berlin, Germany Robert Bosch GmbH, Stuttgart, Germany QinetiQ Ltd., Farnborough, UK Ricardo Consulting Engineers Ltd., Shoreham-by-Sea, UK Clepa, Brussels, Belgium Delphi Delco Electronics Systems, Kokomo, USA Delphi Delco Electronics Europe GmbH, Rüsselsheim, Germany Johnson Controls, Pontoise, France Renault S.A., Guyancourt, France 4M2C, Berlin, Germany BMW AG, Munich, Germany Motorola Inc., Northbrook Illinois, USA Analog Devices Inc., Cambridge, USA DaimlerChrysler AG, Brussels, Belgium Ceramet Technologies, Melbourne, Australia Freescale GmbH, München, Germany EUCAR, Brussels, Belgium
Conference chair: Dr. Jürgen Valldorf
VDI/VDE-IT, Teltow, Germany
Table of Contents

Introduction

Biological Aspects in Technical Sensor Systems  3
Prof. Dr. G. Färber, Technical University of Munich

The International Market for Automotive Microsystems, Regional Characteristics and Challenges  23
F. Solzbacher, University of Utah; S. Krüger, VDI/VDE-IT GmbH

Status of the Inertial MEMS-based Sensors in the Automotive  43
J.C. Eloy, Dr. E. Mounier, Dr. P. Roussel, Yole Développement

The Assessment of the Socio-economic Impact of the Introduction of Intelligent Safety Systems in Road Vehicles – Findings of the EU Funded Project SEiSS  49
S. Krüger, J. Abele, C. Kerlen, VDI/VDE-IT; H. Baum, T. Geißler, W. H. Schulz, University of Cologne

Safety

Special Cases of Lane Detection in Construction Areas  61
C. Rotaru, Th. Graf, Volkswagen AG; J. Zhang, University of Hamburg

Development of a Camera-Based Blind Spot Information System  71
L.-P. Becker, A. Debski, D. Degenhardt, M. Hillenkamp, I. Hoffmann, Aglaia Gesellschaft für Bildverarbeitung und Kommunikation mbH

Predictive Safety Systems – Steps Towards Collision Avoidance and Collision Mitigation  85
P. M. Knoll, B.-J. Schäfer, Robert Bosch GmbH

Datafusion of Two Driver Assistance System Sensors  97
J. Thiem, M. Mühlenberg, Hella KGaA Hueck & Co.

Reducing Uncertainties in Precrash-Sensing with Range Sensor Measurements  115
J. Sans Sangorrin, T. Sohnke, J. Hoetzel, Robert Bosch GmbH

SEE – Sight Effectiveness Enhancement  129
H. Vogel, H. Schlemmer, Carl Zeiss Optronics GmbH

System Monitoring for Lifetime Prediction in Automotive Industry  149
A. Bodensohn, M. Haueis, R. Mäckel, M. Pulvermüller, T. Schreiber, DaimlerChrysler AG

Replacing Radar by an Optical Sensor in Automotive Applications  159
I. Hoffmann, Aglaia Gesellschaft für Bildverarbeitung und Kommunikation GmbH

System Design of a Situation Adaptive Lane Keeping Support System, the SAFELANE System  169
A. Polychronopoulos, Institute of Communications and Computer Systems; N. Möhler, Fraunhofer Institute for Transportation and Infrastructure Systems; S. Ghosh, Delphi Delco Electronics Europe GmbH; A. Beutner, Volvo Technology Corporation

Intelligent Braking: The Seeing Car Improves Safety on the Road  185
R. Adomat, G. Geduld, M. Schamberger, A.D.C. GmbH; J. Diebold, M. Klug, Continental Automotive Systems

Roadway Detection and Lane Detection using Multilayer Laserscanner  197
K. Dietmayer, N. Kämpchen, University of Ulm; K. Fürstenberg, J. Kibbel, W. Justus, R. Schulz, IBEO Automobile Sensor GmbH

Pedestrian Safety Based on Laserscanner Data  215
K. Fürstenberg, IBEO Automobile Sensor GmbH

Model-Based Digital Implementation of Automotive Grade Gyro for High Stability  227
T. Kvisterøy, N. Hedenstierna, SensoNor AS; G. Andersson, P. Pelin, Imego AB

Next Generation Thermal Infrared Night Vision Systems  243
A. Kormos, C. Hanson, C. Buettner, L-3 Communications Infrared Products

Development of Millimeter-wave Radar for Latest Vehicle Systems  257
K. Nakagawa, M. Mitsumoto, K. Kai, Mitsubishi Electric Corporation

New Inertial Sensor Cluster for Vehicle Dynamic Systems  269
J. Schier, R. Willig, Robert Bosch GmbH

Powertrain

Multiparameteric Oil Condition Sensor Based on the Tuning Fork Technology for Automotive Applications  289
A. Buhrdorf, H. Dobrinski, O. Lüdtke, Hella Fahrzeugkomponenten GmbH; J. Bennett, L. Matsiev, M. Uhrich, O. Kolosov, Symyx Technologies Inc.

Automotive Pressure Sensors Based on New Piezoresistive Sense Mechanism  299
C. Ernsberger, CTS Automotive

Multilayer Ceramic Amperometric (MCA) NOx-Sensors Using Titration Principle  311
B. Cramer, B. Schumann, H. Schichlein, S. Thiemann-Handler, T. Ochs, Robert Bosch GmbH

Comfort and HMI

Infrared Carbon Dioxide Sensor and its Applications in Automotive Air-Conditioning Systems  323
M. Arndt, M. Sauer, Robert Bosch GmbH

The Role of Speech Recognition in Multimodal HMIs for Automotive Applications  335
S. Goronzy, R. Holve, R. Kompe, 3SOFT GmbH

Networked Vehicle

Developments in Vehicle-to-vehicle Communications  353
Dr. D. D. Ward, D. A. Topham, MIRA Limited; Dr. C. C. Constantinou, Dr. T. N. Arvanitis, The University of Birmingham

Generic Remote Software Update for Vehicle ECUs Using a Telematics Device as a Gateway  371
G. de Boer, P. Engel, W. Praefcke, Robert Bosch GmbH

High Performance Fiber Optic Transceivers for Infotainment Networks in Automobiles  381
T. Wipiejewski, F. Ho, B. Lui, W. Hung, F.-W. Tong, T. Choi, S.-K. Yau, G. Egnisaban, T. Mangente, A. Ng, E. Cheung, S. Cheng, Astri

Components and Generic Sensor Technologies

Automotive CMOS Image Sensors  401
S. Maddalena, A. Darmont, R. Diels, Melexis Tessenderlo NV

A Modular CMOS Foundry Process for Integrated Piezoresistive Pressure Sensors  413
G. Dahlmann, G. Hölzer, S. Hering, U. Schwarz, X-FAB Semiconductor Foundries AG

High Dynamic Range CMOS Camera for Automotive Applications  425
W. Brockherde, C. Nitta, B.J. Hosticka, I. Krisch, Fraunhofer Institute for Microelectronic Circuits and Systems; A. Bußmann, Helion GmbH; R. Wertheimer, BMW Group Research and Technology

Performance of GMR-Elements in Sensors for Automotive Application  435
B. Vogelgesang, C. Bauer, R. Rettig, Robert Bosch GmbH

360-degree Rotation Angle Sensor Consisting of MRE Sensors with a Membrane Coil  447
T. Ina, K. Takeda, T. Nakamura, O. Shimomura, Nippon Soken Inc.; T. Ban, T. Kawashima, Denso Corp.

Low g Inertial Sensor based on High Aspect Ratio MEMS  459
M. Reze, J. Hammond, Freescale Semiconductor

Realisation of Fail-safe, Cost Competitive Sensor Systems with Advanced 3D-MEMS Elements  473
J. Thurau, VTI Technologies Oy

Intersafe

eSafety for Road Transport: Investing in Preventive Safety and Co-operative Systems, the EU Approach  487
F. Minarini, European Commission

A New European Approach for Intersection Safety – The EC-Project INTERSAFE  493
K. Fürstenberg, IBEO Automobile Sensor GmbH; B. Rüssler, Volkswagen AG

Feature-Level Map Building and Object Recognition for Intersection Safety Applications  505
A. Heenan, C. Shooter, M. Tucker, TRW Conekt; K. Fürstenberg, T. Kluge, IBEO Automobile Sensor GmbH

Development of Advanced Assistance Systems for Intersection Safety  521
M. Hopstock, Dr. D. Ehmanns, Dr. H. Spannheimer, BMW Group Research and Technology

Appendices

Appendix A: List of Contributors  533
Appendix B: List of Keywords  539
Introduction
Biological Aspects in Technical Sensor Systems

Prof. Dr. G. Färber, Technical University of Munich

Abstract

This paper concentrates on the information processing aspects of biological and technical sensor systems. The focus is on visual information, and it tries to extend the topic from acquisition to interpretation and understanding of the observed scenes. This will become more and more important for technical systems – especially for automotive applications.
1  Information Processing in Biological and Technical Systems
To understand the layers of information processing as developed by nature, the scheme of Rasmussen (figure 1) is presented [R83]:
Fig. 1. The Rasmussen 3-layer model of perception / action
The lowest layer relies on skills either inherited or learned by training: the reaction to stimuli from outside is skill-based (layer 1). If there is a more complex or rare situation where no skills are available, rules out of a catalogue can be used: these "when-then" rules help to master situations where no predefined skill-based behaviour is found (rule-based: layer 2). Finally, if there are no rules that cover the conditions of a situation, more abstract knowledge is required and the sequence "identification", "decision" and "planning" takes place to find new rules: knowledge-based behaviour in layer 3.

Figure 1 corresponds to the sensorimotor principle in biology as shown in the quite imprecise figure 2. Biological systems are able
– to perceive multimodal sensory information and to abstract the sensory signals into an interpretation of the actual scene: "cognitive perception" means that the system understands the situation;
– to make decisions on the basis of the actual situation and of learned knowledge; furthermore, they are able to extract new knowledge out of positive or negative experiences in a situation and to feed it into the knowledge base ("learning");
– to execute actions corresponding to those decisions: they need motor capabilities to influence the situation (e.g. to evade or to attack).
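The three-layer behaviour described above can be sketched as a simple dispatch loop. The following is a minimal illustration only; all names are hypothetical and not taken from the paper:

```python
# Sketch of Rasmussen's three-layer dispatch: a stimulus is handled by a
# skill if one matches, else by a "when-then" rule, else by the
# knowledge-based layer, which derives an action and caches it as a rule.

def perceive_act(stimulus, skills, rules, knowledge_base):
    # Layer 1: skill-based - direct stimulus-response mapping
    if stimulus in skills:
        return skills[stimulus]
    # Layer 2: rule-based - first "when-then" rule whose condition holds
    for condition, action in rules:
        if condition(stimulus):
            return action
    # Layer 3: knowledge-based - identification, decision, planning;
    # the result is stored as a new layer-2 rule ("finding new rules")
    action = knowledge_base.plan(stimulus)
    rules.append((lambda s, m=stimulus: s == m, action))
    return action
```

Skills answer first; unmatched stimuli fall through to the rule catalogue, and only genuinely novel situations invoke the slow knowledge-based layer, which feeds a new rule back into layer 2 – mirroring how layer-3 behaviour creates layer-2 rules in the model.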
Fig. 2. The sensorimotor bow
Most scientists in biology or neurophysiology agree that "intelligence" can only evolve in such a "sensorimotor bow": cognition (cognitive perception and behaviour) requires a system with the ability to perceive and to act in its environment. The biological example may be analysed on different layers of the Rasmussen model, as we will see in the next chapter. Three examples including technical applications are:

The vestibulo-ocular reflex (VOR) will be used later as a model for an automotive camera: here an interesting control loop belonging to layer 1 in figure 1 handles the problem of image stability on the retina [B01]; some other features belong to layer 2 (e.g. where to glance: saccade control, or switching off the visual channel during a saccade).

The ability of animals with two legs to keep their equilibrium during standing and walking: this is a quite complex control loop with sensory inputs from the sense of balance as well as from proprioceptive and force sensors, controlling many muscles in legs, arms and body. It is interesting to see that today's "humanoid robots" still rely on static equilibrium: here the biological model offers chances for much better solutions – e.g. for a running humanoid robot. In figure 1, layers 1 and 2 have to be modelled for this example.

It is an important capability of humans to perform complex manipulation tasks with the hand/arm combination: tactile as well as visual sensors interact in this sensorimotor bow. Hand-eye systems are an interesting model for future robotic manipulation: in a joint project with neurobiologists [HSSF99] the behaviour of normal and pathological humans has been studied and modelled (figure 3). The model has been implemented on a technical manipulator – with very promising results that may help to come from today's robotic systems to future sensor-based robots that are able to adapt their behaviour to varying situations. Here the Rasmussen model must span all 3 layers (figure 4).
Fig. 3. Biological and technical hand-eye-coordination
The focus of the paper will be on visual sensors, trying to bring some of the biological principles to technical vision systems to improve robustness as required for most future automotive applications. Some of the results presented here are part of the FORBIAS project [FB03]. This joint research project combines research in technical and medical / biological groups. It is funded by the BFS (Bavarian Research Foundation); the acronym stands for "Joint Research Project in Bioanalogue Sensorimotor Assistance Systems". Instead of "bioanalogue", the terms "bioinspired" or "biomorph" are often used.
Fig. 4. Structure of a technical hand-eye-coordination system

2  Biological and Technical Sensors
Biological sensors originated from the necessity to get all the information on the environment required to survive: on the one side to detect enemies early enough to evade, or to avoid dangerous situations (gas, temperature, …), and on the other side to find something to eat or to drink, or to catch prey; many sensors are also used to gather information on the state of the own body. So the human has learned
a) to see (eyes, here in the focus),
b) to hear (ears),
c) to feel (tactile, temperature, pain),
d) to smell (nose),
e) to taste (tongue and nose),
f) to feel "balance" (sense of balance, otoliths),
g) to feel pressure or forces.
The last two modalities are important for our kinaesthetic impressions (movements, accelerations, forces). In addition, many sensors in the muscles help to control movement very accurately with actuator elements that are non-linear or show non-reproducible behaviour. Technical sensors today are good for the modalities a, b, c and g; d (the "electronic nose") is starting to come into use, e may not be important for technical systems, and for f we have inertial sensors with 3 or 6 degrees of freedom.

The question is whether we should copy the physical principles of these biological sensors, or whether the analogy to biology should be restricted to the behaviour that has proved reasonable over millions of years. This behaviour consists of at least two elements: the physical characteristics (sensitivity, accuracy, range, dynamic behaviour, spatial and temporal resolution) and the type of perception and interpretation of the signals coming from the sensors (what does a given signal combination "mean" for the living organism?). Because of the construction of living organisms it does not make sense to copy the physical implementation (membrane potentials, vast numbers of elements and interconnections, neurons), but the functionality has proved to meet the requirements. As an example: we are starting to understand the principles of neuronal processing, but implementing these functions by a one-to-one imitation in an artificial neuronal network may not be the adequate technical solution.

The second part of this chapter is dedicated to the biological model of the visual system. At first we consider the basic sensor "eye" and its characteristics:
– Dynamics (range of light intensity) is handled over 6 decades (1:10^6). This is accomplished by a logarithmic behaviour of the receptors and by "accommodation", the automatic control of the diaphragm (iris).
– Focussing the eye by automatic lens control.
– Colour vision (cones for colour), with variable resolution.
– Temporal resolution: limited by neuronal processing; not homogeneous – moving objects are detected much better in the peripheral part of the view field.
– Spatial resolution: also very non-homogeneous, with very high pixel density in the central part (fovea), decreasing towards peripheral areas. The number of cones is about 6 million, and there are 120 million rods in the human eye!
Fig. 5. Characteristics of a HDRC CMOS-camera
It is not easy to copy that performance in technical systems. However, technical vision sensors are becoming better and better, and there are good chances to meet or even surpass the biological system:
– The dynamic behaviour of the eye is already feasible with the new CMOS cameras (figure 5). The HDRC principle allows adapting the characteristics to the requirements (e.g. by controlling the integration times); a logarithmic behaviour over a range of 1:10^8 can be implemented. The right side of figure 5 shows an example of the effect of logarithmic behaviour compared with a CCD camera.
– For many applications only distances of >2 m are required: this can be accomplished easily by a fixed-focus lens.
– Colour vision is available in single- and multiple-chip versions. Today's high-resolution chips allow high-quality colour even with a single chip.
– Temporal resolution is as good as that of the neuronal system: 25 to 50 images/s are easily achievable, with lower resolution even up to 1000 images/s – many more than the eye.
– Some people have tried to build image sensors with resolution characteristics similar to the eye (rotational symmetry, decreasing resolution with increasing radius). As this is not in the technological focus, these devices are very expensive. However, image chips with constant spatial resolution are getting more and more pixels; for the consumer market up to 12 million pixels are available at low cost. One solution here is to use a multifocal configuration: the view fields of two camera chips with different optics deliver high resolution in the fovea (about 10°, telecamera) and lower resolution over a bigger angle (60°, wide-angle camera). Figure 6 shows the overlapping view area; in the "fovea" the resolution is 36 times higher.
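The two figures quoted above – logarithmic compression of an intensity range of 1:10^8 and the 36-fold resolution gain of the multifocal configuration – can be made concrete with a small sketch. Parameter values and function names are illustrative assumptions, not taken from an HDRC datasheet:

```python
import math

def log_response(intensity, i_min=1.0, i_max=1e8, levels=256):
    """Map a scene intensity to a digital value logarithmically,
    HDRC-like: 8 decades of input compressed into e.g. 8 bits."""
    frac = math.log10(intensity / i_min) / math.log10(i_max / i_min)
    return round(frac * (levels - 1))

def foveal_gain(wide_fov_deg=60.0, tele_fov_deg=10.0):
    """Areal resolution gain of the tele 'fovea' over the wide-angle
    camera when both chips have the same pixel count."""
    return (wide_fov_deg / tele_fov_deg) ** 2
```

With the 10° telecamera and 60° wide-angle camera of figure 6, `foveal_gain()` reproduces the factor 36 stated in the text, since the same pixel count is spread over a solid angle 6 times narrower in each direction.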
Fig. 6. Multifocal camera system
Such a technical vision sensor configuration is already a quite good "bioanalogue eye", but the biological eye can be moved in a very effective way (eye movements relative to the head, and movements of the head) so that the fovea can always be directed to the "region of interest" (ROI); the biological eye compensates for all disturbances as they arise from walking (head movements) or driving, e.g. if we are sitting in a car. Nevertheless we always see a stabilized image, otherwise we would feel dizzy. It is not enough to just have a good sensor – it must be integrated into a sensor system that solves the problems just mentioned. The already mentioned vestibulo-ocular reflex (VOR) serves as a biological model that integrates the following additional components into the complete vision system:
– The oculomotor system, consisting of powerful muscles that allow moving the eye in 3 degrees of freedom: up and down, to the right and left, and around the vision axis (torsion). The performance figures are impressive: the eye can be moved at up to 700°/s and with accelerations of up to 5,000°/s²!
– The vestibular sensor system: the sense of balance (semicircular canals and otolith organs) delivers inertial signals for 6 degrees of freedom (3 translational and 3 rotational). The nervous system can generate translational and angular velocities as well as changes in position and angle (integration of the acceleration signals). This information concerning fast changes is used to compensate the disturbances.
There is finally the neuronal control loop that compensates the unintended movements with eye movements. It uses the vestibular signals as well as the so-called "retinal slip" – information that is generated in the cerebellum from the retina image. The neuronal system also allows generating intended saccadic movements of the eye, and it prevents images from being processed during a saccade: for about 100 ms the input channel is blocked until the image is stable again. Figure 7 [G03] shows a simplified block diagram of the interaction of these components. Especially interesting is the lower block within the "inferior olive" that acts as a teacher, changing the synaptic weights so that the visual error is minimized.
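A heavily simplified sketch of this loop: the eye counter-rotates against the measured head velocity, and residual retinal slip serves as a teaching signal that drives the VOR gain toward unity, loosely analogous to the role ascribed to the inferior olive. The signal model and learning rate are illustrative assumptions, not a model of the actual neural circuitry:

```python
def vor_step(head_velocity, gain):
    """Counter-rotate the eye against head motion; ideal gain is 1.0."""
    return -gain * head_velocity

def adapt_gain(gain, head_velocity, eye_velocity, rate=0.1):
    """Retinal slip (residual image motion on the retina) nudges the
    gain toward full compensation - the 'teacher' changing weights."""
    slip = head_velocity + eye_velocity   # nonzero slip = blurred image
    if head_velocity != 0:
        gain += rate * slip / head_velocity
    return gain

# Starting from an under-compensating gain, repeated slip feedback
# converges toward full compensation (gain -> 1.0).
gain = 0.6
for _ in range(50):
    eye = vor_step(10.0, gain)            # constant 10 deg/s head rotation
    gain = adapt_gain(gain, 10.0, eye)
```

Each iteration reduces the slip by a fixed fraction, so the gain approaches 1.0 geometrically; with gain 1.0 the retinal image is stationary despite the rotating head.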
Fig. 7. VOR: the vestibulo-ocular reflex
The "optokinetic nystagmus" is an additional mechanism that generates eye movements in the opposite direction if visual stimuli with large area are moving through the view field (to keep the image stable). At constant time intervals, saccades bring the eyes back to allow further movements in the same direction.

Some effects observed in patients with a defect of the sense of balance can be explained easily by the VOR model. For instance, these patients tend to suffer severe dizziness – the visual impressions do not correspond to the signals received from the sense of balance [B99]. For normal persons, the dizziness felt in a simulator that stimulates only the visual system but not the sense of balance has the same explanation. The image stabilisation function does not work for patients with a damaged sense of balance: if they walk on the street they cannot recognize the faces of people, even if they know them well.

The VOR model is well understood; it is one of the main goals of FORBIAS to apply its principles to an automotive camera system. This means that a fast moveable platform and an inertial sensor system have to be added to the camera. Now we have copied the basic capabilities of the biological eye: apart from mechanical disturbances (blurring), the camera system always generates stable and sharp image sequences, and it also allows directing the camera view in a desired direction (gaze control).

These image sequences can be stored, transmitted and presented on displays. Technically this is an immense quantity of data: if each pixel coming from a colour camera with a resolution of 500x700 pixels is coded in 2 bytes, and a frame rate of 25 frames/s is generated for 2 cameras (with 2 different focal lengths), a bandwidth B of

B = 2 · 500 · 700 · 2 · 25 = 35 MB/s

has to be handled. But storing into some memory does not help at all: the image sequences have to be processed. Biological systems are able to interpret, to "understand" the relevant information contained in the images. Perception must result in immediate behavioural decisions and in actions according to the "sensorimotor paradigm" – tolerable times to understand the images are in the range of a tenth of a second. The following chapter again looks to biological principles of perception, because exactly the same functional and timing requirements have to be realized for technical applications – e.g. in a car using "machine vision".
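The bandwidth figure quoted above follows directly from the stated parameters; a one-line sketch (function name is illustrative):

```python
def video_bandwidth(cameras=2, width=700, height=500,
                    bytes_per_pixel=2, fps=25):
    """Raw data rate of the two-camera configuration described in the
    text, in MB/s (decimal megabytes, as in the paper's figure)."""
    return cameras * width * height * bytes_per_pixel * fps / 1e6
```

With the defaults this reproduces B = 35 MB/s; doubling the frame rate or moving to a megapixel chip scales the load linearly, which is why interpretation, not storage, is the bottleneck.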
3  Visual Perception in Biology
Many scientists in different fields are trying to understand the mechanisms behind the perceptual capabilities of the visual system. Figure 8 indicates where advances can be expected.

In neurophysiological studies the very first layers of image interpretation are quite well understood. Especially the quite old results of Hubel & Wiesel indicate that the image is locally analysed by spatial and spatio-temporal filters (receptive fields as in figure 9). Edges of different length and direction – some of them only when moving with a given speed in a given direction – are detected. In higher layers an abstraction from the location of the photoreceptor cells seems to take place. Up to the cortex (V1 and V2) this type of feature is detected – and there are many hypotheses about what may happen in the next stages (bottom-up). Detailed models of these layers are still missing. However, it may be useful to use this feature classification for the first stages of image interpretation.
Fig. 8. Neurophysiological and psychophysical view
Fig. 9. Spatial and spatio-temporal receptive fields
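The oriented spatial filters of figure 9 can be illustrated with a small correlation sketch. The Sobel-like kernel below is a crude stand-in for an odd-symmetric receptive field (excitatory on one flank, inhibitory on the other); it is a toy example, not a model of V1:

```python
def correlate(image, kernel):
    """Valid-mode 2-D correlation of a small grayscale image (list of
    rows) with a kernel - the local filtering a receptive field performs."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(image) - kh + 1):
        row = []
        for x in range(len(image[0]) - kw + 1):
            s = sum(image[y + j][x + i] * kernel[j][i]
                    for j in range(kh) for i in range(kw))
            row.append(s)
        out.append(row)
    return out

# Odd-symmetric kernel responding to vertical luminance edges:
VERTICAL_EDGE = [[-1, 0, 1],
                 [-2, 0, 2],
                 [-1, 0, 1]]

step = [[0, 0, 9, 9]] * 3   # vertical brightness step -> strong response
flat = [[5, 5, 5, 5]] * 3   # uniform field -> no response
```

A bank of such kernels at different orientations, scales and (with a temporal axis) drift speeds gives exactly the edge-and-motion feature classification the text suggests for the first stages of technical image interpretation.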
Simple animals like flies have a much simpler vision system. Most important here is the detection of moving objects, mainly enemies. Figure 10 presents a simple model of the fly's eye. After the (facet) photoreceptors in the first stage, the Reichardt cells detect movements. In a second stage a spatial integration by so-called "large-field neurons" takes place, and it appears that in this simple animal there is a direct coupling to a behavioural stage that executes the required high-speed escape reaction [BH02].
Fig. 10. A simple model of the fly's eye
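The Reichardt cells mentioned above perform a delay-and-correlate operation between neighbouring photoreceptors. A minimal sketch of one such elementary motion detector (the unit delay and the signal shapes are illustrative assumptions):

```python
def reichardt(signal_a, signal_b, delay=1):
    """Elementary Reichardt detector sketch: correlate one receptor's
    delayed signal with its neighbour's, in both mirror-symmetric arms;
    the difference signals motion direction (positive = from A towards B)."""
    n = len(signal_a) - delay
    forward = sum(signal_a[t] * signal_b[t + delay] for t in range(n))
    backward = sum(signal_b[t] * signal_a[t + delay] for t in range(n))
    return forward - backward

# A brightness pulse passing receptor A one time step before receptor B
# (motion A -> B) yields a positive output; the reverse motion, negative.
a = [0, 1, 0, 0]
b = [0, 0, 1, 0]
```

Summing many such detector outputs over the visual field corresponds to the "large-field neurons" of the second stage, whose pooled signal can trigger the escape reaction directly.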
In psychophysical studies (neuropsychology, perception psychology), impressions of people in experiments are analysed that are hierarchically on a much higher level than the simple features described above. The performance of the system behaviour and also its limits can be studied in a top-down direction: the analysis of "optical illusions" is an interesting way to understand the mechanisms behind the system structure.

"Gestalt" perception is one example of the very good performance of biological vision. Independent of size (distance), location, rotation or viewing aspect, humans are able to identify and classify known object types with very few errors. We have stored "cues" or combinations of cues for the "Gestalt" that are used for a first hypothesis of the object class. For classes very relevant to us we have stored first hypotheses for different viewing angles; it takes only fractions of a second to be sure that this is an instance of a certain class. We also have the capability to change the view of less well-known objects in our imagination to get a good starting point for the recognition process. The object class hypothesis will be reinforced by tracking the object over some time; any uncertainty may disappear with time. This is especially true if either the observer or the object is moving – you get new aspects, new information and a better impression of the object's form. One way to use this information is "motion stereo", handled later.
The information processing stages behind the eye tolerate even large defects in the sensor system itself. Many people do not even know that they cannot see well with both eyes: the interpretation system hides this fact. An interesting example is the "blind spot" of the eye: since the first neuronal stages lie in front of the photoreceptors, the optic nerve has to pass through the retina – leaving a quite large field where there are no photoreceptors. We compensate for that, and it is even difficult for us to find the blind spot.
Fig. 11. See something that is not there
Figure 11 shows an example of an optical illusion that helps to understand the vision principles. There is no square in the image, but everybody sees the form that overlaps the four concentric ring arrangements. This is an example showing that the idea of object recognition by combining basic features like edges is not always true: there are no edges in this image. For technical applications it will mostly be possible to restrict the number of object classes for a given domain. For automotive applications only traffic-relevant objects must be considered, which reduces the complexity of building first hypotheses. However, this is still one of the main problems of automotive vision systems: a technical perception process will need models for all the object classes involved in the domain. Depth perception is another important capability of the biological vision system. All biological vision sensors produce 2-dimensional projections of the 3D scene, so the third dimension, the depth, disappears. However, for many biological purposes the distance to other objects is highly important. There are two types of distance of interest:
Biological Aspects in Technical Sensor Systems
- Near distance, where a frog can catch a fly, or where a human can manipulate objects (at arm’s length). Distances of 0.2 to 2 m have to be measured with sufficient accuracy for these biological purposes.
- Far distance, where a frog detects the dangerous stork or a human finds either prey or enemy. Distances of a few metres for the frog, or of 50 m and more for the human, have to be mastered.

For both distance types nature has developed different principles; we now take a look at the human vision system. For near distances binocular vision is used: two eyes provide two different projections of the 3D scene onto 2-dimensional images, and stereo vision is the best-known principle for reconstructing the 3D scene from them. But there are other cues as well:

- Evaluation of the vergence angle: the axes of the two eyes point at the interesting object, and interpretation of this angle gives an impression of depth; some of the “2-dimensional 3D pictures” use this principle.
- Use of depth of focus: the lens focus is adjusted until the image of an object is sharp; from the state of the lens the distance of the object can be inferred.

Stereo is often chosen as the technical solution because the biological systems also apply this principle. But it is well known that a person who can see with only one eye still has spatial impressions: the defect is compensated by other methods (as shown below). Technically, stereo vision is especially useful for manipulation purposes. Far distance vision, as mainly required for automotive applications, relies on other concepts that are also implemented by nature. The first is based on a priori knowledge of the biological system: a frog knows the size of a stork, and by combining this knowledge with the relative size (angle) in its visual field it knows the distance. The same is valid for humans: we know how big a car is, and by seeing it under a certain angle in the view field the distance can be estimated.
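The size-based depth cue just described reduces to a one-line formula: if the real width W of an object class is known and the object subtends a visual angle θ, the distance is approximately d = W/θ (small-angle approximation). A minimal sketch; the function name and the example values are illustrative assumptions, not taken from the paper:

```python
import math

def distance_from_known_size(real_width_m, angular_width_rad):
    """Estimate object distance from the known real width of its class
    and the visual angle it subtends (small-angle approximation d = W / theta)."""
    return real_width_m / angular_width_rad

# A car of known width ~1.8 m seen under a visual angle of 2 degrees:
d = distance_from_known_size(1.8, math.radians(2.0))  # roughly 51.6 m
```

This matches the far-distance regime mentioned above: a car-sized object filling a couple of degrees of the view field is some tens of metres away.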
Figure 12 shows a well-known phenomenon: all figures here have the same absolute size, but by comparing them with the surrounding alignments our impression is deceived. Second, if there are movements, either by the perceiving subject or by an observed object, the principle of motion stereo can be used: instead of two simultaneous images, a sequence of images with different viewpoints, together with knowledge about the distances between these viewpoints, is processed to obtain a 3D impression. Third, there are further cues that help us to gain spatial impressions.
Examples are certain textures or the effect of illumination by a light source coming from a known (or assumed) direction.
Fig. 12. Subjective size perception
Combinations of these principles allow an estimation of distances accurate enough for most applications. This is a good model for technical systems in automotive vision applications, but it implies that the system has some a priori knowledge and that it is able to identify the object class. Solving both problems still requires a lot of work and some insight into the biological model. Motion detection is the next problem for which nature can provide solutions. In the frog’s or the fly’s vision system the most important task is to detect moving objects (prey or enemy). But in the human visual system, too, there are neurons specialised in movement: especially in the peripheral area, fast movements are detected very well. For automotive applications this principle may be applied as well: it is very important to detect moving objects (cars, bicycles) coming from the peripheral into the central view field. The detection of a moving object must trigger visual attention; it will start to build a new hypothesis that enters the scene interpretation process.
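The attention-triggering motion detection described here has a crude technical counterpart in simple frame differencing. The following sketch is our own illustration (frame format and threshold are assumptions), not a method from the paper:

```python
def moving_pixels(prev_frame, curr_frame, threshold=25):
    """Flag pixels whose intensity changed strongly between two
    grayscale frames (given as lists of rows) - a crude motion
    detector that could trigger attention on peripheral movement."""
    flagged = []
    for y, (row_p, row_c) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if abs(c - p) > threshold:
                flagged.append((x, y))
    return flagged

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 90, 10], [10, 10, 10]]
print(moving_pixels(prev, curr))  # [(1, 0)]
```

In a real system the flagged regions would feed the hypothesis-building stage mentioned above rather than being reported directly.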
In technical applications the principle of sensor data fusion is quite important: many projects today combine radar and visual sensors to obtain more robust knowledge about the scene. How is this handled in biology? Two examples shall be mentioned. The adaptation and calibration of a baby’s vision system happens by data fusion of two sensors, the eyes and the touching hands. The word “grasp” means “to understand” as well as “to reach for”: by combining the two sensory inputs the baby learns to see; later on it can understand the scene without its hands. Sensor data fusion between the visual and the auditory sensors is important for human scene understanding: you hear where a sound is coming from and you see what may be its cause. As long as both correspond there is no perceptual problem; if they do not, usually the vision sensor wins. This is often called the ventriloquism effect: if the speaking puppet moves its lips, the speech coming from the puppeteer is heard from the direction of the lips. Technical applications often use active sensors (RADAR or LIDAR). For the visual channel there is no biological counterpart, but ultrasonic “vision” is quite successful, and here the bat is a wonderful model for technical applications such as parking aids; data fusion with a visual channel could be successful here as well. In the FORBIAS project, neuroscientists are cooperating with engineers to find new principles, especially in visual perception. Some experimental work has already been done to identify the best depth cues and to propose technical applications. Future work will be directed to the detection of “Gestalt”, of classes of objects relevant to automotive applications. Already today it seems clear that real advances in machine vision can only be made if some knowledge is included to interpret, e.g., traffic scenes. The open question is how much knowledge will be necessary and how this knowledge can be used.
In any case, technical systems are approaching the neighbourhood of “cognitive systems”. Cognition will be required if these systems are to provide the same degree of robustness as human drivers; this must be much better than what is available in today’s technical vision systems for industrial applications.
4
Behavioural Aspects
The scope of this paper is on biological sensors. However, to run biological sensors in an optimal way, the motor systems with the ability to change some sensor characteristics must behave optimally. For the biological vision sensor this includes:

- Focus control (lens focus adapted to an object’s distance)
- Intensity control (pupil width)
- Gaze control: directing the fovea towards the actual “region of interest”

Most behavioural reactions happen on layer 1 (skill based) and layer 2 (rule based) of the Rasmussen scheme (figure 1). Focus and intensity control, as well as the stabilisation part of the VOR, happen on layer 1, the optokinetic nystagmus on layer 2. Gaze direction is controlled partly in a reactive way (e.g. a short look at a peripheral part where something is moving) but partly also intentionally; here layer 3 is involved. To get the necessary knowledge from the environment, a sequence of saccades should be executed that optimises the information content or minimises the remaining uncertainty. Consider the situation of a car driver: he has to look forward at preceding and oncoming cars, to the side for crossing cars, and into the mirrors for the cars behind his own vehicle. All parts of the traffic scene have to be analysed fast enough that the risk of an accident is minimised. This needs a good gaze control strategy. Such behaviour involves a lot of experience and knowledge about possible traffic situations. Some of the principles involved in finding the adequate behaviour are:

- Selection from a list of well-known alternative behaviours, with environmental cues used as conditions.
- Pre-simulation of behavioural decisions: mental evaluation of the expected results on the basis of some knowledge of one’s own capabilities, followed by execution.
- Learning: e.g. bringing new behaviours into the list of alternatives or trying a new behaviour in pre-simulation.

“Reasoning” in the sense of classical artificial intelligence is very often too slow; the result would come too late in a critical real-time situation. There are many psychological experiments concerning gaze control for given tasks.
Many of these results can also be applied to technical systems. Behavioural decisions for other actions (locomotion, manipulation, head movements, …) will follow similar laws. This, too, is a field where it is worth looking into the biological model.
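The “minimise remaining uncertainty” gaze strategy described above can be written as a greedy one-step policy: at each moment, saccade to the region of interest where uncertainty is currently highest. This is an illustrative sketch with hypothetical region names and values, not an implementation from the paper:

```python
def next_gaze_target(uncertainty):
    """Greedy one-step gaze policy: look where the remaining
    uncertainty is largest. `uncertainty` maps a region of
    interest to the current uncertainty estimate about it."""
    return max(uncertainty, key=uncertainty.get)

# Hypothetical uncertainty estimates for three regions of interest:
regions = {"ahead": 0.2, "left mirror": 0.7, "rear mirror": 0.4}
target = next_gaze_target(regions)
print(target)  # left mirror
```

A real strategy would also model how uncertainty grows over time for unobserved regions, which is what forces the driver to keep cycling through road, mirrors and periphery.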
5

Perception and Autonomous Action in Technical Systems
Figure 13 shows an example of a system architecture as it may be used for a “cognitive technical system”. It resembles the principles shown in figures 1, 2 and 8: the relationship to the Rasmussen scheme is not as evident as that to the sensorimotor loop, but the comparison with the biological architecture in figure 8 demonstrates the similarity between biological and technical systems. Figure 13 concentrates on a “cognitive car” that provides functions for driver assistance systems as well as for autonomous driving:
Fig. 13. System architecture for a “cognitive car”

The physical body at the bottom represents all “hardware”: the car with all its sensors and actuators, including the interfaces to information processing. On the right side there is the perception part. It has to take care of the ego-state (where the physical body is located, how fast it is moving and in which direction) and to perform the “traffic sensing”: to detect all relevant objects such as lane marks, moving and stationary cars and other objects, and to track them in the image sequence delivered by the sensors. By far the most successful principle applied here is the 4D model of Dickmanns [DW99]: it is based on Extended Kalman Filters (EKF) that provide estimates of the state (including motion vectors) of all observed objects. The art is to detect features relevant for the object class and to track them in a robust and stable way.
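The paper names the 4D approach with Extended Kalman Filters; as a much-simplified illustration of the underlying idea (our own sketch, not Dickmanns’ implementation), a linear constant-velocity Kalman filter can track one scalar coordinate of an observed object and estimate its motion vector from position measurements alone:

```python
def kalman_track_1d(z_measurements, dt=1.0, r=1.0, q=0.01):
    """Estimate position and velocity of one tracked object from
    position measurements with a constant-velocity Kalman filter.
    Returns the final [position, velocity] estimate."""
    x = [z_measurements[0], 0.0]            # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]            # state covariance
    for z in z_measurements[1:]:
        # predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with position measurement z (H = [1, 0])
        s = P[0][0] + r                     # innovation covariance
        k = [P[0][0] / s, P[1][0] / s]      # Kalman gain
        y = z - x[0]                        # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1.0 - k[0]) * P[0][0], (1.0 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
    return x

# Noise-free positions of an object moving 2 length units per frame:
zs = [2.0 * i for i in range(20)]
pos, vel = kalman_track_1d(zs)   # estimates approach 38.0 and 2.0
```

The real 4D model extends this to full spatio-temporal object models and nonlinear measurement equations, which is why the extended (linearised) form of the filter is needed there.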
It may be necessary to fuse these results with the results of other sensors (such as the object list of RADAR sensors) or with information available from a map, where the location of the body is well known thanks to DGPS information. The result of reading a traffic sign may be fused with the a priori knowledge that a known sign is located at this place. The road orientation from the map may also be used to stabilise the lane mark detection results delivered by a vision system. All these results (descriptions from a perception process delivering object instances with their states) have to be stored in a dynamic database containing the actual state of all detected objects and the history of a few past seconds. This database is updated at about the video frame rate and has a quite short temporal horizon. From this “dynamic object base” a cognitive process must interpret the traffic situation and detect the intentions of other players (“subjects”) in the traffic situation: a car trying to overtake from the rear, a car starting to change its lane, an overtaking manoeuvre on the other side of the road. The results are stored in another database with a larger time horizon that contains the actual traffic situation. This information is used together with the mission plan (what the car has to achieve, where to go), some value information (minimal risk for all participants), and some knowledge about the capabilities of the car’s own body (e.g. acceleration, dynamics) to decide about the next behavioural steps, which are forwarded to the actuator side of the car. Finally, the selected behavioural steps have to be executed: for the active vision process the gaze control (“minimise uncertainty”), and for the car itself the longitudinal (braking and throttle) and lateral (steering) control with the corresponding control loops.
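The “dynamic object base” described above (latest state per object plus a few seconds of history, updated at frame rate) maps naturally onto a bounded per-object history buffer. A hypothetical sketch; the class, its fields and the example states are our own assumptions:

```python
from collections import deque

class DynamicObjectBase:
    """Stores the current state and a short history (a few seconds
    at video frame rate) for every detected object."""

    def __init__(self, horizon_frames=75):   # ~3 s at 25 Hz
        self.horizon = horizon_frames
        self.tracks = {}

    def update(self, object_id, state):
        """Append the newest state estimate for one object;
        old entries beyond the horizon are dropped automatically."""
        track = self.tracks.setdefault(object_id,
                                       deque(maxlen=self.horizon))
        track.append(state)

    def current(self, object_id):
        """Latest known state of an object."""
        return self.tracks[object_id][-1]

base = DynamicObjectBase()
base.update("car-17", {"x": 42.0, "v": 22.5})
base.update("car-17", {"x": 42.9, "v": 22.4})
print(base.current("car-17"))  # {'x': 42.9, 'v': 22.4}
```

The bounded deque gives exactly the “short temporal horizon” property: the situation-interpretation process can look back a few seconds, but no further.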
Perception, interpretation of the traffic situation and behavioural decisions require cognitive abilities. They need general knowledge about relevant objects, traffic situations and the capabilities of the car. However, the time to decide on and execute the behavioural steps is very short: only fractions of a second are available to avoid risky situations and to stay in a safe state. The scheme in figure 13 is simplified. There are also many direct interconnections between the sensor and actuator sides, e.g. for the local control loops for car motion as well as for gaze control, where the VOR model is implemented; it requires very fast information from the vision sensor to use the retinal slip information without time delay. Nevertheless, the figure shows the principal organisation.
6
Future Aspects
The examples presented in this paper make clear that it is not the physics of the biological sensor that must be copied to achieve the performance of biological systems. Technical sensors may use different physical sensing principles, and most of these are no worse than the biological ones. The difference lies in the interpretation process, i.e. the cognitive capabilities of biological beings: the signals coming from the sensors are processed in the context of the situation, and stored knowledge is used to “understand” the signal pattern. We know only part of the neuronal mechanisms, and we know something about behaviour. When we understand these principles better, we will also be able to realise the observed functions in technical systems. With the transition from the horse carriage to the automobile we gained a lot of speed, mobility and comfort. But we also lost the cognitive capabilities of the biological system “horse”: it was able to find its way home if its owner had drunk too much, and it avoided crashes with other carriages. To compensate for these functions, cars need cognitive capabilities that may be offered as “assistance functions” or even as “autonomous functions”, like the horse that finds its way home.
References

[B01] Brandt, Th.: Modelling brain function: The vestibulo-ocular reflex. Curr. Opin. Neurol. 2001; 14: 1-4.
[B99] Brandt, Th.: Vertigo: Its multisensory syndromes. 2nd Ed. Springer: London, 1999.
[BH02] Borst, A., Haag, J.: Neural networks in the cockpit of the fly. J. Comp. Physiol. 188: 419-437 (2002).
[DW99] Dickmanns, E.D., Wünsche, H.-J.: Dynamic Vision for Perception and Control of Motion. Chapter 28 in: B. Jähne, H. Haußecker and P. Geißler: Handbook of Computer Vision and Applications. Vol. 3, Systems and Applications, Academic Press 1999, pp. 569-620.
[FB03] Färber, G., Brandt, Th.: FORBIAS: Bioanalogue sensomotoric assistance systems. Proposal for a joint research project funded by BFS (Bavarian Research Foundation), Munich 2003.
[G03] Glasauer, S.: Cerebellar contribution to saccades and gaze holding: a modelling approach. Ann. NY Acad. Sci. 1004, 206-219, 2003.
[HSSF99] Hauck, A., Sorg, M., Schenk, T. and Färber, G.: What can be Learned from Human Reach-To-Grasp Movements for the Design of Robotic Hand-Eye Systems? In Proc. IEEE Int. Conf. on Robotics and Automation (ICRA’99), pages 2521-2526, May 1999.
[R83] Rasmussen, J.: Skills, Rules, and Knowledge; Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models. IEEE Transactions on Systems, Man, and Cybernetics, Vol. 13, No. 3, pages 257-266, 1983.
Prof. Dr. Georg Färber TU München Lehrstuhl RCS Arcisstr. 21 80290 München Germany
[email protected]
The International Market for Automotive Microsystems, Regional Characteristics and Challenges

F. Solzbacher, University of Utah
S. Krüger, VDI/VDE-IT GmbH

Abstract

Microsystems technologies are in widespread use, and an ever increasing number of automotive functions relies on MEMS/MOEMS applications. Beyond the plain quantitative market numbers, the correlations in the value chain, the strategic alliances, the willingness to cooperate and share information, and the RTD infrastructure are strong indicators for success in a specific application field and geographical region. This paper gives an overview of the key microsystems applications and their further market deployment. In addition to the market data, background information such as key players, competitive analysis, market-specific situations and interdependencies is reviewed. A brief outlook discusses how the greater picture changes with the introduction of solutions based on data and sensor fusion approaches, and suggests what impact this upcoming approach may have.
1
The Global Automotive Market
The objective of this section is to outline the boundary conditions for automotive microsystems, their market potential and potential future trends. It offers an informed view of major issues, trends and the behaviour of major players, and includes global car production forecasts up to the year 2015.
1.1
Competitive Arena
The automotive industry has become a very tough environment for suppliers and OEMs alike. It is one of the most global industries, with high competition across the entire production chain. Due to low profitability and increasing investment costs for new products, a further consolidation of the industry can be expected. Market entry for newcomers is almost impossible.
Fig. 1.
Porter 5-forces model of the automotive industry
Trends
Design, branding, marketing, distribution and a few key components define a car maker’s competitive position to a large extent. Profits flow from sales, service, financing or leasing. Therefore, more and more OEMs are outsourcing extensive parts not only of their production but also of their R&D efforts to suppliers. Cost and innovation pressure is passed on to the suppliers as competition and production overcapacities increase. The automotive supply chain is thus undergoing a tremendous transformation: suppliers are increasingly evolving into development (and risk-sharing) partners. The resulting high R&D and production investment volumes that have to be advanced by automotive suppliers lead to a further consolidation of the supplying industry. Current studies expect the total number of suppliers to decrease from 5500 today to about 3500 in 2010 (and from 800 to about 35 1st-tier suppliers) [1]. Product development cycles continue to shorten from an average of about 23.6 months today to 18.3 months in 2010, which will put unforeseeable strain even on the fastest suppliers and R&D partners. At the same time, vertical integration at the manufacturing level is expected to decrease (OEMs: from 39.5% in 2002 to 27.8% in 2010; suppliers: from 46.1% in 2002 to 40% in 2010) [2]. The percentage of cars built on common platforms will continue to rise from 65% (2000) to about 82% (2010) [3], allowing larger production volumes of identical or only slightly modified parts for suppliers and OEMs.
1.2
Production Forecast
Even though some of the key European markets have stalled over the past years, the total automotive market will continue to grow on average by about 2.2% annually in the number of cars (2.8% in market volume), from about 57 million cars (2002) to about 76 million cars in 2015. This growth will be driven by fast emerging markets such as India, China and Thailand.
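The forecast numbers are internally consistent: 57 million cars in 2002 compounded at 2.2% per year over 13 years lands at roughly 76 million in 2015, as a quick check shows (the check itself is ours, not from the cited forecast):

```python
# Compound-growth check of the production forecast quoted above
cars_2002_m = 57.0          # million cars produced in 2002
growth = 0.022              # ~2.2% annual growth in unit numbers
cars_2015_m = cars_2002_m * (1 + growth) ** (2015 - 2002)
print(round(cars_2015_m, 1))  # 75.6, i.e. about 76 million cars
```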
Fig. 2.
Global car production forecast 2015 [4].
Automotive Industry Regional Specifics
When reflecting on the local customer requirements of the international automotive business, one has to take into account that the concept of a world car never proved successful. Regional specifics in consumer expectations have to be addressed; different legislation, tolling, the volatility of exchange rates and competing production costs are further reasons for regional strategies. Table 1 lists some of the interesting regional specifics. The global automotive industry is in transition towards a strong implementation of regional specifics and towards building a transnational manufacturing system along the whole value chain. With some time delay, this development will also be adopted by automotive microsystems supply and manufacturing networks.
Automotive MST R&D in Europe has assumed a leading role, combined with a long tradition in automobile manufacturing. Germany in particular has shown itself to be very strong in the development and production of medium and upper segment vehicles, paired with a strong public engineering attitude which translates into a high level of automation. In contrast, Europe as a whole remains a fairly conservative market where industrial customers (OEMs) tend to stick to known solutions.
Tab. 1.
Comparison of the international markets
When looking for lead markets it is essential to take a closer look at Japan. The limited but nevertheless large market, with customers willing to pay for innovations, makes Japan very attractive. Almost no material resources, a high population, a limited number of highways due to the geographic situation, big cities with overwhelming traffic and a leading role in telecommunication infrastructure all support the introduction of high-tech solutions. On the downside, however, Japan still remains a largely “hermetic” market with its own complete value chain, in which Europeans or Americans fail to capture a significant market share. Japan, on the other hand, is a strong exporter with increasing shares in Europe and high success in North America. North America is traditionally a very strong and competitive automotive market. The customer is very cost sensitive, which leads to “simple” solutions and a lack of high-tech innovations in the car. In contrast to this end-customer attitude, regulations by public authorities (e.g. in California) are very tough. With the exception of SUVs, which in the US are counted as commercial vehicles due to their “truck” structure, frame and engine and which are seeing very high demand, the US has introduced fleet consumption rules, zero-emission roadmaps and safety regulations. The ambivalence of the market, paired with a legislative system imposing high industrial liability, results in a rather high-tech-avoiding industrial approach. It further has to be mentioned that the US highway system does not compare to Europe or Asia, which makes e.g. the discussion about zero-emission, hybrid, gasoline or diesel drivetrains difficult in a global context.
Fuel consumption also does not seem to be a technology driver for cars in Eastern Europe. Eastern Europe cannot build on a long automotive tradition; it is a big but slowly growing market dominated by the international car industry. The customer base divides into two groups, one looking for the cheapest available vehicle and the other going for the top-of-the-range car serving as a status symbol. For microsystems applications the latter group is quite interesting, since cars are delivered even better equipped than in the manufacturers’ home markets. These cars help to create a strong brand image. Branding appears to be one of the buzzwords for China as well. Initially, sheer population numbers in China triggered the high interest of the automotive industry. Observing current developments, however, it can be concluded that, compared to traditional new markets, China is building up a strong domestic automotive industry that is becoming internationally competitive in high-tech areas, including microsystems technology.
China
China’s domestic car sales have grown at more than 10% annually over the past years and will account for about 15% of total global automotive market growth. Improved infrastructure, sales and distribution channels, the deregulation of the automotive market, as well as the growing economy and subsequent prosperity will lead to further demand growth. One common misconception about China is that it is a market for simple low-cost cars. On the contrary: Chinese customers clearly see the car as a status symbol of accomplishment and success. Safety and comfort are highly valued, and upper middle class or luxury European cars are therefore in high demand (BMW is already selling more 7 series cars in China than in Germany!). The OEM market is dominated by global-local joint ventures, such as Shanghai Volkswagen and FAW (Changchun), which together account for more than 50% of total Chinese car production. The remaining international joint ventures add a further 43%, leaving little room for the roughly 20 domestic car makers. China’s entry into the WTO has led to a drastic cutting of import tariffs and to the phasing out of local-content requirements for cars. Competition and quality will continue to increase, partially due to European and US suppliers setting up shop in China. Successful automotive suppliers such as SAIC (Shanghai Automotive Industry Group Corporation) are assuming more and more responsibility for quality and output while transitioning from contract manufacturers to fully developed suppliers.
Fig. 3.
Automotive joint-ventures in China [5]
As for many of the emerging markets, when looking at MST requirements for cars in China one has to make a clear distinction between European upper middle class and luxury cars (VW Passat and above), which come fully equipped, and the probably larger bulk of very low-cost cars. An example of the latter is the Renault “Logan” built by Dacia, which is produced in Romania, sold for between 5000 and 7000 euros and, except for optional ABS and airbag, devoid of any “invisible helpers” or MST devices. Due to its strong economic growth (unlike many of the former eastern bloc countries, the Middle East and India), China is most likely to absorb a growing share of the MST-relevant luxury cars. Thus, the customer demand for ever improved and increased high-tech content in the car will keep driving MST demand. Current studies expect the number of potential customers to grow to about 170 million by 2010 [4]. At the same time, China has been very active in identifying and attracting partners in core technology areas for the future development of its technology and industrial infrastructure base. Initiatives to foster technology transfer as well as indigenous development are thus beginning to bear fruit. China has started growing an increasing base of SME companies that supply the big 1st tiers and OEMs in the automotive as well as the white goods industry.
Examples are companies like Huadong (China Eastern) Electronics Corporation, which has evolved from a former military supplier into a high-tech company with financial interests in multiple SMEs such as GaoHua Technologies (Nanjing), a supplier of industrial pressure transducers that meet European standards. These companies have also found local R&D partners at local universities and national laboratories. Furthermore, China is starting to attract more and more highly skilled, foreign-trained engineers and researchers from the US and Europe thanks to the growing opportunities in their home country: people are moving back and starting companies or pursuing academic careers rather than staying in the US or Europe, since the prospects of success are by now apparently better than in the western world. US universities and employers already report a clear decline in high-profile foreign researchers from countries such as China and India. In other words, even though it is still very common to come across company or administrative structures guided by the mind frame of engineers trained in pre-WTO China, which focus on imitation rather than innovation, it is a western misconception to expect a 10 to 20 year delay until countries like China start to innovate. Already one can witness the pace at which China absorbs the newest technology in all fields, together with the people to produce, operate and develop it. At the same time, international joint ventures with technology drivers mainly from Europe and the US will provide additional momentum for approaching international R&D standards. The European MST industry will have to take this into account when planning its strategic positioning for the coming two decades.
India
The Indian industry leader is Maruti Udyog with about 50% market share. Ford, Honda, Mitsubishi, Hyundai and Daewoo are among the most active global players. The component and system supplier market is highly fragmented and underdeveloped, leading to low productivity. Many foreign carmakers are dissatisfied with Indian component manufacturers and would prefer free imports. The sales volume of 3 billion USD of automotive components in India is comparable to that of the Portuguese auto parts industry [6]. Due to the comparably low growth of its economy, India is an ideal market for an MST-irrelevant car, as perceived and initiated by Renault’s CEO Louis Schweitzer about eight years ago, against the trend of all other manufacturers.
ASEAN
Indonesia, Malaysia, the Philippines and Thailand are the four major markets of the Association of Southeast Asian Nations. Absolute production increases from 2001 to 2007, in units:

- Indonesia: 230,000 to 300,000
- Malaysia: 416,000 to 550,000
- Philippines: 64,000 to 145,000
- Thailand: 460,000 to 870,000

The top positions in the ASEAN market are occupied by Malaysian automakers, followed by Toyota, Isuzu and Mitsubishi. The ASEAN market is booming, but highly vulnerable to economic and political crises. Furthermore, the current absolute market size is still very small.
Japan
For Japan, with a market volume decline of nearly 3 percent until 2007, the prospects do not look euphoric (2001: 9.134 million cars; 2007: 8.873 million cars). The Toyota group seems to be moving against the trend with a volume growth from 6.1 million cars in 2001 to 7.8 million cars (2007); however, its volume growth comes entirely from new volume in emerging markets, North America and Europe.
2
Automotive Microsystems
Automotive microsystems emerged in the eighties, starting with the introduction of manifold air pressure (MAP) sensors, followed by airbag sensors. The driving force for the use of microsystems in cars is that they technically or economically facilitate the integration of new functionalities, leading to improved overall car safety, security, efficiency and comfort. Key factors are:

- Low cost due to a high degree of integration and low material use
- Small size and weight (allowing use in weight-sensitive applications, e.g. sensors in the unsprung parts of suspension systems such as tire pressure sensors, as well as the use of large numbers of systems without an unbearable increase in vehicle weight)
- High reliability (processes and test mechanisms originating in the semiconductor industry are highly developed, leading to low failure rates; systems integration lowers the number of external interfaces)
- Low power consumption (allowing a large number of sensor systems without upgrading the car’s power grid, as well as some battery-driven sensors, e.g. tire pressure monitoring)
- An interface to the car electronics exists or can easily be established
- Enhanced functionality (the possibility to measure and control quantities that so far could not be measured or controlled)

Today, modern cars feature up to 100 microsystems components, fulfilling sensory and actuator tasks in:

- Engine / drivetrain management and control
- Safety
- On-board diagnostics
- Comfort / convenience and security applications

The increasing number of sensors used, as well as data fusion strategies, has however led to a blurring of the boundaries between application fields. Figure 4 names, as examples, some automotive functions strongly related to sensor input.
Fig. 4.
Car functions and the respective sensors (source: based on DaimlerChrysler)
Technical Requirements for Automotive Microsystems
The use of microsystems as sensors typically requires close contact with the measured medium, which often translates into harsh environmental conditions for the sensor. Microsystems therefore have to withstand, and function under, almost all automotive conditions present in a car. The following table gives a brief overview of such environments.
Temperature challenges: unlike commonly assumed about five years ago, most of the relevant future applications do not require operation of MST devices at temperatures beyond 180°C. The current strategy pursued by MST suppliers and 1st and 2nd tiers is thus to enhance existing Si chip technology by improving packaging and metallization systems, rather than employing SOI, SiC or other exotic materials and technologies. Exceptions are:
- Exhaust gas sensors (operated at between 280 and 450°C)
- Cylinder pressure sensors (operated at up to 650°C), which however are assumed to remain irrelevant for market use due to the high price per sensor in any existing technology
- Differential pressure sensors for soot filters in diesel engines (sensor operated at around 280°C) – current systems place the sensor at a sufficient distance from the hot exhaust system, reducing the pressure to implement new high-temperature-compatible systems
- Exhaust gas pressure sensors for variable turbine control in TDI diesel engines (operating temperatures around 280°C)
Tab. 2.
Automotive environments
Low temperature silicon fusion bonding for wafer-level packaging and encapsulation of MST chips is well established. Technological challenges are thus primarily to be found in the development of better high-temperature metallization systems.
Pressure challenges: peak pressures are found in the diesel injection system as well as in electrohydraulic brakes, with peak pressures around 1500 bar. To date, two to three competing high-pressure sensors, primarily based on stainless steel diaphragms with a bonded piezoresistive Si chip, exist in the market. Core problems are sensor long-term reliability under pressure and temperature load cycles.
Media challenges: the majority of future devices will require operation in hostile media such as hydraulic oil, exhaust gas, etc. This is a major concern for pressure and gas sensors. Gas sensors constitute one of the neglected fields in MST device technology. Examples of existing sensors are lambda sensors (O2 sensors for the catalytic converter) made from yttria-stabilized zirconia and air quality sensors for automatic flap control in HVAC systems. New emissions regulations for trucks and cars taking effect in 2007/2008 will require a further reduction of the NOx concentration. Ammonia (NH3) is injected into the exhaust gas to reduce the NOx concentration; thus, both a NOx and an NH3 sensor are required. Very little proven technology for reliable detection under automotive specifications exists today. The most mature technologies available use metal oxide gas-sensitive layers and electrochemical sensor principles, none of which currently offers sufficient long-term stability and selectivity. Current specifications require about 8-10 years and 700.000 km of sensor operation. Current approaches towards media separation involve wafer encapsulation (e.g. fusion and anodic bonding), the use of hermetic coating layers and new substrate materials (e.g. SiC on Si).
Besides the aforementioned sensors operating in specific harsh environments, even regular standard sensor/actuator systems have to meet automotive specifications, including some resistance to oil, fuel, salt water, ice and car wash chemicals. These translate into severe packaging and encapsulation requirements.
MST Applications
Microsystems applications for vehicles can be divided, according to their lifecycle, into three major groups: established devices, systems currently being introduced and systems still under research. Using total produced units as a measure, settled and saturated systems measuring pressure and acceleration constitute the biggest group. Systems currently being introduced are, for instance, predictive sensors (e.g. pre-crash detection), which try to derive a situation or status in the immediate or near future based on information gathered in the past or at this instant. Compared to already common systems these sensors are quite complex, yet low volume and high price, including attractive margins for suppliers. The third group of MST devices, which is still in the R&D phase, consists of very complex sensors or sensors operating in highly challenging conditions. These devices often do not directly determine a certain quantity or value of interest. Sometimes, as for oil condition sensors, a lifetime history might be needed to extract the data. Table 3 provides a brief overview of current microsystems, adding information on the respective application, the lifecycle status, the challenges of the system and the system’s future potential based on market forces. Governmental regulation remains the biggest driver for the introduction of MST technology. Further driving forces are X-by-wire, comfort features for which the customer is willing to pay the additional price, and fusion concepts leading to a new generation of sensors. The automation of the car and its driving asks for an improved understanding of the vehicle status and the traffic situation, as well as better communication and interaction with the driver. Microsystems can make the difference, leading to a feasible automated individual transport system of the future.
2.1
Sensor and Data Fusion
Many automotive MST development projects face the problem that the high-precision sensors needed to meet the functional specifications typically have to be replaced by less accurate sensors due to cost considerations. Hence, there is huge potential for the use of data and sensor fusion technology to substitute expensive precision sensors and to create high-precision virtual sensors at modest cost. Widely available bus architectures and communication technologies, as well as reliability requirements for safety applications, further support the use of data and sensor fusion concepts. Data fusion refers to the fusing of information from several, possibly different physical sensors, i.e. the computation of new virtual sensor signals. These virtual sensors can in principle be of two different types:
- High-precision and self-calibrating sensors, i.e. improved versions of the physical sensors. The goal is either to achieve higher performance using existing sensors or to reduce system cost by replacing expensive sensors with cheaper ones and using sensor fusion to restore signal quality.
- Soft sensors, i.e. sensors that have no direct physical counterpart.
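The first type of virtual sensor can be illustrated with a minimal sketch, not taken from the article: two inexpensive, noisy sensors measuring the same quantity are combined by inverse-variance weighting (the static special case of a Kalman filter update), yielding an estimate more precise than either input. All names and numbers below are invented for demonstration.

```python
import random

def fuse(readings_and_variances):
    """Inverse-variance weighted mean of (reading, variance) pairs."""
    weights = [1.0 / var for _, var in readings_and_variances]
    total = sum(weights)
    estimate = sum(w * r for w, (r, _) in zip(weights, readings_and_variances)) / total
    fused_variance = 1.0 / total  # always smaller than any input variance
    return estimate, fused_variance

random.seed(0)
true_speed = 27.8  # m/s, hypothetical ground truth
cheap_a = (random.gauss(true_speed, 0.8), 0.8 ** 2)
cheap_b = (random.gauss(true_speed, 1.2), 1.2 ** 2)
speed, var = fuse([cheap_a, cheap_b])
# The fused ("virtual") sensor is more precise than either physical input:
assert var < min(cheap_a[1], cheap_b[1])
```

This is the arithmetic behind "restoring signal quality" with cheaper sensors: the fused variance 1/(1/v1 + 1/v2) is strictly below both input variances.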
Tab. 3a. Automotive microsystem Applications - Drivetrain and Safety
Tab. 3b. Automotive microsystem Applications - Diagnosis, Comfort and HMI
Figure 5 contains a schematic picture of how data and sensor fusion concepts may be applied to vehicles. On the left-hand side, different types of information sources are listed. These include the underlying real sensors used to measure characteristics of the car’s environment, to monitor the vehicle’s internal parameters and status, and to observe or anticipate the driver’s intent and driving behaviour. Besides these car-centred input dimensions, communication technologies already add virtual sensors by feeding additional information into the car’s systems. In a similar fashion, information of broader interest originating from one vehicle can be shared with other vehicles.
Fig. 5.
Data/sensor fusion concepts (based on NIRA Dynamics [7])
All signals are fed into a sensor integration unit, which merges the information from the different sources and allows the computation of virtual sensor signals. These, in turn, may be used as inputs to various control systems, such as anti-skid systems and adaptive cruise control, or in human/machine interfaces (HMI) such as a dashboard or overhead display. The possibility to compute virtual sensor signals allows assessing complex dimensions like oil quality or obstacle detection. Additionally, fault diagnosis and self-test of the physical sensors can be improved: sensor fusion introduces analytical redundancy, which can be used to detect and isolate different sensor faults. This redundancy also implies that a system can be reconfigured in case one or more sensors break down, to achieve so-called degraded, or “limp-home”, functionality. Classical designs rely on hardware redundancy to achieve these goals, which is a very expensive solution compared to sensor fusion software.
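How analytical redundancy detects and isolates a fault can be sketched with a toy example, not taken from the article: three redundant estimates of the same quantity (e.g. two physical sensors plus one fusion-derived virtual sensor) are cross-checked pairwise, classic parity-space logic for triplex redundancy. Function names and thresholds are invented.

```python
def isolate_fault(readings, threshold):
    """Return the index of the faulty channel, or None if all agree.

    A channel is declared faulty when it disagrees with both other
    channels while the remaining pair still agrees.
    """
    n = len(readings)
    assert n == 3, "this simple parity check assumes triplex redundancy"
    for i in range(n):
        others = [readings[j] for j in range(n) if j != i]
        pair_ok = abs(others[0] - others[1]) < threshold
        i_bad = all(abs(readings[i] - o) >= threshold for o in others)
        if pair_ok and i_bad:
            return i
    return None

# Hypothetical yaw-rate channels in deg/s; channel 1 has drifted:
assert isolate_fault([4.9, 9.7, 5.1], threshold=1.0) == 1
assert isolate_fault([4.9, 5.0, 5.1], threshold=1.0) is None
```

Once the faulty channel is isolated, the system can reconfigure around the two healthy channels, which is the software analogue of the hardware-redundant "limp-home" designs mentioned above.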
In order to discuss the practical influence of data and sensor fusion concepts, some major effects have to be looked at more closely. Looking at table 2, a decision has to be made whether comparable parameters really have to be measured by different vehicle systems, or even by each vehicle, separately. One example would be road friction monitoring as an input for vehicle dynamics systems. No sensor system exists in the market today that can measure and predict road friction. Information from vehicle stability systems after a critical situation, however, allows deriving the road friction on a particular patch of road. If this information, originating in one car or a set of cars, could be shared with other vehicles, a tremendous safety effect could be achieved at almost no additional cost, provided each car comes with a car-to-car communication module. Even within one individual car, quantities such as inertial, pressure and temperature data are measured several times. Some of this information is redundant; sometimes the same type of information is measured in a different range or at a different position, where it would be difficult to cover both with one sensor only. Hence, looking at the overall information situation (table 2), it can be predicted that up to one fifth of all sensors might not be required if sensor and data fusion is used. The second effect to mention is the possibility to design a sensor for an information environment in which the accuracy required of the specific sensor decreases. Sensor fusion allows the design of cooperative systems where single sensors assist each other and are designed to function as a set. This concept translates into virtual sensors and allows for much simpler units.
2.2
Trends
The bottom line of the past years’ MST device and market development is that there will most likely not be a specific new “killer application” propelling MST device technology and market penetration onto a new level. There is a clear trend towards:
- Consolidation of existing sensor technologies – future developments focus on evolution rather than entirely new concepts; most of the known sensing mechanisms have, where suitable, been transferred to MST technology
- Evaluation of the potential of sensor and data fusion concepts in order to a) gain additional, otherwise inaccessible information or b) reduce the number of sensors required and make use of the increasing redundancy of existing hardware sensors
- Standardisation of signals and interfaces in order to reduce cost and improve exchangeability
- Improved communication technology in order to remove wire harnesses (hostile environments plus cost issues)
In addition, a few technological and device challenges will remain, such as pre-crash sensing.
2.3
Global Innovation Networks
The automotive industry’s R&D activities have undergone drastic changes over the past decade. The increasing complexity of future system components has led to research and development projects being addressed by teams representing (almost) the entire value and production chain, from component/chip supplier through 1st tier to OEM. Establishing close communication between all the partners involved has been one of the major accomplishments of the European automotive and MST industry. OEMs have undergone a tremendous learning process, realising that for the successful use and implementation of MST components and technology it is essential to be involved in the definition of current requirements and future needs. In consequence, a project engineer already has to have very wide knowledge in order to coordinate such multi-faceted developments. Over the coming decade this development will move to the next stage. US suppliers have already started buying technology or outsourcing developments, chiefly to European facilities, due to a) the higher commercialisation rate of MST products in Europe (Germany), b) higher production and reliability competence and c) lower IP barriers compared to the US. R&D and technology competence is progressing more and more towards a global scale. With the increasing need for high-tech components in emerging markets such as China, more and more competence will move into these countries as well. This will eventually lead to R&D not only having to take into account the entire value or production chain, but also having to bridge intercultural barriers, since R&D partners will be spread all over the globe. The impact on the requirements for future R&D engineers and project managers in this field cannot be foreseen at this point in time. Likely scenarios are a) the creation of new positions/functions for international project managers and b) increased skill sets, including intercultural competence, for engineers.
Interestingly enough, international logistics companies such as Danzas have become pioneers in excelling at intercultural competence as a competitive advantage: making it part of their company culture allows them to provide highly efficient and nationally customised service.
3
Summary and Outlook
MST for automotive markets and applications has certainly arrived at a watershed in its development. From a market perspective, automotive applications will remain an important cornerstone of MST development and products, but will make up an ever smaller proportion of the market. Due to the remaining high innovation pressure it can still be expected that new devices, technologies and applications will keep being introduced, even if maybe at a slower pace than in the previous decade. It can be expected that harsh-environment-compatible MST and high-complexity MST or sensor systems will largely contribute to this remaining growth. Biomedical and consumer market applications will, however, outclass automotive MST in market volume (total market and quantities) by far. After a few years of experimenting with the new technology, Germany as the key driver market for automotive MST has returned to a more conservative approach towards new technologies and new MST devices: unless a device is a “need-to-have” item, the cost associated with its introduction (investment, initial failure rates, etc.) does not justify or warrant the potential competitive advantage. In line with this development, most major OEMs have to a large extent pulled out of earlier MST/MEMS commitments and R&D programs. Whereas until about five years ago the goal of many OEMs was to be the technology leader when introducing new systems, today the primary objective appears to be “first-to-follow”, in order to be able to monitor the customer reaction as well as initial failure modes. BMW appears to be an exception to this rule with the introduction of features such as the iDrive concept. It remains to be seen whether some emerging markets, such as China with its currently high customer desire for high-tech cars, will keep up their momentum or undergo a similar saturation and slowing phase. Legal requirements (e.g.
new emissions regulations for diesel engines taking effect in 2007/2008) remain a powerful driver for continued innovation in this field. Strangely enough, even though some of the regulations are stricter in the US, the European automotive OEMs still seem to be implementing a much larger number of new high-tech systems than their US counterparts. Likely reasons are the extreme price competition and pressure on US-brand cars as well as the reduced necessity for MST high-tech devices due to lower traffic density, lower vehicle speeds, etc. Current trends point towards consolidation and data fusion – i.e. towards using the systems already in place before adding new components, which increase the overall car electronic system complexity and add potential failure modes.
Another interesting issue is future used-car reliability. The complexity of today’s cars leads to high failure rates and recalls already during the early product life covered under warranty. We are already approaching a state where cars tend to behave similarly to computer hardware and software (i.e. problems with tire pressure monitoring systems requiring a “system reboot”, or software glitches in navigation and MMI computers leading to partial loss of central car comfort functionalities such as HVAC or radio). Taking into account the complexity and cost of replacing some of the units (replacing a navigation system can cost up to 7.000 Euros if a wire harness needs to be replaced), it remains an open question whether used high-tech cars will be affordable at all, given the high maintenance cost. In the US, car manufacturers have already reacted with “certified warranties” that allow up to 4 years of used-car warranty on all parts and labor. It can be assumed that, until the reliability of the new systems has reached a level comparable to that achieved in airplane control systems, cars built about 10 years ago will probably represent a peak in reliability: their mechanical systems have matured to very high reliability, but they are not yet so loaded with electronic components that their electrical/electronic reliability suffers. Finally, the increasing globalisation of the automotive supply industry, in unison with the need to cater for regional customer needs and desires, calls for a new generation of automotive and MST R&D engineers. High intercultural competence will be a key to R&D project and product success. Automotive MST has started to mature and will lose some of its original momentum. Just like car electrical systems, however, it has established itself as an essential component of individual transport systems and will continue to be an economic and innovation motor for the coming decades.
Acknowledgements The authors wish to thank Mr. Goernig (ContiTemic) for the fruitful discussions on MST market penetration and the ongoing open minded exchange of ideas. The authors would also like to thank Mr. Rous for researching and compiling a large proportion of the market material presented in this article.
References
[1] Price Waterhouse Coopers: Supplier Survival: Survival in the Modern Automotive Supply Chain, July 2002
[2] Center for Automotive Research: What Wall Street Wants … from the Auto Industry, April 2002
[3] Chuck Chandler: Globalisation: The Automotive Industry’s Quest for a World-Car Strategy, 2000
[4] Mercer Management Consulting (Eds.): Automobilmarkt China 2010: Marke, Vertrieb und Service entscheiden den automobilen Wettbewerb in China, November 2004
[5] The McKinsey Quarterly, 2002, Ed. 1
[6] Francisco Veloso: The Automotive Supply Chain: Global Trends and Asian Perspectives, Massachusetts Institute of Technology, September 2000
[7] Forsell, U. et al.: Virtual Sensors for Vehicle Dynamics Applications, in: Advanced Microsystems for Automotive Applications 2001, Springer-Verlag, 2001
Prof. Dr.-Ing. Florian Solzbacher Department of Electrical Engineering and Computing University of Utah 50 S Central Campus Drive UT 84112 USA
[email protected] Dipl.-Ing. Sven Krüger VDI/VDE-IT GmbH Rheinstrasse 10B 14513 Teltow Germany
[email protected] Keywords:
microsystems application, market, deployment, prediction, differentiation, production capacities, sensor and data fusion, competitive analysis, innovation networks, technological challenges
Status of the Inertial MEMS-based Sensors in the Automotive J.C. Eloy, Dr. E. Mounier, Dr. P. Roussel, Yole Développement Abstract Inertial sensor applications are the most active among the MEMS markets. This paper analyzes the future market for accelerometers and gyros. Yole found that between 2003 and 2007 the compound annual growth rate (CAGR) of gyroscopes will be 25%, growing from $348M in 2003 to $827M, and the CAGR of acceleration sensors will reach 10%, growing from $351M to $504M. For the first time, the market for micromachined gyroscopes will exceed the acceleration sensor market in 2005. Both markets are now dominated by automotive applications.
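As a quick sanity check (not part of the paper), the quoted growth rates follow from the standard CAGR formula applied over the four-year span 2003-2007, using the paper's market values in millions of USD:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

gyro_cagr = cagr(348, 827, 4)   # ~0.242, i.e. roughly the quoted 25%
accel_cagr = cagr(351, 504, 4)  # ~0.095, i.e. roughly the quoted 10%
print(f"gyros: {gyro_cagr:.1%}, accelerometers: {accel_cagr:.1%}")
```

Both figures round to the CAGRs stated in the abstract.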
1
Accelerometers, a Market of $504 Million in 2007
The following table (figure 1) shows the accelerometer market forecast for the 2003–2007 time period. The total market has been estimated at $351 million in 2003, $410 million in 2005 and $504 million in 2007. Today, automotive applications – airbag deployment sensing and active suspension – account for 90% of the overall market. The main characteristic of the automotive field is that it requires low-cost chips in the range of $3 to $5 per component. In the accelerometer field, the main manufacturers are Bosch, Analog Devices and Freescale/Motorola. With a yearly production of 40 million cars (CAGR of 0.36%), we estimate that 180 million accelerometers will be necessary in 2005 for the automotive field alone. Figure 2 shows the accelerometer manufacturers’ 2003 market share in $M sales. The first eight manufacturers of accelerometers represent more than 90% of the total market share in number of components. The main manufacturers are Bosch, Analog Devices, Motorola (part of the production is sub-contracted to Dalsa), VTI Hamlin, X-Fab, Denso, Delphi-Delco and SensoNor (now Infineon). In 2003, the total volume of accelerometers for automotive was more than 100 million components, for a more than $300 million market. We should note that Infineon also uses pressure sensors as side airbag sensors placed inside the door structure.
Fig. 1.
Markets for MEMS-based accelerometers 2003-2007
For the airbag application, the specifications are the following:
- ±50g range, auto-calibration and self-test, integration of multi-axis sensing for front shock detection
- Integration of 1 to 5 airbag sensors per car, for several axes of detection
- Price of a 1-axis sensor: < $2
- Price of a 3-axis sensor: $5 to $6
Fig. 2.
Accelerometers manufacturers’ 2003 market share (all applications)
The market shares in 2003 for airbag sensors were the following: ADI with 27%, Freescale with 15%, Bosch with 35%, Delphi with 9% and Denso with 8%. Today, the trends are towards more sensors, in order to allow focused activation of airbags, and towards the integration of several axes of detection in one packaged sensor. For active suspension, a ±3g acceleration sensor with high accuracy is necessary; in 2003, VTI Technologies had the largest market share. For the ESP application, a ±3g acceleration sensor plus one gyro are required; in 2003, VTI Technologies had the largest market share, followed by Bosch. The business trend is strong demand due to the extended use of security systems for car stabilization.
2
Gyroscopes, a Market of $827 Million in 2007
The following table (figure 3) shows the gyro market forecast for the 2003–2007 time period. The market has been estimated at $348 million in 2003 and $827 million in 2007, corresponding to a CAGR of about 25%. Today, as with accelerometers, automotive applications account for 90% of the overall market, for:
- Rollover detection
- Navigation (GPS)
- Antiskid systems
Fig. 3.
Markets for MEMS-based gyroscopes 2003-2007
The main characteristic of the automotive field is that it requires low-cost chips. The gyros’ ASP is in the range of $15 to $30, which is still considered a high price for automotive components, thus restricting the use of gyros to high-end cars. We estimate that 48 million gyros will be necessary in 2005.
Automotive applications make up more than 90% of the market for gyroscopes, with:
- Rollover detection
- Navigation (GPS)
- ESP
This field requires low-cost gyros in the range of $15 to $30 per component, an ASP that is considered high for automotive components. For car applications, the main players are SSS-BAE, Bosch and others, and 2005 production is estimated at more than 50 million units.
Fig. 4.
Market shares for gyros manufacturers in 2003 (all applications)
For the rollover detection application, detection of angular rates as low as 0.5°/s is necessary. In 2003, Matsushita was the main supplier, followed by SensoNor/Infineon. It is mainly a Japanese market with few applications in North America, and the market evolution in 2005 is unclear. For the GPS application (bridging loss of the GPS signal in cities, tunnels, etc.), the measurement range is ±80°/s. The major players worldwide are Matsushita and VTI, the latter selling a 3x1-axis accelerometer on the US market. There is strong demand for automotive GPS moving from high-end to low-end cars. For ESP, a ±3g acceleration sensor plus one gyro are needed; Bosch and SSS are the main manufacturers. There is strong demand due to the extended use of security systems for car stabilization.
3
Most of the inertial MEMS Devices are Made with Deep Reactive Ion Etching
Regarding the accelerometer micro-structure, 40% of total production is comb-drive accelerometers (representing more than 50 million units in 2004). The companies manufacturing comb-drive accelerometers are Delphi Delco (fewer than 10 million units per year), Denso (10 million units per year) and Bosch. Matsushita also develops comb-drive accelerometers; today, these are at the feasibility stage. The production yield for accelerometers is in the range of 70% to 80%, and about 1800 accelerometers are manufactured on a 6’’ wafer, with an average die size of 8 mm². We estimate that 56% of accelerometers are surface micromachined and 44% are bulk micromachined. However, some players (such as Bosch, AD and others) are using Deep RIE tools in their surface micromachining processes in order to benefit from the high etching rate. Using these data, we calculate that, in 2004, more than 30 Deep RIE tools should have produced about 100 million accelerometers (that is, 75% of the total production). Keeping a conservative scenario (in 2007, the use of Deep RIE will remain at 80% of the total production), we estimate that almost 50 Deep RIE tools will be necessary in 2007. For gyroscopes, the production yield is in the range of 50% today. We estimate that 52% of gyros are silicon or quartz surface micromachined and 48% are bulk micromachined (for example, SSS is using Deep RIE tools), but some other players are using Deep RIE tools in surface micromachining processes in order to benefit from the high etching rate. We then estimate that 70% of the gyros for the automotive market are manufactured using Deep RIE tools. For 2004, we calculate that fewer than 40 Deep RIE tools should have produced 70% of the total gyroscope production. Assuming a conservative scenario for the future, in 2007, 70% of the total production of gyros will still be made using Deep RIE. With this hypothesis, we calculate that about 80 Deep RIE tools will be necessary in 2007 for the production of gyros.
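The accelerometer figures above imply a wafer volume that can be checked with a back-of-envelope sketch (not from the paper; the 75% yield is an assumed midpoint of the quoted 70-80% range, and the per-tool throughput is inferred, not stated):

```python
DICE_PER_WAFER = 1800   # accelerometers per 6'' wafer (quoted above)
YIELD = 0.75            # assumed midpoint of the quoted 70-80% range
GOOD_UNITS = 100e6      # accelerometers attributed to Deep RIE in 2004
DRIE_TOOLS = 30         # the paper's 2004 tool-count estimate

# Good dice per wafer = dice printed * yield; wafer starts follow:
wafer_starts = GOOD_UNITS / (DICE_PER_WAFER * YIELD)
wafers_per_tool = wafer_starts / DRIE_TOOLS  # implied annual throughput
print(f"{wafer_starts:,.0f} wafer starts, "
      f"~{wafers_per_tool:,.0f} wafers per tool per year")
```

Roughly 74,000 wafer starts per year, i.e. on the order of 2,500 wafers per tool per year under these assumptions, which is what makes the quoted tool counts plausible.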
4
Conclusions
The MEMS inertial sensor markets will be widely dominated by automotive applications for the years to come, and new applications (both low-end and high-end) are driven by the availability of adapted-cost devices with the right specifications. We forecast that in 2005, for the first time, the gyroscope market will exceed the accelerometer market. On the MEMS equipment side, the inertial MEMS market growth is an opportunity for DRIE manufacturers, as the development of new applications in the fields of accelerometers and gyroscopes will drive the DRIE market.
J.C. Eloy, Dr. E. Mounier, Dr. P. Roussel
Yole Développement
45, rue Sainte-Geneviève
69006 Lyon
France
[email protected] Keywords:
inertial sensors, accelerometers, gyroscopes, market forecast, deep etching
The Assessment of the Socio-economic Impact of the Introduction of Intelligent Safety Systems in Road Vehicles – Findings of the EU-Funded Project SEiSS S. Krüger, J. Abele, C. Kerlen, VDI/VDE-IT H. Baum, T. Geißler, W. H. Schulz, University of Cologne Abstract Road crashes take a tremendous human and societal toll in all EU member states. Each year, more than 125.000 people are killed and millions more are injured, many of them permanently. The costs of the road safety problem in the EU amount to up to 2% of its gross domestic product. New safety-related technologies are promising instruments for reducing the number of accidents and their severity. The study delivers an overview of safety-related functions, identifies key variables and develops methods for the assessment of their socio-economic impact.
1
Introduction
Transport is a key factor in modern economies. The European Union, with its increasing demand for transport services, needs an efficient transport system and has to tackle the problems caused by transport: congestion, harmful effects on the environment and public health, and the heavy toll of road accidents. The costs of accidents and fatalities are estimated at 2% of gross domestic product in the EU (EC 2003). It is the policy of the European Commission to aim at a 50% reduction of road fatalities by 2010. There is convincing evidence that the use of new technologies can contribute significantly to this reduction in the number of fatalities and injuries. For this reason the eSafety initiative aims to accelerate the development, deployment and use of intelligent vehicle safety systems (IVSS). Intelligent safety systems for road vehicles are systems and smart technologies for crash avoidance, injury prevention, and the upgrading of the road holding and crash-worthiness of cars and commercial vehicles, enabled by modern IT. Governments as well as marketing departments in the automotive industry face the dilemma of having to decide on new technologies or new paths of research and
development, respectively, before reliable data can exist. For this reason, it is essential to evaluate the safety impact of new technologies before they are marketed. Being aware of methodological problems, it is necessary to provide a basis for rational and convincing decisions. Therefore the eSafety initiative as well as the European Commission are asking for a sound data base and decision supporting methodology. Facing the dilemma of not being able to account for the effects of the introduction of intelligent vehicle safety systems in advance, the problem stays evident for the evaluation of components or technologies. Therefore it is a challenging task to define the impact of the introduction of a specific technology, because to the general impact assessment problem of a vehicle function the exchangeability of technologies is added. The use of technologies like e.g. microsystems technology connects specific costs to technical possibilities. Other technologies will have different limitations and other advantages. Looking for break even points, it becomes very important to get a better understanding of the financial scenarios. Therefore, independent from stakeholders like scientists, suppliers, original equipment manufacturers, insurance companies, or public authorities it becomes very important to find measures to access and compare technologies, functions, and approaches. 
The European Commission initiated this exploratory study in order to
provide a survey of current approaches to assess the impact of new vehicle safety functions,
develop a methodology to assess the potential impact of intelligent vehicle safety systems in Europe,
provide factors for estimating the socio-economic benefits resulting from the application of intelligent vehicle safety systems; these factors, such as improved journey times, reduced congestion, infrastructure and operating costs, environmental impacts, medical care costs etc., will be the basis for a qualified monetary assessment,
identify important indicators influencing market deployment and develop deployment scenarios for selected technologies/regions.
2
State of the Art
Investigations of the socio-economic impact of intelligent vehicle safety systems began in the late 1980s. Since then, the benefits of IVSS technologies and services have been assessed on the basis of more than 200 operational tests and early deployment experiences in North America, Europe, Japan, and Australia (PIARC 2000). Three broad categories of evaluation approaches are currently being used (OECD 2003):
empirical data from laboratory measurements as well as real-world tests
simulation
statistical analysis
Several projects funded by EU Member States or the European Commission, as well as studies by the automotive industry and equipment suppliers, have already provided some data on the impact of intelligent vehicle safety systems. A large number of projects deal with technological research and development and provide a basis for further progress in the field (e.g. AIDE, CARTalk2000, CHAMELEON, EDEL, GST, HUMANIST, INVENT, PReVENT, PROTECTOR, RADARNET, SAFE-U). Several projects focus on accompanying measures in order to develop the sectoral innovation system and strengthen networks and co-operation (e.g. ADASE II, HUMANIST). Some projects reflect on the implementation of safety systems and on measures to support the application of new technologies (e.g. ADVISORS, RESPONSE). Finally, a number of projects discuss the costs and benefits of the technologies investigated (ADVISORS, CHAUFFEUR, DIATS, E-Merge, STARDUST, TRL report). However, a systematic assessment and coherent analysis of the potential socio-economic impact of intelligent vehicle safety systems is not yet available. In addition, such an analysis is further complicated by the fact that many systems are not yet widely deployed. When reflecting on the socio-economic effects of IVSS, it is necessary to distinguish different levels of impact: operational analysis dealing with the technical assessment of operational effectiveness, socio-economic evaluation, and strategic assessment. This study argues that an assessment of the socio-economic impact of intelligent safety systems has to combine these different evaluation approaches.
3
Methodology of the Study
The suggested methodology consists of 14 major steps (refer to figure 1). It includes technology, function, market, and traffic inputs, and it allows differentiation at the level of individual member states. The relevant steps of the SEiSS methodology are:
1. Determination of the technology and functions interaction matrix (IVSS)
2. Assessment of functions interaction
3. Calculation of the collision probability for IVSS, differentiated by accident type
4. Estimation of the penetration rate for IVSS following specific market deployment scenarios
5. Prediction of the number of accidents for a specific IVSS setup
6. Prediction of the accident severity for a specific IVSS setup
7. Calculation of accident costs
8. Prediction of congestion
9. Calculation of time costs based on congestion
10. Calculation of vehicle operating costs
11. Calculation of emission costs, differentiated into CO2 and pollutants
12. Differentiation of cost effects with and without IVSS
13. Calculation of IVSS-specific costs
14. Calculation of the benefit-cost ratio

4
Technology, Safety Functions and System Interaction
Technology is a prerequisite for an automotive function: on the basis of new technologies, a new safety function may be introduced. However, we face the problem of system interaction between different safety technologies, so it is not possible to rely on the evaluation of single technologies in order to assess the system behaviour (step 2). It is therefore necessary to define the interacting areas. In the model, functions are correlated with a time pattern, i.e. the effect of a function is assessed with regard to its time slot in accident mitigation and its effectiveness. A specific IVSS set-up translates into specific time patterns for the different accident types, and this time pattern correlates with the collision probability (step 3). To calculate the accident severity (relevant for step 6), the same time-related pattern as in step 3 is used. The severity of an accident depends on the impact energy, which directly correlates with the impact speed and the absorption potential of the passive safety systems; the latter can be translated into additional time for the specific accident type.
Fig. 1. Relevant steps for SEiSS methodology

5
Market Deployment
The main goal of integrating the market perspective into the proposed model is to find a way of forecasting the diffusion of intelligent vehicle safety systems within the vehicle fleets of the countries considered, in other words the market deployment (step 4). The target figure for capturing the market perspective is therefore the rate of vehicles equipped with intelligent vehicle safety systems. This figure influences the socio-economic impact of IVSS in two ways. Firstly, vehicles that are equipped with IVSS, as well as vehicles and other road users involved in crashes with them, profit from the crash-avoiding or crash-outcome-minimising effects of IVSS; only the equipped vehicles therefore influence the overall socio-economic impact. Secondly, some IVSS may need a certain equipment rate to fully exploit their potential benefits. In particular, car-to-car communicating systems need a minimum number of equipped cars for the technology to function correctly. To forecast market deployment, i.e. to calculate an equipment rate at a given point in time, the time of availability of an intelligent safety system has to be determined, the time of market introduction has to be assessed, and a probable path of diffusion into the market has to be decided upon.
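Such a diffusion path can be sketched, for example, with the classic Bass model, in which adoption is driven by an innovation coefficient p and an imitation coefficient q. The parameter values below are illustrative assumptions, not SEiSS figures.

```python
import math

def bass_adoption(p, q, years):
    """Cumulative adoption share F(t) of the Bass diffusion model:
    F(t) = (1 - exp(-(p+q)t)) / (1 + (q/p) * exp(-(p+q)t)).
    p is the innovation coefficient, q the imitation coefficient."""
    return [(1 - math.exp(-(p + q) * t)) / (1 + (q / p) * math.exp(-(p + q) * t))
            for t in years]

# Illustrative scenario: market introduction in year 0; p and q are
# assumed values, not SEiSS estimates.
fleet_share = bass_adoption(p=0.03, q=0.38, years=range(16))
```

Given the time of availability and market introduction, such a curve can be shifted and scaled to yield the equipment rate at a chosen point in time.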
6
Traffic Influence and Socio-economic Evaluation
Within the methodological framework, a widespread approach for assessing the potential socio-economic impact is welfare-economics-based cost-benefit analysis. The favourability of intelligent vehicle safety systems from society's point of view can be illustrated by confronting the socio-economic benefits with the system costs (investment, operating and maintenance costs). Benefit-cost ratios of more than 1 indicate that the system deployment pays off from a public perspective. The cost-benefit analysis consists of the following calculation procedure:
analyse the impacts of each case by traffic and safety indicators such as traffic flow, vehicle speed, time gaps and headways,
work out the physical dimensions of the traffic impacts, such as total transport time, fuel consumption, level of pollution and number of accidents, for the with-case and the without-case,
calculate the benefits (i.e. resource savings) by valuing the physical effects with cost-unit rates (steps 7 to 11),
aggregate the benefits, determine the system costs (investment costs, maintenance costs, operating costs), and work out the benefit-cost ratios (steps 12 to 14).
It is necessary to specify the general framework conditions for the analysis and to define the relevant alternatives to be compared (without-case: IVSS is not used; with-case: IVSS is used). Furthermore, the proposed methodology requires the calculation for three different speed patterns covering urban, rural and highway traffic. The whole approach described so far has to be carried out separately for cars and heavy-duty vehicles. In addition, differences in market deployment, vehicle mileage, safety system relevance, accident paths, and cost figures call for separate calculations. The resulting benefit-cost ratios for cars and heavy-duty vehicles in urban, rural and highway traffic, brought together, become the overall benefit-cost ratio of a specific IVSS set-up. The calculations can be made for a worst-case and a best-case scenario, leading to a defined bandwidth of the benefit-cost ratio.
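As a minimal sketch of the final aggregation (steps 12 to 14), the benefit-cost ratio can be computed from valued resource savings and system costs. All category names and numbers here are hypothetical placeholders, not SEiSS data.

```python
def benefit_cost_ratio(savings, system_costs):
    """Aggregate the valued resource savings (benefits) and divide them
    by the total system costs. Both arguments are dicts of cost
    categories in the same monetary unit (e.g. million EUR per year)."""
    total_benefits = sum(savings.values())
    total_costs = sum(system_costs.values())
    return total_benefits / total_costs

# Hypothetical with-case vs. without-case savings for one IVSS set-up.
savings = {
    "accident costs avoided": 180.0,           # steps 5-7
    "time costs saved": 40.0,                  # steps 8-9
    "vehicle operating costs saved": 15.0,     # step 10
    "emission costs saved (CO2, pollutants)": 5.0,  # step 11
}
costs = {"investment (annualised)": 120.0, "operation and maintenance": 30.0}

bcr = benefit_cost_ratio(savings, costs)  # a value above 1 indicates public favourability
```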
7
Additional Considerations
So far, the introduced methodology follows a clear path towards a comprehensive approach that integrates system interaction and different disciplinary views on the problem. However, for the further development of the model it is essential to account for more detailed reflections in specific fields, such as:
Speed-independent correlations for system interaction. The most important speed-independent scenario is the "out of control" scenario: skidding, for example, has a significant effect on accidents and should therefore be adequately taken into consideration. Other effects, such as human perception, might play a less important role in the assessment of IVSS interaction. For these cases a specific handling has to be defined.
Minimum penetration rate for cooperative systems. Functions like hazard warning based on car-to-car communication need a sufficient number of systems in the market to work. This aspect can be integrated into the model via the market deployment considerations.
Non-safety effects of the introduction of IVSS. For systems such as ACC, non-safety-related effects are predicted at strong market penetration: traffic flow might decrease because of increased safety margins. This effect would influence congestion, which is an important value in the calculation. Other important effects are energy consumption and pollution, because additional IVSS contribute to these values during operation. The inclusion of this correlation is planned and is indicated by dotted green lines in figure 1.
External parameters. Political influence might change deployment patterns or even the capabilities of intelligent vehicle safety systems. It is not planned to include such scenarios in the model at this stage of development. Differentiation by member state might be done in detail and might be used for the calculation of scenarios.
8
Conclusion
The proposed methodology allows for a comparative assessment of the introduction of different IVSS and, in addition, provides an absolute estimate of the related costs and benefits. It aims at a better understanding of the impact of the introduction of intelligent vehicle safety systems. To contribute to the overall picture, different disciplines have to be combined; a proper understanding of technology, accident causation, statistics, marketing, and traffic influence is therefore needed. Because there are specialists in each of these fields, it is suggested to rely on available data such as professional forecasts. For specific areas, e.g. figures for the definition of accident probability, accident mitigation and accident severity, additional research has to be carried out. Anybody undertaking an impact analysis needs this kind of information and faces the lack of relevant data. For better comparability of the results of different investigations, common databases should be used. So far, socio-economic effects have been calculated for single technologies and functions. The proposed model describes the possibilities of a comprehensive approach that covers the interaction of different technologies and functions as well as the facets of a multi-disciplinary approach.
References
[1] EC (2003): Communication from the Commission to the Council and the European Parliament, Information and Communications Technologies for Safe and Intelligent Vehicles (SEC(2003) 963), http://europa.eu.int/information_society/activities/esafety/doc/esafety_communication/esafety_communication_vf_en.pdf
[2] OECD (2003): Road Safety. Impact of New Technologies, Paris 2003.
[3] PIARC (2000): ITS Handbook 2000, Committee on Intelligent Transport, PIARC Paris 2000.
Sven Krüger, Dr. Johannes Abele, Dr. Christiane Kerlen VDI/VDE Innovation + Technik GmbH Rheinstr. 10b 14513 Teltow Germany
[email protected] [email protected] [email protected] Prof. Dr. Herbert Baum, Dr. Thorsten Geißler, Dr. Wolfgang H. Schulz University of Cologne Institute for Transport Economics Universitätsstr. 22 50932 Cologne Germany
[email protected] [email protected] [email protected]
Safety
Special Cases of Lane Detection in Construction Areas C. Rotaru, Th. Graf, Volkswagen AG J. Zhang, University of Hamburg Abstract This paper presents several methods for the special cases that arise in lane marking detection in construction areas, on both highways and country roads. The system complements lane marking detection methods by treating the special case of temporary yellow markings that override the normal white markings. It uses both position and color to separate the valid markings from old markings that were left in place but carry no meaning for the driver.
1
Introduction
Construction areas on public roads are a permanent source of traffic problems. The special marking of these areas, the smaller lane widths and, above all, the large amount of outdated information (lane markings, traffic signs) without valid semantics raise problems that are not present elsewhere. For a driver assistance system, one important aspect is the ability to distinguish between important and meaningless information in such an area. Most European countries use yellow lane markings that are superimposed on the existing white lane markings in construction areas. Under such conditions a gray-level-based approach is not very useful, since both yellow and white convert to relatively high intensity values, which makes a reliable distinction between them difficult if not impossible. In such situations a common approach is to signal to the driver that an unknown situation was encountered and to give up until better environmental conditions occur. An approach based on color has a greater chance of interpreting the scene, but it still has to solve specific issues. This paper focuses on the specific handling of these situations. The lane detection problem itself is not covered here; the underlying lane detection algorithms are presented in [1]. Lane detection algorithms focus on the detection of lane markings using certain image features. There are many approaches to lane marking detection (see e.g. [2-5]), but few attempts (see [6]) to distinguish between yellow and white markings. Both the difficulty of working with color data and the complexity of the scenes have drastically limited the number of algorithms that promise to assist the driver in construction areas. This paper tries to fill that gap by describing solutions for the most common problems encountered in such environments.
2
Assumptions about the Environment
Inconsistent lane markings (white markings in the right place may or may not have been replaced by yellow markings), missing markings (yellow markings may not be applied at the outer edges of the road) and incomplete markings are only a few of the cases showing the complexity of the environment. This suggests that an approach based strictly on color would be limited in the number of situations it can handle. One way to overcome these problems is to use additional information (for example the road limits in the image) and to make some assumptions that limit the complexity of the system. These assumptions are:
A yellow marking is applied between two traffic lanes (the most common situation). Situations in which yellow markings are on the side of the road and a white marking is present in the middle cannot be treated without using information on the lane size (i.e. a calibrated system, which is beyond the scope of this paper).
The outer white markings (at the left and right sides of the road) can be valid even if there are yellow markings on the street. If yellow markings are located close to them, the white markings should be dropped.
3
Software Implementation
3.1
System Overview
The system input is given by the road feature detection system presented in [1]. It consists of lane markings expressed in picture coordinates as groups of vertical segments, the vertical limits of the road surface, the average (H)ue, (S)aturation and (I)ntensity values of the road surface, and the source H, S, I images obtained from the grabbed RGB image. The system output is given by a trust value attached to each detected marking (0 = not valid, 255 = completely trusted), a flag indicating the presence or absence of yellow markings, and a flag indicating the quality of the detection.
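The input/output interface described above can be sketched as data structures; the names below are illustrative placeholders, not the original implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedMarking:
    """One lane marking as delivered by the detector of [1]."""
    segments: List[Tuple[int, int, int, int]]  # vertical segments in picture coordinates
    hue: float          # average H over the marking area (0..255 remapped)
    saturation: float   # average S over the marking area (0..255 remapped)
    trust: int = 255    # output trust: 0 = not valid, 255 = completely trusted

@dataclass
class LaneAnalysisOutput:
    """Output of the construction-area analysis."""
    markings: List[DetectedMarking]
    yellow_present: bool = False   # presence/absence of yellow markings
    poor_detection: bool = False   # quality flag for the detection
```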
Fig. 1. Histogram of H, S, I components in the detected lane marking areas at day (left) and night (right). Top: only white markings. Bottom: both yellow and white markings.

3.2
Color Information
The system uses the HSI color representation. An analysis of the three components was done in order to decide which criteria can be used to separate the white markings from the yellow ones. In figure 1 two specific situations are presented: the two histograms in the left column were obtained from data taken at daytime, before and within a construction area; the histograms in the right column were obtained from data taken at night. In order to plot all three components (H, S, I) on the same histogram, the respective domains were remapped to the interval 0..255. Each of the components is analyzed below with its specific advantages and disadvantages: Hue: In practice the values for the yellow colors associated with the markings depend on the hardware and software setup. Nevertheless, they are generally distinguishable from the values associated with the white markings. In both lower histograms one can observe a peak of hue components near the beginning of the hue interval; in our experiments the value given by the camera was close to orange. In all cases in which no yellow lanes are present, the hue component of the white markings mostly consists of noisy values.
In the HSI representation white should have S = 0, and accordingly the H component is invalid. It is not always possible to invalidate hue using the saturation information given by the RGB to HSI conversion, because of the inherent acquisition noise (the color camera delivers no true grayscale values, i.e. with S = 0, but values in which S is small yet not negligible). Such H values proved to have little influence on the algorithm; the chosen solution was therefore to use the H values without accounting for the saturation. In the lower histograms of figure 1 one can see the peak that characterizes the yellow markings. Its raw value may not always be high enough to count alone as a criterion for distinguishing between the markings, but hue is still valuable information. Saturation: Comparing the lower histograms with the upper ones, it becomes clear that saturation values above a specific threshold (empirically found to be about 10% of the maximum saturation) are observed only if there are yellow markings. The yellow markings leave a footprint between 15% and 70% of the maximum saturation. In some particular cases this criterion is still too weak: when the yellow markings are shining due to strong sunlight, the footprint tends to be close to 15% while the white areas are somewhere below 10%. Intensity: Depending on the lighting of the scene and the camera setup, yellow and white markings result in very similar intensity levels in the picture. Taking into account the acquisition noise, it is almost impossible to distinguish between the two intensity levels in almost all cases. An exception can be seen in the lower-left histogram: in this case yellow markings that are not highly reflective generate a second group of lower intensity values in the histogram. Since this information is not always reliable, the intensity component is not used at all in this approach. The system starts by building the histograms for hue and saturation.
The number of saturation values greater than 15% of the maximum value is computed. If this number is significant, the yellow flag is set. If the saturation data lies too close to the threshold, the hue is analyzed as well; if no significant percentage of values lies between dark orange and yellow, the algorithm concludes that there are no yellow markings present. If yellow markings are found, the algorithm runs further and marks the white lanes as not trusted (based on their average hue and saturation values). At early stages of development, direct labeling into yellow and white lanes was tried without first evaluating the presence or absence of yellow markings; it produced very noisy results and even fake yellow markings when the markings did not have a strong footprint in the picture. Since singular values are not accurate enough, this approach focuses on evaluating values from all lane markings present in the picture. If the lane marking detector has not detected enough yellow markings (for example because the markings are not continuous or are old), the above-mentioned algorithm does not have enough data and will not perform well. In such cases a more sensitive but still accurate measure for the presence of yellow markings in the picture is needed. The function should be able to recognize a yellow lane marking that is not expected to be long or to have a strong footprint in the picture. Since it complements the other method, it was designed to work well especially in the cases where the other one fails (when the major part of the detected markings is white). The chosen function is based on the weighted deviation of the lane marking H and S values from the average values for all lanes. This function performs very well if the number of segments belonging to yellow markings is less than 10% of the total number of segments; in these cases the saturation and hue of the yellow marking show a significant deviation from the averages. After an extra check that the lane marking color is close to yellow, the algorithm concludes that the lane marking is yellow and that the detection was not accurate enough.
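The saturation/hue test described above can be sketched as follows. The thresholds follow the percentages given in the text; the function name, the share parameter and the remapped hue range for "dark orange..yellow" are assumptions for illustration.

```python
def yellow_markings_present(saturations, hues,
                            sat_frac=0.15, min_share=0.10,
                            hue_lo=10, hue_hi=45):
    """Decide whether yellow markings are present, based on the pixel
    values sampled inside all detected lane marking areas.

    saturations, hues: lists of values remapped to 0..255.
    sat_frac: saturation threshold as a fraction of the full scale (15%).
    min_share: minimum share of pixels that must exceed the threshold.
    hue_lo..hue_hi: assumed remapped hue range for dark orange..yellow.
    """
    if not saturations or not hues:
        return False
    sat_threshold = sat_frac * 255          # 15% of the maximum saturation
    high_sat = sum(1 for s in saturations if s > sat_threshold)
    if high_sat >= min_share * len(saturations):
        return True
    # Borderline saturation: fall back to the hue histogram.
    yellowish = sum(1 for h in hues if hue_lo <= h <= hue_hi)
    return yellowish >= min_share * len(hues)
```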
Fig. 2. Typical yellow markings at day (left) and night (right)

3.3
Position Information
One common situation that occurs in construction areas is illustrated in the right image of figure 2: the yellow markings are applied to the center and right side of the road, but no marking is applied over the old white marking on the left. In such situations, dropping all white markings found in the picture would mean eliminating genuinely valuable information, yet there are no color-based criteria that enable the distinction between the invalid white markings and the valid ones. The only clue is the position with respect to the yellow markings. Two approaches are presented here. The first one uses the results of the road detection algorithm, which returns the image coordinates up to which the road extends; typically these coincide with the last left/right marking. Accordingly, the first algorithm computes the offset between these extents and the positions of the white lane markings that were already marked as invalid by the color separation algorithm. If the result is negative (the lane marking starts beyond the outermost limits of the road), the lane marking is considered valid. There are also cases in which this approach fails. The right image of figure 2 is such an example: the outer right white marking is not valid, since its semantics is overridden by the traffic indicators.
Fig. 3. Diagram of the system
The second algorithm can only be used in situations where at least two yellow lane markings were detected. It works by estimating the average distance between these markings as a first-degree function dx = ay + b, where dx is the relative distance in the picture (in pixels) and y is the vertical picture position. Using this "template" distance, it checks the distance to the closest yellow marking for every white marking. The difference is then compared to 40% of the minimum distance between the yellow markings at that y position in the picture. If it is smaller, the white marking is dropped. If not, the algorithm checks whether the white marking is surrounded by yellow markings; if it is, it is dropped. This approach avoids leaving detected white markings that lie in the middle of a lane (see left image of figure 2) as valid in the output set.
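The relative-position check can be sketched as below, with the 40% tolerance from the text. The plain least-squares fit of dx = ay + b and all names are illustrative assumptions about the original implementation.

```python
def fit_template(yellow_pairs):
    """Fit dx = a*y + b by least squares to (y, dx) samples taken from
    pairs of detected yellow markings."""
    n = len(yellow_pairs)
    sy = sum(y for y, _ in yellow_pairs)
    sd = sum(d for _, d in yellow_pairs)
    syy = sum(y * y for y, _ in yellow_pairs)
    syd = sum(y * d for y, d in yellow_pairs)
    a = (n * syd - sy * sd) / (n * syy - sy * sy)
    b = (sd - a * sy) / n
    return a, b

def white_is_valid(y, dist_to_closest_yellow, a, b, tolerance=0.4):
    """Keep a white marking only if its distance to the nearest yellow
    marking is at least 40% of the template lane width at image row y."""
    template = a * y + b
    return dist_to_closest_yellow >= tolerance * template
```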
3.4
Merging Results
The connection between the algorithms is described below. The "yellow/white separation" algorithm refers to the algorithm described in section 3.2 for the global analysis of the lane markings; the "sense yellow marking" algorithm is the algorithm from the end of that section, used for a deeper analysis of the cases in which the lane detection delivered minimal results. The "position on road" algorithm is the first algorithm presented in section 3.3, and the "relative position" algorithm is the second. The "yellow markings present" flag is obtained from the yellow/white separation algorithm. If it is false, the sense yellow marking algorithm is run to check the conclusion. The flag indicating poor detection quality is set by default to false and is only set to true if the sense yellow marking algorithm ran and concluded that there was at least one yellow lane marking. Figure 3 presents the activity diagram of the system. In short, it works as follows: first, the source lane markings are checked one by one by the yellow/white separation algorithm, which marks all markings with low saturation and non-yellow hue averages as not trusted. The position on road algorithm then restores the trust of those markings that form the lateral limits of the detected road surface. Of these lane markings, the ones that are close to the yellow ones are invalidated by the relative position algorithm. This is the final data released at the output of the system covered in this paper.
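The control flow just described can be summarised in a simplified sketch; the per-stage tests are deliberately reduced to one-liners with illustrative thresholds and names, and stand in for the full algorithms of sections 3.2 and 3.3.

```python
def analyse_markings(markings, road_left, road_right):
    """Simplified orchestration of the four stages. Each marking is a
    dict with keys 'x' (image column), 'hue', 'sat' and 'trust'."""
    # Stage 1: yellow/white separation (global saturation test).
    yellow = [m for m in markings if m["sat"] > 0.15 * 255]
    yellow_present = len(yellow) >= max(1, 0.1 * len(markings))
    poor_detection = False
    if not yellow_present and yellow:
        # Stage 2: "sense yellow marking" fallback on weak evidence.
        yellow_present = True
        poor_detection = True
    if yellow_present:
        for m in markings:
            if m not in yellow:
                m["trust"] = 0                      # distrust white markings
        for m in markings:
            # Stage 3: "position on road" restores road-edge whites.
            if m["trust"] == 0 and m["x"] in (road_left, road_right):
                m["trust"] = 255
        for m in markings:
            # Stage 4: "relative position" drops whites close to a yellow.
            if m["trust"] == 255 and m not in yellow:
                if any(abs(m["x"] - y["x"]) < 20 for y in yellow):
                    m["trust"] = 0
    return yellow_present, poor_detection
```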
4
Experimental Results
The system was tested in both highway and country road scenarios. It showed that most of the encountered situations can be successfully interpreted. The yellow marking detection is stable even in cases where only a few of the markings were delivered by the lane detector. At night, due to the reflective nature of the yellow markings, the results are usually better than at day. The worst cases were encountered shortly after rainfall, when the street surface was still covered with water that greatly reduced the contrast of the markings. If old markings with lower reflectivity were present, the stability of the system was affected.
The algorithm runs in less than 4 ms (all operations described in the paper) on a mobile P4 at 1.7 GHz. This makes it suitable as part of a real-time system.
5
Conclusion & Future Work
Using color information to separate yellow from white markings, complemented with position information, proved to be an effective way of dealing with most situations in construction areas. Future work will include the detection of the specific traffic signaling elements (indicators) present in these areas, to enhance the detection in cases where no markings are present. Temporal sequence information is being examined as one way to improve the stability of the system.
References
[1] C. Rotaru, "Extracting road features from color images using a cognitive approach". Submitted to IEEE Conference on Intelligent Vehicles, Parma, IT, 2004
[2] K. C. Kluge, "Performance evaluation of vision-based lane sensing: some preliminary tools, metrics and results". IEEE Conference on Intelligent Transportation Systems, 1997
[3] D. Pomerleau and T. Jochem, "Rapidly Adapting Machine Vision for Automated Vehicle Steering". IEEE Expert, 1996, Vol. 11, pp. 109-114
[4] S. Lakshmanan and K. Kluge, "LOIS: A real-time lane detection algorithm". Proceedings of the 30th Annual Conference on Information Sciences and Systems, 1996, pp. 1007-1012
[5] M. Bertozzi and A. Broggi, "A Parallel Real-Time Stereo System for Generic Obstacle and Lane Detection". Parma University, 1997
[6] Toshio Ito and Kenichi Yamada, "Study of Color Image Processing Methods to Aid Understanding of the Running Environment"
Dipl.-Ing. Calin Augustin Rotaru, Dr. Thorsten Graf Group Research Electronics, Volkswagen AG Brieffach 1776/0, D-38436 Wolfsburg, Germany
[email protected] [email protected] Prof. Dr. Jianwei Zhang Fachbereich Informatik, AB TAMS Vogt-Kölln-Straße 30, D-22527 Hamburg, Germany
[email protected]

Keywords: color image processing, yellow lane markings, driver assistance, construction areas
Development of a Camera-Based Blind Spot Information System L.-P. Becker, A. Debski, D. Degenhardt, M. Hillenkamp, I. Hoffmann, Aglaia Gesellschaft für Bildverarbeitung und Kommunikation mbH Abstract The development of a camera-based Blind Spot Information System (BLIS), from the functional requirements up to the final product design, will be described. The paper focuses on the software system, while recognizing that various aspects of the hardware architecture play an important role. Different constraints for the successful execution of such a project will be outlined. Finally, the capabilities of the driver assistant system will be demonstrated.
1
Introduction and Overview
The product development of a market-relevant camera-based driver assistant system is still a challenge. The hardware demands of price, design and performance seem to contradict the functionality requirements and the high performance needs of an image processing software system. Chapter 2 covers this issue in more detail. The authors intend to describe how this contradiction was resolved in the development of a Blind Spot Information System (BLIS). The development history from the specified functionality (chapter 2.4) and the selected hardware architecture (chapter 2.3) to the resulting software design (chapter 4) will be traced. Different constraints for the successful execution of such a project will be outlined, such as:
a prototype-oriented development strategy, in order to meet and demonstrate customer requirements at every project stage (see chapter 3.2),
the generation of an appropriate video database representing different environmental conditions and driving situations (see chapter 3.3); this database is crucial for the reproducibility and the proof of progress of the algorithms,
the set-up of different testing environments and testing strategies for the verification and validation of the system under laboratory and field test conditions (see chapter 3.5).
Safety
In order to reduce development time, it is necessary on the one hand to use a flexible software development platform in the laboratory as well as in the field. On the other hand, due to hardware limitations, it is necessary to create a compact and simple system. Our preferred solution will be outlined in chapter 3 (Development Process Strategy). Examples of the quality of the driver assistant system will be given in chapter 5 based on a number of traffic scenarios.
2 Motivation
2.1 Aglaia GmbH – A Mobile Vision Company
Since this paper is based on the accumulated knowledge of Aglaia GmbH Berlin employees, the company will be briefly presented. Aglaia is an independent hardware and software development company for camera-based driver assistant systems. It was founded in 1998 by professionals in industrial and automotive real-time image processing applications. Today, Aglaia sells its own automotive products, such as high-dynamic CMOS cameras (including stereo cameras), CAN/LIN bridges and special development and testing tools. The development of customer-specific prototypes up to systems ready for serial production is also an essential part of the business concept.
2.2 The Need for a Blind Spot Information System
The well-known blind spot, outside the peripheral vision of a driver, is responsible for an estimated 830,000 accidents per year in the United States, according to the National Highway Traffic Safety Administration [1]. As reported in [2], one third of all accidents in Germany outside urban areas can be traced to a lane change. Unfortunately, there are no reliable statistics available at the European level on this type of accident. It can be seen, however, that the European Commission is making great efforts to reduce accidents that can be traced to the blind spot. On November 10, 2003, the European Parliament and Council adopted a new directive (Directive 2003/97/EC) on rear-view mirrors and supplementary indirect vision systems for motor vehicles [3]. This directive will improve road user safety by upgrading the performance of rear-view mirrors and accelerating the introduction of new technologies that increase the field of indirect vision for drivers of passenger cars, buses and trucks.
Even with rear-view mirrors, there is always the risk of blind spots when driving a car, especially in situations where the driver is inattentive and starts a lane change manoeuvre anyway. Since drivers have more distractions than ever and roadways are getting more congested, it is quite difficult to be aware of the current traffic situation around one's own car at all times. The described situation is even worse with trucks due to their size and shape. In addition, motorists are getting older and are less able to swing their necks around to observe their blind spots. In order to make driving more comfortable and, in particular, safer with respect to the blind spot, a camera-based driver assistant system is introduced. When another vehicle enters the monitored zone, the driver immediately gets a visual indication that another vehicle is in the adjacent lane beside his/her own car. This information gives the driver a better basis for making the right decision, especially in case of a lane change. Both sides of the car are monitored in the same way.
2.3 Hardware Requirements
One of the key points of such a product is the choice of the sensor. For the described Blind Spot System, a CMOS video camera is used on each side of the vehicle. Generally speaking, a video sensor has the following advantages in comparison to other sensors:
- The sensor is passive. Sun or headlight illumination is used: no electromagnetic emission, no legal restrictions.
- The infrastructure of the road environment is designed for human visual perception (lane markings, traffic signs, etc.). Thus, it is very suitable for visual processing.
- Systems can be designed with standard electronic components.
- The information in one image is abundant, and thus a great deal of information can be extracted in a short time.
- CMOS technology offers a high dynamic range. Thus the images have comparable quality independent of weather and lighting conditions. In addition, the sensor characteristics can be adjusted according to the functional requirements of the system.
- The sensor, together with the optics, is very compact and offers a small package size including the processing unit.
Due to the small package size, the complete system can be integrated into the mirror base. This installation is especially appropriate for the sensor, because vibration is minimal and that part of the mirror is always fixed and quite well protected from any sort of mechanical damage.
In order to minimize the size and hardware costs of such a product, the hardware performance, especially processor frequency and memory, is very limited. The key data:
- 200 MHz Texas Instruments floating point DSP
- 256 KBytes DSP cache (for code and data)
- 1 MByte Flash RAM (for code and parameters only)
- no additional external RAM
As an important interface to the car, the BLIS is connected to a LIN-Bus and a bi-directional data exchange is performed. Data from other sensors of the vehicle, as well as data from the other BLIS module, are required in order to meet the functional requirements. In particular, the information from the wheel speed sensors is utilized. Figure 1 shows one of the early hardware versions.
Fig. 1. BLIS Hardware
2.4 Functional Requirements
The main functional requirements will be described in the following paragraph. It has to be taken into account that all requirements apply to both BLIS systems, independent of the side on which they are installed. One's own car is called the subject vehicle, while all other vehicles are called object vehicles. Position and speed values for object vehicles are very important requirements. The system shall detect all vehicles entering the monitored detection zone with a certain relative (negative or positive) speed. That means the system must detect vehicles approaching from behind as well as vehicles sliding back when they are overtaken by the subject car. The monitored detection area is divided into a "must detect" zone and a "may detect" zone (see figure 2). If a vehicle is within the "must detect" area and fulfils all warning requirements, BLIS must issue a warning on the appropriate side.
Fig. 2. Detection Zones
The system should be designed in such a way that it is able to detect passenger cars, trucks (also with trailers) and buses as well as motorbikes, in both daylight and darkness. It may also detect bicycles. At the same time, it should not react to parked vehicles, roadside fences, crash barriers, lampposts and so on. It is required that the BLIS system also works in bad visibility conditions (e.g. bad weather). If this is not possible, the system should inform the driver of this fact. The system shall also detect situations in which the optical path is blocked, e.g. by dirt, ice and so on. The system must detect relevant vehicles very quickly in order to inform the driver without significant delay, which makes a processing speed of 24 frames per second necessary. Since the system is installed for the lifetime of the vehicle, it shall deliver the required performance at all times, independent of services at a garage (e.g. dismantling of the complete mirror) or full load situations. The Blind Spot Information System shall work on motorways and rural roads as well as on urban streets.
3 Development Process Strategy
3.1 Overview
As can be seen in figure 3, a prototype-oriented software development process is used. The parts marked in yellow will be described in more detail in the following chapters. The division into prototype and firmware/framework ensures a productive development process without dealing with the restrictions of the target hardware in terms of computing power, programming environment and availability. Therefore, the target hardware can be developed simultaneously and is introduced into the project at a later time. The prototype development itself will be described in chapter 3.2. The video database represents different environmental conditions and driving situations (see chapter 3.3). This database is crucial for the reproducibility and proof of progress of the algorithms. Based on that database, the test system evaluates the prototype and should reflect how well the algorithms fulfil the specifications (see chapter 3.5). As soon as the target hardware is available, the prototype code can be ported onto the target (firmware code), while the "operating system", the so-called framework, has to be implemented exclusively for the hardware. The porting process is described in chapter 3.4.
Fig. 3. Strategy Overview
3.2 Prototype-Oriented Software Development Process
The prototype development process consists of several successive phases (see figure 4): mission definition (goal and scope of the project), feature set definition (features for the next prototype version/milestone), (re)design, implementation, evaluation (verification of functionality with the test system, estimation of processing and memory consumption) and the optional customer review. This is an iterative process, whereby the feature set can be refined and enriched in every cycle according to the specification. At every milestone the prototype meets the defined feature set for that development phase, which can be demonstrated either with a test vehicle or in the laboratory.
Fig. 4. Prototype Development
In order to implement the design of the system, which follows the specified functionality, a rapid development tool is used. At Aglaia GmbH, a special automotive software development platform (Cassandra®) is the basis of nearly all developments. The test system is also an integral part of that platform. Thus, it is possible to evaluate the algorithms easily and adapt them quickly if necessary. The platform can be used very flexibly, without significant modifications to the configuration, both on the test vehicle for field tests and in the laboratory with video sequences from the database.
3.3 Database
The video database consists of two parts. One part reflects the requirements and thus the typical driving behaviour of an average driver. In the case of BLIS, the database contains motorway, rural and city driving situations to an equal degree, both during daytime and nighttime. This part of the database is called the test database. The second part of the database is used for software development. Especially difficult traffic scenarios, which are very challenging for the algorithms, can be represented more strongly in this part than other scenes. This second part is called the development database.
The development progress can be measured based on the development database, while the test database ensures that this progress does not have negative side effects on the overall performance. In order to precisely reproduce and forecast the behaviour of the system on the target, all additional information that will be used later on the final hardware must be recorded with the correct timing behaviour: other sensor signals from the LIN-Bus, all camera images without any loss, etc. This is done with the Aglaia Drive Recorder software, also based on Cassandra®. For specific traffic scenarios, the real-world video database is not sufficient. In these cases, artificial, computer-generated traffic scenarios can be used. All information about the scene is automatically available (e.g. for later tests, see chapter 3.5). Also, with simple changes in the configuration and parameter set, a lot of different situations can be created. Only a certain fraction of the database should consist of simulated scenes, because they can model real scenarios only to a certain extent. Figure 5 shows an image from a nighttime simulation.
Fig. 5. Nighttime Simulation
The database can also be used to support and test the porting process, as described in the following chapter.
3.4 Porting Process
As soon as the prototype meets the system requirements and the hardware is available, the solution can be ported. This includes a code and design review, a redesign (if necessary) and a re-implementation of the prototype units as firmware units. The correctness of the firmware unit code is proven by direct comparison with the prototype unit response to an identical input. In order to simplify the debug process, the framework can be emulated together with the firmware on a PC. In the final step, the correctness can be proven on the target. Special equipment and software ensures that the input is identical to the prototype input. The framework implements basic functionality on the target, like I/O from camera and LIN-Bus, interrupt handling, task scheduling, system boot procedures, dynamic code administration and so on. This part is usually reusable for other applications on the same hardware.
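The equivalence check between prototype and firmware units can be sketched as follows. The helper name, the tolerance and the toy fixed-point "firmware" unit are illustrative assumptions, not the project's actual tooling:

```python
def check_unit_equivalence(prototype_fn, firmware_fn, test_inputs, tol=1e-6):
    """Compare a ported firmware unit against the prototype reference on
    identical inputs; report the first mismatch, if any."""
    for i, x in enumerate(test_inputs):
        ref, out = prototype_fn(x), firmware_fn(x)
        if abs(ref - out) > tol:
            return False, i, ref, out
    return True, None, None, None

# toy units: prototype in float, "firmware" in 8.8 fixed-point arithmetic
proto = lambda x: x * 0.5
firm = lambda x: (int(x * 256) >> 1) / 256.0
ok, idx, ref, out = check_unit_equivalence(proto, firm, [0.0, 1.0, 2.5], tol=1 / 256)
print(ok)  # True: both units agree within the fixed-point resolution
```

On the real target, the same comparison would be driven by recorded input vectors from the database, so that laboratory and on-target runs see bit-identical stimuli.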
3.5 Test System
The test system has the following functions within the project:
- It measures the software development progress.
- It verifies the functionality against the specification.
- It can be used for automated parameter optimization.
The Aglaia test system is implemented in such a way that it is able to process the video database automatically and log the results for later evaluations. A test system needs nominal values. These values represent information from an independent (sensor) measurement of a certain video image. More precisely, it is required that the exact position of every vehicle in the scene be known to the test system. Only based on that information is it possible to evaluate the prototype functionality. In the case of BLIS, a manual evaluation of the test database and also of parts of the development database was performed. Special tools are used in order to automate that task.
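A minimal sketch of such an automated per-frame evaluation against annotated nominal values might look like this. The function name, the position tolerance and the greedy matching scheme are assumptions for illustration; the actual Aglaia test system is not public:

```python
def evaluate_frame(detections, ground_truth, tol=1.0):
    """Match detected vehicle positions (m) against annotated ground-truth
    positions within a tolerance; greedy one-to-one matching.
    Returns (true positives, false positives, missed vehicles)."""
    unmatched_gt = list(ground_truth)
    tp = 0
    for d in detections:
        for g in unmatched_gt:
            if abs(d[0] - g[0]) <= tol and abs(d[1] - g[1]) <= tol:
                unmatched_gt.remove(g)
                tp += 1
                break
    fp = len(detections) - tp      # detections with no annotated vehicle
    fn = len(unmatched_gt)         # annotated vehicles that were missed
    return tp, fp, fn

dets = [(-3.2, 5.1), (-3.0, 12.0)]    # (lateral, longitudinal) in metres
gt = [(-3.0, 5.0)]
print(evaluate_frame(dets, gt))       # (1, 1, 0): one hit, one false alarm
```

Accumulating these counts over the whole database yields the detection and false-alarm rates used to track progress between milestones.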
4 Software Design
4.1 Design Overview
Due to the hardware architecture, only 256 KBytes of cache are available for runtime code and data. Therefore, only a few lines of the whole image frame can be stored at a time. In addition, the variety of possible image content is almost infinite because of the camera-based approach, which has to deal with highly varying environmental conditions. Thus, a central feature of the design concept is an efficient model of a complex environment at every processing step. The amount of data must be reduced at an early stage without loss of crucial information, which is quite challenging.
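The consequence of this memory constraint is that the image must be processed as a stream of rows over a small rolling window rather than as a buffered frame. The following sketch shows the principle only; the window size, the synthetic data and the per-window operation are assumptions, not the BLIS code:

```python
from collections import deque

def stream_process(rows, window=5):
    """Process an image as a stream of rows, keeping only a small rolling
    window in memory (mimicking the on-chip cache limit)."""
    buf = deque(maxlen=window)          # rolling window of image rows
    results = []
    for row in rows:
        buf.append(row)
        if len(buf) == window:
            # toy per-window feature: mean intensity of the window
            flat = [px for r in buf for px in r]
            results.append(sum(flat) / len(flat))
    return results

# 8 rows of a tiny synthetic "image", 4 pixels wide
img = [[y * 10 + x for x in range(4)] for y in range(8)]
print(stream_process(img))  # one value per complete 5-row window
```

Any feature extraction step in such a pipeline must therefore be expressible over a few neighbouring rows, which is exactly the early data reduction the text describes.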
Figure 6 shows a schematic view of the designed system. On the interface side, 12-bit video images from the CMOS camera are processed, along with the wheel speed data from the LIN-Bus. Depending on the lighting conditions and thus the image brightness, the appropriate subsystem processes the video images (day or night subsystem). As a result, hypotheses of vehicles are produced, and position and speed are calculated for each vehicle. A post-processing system (for day and night) assigns each hypothesis to an already tracked vehicle or creates a new one. Finally, the information concerning speed and position is evaluated in terms of the detection area. If all warning criteria are fulfilled, a warning is issued. Thus, every image is ultimately reduced to a single piece of information: LED on or off.
Fig. 6. Software Design
4.2 Day System Design
In order to fulfil the requirements, it is very important to estimate the speed and position of every vehicle as precisely as possible. Moreover, the system shall detect every sort of vehicle, down to the size of a very small car and even a motorbike. As mentioned in chapter 2.3, the system consists of a monocular camera. In contrast to stereo cameras, it is not possible to estimate the real position of a vehicle on the street from a single image alone. Therefore, the well-known stereo from motion approach is used: two successive images are treated as if they were a stereo image pair. Since the car moves between the images, the so-called ego motion of the car has to be taken into account. This is a combination of the speed and yaw rate of the vehicle. The Feature Extraction unit provides special image features for the subsequent processing steps. The features are dynamically extracted within every image and matched between two successive images. Based on the known position and alignment of the camera (external calibration) in conjunction with the parameters for the optical attributes of the camera (internal calibration), it is possible to estimate the so-called time to contact of each feature with the camera plane as well as a motion vector. The next processing unit clusters features into groups which represent vehicles. Only those features are considered which fulfil certain criteria, like position, speed, reliability etc. The clustering step works independently of previous results and generates vehicle hypotheses for each image. Several plausibility tests are applied to each hypothesis. Since the motion flow is dense enough, the position on the street can be estimated by triangulation for each vehicle. Based on the time to contact, the speed can also be estimated. Finally, a probability for each hypothesis is estimated based on several properties.
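The paper does not give its exact formulas. One common way to obtain a time to contact from matched features is from their radial expansion away from the focus of expansion between two successive frames; the feature positions and the 24 fps frame interval below are illustrative assumptions:

```python
def time_to_contact(r_prev, r_curr, dt):
    """Estimate time to contact from the radial image distance (px) of a
    matched feature to the focus of expansion in two successive frames.
    For an approaching object, TTC ~ r / (dr/dt)."""
    dr = r_curr - r_prev
    if dr <= 0:
        return float('inf')   # feature not expanding: no approach
    return r_curr * dt / dr

# a feature moved from 50 px to 55 px from the FOE between frames at 24 fps
dt = 1.0 / 24.0
ttc = time_to_contact(50.0, 55.0, dt)
print(round(ttc, 3))  # 0.458 s until the feature reaches the camera plane
```

Note that this TTC is available without knowing the metric distance; combining it with the triangulated street position then yields the speed estimate mentioned above.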
4.3 Night System Design
Especially on motorways, lampposts are rare. Therefore, the only reliable information available at night is the headlights of the vehicles. A special Headlight Detection unit extracts so-called blobs and measures the size and position of each light. In the next step, lights are grouped into pairs in order to represent passenger cars or trucks. Based on this information, the distance can be estimated. It is assumed that all blobs that are not grouped belong to a motorbike. In order to estimate the distance of the bike, other information has to be taken into account. The described processing step works independently of previous results and generates vehicle hypotheses for each image. Finally, a probability for each hypothesis is estimated based on several properties.
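The pairing and the distance estimate can be sketched as follows. The pairing thresholds, the focal length and the assumed real-world headlight spacing are illustrative values, not the parameters of the actual system:

```python
def pair_headlights(blobs, max_dy=5, min_dx=20, max_dx=200):
    """Greedily group light blobs (x, y in px) into left/right headlight
    pairs: similar image row, plausible horizontal separation."""
    blobs = sorted(blobs)
    pairs, used = [], set()
    for i, (x1, y1) in enumerate(blobs):
        if i in used:
            continue
        for j in range(i + 1, len(blobs)):
            if j in used:
                continue
            x2, y2 = blobs[j]
            if abs(y2 - y1) <= max_dy and min_dx <= x2 - x1 <= max_dx:
                pairs.append(((x1, y1), (x2, y2)))
                used.update((i, j))
                break
    singles = [blobs[i] for i in range(len(blobs)) if i not in used]
    return pairs, singles  # singles are treated as motorbike candidates

def distance_from_pair(pair, focal_px=800.0, track_m=1.5):
    """Range estimate from the pixel separation of a headlight pair,
    assuming a typical real-world headlight spacing (track_m)."""
    (x1, _), (x2, _) = pair
    return focal_px * track_m / (x2 - x1)

pairs, singles = pair_headlights([(100, 40), (160, 42), (400, 80)])
print(distance_from_pair(pairs[0]))  # 800 * 1.5 / 60 = 20.0 m
```

The weakness of this geometry is visible in the code: an ungrouped single blob carries no separation, which is why the motorbike distance needs additional cues.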
4.4 Post-Processing Components
In this chapter, the most important post-processing units will be described briefly. One of them is the Tracking unit. As mentioned in the context of the day and night subsystems, vehicle hypotheses are provided as input to the Tracking unit. The main function of this unit is to assign each new hypothesis to an existing, already tracked vehicle or to create a new one. The longer a vehicle is tracked and confirmed by new hypotheses, the higher its associated probability. With every update of a tracked object, its position and speed are updated based on a recursive estimation filter. Since the hardware provides fixed timing and processing of images, the position of every vehicle can be predicted for the next frame.
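The paper only says "recursive estimation filter" without naming it; an alpha-beta filter is one common lightweight choice under fixed frame timing, and the following sketch uses it with purely illustrative gains and confidence increments:

```python
class TrackedVehicle:
    """Minimal alpha-beta filter tracking longitudinal position (m) and
    relative speed (m/s); one instance per tracked vehicle."""
    def __init__(self, pos, speed=0.0, alpha=0.5, beta=0.1):
        self.pos, self.speed = pos, speed
        self.alpha, self.beta = alpha, beta
        self.confidence = 0.0

    def predict(self, dt):
        """Predicted position for the next frame (fixed dt)."""
        return self.pos + self.speed * dt

    def update(self, measured_pos, dt):
        pred = self.predict(dt)
        residual = measured_pos - pred
        self.pos = pred + self.alpha * residual
        self.speed += self.beta * residual / dt
        # each confirming hypothesis raises the track's probability
        self.confidence = min(1.0, self.confidence + 0.2)

track = TrackedVehicle(pos=10.0, speed=-2.0)
track.update(9.9, dt=1.0 / 24.0)
print(round(track.pos, 3), round(track.confidence, 1))
```

The `predict` step is what allows a new hypothesis to be associated with an existing track: the hypothesis nearest the predicted position is the natural match.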
A final processing unit filters all vehicles that fulfil the requirements concerning speed and position. A global danger level is updated based on the probabilities of the tracked vehicles. This danger level reflects the overall probability that a vehicle is present within the blind spot. If the level is above a certain threshold, a warning is issued and indicated.
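The update rule for such a danger level is not specified in the paper; a plausible sketch lets the strongest track in the zone pull the level up while it otherwise decays, with all constants (rise, decay, threshold) being illustrative assumptions:

```python
def update_danger_level(level, track_probs, rise=0.5, decay=0.8):
    """Fold the probabilities of the tracked vehicles in the zone into one
    global danger level with a simple rise/decay dynamic."""
    strongest = max(track_probs, default=0.0)
    return min(decay * level + rise * strongest, 1.0)

THRESHOLD = 0.6
level, warnings = 0.0, []
for probs in ([0.4], [0.7], [0.9], []):   # per-frame track probabilities
    level = update_danger_level(level, probs)
    warnings.append(level > THRESHOLD)    # the per-frame LED state
print(warnings)  # [False, False, True, True]
```

The decay term gives the warning a short hold time after the vehicle leaves the zone, which avoids flicker of the indication.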
4.5 System Robustness
Additional components ensure the correct functionality of the system. Some important ones will be described in the following. As mentioned in the previous chapters, the quality of the vehicle detection depends on the calibration of the camera. If the car is serviced in a garage, if it is fully loaded or simply because of aging of the vehicle, the external calibration in particular may change significantly. In order to make sure that these scenarios do not affect the functionality, a continuous, dynamic re-calibration of the camera is performed during daytime and nighttime. Since the wheel speeds are processed in order to estimate the ego motion, the plausibility of this data is also constantly checked. As a result, transmission failures on the LIN-Bus will not affect the calculation. Also, some major defects in the wheel speed sensors, or even very different tire pressures, which can influence the yaw rate calculation, can be detected. If the sensor signals are no longer plausible, an appropriate message is presented to the driver. Another unit observes the image quality. For example, the quality can decrease due to bad visibility or because of a blocked sensor (e.g. a lens covered with dirt or ice, or even completely clogged). If this unit calculates a high probability for one of these situations, an appropriate message is presented to the driver. As long as the image quality of only one camera is used, the information is not unambiguous. Reference measurements from other sensors can improve the results significantly. That is why the image quality results of the camera on the other side are taken into account. But even then, some problem situations cannot be clearly determined. Future developments, especially concerning hardware, can improve the described self-diagnostic function.
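Why tire-pressure differences matter for the yaw rate can be seen from a differential-speed yaw-rate estimate; the track width, the deviation band and the plausibility rule below are illustrative assumptions:

```python
def yaw_rate_from_wheels(v_left, v_right, track_width=1.5):
    """Ego yaw rate (rad/s) estimated from left/right wheel speeds (m/s);
    differing effective tire radii (e.g. pressure loss) bias this estimate."""
    return (v_right - v_left) / track_width

def wheels_plausible(speeds, max_rel_dev=0.1):
    """Cheap plausibility check: all wheel speeds should stay within a small
    relative band of their mean during normal driving."""
    mean = sum(speeds) / len(speeds)
    if mean == 0:
        return True
    return all(abs(v - mean) / mean <= max_rel_dev for v in speeds)

print(yaw_rate_from_wheels(19.8, 20.2))            # ~0.267 rad/s
print(wheels_plausible([20.0, 20.1, 19.9, 20.0]))  # consistent speeds
print(wheels_plausible([20.0, 20.1, 14.0, 20.0]))  # one implausible wheel
```

A 2% speed offset on one side from low tire pressure already produces a spurious yaw rate of the same order as gentle cornering, which is why implausible wheel data must be detected and reported.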
5 Results
A bounding box marks the estimated position of each vehicle. The red and blue areas indicate the detection zones (see figure 2). The red circle at the lower left indicates the warning condition.
Fig. 7. Day Heavy Rain
Fig. 8. Day Sunset
Fig. 9. Day Sunny
Fig. 10. Day Motorbike
Fig. 11. Day Snow with Sun-Reflections
Fig. 12. Day Tunnel-Entry
Fig. 13. Night Motorbike (Brightened)
Fig. 14. Night City-Lights (Brightened)
Fig. 15. Night City-Lights (Brightened)
Fig. 16. Night Tunnel (Brightened)
References
[1] Brett Clanton, Detroit News, from http://www.usatoday.com, 2004
[2] Jan C. Egelhaaf, Peter M. Knoll, Night Vision. Innovative Fahrerassistenzsysteme (IIR), 2003
[3] European Energy and Transport Forum, Road Safety, from http://europa.eu.int, 2004
Dipl.-Inform. Lars-Peter Becker Tiniusstr. 12-15, 13089 Berlin Germany
[email protected]
Keywords: driver assistant system, blind spot detection, image processing, camera-based, video-based, headlight detection, stereo from motion
Predictive Safety Systems – Steps Towards Collision Avoidance and Collision Mitigation P. M. Knoll, B.-J. Schäfer, Robert Bosch GmbH Abstract Sensors that detect the vehicle environment are already in use today. Ultrasonic parking aids meanwhile enjoy high customer acceptance, and ACC (Adaptive Cruise Control) systems have recently been introduced in the market. New sensors are being developed at a rapid pace, and on their basis new functions are quickly implemented because of their importance for safety and convenience. With the availability of high-dynamic CMOS imager chips, video cameras will be introduced in vehicles, and a computer platform with image processing capability will exploit their high functional potential. Finally, sensor data fusion will significantly improve the performance of these systems. During the "PROMETHEUS" project at the end of the 1980s, the electronic components necessary for these systems – highly sensitive sensors and extremely efficient microprocessors – were not yet ready for high-volume series production and automotive applications. Now they are available.
1 Introduction
On average, almost every minute a person dies in or because of a road crash. In 2000, more than 90,000 people were killed in road traffic accidents in the Triad (Europe, USA and Japan), leading to socioeconomic damage of more than 400 billion EUR. As a consequence, the EU Commission has defined a demanding goal with its e-Safety program: to cut the number of road fatalities in half by the year 2010. Bosch wants to contribute significantly to this goal by developing driver assistance systems in close cooperation with the OEMs and thus reduce the frequency and the severity of road accidents. In critical driving situations, only a fraction of a second may determine whether an accident occurs or not. Studies [1] indicate that about 60 percent of front-end crashes and almost one third of head-on collisions would not occur if the driver could react half a second earlier. Every second accident at intersections could be prevented by faster reactions. An important aspect of developing active and passive safety systems is, therefore, the capability of the vehicle to perceive and interpret its environment using appropriate sensors, to recognize and interpret dangerous situations, and to support the driver and his driving maneuvers in the best possible way. Microsystems technology plays an important role in the introduction of active safety systems. The sensor technologies are manifold: ultrasonic, radar, lidar and video sensors all contribute to gaining relevant and reliable data about the vehicle's surroundings. Sensor technology and sensor data processing, sensor data fusion and appropriate algorithms for function development allow the realization of functions for accident avoidance and mitigation.
2 Traffic Accidents – Causes and Means to Mitigate or to Avoid Them
Only recently, statistical material has been published [2] showing that the accident probability of vehicles equipped with ESP (Electronic Stability Program) is significantly lower than that of vehicles without ESP. Additional improvement is expected from systems like PRE-SAFE. It combines active and passive safety by recognizing critical driving situations with increased accident probability. By evaluating the sensors of the ESP and the Brake Assist, it triggers preventive measures to prepare the occupants and the vehicle for a possible crash. To best protect the passengers from a potential accident, reversible belt pretensioners for occupant fixation, passenger seat positioning and sunroof closure are activated. As with the interaction with the vehicle dynamics through ESP, collision mitigation means can only be released when a vehicle parameter has gone out of control or when an accident happens. Today, airbags are activated the moment sensors detect the impact; typical reaction times are about 5 ms. In spite of the extremely short time available for the release of accident mitigation means, there is no doubt that airbags have contributed significantly to the mitigation of road accidents and, in particular, fatalities. But due to the extremely short time between the start of the event and the possible reaction of a system, the potential of today's systems is limited. This high accident avoidance potential can be transferred to an even higher extent to "predictive" driver assistance systems. They expand the detection range of the vehicle by the use of surround sensors. With the signals of these sensors, objects and situations in the vicinity of the vehicle can be included in the calculation of collision mitigating and collision avoiding means.
3 Components of Predictive Driver Assistance Systems
Making use of electronic surround vision, many driver assistance systems can be realized. Today, the components for the realization of these systems – highly sensitive sensors and powerful microprocessors – are available or under development with a realistic time schedule, and the realization of the "sensitive" automobile is fast approaching. Soon sensors will scan the environment around the vehicle, derive warnings from the detected objects, and perform driving maneuvers, all in a split second and faster than the most skilled driver. Electronic surround sensing is the basis for numerous driver assistance systems – systems that warn or actively intervene. Figure 1 shows the detection areas of different sensor types.
Fig. 1. Surround sensing: Detection fields of different sensors
An early warning allows an earlier reaction of the driver. Active driver assistance systems with vehicle interaction allow a vehicle reaction which is quicker than the normal reaction of the driver. The following sensors are available or under development.
3.1 Ultrasonic Sensors
Reversing and parking aids today use ultra-short-range sensors in ultrasonic technology. Figure 2 shows an ultrasonic sensor of the 4th generation. The driving and signal processing circuitry is integrated in the sensor housing. The sensors have a detection range of approx. 3 m.
Fig. 2. Ultrasonic sensor, 4th generation
Ultrasonic parking aid systems have gained high acceptance with customers and are found in many vehicles. The sensors are mounted in the bumper fascia. When approaching an obstacle, the driver receives an acoustical and/or optical warning.
3.2 Long Range Radar 77 GHz
The 2nd generation long range sensor with a range of approx. 200 m is based on FMCW radar technology. The narrow lobe with an opening angle of ±8° detects obstacles in front of the vehicle and measures the distance to vehicles ahead. The CPU is integrated in the sensor housing. The sensor is multi-target capable and can measure distance and relative speed simultaneously. The angular resolution is derived from the signals of 4 radar lobes. Series introduction of the first generation took place in 2001. Figure 3 shows the 2nd generation sensor. It will be introduced into the market in March 2004. At that time this sensor & control unit will be the smallest and lightest of its kind on the market. The antenna window for the mm-waves is a lens of plastic material which can be heated to increase availability during the winter season. The unit is mounted in the air cooling slots of the vehicle front end, or behind plastic bumper material, by means of a model-specific bracket. Three screws enable alignment in production and in service.
Fig. 3. 77 GHz Radar sensor with integrated CPU for Adaptive Cruise Control
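The simultaneous measurement of distance and relative speed follows from the FMCW principle: with a triangular chirp, the range term of the beat frequency is common to both ramps while the Doppler term changes sign. The following sketch uses textbook relations with purely illustrative chirp parameters, not Bosch's actual modulation scheme:

```python
C = 3e8  # speed of light, m/s

def fmcw_range_speed(f_up, f_down, slope_hz_per_s, f_carrier):
    """Recover range and relative speed from the beat frequencies (Hz)
    measured on the up- and down-ramp of a triangular FMCW chirp."""
    f_range = (f_up + f_down) / 2.0      # range-induced beat frequency
    f_doppler = (f_down - f_up) / 2.0    # Doppler shift
    rng = C * f_range / (2.0 * slope_hz_per_s)
    speed = C * f_doppler / (2.0 * f_carrier)
    return rng, speed

# illustrative numbers: 150 MHz sweep in 1 ms (slope 1.5e11 Hz/s), 77 GHz carrier
rng, spd = fmcw_range_speed(f_up=95_000, f_down=105_000,
                            slope_hz_per_s=1.5e11, f_carrier=77e9)
print(round(rng, 1), round(spd, 2))  # 100.0 m, 9.74 m/s closing speed
```

With several targets, each ramp yields several beat frequencies, and the multi-target capability mentioned above comes from correctly pairing up- and down-ramp frequencies per target.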
The information from this sensor is used to realize the ACC function (Adaptive Cruise Control). The system warns the driver against following too closely, or automatically keeps a safe distance to the vehicle ahead. The set cruise speed and the safety distance are controlled by activating the brake or accelerator. At speeds below 30 km/h the system switches off with an appropriate warning signal to the driver. In future, additional sensors (video, short range sensors) will be introduced in vehicles. They allow a plurality of new functions.
3.3 Short Range Sensors
Besides ultrasonic sensors, 24 GHz radar sensors (Short Range Radar (SRR) sensors) or lidar sensors can be used in future systems to build a "virtual safety belt" around the car with a detection range between 2 and 20 m, depending on the specific demands of the function. Objects are detected within this belt, their speeds relative to the vehicle are calculated, and warnings to the driver or vehicle interactions can be derived. The release of the 24 GHz UWB (Ultra Wide Band) frequency was granted in 2002 for the USA. In Europe it has been released by the ECC with some restrictions (use only until mid-2013, with deactivation in the vicinity of radio astronomy sites). As a substitute after the sunset date of this frequency, a new UWB band between 77 and 81 GHz has been released. The SARA consortium is working on a worldwide harmonization of these frequency bands to ensure a widespread application of these components.
3.4 Video Sensor
Figure 4 shows the current setup of the Robert Bosch camera module. The camera head is fixed on a small PC board with the camera-relevant electronics. On the rear side of the camera board, the plug for the video cable is mounted. The whole unit is slid into a windshield-mounted adapter.
Fig. 4.
Video camera module
CMOS technology with non-linear luminance conversion covers a wide luminance dynamic range and will significantly outperform current CCD cameras. Since the brightness of the scene cannot be controlled in the automotive environment, imagers with a very high dynamic range are needed. Due to the high information content of a video picture, video technology has the highest potential for future functions. They can be realized with the video sensor alone, or video signals can be fused with radar or ultrasonic signals. Regarding sensor technology, all aspects of highly sophisticated microsystems technology are covered by these surround sensors. Sensor performance is still at an early stage, and the cost of the components is still too high to allow widespread application. There is a huge potential for sensor performance improvement and cost reduction by introducing new microsystems technologies.
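A non-linear luminance conversion of this kind can be sketched as a logarithmic compression. The range limits and the 8-bit output below are illustrative assumptions, not the parameters of any particular imager:

```python
import numpy as np

def log_compress(luminance, lum_min=1e-3, lum_max=1e3):
    """Map a wide luminance range (here 120 dB) onto 8-bit grey values."""
    lum = np.clip(np.asarray(luminance, dtype=float), lum_min, lum_max)
    scaled = np.log(lum / lum_min) / np.log(lum_max / lum_min)
    return np.round(255.0 * scaled).astype(np.uint8)
```

The logarithmic response preserves relative contrast across bright and dark image regions, which is the property that lets such imagers handle tunnel exits or oncoming headlights.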
4
Driver Assistance Systems for Convenience and for Safety
Figure 5 shows the enormous range of driver assistance systems on the way to the “Safety Vehicle”. They can be subdivided into two categories: convenience systems with the goal of semi-autonomous driving, and safety systems with the goal of collision mitigation and collision avoidance.
Driver support systems without active vehicle intervention can be viewed as a pre-stage to vehicle guidance. They warn the driver or suggest a driving maneuver. One example is the Bosch parking assistant. This system gives the driver steering recommendations in order to park optimally in a parking space.
Fig. 5.
Driver assistance systems on the way to the safety vehicle
Another example is the night vision improvement system. As more than 40% of all fatalities occur at night, this function has a high potential for saving lives. Lane departure warning systems can also contribute significantly to the reduction of accidents, as almost 40% of all accidents are due to unintended lane departure. ACC, which was introduced to the market a few years ago, belongs to the group of active convenience systems and will be developed further toward better functionality. If longitudinal guidance is augmented by lane-keeping assistance (a video-based system for lateral guidance), making use of complex sensor data fusion algorithms, automatic driving becomes possible in principle. Passive safety systems comprise the predictive recognition of potential accidents and pedestrian protection functions. The highest demands regarding performance and reliability are put on active safety systems. They range from a simple parking stop, which automatically brakes the vehicle before reaching an obstacle, to Predictive Safety Systems (PSS).
4.1 Adaptive Cruise Control (ACC) Figure 6 shows the basic function of the ACC system. With no vehicle in front, or a vehicle at a safe distance ahead, the ego vehicle cruises at the speed set by the driver (figure 6, top). If a vehicle is detected, ACC automatically adapts the speed by interaction with brake and accelerator such that the safety distance is maintained (figure 6, middle). In case of a rapid approach toward the vehicle in front, the system additionally warns the driver. If the car in front leaves the lane, the ego vehicle accelerates to the previously set speed (figure 6, bottom).
Fig. 6.
Basic function of ACC
In order to avoid excessive curve speeds, the signals of the ESP system are considered simultaneously, and ACC automatically reduces the speed. The driver can override the ACC system at any time by activating the accelerator or with a short activation of the brake. The current systems of the first and second generation are active at speeds above 30 km/h. To avoid too many false alarms, stationary objects are suppressed. With the improved ACC of the 2nd generation this convenience function can also be used on smaller highways. The next step in functionality will come with the ACCplus function, which will brake the car to a standstill. ACC FSR (Full Speed Range), with a data fusion of the long range radar and a video camera, will allow complete longitudinal control at all vehicle speeds, including urban areas with highly complex road traffic scenery.
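The behavior described above can be condensed into a toy decision function. The thresholds and command names are invented for illustration; a real ACC computes continuous brake and engine torque rather than discrete commands:

```python
def acc_command(ego_speed_kmh, set_speed_kmh, target_distance_m=None,
                safe_distance_m=50.0, min_active_kmh=30.0):
    """Very simplified ACC behavior of the first/second generation."""
    if ego_speed_kmh < min_active_kmh:
        return "switch_off_and_warn"      # below 30 km/h the system disengages
    if target_distance_m is None or target_distance_m > safe_distance_m:
        # free lane: control toward the driver-set cruise speed
        return "accelerate" if ego_speed_kmh < set_speed_kmh else "hold_speed"
    return "brake_to_keep_distance"       # close target: restore safety distance
```

For example, with no target ahead the controller accelerates toward the set speed, while a target inside the safety distance triggers braking.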
Today’s ACC system is a convenience function supporting the driver to drive more relaxed. Starting in 2005, Bosch will extend the functionality of ACC to “Predictive Safety Systems” and thus enter the field of safety systems.
4.2 Video System The above-mentioned video technology will first be introduced for convenience functions that provide transparent behavior to, and allow intervention by, the driver. Fig. 7 shows the basic principle of operation of a video system. The enormous potential of video sensing is intuitively obvious from the performance of human visual sensing. Although computerized vision is still far from achieving similar performance, a considerable amount of information and related functions can readily be obtained by video sensing:
Fig. 7.
Basic principle of a video sensor and functions being considered
lane recognition and lane departure warning, position of the own car within the lane,
traffic sign recognition (speed, no passing, ...) with an appropriate warning to the driver,
obstacles in front of the car, collision warning,
vehicle inclination for headlight adjustments.
New methods of image processing will further improve the performance of these systems [4]. Besides measuring the distance to the obstacle, the camera can assist the ACC system by performing object detection or object classification. Special emphasis is put on the night vision improvement function in the introduction phase of video technology.
4.3 Predictive Safety Systems Inattention is the cause of 68% of all rear-end collisions. In a further 11%, following too closely is a cause besides inattention, and 9% of rear-end collisions are caused by following too closely alone. These statistics [6] show that 88% of rear-end collisions can be influenced by longitudinal control systems. We assume a stepwise approach from convenience systems to safety systems, where the first step has been made with Adaptive Cruise Control.
Fig. 8.
Analysis of the braking behavior during collisions
In almost 50% of collisions the drivers do not brake at all. An emergency braking happens in only 39% of all vehicle-vehicle accidents, and in 31% of accidents with no involvement of another vehicle, respectively. This analysis confirms that inattention is the most frequent cause of collision-type accidents and shows the high collision avoidance and collision mitigation potential of predictive driver assistance systems, if the braking process of the driver can be anticipated or a vehicle intervention can be made by the vehicle’s computer. Predictive safety systems will pave the way to collision avoidance with full intervention in the dynamics of the vehicle. They are partly based on signals derived from additional sensors, allowing the vehicle’s surroundings to be taken into account. From the measurement of the relative speed between detected obstacles
and the ego vehicle, dangerous situations can be recognized at an early stage. Warnings and stepwise vehicle interventions can be derived. The introduction of predictive safety systems will most likely come with convenience systems, where the safety systems use the same sensors. From 2005 on, Bosch will extend ACC, the most important component of predictive safety systems, to Predictive Safety Systems (PSS) in three stages. PSS1 addresses the cases with partial braking: it prepares the brake system for a possible emergency braking. In situations where an accident threatens, it builds up brake pressure, brings the brake pads into very light contact with the brake discs, and modifies the hydraulic brake assist. As a result, the driver gains important fractions of a second until the full braking effect is achieved. In about half of all collisions drivers crash into the obstacle without braking. Bosch is developing the two succeeding generations of Predictive Safety Systems for these kinds of accidents. PSS2 addresses the cases with no braking: it warns the driver of the danger of driving into the vehicle in front. The second generation does not only prepare the braking system; it also gives a timely warning to the driver about dangerous traffic situations, helping to prevent accidents in many cases. To do this it triggers a short, sharp operation of the brakes. Driver studies have shown that a sudden braking impulse is the best way of drawing the driver’s attention to what is happening on the road; drivers react directly to the warning. Alternatively or additionally, the system can also warn the driver by means of optical or acoustic signals, or by a brief tightening of the normally loosely fastened safety belt. PSS3 performs an emergency braking in the case of an unavoidable accident.
The third developmental stage of the Predictive Safety System will not only recognize an unavoidable collision with a vehicle in front; in this instance the system will also trigger automatic emergency braking with maximum vehicle deceleration. This will especially reduce the severity of an accident when the driver has failed to react at all to the previous warnings, or has reacted inadequately. Automatic control of vehicle functions demands a very high level of certainty in the recognition of objects and in the assessment of accident risk. In order to reliably recognize that a collision is inevitable, further measuring systems – such as video sensors – will have to support the radar sensors.
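The three-stage escalation can be sketched with a time-to-collision (TTC) criterion. The thresholds below are invented for illustration and do not reflect Bosch's actual triggering logic:

```python
def pss_stage(distance_m, closing_speed_ms, collision_unavoidable=False):
    """Map a situation to one of the three PSS stages (illustrative)."""
    if collision_unavoidable:
        return "PSS3"                     # automatic emergency braking
    if closing_speed_ms <= 0.0:
        return "none"                     # not closing in on the obstacle
    ttc = distance_m / closing_speed_ms   # time to collision in seconds
    if ttc < 1.5:
        return "PSS2"                     # warn driver, e.g. short brake jerk
    if ttc < 3.0:
        return "PSS1"                     # precondition the brake system
    return "none"
```

The point of the staging is that the milder interventions are tolerable under uncertainty, while PSS3 requires near-certain object recognition from fused sensors.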
5
Outlook
The political institutions have put the right emphasis in their programs to reduce fatalities and road traffic accidents, e.g. the European Union with the e-Safety program and its vision of reducing fatalities by 50% by the year 2010, and the German government with programs such as INVENT. Car makers and suppliers have responded to these programs and are trying to make their contributions to reach the goal [5]. In conjunction with these programs there is a big challenge for microsystems technology: sensor technology and sensor (data) fusion, packaging and interconnection technologies, reliability and data security. The price of the components will play a dominant role: only low-cost components will allow a widespread distribution of safety technologies, which is a precondition for the effectiveness of future accident prevention and mitigation.
6
References
[1] Enke, K.: “Possibilities for Improving Safety Within the Driver Vehicle Environment Loop”, 7th Intl. Technical Conference on Experimental Safety Vehicles, Paris (1979)
[2] Anonymous statistics of accident data of the “Statistisches Bundesamt” (German Federal Statistics Office), Wiesbaden, Germany (1998 – 2001)
[3] Statistics from the “Gesamtverband der Deutschen Versicherungswirtschaft e.V.” (Association of the German Insurance Industry) (2001)
[4] Seger, U.; Knoll, P.M.; Stiller, C.: “Sensor Vision and Collision Warning Systems”, Convergence, Detroit (2000)
[5] Knoll, P.M.: “Predictive Safety Systems – Steps towards Collision Avoidance”, VDA Technical Congress, Rüsselsheim, Germany (2004)
[6] NHTSA Report (2001)
Peter M. Knoll, Bernd-Josef Schaefer Robert Bosch GmbH AE-DA/EL2 Daimlerstr. 9 71229 Leonberg Germany
[email protected]
Datafusion of Two Driver Assistance System Sensors J. Thiem, M. Mühlenberg, Hella KGaA Hueck & Co. Abstract This contribution deals with data fusion between two sensors for driver assistance: a lidar-based ACC sensor and a CMOS-based vision sensor system for LDW. The main properties and experimental results of the proposed approach are described. The first fusion task is to supply the ACC sensor with lane information, obtained by the vision system, to improve relevant target determination and control strategy. Furthermore, the LDW sensor is concurrently able to use and verify ACC target hypotheses by vision-based object detection and tracking. This goes along with an improved estimation of lateral target position and dynamics. Several test drives demonstrate the capability of this multiple sensor system. The main focus lies on information processing in the vision sensor, i.e. lane and object detection. In addition, the fusion method and the data association inside the object detection module are specified.
1
Introduction
In general, the functionality of sensors for Advanced Driver Assistance Systems (ADAS) is optimised according to their primary application. Today’s Adaptive Cruise Control (ACC) in upper class cars depends on the reliability and consistency of the measurement data of a single radar or lidar sensor. For the first generation of comfort-orientated driver assistance this works, because the designated driving areas are highway-like and therefore moderate in the complexity of target vehicle movement, ego-vehicle dynamics and changes of the driving course. Signal processing and target tracking can be done in a model-based way, and scene interpretation can be based on transparent assumptions and constraints covering standard and uniform situations. Though the distance sensors are specialists for longitudinal control tasks, inconsistency or lack of data occurs in non-standard situations, e.g. construction sites with crash barriers and reflectors producing phantom objects, or curves with small radii with the effect of losing the relevant target. In contrast to longitudinal driver assistance, Lane Departure Warning (LDW) requires a different, additional physical sensor principle, in this case a
vision sensor and a processing unit to detect the lane markings in front of the ego-vehicle with the aid of image processing methods. Furthermore, the establishment of this second and heterogeneous ADAS sensor opens the possibility to overcome physical limitations of the distance sensor. Combining the multiple inputs and utilizing the strengths of each sensor results in a better and improved ACC application. In this first step of an ADAS sensor fusion roadmap, an ACC lidar sensor is combined with a CMOS vision sensor system for LDW, using the image processing capability of detecting visual patterns (vehicle rear), the better lateral resolution (vehicle width, lateral motion) and lane position information (ego-position, lane geometry). Both sensors cover the area in front of the car. From a topological view these coverage areas differ in range, horizontal and vertical opening angle. There is also a difference in the physical features delivered by the sensors: the lidar sensor delivers a range map of the detected objects, clustered by the number and arrangement of beams. The grey value image stream of the CMOS camera is processed by an image processing device; the output is edge features representing lane markings, object contours, other contrast patterns, etc. After introducing the system overview and explaining the sensor properties, this paper describes the main vision modules “lane detection” and “object detection”. Hereafter, it focuses on the fusion aspects in section 5, i.e. the MAP fusion method and the data association inside the object detection. Finally, section 6 outlines experimental results.
2
System Overview
Significant in the redundant coverage area are the longitudinal distance measurements of the lidar sensor and the lateral object contour determination by the image sensor (figure 3). The combination of these data allows a precise localization of the target and also its classification, improving e.g. the ACC control strategy. Furthermore, the lane tracking of the LDW benefits from additional stationary objects detected by the lidar sensor. Signal processing in the complementary areas enables the early prediction of new incoming objects (e.g. “cut-in”) and the preconditioning of both systems. In general, sensor data fusion will have a lasting effect on the architecture of driver assistance systems in vehicles; the functional separation into sensors and applications, the sensor communication and common interfaces are only a few, but important, points to mention.
The redundant coverage area of both sensors in front of the car is of importance (figure 1) for the initial target detection (lidar) and tracking (lidar, vision). The area borders are the opening angle of the lidar with 16° in azimuth and a fixed vision sensor range of 70 m for objects. Objects which drift into the vision sensor’s right and left complementary areas will be tracked for a certain lifetime only by image processing, to bypass short-time object loss of the ACC, e.g. in narrow curves. In addition, objects that enter the vision area from left or right (near cut-in) should also be recognized initially by the LDW sensor.
Fig. 1.
Sensor coverage areas and ranges.
The ACC and LDW sensors are arranged in a decentralized fusion network cluster and linked via High-Speed CAN 2.0A (figure 2). In our approach other vehicles are initially detected and tracked by the ACC sensor, and the target attributes are delivered to the fusion CAN. The LDW uses these track lists to verify and measure up the objects again with image processing methods, and returns the determined vision-based track lists plus additional lane information to the ACC for the fusion task. So the idea for this first step in sensor data fusion is to improve the existing ACC application, not to create a new application. Due to this approach the functional structure of both sensors can be preserved in most aspects, which means low modification and variation of the single sensor systems. In particular, sensor-specific signal processing and object tracking tasks remain in each device. A close sensor-internal data connection between these two tasks ensures optimum system performance without facing the problems of a centralized fusion architecture, e.g. high data bandwidth requiring FlexRay, a new system setup in case of sensor enlargement, costs, etc. Due to the fact that the fusion is based on object association, data transfer is limited and a private CAN link fulfils all requirements, in our case with a bus load of about 35% of a 500 kBit/s CAN.
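The quoted bus load can be roughly estimated from frame counts. The frame size and rate below are our own illustrative assumptions (a CAN 2.0A data frame with 8 payload bytes occupies on the order of 111 bits before bit stuffing):

```python
def can_bus_load(frames_per_second, bits_per_frame=111, bitrate=500_000):
    """Fraction of a CAN bus occupied by periodic frames (stuffing ignored)."""
    return frames_per_second * bits_per_frame / bitrate

# e.g. track lists and lane data exchanged at roughly 1500 frames/s
load = can_bus_load(1500)   # about one third of a 500 kBit/s bus
```

Object-level association keeps the frame rate this low; transferring raw image or range data instead would exceed the CAN bandwidth by orders of magnitude, which is why a centralized raw-data fusion would call for FlexRay.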
Fig. 2.
2.1
Lidar/vision sensor cluster and the partitioning of the function blocks.
Lidar Sensor
The ACC system described in this paper is based on the optical lidar technology IDIS® (figure 3). The wavelength is 905 nm, so the sensor works actively in the near infrared (NIR).
Fig. 3.
Lidar sensor for ACC. Significant in the illustrations are the transmitter and receiver lenses of the sensor. The right picture illustrates the mounting position in the car’s front bumper area (without black IR lens hood).
The sensor emits 15 ns laser pulses in 16 single, horizontally arranged channels (multibeam) and measures the return pulse time delay of the reflected beams. The arrangement of the beams allows a certain spatial resolution in the azimuthal
direction (1°), and the beams are capable of multi-target determination. The measurement data are therefore distance vectors. After signal preprocessing, object grouping and tracking, the lidar sensor measures the distance and estimates the relative velocity to the moving relevant targets in front of the ego-vehicle. Stationary objects are not taken into account for the ACC longitudinal control strategy, although they are part of the sensor-internal track list. In case of initial detection, objects become new tracks after a few cycles, depending on the quality of the tracking process (‘lifetime’). The sensor’s properties are listed in Tab. 1.
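The pulse time-of-flight measurement behind these distance vectors reduces to halving the round-trip path of the light; a minimal sketch:

```python
C = 3.0e8  # speed of light in m/s

def tof_range(delay_s):
    """Target distance from the echo delay of a single laser pulse."""
    return C * delay_s / 2.0

# a 1 microsecond round trip corresponds to a 150 m target
d = tof_range(1e-6)
```

The 15 ns pulse width sets the scale of the achievable range resolution, since echoes closer together than the pulse length overlap in the receiver.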
Tab. 1.
2.2
Lidar sensor properties
Vision Sensor
The LDW system is based on a forward-looking CMOS imager with a wide-angle lens and a separate image processing unit. The sensor works mainly in the visual spectrum and needs no additional, special light source at night. In contrast to the distance sensors it delivers a matrix of grey values representing the scenery’s brightness distribution, leading to a 2D pattern- or template-based signal processing for object detection without depth information (geometric model). The lane detection task is based on a mixed feature-/model-based image processing approach. By sensing the position of lane boundaries, like white lane markings in front of the car, it estimates lane assignment, curvature ahead and ego-position of the vehicle in its own lane track. For reproducible system operation this assumes at least driving environments with a “model”-like character. Much investigation has been devoted to the image processing task, yielding a combination of an edge- and area-based lane marking detection algorithm adaptable to most road and illumination conditions. The object detection approach is based on contour information extracted from the grey-value image of the scene. With this information, vehicle hypotheses are generated and fed into a multiple-target multiple-hypotheses tracking process.
Fig. 4.
Vision sensor for LDW. The best mounting position, due to the perspective view of the road ahead, is behind the upper windscreen.
Tab. 2.
Vision sensor properties
3
Lane Detection
ACC objects must be judged according to their track position to determine the relevant target. The determination of lane width wLane and ego-position by the LDW complements and improves this ACC task and results in a better and more precise lane-object mapping. Furthermore, the curvature cLane ahead is useful information. But due to the reduced look-ahead range of the vision sensor under adverse weather and lighting conditions, the availability and consistency of the curvature information from the LDW is limited. Therefore the fusion unit decides whether the curve information from the vision system wVision or from the yaw sensor wGyro is used for the track prediction. The decision is based on a measurement or prediction quality factor QSensor, e.g. the estimation error covariance. (1)
This completes the lane and ego-motion model to: (2)
Here, l is the lateral offset of the vehicle and ∂ψ the heading angle difference relative to the lane orientation. The slip angle β is not considered in this model. The lidar sensor object track m can be described by: (3)
with dm: target distance, vrel,m: relative velocity, and nR,m and nL,m as the lidar beams describing the target borders (n = -8…-1, 1…8). So we can predict the lane boundaries at the range dm of the target m by:
(4)
(5)
and come to the relevant target parameter determination (6)
with nR,m=-1…-8 and nL,m=1…8. Now we can complete our target state vector with the relevance factor (7)
Fig. 5.
4
Frame of the lane detection. Actual lane offset lEgo: 68 cm and lane width wLane: 337 cm, determined by the LDW.
Object Detection
Once the multibeam lidar has detected an object, the lateral position and size of this target can be described accurately enough to establish a search area in the
image plane, depending on the lidar measurement variances. The task of the vision sensor is now to detect and track all ACC objects.
Fig. 6.
Demonstration of the fusion-based object detection.
The image processing methods are based on horizontal and vertical edge detection; the resulting edge candidates are analyzed concerning their attributes and must be arranged in a certain geometrical orientation, e.g. the U-pattern [4]. This works during daytime and under good conditions; at night or in low-light situations additional features, like the tail-lights of the vehicles, have to be detected, so we use a multiple geometrical model approach for object segmentation. Finally, Kalman filtering is used to track the objects by the LDW system in parallel to the ACC tracking.
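A toy version of the horizontal/vertical edge extraction step can be written with simple image differences; a real system adds adaptive thresholds, grouping of the edges into U-patterns and the subsequent tracking:

```python
import numpy as np

def edge_maps(img, thresh=20):
    """Binary maps of vertical edges (gx) and horizontal edges (gy)."""
    img = np.asarray(img, dtype=np.int32)
    gx = np.abs(np.diff(img, axis=1))   # grey-value steps between columns
    gy = np.abs(np.diff(img, axis=0))   # grey-value steps between rows
    return gx > thresh, gy > thresh
```

A vehicle rear then appears as two vertical edge columns (the sides) closed by a horizontal edge row (bumper shadow), i.e. the U-pattern mentioned above.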
Fig. 7.
5
Object detection. a) horizontal and vertical edges. b) segments based on grouped edges. c) pattern from model-based grouping. d) tracked object.
Data Fusion
The ACC and LDW sensors extract target attributes by tracking the objects separately. The parameters differ in quality and accuracy depending on the sensors’ physical principles. For example, the lidar’s distance measurements are of
higher precision than the results obtained by the vision system. At this point, fusion is cooperative and feature-complementary, by simple assembling of predefined sensor measurements, assuming that these measurements are constantly available. On the other hand, object features are often competitive due to the different situation- and weather-dependent availability of the sensor systems (quality factor Qm,Sensor). So the merging of features can lead to a better result.
Fig. 8.
5.1
The fusion object inherits the competitive attributes (here the lateral position) of the single sensors.
Maximum A Posteriori Estimation
For accurate merging of the measured object positions obtained by the two sensors, one further has to consider the sensors’ different lateral and longitudinal resolutions. For this purpose, the uncertainties of the estimated positions are modelled with a gaussian probability function [1]

P_i(x, y) = \frac{1}{2\pi \sigma_{x,i} \sigma_{y,i}} \exp\left( -\frac{(x - x_{m,i})^2}{2\sigma_{x,i}^2} - \frac{(y - y_{m,i})^2}{2\sigma_{y,i}^2} \right)   (8)
where Pi(x, y) := P(xm,i, ym,i | x, y) denotes the probability that the measurement (xm,i, ym,i) of sensor i with a given uncertainty (σx,i, σy,i) represents the “real” object position (x, y). Regarding two sensor estimates, the likelihood of the fused object position can then be determined by the conditional probability P(x, y | xm,1, ym,1, xm,2, ym,2). This “a posteriori probability” describes the probability of the object position in the case that the sensors provide the measurements (xm,1, ym,1) and (xm,2, ym,2). Under the assumption of statistically independent measurements and the theorem of Bayes we get the expression
P(x, y \mid x_{m,1}, y_{m,1}, x_{m,2}, y_{m,2}) = \frac{P_1(x, y)\, P_2(x, y)\, p(x, y)}{p(x_{m,1}, y_{m,1}, x_{m,2}, y_{m,2})}   (9)
that can be simplified to

P(x, y \mid x_{m,1}, y_{m,1}, x_{m,2}, y_{m,2}) \propto P_1(x, y)\, P_2(x, y)\, p(x, y)   (10)
Finally, the most probable object position can be obtained as the “Maximum A Posteriori” (MAP) estimate by

(x_F, y_F) = \arg\max_{(x, y)} P_1(x, y)\, P_2(x, y)\, p(x, y)   (11)
If the “a priori probability” p(x, y) is unknown or uniformly distributed, then the solution is simply

(x_F, y_F) = \arg\max_{(x, y)} P_1(x, y)\, P_2(x, y)   (12)
However, we assume gaussian distributions here, so the resulting expression to be maximized in Eqn. 11 is gaussian as well. For this reason, no time-consuming optimization algorithm is necessary. The estimated optimal object position (xF, yF) and the resulting uncertainties (σx,F, σy,F) are given explicitly. In the case of Eqn. 12 we get

x_F = \frac{\sigma_{x,2}^2\, x_{m,1} + \sigma_{x,1}^2\, x_{m,2}}{\sigma_{x,1}^2 + \sigma_{x,2}^2}, \qquad \sigma_{x,F}^2 = \frac{\sigma_{x,1}^2\, \sigma_{x,2}^2}{\sigma_{x,1}^2 + \sigma_{x,2}^2}   (13)
y_F = \frac{\sigma_{y,2}^2\, y_{m,1} + \sigma_{y,1}^2\, y_{m,2}}{\sigma_{y,1}^2 + \sigma_{y,2}^2}, \qquad \sigma_{y,F}^2 = \frac{\sigma_{y,1}^2\, \sigma_{y,2}^2}{\sigma_{y,1}^2 + \sigma_{y,2}^2}   (14)
since the gaussian function can be separated in x and y. In figure 9 the object positions measured by the lidar and vision sensors are shown. Here, the different lateral and longitudinal resolutions of the sensors
become obvious. However, the resulting position of the fusion object offers high accuracy in both the lateral and longitudinal dimension, as the figure illustrates.
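Because the distributions separate in x and y, the MAP fusion of Eqns. 13-14 reduces to inverse-variance weighting per axis. A minimal sketch with invented example values (lidar precise longitudinally, vision precise laterally):

```python
def fuse(m1, s1, m2, s2):
    """Fuse two gaussian estimates (mean m, std s) of one coordinate."""
    w1, w2 = 1.0 / (s1 * s1), 1.0 / (s2 * s2)
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    std = (1.0 / (w1 + w2)) ** 0.5
    return mean, std

x_f, sx_f = fuse(60.0, 0.5, 62.0, 3.0)  # longitudinal: dominated by lidar
y_f, sy_f = fuse(1.0, 0.8, 0.4, 0.2)    # lateral: dominated by vision
```

Note that the fused standard deviation is always smaller than that of either input, which is the formal statement of the accuracy gain seen in figure 9.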
Fig. 9.
5.2
Fusion of the object location measured by the ACC and LDW sensor.
Data Association
In the process of object detection, existing image processing objects (IP objects) of the LDW sensor have to be associated with incoming ACC objects. Fig. 6, e.g., illustrates a scene with 9 ACC targets, while two stable IP objects will be considered in the tracking cycle. Similar to the fusion of objects explained in the preceding section, we use the method of MAP estimation to calculate a measure for the geometrical “distance” or similarity between ACC targets and existing IP objects. Here, the goal is not to estimate the optimal position as in Eqn. 11, but to evaluate the probability itself. If we assume gaussian distributions, the probability function Eqn. 10 can be rewritten to (15)
that results for the optimal position (xF , yF ) in
(16)
to obtain the confidence factor C. This factor is zero if both coordinates are equal and increases with the geometrical distance, taking the individual uncertainties into account. In the case of Eqn. 12 this factor is generally given by (17)
The abbreviation c(•) is called the confidence function. In contrast to other tracking problems, IP objects are not point-like objects. Instead, they are described by the left and right border edges extracted in the edge detection process. Therefore, we have to associate the ACC target position with both the left and the right border of the vehicle hypothesis (18)
The fact that left and right border edges of a preceding vehicle should have the same distance xLIP = xRIP =: xIP leads to (19)
The confidence regarding the lateral position (20)
is arranged with an additional parameter λ
(21)
If the image processing can reliably extract both border points yLIP, yRIP (case i), and the ACC target is located between them, then the resulting confidence is CLy = CRy = 0, i.e. λL = λR = ∞. If there is just one stable border point (cases ii and iii), then a plausible vehicle width wmax is used and the parameter is chosen to widen or narrow the gaussian distribution of the ACC target. By this means, candidates that seem to be located inside (ii) or outside (iii) the plausible vehicle are treated in a different way. This mechanism is illustrated in figure 10.
Fig. 10. Adaptation of the confidence function. Data association in case ii (left) and iii (right).
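The λ mechanism can be sketched as a scaling of the gaussian width used in the lateral confidence term. The quadratic form below is our illustrative reading of the scheme, not the exact Eqn. 21:

```python
def lateral_confidence(y_target, sigma_y, y_border, lam=1.0):
    """Normalized squared lateral distance; lam widens/narrows the gaussian."""
    if lam == float("inf"):
        return 0.0  # target lies between two reliably extracted borders
    return (y_target - y_border) ** 2 / (2.0 * (lam * sigma_y) ** 2)
```

Choosing lam > 1 makes the association more forgiving (candidates inside the plausible vehicle width), while lam < 1 penalizes candidates that would fall outside it.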
6
Results
Fig. 11 shows an example of using the additional LDW lane information to improve the object-track assembly. The ego-vehicle is travelling in the left track in a highway-like situation with increased lane width. In this case the real lane width is 4.2 m, in contrast to the assumed value of 3.5 m of the ACC. The dotted lines are the measured lane borders from the LDW; the solid lines belong to the ACC hypothesis. One can see that the cut-in of the target directly in the right front of the ego-vehicle is at an advanced stage, so the ACC control strategy is able to react earlier to this event by using the lane borders from the vision sensor.
Fig. 11. Example for using lane information from LDW instead of the ACC lane hypothesis (different scale for x- and y-axis).
The vision-based object detection capabilities are illustrated in figure 12. Once again, a lane-change manoeuvre of the relevant track ahead is shown (double line symbol). The small bottom line represents the object width measured by the lidar; the top thick one is the fusion object attribute target width, improved by the vision system. The actual range in the given frame is about 60 m, so the accuracy of the fusion step depends primarily on the resolution of the vision sensor. Figure 13 points out the differences between the lidar and the vision sensor in lateral tracking. Here, the left and right borders of the preceding test vehicle detected by the image processing and the ACC sensor are shown. With this information the error of the width estimation, and finally the lateral accuracy, can be determined (figure 14).
Fig. 12. Additional vision sensor based lateral parameter (thick top line) improves the position and width of the target.
The vision sensor system is capable of tracking up to 5 objects (own lane track, neighbouring lane tracks) in parallel. Of course, the performance of the vision sensor and the object detection depends on weather and lighting conditions. This has to be taken into account when defining the ACC longitudinal control strategy based on fusion.
Fig. 13. The target vehicle’s lateral position and width, tracked and interpolated by lidar (ACC) and vision (IP), travelling distance: 120 m, constant measurement distance: 30..40 m.
Fig. 14. Estimation of the lateral resolution of the Lidar and vision sensor.
7
Conclusion
Along with ACC, future luxury cars will be equipped with a vision sensor system realizing LDW. In our fusion approach, both sensors deliver their object lists to a fusion module, which can be a software task located in one of the sensors, e.g. the ACC sensor. The entire system also includes edge-based multiple-target object detection and tracking in the vision sensor, triggered by potential ACC targets. The combination of all data from the lidar and the vision sensor allows precise localisation of the target and also its classification, improving e.g. the ACC control strategy. Furthermore, the lane tracking of the LDW profits from additional stationary objects detected by the lidar sensor. Signal processing in the complementary areas also enables the early prediction of newly incoming objects (e.g. “cut-in”) and the preconditioning of both systems. Obviously, this additional vision-sensor information causes some extra workload in the image processing of the LDW sensor unit, but it results in a more precise scenario description, which is advantageous in multi-track environments.
Jörg Thiem, Martin Mühlenberg
Hella KGaA Hueck & Co.
Dept. GE-ADS Advanced Development – Systems and Products
Beckumer Str. 130, 59552 Lippstadt, Germany
[email protected]
[email protected]

Keywords: sensor fusion, intelligent cruise assistance, driver assistance systems
Reducing Uncertainties in Precrash-Sensing with Range Sensor Measurements J. Sans Sangorrin, T. Sohnke, J. Hoetzel, Robert Bosch GmbH Abstract This paper focuses on the data-processing development for Precrash-Sensing when only the object range is provided. If the object is close to the sensor, perturbations in the form of measurement uncertainty and other kinds of noise become critical because of the small amount of time available to make decisions. Nevertheless, these decisions must be accurate and highly reliable. The wide variety of object classes and possible situations in the vehicle surroundings aggravates the effects of these perturbations, so the signal-processing development for Precrash becomes quite a challenge. Based on current techniques for object tracking and multilateration, crash situations can be detected and crash parameters such as the velocity at the impact point can be computed. Experiments were performed with a vehicle equipped with a multiple-sensor system based on two different sensor technologies (radar and ultrasonic).
1
Introduction
As governments and authorities put much effort into the reduction of road accidents, the demands on vehicle safety are continuously growing. Thus, new technologies for passive safety are being developed to enhance the performance of current systems such as the airbag or the pyrotechnical belt pretensioner (see [1][2][4][6]). These systems reduce the risk and the level of injury during a crash. The integration of information about the situation preceding the first contact can be used for optimised control of the restraints (see [3]). This motivates the development of Preventive Safety Systems (PSS) such as Precrash-Sensing. Within the scope of the European project PReVENT, Bosch therefore researches and develops systems that contribute to the road-safety targets set by the European Commission transport policy for 2010. The objective is to increase the protection of vehicle occupants and even pedestrians in case of impending crashes. These new systems are based on sensors collecting information about the vehicle surroundings. The aim of these systems is to implement different driver safety and convenience assistance functions (see [4]). Depending on the objective of the function, the area of interest varies. Hence, the selection of the technology becomes relevant, because the sensor’s field of view (FOV) has to cover this area of interest. Furthermore, the information provided by the sensors depends on the technology used. Ultrasonic sensors provide only the object range, whereas radar can additionally provide the object velocity and even the angle. The economic cost of the function also has to be taken into account in the selection of the technology, because parameters such as the number of sensors, the computational resources or the sensor features influence the system performance. After a dedicated selection of the technology, the fulfilment of the requirements has to be verified. Taking account of the requirements of several functions, it is possible to implement a platform which supports multiple functions (see [1]). The information provided by the sensor system is used to categorize the situation in the vehicle surroundings. Thus, in case of an impending crash situation, the relevant information, such as the closing velocity, is included in the Crash Object Interface (COI) (see [2]). Precrash makes high demands on the sensor system and the data processing. High measurement rates are required in order to get enough data for reliable decisions. In the case of a system of multiple range sensors, multilateration is a common technique for sensor data fusion. Using this method in vehicles, high accuracy of the single-sensor data is required due to the small distances between the sensors. Therefore, the single-sensor measurements are pre-processed to obtain more consistency (see [5]). Moreover, random and systematic uncertainties in the position estimate are propagated to the computation of the object velocity. Methods to reduce these uncertainties are therefore needed to describe the vehicle environment and to perform accurate crash predictions.
These methods are adaptations of current algorithms for object tracking and multilateration techniques, taking the functional requirements of Precrash into account.
2
Precrash-Sensing System
2.1
Data Processing Architecture
The sensors mounted in the vehicle periphery communicate with a central electronic control unit (ECU). The distribution of the software components for data processing has to take account of the available computational resources in the sensors and ECUs. Fig. 1 gives the system architecture for Precrash data processing. This scheme can be adapted to different technologies such as ultrasonic and short-range radar (SRR) sensors. The data-processing blocks are distributed among the available resources of the system components. In the case of the radar system, a microcontroller is available in each sensor and in the ECU, whereas in ultrasonic systems a microcontroller is available only in the ECU.
Fig. 1.
Data processing architecture
After a scan of the sensor FOV, the raw-data processing generates the measurement list. This list contains the ranges of the objects within the FOV. As given in Fig. 1, each sensor processes its own measurement list (1D-Tracking). Thus, each sensor describes the situation in vehicle surroundings from its own “point of view” and the necessary information, such as object distance and velocity, is included in the one-dimensional Object list (1D). Based on multilateration techniques, the information contained in each 1D-Object list is fused to a two-dimensional description of the situation. Thus, the relative object position and velocity can be computed in Cartesian coordinates and included in the 2D-Object list. This is the interface which contains the basic information for implementing the Precrash functions.
2.2
Crash Object Interface
Precrash deals with highly threatening situations in the vehicle surroundings. By continuously analysing the information contained in the 2D-Object list, impending crash situations can be detected. In this case, the relevant information is transmitted to the restraints by means of the Crash Object Interface (COI). The COI contains information about only one object. Hence, since more than one object can be present within the area of interest for Precrash, the level of threat is assessed for each object contained in the 2D-Object list. Then, in case of an impending crash situation, the most dangerous object is selected. For this object, predictions of parameters such as the closing velocity (cv), the time-to-impact (tti) and the offset (dy) at the contact point are computed. In order to classify the crash severity, the closing velocity including the angle of its vector (α) becomes advantageous.
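The COI parameters named above can be sketched under a straight-line, constant-velocity assumption. The helper below is hypothetical: it ignores the vehicle extent and all measurement uncertainty, and the angle convention (α measured from the longitudinal axis) is an assumption, not taken from the paper:

```python
import math

def crash_object_interface(px, py, vx, vy):
    """Predict COI parameters for an object at relative position (px, py)
    in metres with relative velocity (vx, vy) in m/s.

    Coordinate assumption: x points forward from the bumper plane (x = 0),
    y to the left; vx < 0 means the object is approaching.
    """
    if vx >= 0:
        return None                    # object is not approaching
    tti = -px / vx                     # time until the bumper plane is reached
    dy = py + vy * tti                 # lateral offset at the contact point
    cv = math.hypot(vx, vy)            # closing speed (magnitude)
    alpha = math.degrees(math.atan2(vy, -vx))  # angle of the impact vector
    return {"tti": tti, "dy": dy, "cv": cv, "alpha": alpha}
```

For a head-on object 10 m ahead closing at 20 m/s, this yields tti = 0.5 s, cv = 20 m/s and α = 0°.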
Fig. 2.
Parameters of the Crash Object Interface

2.3
Precrash Functions
The aim of Preset is to improve the performance of current restraints by integrating information about the vehicle surroundings. Preset only provides additional information, so the pyrotechnical restraints are not activated on this basis alone. However, this information improves the categorization of the situation and becomes especially advantageous for multi-stage airbags, which require an accurate estimate of the closing velocity. Early detection of a crash is used for the function Prefire. This enables the advance activation of the reversible restraints, which require a longer deployment time (e.g. 100-200 ms) than pyrotechnical ones. These systems may be activated in any tentative crash situation. The most prevalent system is the electronic belt pretensioner. A reversible belt pretensioner keeps the occupants in position by removing the belt slack. Fixing the passengers in their current position increases the survival space. This system prevents a forward movement of the body during the crash before the pyrotechnical pretensioner is activated. Furthermore, in low-speed crashes (e.g. <20 kph) the activation of the pyrotechnical belt pretensioner can be blocked, as the protection of the reversible belt is sufficient. For Prefire, an actuator has to be deployed in time before a crash, and therefore the functional requirements are mainly determined by the required deployment time of the actuators. For Preset, the closing velocity is the most important information. The time-to-impact determines the point in time at which the COI is transmitted to the restraints. Moreover, an accurate offset estimate is necessary to recognize whether the object will pass by the vehicle. An overview of typical requirements is given in the following table (see [6]):
Tab. 1.
Overview of typical requirements
3
Single Sensor Measurement Features
3.1
The Measurement List
Object detection within a sensor FOV is performed from reflected or retransmitted signals (echoes). The object range is determined from these received echoes. The signal usually exhibits perturbations in the form of uncertainties and noise. Using appropriate raw-data processing, ranges are computed for those echoes that potentially stem from true objects. These computed ranges are included in the measurement list, which is updated in each cycle. This list contains both measurements from true objects and false alarms such as ground clutter.
Fig. 3.
Measured ranges in mm versus time in cycles during an approaching manoeuvre
Moreover, the ability of the sensor to provide a measurement of an object and the quality of this measurement depend on the amount of reflected energy. This reflected energy varies with object features such as shape, size, material, and the position in relation to the sensor. Therefore, uncertainties in the range measurement and even missed detections are usually present. Figure 3 gives an example of the measurement list recorded by a short-range radar (SRR). The list includes the measured object ranges versus time during an approaching manoeuvre.
3.2
Latency Systematic Error
The latency is the time from when the object reflects the echo until the distance value is available in the 2D-Tracking step (see figure 1). During this time, the object changes its position relative to the sensor, so a systematic error enters the calculations: it is induced in both the position and the velocity estimates of the object. This error depends on the object position and the approaching velocity. It is possible to reduce this error using the object-velocity estimate. However, due to the propagation of the measurement uncertainty in the calculations, the object-velocity estimate itself carries uncertainty, which has to be taken into account when evaluating the performance of this correction.
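A first-order version of the described correction simply shifts the measured range by the distance travelled during the latency. As the text notes, the quality of this correction is bounded by the uncertainty of the velocity estimate; the helper and its parameter values are illustrative:

```python
def latency_corrected_range(r_meas, v_est, latency):
    """Compensate the systematic range error caused by processing latency.

    r_meas  : measured range in metres (as delivered after the latency)
    v_est   : estimated range rate in m/s (negative for an approaching object)
    latency : processing delay in seconds

    The object has moved v_est * latency metres since the echo was reflected,
    so the current range is the measured one plus that (signed) displacement.
    """
    return r_meas + v_est * latency
```

For example, with 20 ms latency and an object closing at 10 m/s, a measured range of 5.0 m corresponds to a current range of about 4.8 m.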
Fig. 4.
Geometry-induced acceleration in m/s² versus object position in relation to the vehicle in m

3.3
Radial Component of Object Y-Position
If an object passes by the sensor, the measured range differs from the object’s Y-position (see figure 2). In Precrash situations, accurate estimates of the velocity at close distances are required. As outlined for a radar system in [7], a large geometry-induced acceleration is seen in the slant range even though the object itself has no acceleration. The closer the object passes by the radar, the larger the maximum geometry-induced acceleration seen in the slant range. Since the pass-by distance is unknown, the estimation of the radial velocity becomes very difficult because of the large values of this acceleration. Due to this induced acceleration, the measured ranges tend towards the pass-by distance as the object approaches the vehicle. The object-velocity estimate tends to zero, although the object approaches with constant velocity. Fig. 4 gives the geometry-induced acceleration versus the Y-position for an object approaching with an offset of 0 m (black), 0.5 m (red), 1 m (green), 1.5 m (blue) and 2 m (orange):
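The effect can be derived from the pass-by geometry: for an object at longitudinal position x with lateral pass-by offset d, the slant range is r = sqrt(x² + d²), so a constant approach speed v gives a radial velocity ṙ = -xv/r and a geometry-induced radial acceleration r̈ = v²d²/r³. At the point of closest approach (x = 0) the radial velocity vanishes and the induced acceleration peaks at v²/d. A small sketch under this assumed geometry (not the authors' code):

```python
import math

def slant_range_kinematics(x, d, v):
    """Slant range, radial velocity and geometry-induced radial acceleration.

    x : longitudinal position of the object in m (decreasing while passing)
    d : lateral pass-by offset in m (d > 0)
    v : constant object speed in m/s along the x direction, towards the sensor
    """
    r = math.hypot(x, d)                  # slant range
    r_dot = -x * v / r                    # radial velocity (negative: closing)
    r_ddot = v ** 2 * d ** 2 / r ** 3     # induced acceleration, v**2/d at x=0
    return r, r_dot, r_ddot
```

Evaluating this over x reproduces the qualitative shape of Fig. 4: the induced acceleration grows sharply as the object nears the sensor, and it grows faster for smaller offsets d.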
4
Multilateration
Multilateration consists of obtaining the object position in Cartesian coordinates relative to the own vehicle. This is performed by means of range measurements from different sensors. The object velocity can be obtained either by extrapolating the position calculated by multilateration over some cycles or by making use of the 1D estimate of the object velocity. The resulting object positions and velocities are included in the 2D-Object list, which is the basis for the computation of the COI parameters (see section 2.2). The measured range corresponds to the closest point of the object. The object class influences the performance of the environment description. This becomes particularly relevant for the prediction of future situations. Figure 5 gives examples of different crash situations detected by a two-sensor system. The black arrows give the measured distances; the green arrows give the results after multilateration.
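For the two-sensor case, multilateration reduces to intersecting two range circles around the sensor positions. The sketch below assumes two sensors on the bumper separated by a known baseline along the y-axis (an illustrative helper under that assumed geometry, not the production algorithm):

```python
import math

def multilaterate(r1, r2, baseline):
    """Object position from two range measurements of the same point.

    Geometry assumption: sensor 1 at (0, +baseline/2), sensor 2 at
    (0, -baseline/2); x is the longitudinal distance ahead of the bumper.
    Solving the two circle equations and subtracting them gives y directly;
    x then follows from either circle.
    """
    y = (r2 ** 2 - r1 ** 2) / (2.0 * baseline)
    x_sq = r1 ** 2 - (y - baseline / 2.0) ** 2
    if x_sq < 0:
        return None                   # ranges inconsistent with the geometry
    return math.sqrt(x_sq), y
```

Because the baseline is small compared to the object range, small range errors produce large lateral errors here, which is exactly why the pre-processing and variance reduction discussed above matter.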
Fig. 5.
Measurement and multilateration of the closest object point in different situations

5
Tracking Algorithms for Precrash Functions
Precrash requires tracking multiple objects. The objective of object tracking is to reduce uncertainties and noise, such as false alarms (e.g. due to ground reflections), within the measurement list. The measurements from true objects are associated into sets so that tracks are formed. By computing parameters such as the true-object probability, tracks are confirmed and false alarms are reduced. At this point, the object dynamics can be estimated for each track. Afterwards, these estimates are used for the computation of the COI parameters. Due to the propagation of the measurement uncertainty, large errors in the estimate of the object velocity can be present. Therefore, especially for low approaching velocities and high measurement rates, a strong variance reduction of the measurement uncertainty is needed, even though sensors such as SRR or ultrasonic are very precise. This becomes even more relevant in order to reduce the error propagation in the multilateration. Moreover, due to the small amount of time available to take decisions in Precrash, the initial error of the velocity estimate is a critical problem: in the first cycles, no prior information about the object dynamics is available. A strong reduction of the measurement uncertainty during this phase can lead to long initiation times until the velocity estimate reaches the required accuracy. The estimation of the acceleration is very difficult because the propagated errors are even larger than for the velocity estimate. However, experimental results show that the computation of the COI is more accurate when only the object position and velocity are taken into account, even when the vehicle is braking before the crash. Finally, the geometry-induced errors in the single-sensor measurements and the multilateration also have to be considered by the tracking algorithm. The developed tracking method is a simple recursive method. At each cycle, the tracks are updated with measurements. Based on a second-order model of the object dynamics, the object position and velocity are predicted.
In order to reduce the error of the estimates, the variance-reduction factor of the measurement noise is adapted at each cycle by means of filtering techniques such as Kalman filtering. Multiple assumptions about the situation are tested in order to perform the task of environment description correctly. These assumptions are mainly based on the geometry-induced effects of the radial component and the multilateration (see figure 4 and figure 5). These geometry-induced effects increase the closer the object is to the sensor. Therefore, a test is performed at each cycle when the object is close to the vehicle (e.g. ≤3 m). Unfortunately, the result of a single cycle may have a low confidence level. Therefore, the assumptions are tested sequentially in successive cycles. The probability of correct assumption (PCA) is calculated to decide whether an assumption is true or not. Thus, the assumption with the highest probability is accepted as true with some assumed uncertainty. The parameters that determine this probability are selected so as to reduce the influence of measurement uncertainties and missed measurements to a minimum.
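The second-order (constant-velocity) model with cycle-wise adaptation of the variance reduction corresponds to a standard Kalman filter over the range measurements. The following minimal 1D sketch uses illustrative noise parameters and is not the Bosch implementation:

```python
import numpy as np

def track_range(measurements, dt, meas_var=1e-4, accel_var=4.0):
    """Constant-velocity Kalman filter over a sequence of range measurements.

    State is [range, range-rate]; only the range is observed. meas_var and
    accel_var (random-acceleration process noise) are illustrative values.
    Returns the final state estimate.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                       # range-only observation
    Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],
                              [dt**3 / 2, dt**2]])   # process noise
    x = np.array([measurements[0], 0.0])             # no initial velocity info
    P = np.diag([meas_var, 100.0])                   # large initial v-variance
    for z in measurements[1:]:
        x = F @ x                                    # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + meas_var                   # innovation variance
        K = (P @ H.T) / S                            # gain, adapted each cycle
        x = x + (K * (z - H @ x)).ravel()            # update with measurement
        P = (np.eye(2) - K @ H) @ P
    return x
```

The large initial velocity variance reflects the initiation problem discussed above: the first few updates correct the velocity aggressively, after which the gain settles and the variance reduction takes effect.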
As expected, the system performance increases with the number of sensors. However, the analysis of the performance of two sensors is important, not only to reduce costs, but also to take account of those situations in which only two sensors detect the object. The next section gives a real example of the computed COI data in two different situations and the PCA for each case.
6
Experimental Results
6.1
Prototype Description
An experimental Precrash-Sensing system was integrated in the front end of a vehicle. The sensor system is based on both ultrasonic and radar sensors. Four radar sensors are mounted behind the bumper, whereas the four ultrasonic sensors are mounted on the bumper itself. Both systems work independently. A camera records a video of the test, which can be synchronized with the recorded measurements over time. In order to check the accuracy of the estimates, a contact switch gives the point in time of the impact. The data contained in each interface (see Fig. 1), the video film and other events such as the contact switch or the vehicle’s own velocity are stored for further analysis and processing in the laboratory. The microcontroller used for data processing is an ARM7 (TMS 470).
Fig. 6.
Vehicle prototype for R&D of Precrash-Sensing systems by Robert Bosch GmbH in Frankfurt

6.2
Situation Assessment
The following example shows the estimates of the object distance (1D), the closing velocity (COI-cv) and the time-to-impact (COI-tti) provided by the surroundings-sensing system. Two different situations were tested with two SRR (80 cm separation). Fig. 7 shows the situations considered. In situation A, the vehicle approaches a fixed object at approximately 50 kph. In situation B, the vehicle approaches a wall at approximately 80 kph. Additional information can be obtained from the geometry-induced effects. In situation A, the radial component is present in the measurements, whereas in situation B it is not (see Fig. 8). The environment description and the crash prediction can be improved by recognizing these different situations. In each case, the PCA is computed based on the features of the radial component of the measurements for the assumption “Wall” and the assumption “Pole”.
Fig. 7.
Test situation: A (left) and B (right)
In both cases, the situation is correctly described by an ad-hoc rule for the computation of the PCA. This rule takes account of the difference between the expected and the measured object distance (i.e. the innovation or residual), the decisions in previous cycles and the object position. However, in situation A the assumption “Wall” can be quickly rejected, whereas in situation B the PCA for the assumption “Pole” reaches a value similar to that for the assumption “Wall” (see Fig. 10). Nevertheless, the assumption “Wall” is accepted in the last cycles. On the one hand, this means that the assumption “Wall” is rejected in situation A with higher confidence than it is accepted in situation B. On the other hand, the assumption “Pole” can be accepted with higher confidence in situation A than it is rejected in situation B. The conclusion is that situation B presents more recognition difficulties. In both cases, the decisions are taken about 0.7 m before the crash. A possible solution is to increase the distance between the sensors so that the geometry-induced effects can be recognised better. This option presents a problem, since near distances are then detected poorly by the sensor system. The most advantageous solution would be to increase the number of sensors, which yields a more expensive system.
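Sequentially testing assumptions over successive cycles can be sketched as a recursive Bayes update driven by the per-cycle residual of each hypothesis. The rule below is purely illustrative and much simpler than the ad-hoc rule described above (it uses unit-variance Gaussian likelihoods and ignores the object position):

```python
import math

def update_pca(pca, residual_wall, residual_pole):
    """One recursive update of the probability of the 'Wall' assumption.

    pca            : current probability that 'Wall' is the correct assumption
    residual_wall  : normalized range residual under the 'Wall' hypothesis
    residual_pole  : normalized range residual under the 'Pole' hypothesis

    Each cycle, the hypothesis whose prediction fits the measurement better
    (smaller residual) gains probability; over several cycles a confident
    decision emerges even if single cycles are ambiguous.
    """
    lw = math.exp(-0.5 * residual_wall ** 2)   # likelihood under 'Wall'
    lp = math.exp(-0.5 * residual_pole ** 2)   # likelihood under 'Pole'
    return pca * lw / (pca * lw + (1.0 - pca) * lp)
```

Starting from an uninformative 0.5 and feeding consistently smaller “Wall” residuals for a few cycles drives the probability above 0.9, mimicking the sequential acceptance shown in Fig. 10.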
Fig. 8.
Real measured ranges and 1D-Object distance in mm by the right sensor in situation A (left) and B (right) versus cycles
Fig. 9.
COI-cv in cm/s (upper curves) and -tti in ms (lower curves) (black dots are the reference of velocity to be estimated)
Fig. 10. Probability of correct assumption for “Pole” (red) and “Wall” (blue) (Max. probability = 1)
7
Conclusions
The main factors in the adaptation of current object-tracking algorithms to Precrash-Sensing, where only object-range measurements are available, have been presented. In this case, accurate crash predictions and situation assessment are especially difficult. Two possible sensor classes are ultrasonic and radar. Depending on the selected technology, the system architecture and performance vary; however, both systems share data-processing methods. The presented approaches and considerations can be applied to any similar sensing system. A vehicle was equipped with these two sensing systems, so that the developed algorithms could be tested with real measurements. The results of real experiments using a radar sensing system were shown for two different objects (wall and pole). In both cases the impending crash situation is recognized, so that the crash parameters are computed and provided to the restraints for further processing. Moreover, initial approaches to situation assessment were also presented. These methods are based on geometry-induced effects in the measurements and the multilateration, which are present when the object is close to the vehicle. They allow the acquisition of more information about the approaching object, which can be relevant in order to predict crashes correctly and accurately.
References
[1] Hoetzel, J.; Sans Sangorrin, J.; Sohnke, T.: “Kfz-Umfelderkennung mittels Nahbereichsensoren für eine Multifunktionale Systemplatform”; Sensoren, Signale, Systeme: 10th Symposium; Esslingen, Germany, 2004.
[2] Sohnke, T.; Sans Sangorrin, J.; Hoetzel, J.; Bunse, M.; Kuttenberger, A.; Theisen, M.; Knoll, P.: “Precrash-Sensing as Information Platform”; airbag 2004: 7th International Symposium and Exhibition on Sophisticated Car Occupant Safety Systems; Karlsruhe, Germany, 2004.
[3] Bunse, M.; Kuttenberger, A.; Theisen, M.: “PRECRASH Sensing by Use of a Radar-based Surround Sensing System”; airbag 2002: 6th International Symposium and Exhibition on Sophisticated Car Occupant Safety Systems; Karlsruhe, Germany, 2002.
[4] Hoetzel, J.: “Sensoren im Frontendmodul”; IIR Deutschland GmbH, Cologne, April 2003.
[5] Klotz, M.; Rohling, H.: “24 GHz Radar Sensors for Automotive Applications”; International Conference on Microwaves and Radar, MIKON-2000, Wroclaw, Poland, 2000.
[6] http://prevent-apalaci.ce.unipr.it; European project PReVENT-Apalaci; October 2004.
[7] Brookner, E.: “Tracking and Kalman Filtering Made Easy”; John Wiley & Sons, 1998.
Jorge Sans Sangorrin, Thorsten Sohnke, Juergen Hoetzel
Robert Bosch GmbH, Corporate Research and Development, Drivers Support Systems
P.O. Box 94 03 50, D-60461 Frankfurt (Main), Germany
[email protected]
[email protected]
[email protected]

Keywords: precrash-sensing, range sensors, situation assessment of vehicle surroundings
SEE – Sight Effectiveness Enhancement H. Vogel, H. Schlemmer, Carl Zeiss Optronics GmbH Abstract Every year, numerous accidents happen on European roads due to bad visibility (fog, night, heavy rain). Similarly, the dramatic aviation accidents of 2001 in Milan and Zurich have reminded us that aviation safety is equally affected by reduced visibility. The SEE project (EU-funded, contract IST 2001-38228) aims to raise human situation awareness in conditions of reduced visibility in the automotive and the aeronautical context through the development of a dual-band camera system. In such a system, cameras operating in spectral bands less obscured by reduced-visibility conditions than the visible band produce an image which is then presented to the driver or the pilot.
1
Introduction
Traffic accident rates during the night are much higher than during the day, for several combined reasons: the limited range and field of view of the headlamps, glare from oncoming headlamps, hardly visible cues (darkly clothed pedestrians, road edges, …) and the driver’s restricted view. Driving in fog is rare but dangerous. The degradation of perception modifies driver behaviour and increases the danger (a tendency to get close to the preceding car). Aeronautics is quite safe, but two major accident causes are identified: runway incursion (a risk increasing with traffic) and CFIT (Controlled Flight Into Terrain). Runway incursion means that a plane illegitimately enters a runway or taxiway. CFIT occurs when a plane that is still under control crashes into terrain. This can happen when the crew does not have a good perception of its position relative to the surrounding terrain. In both cases, reduced visibility is a major contributing factor. Economically, bad visibility also reduces airport platform throughput and provokes rerouting.
2
Project Parts
The first part of the SEE project is the definition of requirements based on the statistics of accidents in the automotive and aeronautic fields. The operational analysis shows the major accident causes and scenarios. This part was carried out by Thales Avionics, BMW and ESG. The detectors used within the project were selected by CEA-Leti. The second part is the development and production of the cameras. This part is carried out by Carl Zeiss Optronics GmbH and is described in more detail in chapters 3 and 4. In the third part, data gathering is the main issue. During different tests and test rides, real image data were recorded. The image fusion of the two cameras is realised in the fourth part. Here a pyramidal image-fusion strategy is used. This part is under the control of Galileo Avionica. To evaluate the usefulness of the SEE camera system, the cameras as well as scenes and weather conditions are simulated in the fifth part. This simulation, as both a real-time and a non-real-time model, is carried out by Oktal SE. After integration into the car and flight simulators, the benefit of the cameras will be investigated by Risoe in the sixth part. Several scenarios in the automotive and aeronautic fields will be evaluated by observing different drivers and pilots and by interviewing them. The result will be an objective figure of the benefit to be gained by using the SEE camera as an enhanced vision system compared to direct viewing only.
Fig. 1.
Wavelength bands of electro-magnetic radiation
3
LWIR Camera Development
3.1
LWIR Band
The wavelength range of electro-magnetic radiation between 8 µm and 12 µm is commonly known as the long-wave infrared (LWIR). It is far away from the visible band on the wavelength scale, as indicated by figure 1, and the human eye has no sensitivity to this kind of radiation. Radiation in the LWIR band typically originates from the rotational motion of molecules or from lattice vibrations in solids, according to temperature. Unlike in the visible and SWIR bands, imaging in the LWIR band does not need an external light source, because every object in a scene emits IR radiation (thermal radiation) according to its surface temperature. In fact, for objects around room temperature (300 K), the power L(λ,T) of the emitted thermal radiation has its maximum at a wavelength of 10 µm (see figure 2). The derivative of the emitted power with respect to temperature, dL(λ,T)/dT, which is vital for the achievable contrast in a thermal image, also has a maximum within the LWIR band. This enables high-quality imaging even at night, when all natural and artificial light sources are absent.
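The stated maximum near 10 µm for a 300 K body follows directly from Wien's displacement law, λ_max = b/T with b ≈ 2898 µm·K, a standard consequence of Planck's law:

```python
def wien_peak_wavelength_um(temperature_kelvin):
    """Wavelength of maximum black-body spectral radiance in micrometres,
    by Wien's displacement law (b = 2897.77 um*K)."""
    b = 2897.77  # Wien displacement constant in um*K
    return b / temperature_kelvin
```

For a scene at 300 K this gives about 9.7 µm, well inside the 8-12 µm LWIR band, while a 5800 K source such as the sun peaks near 0.5 µm in the visible.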
Fig. 2.
Power density of a black body according to Planck’s law
As shown in figure 3, the atmospheric transmission has a broad window of high transmittance in the 8-14 µm spectral region, which is even broader than the nominal LWIR band. Atmospheric scattering is also significantly lower than in the visible and SWIR bands, because the wavelength is longer by a factor of 10-20 (see also chapter 4.1). These favourable atmospheric properties are of great importance when high-contrast IR images of even far-away objects are to be recorded.
Fig. 3.
Atmospheric windows of high IR transmission
State-of-the-art uncooled microbolometer cameras can resolve temperature differences of less than 1/10th of a degree. Good-contrast images can therefore be recorded even when temperature differences are low, e.g. in the background of a scene. In particular, any heat source, be it natural (human body, animal) or artificial (heater, engine, light bulb), typically causes an image region of high brightness, sometimes almost a hot spot, which can be detected easily in the IR image.
3.2
LWIR Camera
The LWIR camera (see figure 17, left) contains an uncooled amorphous-silicon microbolometer detector with a resolution of 320×240 pixels and a pixel pitch of 45 µm. As described above, the band from 8 µm to 12 µm wavelength is used. The field of view was chosen as 32°×24° to fit automotive as well as aeronautic applications. It is realised by a fast germanium lens with 25 mm focal length and F-number F/1.0. CCIR, RS-170 and serial LVDS (8 bit or 16 bit) video interfaces are integrated; VGA, XGA and a digital parallel TTL interface (8 bit or 16 bit) are also available internally. The camera can be controlled via RS-xxx and CAN interfaces; in addition, USB is integrated as a maintenance and control interface. To correct the detector inhomogeneity, a two-point non-uniformity correction with individual gain and offset for each pixel is used. The correction tables are set at the factory but can be re-calculated in maintenance mode. During initialisation of the camera, an additional one-point correction using a thermal reference is carried out. To optimise the homogeneity at the actual scene temperature, an additional offset correction using the defocused scene can be carried out.
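The two-point correction described above amounts to fitting a per-pixel gain and offset from two uniform reference scenes; a minimal sketch (the function names are ours, not the camera firmware's):

```python
import numpy as np

def nuc_tables(raw_cold, raw_hot, target_cold, target_hot):
    """Per-pixel gain/offset tables from two uniform reference scenes."""
    gain = (target_hot - target_cold) / (raw_hot - raw_cold)
    offset = target_cold - gain * raw_cold
    return gain, offset

def apply_nuc(raw, gain, offset):
    """Two-point non-uniformity correction of a raw detector frame."""
    return gain * raw + offset
```

Any scene between the two reference temperatures then maps onto a homogeneous image; the later one-point correction against a thermal reference removes residual offset drift.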
A general problem of microbolometer detectors is their behaviour when the die temperature changes. To compensate for these drifts, an appropriate method was developed and integrated to control the detector behaviour and thus achieve an image that stays homogeneous over temperature.
Fig. 4.
Homogenisation of the LWIR image
Figure 4 shows the strikingly large difference between the uncorrected detector image (left) and the image resulting after non-uniformity correction and homogenisation (right). The achieved image quality gives vivid evidence of the effectiveness of the applied algorithm. Figures 5 to 8 show further examples of images captured during test rides.
Fig. 5. Fig. 6.
LWIR image: evening after a warm day (left) LWIR image: persons (right)
Fig. 7. Fig. 8.
LWIR image: rainy weather (left) LWIR image: doe at night (right)
To estimate the performance of the LWIR camera, detection ranges were calculated for persons and vehicles. The objects of interest are a vehicle of 1.5 m × 1.5 m size with 2 K temperature difference to the environment (see table 1) and a person of 1.75 m × 0.5 m size with 5 K temperature difference to the environment (see table 2). Detection of an object is defined as the object spanning one line pair in the image; recognition requires 3 line pairs and identification 6 line pairs.
Tab. 1. Tab. 2.
Detection ranges of a vehicle (left) Detection ranges of a person (right)
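Geometrically, the line-pair criterion translates into a range estimate via the camera's instantaneous field of view (pixel pitch divided by focal length). The sketch below uses the LWIR values from the text (25 mm focal length, 45 µm pitch, 1.75 m person); note that it ignores atmospheric attenuation and temperature contrast, which the ranges in the tables do take into account:

```python
def line_pair_range_m(object_size_m, line_pairs,
                      focal_m=0.025, pitch_m=45e-6):
    """Range at which an object spans the required number of line pairs."""
    pixels_needed = 2 * line_pairs       # one line pair = two pixels
    ifov_rad = pitch_m / focal_m         # angle subtended by one pixel
    return object_size_m / (pixels_needed * ifov_rad)

detect = line_pair_range_m(1.75, 1)      # detection: 1 line pair
recognise = line_pair_range_m(1.75, 3)   # recognition: 3 line pairs
identify = line_pair_range_m(1.75, 6)    # identification: 6 line pairs
```

As expected, the purely geometric detection range is several times larger than the recognition and identification ranges.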
4
SWIR Camera Development
4.1
SWIR Band
The wavelength range between 0.8 µm and 2.0 µm is known as the short-wave infrared (SWIR) band (sometimes also called the near-infrared, NIR, band) and is located adjacent to the long-wavelength (red) edge of the visual band (see figure 1). Radiation in this wavelength range typically originates from vibrations of molecules or lattice vibrations of solids. As in the visible range, imaging in the SWIR band needs illumination of the scene by natural (sun, moon, night air glow) or artificial (incandescent or discharge lamps, LEDs) light sources. The intensity of thermal radiation emitted in this wavelength band by objects at room temperature (300 K) is much too low to be detected by any available camera sensor (see also figure 2). On the other hand, most traffic lights, position lamps and searchlights have their maximum brightness in the SWIR band (figure 9) and thus can be detected over a much longer range than in the visible.
Fig. 9.
Spectral emission of an incandescent lamp with 3250 K filament temperature
Light travelling through the atmosphere is in general attenuated by absorption and scattering. Absorption occurs in specific wavelength bands and is caused by constituents of the atmosphere, mainly water vapour (H2O) and carbon dioxide (CO2). Although there are some strong absorption lines within the SWIR band, the remaining transmission is still sufficient to provide visibility even over a range of several kilometres (figure 10).
Fig. 10. Atmospheric transmission in the SWIR band
The scattering of light shows a more continuous dependence on wavelength than absorption. As a general rule of thumb, scattering is much more effective at shorter wavelengths than at longer ones. Consequently, the atmosphere has a significantly lower transmission for short-wavelength radiation, and the straylight caused when the sun shines onto clouds of aerosols (dust, smoke, mist) is also much more intense at shorter wavelengths. This is why great differences in contrast can appear between the visual and the SWIR band under certain environmental conditions, as shown in figure 11. Although the radiation in the SWIR band is longer in wavelength by only a factor of 2, the scattering of light by aerosol clouds is significantly lower compared to the visible band.
Fig. 11. Images of the same scene recorded in the visual (left) and in the SWIR band (right)
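The rule of thumb can be made quantitative for the molecular part of the scattering, where the Rayleigh cross-section scales as λ⁻⁴ (aerosol Mie scattering falls off more slowly with wavelength, so the real advantage is smaller but still substantial):

```python
def rayleigh_ratio(short_um, long_um):
    """Relative molecular (Rayleigh) scattering strength: sigma ~ lambda^-4."""
    return (long_um / short_um) ** 4

# Doubling the wavelength (visible -> SWIR) already cuts molecular
# scattering by a factor of 2**4 = 16
vis_to_swir = rayleigh_ratio(0.55, 1.1)
```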
Even within the SWIR band the amount of straylight may vary significantly with wavelength, as can be derived from atmospheric modelling (figure 12, left). When the sun shines into clouds of dense fog, most of the scattered light (also known as scattered air light) is emitted on the short-wavelength side of the SWIR band, while on the long-wavelength side the straylight level is almost zero.
Fig. 12. Spectral distribution of scattered light and SWIR detector responsivity without (left) and with bandpass filter
The short-wavelength straylight alone can reduce the contrast of a recorded SWIR image remarkably. Under certain circumstances it is therefore advantageous to have the option of splitting the SWIR band at 1.3 µm and suppressing the short-wavelength sub-band by means of a long-pass interference filter. The disturbing air light is then suppressed as well, and the overall image quality increases despite the fact that the total power of the radiation falling on the detector plane is obviously decreased. It must be noted that the detection of long-range signal lamps (such as airport approach lamps), which typically shine brighter on the short-wavelength side of the SWIR band, may be hampered by this filtering technique. Which option is the most favourable therefore depends on the particular application. If the task is spotting signal lamps as early as possible over a long range, then imaging in the full SWIR band is appropriate. If resolving faint details of a scene under low-visibility conditions is of major importance, then imaging in the 1.3 µm-2.0 µm sub-band might be advantageous. The detection ranges for airport approach lamps under different visibility conditions are shown in tables 3 and 4.
The appearance of all kinds of objects and materials in a SWIR image depends on their spectral absorption and transmission characteristics. Practical experience reveals significant differences between the visible and SWIR wavelength bands in this respect as well. For example, it is perfectly typical for inks and dyes to appear transparent in the SWIR band, whereas in the visible the painted or dyed objects exhibit different colours. On the other hand, materials with similar transmission in the visible band, like glass and water, may appear totally different in the SWIR band. These two effects are demonstrated by one of the authors in figure 13, holding a glass of clear water in his hand. Water and ice in particular have strong absorption bands in the SWIR range, as already noted, and thus typically appear dark in SWIR images. This effect can be utilized in the automotive field for spotting slippery patches of water or ice on the road. A helpful application in aeronautics could be the detection of icing on the wings of an aircraft.
Fig. 13. Reflection and transmission of different materials: visible (left) and SWIR image (right) recorded without any additional filtering in the 0.8µm to 2.0µm band
The information displayed in an LWIR image is mainly the surface temperature distribution, and in many practical cases it is therefore complementary to the information about surface reflectance which can be extracted from a SWIR image. Figure 14 (left and middle) illustrates how much the SWIR images depend on the lighting conditions, while the LWIR images display the temperature distribution at night and day virtually independently of any natural or artificial illumination.
Fig. 14. Road at night ahead of a car with full head lamp beam and low beam (left, centre) and crossing of roads (right)
The complementary nature of the information that can be derived from SWIR and LWIR images is demonstrated by the image pair on the right-hand side of figure 14. The LWIR image clearly shows the bending of the road, the exit to the large building on the left and more buildings further along the road, but all signposts, traffic signs and traffic lights are visible only in the SWIR image.
4.2
SWIR Camera
The SWIR camera (see figure 17, right) also has an uncooled detector. In contrast to the LWIR camera, the internal TEC is used not only for temperature stabilization but also for slight cooling to 210 K. Therefore not a single TEC but a four-stage cooler cascade is integrated in the detector. The detector material is CMT (cadmium mercury telluride) with a doping that makes the detector sensitive between 0.8 µm and 2.0 µm. To reduce straylight (see section 4.1), the sensitivity of the camera can be restricted to 1.3 µm to 2.0 µm by activating the integrated long-pass filter. The detector resolution is 320×256 pixels, but only 320×240 are used in order to have the same image size as the LWIR camera. Because the pixel pitch of the SWIR detector is 30 µm and therefore differs from that of the LWIR detector, the focal length of the SWIR optics is 16.7 mm, and it is additionally adjustable to reach a field of view equivalent to that of the LWIR optics. The aperture is F/1.4, which is the maximum achievable without vignetting at the detector window. Because of the high dynamic range in the SWIR band, a variable aperture stop is integrated to enable apertures between F/1.4 and F/7.5. The video and control interfaces are identical to those of the LWIR camera.
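The 16.7 mm SWIR focal length follows directly from matching the per-pixel angular resolution (IFOV) of the two cameras; a quick check using the values from the text:

```python
# Pixel pitch and focal length of the LWIR camera, from the text
lwir_pitch_m, lwir_focal_m = 45e-6, 25e-3
swir_pitch_m = 30e-6

# Equal IFOV (pitch / focal length) for both cameras requires:
swir_focal_m = lwir_focal_m * swir_pitch_m / lwir_pitch_m
ifov_mrad = 1e3 * lwir_pitch_m / lwir_focal_m   # 1.8 mrad per pixel
```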
As with the LWIR camera, a two-point non-uniformity correction with individual gain and offset for each pixel is used here as inhomogeneity correction. The correction tables are set at the factory but can be re-calculated in maintenance mode. During initialisation of the camera, or on user command, an additional one-point correction using a distortion plate is carried out. For each sub-band (0.8 µm to 2.0 µm or 1.3 µm to 2.0 µm), separate homogenisation optics are integrated. Example images captured with the SWIR camera are shown in figure 15 (day) and figure 16 (night).
Fig. 15. SWIR image: captured during day (left) Fig. 16. SWIR image: captured at night (right)
The performance of the SWIR camera was calculated for the main task in aeronautics: the detection of the approach lights of airports during landing. If the approach lamps can be detected at the decision height, the landing procedure can be continued safely. Table 3 shows the detection ranges in fog during the day; the corresponding ranges for fog at night are shown in table 4. As can be seen, only in advective fog during daytime does the narrow band (1.3 µm to 2.0 µm) offer a range benefit. At night in advective and radiative fog, as well as in radiative fog during the day, the wide band (0.8 µm to 2.0 µm) achieves better ranges for all visibilities. Compared to the visual range, these detection ranges are much higher.
Tab. 3. Tab. 4.
Detection range of the approach lights during daytime for different visibilities in fog (left) Detection range of the approach lights during night for different visibilities in fog (right)
The set-up of both cameras is shown in figure 17. To achieve a good image fit of the two cameras at a desired distance, their lines of sight are adjustable. For automotive applications the lines of sight cross each other at a distance of about 50 m. The fields of view are also adjusted to be identical, and the optics are designed so that the maximum distortion of the two cameras differs by less than 0.7 %. This is important for good image-fusion results (see chapter 5).
Fig. 17. Set-up of the LWIR (left) and SWIR (right) cameras
5
Image Fusion
As described in sections 3.1 and 4.1, the LWIR and SWIR bands often carry complementary information (see figure 14). To combine the information of both bands, the images of the two cameras are fused using pyramidal image-fusion strategies. In contrast to a simple superposition, this fusion avoids drowning out or attenuating information that is available in only one of the images. Figure 18 shows the scene of figure 14 together with the fused image and the impression the driver has. The fused image contains information from both the LWIR and the SWIR camera.
Fig. 18. Fused image of figure 14
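Pyramidal fusion of this kind can be sketched as a Laplacian-pyramid merge: each image is decomposed into band-pass detail levels, the stronger detail coefficient is kept at every pixel and level, and the pyramid is collapsed again. The following is a generic illustration of the technique, not the project's actual implementation (a box filter stands in for the usual Gaussian kernel):

```python
import numpy as np

def _down(img):
    # 2x downsample by 2x2 block averaging (stand-in for a Gaussian blur)
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def _up(img, shape):
    # nearest-neighbour 2x upsample back to `shape`
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = _down(cur)
        pyr.append(cur - _up(small, cur.shape))  # band-pass detail level
        cur = small
    pyr.append(cur)                              # low-pass residual
    return pyr

def fuse(img_a, img_b, levels=3):
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)   # keep stronger detail
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2.0)             # average the residuals
    out = fused[-1]
    for lap in reversed(fused[:-1]):                  # collapse the pyramid
        out = _up(out, lap.shape) + lap
    return out
```

Selecting the coefficient with the larger magnitude per level is what prevents detail present in only one band from being averaged away, which is exactly the advantage over simple superposition described above.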
6
Test Rides
To evaluate the performance and usefulness of the SEE cameras, several test rides were carried out under different weather conditions. To allow comparison with the information in the SEE images, a highly sensitive CCD camera was also integrated and recorded to show what the driver sees without the SEE cameras. The four simultaneously captured videos are arranged in 4-tiled video sequences; snapshots of these sequences are shown in figures 19 to 22.
Fig. 19. Doe at night, invisible to the eye even with full-beam headlights
Fig. 20. Dog at night, visible much earlier in LWIR than in visual band
In figure 19 the doe, which would otherwise be invisible to the driver, can be spotted easily in the LWIR image and consequently also in the fused image. In figure 20 the dog at the right-hand side of the road behind the underpass can be seen only in the LWIR image, because its body is warmer than the background. The animal does not appear in the SWIR image or in the driver's direct sight
because of the poor lighting conditions. Overall, the fusion of the LWIR and SWIR images gives the driver a much better view of the road conditions ahead.
Fig. 21.
Country lane at night, complementary information of both bands united in the fused image
Fig. 22. Persons in town, more conspicuous in the LWIR than in the fused image
An excellent example of how much a driver can benefit from the fusion of two images containing complementary information is shown in figure 21. The LWIR image presents information about the road conditions and surroundings, including the town some kilometres ahead, while the SWIR image adds the traffic signs and the town lights to provide a comprehensive overview. The persons in figure 22, again, can only be spotted in the LWIR image, because their body temperature is higher than that of the background. In this case the lights added to the fused image from the SWIR band are more confusing than helpful.
7
Simulation and System Evaluation
To evaluate the usefulness and benefit of the SEE cameras compared to the direct sight of the driver or pilot, the whole camera system is simulated. In addition, the objects and atmospheric conditions as well as the display are modelled. For aeronautic applications this is implemented in a flight simulator. Several pilots have to land, taxi and take off under different visibility conditions and with different types of obstacles or misleading situations. The same scenarios are simulated with and without the image of the SEE camera available. With this method the advantage of SEE can be measured, e.g. as time gained to avoid collisions, or as the distance of airport detection compared to the decision height.
Fig. 23. Simulated scenes during day with fog in visual band (origin: Oktal SE)
For automotive applications the simulation is used to present typical traffic situations with possible hazards to several drivers. Here, too, different visibility conditions and the view with and without the SEE cameras are simulated. Figure 23 shows typical daytime scenes with fog: the left image shows a car ahead, a motorcyclist in oncoming traffic and a cyclist on the right side; the right image shows a tree lying across the road as an obstacle. Again, the benefit of the SEE camera system can be measured, e.g. as time gained by detecting hazards and obstacles earlier. The results of the system evaluation (under the responsibility of Risoe) are divided into detection and manoeuvre tasks. For automotive applications, the detection results will be the response time (and time-to-collision) with respect to the detection of road obstacles and threats, as well as the driver's evaluation of the quality of the information the SEE system provides for obstacle detection and identification. The manoeuvre results will be the driver's evaluation of the quality of the SEE system and the terrain information it provides with respect to safe driving. For aeronautic applications, the detection evaluation will yield the response time (and time-to-collision) with respect to the detection of obstacles, as well as the pilot's estimation of the quality of the SEE system and the information it provides for obstacle detection and identification, very much as in the automotive case. The manoeuvre results will be the landing and take-off performance, evaluated via objective and subjective parameters, as well as the pilot's estimation of the quality of the SEE system and the terrain information it provides with respect to safe landing and take-off.
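The "time gained" metric reduces to a simple kinematic quantity; the numbers below are hypothetical and for illustration only:

```python
def time_gained_s(see_range_m, eye_range_m, speed_kmh):
    """Extra warning time from detecting an obstacle earlier (seconds)."""
    closing_speed = speed_kmh / 3.6        # km/h -> m/s
    return (see_range_m - eye_range_m) / closing_speed

# e.g. an obstacle spotted at 400 m in the fused image versus 120 m
# by direct sight, at 100 km/h, buys roughly ten seconds of reaction time
gain = time_gained_s(400.0, 120.0, 100.0)
```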
8
Applications
As mentioned above, a very promising application of the SEE camera system is in aeronautics. Although flying is very safe in general, the small remaining risk described above can be decreased further by enhancing the pilot's vision. Landing under foggy conditions in particular can be made safer when the pilot can see the airport approach lamps earlier. Orientation during taxiing can also be improved, so that e.g. runway incursions can be reduced further. Likewise, during take-off the SEE system can give the pilot a better view of whether the runway is free of obstacles. Similar enhancements can be realised by equipping the airports themselves with SEE cameras. Giving the air traffic controller a view of the landing planes additionally allows shorter separations between incoming planes and therefore increases airport throughput. Rerouting and the additional landing charges it incurs can thus be avoided, amortising the system rapidly.
As shown in chapter 6, the camera system also enhances the driver's situation awareness in automotive applications. Because a two-band camera doubles the cost, in a first step even the LWIR camera alone, integrated in a vehicle, enables the driver to detect persons, obstacles and other hazards earlier. In dedicated applications such as trucks, heavy goods vehicles and especially hazardous-goods transport, the SEE system allows safer driving; in this field the price is not as critical as in passenger-car manufacturing. The heavy accidents in fog on the river Rhine in autumn 2004 reminded us that the high density of shipping traffic on narrow rivers also poses a high risk under adverse weather and lighting conditions. Television news reported that neither helmsman had seen the other vessel before the collision and that the water police were unable to find the collided vessels after the alarm was raised; it is therefore quite obvious that this field of public transport could also benefit greatly from an enhanced vision system like SEE. In inland water transportation, where ships often carry hazardous freight, the risk of environmental pollution can be reduced as well.
9
Conclusion
The results obtained with the SEE camera system so far prove that the combination of the SWIR and LWIR bands was an excellent choice. In most situations the fusion of the SWIR and LWIR images combines the complementary information of both bands into one comprehensive image and therefore enhances vision significantly. This gives pilots, drivers, captains etc. more information about the situation and possible hazards than would be available with direct sight alone. The realised cameras performed at least as expected; in the case of the LWIR camera the expectations were even exceeded, thanks to the sophisticated correction strategy. The results of the evaluation in flight and car simulators will provide numerical evidence of the overall benefit achievable with the SEE camera system.
Dr. Holger Vogel, Dr. Harry Schlemmer Carl Zeiss Optronics GmbH Carl-Zeiss-Str. 22 73447 Oberkochen Germany
[email protected] [email protected] Keywords:
LWIR, SWIR, NIR, dual-band camera, thermal imager, image fusion, uncooled CMT, a-Si, microbolometer, automotive, aeronautics
System Monitoring for Lifetime Prediction in Automotive Industry

A. Bodensohn, M. Haueis, R. Mäckel, M. Pulvermüller, T. Schreiber, DaimlerChrysler AG

Abstract

System monitoring in the automotive industry has multiple demanding facets. The increasing complexity of control tasks in modern automobiles has caused a shift in the way we think about sensor and actuator functions. Support systems for controlling a highly efficient engine, handling difficult driving conditions and realizing outstanding comfort functions are what is usually understood as system monitoring in vehicles. However, a new aspect is gaining significant importance: system monitoring for lifetime prediction. This paper outlines the fundamentals of system monitoring for lifetime prediction in vehicles. The vision of "Car Health Management" is introduced, and the particular example of oil condition monitoring is chosen to outline the concept of predictive maintenance. Technological challenges encountered with this new philosophy are discussed, and an autonomously operated oil sensing system is presented as an example.
1
Introduction
System monitoring in the automotive industry has multiple demanding facets. The increasing complexity of control tasks in modern automobiles has caused a shift in the way we think about sensor and actuator functions. Simple sensor systems have grown into system monitoring systems involving more than 20 electronic control units (ECUs) communicating through a variety of bus systems (CAN, LIN, MOST) and gateways. Support systems for controlling a highly efficient engine, handling difficult driving conditions and realizing outstanding comfort functions are what is usually understood as system monitoring in vehicles. However, a new aspect is gaining significant importance: system monitoring for lifetime prediction. Lifetime prediction is key for new automobile monitoring systems since it has tremendous impact on customer satisfaction, profitability and brand. This paper first describes the vision of car health management and in particular the concept of predictive maintenance. Important aspects of the sensor hardware and analysis-software requirements derived from this concept are outlined. Second, fluid condition monitoring is explained as a specific example of predictive maintenance. Oil sensing in particular has direct impact on maintenance cost, vehicle lifetime and emissions. Finally, the presentation of an autonomous oil sensor shows the continued effort to find new solutions for applications where sensor access by cables is not possible and where flexibility in mounting the sensor has to be improved.
2
Car Health Management
2.1
Vision
The vision of car health management is an approach that guarantees reliable service of the vehicle and reduces service time to a minimum. Examples of replaceable parts are motor oil, transmission fluid, brake fluid, lamps, etc. Short service intervals have a direct impact on the cost factors of car fleets and logistics companies.
Fig. 1.
The concept of car health management consists of 4 interacting modules: prevention, case history, diagnostics and therapy. A special group of methods and technologies for the module “Prevention” and the module “Case history” is named predictive maintenance.
Car health management can be compared to the medical treatment of humans (figure 1). Treatment usually, but not necessarily, starts with a look at the case history. In this phase the root cause of a problem is investigated. Based on the case history, one or more diagnostic tools are used to understand the problem of the component or subsystem. The outcome is the decision about an appropriate therapy. The knowledge of a potential health problem is then considered in a prevention phase, where measures are taken to prevent the system from entering the state of illness. A subgroup of methods and technologies for the module
“Prevention” and the module “Case history” is named predictive maintenance. Key to predictive maintenance concepts are enhanced sensor functions, sensor signal fusion and flexible sensor integration. The accuracy of the predicted service is determined by the quality of the data and of the model. Next-generation predictive maintenance concepts will consist of advanced prediction models, enhanced data integration and improved quality of the data, which today is acquired in a harsh environment.
2.2
Predictive Maintenance Concept
Flexible predictive maintenance is a concept that maximizes the vehicle use time between service intervals while minimizing the risk of failure and wear. It consists of autonomous sensing modules that are connected to a central monitoring module where the lifetime-prediction process takes place. Incoming data is transmitted, for example, via CAN, LIN or WLAN connections. Measurement functions of existing ECUs are accessed via a standard CAN interface between the ECU network and the monitoring module in the automobile (figure 2).
Fig. 2.
Predictive maintenance requires a central monitoring module where the lifetime-prediction model is calculated. Incoming data is transmitted, for example, via CAN, LIN or WLAN connections. Data for modeling is obtained from autonomous sensors, where a cable connection is impossible, and/or from existing ECUs. The result is called intelligent sensor fusion.
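A central monitoring module of this kind can be sketched as a small data-fusion class. All names, message formats and the wear rule below are hypothetical illustrations, not DaimlerChrysler interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class MonitoringModule:
    """Toy central monitoring module collecting data from bus sources."""
    readings: dict = field(default_factory=dict)

    def on_message(self, source, name, value):
        # Keep the latest value per parameter from CAN/LIN/WLAN sources
        self.readings[(source, name)] = value

    def remaining_life_fraction(self):
        # Illustrative fusion rule: high oil temperature and rpm
        # accelerate wear; thresholds are invented for this sketch
        temp = self.readings.get(("engine_ecu", "oil_temp_c"), 90.0)
        rpm = self.readings.get(("engine_ecu", "rpm"), 2000.0)
        stress = (max(0.0, (temp - 90.0) / 60.0) +
                  max(0.0, (rpm - 3000.0) / 4000.0))
        return max(0.0, 1.0 - 0.5 * stress)
```

A real implementation would replace the toy rule with the failure models discussed in section 2.2 and feed it from the actual ECU network.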
Today, service time is derived from average use cases and model-based estimates. Uncertainties are accounted for by adding safety factors, so that service
takes place well before harm can occur to the system. The main reasons for uncertainty are the complexity of the system, the simplification of the applied models, the poor quality of the acquired data and the unavailability of measurement points required by the model. The influence of these uncertainties is visualized in figure 3.
Fig. 3.
The goal of predictive maintenance is to schedule the service time depending on the use case of the vehicle. New methods and technologies improve the models and the available data, so that service takes place as late as possible and as early as necessary.
The quality of predictive maintenance depends on the ability to translate wear mechanisms of the automotive subsystem into failure models, and on the availability of the model's input data. The technological challenges are the design of automotive subsystems that can be modeled and a cost-efficient way to make the input data available. Micro- and nanotechnology have considerable potential to open new ways of sensing parameters that are not yet accessible today. The predictive maintenance concept for consumable fluids in an automobile is an illustrative example of car health management. Its history reaches back more than 10 years, when oil sensors started to replace the dipstick for oil-level measurement. Oil quality analysis itself has a much longer history, with roots in power supply machinery maintenance and aircraft maintenance. Whether the sensing principle, the measurable parameters, the accuracy of the measurement, the avoidance of cable connections or simply the cost of the sensing unit is concerned: fluid condition monitoring challenges all of these factors. With respect to analysis software, a range of improvements in decision making has been reported from applying advanced data analysis such as fuzzy logic. As far as sensor hardware is concerned, inaccessibility of the measurement location, a harsh environment and cost can be the limiting factors. These limitations can be addressed by using new sensing technologies such as autonomous sensor systems.
3
Fluid Condition Monitoring
3.1
Challenges of Fluid Condition Monitoring
Monitoring the condition of engine oil requires sensing a complex oil chemistry inside the oil pan at temperatures as high as 130 °C. Furthermore, the sensor has to withstand the oil stream, which is pumped through the oil reservoir with high turbulence and speed. On top of that, such a monitoring system has to be aggressively priced. The customer benefits of oil condition monitoring are a reduction of operating costs due to decreased fuel consumption, longer engine lifetime, lower emissions and increased up-time.
Fig. 4.
Fluid condition monitoring: the example of motor oil.
The condition of the motor oil is determined by the interaction of power train and motor oil. On the one hand, the condition of the oil can theoretically be read directly from the oil; on the other hand, the oil quality can be inferred from the power-train load and its history. In practice, the wear models of both the motor and the oil (figure 4) are incomplete because of their complexity. The lack of information in both models is compensated by analysing measurement data from both of them for decision making.
3.2
Oil Sensing
Having access to real-time fluid-condition data allows tracking of oil changes and signaling critical motor conditions to be acted upon immediately. Oil is characterized by several parameters that serve as indicators for different, independent degradation mechanisms. A typical oil-analysis laboratory measures a variety of parameters such as wear metals, additives, lubricant physical properties, viscosity, wear particles and contaminants such as water and gasoline. Complex laboratory equipment is required for oil analysis; such a laboratory usually contains equipment for thermogravimetric analysis (TGA), Fourier-transform IR spectroscopy (FTIR), nuclear magnetic resonance (NMR), gas chromatography in conjunction with mass spectrometry (GC-MS), viscosity measurement, measurement of electrical properties and other measurements. The automotive industry faces the challenge of predicting oil quality without having the above-mentioned complex analysis tools in the car [1]. Industry has chosen two approaches to solve this problem. The approach chosen by DaimlerChrysler AG is called ASSYST in Europe and Flexible Service System (FSS) in the United States. The breakdown of oil is determined by factors such as driving habits, driving speed and failure to replenish low oil levels. ASSYST therefore monitors the time between oil changes, vehicle speed, coolant temperature, load signal, engine oil temperature, engine rpm and engine oil level. A different approach is pursued, for example, by General Motors Corporation. In 1998, GM introduced its Oil-Life System, in which oil lifetime is predicted solely by monitoring engine revolutions, operating temperature etc., without using an oil sensor.
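The sensorless approach can be caricatured as a penalty-factor model: engine revolutions consume oil life faster outside the normal temperature window. The thresholds and base life below are invented for illustration; the actual ASSYST and Oil-Life algorithms are proprietary:

```python
def oil_life_consumed(revolutions, oil_temp_c):
    """Fraction of total oil life consumed by one operating interval."""
    base_life_revs = 30e6        # assumed engine revolutions per oil change
    if oil_temp_c < 70.0:
        penalty = 2.0            # cold oil: condensation / fuel dilution
    elif oil_temp_c > 120.0:
        penalty = 3.0            # hot oil: accelerated oxidation
    else:
        penalty = 1.0            # normal operating window
    return penalty * revolutions / base_life_revs
```

Integrating this fraction over the vehicle's actual use then yields a service interval tailored to the individual driving profile rather than a fixed mileage.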
Tab. 1.
Inline oil quality monitoring systems.
Sensor makers have proposed various oil-sensing products in the past (table 1). Their products show different approaches to integrating the essential functions for oil analysis into a single, low-cost inline automotive oil sensor. New inline automotive oil sensors can only be justified if they supply vehicle information that significantly increases the accuracy of the predicted oil service, or if they reduce cost. Of course, decision making is also based on oil-breakdown models; the accuracy of the result is determined by the quality of the model and, again, the quality of the measurement data. In the following section the aspect of data acquisition for oil analysis is discussed; modeling of the motor and the oil breakdown is not part of this paper. Microsystem and nanotechnology will open new approaches for oil sensing; examples are surface-acoustic-wave sensors built with microsystem technology (Bosch and BiODE [2]) or sensors using polarization measurements based on a polymer bead matrix (Voelker Sensors [3]). The advance of autonomous oil sensors for onboard applications or plug-and-play test applications will depend on progress in materials science, reliability, and technology for energy harvesting, energy storage and wireless data transmission.
Fig. 5. Autonomous oil sensor. In the case of an oil sensor, the use of thermal energy for the power supply is the preferred choice.

3.3 Autonomous Oil Sensor

The feasibility of an autonomous oil sensor was first demonstrated at AMAA 2004 [4]. New results have been achieved by optimizing the thermoelectric generator and improving the energy management. Several approaches to using autonomous microsystems for condition monitoring have been reported in the literature [5], [7]. Examples of using thermoelectric energy to drive applications with a power demand in the microwatt range are given in references [6] and [8].
Fig. 6. Left: electronic components for power management. Middle: electronic components for data transmission, interfacing the PWM signal of the oil sensor to the channel architecture of the wireless module. Right: oil pan with mounted oil sensor and thermoelectric power generation (Mercedes S-class).
Figure 5 shows a block diagram of an autonomous oil sensor. The demonstrator was integrated into a Mercedes S-class equipped with a TEMIC QLT sensor. The sensor was powered by a thermoelectric generator with onboard energy storage and energy management. Data between the main ECU and the sensor are transmitted wirelessly.
Fig. 7. Operation of an autonomous oil sensor.
By replacing the DTS thermoelectric converter presented in [4] with an Eureca module (TEC1M-9-12-3.7/67), the power density of the thermoelectric generator was increased enough to drive a 6 mW load in continuous mode. Energy storage is crucial to guarantee the operation of the sensor immediately after the car is started, as well as while the car is standing or driving slowly. Ultracaps (10 F, 2.3 V) by EPCOS AG were used for energy storage; Ultracaps offer an excellent ratio of energy storage density to total stored energy. Data transmission was demonstrated using an EnOcean STM 100 signal transmitter.
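The bridging time such a buffer can provide is easy to estimate from the figures above. The sketch below computes the usable energy of a 10 F, 2.3 V ultracap and the resulting autonomy at a 6 mW load; the 1.0 V cutoff voltage is an assumption (converters stop working below some minimum input voltage), not a figure from the paper.

```python
# Back-of-the-envelope energy budget for the ultracap buffer (values from
# the text: 10 F, 2.3 V, 6 mW load; the 1.0 V cutoff is an assumption).
C = 10.0        # capacitance in farads
v_full = 2.3    # fully charged voltage in volts
v_cutoff = 1.0  # assumed minimum usable voltage in volts

# Usable energy is the difference of the stored energies E = 1/2 * C * V^2
e_usable = 0.5 * C * (v_full**2 - v_cutoff**2)   # joules

p_load = 6e-3   # sensor load in watts
t_bridge = e_usable / p_load                     # seconds of autonomy

print(f"usable energy: {e_usable:.1f} J")
print(f"bridging time at 6 mW: {t_bridge / 60:.0f} minutes")
```

On these assumptions the buffer alone can carry the sensor for roughly an hour, which is consistent with the cold-start operation described below for figure 7.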
The mounting of the oil sensor and the power generation module is shown in figure 6. The mount for the thermoelectric generator and cooling fins is custom made. The circuit boards are mounted separately so that installation requires only minor changes to the motor. The results achieved with the autonomous oil sensor are plotted in figure 7. The graph shows the speed of the car after the start of the motor (grey line). The black line shows the output of the power generator; the output increases as the motor and the oil pan heat up. The green line shows the stored energy. Since the engine is cold at startup, the generated power is initially small; during startup the sensor is therefore driven by the energy stored in the Ultracaps. The orange line shows the oil sensor status as its signals are available on the motor CAN. As can be seen, oil sensing and CAN communication were assured throughout the test.
4 Summary

The concept of car health management and predictive maintenance has been described. Fluid condition monitoring, in particular, is a powerful tool to reduce maintenance costs and improve the lifetime of vehicles. The decision about an oil change service must be based on sound knowledge of the oil quality. Tolerances of existing oil sensors and incomplete oil breakdown models are the main reasons for today's uncertainties in predicting the oil service time. The ability to optimally predict the oil service will be improved by future sensor systems for car health management. Future oil sensing systems will benefit from advances in mechatronics and micro- and nanosystem technology. As an example, an autonomous oil sensor was introduced, and the feasibility of such a system has been successfully shown. However, an autonomous oil sensor is still expensive and is of particular interest only where access by cables is not available and flexibility in mounting is required. Progress in new materials for thermoelectric power generation, the capability to store energy with high density and efficiency, as well as the integration of sensor, power management and data transmission will improve performance and reduce size and cost.
Acknowledgements
We thank W. Klein for his fruitful discussions about oil analysis, J. Kraus and K. Ehrlinger for their valuable contribution during characterization of the sensor system, and J. Hedges from Voelker Sensors Inc. for his discussion of the Oil-Insyte system.
References
[1] Gebarin S., Fitch J.: Determining Proper Oil and Filter Change Intervals: Can Onboard Automotive Sensors Help?, Practicing Oil Analysis Magazine, January 2004.
[2] Durdag K.: Solid-state Viscometer for Oil Condition Monitoring, Practicing Oil Analysis Magazine, November 2004.
[3] Shah R., Voelker P.: Oil Condition Now, Lubes'N'Greases, June 2004, pp 18-22.
[4] Bodensohn A., Falsett R., Haueis M., Pulvermüller M.: Autonomous Sensor System for Car Applications, Advanced Microsystems for Automotive Applications 2004, Springer, Berlin, 2004, pp 225-231.
[5] James E.P., Tudor M.J., Beeby S.P., Harris N.R., Glynne-Jones P., Ross J.N., White N.M.: A wireless self-powered micro-system for condition monitoring, Proc. Eurosensors XVI, September 2002.
[6] Qu W., Plotner M., Fischer J.: Microfabrication of thermoelectric generators on flexible foil substrates as a power source for autonomous microsystems, Journal of Micromechanics and Microengineering, 11(2), March 2001, pp 146-152.
[7] Hettich G., Vieweger W., Reuss H., Mrowka J., Naumann G.: Self-supporting power supply for vehicle sensors, 10th Congress "Elektronik im Kraftfahrzeug", Baden-Baden, Germany, 27/28.9.2001.
[8] Stordeur M., Stark I.: Thermoelectric Sensor for HF-Power Detector, Proceedings of the 6th European Workshop on Thermoelectrics, Freiburg, 2001.
Alexander Bodensohn, Martin Haueis, Rainer Mäckel, Michael Pulvermüller, Thomas Schreiber
DaimlerChrysler AG Böblingen, Hans-Klemm Strasse, 70546 Stuttgart, Germany
[email protected]

Keywords: life time prediction, reliability, system monitoring, micromechanics, oil analysis
Replacing Radar by an Optical Sensor in Automotive Applications

I. Hoffmann, Aglaia Gesellschaft für Bildverarbeitung und Kommunikation GmbH

Abstract
The author presents in outline a passive optical alternative to commonly used and tested active sensors such as radar or laser. The proposed stereo camera system is able to detect traffic on the road and obstacles at distances of up to 200 metres or more. Subsequent algorithms track individual measurements and estimate the object's motion. The same sensor provides information about the course of the road in order to classify detected objects and to support additional assistance in lane keeping and monitoring of the driver's attention. Possible applications in driver assistance systems will be demonstrated. In conclusion, a sensor system is presented that is comparable or even superior to commonly used radar or laser sensors. The advantages are greater resolution, usability for various driver assistance systems, an attractive price, robustness against electromagnetic interference and the absence of any kind of emitted radiation. On the other hand, the disadvantage is slightly lower accuracy at greater distances, which can be scaled via the sensor's aperture and resolution and compensated for by adequate tracking algorithms and a high sample rate.
1 Introduction

At the present time, radar systems are used in luxury cars to recognise surrounding vehicles and obstacles; comfort systems like ACC and safety-relevant systems like pre-crash sensors or emergency brakes have been installed and are already in use. These systems are based on new sensor technology providing information not only about scenes inside the car but also about the periphery around and ahead. Using this information, the car is able to warn the driver, to suggest alternatives in critical situations, to prepare its safety systems for a crash or to take over control to actively avoid collisions. To reliably perform these tasks, it is necessary to get as much information as possible about the course of the road and all relevant obstacles, i.e. vehicles,
pedestrians, animals, trees, buildings and so on. All these obstacles have to be detected and tracked up to a distance of 150 metres, determining their position, size, speed and threat potential. Once these parameters are classified with confidence, it is possible to deduce actions and to provide this information to specialised driver assistance systems. Thus, it is possible to adapt to the preceding car's speed, providing ACC functionality, to adjust the angle of the subject vehicle's headlamps so that the scene is illuminated appropriately for the traffic, or to initiate pre-crash actions in case of emergency. A next step could be the integration of navigation system data, the detected course of the road and all obstacle information to approach the goal of autonomous driving. Appropriate sensor technology is the basis of all these applications: it must provide a trustworthy map of the surroundings and of the presence or absence of obstacles closer than and beyond a safe distance. Systems already installed are equipped solely with radar sensors; laser sensors and stereo cameras are currently being tested. The advantages and disadvantages of these alternatives are investigated in the following.
2 Radar Sensor

The major advantage of the radar sensor is its production readiness. Its insensitivity to dirt and weather conditions is beneficial as well, and the resolution and accuracy of the distance data are very impressive. But the lateral resolution and the aperture of this sensor are very poor. When a few (perhaps 5) sensors are combined and calibrated, it is possible to compare the distance data of the individual sensors to obtain a more precise lateral position. But it then becomes more difficult to track and separate multiple objects in the same range. In addition, the use of more than one sensor increases the system cost and the effort necessary to calibrate the whole system. In the luxury class, this might be no problem, but it seems unlikely that these sensors will enter the mass market.
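The lateral-position idea mentioned above, comparing the distance data of several calibrated range-only sensors, amounts to intersecting range circles. A minimal sketch for two sensors follows; the sensor positions and the target location are made-up values for illustration, not figures from the paper.

```python
import math

# Hypothetical sketch: recover a lateral position from two range-only
# sensors mounted a known baseline apart.
def trilaterate_2d(x1, x2, r1, r2):
    """Two sensors at (x1, 0) and (x2, 0) measure ranges r1, r2 to the
    same target; return the target position (x, y) with y > 0."""
    d = x2 - x1
    # Intersection of two circles whose centres lie on the x-axis
    x = (r1**2 - r2**2 + x2**2 - x1**2) / (2 * d)
    y2 = r1**2 - (x - x1)**2
    if y2 < 0:
        raise ValueError("ranges are inconsistent with the baseline")
    return x, math.sqrt(y2)

# Target at (1.0, 50.0) m seen from sensors at x = -0.5 m and x = +0.5 m
r1 = math.hypot(1.0 - (-0.5), 50.0)
r2 = math.hypot(1.0 - 0.5, 50.0)
x, y = trilaterate_2d(-0.5, 0.5, r1, r2)
print(f"estimated target position: ({x:.2f}, {y:.2f}) m")
```

With noisy real ranges the small baseline makes the recovered lateral position very sensitive to range errors, which is one reason the multi-sensor radar approach remains difficult in practice.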
Fig. 1. Illustration of a radar beam, pointing out the narrow coverage angle and the low resolution of 1 sample per beam
Though it is possible to determine the distance to an obstacle, it is quite difficult to get the right position and speed vector in space due to the poor lateral resolution. And since there is no information about the course of the road, it is hard to deduce a threat potential. Additional systems like navigation systems or video are necessary to assign the radar targets to a lane or to a region beside the street; only then is it possible to estimate a reliable relevance and threat potential of the obstacle. The small vertical aperture angle complicates use in mountainous regions: when an object leaves the field of view because the street curves below or above the radar beam, the tracked object disappears. Today it is not likely that two cars equipped with radar meet at the same time, but interference could become a problem when a significant number of radar-emitting cars encounter each other. The main problem could be a lack of consumer acceptance as well as legal restrictions to avoid conflicts with trunked radio or radio astronomy.
2 Laser Sensor

Today, some prototype sensors are being tested. A laser beam is deflected by a rotating mirror to range a complete horizontal line. The beam itself is nearly parallel and has practically no divergence. When a vertical band must be scanned as well, multiple beams are used. Thus, it is possible to get accurate lateral and distance data in a few rows. The designs currently being tested use 4 divergent beams covering a vertical band of 3.5°. At a distance of 100 metres a car could hide between two beams, and when the car approaches a hill and the street curves, the obstacle is likely to be missed.
Fig. 2. Illustration of multiple laser beams, pointing out the narrow coverage angle of 3.5°

3 Stereo Camera
The basic principle of measurement of a stereo camera is parallactic displacement. Since the displacement depends on the camera parameters and solely on the distance of the imaged object, it is possible to assign a distance, with a known degree of uncertainty, to each correspondence found in both images. It is very important to know all camera parameters, such as lens distortion and absolute and relative position (rotation and translation), to provide a reliable measurement. Since this sensor is very sensitive to dirt in the optical path, it is usually mounted behind the interior rear-view mirror to get a clear view through the area swept by the windscreen wiper. Determining an unambiguous correspondence is the major task in stereoscopic vision. It can be addressed with computational power, which increases with Moore's Law even on embedded devices. Because of the enormous amount of input data, it is possible to find a distinct correspondence in the majority of cases; ambiguous cases are dealt with separately. These basics can be solved. The major problem of this system is its susceptibility to the slightest misalignment of the relative positions of the two cameras. That is why prototypes currently being tested use an extremely rugged assembly and need to be recalibrated before each test, which is only possible in test applications. The major advantage of this sensor is the large number of measurements per image and the high sample rate of over 20 Hz. Furthermore, it is possible to pass the same input data to other systems, for instance to detect the course of the road or traffic signs; no additional sensor is needed to get multiple benefits. By integrating the results of the individual systems, it is feasible to derive further information about the detected objects: they can, for example, be assigned to a lane using data from the lane departure warning system. This way, a comprehensive view of the vehicle's surroundings can be achieved.
Fig. 3. Illustration of the coverage angle (~20°) of a stereo camera system capable of detecting obstacles even in mountainous conditions

3.1 Potentials and Limits of Object Ranging Using a Stereo Camera
As a matter of principle, a distance can be determined at every image position where a gradient intersects an epipolar line. Using horizontally separated cameras, the distance to every point on a vertical object edge can be measured with a predictable degree of uncertainty. A concurrent image-processing step can identify the edge as part of a physical object. Thus, a number of measurements is obtained, making the result far more accurate than a single sample. A combination of 2D image preprocessing, determining object candidates, and 3D edges with assigned distances provides an extensive survey. This very dense information can serve as a basis for various applications. The question of range, accuracy, reliability and long-term stability of the measurements is crucial for the adoption of stereo camera techniques in automotive applications. The accuracy depends on the following parameters: aperture angle, camera displacement, sensor resolution along the epipolar line and the accuracy of the disparity-detection algorithm. Furthermore, the permanence of the calibration data has to be investigated; it is therefore necessary to determine the time dependency of misalignments of the relative positions. This source of error has to be taken into account when determining the accuracy of the measurement. To investigate the potential of stereo camera applications, several stereo camera designs were assembled. At the present time a camera with 20 cm displacement, a horizontal resolution of 800 pixels and an aperture angle of 28° is being tested. As a subproject, processing for nighttime conditions was implemented and the potential of range detection was investigated.
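For the setup described (20 cm displacement, 800 pixels, 28° aperture), a simple pinhole model gives a feeling for these limits. The sketch below assumes a disparity accuracy of 0.25 px, which is an illustrative guess rather than Aglaia's actual figure.

```python
import math

# Depth from disparity for the stereo setup described in the text:
# 20 cm baseline, 800 px horizontal resolution, 28 deg aperture angle.
# The assumed disparity accuracy (0.25 px) is illustrative only.
baseline = 0.20        # m
width_px = 800
aperture = math.radians(28.0)

# Pinhole model: focal length in pixels from resolution and aperture angle
f_px = (width_px / 2) / math.tan(aperture / 2)

def depth_from_disparity(d_px):
    """Distance in metres for a disparity of d_px pixels."""
    return f_px * baseline / d_px

def depth_error(z, d_err_px=0.25):
    """First-order depth uncertainty dZ = Z^2 * d_err / (f * B)."""
    return z**2 * d_err_px / (f_px * baseline)

for z in (50.0, 100.0, 200.0):
    d = f_px * baseline / z
    print(f"{z:5.0f} m -> disparity {d:5.2f} px, "
          f"uncertainty +/- {depth_error(z):5.1f} m")
```

The quadratic growth of the uncertainty with distance is why the paper treats 200 m as the limit of sufficient accuracy and relies on tracking beyond that.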
Fig. 4. Theoretical limits of the method using Aglaia's stereo camera

First of all, the theoretical limits of the method were calculated for the parameters used in the assembly. A sufficient level of accuracy is achieved up to a distance of 200 m. Objects beyond this distance can be detected, but their speed remains very uncertain. The high sample rate of more than 20 samples per second is very beneficial in improving the accuracy of the results: a powerful tracking algorithm estimates the position as well as the speed of an object the more accurately, the longer it is observed. This way, it is possible to double the range of comparable radar or laser sensors without the emission of radiation.
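The benefit of the high sample rate has a simple statistical core: averaging N independent measurements shrinks the random error by roughly the square root of N. The toy example below illustrates only this effect; a real tracker (e.g. a Kalman filter) must also handle object motion, and the 30 m single-sample noise is an assumed value, not a measured one.

```python
import math
import random

# Toy illustration of why a 20 Hz sample rate helps: averaging N noisy,
# independent range measurements of a static target shrinks the random
# error roughly by sqrt(N). Real trackers also estimate motion.
random.seed(1)
true_range = 200.0   # m
sigma = 30.0         # assumed single-measurement noise at 200 m, in metres
rate_hz = 20         # samples per second (from the text)

for seconds in (0.05, 1, 5):
    n = max(1, int(rate_hz * seconds))
    est = sum(random.gauss(true_range, sigma) for _ in range(n)) / n
    expected_sigma = sigma / math.sqrt(n)
    print(f"{seconds:5.2f} s ({n:3d} samples): estimate {est:6.1f} m, "
          f"expected std {expected_sigma:4.1f} m")
```

After five seconds of observation the expected random error is an order of magnitude below the single-sample error, which is the sense in which tracking "doubles the range" of the raw measurement.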
Fig. 5. Pair of tail lamps tracked accurately at a distance of 170 metres; note the parallel motion and the shake in the vertical position caused by the pitching motion of the carrier vehicle
A key stereo camera problem is the consistency between the calibration data and the actual relative camera positions. While in use, the system is exposed to mechanical and thermal stress. The relative distance is not affected by these influences, but the relative angles are, and even the slightest angular errors can have a huge impact on the distance calculation: an object at 200 m could appear to be at 300 m when an error angle of 1' occurs. If such distances are within the desired range, calibration and actual assembly must not differ by more than this value. There are two options. One can try to guarantee the mechanical alignment after calibrating the system; but even rock-solid assemblies of steel and granite are unable to do so, and in case of a crash such heavy projectiles are not acceptable near the heads of the passengers. That is why Aglaia decided to detect any drift of the calibration angles and to adjust the calibration data accordingly. Thus, it is possible to design a light and smart prototype. During long-term trials, a drift of 1' per week was detected. The automatic recalibration was able to compensate for these errors effortlessly. That is why we consider the system to be sufficiently accurate at distances of up to 200 m.
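The order of magnitude of this effect can be checked with a small-angle model in which a relative yaw error shifts the measured disparity by roughly the focal length times the error angle. Using the design parameters stated in section 3.1 (20 cm displacement, 800 px, 28° aperture), a 1' error moves an object at 200 m out to roughly 280 m. This is a simplification for illustration, not the authors' calibration math.

```python
import math

# How a tiny relative yaw error between the two cameras distorts range.
# A yaw error eps shifts the measured disparity by roughly f_px * eps
# (small-angle approximation). Parameters: 20 cm baseline, 800 px,
# 28 deg aperture, as stated in section 3.1.
baseline = 0.20
f_px = (800 / 2) / math.tan(math.radians(14.0))

eps = math.radians(1.0 / 60.0)        # 1 arcminute in radians
z_true = 200.0                        # true object distance in metres

d_true = f_px * baseline / z_true     # true disparity in pixels
d_meas = d_true - f_px * eps          # disparity reduced by the yaw error
z_apparent = f_px * baseline / d_meas

print(f"true disparity: {d_true:.2f} px, error: {f_px * eps:.2f} px")
print(f"object at {z_true:.0f} m appears at {z_apparent:.0f} m")
```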
Fig. 6. To get a feeling for an angle of 1': Jupiter and Saturn cover this angle from our point of view on earth. If one camera points to the centre of the bright dot you see in the sky, the second camera should point to the left or the right edge of it. This accuracy cannot be achieved by a mechanical setup but only by automatic recalibration.

4 Applications
The nighttime system currently being tested is able to detect all luminous sources and to classify them by position, size and speed. This way, reflectors, reflections, street lights and traffic signs can be distinguished from relevant vehicles. Supporting the driver is especially useful during nighttime conditions, because human performance is not at its peak at this time. A variety of assistants is plausible: an ACC assistant could observe and control the position in the lane with regard to obstacles; a collision warning could alert the driver as well as the car's safety systems; a headlamp assistant could adjust the lights to provide the best possible illumination while avoiding blinding oncoming traffic. As an example, a complex traffic situation is observed by our system in real time. First we follow a car at a distance of 170 metres. Then oncoming traffic appears beyond 230 metres. At the same moment a fast bike overtakes the subject vehicle (note the longer speed vector). As the bike departs, the oncoming vehicles pass us, triggering a slight collision warning because of the narrow street. Finally, the fast bike (note again the longer speed vector) overtakes the slow car. Using this information, it is easy to adapt the vehicle's speed to the traffic, to maintain a safe distance, to predict potential collisions and deduce warnings or actions, or at least to control the high beam.
Fig. 7. A complex traffic situation analysed by a stereo camera (top view); the aperture angle is painted cyan, the red circle marks a distance of 25 m, the yellow one 50 m, the green one 100 m and the blue one 200 m
Fig. 8. Controlling the high beam on an uphill road; the vehicle 160 metres ahead is located 10 metres above the subject's level. A controlled high beam (yellow line) provides much better visibility than a low beam.
5 Conclusion

As a result of these investigations, we are of the opinion that automatic recalibration is a key feature for bringing stereo cameras to the mass market. Neither a solid mechanical design nor repeated static calibration is able to guarantee that the error angles stay small enough. Adjusting those angles concurrently with normal operation enables the system to provide permanent accuracy, compensating for thermal and mechanical influences. The resulting dense data can be used as a basis for a variety of driver assistance systems, and the raw image data can be made available to further applications. The output of all the systems can be combined to deduce additional information and to recognise complex situations. Together with the algorithms for daytime processing, it should be possible to outperform all active techniques while emitting no radiation; this way, no legal or technical restrictions are to be feared. Even the costs should be lower than those of the active techniques, because neither mechanically moving parts nor special hardware is used, but solely hardware that follows Moore's Law. The work at hand is based on the accumulated knowledge of Aglaia GmbH Berlin, one of the few independent companies developing complex image processing of natural scenes.

Dipl.-Ing. Ingo Hoffmann
Aglaia GmbH
Tiniusstraße 12-15
13089 Berlin
Germany
[email protected] Keywords:
stereo, assistant, radar, laser, ACC, video, camera, passive, driver assistant system
System Design of a Situation Adaptive Lane Keeping Support System, the SAFELANE System

A. Polychronopoulos, Institute of Communications and Computer Systems
N. Möhler, Fraunhofer Institute for Transportation and Infrastructure Systems
S. Ghosh, Delphi Delco Electronics Europe GmbH
A. Beutner, Volvo Technology Corporation

Abstract
The goal of the SAFELANE system is to develop the technology for a safe, reliable, highly available, acceptable and legally admissible onboard lane keeping support system for use in commercial and passenger vehicles on motorways and rural roads. The system reaction in critical lane departure situations includes the control of warning devices and of an active steering actuator. The input to the system comes from cameras, which are supplemented by active sensors, vehicle CAN bus data, digital road maps and precise vehicle positioning data. In this paper, the system design is presented. The system architecture consists of three layers: the perception layer, responsible for environment perception, and the decision and action layers, responsible for taking and executing actions respectively. The design is thus constrained by the above-mentioned layers and is presented in detail as a domain model. The domain objects of the model describe the tasks of the SAFELANE system from the sensors' measurements to the final triggering of the warning and actuator system. SAFELANE is part of the PReVENT Integrated Project.
1 Introduction

Road traffic accidents in the European Union annually claim more than 40,000 lives and leave more than 1.7 million people injured. These accidents represent estimated costs, both direct and indirect, of 160 billion euros. Considering these figures, the European Commission, national governments, the vehicle manufacturing industry and other stakeholders have promoted over the years a number of projects and programmes with the common goal of reducing the number of fatalities and injuries in road traffic accidents. SAFELANE is a subproject within the PReVENT Integrated Project, aiming at reducing the number of lane departure related accidents. Recently, the first Lane Departure Warning Systems (LDWS)
have been introduced to the market and are today offered by several vehicle manufacturers as an optional choice on their vehicles. In 2002, the Dutch Ministry of Transport performed a pilot test with an LDWS. In the preparation of this pilot, a prediction was made of the number of traffic victims for the "status quo scenario" of 2010. In this scenario it was assumed that the current traffic safety situation would not change (which does not mean that this is necessarily the most realistic scenario), that the number of vehicles on Dutch roads would stabilize and that 10% of all vehicles would be equipped with some kind of driver assistance system. By analyzing the number and type of vehicle related accidents, a prognosis could be made of the number of accidents that can be avoided by equipping passenger cars and other vehicles with driver assistance systems. The analysis shows that the introduction of lane departure warning systems and lane keeping systems could prevent 47 traffic fatalities and 388 serious injuries in The Netherlands in one year, and that it would certainly enhance road safety in the country, but probably also in other European countries [1]. The goal of the SAFELANE system is to improve the functionality of existing lane departure warning and lane keeping systems beyond the state of the art for use in commercial and passenger vehicles on motorways and rural roads.

The paper is organized as follows: first, the system concept is presented, starting from the problem domain and concluding with the solution provided by SAFELANE; at the same time, the innovation of the system is pointed out. Moreover, the sensor system is presented along with the relevant modules, whose role is to implement the SAFELANE solution and tasks. Finally, a conclusions section points out the future steps towards the demonstration of the SAFELANE activities.
2 System Concept

The goal of SAFELANE is to develop the technology for a safe, reliable, highly available, acceptable and legally admissible onboard lane keeping support system. The system increases active vehicle safety by warning the driver or actively intervening in critical situations involving unsatisfactory lateral driving. The requirement is to build a system that works under various road and driving conditions, especially in situations where lane markings are missing or ambiguous, visibility is restricted, additional traffic has to be taken into account, the driver's intention has to be regarded, or complex situations like overtaking are to be handled. The main application field is the driving of commercial and passenger vehicles, such as trucks and cars, on motorways and rural roads. The system is based on vehicle-side technologies. The basic input comes from cameras monitoring the road in front of the vehicle. The cameras are supplemented by vehicle Controller Area Network (CAN) bus data, digital road maps and precise vehicle positioning. Adaptive Cruise Control (ACC) systems or other forward looking active sensors provide supplementary information as well. The system reaction in critical lane departure situations involves the control of an acoustic and/or haptic warning actuator and an active steering actuator. The essential quality enhancement of the approach comes from a decision component that analyses the incoming sensor data, determines the relevant situation, predicts vehicle paths, computes the most likely vehicle trajectory that the driver will follow and synthesises data for controlling the system actuators. For this, a flexible technology is envisaged that is based on a model base; it allows the system to adapt to several situations and to be configured for different sensors or actuators. An essential feature is a self-assessment property that informs the driver in each situation how reliable the support is. In addition to vehicle-side technologies, special lane or infrastructure elements suitable for automated detection are considered.
Fig. 1. System concept of SAFELANE
Figure 1 depicts the principal structure of a system that is able to apply the described technologies in order to provide the intended functionality within a given context. The system comprises a sensor system, an actuator system and a decision component with a self-assessment facility and a database of stored driving situations (i.e. models). The system is realized on a certain hardware and software basis. The context of the system is determined by the vehicle, by the driver and by the surrounding road, infrastructure, traffic and environment. SAFELANE is a typical advanced driver assistance system (ADAS) within the various PReVENT function fields. Although the system is function oriented, given the requirements and the specifications, a modular approach is followed, with clearly defined scalable blocks and layers; in this way the system allows exchangeability, benchmarking by OEMs and future deployment. Due to the complexity of the information handled in SAFELANE, the large number of information sources and the above listed system requirements, a sensor fusion approach will be adopted. This approach was also recommended in [2]. In general, sensor data fusion includes several processing steps which take the data of several single sensors and combine the information from them in order to achieve a better result than the outcome of each single sensor's processing could provide. This effect of getting better or more adequate results out of sensor data fusion compared to single sensor processing is called "synergy" or the "synergetic effect".
Fig. 2. Generic design of a sensor data fusion system for ADAS functions
A typical driver assistance system architecture consists of at least three layers: the perception layer, responsible for environment perception, and the decision and action layers, responsible for taking and executing actions respectively. Sensor data fusion may take place on any individual layer depending on the approach, or even in all layers. Usually, sensor fusion is assumed to mean fusion in the perception layer (e.g. for obstacle detection), but this is not necessarily the case, especially in SAFELANE, where a decision fusion fuzzy system will be introduced. From the system's perspective this makes no difference, provided the layers support the same input/output interfaces. Within the three layers (perception, decision and action), several tasks take place: in the perception layer these include calibration, feature extraction and tracking, while in the decision layer tasks such as path prediction and risk assessment play a major role. As mentioned before, even for path prediction, risk assessment and other tasks arranged more in the decision layer, sensor data fusion is considered, mostly to increase the reliability and robustness of these functions and of the overall decision making. The outcome of the perception layer is a specific model of the environment. A physical environment model is a representation of the environment that gives the application the information it needs to decide and act according to the defined functionality. The first step is to achieve a common temporal and spatial reference, transforming raw sensor data into a consistent set of units and coordinates; in a second step, the perception layer interprets observations to detect objects and extract features in the current physical environment. The task of the decision layer is more application oriented; using the physical environment model, its task is to analyze the current situation and to decide which actions to perform according to the functionality of the system. The solution in the SAFELANE problem domain gives the guidelines for the specification of the modules and the specific tasks of the layers. The initial system concept and the SAFELANE functionality are constrained by the preliminary requirements and the 3-layer system design. Hence, the system design is expressed as a functional or domain model. A domain model is an object model of a problem domain; its elements are domain object classes and the relationships between them.
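The "synergetic effect" has a simple textbook instance: fusing two independent, unbiased estimates of the same quantity by inverse-variance weighting always yields a combined variance smaller than either input. The numbers below are illustrative, not SAFELANE data, and SAFELANE's actual decision fusion is fuzzy-logic based rather than this linear scheme.

```python
# Minimal illustration of the "synergetic effect": fusing two independent,
# unbiased estimates of the same quantity by inverse-variance weighting
# yields a combined estimate better than either input.
def fuse(x1, var1, x2, var2):
    """Inverse-variance weighted fusion of two scalar estimates."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return x, var

# Lateral offset of the vehicle in its lane, say from a camera (accurate)
# and from map plus positioning (coarser); values are made up:
x, var = fuse(0.30, 0.05**2, 0.42, 0.20**2)
print(f"fused offset: {x:.3f} m, std: {var**0.5:.3f} m")
```

The fused standard deviation is below that of the better sensor alone, which is the quantitative content of the synergy claim.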
The domain classes are the 3 layers: the perception, the decision and the action layer. The domain objects describe the tasks of the SAFELANE problem and are each assigned to one of the three domain classes. The problem has been identified during the specification phase, keeping in mind a higher-level statement of what the objectives and the problems are; attention is also paid to the tools that will be used to solve the problem. These tools offer the solution and will be described later under the label solution domain. The following domain objects – tasks – can be identified for the SAFELANE domain:
Safety
(T1) Measure ego-speed, yaw-rate
(T2) Measure object movement in sensor range and characterize object dynamics in terms of object position & speed (lateral and longitudinal)
(T3) Detect and track the ego- and adjacent lanes
(T4) Extract map and positioning data
(T5) Estimate/compute accuracy of the measured parameters
(T6) Measure driver actions, e.g. direction indicator, steering wheel angle
(T7) Predict the path of the ego-vehicle for the next 3-4s (short term path prediction)
(T8) Compute the most likely path for the ego-vehicle (long term path)
(T9) Analyze the current situation and predict future possible situations (e.g. compute future lateral offset, TLC etc.)
(T10) Match the current situation with stored models
(T11) Decide (if there will be an unintentional lane change)
(T12) Perform a self assessment
(T13) Trigger the actuator system (warning or active steering)
While the problem domain defines the environment where the solution will come to work, the solution domain defines the abstract environment where the solution is developed. The differences between these two domains are the cause of possible errors when the solution is transplanted into the problem domain. Thus, the two domains – the problem and the solution domain – will also be addressed later, in the development phase. The proposed solution maps the sensor system to the HMI (i.e. to the driver) using 3 domain classes. The perception domain class includes the domain objects:
a) Driver State = driver actual activities and driver data
b) Object State = the dynamics of the objects in the longitudinal field
c) Ego State = the dynamics of the ego-vehicle
d) Road State = the geometry of the lanes and road borders and its evolution in time
The analysis and decision domain class includes:
a) Evolution of the ego state in time (trajectories' estimation)
b) Situation analysis
c) Situation matching
d) Decision making
e) Self-assessment
The action domain class includes:
a) The control of the actuators
b) The activation of the actuators
Fig. 3. Solution provided by SAFELANE for a lane keeping support system
The solution is at a high level of abstraction; thus, it does not contain knowledge of how the tasks will be accomplished. For example, how the object state is generated is indifferent to the domain model as long as it meets the predefined design requirements and contributes to the tasks – solutions. This is the work of the SAFELANE modules and sensors that will be defined in the remainder of the paper.
3 Sensor System
The SAFELANE system uses a combination of complementary sensors in order to provide an enhanced warning/intervention function to the driver in case of an unintentional lane change or road departure. The system consists of a Vision system providing monochrome images for the extraction of the lane markings, an Electronic Horizon providing map and positioning data, active sensors providing the object list of the vehicles ahead, and on-board sensors and signals available on the vehicle bus. Active sensors and vehicle data are not included in the system design, since it is assumed that a SAFELANE vehicle is already equipped with them. In Fig. 4 the list of sensors is sketched, following the solution presented in the previous section. In the remainder the sensors are briefly described.
Fig. 4. Sensors supervision area

3.1 Vision Sensor
The camera has a single CMOS monochrome sensor chip with the following main parameters: 60° horizontal field of view, frame rate of 30 frames per second, resolution of 640x480 pixels, gray level dynamic range of 100dB. The image processing unit is separated from the camera head. The camera is mounted directly behind the windscreen in the middle of the vehicle, within the area swept by the wiper. For commercial vehicles it should be located in the lower part and for passenger vehicles in the upper part of the windscreen. The supervision area of the camera comprises the road in front of the vehicle, covering 5 to 50m ahead and including the neighbouring lanes. The image processing unit determines road and lane parameters.
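As a rough plausibility check (not part of the paper), the stated camera parameters already imply the lateral ground coverage; the sketch below derives it from the 60° field of view alone, assuming a simple pinhole model with no lens distortion.

```python
import math

def coverage_width_m(distance_m: float, hfov_deg: float = 60.0) -> float:
    """Width of the road strip seen by the camera at a given distance ahead.

    Assumes an ideal pinhole camera with the 60 degree horizontal
    field of view quoted in the text; lens distortion is ignored.
    """
    return 2.0 * distance_m * math.tan(math.radians(hfov_deg / 2.0))

# At the near edge (5 m) the camera sees a strip of about 5.8 m,
# at the far edge (50 m) about 57.7 m, wide enough to also cover
# the neighbouring lanes.
near = coverage_width_m(5.0)
far = coverage_width_m(50.0)
```

With 640 pixels spread over 60°, the angular resolution is roughly 0.1° per pixel, which bounds the achievable lateral accuracy at long range.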
3.2 Electronic Horizon
The “Electronic Horizon” is not a sensor in the conventional sense. It is rather a virtual sensor providing information retrieved from a digital map database. Based on the vehicle’s current position, the Electronic Horizon provides information about road segments lying ahead of the vehicle. The map data parameters (e.g. link attributes) and the vehicle position interface are implemented in close cooperation with the other map related activities (e.g. the MAPS&ADAS PREVENT project). Depending on the applications’ requirements, two types of Electronic Horizon information can be delivered:
Electronic Horizon information for a fixed distance (e.g. 900 meters)
Electronic Horizon information for a defined time (e.g. 10 seconds), i.e. the extended length is calculated dynamically depending on the current speed.
The Electronic Horizon consists of a hardware platform (including a processing unit and a positioning sensor box with GPS receiver, gyro, etc.) and software modules.
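The two delivery modes can be summarized in a small sketch; the function and parameter names are ours, not from the project, and the defaults simply mirror the example values in the text.

```python
def horizon_length_m(mode: str, speed_mps: float,
                     fixed_m: float = 900.0, horizon_s: float = 10.0) -> float:
    """Extent of the Electronic Horizon along the road ahead.

    'distance' mode returns a fixed length; 'time' mode scales the
    length dynamically with the current vehicle speed.
    """
    if mode == "distance":
        return fixed_m
    if mode == "time":
        return speed_mps * horizon_s
    raise ValueError(f"unknown mode: {mode}")

# In time mode the horizon grows with speed: at about 36 m/s
# (roughly 130 km/h) a 10 s horizon spans some 360 m of road.
```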
3.3 Active Sensors
The radar system (or another active sensor) is an additional sensor. The additional parameters provided by the radar are the number of objects in front of the car, the distance from the radar sensor to the objects, their lateral position, and the relative position and size of each object. According to the specifications, the field of view of the radar should be at least 6°. The radar is typically a 77GHz far range radar as used for the ACC system. As an alternative, the radar can be replaced by a lidar system.
3.4 Vehicle Sensors
The main vehicle sensors are motion sensors for the longitudinal vehicle speed and the yaw rate. However, other sensors can be added for the detection of environmental conditions, such as a visibility detector, a rain detector or a sensor detecting the load of a truck. The goal for the longitudinal speed sensor is to measure the longitudinal speed of the host vehicle with an accuracy error of less than 2% of the current speed. The goal for the lateral vehicle motion sensor is to measure the lateral motion of the vehicle, i.e. one of the following parameters: yaw rate, lateral speed, lateral acceleration, steering wheel angle, steering torque or the speeds of the vehicle's four wheels. The accepted accuracy of the yaw rate sensor is about 1-2mrad/s at a refresh rate of about 10ms. The SAFELANE system will be based exclusively on the vehicle sensors the demonstrators are equipped with; no additional sensors are foreseen to be integrated. However, the demonstrators use sensors with different accuracies. This is not considered a problem (cf. Chapter 4) but will lead to different configurations of the system.
3.5 Interfaces
In this section the communication buses and protocols that will be used in the SAFELANE system are briefly described. Data will be exchanged between:
sensors and modules
different processing units
on-board sensors, the vehicle's gateways and the SAFELANE system
Fig. 5. SAFELANE sensors
The main processing unit, the Decision PC, will have a CAN interface to communicate with the vehicle gateway (e.g. an xPC Target box) and several Ethernet ports. A CAN communication output level promises flexible handling of sensors (replacement, connecting different types of sensors, recording etc.). However, in certain cases (e.g. communication between PCs or the link between the camera and its processing unit), fast and convenient solutions should be implemented. In order to achieve real-time performance, the Internet Protocol (IP) is selected as the main communication channel between the SAFELANE processing units for the exchange of messages. The use of IP and standard 10 or 100 Megabit network cards provides a cheap and robust solution for high-speed communication for demonstration purposes. One protocol proposed over IP is UDP, which provides neither error correction nor retransmission but is very reliable on short links and causes very low channel load and delay. An alternative choice is TCP, which provides a more robust link; however, the system may "hang" if not all computers are communicating correctly.
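A minimal sketch of the proposed UDP exchange between two processing units is shown below; the port number and payload format are invented for illustration, as the paper does not specify the actual wire format.

```python
import socket

PORT = 5005  # hypothetical port for inter-unit messages

# Receiver side (e.g. the Decision PC): bind and wait for datagrams.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", PORT))
rx.settimeout(1.0)  # UDP gives no delivery guarantee, so never block forever

# Sender side (e.g. the lane fusion unit): fire-and-forget, no
# retransmission, hence very low channel load and delay.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"lateral_offset=0.42;heading=0.01", ("127.0.0.1", PORT))

message, addr = rx.recvfrom(1024)
tx.close()
rx.close()
```

The timeout on the receiver reflects the trade-off discussed above: a lost datagram simply means working with the previous cycle's data, rather than stalling the whole processing chain as a blocking TCP link could.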
Figure 5 shows the supervision area around the vehicle with the ego-vehicle, the road, the ego-lane, the two neighbouring lanes, the lane markings, as well as traffic and infrastructure objects. All sensors have different supervision areas, which are shown in different colours.
4 Object Classes
The SAFELANE modules provide the tools for the implementation of the domain solution described in Chapter 2. The modules that will be designed and developed in SAFELANE are:
Image Processing module (IPM)
Most Likely Path (MLP)
Lane data fusion (LDF)
Trajectory estimation module (TEM)
Decision module (DEC) and
Actuator control module (ACT)
Each module is responsible for realizing a specific task according to the solution given in Figure 3 and belongs to a domain object class, i.e. the perception, the analysis and decision, or the action layer (cf. 2). Some tasks are assigned to other, not mentioned modules; e.g. the object state is assigned to an obstacle data fusion or an object tracking module. Such modules pre-exist in the demonstrator vehicles and will not be developed in SAFELANE; thus, they are omitted from this section.
The Image Processing module is mainly based on a vision sensor. It additionally receives vehicle data and data from enhanced digital road maps to further improve the system performance. The module recognizes road and lane borders and estimates the geometrical parameters of an underlying lane model (e.g. clothoid) and the position and orientation of the vehicle relative to the lane (heading and offset). The presence of neighbouring lanes is also recognized. To enhance the robustness of the system, a simple obstacle detection subsystem identifies areas within the lanes where other traffic objects are located, in order to restrict the lane recognition to the remaining free areas. Within the lane keeping support function, the lane tracking module serves as the major information source about the vehicle environment. A camera takes images of the road in front of the vehicle; the digitized image data is then processed by a processing unit. At first, the areas in the image where the lane (or the road borders) is most likely to be found are calculated. This is done with the previously estimated lane model projected back into the image. In these areas, potential lane borders and markings (lane measurements) are detected with edge detection and segmentation techniques. In a selection phase, only the measurements belonging to lane borders are selected. Finally, all the relevant parameters like lateral offset, heading, pitch angle, lane width and curvature are estimated.
Based on information provided by the Electronic Horizon sensor and vehicle status data, the Most Likely Path module (MLP) predicts the most probable route for the vehicle to take. To this end, a minimum cost function over the road classes in the digital map database, combined with vehicle data, leads the MLP to an educated guess of the most probable route. Depending on the Electronic Horizon (EH) settings, two types of Most Likely Path can be delivered:
Most Likely Path for a fixed distance (e.g. 900 meters)
Most Likely Path for a defined time (e.g. 10 seconds), i.e. the extended length is calculated dynamically depending on the current speed.
The link attributes available in the digital map database along the Most Likely Path (such as "Number of Lanes") will be provided to other applications.
Lane data fusion is required due to the large number of sources that are used to track lanes and borders on the road. Different sources of information are considered, such as the vision lane tracker, the map data, the fused objects and their trails, and finally the ego-vehicle dynamics (velocity, yaw rate and steering wheel angle). Conventional obstacle or track fusion is not included in the SAFELANE project. The input of the lane fusion module will be the output of the following sub-modules and systems: image processing (vision lane tracker), Most Likely Path (map and positioning data), fused objects and their trails, and vehicle dynamics.
All of these systems will provide the fusion module with estimated values of the road, lane and obstacle parameters, together with the respective variances of the estimation error and/or a level of confidence. Lane data fusion provides a prediction even when lane markings are partially or completely missing.
The trajectory estimation module (TEM) predicts the driver's intention over a short term (a few seconds into the future) by estimating the future path of the ego-vehicle and its dynamics with respect to a given tracked road geometry and infrastructure. TEM focuses on the vehicle and the driver; it also calculates conventional and new parameters of typical lane departure warning/lane keeping systems, like time to lane crossing or future lateral offsets. It is part of the situation analysis block and contributes to the final decision making.
The task of the decision system is to map lane keeping relevant driving situations to adequate system reactions. More precisely, it has the task to map between the output of the sensor system and the input of the actuator system. The decision system takes the sensor system output, analyses the respective data and decides what the current situation is. After this, it decides which action has to be undertaken in this situation. Finally, the action is synthesised, transformed into the respective data format and handed over as input data to the actuator system. The described transformation process is accompanied by a permanent self-assessment that decides how reliable and safe the decisions made by the system are. The decision system contains the following components: derivation and self-assessment. Each of these components has its own functionality; the components interact via specified interfaces which are defined by internal data models. The decision system contains the following functions: situation analysis modules using fuzzy techniques and the actual decision making task.
The task of the actuator system is to put into effect the action determined by the decision system. The actuator system manages driver warning, active steering and vehicle braking. It receives the action from the decision system in an adequate form, processes this information and delivers the control signals to the actuators (e.g. current, voltage and dB). The actuator system is able to work both in open and in closed loop to track the decision system outputs. The actuator system also performs self-assessment of the decision system actions and verifies the evolution of the vehicle dynamics. Safety oriented actions will be adopted.
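To make the chain from the TEM parameters to the fuzzy situation analysis concrete, here is an illustrative sketch. The time-to-lane-crossing formula is the conventional one; the triangular membership shapes and all thresholds are our assumptions, not SAFELANE's actual design.

```python
def time_to_lane_crossing(lateral_offset_m: float,
                          lateral_speed_mps: float,
                          lane_width_m: float = 3.5) -> float:
    """Seconds until the vehicle reaches the lane border.

    lateral_speed_mps is the drift rate towards the nearer border;
    returns infinity if the vehicle is not drifting outwards.
    """
    margin = lane_width_m / 2.0 - abs(lateral_offset_m)
    if lateral_speed_mps <= 0.0:
        return float("inf")
    return max(0.0, margin) / lateral_speed_mps

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership function with its peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def situation_analysis(tlc_s: float) -> dict:
    """Fuzzy membership degrees for a TLC measurement (illustrative sets)."""
    return {
        "critical": tri(tlc_s, -1.0, 0.0, 1.5),
        "warning":  tri(tlc_s, 0.5, 1.5, 3.0),
    }

# A vehicle 0.75 m off-centre, drifting outwards at 0.5 m/s, crosses
# the border of a 3.5 m lane in 2.0 s.
tlc = time_to_lane_crossing(0.75, 0.5)
degrees = situation_analysis(tlc)
```

The decision making task would then combine such membership degrees, via fuzzy rules, with the self-assessment before triggering the actuator system.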
Fig. 6. Future integration of lateral control functions
5 Conclusions
Two heavy trucks (a Volvo and an IVECO truck) and a passenger car will be equipped with the SAFELANE adaptive lane keeping system based on the vision system and the actuator control system. The equipped vehicles serve test and validation purposes: they will be used for testing the functionality and performance of the developed system and for evaluating driver acceptance of the human-machine interface. SAFELANE belongs to the cluster of Lateral Support Functions of the PREVENT Integrated Project; the same cluster also contains the LATERALSAFE sub-project. The combination of the complementary functions Adaptive Lane Keeping Support (SAFELANE) and Lane Change Assistance/Lateral Collision Warning (LATERALSAFE) will in the future provide an integrated driver support system handling critical lateral situations in all traffic scenarios (Fig. 6). The project is coordinated by Volvo Technology; it started in February 2004 and will finish in January 2007.
Aris Polychronopoulos Institute of Communications and Computer Systems 9, Iroon Polytechniou St. 15573, Athens Greece
[email protected] Nikolaus Möhler Fraunhofer Institute for Transportation and Infrastructure Systems Zeunerstrasse 38, 01069 Dresden, Germany
[email protected]
Sharmila Ghosh Delphi Delco Electronics Europe GmbH Vorm Eichholz 1, 42119 Wuppertal, Germany
[email protected] Achim Beutner Volvo Technology Corporation Götaverksgatan 10, SE-405 08 Göteborg, Sweden
[email protected] Keywords:
lane keeping support, system design, perception layer, situation analysis, decision system, sensor and actuator control
Intelligent Braking: The Seeing Car Improves Safety on the Road

R. Adomat, G. Geduld, M. Schamberger, A.D.C. GmbH
J. Diebold, M. Klug, Continental Automotive Systems

Abstract

Over the past few years, the performance of active safety systems (chassis management, steering, brake control and stability systems) and passive safety systems (seat belts, airbags, headrests and the passenger safety cell) has improved substantially, thanks mainly to electronics. However, since development activities in the two sectors – active and passive safety – have been kept largely separate, significant safety potential still remains untapped. The APIA project demonstrates the benefit of linking existing components in the vehicle: active and passive components are controlled based on the current accident risk. The test results of the APIA vehicle confirm the significant improvement in stopping distance and safety when the Safety Control Module is implemented. The demonstration vehicle is composed of existing components, and first production implementations will be seen in a few years.
1 Motivation
Thanks to the electronic stability program ESP, today's brakes can intervene automatically to prevent a large number of accidents as a vehicle approaches its lateral handling limits. Advanced anti-skid brake systems with brake assist functions and adaptive cruise control (ACC), which automatically applies the brakes to maintain a safe distance, give the driver greater control over the forward dynamics of the vehicle. And electronic control units for airbags, seat belts and rollover protection have significantly improved occupant protection over the past few years. The objective of the Active Passive Integration Approach (APIA) project is to demonstrate the benefit of linking existing components. In this way, for example, the information provided by one yaw-rate sensor may be used by both ESP and ACC, not only ensuring unparalleled safety but also cutting costs.
APIA will allow the driver to master impending hazards and use what time is available to minimize the risk of injury if – despite the driver’s best efforts – an accident becomes unavoidable.
Fig. 1. Phases of an accident
The key software component of APIA is the Safety Control Module, which collects and continuously evaluates the data received from all the individual safety systems and the environmental sensors available in the vehicle. For any given traffic situation, this module, implemented as an additional software module in the EBS control unit, determines a hazard potential which reflects the current accident risk. Active and passive safety components are controlled based on this risk.
Fig. 2. APIA system overview
2 APIA System Overview
An example of a networked APIA system is shown in Fig. 2, where the brake system, environmental sensors, force feedback pedal, door and roof modules, seat memory, reversible belt pretensioner and the airbag control module have been networked into an APIA system.
3 Environmental Sensors – the Eye of the Brake
Advanced environmental sensors will play a key role in the development of the car of the future, designed for accident avoidance and injury prevention. Sensor technologies based on radar, infrared and image processing are used for gathering data about the area around the vehicle. This data enables the safety control module to use the time period before a collision even more effectively, for example by appropriate pre-conditioning of the airbag activation or by pre-filling of the electronic brake system. Today, environmental sensors are widely used for ACC. The requirements for those radar or infrared sensors in terms of dynamic range, detection area and lateral accuracy are given by the ACC specification. ACC sensors can be used for APIA functionalities as well, but by optimizing the sensor specification for the different APIA approaches, a low-cost alternative to a high-end ACC sensor can be realized. One example of such a low-cost alternative is the infrared mid-range sensor KIS, which can be used for comfort as well as for APIA functionality. KIS observes the area in front of the vehicle up to 60 m, and this range is sufficient to calculate the current hazard potential. The reduced sensor range results in several advantages:
comfort applications such as low speed following can be implemented,
in combination with an ACC sensor, a high performance full speed range ACC can be offered (sensor fusion),
requirements for the vehicle integration are reduced (sensor alignment),
most attractive system costs.
In future, image-processing camera systems will allow an even more dramatic improvement in safety. These systems will not only be able to detect objects near a vehicle but also to classify them. Safety systems can then be activated as appropriate for a given situation, providing even more effective protection for vehicle occupants and other road users.
4 The Safety Control Module
The software component of APIA is the safety control module, which collects and continuously evaluates the data received from all the individual vehicle systems. Only the combination of the environmental information delivered by a distance sensor with the vehicle dynamics information given by the brake system and the driver interaction leads to a comprehensive safety strategy. A driver-accepted combination of safety measures is only possible through the evaluation of objective sensor and vehicle data in combination with data that represents the driver's intention (e.g. accelerator pedal travel). For any given traffic situation, the safety module determines a hazard potential which reflects the current accident risk depending on environmental and vehicle information. It is most advantageous to realize the safety control module as a centralized module. This module coordinates all safety measures – mostly the active safety measures – in advance of a crash. The central coordination ensures a plausible combination of the different measures in different situations, and the necessary interfaces are small.
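As a toy illustration of how such a hazard potential could be formed from a range measurement and driver input, consider the sketch below. The actual APIA fusion logic is not disclosed in the paper; the 4 s horizon, the weighting and the function name are our assumptions.

```python
def hazard_potential(distance_m: float,
                     closing_speed_mps: float,
                     accel_pedal: float) -> float:
    """Normalized accident risk in [0, 1].

    Combines a time-to-collision estimate from the distance sensor
    with the driver's accelerator pedal travel (0..1). Purely
    illustrative; the real module evaluates many more signals.
    """
    if closing_speed_mps <= 0.0:
        return 0.0  # the gap is opening, no collision course
    ttc_s = distance_m / closing_speed_mps
    risk = max(0.0, 1.0 - ttc_s / 4.0)  # assumed 4 s risk horizon
    if accel_pedal > 0.1:               # driver still accelerating
        risk = min(1.0, risk * 1.2)
    return risk
```

The staged response strategy described in the next section would then compare such a value against a set of calibrated thresholds.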
4.1 Staged response to accident risk
If the hazard potential reaches a defined limit, the safety control module initiates a staged hazard response strategy, adapted to the accident risk in each case. If two vehicles are driving nose to tail and the lead vehicle is forced to make an emergency stop, the scenario from the point of view of the second vehicle is as follows:

Stage 1: vehicle ahead brakes and slows
APIA warns the driver of a potentially dangerous situation by means of a visual cockpit warning (safety distance!) or an applied counterforce at the accelerator pedal. To set up an intuitively perceivable warning function, it is necessary to give the driver a hint of what to do without any interpretation effort (release the accelerator pedal). The advantage of a force feedback over acoustical or optical warning information is its immediately comprehensible meaning. The force feedback at the accelerator pedal is an appropriate way of telling the driver discretely that the current distance within the traffic situation is too dangerous.

Stage 2: vehicle ahead brakes abruptly, rapidly reducing the gap between the two vehicles
Stage 1 response, plus: The brake system is preconditioned autonomously by prefilling. This measure shortens the response time and overcomes the system-immanent free travel while the driver is still on the accelerator pedal; the driver is still able to overrule the system by actuating the accelerator pedal. The seat belts are reversibly pre-tensioned to take up the belt slack with low force at this stage. In a possible crash situation, the pre-tensioned belt leads to an optimized forward displacement of the occupant. The side windows and sunroof are closed to prevent objects from getting into the car and occupants from getting partially out of it.

Stage 3: vehicle ahead brakes very hard, dramatically reducing the gap between the two vehicles
Stage 1 and 2 responses, plus: APIA actively applies the brakes up to a deceleration of 0.3g to reduce the kinetic energy prior to driver braking, during the switching phase from accelerator to brake pedal. The reversible seatbelt pretensioners are activated with a higher force level; the belts are effectively fastened. The front passenger seats are adjusted to an optimal position for a possible crash, and the inclination of the seat cushions is adjusted to avoid submarining.

Stage 4: emergency stop by vehicle ahead; the driver of the second vehicle reacts by pushing the brake pedal
Stage 1 to 3 responses, plus: In this stage the extended brake assist is activated. It is an additional brake support realizing an optimized reduction of kinetic energy during driver braking. Depending on the criticality of the current traffic situation, the driver is supported in braking: the deceleration that would be needed in addition to the driver's braking to mitigate an impending crash is added. As the brake system has already been prefilled in the former stages, maximum braking pressure can rapidly be applied, ensuring the shortest possible stopping distance. The reversible seatbelt pretensioners are activated with maximum force, positioning the occupants safely in their seats.
Stage 5: despite emergency braking, a collision occurs
Stage 1 to 4 responses, plus: The airbag (non-reversible restraint system) is preconditioned, depending on the crash scenario (severity derived from closing velocity & impact direction), for an adapted deployment (smart airbag), based on a combination of environmental information and the onboard standard high-g acceleration sensors. This adapted deployment leads to optimized occupant safety and mitigates injuries. The advantages of using the closing velocity instead of acceleration satellites are:
i. improved timing adapted to crash severity:
earlier triggering of the airbag system possible for pole, underride and high speed crashes
later triggering of the airbag system possible for low speed crashes
decreased aggressiveness of airbag deployment
ii. improved fire/no-fire decision:
lower number of (unnecessary) low-speed deployments, minimized risk
reduction of misfires on rough roads, curb collisions, pot holes
The full range of responses described above is only available if the vehicle is fitted with a brake system with ESP and an environmental sensor (e.g. an ACC environment sensor). These safety-improving functions are also realizable with a low cost sensor system like the HIS/KIS; both infrared and radar distance sensors can be used for this functionality.
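The cumulative nature of the five stages can be captured in a small table-driven sketch; the stage contents are paraphrased from the text above, and in a real system each stage would be entered at a calibrated hazard potential threshold.

```python
# Measures newly added at each stage; every stage keeps all earlier ones active.
STAGE_MEASURES = {
    1: ["visual warning", "force feedback pedal"],
    2: ["brake prefill", "belt pre-tensioning (low force)",
        "close windows/sunroof"],
    3: ["autonomous pre-braking (0.3 g)",
        "belt pre-tensioning (higher force)", "seat positioning"],
    4: ["extended brake assist BA+", "belt pre-tensioning (maximum force)"],
    5: ["adapted airbag deployment"],
}

def active_measures(stage: int) -> list:
    """All measures active at a given stage (stage N = responses 1..N)."""
    result = []
    for s in range(1, stage + 1):
        result.extend(STAGE_MEASURES[s])
    return result
```

This additive structure mirrors the "Stage 1 to N responses, plus" wording: escalating the hazard level never cancels a milder measure, it only adds stronger ones.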
Fig. 3. Force feedback pedal functions
5 Reduced Stopping Distance
The most effective safety measure to reduce the accident risk or the severity of an accident is to reduce the crash energy prior to the accident. To achieve this, both an earlier driver reaction and autonomous precrash braking lead to a shorter reaction time and an energy reduction. Based on the information about the environmental traffic situation, derived from the environmental sensor and evaluated by the safety control module, the driver is assisted in coping with the traffic hazard by a haptical warning and three increasing brake interventions.
5.1 Haptical Warning
The first active interaction of a precrash system should be a driver warning such that the driver himself can react immediately to the traffic hazard and avoid the crash by his own reaction. Therefore, an intuitively perceivable warning is most advantageous in terms of reducing the reaction time. A quick and perceivable warning can be achieved if only little interpretation effort is needed for the driver to get the meaning of the warning. A haptical feedback on the accelerator pedal, indicating to the driver by a counterforce that he should reduce the vehicle speed, is perceived more quickly than a warning lamp or a warning sound, which first has to be associated with the current risk, interpreted, and transferred into the appropriate action. The warning with a counterforce at the Force Feedback Pedal (FFP) already leads the driver to the appropriate action of reducing the speed. Figure 3 shows an integration of the FFP into a standard accelerator pedal, which will first enter production at the end of 2005.
5.2 Brake Interventions
In addition to the driver warning, three increasing brake interventions are initiated. First, during the driver warning, while the driver is still on the accelerator pedal, the brake system is preconditioned by pre-filling the brake calipers with a brake pressure of at most 5 bar. This does not lead to a noticeable brake torque, but overcomes all the free travel in the brake system and minimizes the clearance, allowing a quick response of the brake system when the driver hits the brake pedal in an emergency.
If the driver then reacts by releasing the accelerator pedal, the brake pressure is increased, depending on the accident risk, up to 0.3g deceleration. This autonomous pre-braking prior to the driver's brake application is the second stage of the increasing brake assistance. When the driver steps on the brake pedal, APIA amplifies the driver input on the brake pedal according to the accident risk and the deceleration required in the current traffic situation. This extended brake assist BA+ is the third stage of brake intervention, allowing even a non-skilled driver easy emergency braking with low application forces during a traffic hazard. Figure 4 shows, in principle, a comparison between three different brake systems. The black diagram shows a system without any support; the red diagram shows a conventional brake assist, which analyzes the brake pedal travel and application speed to support the driver in a panic application. This already yields a significant reduction of the braking distance, but a high potential to reduce the overall stopping distance remains untapped. This is shown in the orange diagram with APIA pre-filling, pre-braking and BA+.
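The effect of prefill and pre-braking on the total stopping distance can be estimated with elementary kinematics; the numeric values below are assumptions for illustration, not the measured APIA figures.

```python
def stopping_distance_m(v0_mps: float, dead_time_s: float,
                        decel_mps2: float) -> float:
    """Distance travelled during the dead time (reaction, pedal change,
    brake response) plus the pure braking distance v0^2 / (2 a)."""
    return v0_mps * dead_time_s + v0_mps ** 2 / (2.0 * decel_mps2)

V0 = 100.0 / 3.6  # 100 km/h in m/s

# Assumed: 1.0 s total dead time without support, 9 m/s^2 full braking.
baseline = stopping_distance_m(V0, 1.0, 9.0)

# If prefill and pre-braking effectively cut the dead time by ~0.2 s,
# roughly 5.6 m of stopping distance are saved at 100 km/h.
with_apia = stopping_distance_m(V0, 0.8, 9.0)
saving = baseline - with_apia
```

Every tenth of a second of dead time removed saves about 2.8 m at 100 km/h, which is why the early measures of stages 2 and 3 pay off before the driver even touches the brake pedal.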
Fig. 4.
APIA active safety strategy
5.3
Measurement Results
To quantify the efficiency of these brake interventions, a set of comparable measurements with different driver types was carried out.
Intelligent Braking: The Seeing Car Improves Safety on the Road
The target is to estimate the maximum possible benefit of these brake interventions in terms of the reduction of stopping distance. This can be investigated most efficiently with real vehicle tests on a test track, assuming the most critical traffic situation with an unavoidable crash: a trigger signal activates the system and prompts the driver reaction, instead of trying to reproduce a specific traffic situation with a preceding car over several hundred measurements. The generated trigger signal simulates the secure detection of the traffic situation by the environmental sensor and lets both the APIA system and the driver react to it in parallel. This ensures the comparability of the measurements without a large effort for the test environment. The benefit depends strongly on the driver's behavior. Therefore the measurements are separated into three different driver brake profiles: first a test driver, who applies the brake pressure himself as quickly as possible and almost stepwise; second a normal driver with an almost constant brake pressure build-up; and third a hesitating driver with a delayed application of the brake pedal. In reaction time and pedal change time, all three driver types are almost comparable, as shown in figure 5.
Fig. 5.
APIA test profiles
The hesitating driver could be even worse in reaction and pedal change time, which leads to more system support. The measurements were made with a mid-size production car at an initial speed of 100 km/h.
Figure 6 shows the maximum benefit based on a set of 138 comparable measurements.
Fig. 6.
APIA results
The production car equipped with ABS, ESP and conventional brake assist needs on average between 6 and 13 m more stopping distance than the car equipped with APIA and its prefill, prebrake and BA+ precrash functions. This means that the APIA car can avoid the crash, while the production car still crashes with a remaining crash energy of 18-34%, corresponding to a crash speed of 44-55 km/h.
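The relation between this stopping-distance deficit and the remaining crash severity follows from basic kinematics: a car that still needs Δd metres of braking when the APIA car has already stopped hits the obstacle at v = √(2·a·Δd). A minimal sketch, assuming an illustrative full deceleration of about 1 g (the paper does not state the deceleration profile):

```python
import math

def remaining_crash(delta_d_m, v0_kmh=100.0, decel=9.81):
    """Crash speed [km/h] and kinetic energy fraction left when a car
    still needs delta_d_m more stopping distance at deceleration decel."""
    v0 = v0_kmh / 3.6                              # initial speed in m/s
    v_crash = math.sqrt(2 * decel * delta_d_m)     # speed at impact, m/s
    energy_fraction = (v_crash / v0) ** 2          # kinetic energy ratio
    return v_crash * 3.6, energy_fraction

for dd in (6.0, 13.0):
    v, e = remaining_crash(dd)
    print(f"deficit {dd:4.1f} m -> crash at {v:4.1f} km/h, {e:.0%} energy left")
```

With a deficit of 6-13 m this yields impact speeds of roughly 39-58 km/h and 15-33% remaining energy, the same order of magnitude as the reported 18-34% and 44-55 km/h; the exact figures depend on the actual deceleration profile.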
Fig. 7.
APIA test results: reduction of stopping distance
A more detailed picture (figure 7) shows the benefit in stopping distance for the three driver types depending on the type of APIA brake support.
APIA type I (0-0-0) means no brake support in addition to the production system. APIA type II (4-4-4) means only APIA prefill is investigated. APIA type III (4-4-120) means APIA prefill and BA+ are activated. APIA type IV (4-17-17) means only APIA prefill and prebrake are active, and with APIA type V (4-17-120) all three APIA brake functions support the driver. The test driver can obviously only be supported by a prefilling of the brake system: no additional pressure build-up by the hydraulic piston pump of the brake system can outpace the fast and powerful brake pedal application of the test driver. Even so, the preconditioned brake system alone gives him a benefit of about 6 m less stopping distance. The driver type benefiting most is obviously the hesitating driver. With his slow and delayed application of the brake pedal, he profits most from APIA pre-braking and the extended brake assist (BA+), with up to 13 m less stopping distance. Beyond the above results, the measurements also show that even with a detection range of 50 m, a crash of the host car running at 100 km/h into a preceding car running at 40 km/h can be fully avoided. So even a mid-range sensor system can be used to implement the reduced stopping distance functionality.
6
Outlook
Looking ahead, passive safety systems will be activated much more rarely than at present. The "seeing" car of the future will feature onboard intelligence, data interchange with other vehicles and telematics information, allowing it to actively avoid a large number of potential accidents. Although it will be ten or twenty years before customers can buy a vehicle of this type, the first APIA functions, which will already bring significant safety improvements, will reach the production stage in a few years' time.
Rolf Adomat, Georg-Otto Geduld, Michael Schamberger A.D.C. GmbH Kemptener Str. 99 D-88131 Lindau Germany
[email protected] Jürgen Diebold, Michael Klug Continental Automotive Systems Guerikestr. 7 D-60488 Frankfurt Germany
[email protected] Keywords:
ACC, active safety, adaptive cruise control, APIA, braking distance, distronic, drive-by-wire, driver assistance systems, emergency braking, full speed range ACC, HIS, headlamp IR sensor, image sensor, infrared sensor, key IR sensor, KIS, lane departure warning, lane keeping support, LDW, LKS, long range radar, mid-range sensor, multi-use of sensors, passive safety, pre-crash detection, radar sensor, safety systems, sensor technology, short range radar sensor, stop-and-go support, stopping distance, vision based systems, vision enhancement
Roadway Detection and Lane Detection using Multilayer Laserscanner K. Dietmayer, N. Kämpchen, University of Ulm K. Fürstenberg, J. Kibbel, W. Justus, R. Schulz, IBEO Automobile Sensor GmbH Abstract Multilayer laserscanner sensors measure a precise range profile of road traffic scenes. Moreover, they are able to distinguish different reflectivities of object surfaces. In addition to object detection and object tracking, these sensory features enable a roadway and lane detection function without requiring additional sensors. A special multilayer laserscanner prototype has been developed for this purpose. The roadway detection is based on identifying changes in the distance profile pattern at the borders of roadways. A classification and tracking of reflection posts extends the effective range of the roadway detection and enhances its robustness. The lane detection makes use of the differing reflectivity of lane markings and road surface. It operates within a distance range of 4 m up to 30 m and has been tested in highway scenarios.
1
Introduction
Reliable information about the lanes in which the ego-vehicle and other road users are currently driving is one of the crucial factors for future driver assistance and safety systems. It allows a localisation of the ego-vehicle in its lane, which is necessary for lane departure warning. The localisation of other road users with respect to the ego-vehicle's lane is likewise mandatory to assess the hazard of driving situations, e.g. the detection of cut-ins. Although future digital maps will include more detailed information on road construction and lanes, a self-localisation based on onboard sensors is still worthwhile to overcome the limitations of DGPS with respect to position accuracy or temporal changes in map data due to road works. Research on lane and roadway recognition based on video image processing started over 20 years ago with the EU projects PROMETHEUS and DRIVE, but is still an ongoing topic with respect to robustness under adverse weather conditions, e.g. [1], [2], [3] and [4]. Nevertheless a first simple application based on these features, namely Lane Departure Warning (LDW), is already in serial production. Scanning LIDAR sensors (laserscanners) measure a precise range profile of their environment. As a consequence, they are well qualified for automotive applications like pre-crash, AEB or pedestrian protection, where moving objects must be detected and tracked with high reliability and their geometrical dimensions should be known. Some examples are given in [5], [6] and [7]. Moreover, laserscanners are able to determine the different reflectivities of object surfaces, which allows for the differentiation between lane markings and the road surface. These sensory features initiated investigations on roadway detection and lane detection using a laserscanner without an additional video sensor. This paper presents algorithms for roadway and lane detection as well as first results achieved with the prototype sensor setup.
Fig. 1.
ALASCA laserscanner of IBEO Automobile Sensor GmbH.
2
Sensorial Setup
The prototype sensor setup is based on the standard multilayer laserscanner ALASCA (Automotive LAserSCAnner) of the company IBEO Automobile Sensor GmbH (figure 1). This sensor acquires distance profiles of the vehicle's environment over up to a 270° horizontal field of view at a variable scan frequency of 10 to 40 Hz. Distances are measured by time-of-flight evaluation, and the scan planes are realized by mechanical scanning. At 10 Hz scan frequency the angular resolution is 0.25°, with a single-shot measurement standard deviation of ±5 cm, enabling a precise distance profile of the vehicle's environment. The laserscanner ALASCA uses four scan planes in order to compensate for the pitch angle of the ego-vehicle. It has been optimised for automotive application and performs robustly even in adverse weather conditions. For the project it is embedded in the front bumper of the test vehicle (figure 2), where a horizontal field of view of 160° is obtained.
Fig. 2.
Test vehicle at the University of Ulm. The multilayer laserscanner prototype including additional mirrors is embedded in the front bumper.
To enable lane detection, the standard measurement setup described above was extended by two additional scan planes directed towards the road surface (figure 3). These additional scan planes are created by a special mirror arrangement, which redirects the laser pulses usually occluded by the vehicle. Under normal driving conditions, the first of these planes hits the road surface at approximately 5 m, the other at approximately 15 m in front of the car. The additional scan planes each cover a horizontal opening angle of 35°. The other four scan planes are adjusted horizontally and cover a vertical opening angle of 3.2°; they can be used as usual for object detection and object tracking. The range measurements of the multilayer laserscanner are acquired together with the sensor signals from the dynamic stabilisation system (ESP). These data, namely the individual wheel speeds, yaw rate, lateral acceleration and steering angle, feed a dynamic ego-motion estimation module used for ego-motion compensation.
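Accumulating road-surface points from several scans requires transforming older points into the current vehicle frame. A minimal 2D sketch of such an ego-motion compensation, assuming a simple constant speed and yaw rate between two scans (function and parameter names are illustrative, not from the paper):

```python
import math

def compensate(points, v, yaw_rate, dt):
    """Transform 2D scan points from the previous vehicle frame into the
    current one, given speed v [m/s], yaw rate [rad/s] and scan interval dt [s]."""
    dpsi = yaw_rate * dt                    # heading change between the scans
    dx = v * dt * math.cos(dpsi / 2.0)      # ego displacement (midpoint rule)
    dy = v * dt * math.sin(dpsi / 2.0)
    c, s = math.cos(-dpsi), math.sin(-dpsi)
    out = []
    for x, y in points:
        # shift into the new origin, then rotate by the heading change
        xs, ys = x - dx, y - dy
        out.append((c * xs - s * ys, s * xs + c * ys))
    return out

# straight driving at 20 m/s: a point 10 m ahead appears 2 m closer after 0.1 s
print(compensate([(10.0, 0.0)], v=20.0, yaw_rate=0.0, dt=0.1))  # -> [(8.0, 0.0)]
```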
Fig. 3.
Schematic visualization of the sensorial setup: The multilayer laserscanner ALASCA is embedded in the front bumper. The four horizontally adjusted scan planes are used for object detection and object tracking. In addition to the standard ALASCA laserscanner there are two more scan planes directed to the road surface.
3
Lane Detection
3.1
Original Data
The majority of the road surface measurements are caused by the lane markings due to their higher reflectivity compared to asphalt. If the sensitivity of the receiver channel is adapted correctly, only measurements from lane markings become visible, as shown in figure 4. The distance measurements stemming from the additional scan planes are coloured red and blue. They reveal a cut-out of the lane markings and the borderline of the emergency lane. The borderline is detected by both additional planes, each produced by a separate mirror, and therefore also appears discontinuous. The distance measurements coloured black stem from the regular scan planes, which are adjusted parallel to the roadway surface with a vertical opening angle of 3.2° (see figure 3). These regular scan planes detect the car ahead and the crash barriers bordering the roadway.
Fig. 4.
One unprocessed scan utilizing the two additional scan planes for lane detection. The measurements from the closest scan plane are coloured blue, the measurements from the distant scan plane are coloured red. All distance measurements generated by the regular scan planes are coloured black. Refer to figure 3 for details of the setup.
3.2
Lane Detection using Histograms
Obviously, the few measurements in one scan stemming from lane markings are not sufficient to identify lane markings and lane widths reliably. To meet this challenge, an advanced algorithm has been developed which evaluates the histogram of the occurrence of measurements accumulated over multiple scans. The following algorithm for lane detection was developed with data acquired by a normal four-layer laserscanner, where all four scan planes were directed towards the road surface. Current work adapts and optimises the algorithm for the described laserscanner setup with mirrors. One example using scan data acquired during a drive on a three-lane highway is shown in figure 5.
Fig. 5.
Histogram of the y-coordinates of distance measurements from the road surface accumulated over eight scans (green). The bin width of the histogram is 0.1 m. The original measurements (red dots) were acquired on a three-lane highway with an extra emergency lane.
The scan data were taken from eight consecutive scans. The visualisation in figure 5 represents the coordinates of the latest scan, with all previous scans ego-motion compensated. The histogram reveals the three lanes and the emergency lane as well as their positions relative to the ego-vehicle. Unfortunately, not all detected reflections from the road surface are caused by lane markings; some originate from dirt or other small objects on the road. However, as these disturbances do not occur as regularly as lane markings do, they cause only additive noise in the calculated histogram, which can be eliminated by an adaptive threshold. This threshold can be determined either from the histogram itself or from a gradient histogram, which contains the differences of two consecutive histogram bins. The advantage of using the gradient histogram is that larger contiguous areas of disturbances, caused by dirt or the grass strip near the emergency lane, do not gain a significant influence on the threshold. Figure 6 gives an example. Most of the disturbances stem from dirt on the right lane; as a consequence, the noise in the corresponding part of the histogram is significant. The red-coloured parts of the histogram denote the detected noise, which is not taken into account for further data processing.
Potential lane markings are represented in the histogram by local maxima enclosed by two local minima lying above the noise floor. Due to inevitable errors in the ego-motion compensation, local maxima may lie very close to each other. An assessment of the inaccuracy of the ego-motion compensation and a priori knowledge about the construction of roadways allows the separation or fusion of such ambiguous peaks. Figure 7 gives a detailed view of the histogram from figure 6. The peaks on the left side are kept separate because their distance is too large to be caused by errors of the ego-motion compensation; indeed, the leftmost peak is caused by measurements on the crash barrier. In contrast, the different peaks on the right side are merged, as their distance is small enough to originate from the non-ideal ego-motion compensation. The width of the rectangles in figure 7 denotes the positions of the enclosing local minima. All distance measurements lying inside such an area are assumed to originate from lane markings.
Fig. 6.
Histogram of a highway scenario with additive noise caused by dirt or other smaller objects on the road surface. The region of interest is chosen between 0m and 30m in driving direction.
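The accumulation and thresholding step can be sketched as follows. The 0.1 m bin width matches figure 5, while the concrete threshold rule used here (a multiple of the median absolute bin-to-bin gradient) is only one plausible realisation of the adaptive threshold described above:

```python
def lane_marking_bins(y_coords, bin_width=0.1, y_min=-10.0, y_max=10.0, k=3.0):
    """Histogram the lateral (y) coordinates of road-surface echoes accumulated
    over several ego-motion-compensated scans and return the bins rising above
    an adaptive noise threshold derived from the gradient histogram."""
    n_bins = int((y_max - y_min) / bin_width)
    hist = [0] * n_bins
    for y in y_coords:
        if y_min <= y < y_max:
            hist[int((y - y_min) / bin_width)] += 1
    # gradient histogram: differences of two consecutive histogram bins
    grad = [abs(hist[i + 1] - hist[i]) for i in range(n_bins - 1)]
    med = sorted(grad)[len(grad) // 2]   # robust estimate of the noise level
    threshold = k * max(med, 1)          # assumed threshold rule (illustrative)
    return [i for i, c in enumerate(hist) if c > threshold]
```

With synthetic data containing two dense marking clusters and a few scattered noise points, only the two marking bins survive the threshold.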
Fig. 7.
Detailed histogram of figure 6. Potential lane markings are indicated by rectangles.
In a last processing step, a linear regression is applied to these selected measurements. The calculated regression lines describe mathematically the actual orientation of the lanes with respect to the ego-vehicle; this processing step is visualized in figure 8. Evaluating the regression lines also allows determining the width of the lanes and the position of the ego-vehicle within its lane. The lane detection algorithm is stabilized by applying a temporal filtering.
Fig. 8.
Determination of the lanes by linear regression. For calculation of the regression line the measurements lying in the regions marked by rectangles are used.
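A least-squares line fit over the marking points of one histogram peak then yields the marking's lateral offset and orientation. A minimal sketch in pure Python (the parameterisation y = b0 + b1·x, with x in driving direction, follows the paper's coordinate convention; the sample values are illustrative):

```python
def fit_marking(points):
    """Least-squares regression y = b0 + b1*x over the (x, y) measurements
    assigned to one lane marking; b0 is the lateral offset at the sensor,
    b1 the marking's slope relative to the driving direction."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b0 = (sy - b1 * sx) / n
    return b0, b1

# two parallel markings: lane width and in-lane offset follow directly
left = fit_marking([(5.0, 1.9), (10.0, 1.9), (15.0, 1.9)])
right = fit_marking([(5.0, -1.6), (10.0, -1.6), (15.0, -1.6)])
width = left[0] - right[0]            # 3.5 m lane width
offset = (left[0] + right[0]) / 2.0   # lane centre 0.15 m left of the sensor axis
```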
3.3
Results
The lane detection algorithm was tested during multiple drives on three-lane and two-lane highways. To allow a quantitative assessment of the methods, road sections were chosen whose construction details were known exactly. The calculation of the offset of the ego-vehicle was tested during short drives straight ahead. As no ground truth data were available, no quantitative results can be given for this state variable. The results are visualised in figures 9 to 11.
Fig. 9.
Detection rate of the lane markings in different highway scenarios. There are three-lane roads (scenarios 16, 20 and 22) and two-lane roads (scenarios 26 and 28). The results for the ego-lane are shaded grey. A minus indicates that this lane did not exist in the respective road section.
Figure 9 summarizes the detection rates achieved in different highway scenarios. The detection rate of the ego-lane is above 90%. However, in some cases the detection rate for the other lanes decreases significantly.
Fig. 10. Test results for determination of the width of the ego-lane.
The width of the ego-lane is determined accurately, with a small standard deviation of the estimate (figure 10). As a consequence, it is likely that the absolute position of the ego-vehicle in the ego-lane can be determined with the same absolute accuracy. As no ground truth data could be acquired in a moving vehicle during these first experiments, only relative changes were evaluated for the offset. The experiment was carried out on a short section of road while the car was moving straight ahead (offset = 0 m). The results are given in figure 11.
Fig. 11. Test results for determination of the offset in the ego-lane when driving straight in the lane.
It turned out that the absolute deviation and the standard deviation are relatively small. It must be mentioned that these are first results; the evaluation and optimization of the algorithms is an ongoing process.
4
Roadway Detection
4.1
General Concept
The roadway detection is based on an identification of pavement changes at the borders of roadways. These are caused by plants or grass, which are detected by the laserscanner. Using special optimisation techniques, a 2D model of the course of the road can be fitted to these measurements, giving an estimate of the course of the roadway up to a distance of 40 m. This range limit can be extended if reflection posts are positioned at the borders of the road. As reflection posts are easily detectable by a laserscanner due to their good reflectivity, they are recognised, classified and tracked at distances of up to 80 m. The roadway is described by two parallel parabolas. It is therefore characterised by the width w of the roadway, the offset a0 of the ego-vehicle relative to the roadway centre and the parameter a2 describing the curvature. More complex model structures like clothoid models have been shown to worsen the results, as the additional parameters are very hard to determine. Moreover, it has been shown that especially for short-range modelling a parabola approach is the adequate choice. In consequence, the middle, the left, and the right side of the roadway are described by the following simple equations
fmiddle(x) = a0 + a2·x²
fleft(x) = a0 + w/2 + a2·x²
fright(x) = a0 − w/2 + a2·x²
(1)
where x is the coordinate axis in driving direction. A typical measurement scenario on a country road is shown in figure 12. The coordinate system is referred to the ego-vehicle: its origin is the attachment point of the embedded laserscanner and therefore moves with the ego-vehicle in both position and orientation. Distance measurements resulting from the bushes on the right and left side of the roadway can be detected at distances up to 35 m (scattered black dots in figure 12). At longer distances, the measurement pulses of the laserscanner are mirrored away due to the angle of incidence, i.e. the pulse energy reflected diffusely becomes too low to be detected by the laserscanner's receiver unit. However, the reflectors of reflection posts or on the rear of vehicles are still detectable due to their higher reflectivity. In figure 12, a pair of reflection posts is visible at a distance of approximately 80 m, and the rear of a car is detected at a distance of 95 m. The detection and classification of reflection posts using laserscanner data works as follows: if two measurements or small clusters of measurements are detected at distances larger than 50 m, they must stem from reflectors. If, in addition, the distance between a pair of reflectors is nearly identical to the present estimate of the roadway's width and the connecting line between the respective reflectors is perpendicular to the orientation of the roadway, a potential pair of reflection posts has been detected. This hypothesis is verified within the next scans by tracking the reflection post pair under the assumption of static objects. If the objects are retrieved in the next scans at stationary positions, with the ego-motion compensated in the distance measurements, they are classified as reflection posts. The tracked reflection post pairs are used to stabilize the roadway detection.
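The pairing test described above can be sketched as follows (the thresholds are illustrative assumptions, not values from the paper):

```python
import math

def is_post_pair(p1, p2, road_width, road_heading,
                 min_range=50.0, width_tol=0.5, angle_tol=0.2):
    """Check whether two far-range reflector detections p1, p2 = (x, y)
    qualify as a candidate pair of reflection posts: both beyond min_range,
    separated by roughly the roadway width, with their connecting line
    perpendicular to the roadway heading (angles in radians)."""
    if math.hypot(*p1) < min_range or math.hypot(*p2) < min_range:
        return False
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if abs(math.hypot(dx, dy) - road_width) > width_tol:
        return False
    # angle between the connecting line and the roadway heading should be ~90 deg
    angle = math.atan2(dy, dx) - road_heading
    angle = math.atan2(math.sin(angle), math.cos(angle))  # normalise to [-pi, pi]
    return abs(abs(angle) - math.pi / 2) < angle_tol

# posts left and right of a straight road (heading along x) at 80 m
print(is_post_pair((80.0, 3.5), (80.0, -3.5), road_width=7.0, road_heading=0.0))
```

A positive pair is then confirmed only if it reappears at a stationary, ego-motion-compensated position in the following scans, as described above.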
Fig. 12. Roadway detection using unprocessed laserscanner measurements combined with a tracking of reflection posts. The coordinate system is referred to the ego-vehicle.
The concept of roadway detection based on laserscanner data is therefore a fusion approach: signal evaluation of unprocessed distance measurements at short range is combined with information generated by reflection post tracking. The basic algorithm for road detection using unprocessed distance measurements from a laserscanner has already been described in [8].
4.2
Determination of Width and Offset
In the basic approach, the width of the roadway and the offset of the ego-vehicle relative to the roadway boundaries are determined by observing the unprocessed measurements on both sides of the roadway middle. This is visualised in figure 13.
Fig. 13. Setup to determine the approximate width of the roadway and the offset of the ego-vehicle. All calculations are performed in the coordinates of the embedded laserscanner.
The algorithm starts at the origin of the laserscanner coordinate system. It determines the two measurements, left and right, with the lowest absolute y-coordinate value. These two values determine the current width of the roadway and the offset of the ego-vehicle relative to the roadway boundaries. To limit the impact of bends, this search is performed only in the lower view range. The bound of the lower view range is adapted depending on the current curvature of the roadway model and on the number of measurements available [8]. For noise reduction and additional robustness, a history of road width and position values is filtered by calculating a weighted mean value. The filter lengths differ, because the width changes slowly while the relative position of the car to the middle of the road changes quickly. The highest weight is given when the current estimate of the width is close to the mean estimate. A medium weight is given to width values smaller than the mean estimate, and a small weight to width values significantly larger than the mean estimate, which occur frequently and would otherwise lead to a wrong result. This weighting reflects the strong non-linearity of the measurements with respect to the model parameter. However, it assumes that all moving objects on the road have been detected and eliminated from the measurement set beforehand; moreover, distance measurements lying on the roadway surface have to be marked and excluded from this evaluation. If reflection posts are tracked, the distance between a pair of reflection posts gives an additional, very reliable estimate of the actual width of the roadway. Therefore the present estimate of the roadway's width is enhanced by calculating a weighted average of both results, with the input from the reflection post tracking given the higher weight. However, the offset of the ego-vehicle relative to the boundary of the roadway is still determined without using the reflection post tracking.
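The asymmetric weighting of width estimates can be sketched as follows (the concrete weights are illustrative assumptions; the paper specifies only their ordering):

```python
def filter_width(history, new_width, tol=0.3):
    """Weighted mean over a history of roadway-width estimates. Values close
    to the running mean get the highest weight, smaller values a medium
    weight, and significantly larger values a small weight, since
    over-estimates of the width occur frequently."""
    history.append(new_width)
    mean = sum(history) / len(history)

    def weight(w):
        if abs(w - mean) <= tol:  # close to the mean: full trust (assumed 1.0)
            return 1.0
        if w < mean:              # smaller than the mean (assumed 0.5)
            return 0.5
        return 0.1                # much larger: frequent outlier (assumed 0.1)

    total = sum(weight(w) for w in history)
    return sum(weight(w) * w for w in history) / total
```

A single large outlier, e.g. `filter_width([7.0, 7.1, 6.9], 9.5)`, pulls the estimate only slightly above 7 m instead of towards 9.5 m.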
4.3
Determination of the Curvature
The concept of the basic algorithm using unprocessed laserscanner measurements is visualised in figure 14.
Fig. 14. Setup to determine the approximate curvature of the roadway. All calculations are performed in the coordinates of the embedded laserscanner.
A finite set of curvatures, represented by a variation of the parameter a2, is generated around the present estimate in order to form discrete roadway hypotheses. The functions fleft(x, a2,i) and fright(x, a2,i) describe the borders of the roadway hypothesis for a specific curvature a2,i from the set. The number of measurements lying inside each of these roadway hypotheses is then counted, and the curvature a2,i with the lowest count is the best estimate of the curvature from the current scan. A moving average filter reduces the measurement noise. However, this method is only applicable if sufficient distance measurements are available at far range. Unfortunately, this is seldom the case, because at far range the measurement pulses of the laserscanner are mirrored away due to the angle of incidence. Moreover, other road users occlude the roadway boundaries, making distance measurements on them impossible, especially if the traffic situation becomes denser. As a consequence, the basic algorithm often yields an unreliable estimate of the curvature. If the field of view is limited, the best strategy to handle this problem is to set the parameter a2 to zero. If reflection post tracking is possible, this limitation in range can be overcome. Depending on the number of visible reflection post pairs, fitting the functions fleft(x) and fright(x) to these object positions by interpolation or a least-squares fit yields a much more reliable estimate of the parameter a2. However, this estimate also has to be filtered, due to the uncertainty of the reflection post positions, which is mainly caused by non-ideal ego-motion compensation and by occlusions due to other objects on the roadway. Figure 12 shows an example at one point in time. Here, the left and right boundaries of the estimated roadway are plotted in the measurement scan. The curvature has been calculated using reflection post tracking, the width of the roadway by a fusion of the results from reflection post tracking and the basic algorithm, and the offset using the basic algorithm. The boundaries do not exactly coincide with the reflection post positions at any one instant; this is a result of the applied filtering. The algorithm was tested on country roads and on highways. It turned out that on highways the determination of the curvature is less accurate due to the wider roadway and the limited detection range of the laserscanner. This is not a real drawback, as curves on highways are not tight, so a straight-road assumption (setting a2 to zero) will not cause significant errors. On country roads, however, good results even for the curvature estimation could be achieved if reflection posts were present.
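The hypothesis search can be sketched as follows. Measurements here are roadside returns (bushes, posts), so a wrong corridor hypothesis sweeps over them while the correct one leaves them on its boundary; the candidate set is illustrative:

```python
def estimate_curvature(measurements, a0, width, candidates):
    """Pick the curvature a2 whose roadway hypothesis
    f(x) = a0 +/- width/2 + a2*x**2 contains the fewest measurements
    (roadside points inside the corridor indicate a wrong hypothesis)."""
    best_a2, best_count = None, None
    for a2 in candidates:
        count = 0
        for x, y in measurements:
            f_right = a0 - width / 2.0 + a2 * x * x
            f_left = a0 + width / 2.0 + a2 * x * x
            if f_right < y < f_left:    # measurement inside the corridor
                count += 1
        if best_count is None or count < best_count:
            best_a2, best_count = a2, count
    return best_a2

# boundary points of a road curving with a2 = 0.002: the matching
# hypothesis leaves no points inside its corridor
pts = [(x, 3.5 + 0.002 * x * x) for x in range(5, 40, 5)] \
    + [(x, -3.5 + 0.002 * x * x) for x in range(5, 40, 5)]
print(estimate_curvature(pts, a0=0.0, width=7.0, candidates=[-0.002, 0.0, 0.002]))
```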
5
Conclusions
Algorithms for roadway detection and lane detection using laserscanner data have been presented. The roadway is described mathematically by a parabolic model whose state variables are the width of the roadway, the position of the ego-vehicle relative to the boundaries of the roadway and the curvature of the roadway. These parameters are determined by a combination of reflection post tracking and processing of the original data at short range. Lane detection is realized by evaluating the differing reflectivity of lane markings and asphalt. A histogram approach is used to eliminate scattered disturbances caused by dirt or small objects on the road surface. First results revealed a very reliable detection of the ego-lane and an accurate determination of its width. The results with respect to ego-localization within the lane are also very promising. However, as no ground truth data were available during the tests, no definite statement regarding absolute accuracy is possible at present. Future work focuses on an adaptation and optimization of the algorithms for the prototype laserscanner setup including the mirrors. The algorithms for roadway detection, reflection post tracking and lane detection will be integrated into a fusion system in order to enable a consistent road recognition. Moreover, it will be investigated whether the lane detection can be improved towards a detection and modelling of bends, as required for applications on country roads. For this purpose, extensive tests on different roads will be carried out to generate a reliable statistical basis for performance and reliability assessments.
6
References
[1] Bertozzi, M.; Broggi, A.; Fascioli, A.: An Extension to the Inverse Perspective Mapping to Handle Non-flat Roads. Proceedings IEEE Intelligent Vehicles Symposium, Stuttgart, October 1998, pp. 305-310.
[2] Bertozzi, M.; Broggi, A.: GOLD: A Parallel Real-Time Stereo Vision System for Generic Obstacle and Lane Detection. IEEE Transactions on Image Processing 7 (1998), No. 1, pp. 62-81.
[3] Taylor, C. J.; Malik, J.; Weber, J.: A Real-Time Approach to Stereopsis and Lane-Finding. Proceedings IEEE Intelligent Vehicles Symposium, Tokyo, September 1996, pp. 207-213.
[4] Apostoloff, N.; Zelinsky, A.: Robust Vision Based Lane Tracking Using Multiple Cues and Particle Filtering. Proceedings IEEE Intelligent Vehicles Symposium, Columbus, Ohio, USA, June 9-11, 2003.
[5] Fuerstenberg, K.; Dietmayer, K.; Lages, U.: Laserscanner Innovations for Detection of Obstacles on the Road. In: Krueger, S.; Gessner, W. (Eds.): Advanced Microsystems for Automotive Applications. Yearbook 2002, Springer Verlag, 2002.
[6] Streller, D.; Dietmayer, K.: Object Tracking and Classification Using a Multiple Hypothesis Approach. Proceedings of the IEEE Intelligent Vehicles Symposium, IV 2004, June 15-17, 2004, Parma, Italy.
[7] Fuerstenberg, K.; Dietmayer, K.: Object Tracking and Classification for Multiple Active Safety and Comfort Applications Using a Laserscanner. Proceedings of the IEEE Intelligent Vehicles Symposium, IV 2004, June 15-17, 2004, Parma, Italy.
[8] Sparbert, J.; Dietmayer, K.; Streller, D.: Lane Detection and Street Type Classification Using Laser Range Images. Proceedings of ITSC 2001, IEEE 4th International Conference on Intelligent Transport Systems, Oakland, USA.
Prof. Dr.-Ing. Klaus Dietmayer, Dipl.-Ing. Nico Kämpchen University of Ulm Department of Measurement, Control and Microtechnology Albert-Einstein-Allee 41 89075 Ulm, Germany
[email protected] [email protected] Dipl.-Ing. Kay Fürstenberg, Dipl.-Ing. Joerg Kibbel, Winfried Justus, Dr. Roland Schulz IBEO Automobile Sensor GmbH Fahrenkroen 125 22179 Hamburg, Germany
[email protected] Keywords:
roadway detection, lane detection, multilayer laserscanner, LIDAR
Pedestrian Safety Based on Laserscanner Data

K. Fürstenberg, IBEO Automobile Sensor GmbH

Abstract

This paper presents a prototype system, tested in a passenger car, for the early detection of a car-to-pedestrian accident. For environment sensing, a high-resolution multilayer laserscanner with a horizontal field of view of about 120 degrees in front of the vehicle is used, which covers more than 70% of all car-to-pedestrian accidents. Taking into account that most pedestrians are moving (94%), 2/3 of all car-to-pedestrian accidents are addressed by the presented pedestrian protection approach. A region of no escape (RONE) is introduced. The RONE describes an area in front of the car where the car-to-pedestrian accident is unavoidable if the pedestrian is detected inside this area.
1 Introduction
More than 436,000 pedestrians are injured and about 39,000 are killed in road traffic worldwide. Pedestrians account for 24% of all traffic fatalities worldwide [1]. Pedestrian protection systems will become an essential safety function in future vehicles, as governments will force automotive manufacturers to improve pedestrian safety within the next few years [2]. For such systems the ability to recognise pedestrians is mandatory. This includes the detection, tracking and classification of these vulnerable road users. The paper starts with a detailed interpretation of the accident analysis of car-to-pedestrian accidents. The detection of endangered pedestrians is followed by the introduction of a new strategy for a reliable and robust pedestrian classification to be used in pedestrian protection systems. A region of no escape (RONE) is introduced, which describes an area in front of the car where the car-to-pedestrian accident is unavoidable if the pedestrian is detected inside this area, as illustrated in figure 1.
Safety
Fig. 1. Region of no escape (RONE) assuming a non-moving pedestrian

2 Accidentology
A detailed accident analysis was carried out in [3] based on data from 663 car-to-pedestrian accidents in the city and district of Hanover, each characterized by 149 details.
Fig. 2. Distribution of the point of first contact of the pedestrian in a car-to-pedestrian accident [3]
Most of the pedestrians involved in a car-to-pedestrian accident have their first contact with the car's frontal region. This usually means that the legs make contact with the front bumper and, after 50 to 150ms, the body and especially the head hit the bonnet or the windscreen of the car.
Fig. 3. Cumulative probability of the severity of the injuries with respect to the car's velocity. The dashed curve shows the total percentage of involved pedestrians at a certain car speed. The left curve displays the lightly injured pedestrians, the middle curve shows the seriously injured pedestrians and the right curve marks the fatal accidents [3]
The severity of the pedestrian's injuries is strongly related to the speed of the involved car, as shown in figure 3. The corresponding distribution of the collision speed of the involved cars is displayed in figure 4. Therefore, the considered speed range of the involved car should be at least 0 to 60km/h, where about 96% of all car-to-pedestrian accidents and 90% of the car-to-pedestrian accidents with serious injuries happen.
Fig. 4. Distribution of the collision speed of the car in a car-to-pedestrian accident [3]
Figure 5 displays the different types of movement of the involved pedestrian. An essential result is that 94% of the pedestrians are moving before the car-to-pedestrian accident occurs.
Fig. 5. Distribution of the different types of movement of the pedestrian in a car-to-pedestrian accident [3]
Fig. 6. Probability of a moving pedestrian involved in an accident with the frontal region of the car
The accident analysis characterizes the car-to-pedestrian accident mainly as a front crash (figure 2) with a moving pedestrian (figure 5); together this covers 2/3 of all car-to-pedestrian accidents, as shown in figure 6.
3 Pedestrian Safety
Pedestrian safety is one of the most ambitious challenges in the active safety area.
3.1 Related work
There are quite a few publications dealing with pedestrian detection and classification for pedestrian safety systems. [6] and [7] describe a stereo-vision-based detection approach carried out during the EC-funded research project PROTECTOR. A sensor coverage area is defined in which the capability to detect the pedestrian is required, as shown in figure 7. The HMI provides the driver with an acoustic warning in case of a possibly dangerous situation with a pedestrian. The warning is given if the pedestrian and the vehicle are on a collision course. Outside the sensor coverage area, the detection capability is considered optional in the sense that the system is not rewarded or penalized for correct, false or missing detections.
Fig. 7. Sensor coverage area (detection area) of a video-based pedestrian safety system, used in the EC projects PROTECTOR (left) [6] and SAVE-U (right) [7]
A sensor coverage area far away from the vehicle's front, as described in [6] and [7], introduces a high number of false alarms. This is caused by the fact that some pedestrians who are detected in the sensor coverage area could still escape the impending crash in the area of 0 to 5m / 0 to 10m in front of the vehicle. This strategy is therefore fine for warning, but not suitable to trigger irreversible actuators, like an active hood.
3.2 Pedestrian Safety based on Laserscanners
The strategy to classify pedestrians is based on the following idea: an object that looks like a pedestrian with respect to its outline is classified as a pedestrian only if the object is presently moving or has moved during the period of tracking in the past [4]. This is a weak assumption, as 94% of the pedestrians involved in a car-to-pedestrian accident are walking or running [3]. Otherwise the object will be detected as a non-moving obstacle. As long as a small obstacle is not moving towards the road, there is no hazard implied, neither for the vehicle nor for the object itself. In order to classify a pedestrian more robustly, the typical movement of its legs in the range image sequences and the history of the tracked object with respect to previous classification results are used as additional criteria for a reliable classification [5].
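As an illustration only, the outline-plus-motion rule described above can be sketched as a simple decision function; the thresholds, field names and the voting criterion below are hypothetical, not taken from the paper:

```python
def classify(track):
    """Label a tracked object following the outline-plus-motion rule;
    all thresholds and field names are illustrative only."""
    looks_like_pedestrian = track["outline_width_m"] < 1.0
    has_moved = track["max_speed_mps"] > 0.5        # moving now or in the past
    leg_motion = track.get("leg_motion_score", 0.0) > 0.5
    prior = track.get("pedestrian_votes", 0) >= 3   # history of past labels
    if looks_like_pedestrian and (has_moved or leg_motion or prior):
        return "pedestrian"
    return "obstacle"

print(classify({"outline_width_m": 0.6, "max_speed_mps": 1.2}))  # pedestrian
print(classify({"outline_width_m": 0.6, "max_speed_mps": 0.0}))  # obstacle
```

A narrow, never-moving object is deliberately labelled an obstacle, matching the "weak assumption" that endangered pedestrians are almost always in motion.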
3.3 Region of no escape - RONE
If the pedestrian is detected in a well-defined region in front of the car – the so-called region of no escape (RONE) – the car-to-pedestrian accident is not avoidable anymore. The dimensions of the RONE depend on the absolute velocities of the car and the pedestrian and their maximum acceleration capabilities. Every pedestrian colliding with the car's frontal region will enter this RONE at a certain time before the crash. Therefore, the time to collision (up to 300ms) and the point of first contact can be estimated with a high confidence level if the pedestrian is within this RONE. The dimensions of the RONE are determined under consideration of the maximum acceleration capabilities of the ego vehicle and the pedestrian. The maximum acceleration of a passenger car in urban environments is comparatively small. The potential initial acceleration of a pedestrian has been evaluated in more detail. Therefore, experiments were performed in order to determine the potential acceleration of a pedestrian. In the experimental setup a pedestrian is standing in front of the car trying to escape as fast as possible, as shown in figure 1. The time needed to jump the distance (bego/2 ≈ 0.8m, half the width of a passenger car) from a central position in front of the vehicle to one side was measured for 10 different persons (students) with different escape strategies, such as forward, backward, running or jumping.
Fig. 8. 10 different persons tried to escape the endangered region in front of a passenger car as fast as possible
According to the escape times displayed in figure 8, the minimum escape time is estimated to be more than 0.4s in every case. Thus, the maximum initial acceleration of a pedestrian can be estimated from s = a·t²/2 as

a_P,max = 2·(b_ego/2)/t_min² = 2·0.8m/(0.4s)² = 10m/s²   (1)
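The estimate in (1) is plain constant-acceleration kinematics: covering half the vehicle width, s = bego/2 ≈ 0.8m, within the minimum observed escape time of 0.4s requires a = 2s/t². A quick numerical check with the values from the text:

```python
def max_initial_acceleration(distance_m, time_s):
    """Constant-acceleration bound: s = a*t^2/2  =>  a = 2*s/t^2."""
    return 2.0 * distance_m / time_s ** 2

# Half the passenger-car width and the minimum escape time from the experiments
print(round(max_initial_acceleration(0.8, 0.4), 2))   # 10.0 m/s^2
```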
The most relevant scenario is a walking pedestrian crossing the road in front of the vehicle, as displayed in figure 9. If the pedestrian is endangered, he or she will enter the RONE at a certain time and the vehicle-pedestrian accident is no longer avoidable.
Fig. 9. Region of no escape (RONE) assuming a walking pedestrian
The tip of the RONE defines the position of a pedestrian at which the maximum remaining time to contact (TTC) within the RONE is achieved. The following equation determines the maximum remaining time to contact (TTC) for a moving pedestrian, as illustrated in figure 9: (2)
Under the following assumptions:
(3)
the remaining time, if the pedestrian is located at the tip of the triangle, can be determined as (4)
The lateral position of the tip of the RONE is determined as (5)
Thus, the longitudinal distance of the tip of the RONE, i.e. the distance between the endangered pedestrian and the vehicle, can be determined by (6)
with
Typical distances of the tip of the RONE at different vehicle speeds are
(8)
Every pedestrian colliding with the car's frontal region will enter the RONE at a certain time before the crash, which can be monitored using laserscanner data. The determination of the RONE parameters - in addition to the pedestrian classification algorithms - provides a strategy to predict an impending car-to-pedestrian accident with a very high reliability.
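As a rough numerical sketch only (the paper's equations (2) to (8) are not reproduced here), the longitudinal distance of the RONE tip can be approximated by the distance the vehicle covers within the quoted maximum time to contact of 300ms:

```python
def rone_tip_distance(v_vehicle_kmh, ttc_s=0.3):
    """Distance the vehicle covers within the remaining time to contact --
    a crude stand-in for the longitudinal distance of the RONE tip."""
    return v_vehicle_kmh / 3.6 * ttc_s

for v in (30, 50, 60):
    print(v, round(rone_tip_distance(v), 1))   # 2.5m, 4.2m and 5.0m respectively
```

This ignores the lateral geometry of the RONE triangle and the pedestrian's own motion; it only illustrates the order of magnitude of the detection range needed.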
4 Results
Results were obtained on recorded real data from a laserscanner integrated in a passenger car, with a total driven distance of about 10,000km on different road types. This database grows every week, as new test drives are added continuously. The first results show a false alarm rate of 0.7 per 100km for the pedestrian protection application, without using the analysis of the pedestrian legs. Introducing the pedestrian leg analysis, it can be assumed that false alarm rates of 1 per 10^8 km are possible. This work is ongoing at IBEO Automobile Sensor GmbH and will be finished in the coming months.
5 Conclusions
The described system offers the early detection of 2/3 of all car-to-pedestrian accident scenarios up to 300ms before the crash. This allows the introduction of reversible actuators, such as an active hood, in order to mitigate the consequences of a car-to-pedestrian accident.
Well-known contact sensors in the front bumper are functional for vehicles with a distinctive bonnet with a low vertical gradient, where body and especially head contact occurs about 100ms later than the first contact with the legs. However, for vans and trucks with a vertical front this strategy is not suitable at all.
Fig. 10. Motion of a pedestrian hitting the bonnet of a car [2]
The functionality of the pedestrian safety system can be demonstrated in reality in the concept car of IBEO, which is planned to be shown at the AMAA 2005.
References

[1] http://www.unece.org/trans/roadsafe/rs3accibua.html
[2] www.eevc.org
[3] Heinrich, T.: Bewertung von technischen Maßnahmen zum Fußgängerschutz am Kraftfahrzeug. Technische Universität Berlin, Studienarbeit, Berlin, 2003.
[4] Fuerstenberg, Kay Ch.; Dietmayer, Klaus C.J.; Willhoeft, Volker: Pedestrian Recognition in Urban Traffic Using a Vehicle-Based Multilayer Laserscanner. Proceedings of IV 2002, IEEE Intelligent Vehicles Symposium, June 2002, Versailles, France.
[5] Fuerstenberg, Kay Ch.; Dietmayer, Klaus C.J.: Object Tracking and Classification for Multiple Active Safety and Comfort Applications Using a Multilayer Laserscanner. Proceedings of IV 2004, IEEE Intelligent Vehicles Symposium, June 2004, Parma, Italy.
[6] Gavrila, Dariu; Giebel, Jan: Vision-Based Pedestrian Detection: The PROTECTOR System. Proceedings of IV 2004, IEEE Intelligent Vehicles Symposium, June 2004, Parma, Italy.
[7] Strategies in Terms of Pedestrian Protection. Deliverable 6 of the EC project SAVE-U, http://www.save-u.org/file_html/library.htm.
Kay Ch. Fürstenberg Research Management IBEO Automobile Sensor GmbH Fahrenkrön 125, 22179 Hamburg, Germany
[email protected]
Model-Based Digital Implementation of Automotive Grade Gyro for High Stability

T. Kvisterøy, N. Hedenstierna, SensoNor AS
G. Andersson and P. Pelin, Imego AB

Abstract

SensoNor has, together with Imego, over the last three years developed and tested gyro concepts with a bias-stability in the range of a few deg/h using SensoNor's existing roll-over vibratory gyro die. This has been made possible by improving the LNAs and by implementing a model-based concept where the feedback loops are realised in the digital domain. The digital part includes 5th order sigma-delta AD converters and 5th order feedback filters to achieve an optimized trade-off between complexity and signal-to-noise ratio. The approach allows a low-cost MEMS gyro die measuring 3 by 2.5mm to achieve close to navigational performance. The "Allan Variance" method as well as the unambiguous measurement of the earth rotation have been used to prove the bias-stability. The novel digital algorithm-based approach is a significant leap, enabling flexible high-performance solutions to be built upon fairly conventional MEMS structures.
1 Introduction
Sport-utility vehicles (SUVs) have high centres of gravity, which makes them prone to tipping over as a result of a sliding action or a collision. This risk may be lowered by the use of electronic stability programs (ESP) that brake individual wheels to help prevent spinning or plowing out. Due to the high concentration of SUVs in the US and, at the same time, the low number of US cars equipped with ESP, the number of systems is expected to grow significantly in the years to come. Up to now the gyros needed for the ESP systems have been quite expensive, due to the implementation on a moderate number of cars and due to such gyros to some extent being historically adapted versions of solutions made for high-performance aerospace applications. The same goes for gyros used to trigger side curtain airbags in the event of a rollover, to protect the head and to keep passengers from being thrown out of the vehicle.
Automotive sensor approaches often mean a highly customized solution where flexibility and "not needed" performance are traded for cost savings. However, since digital semiconductor solutions develop rapidly, giving more and more performance for less money, digital approaches enabling flexibility and improved performance become attractive. In particular, the use of FPGAs during development and the use of software routines as part of the finished product will make development cycles and application adjustments faster.
Fig. 1. Gyro applications
Figure 1 shows applications for gyros as a function of bias-stability and scale-factor-stability. In the lowest-performance corner we find the rollover application. Somewhat better performance is needed for ESP. Over the next five to ten years it is anticipated that the use of inertia measurement units (IMUs) will become the main path to serve the car with inertia signals. Such signals will then be distributed to the relevant electronic control units (ECUs) and used in parallel and fused with signals from other sensors ("sensor fusion") to enable complex integrated control functions. Even if the needed bias-stability today is in the range of several hundred deg/h, this fusion trend will create a need for greater dynamic ranges and stability requirements an order of magnitude better. SAR10 [1] is an example of a vibratory gyro for rollover applications based on a pure MEMS platform including wafer-scale chip packaging. Due to the "Butterfly Structure", where two coupled masses vibrate in opposite phases and two fully symmetric pairs of electrodes are used for electrostatic drive and capacitive sense of the excitation mode and the detection mode, a high mechanical and electrical common mode rejection is achieved. The fact that the intrinsic performance of the SAR10 gyro die is much better than what is utilized in the rollover implementation, together with the ever better cost-performance ratio of digital electronics, motivated us for the challenging task of implementing the complex feedback and demodulation functions of an automotive gyro on a digital platform.
2 The Sensor Model
The operational principle of the sensing die is based on tuning the detection frequency to coincide with the excitation frequency using a DC bias voltage ("electrostatic spring"). Since the vibratory gyro is based on symmetry and balancing, the error signals will be due to asymmetry and imbalance. A behavioural function model has been established for all parameters to allow for simulations of different implementation concepts. In figure 2 a block model is shown for the SAR10 sensing die.
Fig. 2. Block model description of the "Butterfly Gyro" die
A full description of the model and the sensor behaviour is outside the scope of this paper; however, a few of the most significant parameters are defined below. ∆CE and ∆CD represent the excitation and detection capacitances, respectively.
IG = gyroscopic torque
KE = torque-exc-voltage
KD = torque-servo-voltage
Sθ = output capacitance (angle)
Sψ = output capacitance (angle)
kED = asymmetric spring forces
ζED = asymmetric damping forces
IED = asymmetric mass
KED = asymmetric excitation
Sθψ = asymmetric detection
3 System Concept
An electronic demonstrator system has been realised to prove the performance of the gyro die, to test the versatility of the concept and to enable the fine tuning of the algorithm structure. The system includes an "analogue" ASIC (named µSIC) to convert the sensing signals (capacitances) to digital high-frequency bit-streams, one for the excitation and one for the detection. The bit-streams are fed to an FPGA that contains software-configurable control loops for the excitation and the detection feedback. The FPGA also contains the decimation filters that reconstruct the output bit-streams to variable-length words (flexible depending on application) and calculates a "raw" rate signal (not compensated). The ASIC also includes two reconstruction filters needed to "smooth" the bit-stream feedback signals. Also included in the ASIC is a temperature sensor (PTAT) that provides the input to temperature correction algorithms (not used for the demonstration system). As for SAR10, the sensing interface uses a switch network (250kHz) to enable the same electrodes to be used for drive (electrostatic) as for sensing (capacitive). See figure 3. In a commercial product the FPGA will be substituted with a DSP (digital signal processor) configurable solution or a fully "hardwired" custom ASIC. The main advantages of the digital concept compared to the conventional analogue solution with a mixer are:
 The gyro operates only in the oscillation frequency region and is therefore more immune to 1/f noise than AD converters in the base band.
 No (analogue) mixers are required. Instead the excitation and detection signals are calculated in the digital domain, which gives better amplitude and phase resolution and control.
 The gyro becomes easily reconfigurable in software or hardware implementations; this gives almost unlimited flexibility (range, bandwidth, resolution and noise) during both the development and the commercial phase of a program. This is very favourable with respect to simulation and verification time.
Fig. 3. Schematic for the digital concept
In principle the concept is ideal for implementing adaptive techniques that track variations in the sensor parameter characteristics and thereby allow for compensation (Kalman filters). However, such adaptive techniques are not pursued to any great extent in this first implementation and are not needed for the SAR10 sensing die used for ESP requirements. The implementation of the algorithms relies on the good understanding of the variations of the model parameters, and their causes, that has been acquired during the development and manufacture of the SAR10 sensor.
Fig. 4. Noise Sources

3.1 Noise Shaping Converters
Figure 4 shows the noise sources for the implemented system.
A 1-bit AD converter with a noise shaping feedback loop and over-sampling (sigma-delta converter) will "move" the noise from the base band up in frequency. When the useful signal is reconstructed by low-pass filtering, this noise is removed. The structure of such converters is shown in figure 5. The resulting spectral noise density is shown in figure 6.
Fig. 5. Structure of a 1-bit Sigma-Delta AD converter

Fig. 6. The noise is no longer white using a noise shaping converter
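To illustrate the principle (with a first-order modulator, not the 5th order design used here), the following minimal simulation shows that low-pass filtering the 1-bit output recovers the input with small in-band error, because the quantization noise has been pushed up in frequency:

```python
import math

def sdm1(signal):
    """First-order 1-bit sigma-delta modulator: integrate the quantization
    error and output the sign of the integrator state."""
    integ, out = 0.0, []
    for x in signal:
        integ += x - (out[-1] if out else 0.0)   # feed back previous output bit
        out.append(1.0 if integ >= 0 else -1.0)
    return out

# Heavily over-sampled slow sine as input
n = 4096
x = [0.5 * math.sin(2 * math.pi * i / 128) for i in range(n)]
y = sdm1(x)

# Crude decimation by block averaging (a stand-in for a proper low-pass filter)
dec = [sum(y[i:i + 32]) / 32 for i in range(0, n, 32)]
ref = [sum(x[i:i + 32]) / 32 for i in range(0, n, 32)]
err = max(abs(a - b) for a, b in zip(dec, ref))
print(err < 0.2)   # the filtered bit-stream tracks the input closely
```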
Textbooks give the signal-to-noise ratio as a function of the over-sampling rate, with the complexity (order) as a parameter, as shown in figure 7.
3.2 Analogue ASIC
The ASIC, designed and manufactured in a 0.35µm foundry CMOS process, includes two identical signal paths, one for excitation and one for detection. Each path consists of a charge amplifier (LNA) and a 5th order sigma-delta modulator (SDM). The charge amplifiers are fully differential charge integrators using the correlated double sampling technique. The differential amplifiers are designed to have an equivalent input noise of 2nV/√Hz and a gain-bandwidth product of 30MHz. The gain of the charge amplifier is determined by integration capacitors, which can be adjusted in 0.7pF steps. The SDMs are designed to have an equivalent input noise level lower than 50nV/√Hz (in the frequency range from 2kHz to 15kHz).
Fig. 7. Signal-to-noise ratio as a function of over-sampling rate (OSR)
The SAR10 chip-scale package is based on "buried" electrical crossings (pn-isolated conductors buried in the substrate) "under" the area used for anodic glass-silicon bonding. This approach secures high-reliability vacuum cavities with a proven field track record, manufactured in an established mature process. A minor disadvantage is the quite large parasitic capacitances due to substrate coupling. For electrode capacitances of 4.8pF the FS (full scale) signal levels are well below 1pF and the parasitics are in the range of 5pF. A design strategy for the LNA (low noise amplifier) was to minimize all noise contributions to the extent that the thermal noise (kT/C) from the gyro die capacitors becomes dominant. Another design constraint is that the diodes (crossings) limit the voltage levels that can be used. The measured output noise from the LNA was 3.3µV/√Hz, compared to the calculated theoretical thermal noise of 2.0µV/√Hz. Figure 8 shows the principal detection circuitry.
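For reference, the kT/C noise floor mentioned above can be computed directly; note the paper quotes noise densities (per √Hz), whereas kT/C gives the total rms noise across a capacitor (room temperature assumed here):

```python
import math

k_B, T = 1.380649e-23, 300.0       # Boltzmann constant (J/K), temperature (K)
C = 4.8e-12                        # electrode capacitance from the text (F)
v_rms = math.sqrt(k_B * T / C)     # total rms kT/C thermal noise voltage
print(round(v_rms * 1e6, 1))       # ~29.4 µV rms across 4.8pF
```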
Fig. 8. Principal detection circuitry (LNA)
As a "compromise" between complexity and performance we decided to use 5th order SD-converters at 2MHz sampling frequency (derived from a 4MHz crystal oscillator). This results in an OSR of 64 at 15kHz bandwidth. As shown in figure 9, the final (SDM output) SNR is then 112dB at 15kHz. The outputs are 2MHz 1-bit data streams with voltage levels according to the LVDS standard. The analogue reconstruction filters are four-pole low-pass with cut-off at 40kHz.
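These numbers can be cross-checked against the textbook peak SQNR of an ideal order-L 1-bit sigma-delta modulator; the ideal value for L = 5 at OSR 64 is far above the measured 112dB, which is set by thermal noise rather than quantization noise:

```python
import math

def sdm_peak_sqnr_db(order, osr, bits=1):
    """Textbook peak SQNR of an ideal order-L sigma-delta modulator."""
    return (6.02 * bits + 1.76
            + 10 * math.log10((2 * order + 1) / math.pi ** (2 * order))
            + (2 * order + 1) * 10 * math.log10(osr))

fs, bw = 2e6, 15e3                 # sampling frequency and signal bandwidth
osr = fs / (2 * bw)                # 66.7, close to the quoted OSR of 64
print(round(osr, 1))
print(round(sdm_peak_sqnr_db(5, 64)))   # ideal ~167dB; the measured 112dB is
                                        # limited by circuit noise instead
```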
Fig. 9. Implemented 5th order sigma-delta converter (SDM)
3.3 Digital Signal Processing Algorithms
The digital signal processing consists of three main blocks: the excitation feedback, the detection feedback and the angular rate signal processing. All blocks have in common that they work only on the necessary bit width, and parts of the algorithms even operate directly on the sigma-delta bit-stream. Both control loops contain SDMs, which output 1-bit data streams back to the ASIC reconstruction filters.
Fig. 10. The excitation loop filter
The excitation loop maintains the oscillation at the excitation mode of the gyro and at a constant, controlled amplitude. The main building blocks of the excitation loop filter, as shown in figure 10, are:
 The front-end filter, HFEx, maintains the loop phase at 360deg at the excitation frequency and keeps the amplitude variations within the specifications. It also obtains a low loop gain and a loop phase of 180deg at the "harmful" mode 4 (see [1]). It blocks DC and attenuates noise from the SDM to secure sufficient dynamic range for the AGC (automatic gain control). The filter takes advantage of the properties of the input signal (1-bit stream), resulting in a multiplier-free implementation. It includes the measures to meet all variations described by the behavioural model, and no gyro-specific parameter has to be set. The frequency response of the filter is a complex low-pass type with advanced properties, acting as a phase equalizer.
 The amplitude estimator, Aest, estimates the amplitude of the excitation oscillation. It is a combination of LP (low-pass) filters, decimators and a rectifier. The block is designed such that, in combination with HFEx and the filters implemented on the ASIC, the error is minimized.
 The AGC is a simple PID controller. Since the effective gain of the gyro is quite variable, optimum performance requires a gain parameter to be set.
 The SDM is of 5th order and in principle identical to the one implemented on the ASIC.
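As an illustration of the AGC building block, a textbook PID controller acting on the amplitude error can be sketched as follows; all gains, the set-point and the toy plant model are illustrative, not the product values:

```python
class PIDAGC:
    """Minimal PID automatic gain control acting on the amplitude error."""
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integ, self.prev_err = 0.0, 0.0

    def update(self, amplitude_estimate):
        err = self.setpoint - amplitude_estimate
        self.integ += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integ + self.kd * deriv

# Toy loop: the oscillation amplitude lags the commanded drive gain
agc = PIDAGC(kp=0.5, ki=2.0, kd=0.0, setpoint=1.0, dt=1e-3)
amp = 0.0
for _ in range(5000):
    drive = agc.update(amp)
    amp += (drive - amp) * 0.01        # crude first-order plant model
print(abs(amp - 1.0) < 0.05)           # amplitude settles at the set-point
```

The integral term is what removes the steady-state amplitude error when the effective gyro gain varies, which is why a plain proportional controller would not suffice here.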
The detection loop creates the force feedback signal to counteract the Coriolis force. The filter reduces the high natural Q-factor of the detection mode and increases the signal bandwidth of the gyro. The main building blocks of the detection loop filter, as shown in figure 11, are:
 The front-end filter, HFEd, and the variable gain, G, maintain the closed-loop phase response at the excitation frequency at a fixed value and keep amplitude variations to a minimum. The filter controls the signal bandwidth and blocks DC. It also attenuates noise from the SDM to give sufficient dynamic range. It takes advantage of the properties of the input signal (1-bit stream), resulting in a multiplier-free implementation. The frequency response of the filter is a complex low-pass type with advanced properties, acting as a phase equalizer.
 The SDM is of 5th order and in principle identical to the one implemented on the ASIC.
A separate algorithm (not shown in figure 11) adjusts the duty-cycle of the switched voltage signals to the gyro die and thereby controls the DC voltage used for electrostatic tuning of the detection frequency.
Fig. 11. The detection loop filter
The down converter takes the rate signal, modulated on the excitation frequency, and converts it to base band. The input signals are the 1-bit excitation signal from the ASIC, the 1-bit force feedback signal and an internal signal from the amplitude estimator, Aest, in the excitation loop. The output is a multi-bit angular rate signal. The main building blocks of the down converter, as shown in figure 12, are:
Fig. 12. The down converter
 The PLL (phase-locked loop) recreates a harmonic signal from the bit-stream excitation signal. It uses the Fxest signal to set the free-running frequency of the internal numerical oscillator. As the PLL uses the frequency estimate, it can be designed to be quite narrow-banded, which is advantageous in terms of phase performance and complexity. No gyro parameters have to be set.
 A variable phase delay, θ, and a mixer are used to down-convert the in-phase part of the detection signal. The actual value of θ must be calibrated.
 The decimator block reduces the sample rate of the gyro signal. It consists of a number of LP filters and sample rate converters.
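The down-conversion step above can be sketched in a few lines: mix the modulated detection signal with the recovered carrier (shifted by the calibrated phase θ), then low-pass filter to the base band. The signal parameters here are arbitrary illustrations, not the system's actual frequencies:

```python
import math

def down_convert(detection, phase_inc, theta, n_avg):
    """Mix the modulated detection signal with the recovered carrier (shifted
    by the calibrated phase theta), then low-pass by block averaging. The
    factor 2 compensates the 1/2 from the cos*cos product."""
    mixed = [d * math.cos(i * phase_inc + theta) for i, d in enumerate(detection)]
    return [2 * sum(mixed[i:i + n_avg]) / n_avg
            for i in range(0, len(mixed), n_avg)]

phase_inc = 2 * math.pi / 50                    # carrier: 50 samples per period
sig = [0.3 * math.cos(i * phase_inc) for i in range(1000)]   # constant rate 0.3
rate = down_convert(sig, phase_inc, theta=0.0, n_avg=100)
print(all(abs(r - 0.3) < 0.01 for r in rate))   # base-band rate recovered
```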
3.4 Practical Implementation
At present the demonstration system is implemented in the FPGA using the Xilinx System Generator. A commercial product will be implemented on a DSP platform or by custom digital hardware (ASIC) according to application needs (flexibility). A key issue when translating the originally designed algorithms to digital hardware design is the word length. Word lengths will be selected just long enough to give sufficient performance, while keeping complexity at a minimum.
4 Measurements
A number of measurements have been performed to verify the performance of the ASIC as well as the consistency of the digital signal processing. In this paper we will not present such detailed measurements; however, a few significant results are already mentioned in chapter 3.2. The focus will be on some astonishing results with respect to bias-stability. These results must be seen in relation to the requirements for state-of-the-art ESP systems. Such systems typically require a bias-stability better than 1deg/s including all errors (better for short intervals). This converts to 3600deg/h. Noise and bias-stability are related through the integration time used for the measurements. For short integration times we define the result as noise, for longer integration times as bias-stability (or drift). Figure 13 shows the measured output noise spectral density, representing a value of 0.0045deg/s/√Hz. Figure 14 shows the step response for a 1deg/s step.
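As a side note, the quoted noise density converts directly to an angle random walk figure, a common way to state gyro noise:

```python
density = 0.0045        # measured noise density at zero rate (deg/s/sqrt(Hz))
arw = density * 60      # deg/s/sqrt(Hz) equals deg/sqrt(s); sqrt(1h) = 60 sqrt(s)
print(round(arw, 2))    # 0.27 deg/sqrt(h) angle random walk
```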
Fig. 13. Measured power spectral density at zero angular rate
Fig. 14. Measured step response from zero to 1deg/s to zero
An ad hoc approach to prove gyro bias-stability is to measure the earth rotation (15deg/h). The gyro was put on a rate table with its sensitive axis oriented parallel to the earth's surface. The rate table was then rotated at a speed of three turns per minute. What shows up is a sinusoidal rate signal with a period of 20 seconds, where the peak rate amplitude corresponds to half the earth rotation rate at our latitude (see figure 15). The somewhat "distorted" shape of the signal has been identified to be caused by uneven speed of the rate table. To better understand how small this rate is, one can compare it with the speed of the hour hand of a watch. The measurements show a signal that is four times lower than the rate of the hour hand of a watch. This is achieved with a low-cost automotive gyro chip measuring 3 by 2.5mm.
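The expected signal amplitude follows from the horizontal component of the earth rate, Ω·cos(latitude); the latitude below is an assumption for the test site, not stated in the paper:

```python
import math

omega_earth = 15.04                # full earth rotation rate (deg/h)
lat = math.radians(59.4)           # assumed test-site latitude (deg N)
omega_h = omega_earth * math.cos(lat)
print(round(omega_h, 1))           # ~7.7 deg/h: roughly half the full rate, and
                                   # about a quarter of a watch's 30deg/h hour hand
```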
Fig. 15. Measured earth rate with the implemented system
Fig. 16. Measured “Allan Variance” for standard SAR10 without over-mould
A more scientific way of measuring bias-stability is the "Allan Variance" method [2]. This gives a quantitative measure of how much the average value of the signal changes for a particular value of averaging time, plotted as a function of the averaging time. At short averaging times the "Allan Variance" is dominated by noise in the sensor. At some point the "Allan Variance" will have a minimum and then start to increase again due to inherent drift in the output of the sensor (the causes can be very complex and related to external natural phenomena). The minimum point is often used as a definition of bias-stability. Figure 16 shows the "Allan Variance" for a standard SAR10 rollover sensor, but in a different package (no over-mould). Figure 17 shows the "Allan Variance" for a SAR10 gyro die, however with the digital electronic implementation.
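A minimal sketch of the classical Allan variance computation (non-overlapping windows, one averaging time) is:

```python
import random

def allan_variance(rate, tau_samples):
    """Classical (non-overlapping) Allan variance of a rate signal for one
    averaging time, given as a number of samples per averaging window."""
    m = len(rate) // tau_samples
    means = [sum(rate[i * tau_samples:(i + 1) * tau_samples]) / tau_samples
             for i in range(m)]
    return sum((means[i + 1] - means[i]) ** 2
               for i in range(m - 1)) / (2 * (m - 1))

# For white noise the Allan variance falls as 1/tau; for a drifting sensor
# it would start rising again at long averaging times, as described above.
random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(200_000)]
av10, av1000 = allan_variance(noise, 10), allan_variance(noise, 1000)
print(av1000 < av10)   # longer averaging suppresses white noise
```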
Fig. 17. Measured “Allan Variance” for SAR10 gyro die in a ceramic package and with digital implementation
The standard SAR10 reads in the range just above 100 deg/h, while the implementation on the digital platform reads an astonishing 3 deg/h. Measurements were done at normal room temperature and without any temperature compensation. The gyro dies used had a quadrature offset below 100 deg/s.
5 Conclusion
A model-based gyro with all feedback filters as well as the rate demodulator in the digital domain has been realised and tested. The measurements clearly show that the gyro sensing die in the SAR10 roll-over gyro comes close to matching the performance of expensive gyros used in inertial navigation systems, and is by far capable of meeting bias-stability requirements for short and long term needs in automotive vehicle stability and safety systems. The developed ASIC meets all performance needs; however, the demonstration ASIC has some flexibility and test features that will be reduced for a commercial product. The final testing indicates that the SD-converters can be reduced to 4th order with only marginal loss of performance, which significantly reduces complexity. It has been evaluated that all digital functions can be realised in a hardwired implementation with a power consumption of less than 10 mA at 5 V. For next generation systems including IMUs, a digital approach gives a number of optional system architectures, which will result in flexible and cost optimised solutions.
Acknowledgements
The authors would like to thank D. Sandström of Imego AB for his work on ASIC design and testing. The work has been partially funded through the NFR (The Research Council of Norway) "Microgyro for navigation and stability-control" programme.
References
[1] T. Kvisterøy, N. Hedenstierna, S. Habibi, B. Nyrud, "Design and Performance of the SAR10 Rate Gyro", Advanced Microsystems for Automotive Applications 2001, Springer Verlag, pp. 189-200, 2001.
[2] W. Stockwell, "Bias Stability Measurement: Allan Variance", Crossbow Technology, Inc., http://www.xbow.
Terje Kvisterøy, Nils Hedenstierna
SensoNor, P.O. Box 196, NO-3192 Horten, Norway
[email protected]
[email protected]
Gert Andersson, Per Pelin
Imego, Arvid Hedvalls Backe 4, SE-41258 Gothenburg, Sweden
[email protected]
[email protected]
Keywords: gyro, digital, bias, stability, navigation, ESP, SDM, MEMS, algorithm, rollover, VSC
Next Generation Thermal Infrared Night Vision Systems
A. Kormos, C. Hanson, C. Buettner, L-3 Communications Infrared Products
1 Introduction
Night vision systems have been introduced on the market in the US and Japan and will finally arrive in Europe soon, where customer surveys show a high level of interest. The challenge, and also the limitation, of these first generation systems was the time to market, the price target and the packaging space allowed by the car manufacturers. Now, with first generation systems already on the market, the focus is no longer on how the system operates. Rather, the focus must be on whether there is benefit to the driver, on market acceptance in general, and on the improvements necessary to overcome some of the first generation system weaknesses. Another point of discussion is still the human machine interface (HMI). The night vision image, whether based on near infrared (NIR) or far infrared (FIR) technology, needs to be close to the driver's line of sight, bringing a new and sometimes challenging moving image to the attention of the user, and hence represents a potential distraction. This has been identified through consumer feedback from first generation system users. Next generation night vision systems should address these critical issues and try to overcome them as far as possible. A simple addition of color detail to allow easier interpretation of the image is one example; it reduces the driver's workload in matching the image/display content with the street scene in front of the vehicle. An algorithm that adjusts the image content to the speed and movement of the car is another good example, "aiming" the camera to where the driver is actually looking. Similar to the steering of the headlamps, the camera image can be steered to follow the road edges when the car is turning. Furthermore, a seamless zoom magnifies objects ahead of the vehicle, allowing better discrimination and assimilation of important details by the driver. The first system with object detection has also been announced.
Thermal imaging devices are well suited to support digital image processing algorithms that allow the identification, and even the determination of distance and trajectory, of objects such as cars and pedestrians. The Honda Motor Company has just announced the launch of a "stereo" far infrared system that
detects and tracks pedestrians, giving a selective warning for pedestrians moving in or into the car path. These kinds of innovations are accompanied by advancements in FIR sensor technology to improve dynamic scene representation in the car. Making the FIR sensor smaller and cheaper, more reliable and higher performing is the goal.
2 Night Vision Market
The first night vision system hit the road in 1999, introduced by Cadillac in the 2000 DeVille model. It was not until 2003 that the second system was launched in Japan and the US, on the Lexus LX470. This was the first active NIR system available. For this fall, the Honda Motor Company has announced the launch, on the new Legend, of a stereo far infrared system that detects and tracks pedestrians, giving a selective warning for pedestrians moving in or into the car path. Also, DaimlerChrysler has announced the first European night vision system in the new S-Class, and others will follow suit. J.D. Power studies of consumer interest in new features show night vision as number one in Europe and number two in the USA. So why did Europe take so long? There are several reasons: limited packaging space for new sensors and displays, competing night vision approaches (FIR vs. NIR), and numerous new sensor technologies, to name a few.
3 Human Machine Interface
Today's display concepts allow different options for presenting the night vision imagery to the driver:
Head-Up Display (HUD)
Display panel
While the HUD is the preferred solution for the FIR image, the constant presence of the image may lead to more eye workload or distraction. A display panel allows a detailed full format view of the image if it is located close enough to the driver's line of sight to be monitored. Nevertheless, in some instances the driver's attention may be fully focused on the traffic ahead, leading to a situation where important objects visible on the display panel may not be noticed.
4 FIR Camera Based Image Processing
Future night vision concepts will employ image processing to help the driver by highlighting objects ahead. The benefits are:
Less workload for the driver
No need to present the night vision image to the driver all the time
Detection of moving and stationary pedestrians
Determination of range to pedestrians, angular position and crossing speed based on monocular image processing
Time to contact for pedestrians on a collision path
Coping with real-life conditions (walking, running, carrying bags, pushing carriages, etc.)
Operation in cluttered urban environments
"Crowd" detection signal
Today's available pedestrian protection applications use input from a single FIR camera to detect and track pedestrians and to identify the danger of potential collisions. The system detects and tracks stationary or moving pedestrians in the vehicle headway, as well as pedestrians walking into the vehicle headway. The system measures target range, angular position and lateral velocity, and also calculates the host vehicle path. The software then determines whether the pedestrian is on a collision path with the vehicle and can issue warnings to the driver accordingly. The pedestrian detection software functions reliably even in cluttered urban conditions, and is capable of detecting pedestrians in challenging real-life conditions, including walking and running pedestrians, and cases where the contour of the person is distorted, such as when carrying parcels or shopping bags, or pushing baby carriages or trolleys with goods. In cluttered situations, e.g. when pedestrians are occluded by clutter, the system issues a "Crowd warning" signal, allowing additional cautionary measures to be taken. The pedestrian detection feature may be used as part of a warning system for drivers, or for pre-crash applications where certain measures may be taken to try to avert an imminent collision or to decrease the impact.
The image processing software also detects pedestrian crosswalks and issues a signal that can be used for speed warnings to the driver. The application uses a combination of optical flow analysis and pattern recognition techniques suited to coping with a wide variety of postures and motions. Advanced tracking is required for non-rigid objects such as pedestrians. In complex cases, such as pedestrians crossing in opposite directions and occluding each other, the system tracks the individual targets continuously through the time of occlusion and maintains a correct path prediction. The vision system compensates for turns by the host vehicle in order to predict the pedestrian path correctly. The system also relies on vehicle detection capabilities and the ability to distinguish road from non-road regions and to identify pedestrian crosswalks, for enhanced performance and for reducing false detections. The pedestrian protection application can be integrated with vehicle data such as vehicle speed for improved estimation of time to contact. It performs robustly in daylight and at night and under a variety of weather conditions. In poor visibility conditions such as heavy rain or dense fog, the system's diagnostic functions notify the driver and perform automatic shutoff.
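The range, lateral-position and time-to-contact logic described above can be illustrated with a small sketch; the data structure, field names and corridor width are illustrative assumptions, not the actual interface of the system discussed:

```python
from dataclasses import dataclass

@dataclass
class PedestrianTrack:
    """Hypothetical monocular track state (names are illustrative)."""
    range_m: float             # longitudinal distance to pedestrian
    lateral_m: float           # lateral offset from predicted vehicle path
    closing_speed_mps: float   # host speed minus target longitudinal speed
    lateral_speed_mps: float   # crossing speed (+ means moving toward path)

def time_to_contact(track):
    """Simple constant-velocity time-to-contact estimate."""
    if track.closing_speed_mps <= 0:
        return float("inf")    # not closing: no contact predicted
    return track.range_m / track.closing_speed_mps

def in_collision_path(track, half_corridor_m=1.0):
    """Is the pedestrian inside the predicted path corridor at TTC?"""
    ttc = time_to_contact(track)
    if ttc == float("inf"):
        return False
    # extrapolate lateral position to the predicted time of contact
    lateral_at_ttc = track.lateral_m - track.lateral_speed_mps * ttc
    return abs(lateral_at_ttc) <= half_corridor_m
```

A track 20 m ahead closing at 10 m/s gives a 2 s time to contact; whether a warning is issued then depends on the extrapolated lateral position at that time.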
Fig. 1. Pedestrian protection algorithms effectively detect pedestrians even in cluttered situations (Courtesy of MobilEye Technologies Limited)
The HMI concept should take advantage of existing displays like the HUD, but still allow the driver to look at the detailed FIR image when necessary or useful. One option could be to alert the driver audibly and visually to a detected pedestrian using the HUD, while highlighting the detected person on a display panel.
5 Sensor Fusion – Using Other Driver Assistance Systems to Create Synergies
The FIR camera can be used to improve the reliability of other driver assistance systems, by verifying objects and features that have been acquired by a visible camera.
Tab. 1. Comparison of visual and FIR cameras
6 Color Overlay for Better Readability of the FIR Image
Because the FIR camera produces a black and white image, the colors of signal lights and taillights are not visible. Also, in conditions of low thermal contrast, the ability to interpret road edges or road markers in the FIR image may be somewhat diminished. To overcome both of these issues, we would take advantage of an existing CMOS color camera in the visual range, or add a very low cost camera with limited performance, to enhance the displayed night vision image by superimposing selected color/visual information. L-3 Communications Infrared Products has the experience to enhance its Intelligent-Scene-Processing algorithms to effect the visible camera fusion in a way that is most attractive and useful for the driver. Both the intensity and the color of the visible information may be used to automatically and dynamically adjust key fusion parameters.
Fig. 2. FIR/visible fused imagery illustrating the addition of vehicle taillights
The result is an image that looks more natural to the driver, provides important vehicle information such as taillights, indicators and brake lights, and gives the driver the ability to better associate the FIR image with the real-world scene through the windshield. The (limited) addition of visible road markings, or even the reflections of road posts, road signs and other visible light information, may be desirable in order to provide reference information that makes it easier for the driver to match an object detected in the FIR range to its position in the real world. However, our experience tells us that the superimposed visible information needs to be very selective in order to avoid cluttering the image, as seen in NIR systems. Too much low-interest white light information would compete for the driver's attention, reducing the ability to quickly pick out high interest FIR objects from the display. Especially the blooming of oncoming cars should be suppressed to reduce driver workload.
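One simple way to realise such a selective overlay is to blend only strongly saturated visible pixels (e.g. red taillights) onto the FIR image, so that low-interest white light is suppressed automatically. The sketch below assumes pre-registered images and illustrative threshold and gain values; it is not the actual Intelligent-Scene-Processing algorithm:

```python
import numpy as np

def selective_color_overlay(fir_gray, vis_rgb, sat_threshold=0.5, gain=1.0):
    """Blend only strongly coloured visible pixels onto the FIR image.
    fir_gray is HxW, vis_rgb is HxWx3, both float in [0, 1] and
    registered to each other (registration assumed done elsewhere)."""
    r, g, b = vis_rgb[..., 0], vis_rgb[..., 1], vis_rgb[..., 2]
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    # HSV-style saturation: 0 for grey/white light, ~1 for pure colour
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    # blend weight: only pixels above the saturation threshold contribute
    alpha = np.clip(gain * np.where(sat >= sat_threshold, sat, 0.0), 0.0, 1.0)
    out = np.repeat(fir_gray[..., None], 3, axis=2)
    return (1.0 - alpha[..., None]) * out + alpha[..., None] * vis_rgb
```

A pure-red taillight pixel is copied through at full strength, while white headlamp blooming (zero saturation) leaves the FIR image untouched.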
7 Seamless Zooming and Panning
The driver's attention and focus on things in front of the vehicle is not static but very much situation specific. It is also speed dependent: at lower speeds (in town) much of the focus is on things close to the car. At higher speeds the attention moves further away from the immediate proximity of the car front. Advanced night vision systems should support this by magnifying objects of
interest and following the driver's visual behavior. We have studied a seamless zoom and panning concept, where the seamless zoom operates in a way that supports the driver's attention and focus: the faster the car moves, the more the driver focuses on things further ahead of the car. A seamless zoom assists this by seamlessly magnifying the objects ahead. The parameters for this function still need to be developed and specified, but feedback from multiple drivers gives a first indication that this is indeed a desirable function. The seamless panning works in a similar way to steerable headlamps. While going through bends, the driver looks into the curve to watch for things ahead. So why should a camera system continue to look straight out? Since the seamless zoom function is done electronically, the displayed FIR content does not use the full detector. The remaining part is used to shift the displayed frame depending on vehicle movement. This may sound confusing and irritating, but in reality it is so natural and smooth that drivers do not even notice it.
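Because the zoom is electronic, both functions reduce to choosing a crop window inside the full detector frame. The sketch below illustrates the idea; all resolutions, ramps and gains are invented placeholders, since the text states the actual parameters still need to be developed:

```python
def crop_window(speed_kmh, yaw_rate_dps, full_w=640, full_h=480,
                min_zoom=1.0, max_zoom=2.0, pan_gain=4.0):
    """Illustrative seamless zoom/pan: zoom grows with speed and the
    crop window shifts laterally with yaw rate (all gains assumed).
    Returns (x0, y0, width, height) of the displayed sub-frame."""
    # zoom ramps linearly from min_zoom at standstill to max_zoom at 130 km/h
    z = min_zoom + (max_zoom - min_zoom) * min(speed_kmh, 130.0) / 130.0
    w, h = int(full_w / z), int(full_h / z)
    # pan: shift the window toward the inside of the curve, clamped so it
    # never leaves the detector area
    max_shift = (full_w - w) // 2
    shift = int(max(-max_shift, min(max_shift, pan_gain * yaw_rate_dps)))
    x0 = (full_w - w) // 2 + shift
    y0 = (full_h - h) // 2
    return x0, y0, w, h
```

At standstill the full frame is shown; at high speed a smaller centered window is displayed (magnified), and yaw rate slides that window sideways within the unused detector margin.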
Fig. 3. The red window defines the instantaneous content to be displayed for the driver (speed information in km/h is provided for reference only during the prototype evaluation phase)
8 Advanced FIR Night Vision Camera
One result of the broader and synergetic use of FIR cameras will be more demanding performance criteria, together with the ever challenging automotive expectations of lower size, weight, price, etc. The L-3 Infrared Products next generation FIR night vision camera strives to be the smallest, most versatile, most advanced and most affordable FIR camera. The image quality should be the finest in the industry under all environmental conditions and for all thermal scenes encountered while driving, both day and night. The attributes of this advanced round-the-clock next generation FIR night vision camera are summarized below:
Heated protective infrared transparent window with automatic breakage sensor
Low-cost glass infrared optics assembly manufactured by L-3 Infrared Products to the highest possible quality standards
Advanced MEMS focal plane array for best-in-class image quality
Proprietary wafer-level vacuum packaging
No visible fixed-pattern noise under any scene conditions, resulting in industry leading image quality
Impervious to damage from direct viewing of the sun
Custom ASIC signal processing core for the lowest possible power consumption, electromagnetic emissions and cost
Intelligent-Scene-Processing that emphasizes the information important and useful to the driver (e.g. pedestrians), smoothly and automatically responding to thermal scene dynamics
Environment based setting of advanced image processing parameters to enhance camera sensitivity under severe weather conditions
Automotive-capable digital or analog video interface(s)
Advanced theft-deterrence circuitry
9 Camera Package
The next generation FIR night vision camera package is shown below. The camera is very small, mainly limited by the enclosure itself, which is sealed against water intrusion to a depth of 3 meters. There are no moving parts inside the camera, no thermo-electric cooler, and the power consumption is less than 2 Watts at all ambient temperatures.
Fig. 4. Current generation prototype (left) and next generation FIR night vision camera (right)
Fig. 5. Front view (left) and top view (right)
10 Ideal FIR Detector Technology for the Next Generation FIR Night Vision Camera
The technology for next-generation uncooled FIR imaging sensors should combine the high MTF and sensitivity (NETD) of bolometers with the low spatial noise of BST systems, and achieve this with neither a chopper nor a shutter. Systems should be simple, like BST systems, requiring no calibration other than room-temperature responsivity. The detector should inherently require no temperature stabilization. It should be a monolithic technology, capable of being manufactured in very high volumes using standard silicon wafer (200 mm) processing steps. It should also be compatible with wafer-level vacuum packaging techniques. An exciting new detector technology is currently capable of meeting all these conditions. The figure below shows the basic MEMS detector pixel.
Fig. 6. Advanced MEMS detector pixel (not to scale)
The structure is a sandwich of a thin pyroelectric material between two layers of thin, transparent, conductive electrodes. The top electrode is split, with each half connected to a post that contacts the readout integrated circuit (ROIC) below it; one post goes to ground and the other to the preamplifier input. The bottom electrode is electrically floating, serving only to connect the two halves into two series capacitors. The top and bottom electrodes act together as the first layer of a resonant cavity tuned to maximize absorption in the 7.5 µm to 13.0 µm spectral region. The bottom layer of the cavity is the mirror on the ROIC beneath the pixel. The material used in these detectors is quite different from BST. Whereas the BST Curie temperature is near room temperature, the new MEMS device operates on the conventional pyroelectric effect. This means the new detectors do not require temperature stabilization. Furthermore, the entire range of operating temperatures is sufficiently below the Curie temperature that the change in responsivity across the operating range is minimal, and the non-uniformity of the change is negligible. Since the detectors are AC coupled, there is no electrical offset to remove or correct. The detectors therefore operate without temperature stabilization and without extensive calibration. In practice, the entire factory calibration procedure is completed in minutes. During use in the automobile, the detector continues to self-calibrate at a 60 Hz rate, which is totally invisible to the driver.
11 Wafer Level Packaging for Very Low Cost
Advanced wafer level vacuum packaging techniques, developed by L-3 Infrared Products, will be used for the advanced MEMS detector to achieve the lowest possible cost.
Fig. 7. Component level and wafer-level vacuum packaging
12 Camera Imaging Performance - Static
The imaging performance of the next generation FIR night vision camera will be superior to that of any known system technology available today. Utilization of state-of-the-art MEMS fabrication techniques results in a focal plane array with extremely high thermal isolation between pixel elements, exhibiting near theoretical Modulation Transfer Function (MTF) performance limited only by the spatial geometry of the detector (about 64% at the fundamental pixel spatial frequency). High MTF translates into very sharp scene details being reproduced in the image. While basic thermal sensitivity, or Noise Equivalent Temperature Difference (NETD), is predicted to be commensurate with today's state of the art, achieving such excellent performance with a smaller 25 µm pixel pitch is a significant achievement, since NETD tends to be inversely proportional to the pixel pitch and fill factor. The real performance differentiator for the next generation FIR night vision camera is the total elimination of static fixed pattern noise from the image under all scene conditions, all the time. This reduction of fixed pattern noise (known as σvh, or random spatial noise, in 3-D noise theory), coupled with state-of-the-art thermal sensitivity, will revolutionize the appearance of the FIR camera image viewed by the driver, resulting in a sharp "TV-like" image.
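The quoted 64% figure follows from the ideal detector-aperture MTF of a square pixel, which at the fundamental (Nyquist) spatial frequency evaluates to 2/π ≈ 0.64. A minimal check, assuming an idealised 100% fill factor:

```python
import math

def detector_mtf(f, pitch):
    """Ideal detector-aperture MTF |sinc(pi*f*d)| for a square pixel of
    width d (a simplification of the real device geometry)."""
    x = math.pi * f * pitch
    return 1.0 if x == 0 else abs(math.sin(x) / x)

pitch_um = 25.0
f_nyquist = 1.0 / (2.0 * pitch_um)          # cycles/µm at the pixel pitch
print(detector_mtf(f_nyquist, pitch_um))    # ~0.64
```

The real focal plane only approaches this bound when thermal crosstalk between pixels is negligible, which is the point of the high thermal isolation claimed above.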
Fig. 8. "TV-like" image from the FIR camera
The next generation FIR night vision camera will achieve this performance without the need for a continuously rotating chopper disk (currently used in BST cameras), or an intermittent shutter assembly (currently used in DC-coupled cameras). Coupling state-of-the-art MTF and NETD with a total elimination of fixed-pattern noise for the next generation FIR night vision camera results in superior Minimum Resolvable Temperature Difference (MRTD) performance.
13 Camera Imaging Performance - Complex and Dynamic
While a traditional MRTD test is a very good measure of the ability to resolve image detail against a uniform and stable background temperature, it is not predictive of camera imaging performance under the dynamically changing background conditions encountered while driving. Critical information needed by the driver may be unavailable due to background clutter. For example, the ability to detect a pedestrian, either through a man-machine interface represented by an in-vehicle display, or a machine-to-machine interface represented by an automatic pedestrian detection algorithm, will be limited by the FIR camera's ability to produce adequate thermal contrast between the pedestrian and the thermal background under a wide variety of constantly changing conditions. The current state of the art in FIR imaging sensors dictates that AC coupling is the preferred solution for high dynamic scenes (moving objects), high temperature dynamics (displayed temperature), and dynamic temperature environments (changing ambient temperature).
Fig. 9. Using a DC-coupled FIR camera, a pedestrian is hidden in a wide dynamic range scene (top). An AC-coupled FIR camera utilizing a mechanical chopper extracts the pedestrian from the complex thermal background (bottom)
L-3 Infrared Products' Intelligent-Scene-Processing algorithms do not require a mechanical chopper in order to enhance such information of interest to the driver. The algorithms maintain an output adjusted dynamic range of 60 to 70 dB by smoothly and automatically responding to ever changing thermal scene dynamics. This ability to always emphasize the minute details of interest within a very wide dynamic temperature range scene is probably the most important characteristic of the next generation FIR night vision camera.
Acknowledgement
Special thanks to our friends at MobilEye Technologies Limited for providing the descriptions and basis for the pedestrian detection and other driver assistance systems discussed in this paper.
Alex Kormos, Charles Hanson
L-3 Communications Infrared Products
13532 N. Central Expressway
Dallas, TX 75243, USA
[email protected]
[email protected]
Christof Buettner
L-3 Communications Infrared Products Europe
85316 Freising
[email protected]
Development of Millimeter-wave Radar for Latest Vehicle Systems
K. Nakagawa, M. Mitsumoto, and K. Kai, Mitsubishi Electric Corporation
Abstract
We developed a millimeter-wave radar that adopts both the FM-pulse Doppler method and a polarization-twisting Cassegrain antenna with mechanical scanning. Our radar features the following advantages:
Wide target distance detection range with high distance accuracy
No "ghost" output for multiple target detection
Superior accuracy of lateral position and excellent separation performance for vehicles running side by side
These advantages are proven with the prototype millimeter-wave radar we developed.
1 Introduction
Millimeter-wave radar is used as a sensor for Adaptive Cruise Control (ACC) systems, which have been brought to the market over the last several years. These systems originally aimed at driver comfort; thus the demands for superior detection performance, such as a wide target distance detection range, were not so high. Recently, however, millimeter-wave radar has been applied not only to such comfort systems, but also to safety systems. One typical safety system is a collision mitigation system, namely a pre-crash safety system. Such a system gives an alarm to the driver when a possible collision is anticipated. If the collision is unavoidable, the system controls devices such as the seat belts and/or brakes to mitigate the collision damage. In addition, millimeter-wave radar is to be used for even more advanced systems, such as Low-Speed Following (LSF) and/or Full Speed Range ACC (FSRA) systems, including automated stop-and-go. Therefore, millimeter-wave radar will need superior performance and detection reliability. In order to realize a millimeter-wave radar applicable to various safety systems, we focused on three main items: expansion of the target distance detection range, no "ghost" output, and separation of vehicles running side by side. The expansion of the target distance detection
range, for instance from 1 m to 150 m, provides a more flexible system that can detect both a distant target running at high speed and a close target running at low speed. Meanwhile, if the radar outputs an undesirable "ghost" target that does not actually exist, the system may operate falsely. No "ghost" target detection and superior separation of vehicles running side by side contribute to reducing false operations of the system and provide high detection reliability. Of course, for wider market penetration of these systems, the radar needs to become smaller in size so that it can readily fit into compact cars. This paper describes the millimeter-wave radar we developed incorporating the above features. In the next chapter, the FM-pulse Doppler method [1] is described, and the principles behind the high distance accuracy, the wide target distance detection range, and the absence of "ghost" output are explained. In the following chapter, we explain our development of the polarization-twisting Cassegrain antenna with mechanical scanning, and show that the antenna provides long distance detection, superior separation of vehicles running side by side, and high detection accuracy of lateral position, in a small-size radar. Finally, we show the test results of our prototype millimeter-wave radar to bolster our argument.
2 Radar Principles
When millimeter-wave radar is used for safety systems, the requirements are: higher distance accuracy, a wider target distance detection range, and no "ghost" output. To achieve all of these at the same time, we adopted the FM-pulse Doppler method for target detection. The FM-pulse Doppler method can be considered a combination of the Frequency-Modulated Continuous Wave (FMCW) and Pulse Doppler methods. The structure of our FM-pulse Doppler radar is shown in figure 1. The frequency-modulated (FM) wave is switched into intermittent pulses (pulse modulation: PM) before transmission. The received signal is mixed with the frequency-modulated wave to generate the beat signal. The beat signal is then processed using an FFT, as in the FMCW method, to calculate the distance and relative speed at each specified receiving period (range gate). The FMCW method is generally preferred for motor vehicle use because of its high distance detection accuracy. However, due to its signal processing procedure, it in principle outputs ghost targets when multiple targets exist. On the other hand, the Pulse Doppler method is chiefly used for military long-range radar, because it calculates distance from the delay time of the received pulse, and relative speed from the transmitted pulse and the Doppler frequency; this is why the method does not suffer from ghost targets. However, its distance detection accuracy depends on the range gate step, which is often equivalent to the sampling period; it is therefore generally insufficient for automotive use, which requires strict distance detection accuracy, unless a very fast sampling rate is used.
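The FMCW ghost mechanism can be illustrated with a toy example. In a classic up/down-chirp FMCW radar, each target contributes one beat frequency per chirp direction, and range/Doppler are recovered by pairing up-chirp and down-chirp beats; with several targets the pairing is ambiguous. The sign convention below is one common textbook choice, and the function is purely illustrative:

```python
from itertools import product

def fmcw_candidates(f_up_list, f_dn_list):
    """All (range-frequency, Doppler-frequency) pairings of up-chirp and
    down-chirp beat frequencies, assuming f_up = f_r - f_d and
    f_dn = f_r + f_d. With N real targets the N*N pairings include
    N*(N-1) ghosts, illustrating why plain FMCW can output ghost targets."""
    cands = []
    for fu, fd in product(f_up_list, f_dn_list):
        f_range = (fu + fd) / 2.0
        f_doppler = (fd - fu) / 2.0
        cands.append((f_range, f_doppler))
    return cands
```

Two real targets with (f_r, f_d) of (12, 2) and (23, 3) produce up-chirp beats {10, 20} and down-chirp beats {14, 26}; pairing yields four candidates, of which two are ghosts that must be rejected by extra logic, a problem the FM-pulse Doppler method avoids by construction.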
Fig. 1. The structure of our FM-pulse Doppler radar
In consideration of the above, we adopted the FM-pulse Doppler method to benefit from both the FMCW and Pulse Doppler methods: high distance detection accuracy without ghost output. According to the radar equation [2], the received power is inversely proportional to the 4th power of the distance; the dynamic range of the received signal deemed necessary is therefore 80 dB when the radar detects a target within a 1-100 m range. One countermeasure against such a large required dynamic range is Sensitivity Time Control (STC) [2]. To reduce the necessary receiving dynamic range, the STC controls the receiving sensitivity depending on the target distance, which corresponds to the delay time. The STC is therefore not suitable for a radar method that samples the signals from all distances at once, such as FMCW radar, but it is applicable to pulse radar [3]. Hence, using the STC, the FM-pulse Doppler method can process a larger dynamic range, which results in a wider target distance detection range. Table 1 summarizes the foregoing explanations.
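The 80 dB figure and the idea behind STC follow directly from the radar equation; a minimal numeric sketch (the reference level and the idealised STC profile are assumptions for illustration):

```python
import math

def received_power_db(r, p0_db=0.0, r0=1.0):
    """Radar equation in dB: received power falls with the 4th power of
    range, i.e. -40 dB per decade (p0_db is the level at range r0)."""
    return p0_db - 40.0 * math.log10(r / r0)

# dynamic range needed to cover targets from 1 m to 100 m
span_db = received_power_db(1.0) - received_power_db(100.0)
print(span_db)   # 80.0 dB, as stated in the text

def stc_gain_db(r, r_max=100.0):
    """Idealised Sensitivity Time Control: attenuate close (early) returns
    so that all ranges produce similar receiver levels."""
    return -40.0 * math.log10(r_max / max(r, 1.0))
```

With this profile the sum of received power and STC gain is constant over range, which is exactly how STC shrinks the dynamic range the receiver must handle; it is only realisable in a pulsed radar, where range maps to delay time.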
Tab. 1. The comparison of radar principles
We conclude that the FM-pulse Doppler method has higher distance detection accuracy, no “ghost” output, and wider target detection range.
3 Antenna System
Our FM-pulse Doppler radar satisfies the basic requirements of high distance detection accuracy, wide target detection range, and no "ghost" output. Nevertheless, a radar applied to safety systems must also separate vehicles running side by side and detect their lateral positions accurately, all in a small package. To achieve this, we developed the polarization-twisting Cassegrain antenna with mechanical scanning. This antenna consists of a main reflector with a polarization-twisting function and a sub reflector with grids, and operates as shown in figure 2 (a): the primary radiator outputs a beam towards the sub reflector, where the beam is reflected at the grids. On the main reflector the beam is reflected again and its polarization is twisted orthogonally. The twisted beam then passes through the sub reflector and is radiated outside. The main reflector is controlled mechanically to steer the transmitted beam in the desired direction within the scanning area. Our polarization-twisting Cassegrain antenna with mechanical scanning has the following advantages:
(1) Narrower beam width, achieved by using one antenna for both transmitting and receiving waves: optimal utilization of the limited antenna dimension
(2) Extremely flexible beam directions, enabled by the arbitrarily controllable angle of the main reflector
(3) The benefits of a parabolic antenna: higher efficiency and gain than an array antenna
(4) The benefits of the sub reflector: the focal distance is shortened to half that of a radar with a parabolic antenna only
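For context on advantage (1): a common rule of thumb for a parabolic reflector, θ₃dB ≈ 70λ/D degrees, relates beam width to aperture size. The figures below are our own illustrative assumptions (76.5GHz operation, 10cm aperture), not the paper's specifications:

```python
C = 299_792_458.0  # speed of light, m/s

def half_power_beamwidth_deg(freq_hz: float, aperture_m: float, k: float = 70.0) -> float:
    """Rule-of-thumb 3 dB beamwidth of a parabolic reflector: theta ~ k * lambda / D degrees."""
    wavelength = C / freq_hz
    return k * wavelength / aperture_m

# Example: 76.5 GHz automotive band, assumed 10 cm reflector -> beam of a few degrees
print(round(half_power_beamwidth_deg(76.5e9, 0.10), 2))
```

The larger the usable aperture D, the narrower the beam, which is why sharing one reflector for transmit and receive helps within a fixed package size.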
Fig. 2. The polarization-twisting Cassegrain antenna with mechanical scanning: (a) main reflector and sub reflector, (b) principle
Thanks to advantages (1) and (2), our millimeter-wave radar has high detection accuracy for lateral positions and superior separation of vehicles running side by side. Advantage (3) enables long-distance detection, while advantage (4) allows the radar to be downsized.
Fig. 3. Our millimeter-wave radar (prototype)
4 Outline and Specifications
We developed a prototype millimeter-wave radar that applies the FM-pulse Doppler system as well as the polarization-twisting Cassegrain antenna with a mechanical scanning mechanism. The radar is shown in figure 3 and its main specifications are listed in table 2.
Tab. 2. Main specification

5 Test Results
In this chapter, we present the performance test results of our millimeter-wave radar:
- The target distance detection range and accuracy
- The target angle detection range and accuracy
- The separation performance for vehicles running side by side
- The detection performance of two or more targets
5.1 The Target Distance Detection Range and Accuracy
In this section, we present the performance test results for the target distance detection range and accuracy. As shown in figure 4, the minimum and maximum detectable distances to the target define the target distance detection range. The target distance detection accuracy was measured from discrepancies between the detected and the actual distances, after placing a target made of a corner reflector (RCS 10dBsm: equivalent to a passenger car) at the 20m, 40m, 60m, 80m, 100m, 120m, and 140m points. As shown in table 3, the target detectable distance range extended from less than 1m to more than 150m. Moreover, the target distance detection accuracy was better than 1m, as shown in figure 5. Our millimeter-wave radar was thus proven to possess a wide target distance detection range and high detection accuracy.
Fig. 4. The target distance detection range and accuracy performance tests (left)
Tab. 3. The target distance detection range performance test results (right)
Fig. 5. The target distance detection range and accuracy performance test results
5.2 The Target Angle Detection Range and Accuracy
In this section, we show the performance test results for the target angle (azimuth) detection range and accuracy. As shown in figure 6, we placed the same reflector 140m ahead to measure the maximum detection angle. The target angle detection accuracy was measured from discrepancies between the detected and the actual angles of the same target placed 140m ahead. As shown in table 4, the target angle detection range was ±8.0 degrees, and as shown in figure 7, the target angle detection accuracy was better than ±0.4 degrees. Our millimeter-wave radar thus proved capable of detecting lateral positions with sufficient accuracy.
Fig. 6. The target angle detection range and accuracy performance test (140m, left)
Tab. 4. The target angle detection range performance test results (140m, right)
Fig. 7. The target angle detection range and accuracy performance test results (140m)
In addition, figure 8 shows the target angle detection range performance test results with the target placed successively at the 20m, 40m, 60m, 80m, 100m, and 120m points.
Fig. 8. The target angle detection range performance test results

5.3 The Separation Performance for Vehicles Running Side by Side
In this section, we show the separation performance test result for vehicles running side by side. We defined this performance as the maximum distance at which our radar could recognize two targets as separate. A pair of targets was placed 3m apart, while the subject vehicle approached them at 10 km/h, as shown in figure 9.
Fig. 9. The separation performance test (left)
Tab. 5. The separation performance test result (right)
According to figure 10 and table 5, the separation performance test result was about 80m. Our millimeter-wave radar can therefore correctly separate vehicles running side by side.
Fig. 10. The separation performance test result
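As a back-of-the-envelope cross-check (ours, not the authors'), the 80m separation distance corresponds to the angle under which two targets 3m apart are seen from the radar:

```python
import math

def subtended_angle_deg(lateral_sep_m: float, range_m: float) -> float:
    """Angle under which two side-by-side targets appear from the radar."""
    return math.degrees(math.atan2(lateral_sep_m, range_m))

# 3 m spacing resolved out to about 80 m:
print(round(subtended_angle_deg(3.0, 80.0), 2))  # ~2.15 degrees
```

Resolving the pair at 80m thus implies an effective angular separation capability on the order of two degrees, consistent with the narrow-beam antenna of section 3.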
5.4 The Detection Performance of Two or More Targets
In this section, we show the detection performance test result for two or more targets. We verified whether our radar could detect two or more targets correctly, without ghost output. Figure 11 shows how the test was conducted under actual traffic conditions. As shown in figure 12, our millimeter-wave radar detected four targets correctly, without ghost output.
Fig. 11. The detection performance test of two or more targets
Fig. 12. The detection performance test result of two or more targets
6 Conclusion
We developed a millimeter-wave radar applying the FM-pulse Doppler method and the polarization-twisting Cassegrain antenna with a mechanical scanning mechanism. This millimeter-wave radar can be used not only as a sensor for driver comfort systems, but can also be applied to safety systems, because it has the following advantages:
(1) Wide target detection distance range with high distance accuracy
(2) No "ghost" output for multiple target detection
(3) Superior accuracy of lateral position and excellent separation performance for vehicles running side by side
Finally, we showed the test results of our prototype millimeter-wave radar, which confirmed the above advantages. We intend to apply our radar to various systems such as comfort and/or safety systems.
References
[1] S. Noda, K. Kai, N. Uehara and M. Akasu, "A Design of Millimeter-Wave Radar Dynamic Range with Statistical Analysis," SAE 2003 World Congress, 2003-01-0014, 2003.
[2] M.I. Skolnik, "Introduction to Radar Systems (3rd ed.)," McGraw-Hill, 2001.
[3] N. Uehara, K. Kai, S. Honma, T. Takahara and M. Akasu, "High Reliability Collision Avoidance Radar Using FM-Pulse Doppler Method," SAE 2001 World Congress, 2001-01-0803, 2001.
Kado Nakagawa, Masashi Mitsumoto, Koichi Kai
Mitsubishi Electric Corporation
Automotive Electronics Development Center
840 Chiyoda-Machi, Himeji, Hyogo 670-8677, Japan
[email protected]
[email protected]
[email protected]

Keywords: millimeter-wave radar, FM-pulse Doppler, polarization-twisting Cassegrain, adaptive cruise control, ACC, pre-crash safety, low speed following, LSF, full speed range ACC, FSRA, frequency modulated continuous wave, FMCW, pulse Doppler
New Inertial Sensor Cluster for Vehicle Dynamic Systems

J. Schier, R. Willig, Robert Bosch GmbH

Abstract

The concept of Bosch's new sensor cluster SC-MM3.x was first presented at the AMAA conference in 2003 [21]. This sensor cluster will replace the current DRS-MM1.x, which was the first silicon micromachined angular velocity and lateral acceleration sensor for the Bosch ESP system. Start of series production will be in the spring of 2005. The sensor cluster features a new generation of silicon micromachined angular velocity and acceleration sensor elements, fully digital electronic readout circuits, a modular concept for hardware and software, and many new safety features, which lead to a flexible and reliable solution for many vehicle dynamic systems. Compared to previous generations, robustness against vibration and EMI has been increased significantly. The single sensor elements feature a digital SPI interface and, due to the plastic IC package, can be integrated into the electronic control units of automotive equipment manufacturers.

The sensor cluster has been designed to fulfill the requirements not only of the Bosch ESP system, but also of other systems which make use of angular velocity and acceleration signals, e.g. HHC (Hill Hold Control), APB (Automated Parking Brake), ACC (Adaptive Cruise Control), 4w (Four Wheel Drive), ROM (Roll Over Mitigation), RoSe (Roll Over Sensing) or EAS (Electronic Active Steering). Signal monitoring is no longer performed only by the ECU of the higher-level vehicle dynamic system; in addition, self-monitoring has been implemented within the sensor cluster. This new safety concept is explained in detail. The result of our development is an open CAN interface for all relevant angular velocity and acceleration signals, which vehicle systems of all kinds can make use of.
In this paper, the accuracies and characteristics of the sensor elements are presented, as well as other test results. Advanced vehicle dynamic systems require an even higher level of safety than the ESP system. This can be achieved by redundancy of the sensor elements in combination with a two-microcontroller concept and intelligent software algorithms. Because of its modular design, the SC-MM3.x is well suited for such systems.
1 Introduction
In 1995 Bosch started mass production of the first VDC system (Vehicle Dynamics Control System) for vehicles. This system, called ESP (Electronic Stability Program), is a safety system for road vehicles which controls the dynamic vehicle motion in emergency situations by controlled braking of individual wheels, making the vehicle motion approach the nominal motion intended by the driver. It uses signals to discern the driver's intention, such as steering wheel angle, brake pressure and engine torque, and signals to derive the actual motion of the vehicle, e.g. the angular velocity of the car around its vertical axis and the lateral acceleration.

The key part of this system was a yaw rate sensor of the first generation, DRS 50/100 [12], based on a metal vibrating cylinder with piezoelectric transducers. It was followed by the second generation, DRS-MM1, in 1998 [13], based on a combination of silicon bulk and surface micromachining with an electromagnetic drive and capacitive detection, including an integrated micromechanical lateral acceleration sensor element.

The ESP system, the interconnection with other chassis comfort systems and the development of advanced high-performance vehicle stabilizing systems imposed higher requirements on the inertial signals of the vehicle dynamics, especially with respect to signal performance and robustness, additional measuring axes and reliability. Therefore Bosch has developed the third generation, the flexible and cost-effective inertial sensor cluster MM3.x, to meet the requirements of Hill Hold Control (HHC), Automated Parking Brake (APB), Navigation (Navi, Travel Pilot), Adaptive Cruise Control (ACC), Four Wheel Drive (4w), Roll Over Mitigation (ROM), Electronic Active Steering (EAS), Roll Over Sensing (RoSe), Active Suspension Control (ASC) and Steer-by-Wire (SbW), see figure 1.
The sensor cluster MM3 provides additional inertial measuring data on the dynamic behaviour of the vehicle, like angular acceleration, longitudinal acceleration, tilt angle, acceleration in the direction of the z-axis, and angular velocity around the x-axis, for enhanced vehicle dynamic systems. To meet the demands of highly safety-relevant systems, a high-end variant is available which provides redundancy of the inertial data in combination with intelligent SW algorithms.

This new sensor cluster features increased robustness, compact design, higher accuracy and a better signal-to-noise ratio. The improved insensitivity to external interference (vibrations and EMI) and the high signal refresh rate lead to easy applicability. With the modular concept for hardware and software combined with the integral safety concept, flexible use in various systems with different requirements is possible.

The concept of Bosch's new sensor cluster SC-MM3.x and its basic functions and design were first presented at the AMAA conference in 2003 [21]. This paper presents a short description of the function and design of the inertial sensor elements and of the sensor cluster, exemplary measurement results of signal performance, accuracies and characteristics, the modular and advanced safety concept of the sensor cluster, application and integration in other systems, and future trends.
Fig. 1. Sensor cluster for various system requirements
2 Function and Design of Inertial Sensor Elements

2.1 Function and Design of Angular Velocity Sensor Element
The sensor element consists of the micromechanical measuring element, the electronic readout circuit (ASIC) and a 20-pin plastic surface mount package (Premold PM20) and is shown in figure 3. The new micromechanical angular velocity measuring element belongs to the well known Coriolis Vibratory Gyroscopes (CVG, [1]-[19]). It is designed as an inverse tuning fork configuration with two linear in-plane orthogonal oscillation modes, the drive mode and the detection mode [21]. The oscillation as well
as the sensing of the motion of the drive mode is provided by electrostatic forces at comb drive electrodes. The measurement of the Coriolis acceleration in the detection mode is realized capacitively via interdigitated electrodes. The measuring element is designed as two spring-mass structures, mechanically coupled via the coupling spring (see figure 2) and with one natural resonant frequency for both oscillating modes. To reduce sensitivity to external mechanical interference and overloads, the design of the micromachined structures features a high natural resonant frequency of typically 15kHz, out of the range of the vibration power spectrum in the vehicle.
Fig. 2. Micromechanical angular velocity measuring element, SEM photo
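The measured quantity in such a Coriolis vibratory gyroscope is the Coriolis acceleration a_c = 2·v·Ω acting on the driven mass. A minimal numerical sketch, with an assumed drive amplitude of 10µm at the ~15kHz resonance (illustrative values, not the element's actual design data):

```python
import math

def coriolis_accel(v_drive_mps: float, omega_dps: float) -> float:
    """Coriolis acceleration a_c = 2 * v * Omega for an in-plane drive velocity v
    and an angular rate Omega about the orthogonal (vertical) axis."""
    return 2.0 * v_drive_mps * math.radians(omega_dps)

# Assumed drive: 10 um amplitude at the ~15 kHz resonance -> peak drive velocity
amp_m, f_hz = 10e-6, 15e3
v_peak = 2.0 * math.pi * f_hz * amp_m   # ~0.94 m/s
a_c = coriolis_accel(v_peak, 100.0)     # peak Coriolis acceleration at 100 deg/s, m/s^2
print(round(a_c, 2))
```

The resulting acceleration is tiny compared to the drive motion, which is why the orthogonal detection mode and capacitive interdigitated electrodes are needed to pick it up.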
The basic function and design of the angular velocity sensor element were presented at the AMAA in 2003 [21]. The ASIC and the micromechanical measuring element are glued into the premolded plastic surface-mount package with an adhesive that minimizes mechanical stress coupling to the micromachined element. They are connected by chip-to-chip bond wires, and the ASIC is bonded to the leadframe. To protect against environmental influences, the inside of the premold package, including the micromachined measuring element, ASIC, lead frame and bond wires, is sealed with gel. Finally, the sensor element is covered with a metal lid in a hot stamping process.
Fig. 3. Angular velocity sensor element: photo of the uncovered premold package PM20 with lead frame, micromechanical element and ASIC (left) and the final sensor element (right)

2.2 Function and Design of Linear Acceleration Sensor Element
Similar to the angular velocity sensor element, the linear acceleration sensor element shown in figure 5 consists of the micromechanical measuring element, the electronic readout circuit (ASIC) and a 12-pin plastic surface mount package (Premold PM12).
Fig. 4. SEM photo of the micromechanical linear acceleration measuring element
The deflection of the spring-mass structure in the sensitive axis due to external acceleration forces is detected with a differential capacitive comb structure, see figure 4. The spring-mass structure covers a high mechanical g-range of 40g; together with the new electronic readout circuit, a variable low-g measuring range from 2g to 5g with low noise and high accuracy is achieved.
The basic function and design of the linear acceleration sensor element and the electronic readout circuit were presented at the AMAA in 2003 [21]. The packaging type of the acceleration sensor element and the fabrication process are very similar to those of the angular velocity sensor element, see figure 5.
Fig. 5. Acceleration sensor element: photo of the uncovered premold package PM12 with lead frame, micromechanical element and ASIC (left) and the final sensor element (right)

3 Signal Performance of the Sensor Cluster
Design and basic function of the inertial sensor elements and electronic readout circuit (ASIC) were presented in [21]. In this chapter exemplary measurement results of the signal accuracies and characteristics of the sensor cluster MM3.x over temperature are presented as well as other results of our tests.
3.1 Angular Velocity Signal Accuracies and Characteristics
To fulfill the various demands on signal accuracy and performance (resolution, signal noise, linearity, offset and sensitivity error and drift, bandwidth), a high Q-factor, a double resonant oscillation and a closed-loop principle are used and implemented in the sensor's electronic readout circuit (ASIC). A force-rebalance control loop including an electromechanical ∆Σ-modulator (high sampling rate and quantisation, noise shaping) electrostatically drives the Coriolis force to zero. An electrostatic stiffness tuning loop and a quadrature control loop are also implemented.

In figure 6 the offset error over a temperature range of -40°C to +85°C is traced. The offset of each sensor element is calibrated and compensated over temperature; the offset deviation is within ±0.4°/s. Due to the described basic functions, the sensitivity of the angular velocity signal is temperature independent and compensation is not necessary. Figure 7 shows the sensitivity error over temperature, within a range of ±1.0%.
Fig. 6. Offset error over temperature
Fig. 7. Sensitivity error over temperature
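The electromechanical ∆Σ force-rebalance loop itself lives in the ASIC and the micromachined element; the underlying ∆Σ principle, a coarse quantizer inside an integrating feedback loop whose average output tracks the input, can however be illustrated with a purely numerical first-order model (our own sketch, not the cluster's actual loop):

```python
def delta_sigma(samples):
    """First-order delta-sigma modulator: integrate the error between the input
    and the fed-back 1-bit output; the quantizer sign gives the bitstream."""
    integ, bits = 0.0, []
    for x in samples:
        integ += x - (1.0 if bits and bits[-1] else -1.0)  # feedback of last bit as +/-1
        bits.append(integ >= 0.0)
    return bits

# A DC input of 0.25 (normalized) gives a bitstream whose duty cycle encodes it:
bits = delta_sigma([0.25] * 10_000)
duty = sum(bits) / len(bits)
print(round(2.0 * duty - 1.0, 3))  # reconstructed value, ~0.25
```

Averaging (decimating) the 1-bit stream recovers the input while pushing quantization noise to high frequencies, which is the noise-shaping property the text refers to.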
The closed control loop with electromechanical ∆Σ-modulation and the double resonant principle have advantages regarding signal noise and resolution. Figure 8 presents the signal noise (rms) over temperature with a sensor bandwidth of 50Hz (-3dB); the values are less than 0.030°/s rms. Using the Allan variance calculation (see figure 9), a so-called bias (offset) instability of 0.001°/s (=3.6°/h) and a random walk, or random white noise, of 0.005°/s/√Hz (=0.3°/√h) are measured and calculated.
Fig. 8. Signal noise (rms) over temperature with a bandwidth of 50Hz (-3dB)
Fig. 9. With the Allan variance, a bias (offset) instability of 0.001°/s (=3.6°/h) and a random walk (random white noise) of 0.005°/s/√Hz (=0.3°/√h) are measured and calculated
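The Allan variance behind figure 9 can be computed from sampled rate data as half the mean squared difference of successive cluster averages. A sketch on synthetic white rate noise (sample rate and record length are our assumptions), where the Allan deviation should fall roughly as 1/√τ:

```python
import random

def allan_variance(rate, m):
    """Non-overlapped Allan variance for cluster size m (averaging time
    tau = m / fs): half the mean squared difference of successive
    cluster averages."""
    n = len(rate) // m
    means = [sum(rate[i * m:(i + 1) * m]) / m for i in range(n)]
    return 0.5 * sum((means[k + 1] - means[k]) ** 2 for k in range(n - 1)) / (n - 1)

# White rate noise with density N = 0.005 (deg/s)/sqrt(Hz), sampled at fs = 100 Hz,
# so the per-sample standard deviation is N * sqrt(fs).
random.seed(0)
fs, N = 100.0, 0.005
noise = [random.gauss(0.0, N * fs ** 0.5) for _ in range(200_000)]
for m in (10, 100, 1000):
    adev = allan_variance(noise, m) ** 0.5
    print(f"tau={m / fs:6.1f} s  adev={adev:.4f} deg/s")  # ~ N / sqrt(tau)
```

On a real recording, the bias instability shows up as the flat bottom of the Allan deviation curve, below the 1/√τ slope produced by the white noise.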
3.2 Linear Acceleration Signal Accuracies and Characteristics
For the micromechanical measuring element of linear acceleration, an in-plane spring-mass structure is used, fabricated with the improved Bosch silicon surface micromachining technology. The spring-mass structure is designed for a high mechanical g-range (40g); together with the new electronic readout circuit (ASIC), a variable low-g measuring range (from 2g to 5g) with low signal noise (5mg rms) and high accuracy is possible. In this way, strong external mechanical interference causes no clipping of the mechanical structure or of the signal processing. This high dynamic range was the main challenge for the signal processing. The ASIC contains an analog front-end with fully differential charge-to-voltage conversion and correlated double sampling (CDS) to achieve high dc stability. The fully digital signal processing includes a mechanically open-loop, electrically closed-loop ∆Σ-modulation, filtering, offset adjustment with temperature compensation, an output filter and finally a scale factor calibration. The linear acceleration sensor element has two signal outputs with different sensitivities and measuring ranges. Figure 10 shows measured signal noise with a sensor bandwidth of 50Hz (-3dB) over temperature of less than 3.0mg rms, figure 11 shows the offset error over temperature within ±20mg, and figure 12 shows the sensitivity error over temperature within ±0.8%.
Fig. 10. Signal noise over temperature with sensor bandwidth of 50Hz
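The offset adjustment with temperature compensation mentioned above can be sketched as a calibration polynomial subtracted from the raw signal; the coefficients below are invented for illustration, not end-of-line data of the MM3.x:

```python
def compensate_offset(raw_g, temp_c, coeffs=(0.012, -1.5e-4, 2.0e-6)):
    """Subtract a temperature-dependent offset modeled as a calibration
    polynomial c0 + c1*T + c2*T^2. The coefficients would come from
    end-of-line calibration; the values here are made up for illustration."""
    c0, c1, c2 = coeffs
    offset = c0 + c1 * temp_c + c2 * temp_c ** 2
    return raw_g - offset

# At 25 C the modeled offset is 0.012 - 0.00375 + 0.00125 = 0.0095 g,
# so a raw reading of exactly that offset compensates to ~0:
print(compensate_offset(0.0095, 25.0))
```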
Fig. 11. Offset over temperature
Fig. 12. Sensitivity error over temperature
3.3 Sensor Cluster g-sensitivity Performance
The insensitivity to external mechanical interference and overloads in the vehicle, such as linear and angular vibrations, is also a result of the described sensor principles. The signal output filtering is done with a digital low-pass type-2 Chebyshev IIR filter. No additional mechanical damping measure in the sensor cluster is necessary to suppress the effects of external vibration interference caused by the mounting location in the vehicle and specific driving maneuvers. Figure 13 shows the measured g-sensitivity of the sensor cluster, obtained on a shaker in the three main directions with a sinusoidal excitation amplitude of 5g in the frequency range from 30Hz to 10kHz.
Fig. 13. g-sensitivity in Y-, X-, Z-direction, 30Hz to 10kHz, sinusoidal excitation amplitude 5g (blue is angular velocity sensor element in °/s, red lateral acceleration sensor element in g, magenta longitudinal acceleration sensor element in g, green shaker reference in g)
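The output filtering described above uses a type-2 Chebyshev IIR low-pass. As a sketch of how such a filter is applied as a difference equation (the coefficients below form a simple one-pole placeholder, not the actual 50Hz Chebyshev design):

```python
def iir_filter(b, a, x):
    """Apply an IIR filter given transfer-function coefficients b (numerator)
    and a (denominator, a[0] == 1) via the direct-form I difference equation."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

# Placeholder one-pole low-pass (NOT the Chebyshev coefficients): pole at z = 0.9
b, a = [0.1], [1.0, -0.9]
step = iir_filter(b, a, [1.0] * 200)
print(round(step[-1], 3))  # settles to the DC gain of 1.0
```

A production design would cascade several such sections with Chebyshev type-2 coefficients, which place stopband zeros to give a sharp cut-off at the 50Hz bandwidth.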
4 Design, Modular Concept and Advanced Safety Concept

4.1 Mechanical Design and Modular Concept
The packaged sensor elements are mounted on a printed circuit board (PCB) together with a microcontroller, non-volatile memory (EEPROM), voltage regulator, quartz crystal and passive components. The PCB is fixed in a plastic housing and sealed with a cover. A four-pin connector is used for the external power supply (14V) and for data transfer via CAN. The mechanical design is depicted in figure 14.
Fig. 14. Mechanical design of sensor cluster MM3.x
The internal supply voltage is 5V. The internal data communication between the sensor elements and the microcontroller is done via a synchronous bidirectional serial data link at up to 4MBd. The microcontroller transforms the signals into one or more CAN messages, together with essential status information, for communication with the system ECU. The content of the CAN matrix is extendable according to the sensor elements equipped. Furthermore, the signal refresh rate is adjustable from 5ms to 20ms according to dynamic system requirements. Due to this modularity in hardware and software (see figure 15), a cost-effective, extendable sensor cluster with different versions for various applications is available.
Fig. 15. Modular concept, schematic block diagram of SC-MM3.x
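The packing of signals and status into a CAN data field can be sketched with an assumed frame layout; the scalings, signal order and status byte below are illustrative only, not the MM3.x CAN matrix:

```python
import struct

def pack_inertial_frame(yaw_dps, ay_g, ax_g, status):
    """Pack one hypothetical 8-byte CAN data field: three little-endian int16
    signals plus a status byte and a pad byte (layout and scalings invented)."""
    def to_i16(v, lsb):
        return max(-32768, min(32767, round(v / lsb)))
    return struct.pack("<hhhBx",
                       to_i16(yaw_dps, 0.005),   # assumed 0.005 deg/s per bit
                       to_i16(ay_g, 0.0001),     # assumed 0.1 mg per bit
                       to_i16(ax_g, 0.0001),
                       status & 0xFF)

frame = pack_inertial_frame(12.34, -0.250, 0.100, 0x01)
print(len(frame), frame.hex())  # 8-byte classic CAN payload
```

The receiving ECU unpacks with the same layout and applies the inverse scaling; extending the CAN matrix for additional sensor elements simply adds further messages.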
The sensor cluster can be installed in the passenger compartment or trunk. The housing provides two bushings for fastening it in the vehicle. Figure 16 shows the two housing sizes available, depending on the required sensor element equipment. The small housing is mechanically compatible with the current MM1 at 79x80x32mm3; the large housing measures 98x90x32mm3.
Fig. 16. Two housing variants available depending on sensor element equipment rate
4.2 Safety Concept and Software Algorithms
The internal safety and monitoring concept of the sensor cluster is based on a number of different levels. It uses various error counters and time bases to detect failures in the micromachined measuring elements, in the ASICs, at interconnections (i.e. bonds and pins), in the SPI communication, in the microcontroller, in the EEPROM, in the voltage regulator and at the CAN interface. Nevertheless, the demanded availability of the sensor cluster is still provided for. The functionality of the realized safety concept is exemplified in figure 17.

Sensor element internal self-testing and monitoring at ASIC level. During normal operation and at power-on, the control parameters of the oscillator stage (drive mode), damping stage (detection mode) and output stage (digital backend and SPI module) of the angular velocity sensor element, and the signal and control parameters of the acceleration sensor element, are monitored. The status of the ASIC-internal monitoring is transmitted via SPI to the microcontroller.

Sensor element initialisation and signal monitoring at microcontroller level. At power-on, several self-tests of the ASIC and the measuring element are initiated by the microcontroller. The derived signals are evaluated by comparison with memorized end-of-line data. During normal operation, ordinary signals like angular velocity and acceleration as well as additional signals such as quadrature, temperature and control loop parameters are transmitted to the microcontroller. These are checked for valid signal range, gradients and long-term drift. For all variants with redundant sensor elements, additional tolerance band checks of the signals are performed.

Monitoring of microcontroller and sensor cluster functions. A high coverage of microcontroller failure detection is achieved by intelligent algorithms which perform initial and periodic memory checks (RAM, ROM, EEPROM) and ensure the logically and temporally correct program flow in combination with the external hardware watchdog. Safety-relevant data is stored redundantly and checked by a 16-bit CRC (cyclic redundancy check). The SPI and CAN communication as well as the ADC are continually tested during normal operation mode.
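The redundant storage with a 16-bit CRC can be sketched as follows; the paper does not state the polynomial used, so the common CRC-16-CCITT is assumed here:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16-CCITT (poly 0x1021, init 0xFFFF). The cluster's actual
    polynomial is not given in the paper; this one is a common choice."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def check_redundant(block_a: bytes, block_b: bytes, stored_crc: int) -> bool:
    """Accept safety-relevant data only if both copies match and the CRC verifies."""
    return block_a == block_b and crc16_ccitt(block_a) == stored_crc

cal = b"offset=+0.12"
crc = crc16_ccitt(cal)
print(check_redundant(cal, cal, crc), check_redundant(cal, b"offset=+9.99", crc))  # True False
```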
Fig. 17. Overview of the safety and monitoring concept. The tests and monitoring (right) are evaluated by the error counters and mapped to signal and sensor cluster specific status flags on the CAN bus (left)

Failure detection, timing and status information via CAN. The sensor cluster's internal main cycle is 1ms. Within this cycle time the above described monitoring functions are processed. If one or more failures are detected internally, an error counter is increased. The increase step width depends on the type of error and its severity. After a latency time of less than 25ms, the status flag is signalized as temporary or permanent on the CAN bus, as illustrated in figure 17.

Signal monitoring and plausibility checks at ECU level. Depending on the specific system in which the sensor cluster is applied, different dynamic requirements can be met by adjustable signal refresh rates on the CAN, configurable signal filtering and adaptable signal ranges. The current as well as additional new monitoring and plausibility checks in the ECU (with specific system reactions) increase the reliability and availability of the sensor cluster for the system.
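The error-counter qualification with a bounded latency can be sketched as below; step sizes, decay and threshold are invented for illustration and merely chosen so that a persistent severe error is flagged well within the 25ms budget at the 1ms cycle:

```python
class ErrorCounter:
    """Debounced failure qualification: each 1 ms cycle adds a severity-dependent
    step on error and decrements on a passing check; the flag is raised once the
    counter crosses a threshold (all numbers here are illustrative, not Bosch's)."""
    def __init__(self, threshold=20, decay=1):
        self.count, self.threshold, self.decay = 0, threshold, decay

    def update(self, error: bool, step: int = 5) -> bool:
        if error:
            self.count = min(self.threshold, self.count + step)
        else:
            self.count = max(0, self.count - self.decay)
        return self.count >= self.threshold  # True -> status flag set on CAN

ec = ErrorCounter()
cycles = [ec.update(True) for _ in range(4)]  # severe error on 4 consecutive 1 ms cycles
print(cycles)  # flag raised on the 4th cycle, within the 25 ms latency budget
```

The severity-dependent step width means a severe fault qualifies in a few cycles while a sporadic glitch decays away without ever raising the flag.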
This safety concept is based on FMEA and FTA methods, which were applied during the development phase.
4.3 Sensor Cluster for Advanced Safety Requirements
For systems with high safety requirements, a redundant version is available with two angular velocity sensor elements and two lateral acceleration sensor elements. The signals of the redundant elements are monitored and compared in the microcontroller; the difference of the signals may not exceed a defined tolerance band. For failures in the sensor elements to be detected, a difference in phase and amplitude of the signals is required. A high-end variant of the sensor cluster is also available for the detection of signal failures caused by the microcontroller. Single transient signal failures are detected with a redundant signal processing path realized by a second microcontroller. In this new fail-safe strategy, shown in figure 18, the two results of the signal processing are mutually compared in both microcontrollers. This leads to enhanced safety and reliability. The development of this concept is also based on FMEA and FTA methods.
Fig. 18. Redundant signal calculation path with two microcontrollers for high safety requirements
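The tolerance-band comparison of redundant sensor elements can be sketched as a per-sample check (the band of 1°/s is our assumption, not the cluster's calibrated value):

```python
def redundant_ok(ch_a, ch_b, tol=1.0):
    """Plausibility check of two redundant angular velocity channels (deg/s):
    every sample pair must stay inside the tolerance band (tol is assumed)."""
    return all(abs(a - b) <= tol for a, b in zip(ch_a, ch_b))

good_a = [10.0, 10.2, 10.1]
good_b = [10.1, 10.0, 10.3]
bad_b  = [10.1, 10.0, 14.0]   # e.g. a stuck or drifting element
print(redundant_ok(good_a, good_b), redundant_ok(good_a, bad_b))  # True False
```

In the two-microcontroller variant, both controllers run this comparison and exchange their results, so a fault in either processing path is also caught.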
5 Application, Integration Strategies and Future Trends
Future complex cross-system applications will require an intelligent sensor platform, which senses all inertial values of the three main axes internally. The sensor platform receives external sensor signals (e.g. wheel speeds and forces, steering angle, engine torque, braking pressure, tire pressure, actuator states) and information from the driver assistance systems (e.g. radar, video, GPS) and
then calculates or estimates the dynamic values (side slip angle, speed over ground in longitudinal and lateral direction, yaw rate, road inclination, road uphill gradient, etc.) and controller values. These applications will need high data volumes, high update rates and safe data transfer. Therefore, time-triggered data networks like TT-CAN or FlexRay will be required [20]. To meet the enhanced requirements regarding functionality, data volume, timing and advanced bus communication interfaces, a high-performance microcontroller with appropriate memory, a floating point signal processing unit and higher computing power will be necessary. This can be realized with a stand-alone sensor cluster or by integrating the inertial sensor elements into a central chassis management ECU. A high synergetic effect is achieved by integrating the sensor elements into the ESP ECU: this provides a good customer benefit regarding cost, since the wiring harness for a separate sensor cluster is saved. Another opportunity with synergies for the inertial sensor elements is integration into the airbag system ECU. The size of the packaged inertial sensor elements will be further reduced, and a multi-axis linear acceleration sensor element is being developed.
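As an example of the estimated dynamic values, the side slip angle can be derived kinematically from lateral acceleration, yaw rate and longitudinal speed via v̇_y = a_y − v_x·ψ̇ and β = arctan(v_y/v_x). The sketch below shows only this textbook relation, not a production observer:

```python
import math

def estimate_side_slip(ay_series, yaw_rate_series, vx_mps, dt):
    """Kinematic side-slip sketch: integrate v_y' = a_y - vx * yaw_rate,
    then beta = atan(v_y / v_x). Real systems blend this integration with
    model-based observers to bound drift; this is only the basic relation."""
    vy = 0.0
    for ay, r in zip(ay_series, yaw_rate_series):
        vy += (ay - vx_mps * r) * dt
    return math.degrees(math.atan2(vy, vx_mps))

# Steady-state cornering: a_y exactly balances vx * yaw_rate -> beta stays 0
n, dt = 1000, 0.001
beta = estimate_side_slip([3.0] * n, [3.0 / 20.0] * n, 20.0, dt)
print(beta)
```

Because open-loop integration of noisy signals drifts, such estimates are exactly where the low bias instability and random walk figures of section 3 matter.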
Tab. 1. Specification of the sensor cluster MM3.x
6 Summary

Some specification data of the sensor cluster MM3.x over lifetime and temperature are summarized in table 1. The data (with the exception of measuring range, nominal sensitivity and cut-off frequency) are statistical min./max. values guaranteed for a lifetime of 17 years or 8000 hours in a temperature range from -40°C to +85°C.
In this paper we presented the function and design of the inertial sensor elements and the modular concept of the sensor cluster MM3.x. We showed the signal performance, accuracies and characteristics based on typical measurement data, explained the safety concept, and gave an outlook on system applications and integration strategies into system ECUs, concluding with future trends.
7 Acknowledgements
The authors would like to thank all their colleagues at Automotive Equipment, Division Automotive Electronics and Division Chassis Systems for the design, development and testing of the complete sensor system, the design and layout of the sensor measuring elements and sensor readout electronic modules, and for the mechanical construction as well as for the design and implementation of hard- and software.
References
[1] M.W. Putty, K. Najafi, "A Micromachined Vibrating Ring Gyroscope", Solid-State Sensor and Actuator Workshop, June 13-16, 1994.
[2] J.D. Johnson, S.Z. Zarabadi, D.R. Sparks, "Surface Micromachined Angular Rate Sensor", SAE Technical Paper Series, 950538.
[3] J. Bernstein, S. Cho, A.T. King, A. Kourepenis, P. Maciel, M. Weinberg, "A Micromachined Comb-Drive Tuning Fork Rate Gyroscope", 0-7803-0957-2/93, 1993 IEEE.
[4] K. Funk, A. Schilp, M. Offenberg, "Surface-micromachining of Resonant Silicon Structures", Transducers '95, 519-News, page 50.
[5] M. Hashimoto, C. Cabuz, K. Minami, M. Esashi, "Silicon Resonant Angular Rate Sensor Using Electromagnetic Excitation and Capacitive Detection", Technical Digest of the 12th Sensor Symposium, 1994.
[6] M. Hashimoto, C. Cabuz, K. Minami, M. Esashi, "Silicon Resonant Angular Rate Sensor Using Electromagnetic Excitation and Capacitive Detection", Technical Digest of the 12th Sensor Symposium, 1994.
[7] Y. Cho, B.M. Kwak, A.P. Pisano, R. Howe, "Slide film damping in laterally driven microstructures", Sensors and Actuators A, 40 (1994).
[8] M. Offenberg, F. Lärmer, B. Elsner, H. Münzel, W. Riethmüller, "Novel Process for a Monolithic Integrated Accelerometer", Transducers '95, 148-C4.
[9] K.H.-L. Chau, S.R. Lewis, Y. Zhao, R.T. Howe, S.F. Bart, R.G. Marcheselli, "An Integrated Force-Balanced Capacitive Accelerometer for Low-G Applications", Transducers '95, 149-C4.
Safety
[10] M. Offenberg, B. Elsner, F. Lärmer, Electrochem. Soc. Fall Meeting 1994, Ext. Abstr. No. 671.
[11] M. Offenberg, H. Münzel, D. Schubert, "Acceleration Sensor in Surface Micromachining for Airbag Applications with High Signal/Noise Ratio", SAE Technical Paper, 960758.
[12] A. Reppich, R. Willig, "Yaw Rate Sensor for Vehicle Dynamics Control Systems", SAE Technical Paper 950537 (1995).
[13] M. Lutz, W. Golderer, J. Gerstenmeier, J. Marek, B. Maihöfer, D. Schubert, "A Precision Yaw Rate Sensor in Silicon Micromachining", SAE Technical Paper 980267 (1998).
[14] A.W. Leissa, Vibration of Shells, NASA SP-288 (1973).
[15] G.B. Warburton, "Vibration of thin cylindrical shells", Journal of Mechanical Engineering Science, Vol. 7, No. 4 (1965).
[16] C.H.J. Fox, D.J.W. Hardie, "Harmonic response of rotating cylindrical shells", Journal of Sound and Vibration, 101 (4) (1985).
[17] J.S. Burdess, "The dynamics of a thin piezoelectric cylinder gyroscope", Proc. I. Mech. E., Vol. 200, No. C4 (1986).
[18] P.W. Loveday, "Analysis and Compensation of Imperfection Effects in Piezoelectric Vibratory Gyroscopes", Diss. Virginia Polytechnic Institute and State University, Blacksburg, Virginia, 1999.
[19] J.A. Geen, "A Path to Low Cost Gyroscopy", Solid-State Sensor and Actuator Workshop, June 8-11, 1998.
[20] D. Sparks et al., "Multi-sensor modules with data bus communication capability", SAE Paper 1999-01-1277, 1999.
[21] R. Willig, M. Moerbe, "New Generation of Sensor Cluster for ESP- and Future Vehicle Stabilizing Systems in Automotive Applications", AMAA 2003.

Johannes Schier, Rainer Willig
Robert Bosch GmbH
Automotive Equipment, Division Chassis Systems, Product Group Sensors
P.O. Box 13 55
74003 Heilbronn
Germany
[email protected]
[email protected]

Keywords: inertial sensor cluster, yaw rate sensor, angular velocity sensor, acceleration sensor, silicon surface micromachining, modular concept, signal accuracies and characteristics, cross-system application, integration, ESP, HHC, ACC, 4w, ROM, RoSe, EAS, steer-by-wire
Powertrain
Multiparameteric Oil Condition Sensor Based on the Tuning Fork Technology for Automotive Applications

A. Buhrdorf, H. Dobrinski, O. Lüdtke, Hella Fahrzeugkomponenten GmbH
J. Bennett, L. Matsiev, M. Uhrich, O. Kolosov, Symyx Technologies Inc.

Abstract

Continuous improvements in engine technology to meet new emission norms have led to increased amounts of petrol, diesel and soot in the engine oil during engine operation. This dilution causes a rapid degradation of the oil's properties. Additionally, the recommended oil change intervals for automotive engines have been extended continuously over the last few decades. In many cars, this interval is calculated from a set of characteristic engine parameters and driving behaviour (oil temperature, engine speed, number of engine ignitions). In order to prevent engine failures resulting from abnormally aged oil or extreme driving conditions, it is necessary to monitor the oil condition continuously. This can only be realized reliably by means of a sensor located directly in the harsh oil environment of the combustion engine. The developed sensor enables the measurement of viscosity, density, permittivity and temperature of the engine oil and therefore provides relevant data for sophisticated oil condition algorithms.
1 Sensor Concept
As a major supplier of oil level sensors, Hella KGaA has investigated several sensor solutions for this measurement task in the past. Existing oil condition sensors are either able to measure only a single oil parameter, such as permittivity or viscosity, or just a product of different parameters [4]. Although the concept of a multi-chip module with a surface acoustic wave sensor was presented previously [1], Hella, in collaboration with Symyx Technologies, Inc., has developed a new oil condition sensor for the simultaneous measurement of four physical and electrical parameters using an oscillating tuning fork. Compared to other mechanical resonators described in the literature, such as the QCM (quartz crystal microbalance) and SAW (surface acoustic wave) devices, with which it is only possible to analyse a viscosity-density product, this new sensor detects dynamic viscosity, specific density as well as permittivity, independently and
unambiguously. The temperature, as an additional fourth parameter, is detected by an ASIC-based sensor. The main focus of the development has been the integration of the micro sensor into an established and highly reliable oil level measurement system, with consideration for assembly concerns. Nevertheless, the functionality as a stand-alone device is also supported, enabling operation as a single oil condition product. The achievement of both targets represents a revolution in the field of oil applications (figure 1). Compared to existing or conventional oil condition sensor principles, the tuning fork sensor device demonstrates the capability to measure four parameters: viscosity, density, permittivity and temperature.
Fig. 1. Photograph of Oil Level Sensor with incorporated Oil Condition Sensor Module

2 Theory
The function of the multiparametric oil condition sensor described above is to transform physical properties of the liquid (i.e. viscosity, density and permittivity) into easily interpretable electrical signals. While the electrical response of practically any electromechanical system (e.g. a surface acoustic wave device, a thickness shear mode resonator, even a mobile phone vibrator) submerged in liquid will depend on the liquid properties, a) the short- and long-term stability of this response, b) its sensitivity to the liquid properties, c) the ability of
the sensor to differentiate changes in a particular liquid property, d) the ability of the sensor to quantify absolute values of the property, as well as e) the mechanical and electrical robustness of the sensor, are the key requirements for producing a winning device for automotive and broad consumer market applications. Additionally, unlike the tuning fork resonator, the long-term stability and performance of QCM and SAW devices are affected by surface fouling that may occur in the engine. While these requirements are often contradictory, a successful compromise was realized in the tuning fork (TF) flexural resonator described in detail elsewhere [5]. Such a resonator is made of a piezoelectric material (crystalline quartz) and has a set of electrodes on its surface. Due to the piezoelectric properties of quartz, an AC voltage applied to the electrodes (usually in the range of a few tens of kHz) creates oscillating mechanical stress in the quartz and, correspondingly, mechanical vibration of the TF sensor. This vibration, in turn, changes the electrical current through the electrodes. The ratio of the driving AC voltage to the resulting current is the electrical impedance of the TF sensor. This impedance is a function of the driving frequency, and it also depends on the mechanical motion of the sensor, with a characteristic resonance shape (figure 2). In a fluid, the vibrating sensor experiences viscous drag (which damps the resonance) and increased mass loading (which shifts the resonance to lower frequencies, figure 2).
Fig. 2. Left: Response of the TF sensor in vacuum. Right: Response of the TF sensor in liquid. Note the shift of the impedance range and resonance frequency compared with vacuum.
It should also be noted that the TF resonator has, in fact, no macroscopically moving parts and can be pictured as a piece of solid crystalline rock placed in the liquid flow, making it a robust solution for demanding applications. In addition, due to the balanced nature of the resonator (two symmetrical tines), it
becomes very insensitive to mechanical vibration of the sensor support, acoustic noise, etc. In order to quantify changes in this curve, a comprehensive electromechanical model of the TF (figure 3) was developed [3,5].
Fig. 3. Electromechanical equivalent circuit of a tuning fork
This model includes values which do not depend on the properties of the medium (Cp, Cs, R0 and L0, representing the unloaded, or "vacuum", resonator parameters) as well as the additional impedance of the flexural resonator Z(ω) (representing the influence of the fluid). According to this model, the fluid effect on the whole electrical circuit can be represented as
where ω is the operating frequency, ρ is the liquid density, η is the liquid dynamic viscosity, and A and B are geometrical factors that under normal measurement conditions depend only on the resonator geometry and the mode of oscillation. First, the vacuum parameters are determined from a single frequency sweep of the free TF sensor (outside the fluid, cf. figure 2) by fitting the measured frequency-dependent impedance to the model of figure 3 with Z(ω) = 0 (zero fluid load), using, for example, the least squares method. Then the sensor is submerged into a fluid with known density and viscosity, and another frequency sweep of the sensor impedance (now in the fluid) is performed. At this stage the fit yields the calibration parameters A and B as well as the modified value of the capacitance Cp (changed due to the difference between the dielectric properties of the fluid and vacuum). The sensor is then fully calibrated and ready for measurements. Finally, during the measurements, frequency sweeps are performed continuously, and the unknown values of the fluid density ρ, viscosity η and dielectric permittivity ε are measured.
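The second calibration stage described above can be sketched numerically. The equivalent-circuit component values, the specific functional form of the fluid load, and the fluid properties below are all illustrative assumptions, not the authors' actual parameters; the point is only to show how A and B fall out of a least-squares fit once the vacuum parameters are known.

```python
# Hedged sketch of the fluid-calibration fit: assumed equivalent-circuit
# values and an assumed fluid-load form Zf = i*w*A*rho + B*(1+i)*sqrt(w*rho*eta/2).
import numpy as np

def fluid_load(w, rho, eta, A, B):
    # Inertial (mass-loading) term plus viscous-drag term (assumed form).
    return 1j * w * A * rho + B * (1 + 1j) * np.sqrt(w * rho * eta / 2)

def impedance(w, Cp, Cs, R0, L0, Zf):
    zm = R0 + 1j * w * L0 + 1 / (1j * w * Cs) + Zf  # motional branch
    zc = 1 / (1j * w * Cp)                          # package capacitance
    return zm * zc / (zm + zc)                      # parallel combination

# "Vacuum" parameters, assumed already fitted from the first sweep.
Cp, Cs, R0, L0 = 5e-12, 2.0e-15, 2.0e4, 1.2e4

# Calibration fluid with known density and viscosity (illustrative).
rho_cal, eta_cal = 850.0, 0.05       # kg/m^3, Pa*s
A_true, B_true = 3e-3, 1e-2          # geometry factors to be recovered

w = 2 * np.pi * np.linspace(31e3, 34e3, 200)
z_meas = impedance(w, Cp, Cs, R0, L0,
                   fluid_load(w, rho_cal, eta_cal, A_true, B_true))

# Strip the known circuit parts to isolate Zf, then solve for A and B,
# which enter Zf linearly, by least squares on the real/imag parts.
zm = 1 / (1 / z_meas - 1j * w * Cp)
zf = zm - (R0 + 1j * w * L0 + 1 / (1j * w * Cs))
col_A = 1j * w * rho_cal
col_B = (1 + 1j) * np.sqrt(w * rho_cal * eta_cal / 2)
M = np.column_stack([np.concatenate([col_A.real, col_A.imag]),
                     np.concatenate([col_B.real, col_B.imag])])
rhs = np.concatenate([zf.real, zf.imag])
(A_fit, B_fit), *_ = np.linalg.lstsq(M, rhs, rcond=None)
```

During operation the same machinery would run with A and B held fixed, solving each continuous sweep for the unknown ρ and η instead.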
3 Multi-Chip Module
The multi-chip module (MCM) combines all functional blocks (sensor element, ASIC and temperature sensor) in an open-cavity SOIC-28 package [2] (fig. 4). The integration of the multi-chip module with the existing oil level sensor now allows the oil condition to be measured.
Fig. 4. Photograph of a multi-chip module consisting of the tuning fork, ASIC and temperature sensor
The integration of all components into a standard IC package offers many possibilities and advantages. The standard devices can be handled with standardized systems in series production. This also offers an easy way to use standard testing and mounting equipment. Furthermore, well-established fabrication processes from the field of microelectronics and microsystem technology are applied, which are well suited to high-volume production. Besides that, these technologies offer very precise fabrication together with high process reproducibility, fulfilling the requirements of the automotive industry. The packaging of the tuning fork and the electronics in one housing minimizes the length of the electrical connections between sensor and electronics. The result is a reduction of electrical noise sources, a precondition for achieving maximum sensitivity and accuracy in the measurement. An integrated on-chip temperature sensor guarantees that a temperature can be assigned to the measured physical parameters, as the viscosity in particular is strongly temperature-dependent. Finally, the available functions of the ASIC can be requested via a communication protocol consisting of a set of function-specific commands. These as well
as the measurement data are transferred via a two-wire interface to a microcontroller-based system.
4 Results
The intensive characterization of this sensor device in the laboratory, as well as in the car environment, has verified the measurement concept and its extraordinary potential and reliability as an oil condition sensor for the automotive market.
4.1 Laboratory Results
Measurement of motor oil samples with a tuning fork sensor gives an indication of the oil condition. When samples of new and used oils are compared, changes in the viscosity measured by the tuning fork sensor provide a measure of the oil condition and thus the useful lifetime remaining.
Fig. 5. Viscosity and dielectric properties vs. temperature for new and used Mobil 1® 5W30 oil
Utilizing a tuning fork sensor and specialized software, we have measured samples of commercial motor oil in a laboratory environment. Samples of fresh and used oil of the same type and brand were measured. Measurements were performed at temperatures between room temperature and approximately 100°C in a temperature-controlled test cell. The sensor calibration procedure
used for these measurements was designed to reliably measure trends in viscosity, density and dielectric properties between used and fresh oil at each given temperature. The temperature trends of density and dielectric properties would have required a different calibration protocol, not deployed during these tests, while the existing protocol did allow an evaluation of the viscosity temperature trend. In this example, Mobil 1® 5W30 synthetic motor oil was measured. A sample of fresh oil and one taken after 3400 miles (5500 km) were characterized using a tuning fork sensor. The results for the Mobil 1® oil, shown in figure 5, indicate a clear difference in viscosity over the entire temperature range. The difference between the viscosity, density and dielectric properties of used and fresh oil is very pronounced over the whole temperature range. Overall, all these parameters tend to increase for aged oil. At the same time, should fuel dilution occur (reducing the oil density), the relative changes in density and viscosity will enable differentiation of this condition from oil aging. Motor oils are also characterized by the viscosity index, which is related to the values of the viscosity at 40°C and 100°C. The results for viscosity and density for these oils are shown in the table below.
Tab. 1. Viscosity and density for various temperatures
In another example, the decrease of the oil viscosity due to added amounts of diesel and petrol is shown in figure 6. With this technology, the influence of oil dilution with certain amounts of diesel or petrol can be detected.
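The sign pattern described above (aging tends to raise viscosity, density and permittivity together, while fuel dilution lowers viscosity and density) suggests a simple discrimination rule. The sketch below is an invented illustration of that logic with made-up thresholds; it is not the sensor's actual oil condition algorithm.

```python
# Hedged illustration of aging-vs-dilution discrimination; the threshold
# `tol` and the input values are invented for demonstration only.
def classify_oil_trend(d_visc, d_dens, d_perm, tol=0.01):
    """Classify fractional changes relative to a fresh-oil baseline."""
    if d_visc > tol and d_dens > tol and d_perm > tol:
        return "aging"            # all three parameters rising together
    if d_visc < -tol and d_dens < -tol:
        return "fuel dilution"    # viscosity and density both falling
    return "inconclusive"

print(classify_oil_trend(0.15, 0.02, 0.05))    # aging-like pattern
print(classify_oil_trend(-0.20, -0.03, 0.0))   # dilution-like pattern
```

In practice such a rule would sit downstream of the temperature compensation, since all three raw parameters vary strongly with temperature.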
Fig. 6. Left: Influence of diesel content in engine oil (Mobil 0W30) on the dynamic viscosity. Right: Influence of petrol content in engine oil (Mobil 0W30) on the dynamic viscosity.
Fig. 7. Measurements in an Audi FSI 2.0 engine
4.2 Car Results
A prototype unit was installed in a car in order to evaluate the system in a real operating environment. The system was mounted in the oil pan in the position of the current oil level sensor. The measurements with the prototype unit were performed at different temperatures and show the expected decrease of the dynamic viscosity with increasing temperature. An influence of the driving condition (e.g. engine speed) could not be observed.
Conclusions and Outlook

A highly reliable oil condition sensor was developed using the tuning fork technology already introduced and established in the field of chemical analysis. This micro-system is realized in a low-cost MCM package. It is able to work as a stand-alone oil condition sensor providing data for viscosity, density, permittivity and temperature, and can also be modularly integrated into various designs of automotive oil level sensors. The achieved product concept guarantees an upgrade path for existing oil level sensors in the automotive market without requiring a new mechanical interface to the oil pan.
References
[1] D. Wüllner, H. Müller, O. Lüdtke, H. Dobrinski, T. Eggers, "Multi-function Microsensor for Oil Condition Monitoring Systems", AMAA 2003 Yearbook.
[2] I. Van Dommelen, "Plastic Packaging for Various Sensor Applications in the Automotive Industry", AMAA 2002 Yearbook, pp. 289-296.
[3] L.F. Matsiev, J.W. Bennett, E.W. McFarland, "Application of Low Frequency Mechanical Resonators to Liquid Property Measurements", Proceedings of the 1998 IEEE Ultrasonics Symposium; see also patents EP 0943091 B1, US6336353, US6494079 and additional patents pending.
[4] B. Jacoby et al., "A Multifunctional Oil Condition Sensor", AMAA 2001 Yearbook.
[5] L.F. Matsiev, "Application of Flexural Mechanical Resonators to High Throughput Liquid Characterization", Proceedings of the 2000 IEEE Ultrasonics Symposium, Vol. 1, pp. 427-434.
Andreas Buhrdorf, H. Dobrinski, O. Lüdtke
Hella Fahrzeugkomponenten GmbH
Dortmunder Straße 5, 28199 Bremen
Germany
[email protected]

J. Bennett, L. Matsiev, Mark Uhrich, O. Kolosov
Symyx Technologies, Inc.
3100 Central Expressway, Santa Clara, CA 95051
USA
[email protected]

Keywords: automotive MEMS, micro system technology, oil condition, oil level sensor, tuning fork
Automotive Pressure Sensors Based on New Piezoresistive Sense Mechanism

C. Ernsberger, CTS Automotive

Abstract

To date, nearly all automotive pressure sensors share a common design: a mechanical diaphragm that either converts pressure into in-plane strain (piezoresistive sensors) or acts as one side of a variable capacitor (capacitive designs). In both cases, the diaphragm is the limiting design element for accuracy and reliability. CTS has developed a pressure sensor that converts pressure directly into a resistance change without any kind of intervening diaphragm. The sensor is based on thick film resistors that have been modified to respond directly to changes in pressure. In this paper, we share results for sensors designed for common rail diesel fuel pressure measurement. These sensors must withstand over 3000bar and have total errors of less than 1% over the full temperature and pressure range. Data is included on performance over temperature, thermal shock, corrosive fuel exposure, burst pressure, and pressure fatigue cycle testing.
1 Introduction
Nearly all automotive pressure sensors share a common design element. In capacitive type sensors, a thin, flexible membrane or diaphragm forms one side of a variable capacitor. In the more common piezoresistive sensors, strain sensitive resistive elements are placed on the diaphragm. The sense element is typically a diffused silicon resistor, a metal foil or thin film resistor, or a "cermet" thick film resistor. When the sensor is exposed to pressure, the diaphragm deflects, and the pressure is converted into an in-plane strain, which is measured by the strain sensitive resistors. The resistors are arranged in a Wheatstone bridge configuration, and the bridge voltage is proportional to pressure. Depending on the application, the diaphragm may be made of steel, ceramic, or silicon. Automotive pressure sensors spanning the range from less than 1bar to nearly 2000bar are currently in production, with corresponding diaphragm thicknesses ranging from less than 20 microns to nearly 2 mm.
While versatile, diaphragm-based sensors have several limitations. The diaphragm must be thick enough to withstand overpressure requirements, yet thin enough to produce a usable output. In practice, the maximum stress that can be allowed in the diaphragm is a small fraction of the diaphragm material's offset yield stress. Even at very small fractions of this engineering yield stress, permanent, plastic deformations occur [1]. Thus, some hysteresis will always be contributed by a mechanical diaphragm. Repeated pressure applications causing higher stress levels can result in fatigue failure of the diaphragm, resulting in catastrophic failure of the sensor. These facts are reflected in the current pressure sensor specifications known as "proof" and "burst" pressure. Proof pressure is the maximum pressure the sensor can be exposed to before permanent changes in output are observed. Burst pressure is the maximum pressure the sensor can be exposed to without leaking. The accuracy of diaphragm-based sensors is also limited, with small changes in the placement of the bridge resistors on the diaphragm affecting linearity and sensitivity.
Fig. 1. Common rail fuel system
CTS Automotive, a global high-volume supplier of automotive sensors, has recently developed pressure sensors based on a new sensing technique. The sensors are particularly well suited for high pressure measurements that require high accuracy. The diesel common rail pressure sensor is an excellent example of such a sensor. Next generation common rail sensors must measure fuel pressures of 2000bar with accuracies approaching 1% over the full automotive temperature range. A scheme of the common rail fuel system and the location of the sensor is shown in figure 1.
2 CTS Pressure Sensor

2.1 Theory
As mentioned earlier, most automotive pressure sensors are based on strain sensitive resistors arranged in a Wheatstone bridge configuration. The figure of merit defining strain sensitivity is the gauge factor (GF), defined as

GF = (ΔR/R) / ε (1)

where ε is the strain induced in the resistive element. Choices for resistive elements include thin metal films, diffused silicon resistors, and thick film materials consisting of dispersed conductive phases in a glass ceramic matrix. The GF of silicon resistors is in the range of 100-200, while metal foil resistors have a GF of only 1-2. Thick film resistors have a GF that varies from essentially 1 to values as high as 20. Silicon based sensors are popular because of this advantage in sensitivity. On the other hand, the stability of silicon gauges is not as good as that of the lower sensitivity metal foil gauges. Thick film gauges have the desirable combination of good sensitivity, stability, and low temperature performance [2-16].
where G is the piezoresistive coefficient, the unit change in resistivity per unit change in strain in the x, y, and z directions. The equation has two components, the piezoresistive component (3)
and the geometric component
(4)
In the case of thick film and silicon piezoresistors, the piezoresistive component is by far the larger one. Conventional strain gauges are arranged to measure the longitudinal strain in the element. In this case, the gauge factor is designated (5)
where ν is the Poisson's ratio. Since the first term predominates in high gauge factor materials like silicon and certain thick film materials, and ν is approximately 0.25 for silicon and thick film materials, (6)
Equation 2 suggests that strain gauge elements might respond directly to hydrostatic pressure, as indicated in figure 2. In this case the hydrostatic gauge factor is [17] (7)
Again, since the first term dominates, it can be seen that the same change in resistance can be obtained with approximately half the strain in a hydrostatic versus a longitudinal strain gauge element, enhancing durability. In addition, for a strain gauge element on the low pressure side of a diaphragm the induced strains are tensile, and materials are much stronger in compression. In the hydrostatic case, all strains are compressive. We have evaluated the hydrostatic pressure sensitivity of a variety of thick film resistor formulations, as well as diffused silicon resistors. Silicon strain gauges showed virtually no response to hydrostatic pressures in the range of a few hundred bar, while certain thick film formulations showed changes in resistance of up to 10% over a pressure change of 2000bar. These thick film resistors show a precise relationship between resistance and pressure, free from any nonlinear or hysteresis effects of an intervening diaphragm. They can be
placed on a rigid substrate that does not bend during application of pressure. The new CTS pressure sensors are based on a standard Wheatstone bridge configuration with the sense resistors exposed to pressure and the reference resistors at ambient pressure. Since the linear relationship between resistance and pressure is valid through at least 3400bar (the highest pressure we have tested to date), the new sensors are not limited by a "proof" pressure, unlike conventional sensors. The new sensors have additional benefits, as outlined below.
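A quick back-of-envelope check of the "approximately half the strain" claim above, under assumed values: the piezoresistive coefficient Γ is taken as 15 (a high gauge factor thick film, consistent with the range quoted in the text), ν = 0.25 is from the text, and the factor-of-two hydrostatic response is used as the paper states it rather than derived.

```python
# Illustrative arithmetic only; gamma is an assumed value, and the
# hydrostatic factor of two restates the paper's claim, not a derivation.
gamma = 15.0   # piezoresistive coefficient (assumed, thick-film-like)
nu = 0.25      # Poisson's ratio for silicon / thick films (from the text)

# Longitudinal gauge: dR/R = (gamma + 1 + 2*nu) * eps
gf_long = gamma + 1 + 2 * nu        # = 16.5

# Hydrostatic loading strains the element along all axes, so the
# piezoresistive term acts roughly twice as strongly per unit strain.
gf_hydro = 2 * gamma                # = 30.0

eps_long = 1e-4                     # strain needed longitudinally
dR_R = gf_long * eps_long           # target resistance change
eps_hydro = dR_R / gf_hydro         # strain needed hydrostatically
print(eps_hydro / eps_long)         # ratio ~ 0.55: roughly half the strain
```

The dominance of the Γ term is what makes the ratio land near one half; for a low gauge factor metal foil element the geometric term would spoil this advantage.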
Fig. 2. Direct sensing principle

2.2 Pressure Sensor Construction Benchmark
There are several figures of merit that describe sensors based on Wheatstone bridge configurations. The key ones are:
- Sensitivity or span
- Linearity
- Hysteresis
- Offset
- Temperature dependence of both sensitivity and offset
Offset (zero pressure bridge output) is directly related to Temperature Coefficient of Resistance (TCR) tracking of the bridge resistors. The figures of merit listed above determine the ultimate accuracy, signal to noise ratio, calibration, and temperature compensation required. We characterized the unamplified and uncompensated bridge outputs of common rail pressure sensors currently used in the market and compared these to CTS sensors based on pressure sensitive resistors. The results appear in table 1.
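The bridge behaviour behind these figures of merit can be sketched with a toy model. Resistor values, supply voltage, the sign of the pressure response, and the TCR mismatch below are all assumed for illustration; only the "10% resistance change over 2000bar" magnitude is taken from the text.

```python
# Toy Wheatstone bridge: two opposite arms are pressure-sensitive,
# two are reference resistors at ambient pressure. All values assumed.
def bridge_output(v_supply, r_sense, r_ref):
    # Output is the difference between the two divider taps.
    v_left = v_supply * r_sense / (r_sense + r_ref)
    v_right = v_supply * r_ref / (r_ref + r_sense)
    return v_left - v_right

R0 = 10_000.0          # nominal resistance, ohms (assumed)
k = 0.10 / 2000.0      # fractional dR/R per bar: 10% over 2000bar (from text)

def sense_r(p_bar):
    # Sense-resistor value under hydrostatic pressure (sign assumed positive).
    return R0 * (1 + k * p_bar)

v_fs = bridge_output(5.0, sense_r(2000.0), R0)   # full-scale output
print(round(v_fs, 3))                            # -> 0.238 (volts)

# Offset from imperfect TCR tracking: a 10 ppm/K mismatch over a 100 K
# temperature excursion shifts the zero-pressure output (assumed numbers).
mismatch = 10e-6 * 100
v_off = bridge_output(5.0, R0 * (1 + mismatch), R0)
print(round(v_off * 1000, 3), "mV")              # -> 2.499 mV
```

Even this crude model shows why TCR tracking dominates the offset figure of merit: a parts-per-million mismatch is already visible at millivolt level against a few-hundred-millivolt span.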
Table 1 clearly shows the advantage of using the new sensing approach to manufacture high accuracy pressure sensors with minimal temperature compensation and calibration costs. The balance of this paper discusses design and performance of diesel common rail pressure sensors based on this improved sensing technology.
Tab. 1. Pressure sensor type benchmark comparison
2.3 Direct Pressure Sensor Design
An exploded view of a CTS common rail pressure sensor is shown in figure 3. The CTS design places the reference and sense resistors on a ceramic pin. The thick film resistors and conductor traces are printed onto all four sides of the ceramic pin using automated equipment developed by CTS. The resistor and conductor traces are passivated with a thick film dielectric material, and then the center section of the pin is metallized to accept a standard braze material. The ceramic pin is brazed into the stainless steel header in a controlled atmosphere brazing operation below thick film firing temperatures. The brazed assembly is then heat treated to achieve the physical properties necessary to withstand pressure fatigue cycle testing. Soldered connections are made between the ceramic pin and the circuit board containing the sensor electronics. A picture of this assembly is shown in the lower right hand corner of figure 3. This design has several advantages over the others considered. The ceramic pin serves triple duty: it is the seal, the sense resistor substrate, and the reference resistor substrate. This eliminates both the die attach materials and the high pressure connections and welds required in other, separate-substrate designs. As a result, reliability is improved, and there are no additional design elements that can contribute to linearity and hysteresis errors. In addition, the cross sectional area of the seal is kept to a minimum, reducing shear stress on the seal when the sensor is pressurized.
Automotive Pressure Sensors Based on New Piezoresistive Sense Mechanism
2.4 Temperature Sensor Integration
Currently, common rail fuel injection systems employ separate pressure and temperature sensors. The temperature sensor is located on the low pressure side of the system, while the pressure sensor is located on the high pressure rail. There are two primary drivers to integrate the temperature and pressure sensors. First, packaging cost represents a substantial portion of both pressure and temperature sensor costs; by combining both sensors in one package, cost savings can be passed along to the customer. Secondly, by moving the temperature sensor to the fuel rail, a much more meaningful measurement of the fuel temperature can be obtained. We have solved this integration issue by combining a thick film thermistor onto the same ceramic pin that contains the reference and pressure sensitive resistors. This design is patent pending.
Fig. 3. CTS Automotive pressure sensor design
3 Results
A partial list of sensor requirements taken from a composite of common rail fuel injection system requirements appears in table 2. This section of the paper will report results against these specifications.
Tab. 2. Partial list of sensor requirements
3.1 Operating, Proof, and Burst Pressure
As stated earlier, the output of our sensor is linear through at least 3400bar, the highest pressure tested. The concept of a proof pressure does not apply to our sensor, and operating pressure is limited only by the burst pressure limit. We report burst pressure results in combination with thermal shock below.
3.2 Accuracy over Temperature and Pressure
Sensors were built, calibrated, and temperature compensated using a simple two-temperature span calibration procedure. The sensors were evaluated over temperature and pressure against an instrument grade sensor with a stated accuracy of ±0.2%. The results appear in table 3, which displays the CTS results and future system requirements. Total errors of the CTS sensors were well below current specifications and meet future requirements as well.
Tab. 3. Allowable error (black: system requirements, red: CTS performance)
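A minimal sketch of what a two-temperature span calibration of this kind might look like (all raw readings below are invented): the raw bridge output is recorded at zero and full-scale pressure at two temperatures, and gain and offset are then interpolated linearly in temperature for any reading in between.

```python
# Hedged sketch of a two-temperature span/offset calibration; the raw
# voltages and temperatures are invented, full scale assumed 2000 bar.
def make_cal(points):
    # points: {temperature_C: (raw_at_0bar, raw_at_fullscale)}
    (t1, (z1, f1)), (t2, (z2, f2)) = sorted(points.items())

    def pressure(raw, temp):
        a = (temp - t1) / (t2 - t1)                  # interpolation weight
        zero = z1 + a * (z2 - z1)                    # offset at this temperature
        span = (f1 - z1) + a * ((f2 - z2) - (f1 - z1))
        return 2000.0 * (raw - zero) / span          # bar

    return pressure

# Calibration at -40 C and +125 C (illustrative raw voltages).
cal = make_cal({-40.0: (0.10, 4.10), 125.0: (0.14, 4.30)})
print(round(cal(2.12, 25.0), 1))   # a mid-range reading converted to bar
```

Only two temperature points are needed precisely because the sense mechanism is linear in pressure; a diaphragm sensor with significant nonlinearity would typically need more calibration points.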
3.3 Burst Pressure and Thermal Shock
Burst pressure and thermal shock tests are specified separately in the system requirements. We combined these tests by applying a burst pressure of 3000bar and checking for resistance changes or leaks after every 100 thermal shock cycles. No leaks or resistance changes occurred after 2000 thermal shocks, at which point the test was terminated.
3.4 Corrosion
Since the sense resistors are exposed to the pressurized media, an obvious requirement is that they do not drift due to media or pressure exposure. System requirements call out several flowing corrosive media tests. We identified the 500 hour exposure at 100°C to diesel fuel contaminated with 5% salt water as the most aggressive. Circuitized ceramic pins were exposed to this contaminated diesel fuel at 100°C for 500 hours. No change in resistance was measured after exposure.
3.5 Extended Time at Pressure and Temperature
Sensors were attached to a high pressure test manifold and placed in a temperature controlled oven. Pressure was raised to 1500bar and temperature to 125°C. No change in resistance could be measured after the specified 200 hour time period.
3.6 Pressure Cycle
Customers specify a high frequency pressure pulsation test of 10 million cycles of pressure changes from 200 to 1800 or 2000bar. The test is run at a rate of 10 Hz and is designed to evaluate the sensor's resistance to fatigue damage. Several cells of a designed experiment capturing several design elements were put on this test. The best cell had no failures at 3 million cycles, when the test was stopped. At the time of this writing, a larger group of parts with the optimum cell design is being built for further pressure cycle testing.
3.7 Integrated Temperature Sensor
Our integrated temperature sensor design requires a thick film thermistor material that is stable and whose resistance varies with temperature but not
pressure. We identified a negative temperature coefficient (NTC) thick film thermistor material with the required stability. Thermistors were printed and fired onto ceramic pins and evaluated over temperature and pressure. As shown in figure 4, resistance is a function of temperature only, within the experimental error of the test.
Fig. 4. Thick film thermistor plot
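For reference, NTC thermistors like the one described are commonly modelled with the Beta equation relating resistance to absolute temperature. The sketch below uses assumed material constants (R25, β), not the paper's values, and shows the forward conversion and its inverse as a microcontroller would apply it.

```python
# Beta-model NTC conversion; R25 and beta are assumed illustrative
# constants, not the thermistor material characterized in the paper.
import math

def ntc_resistance(T_c, R25=10_000.0, beta=3950.0):
    """Resistance of an NTC thermistor at temperature T_c (Celsius)."""
    T = T_c + 273.15
    T25 = 298.15
    return R25 * math.exp(beta * (1.0 / T - 1.0 / T25))

def ntc_temperature(R, R25=10_000.0, beta=3950.0):
    """Invert the Beta model to recover the temperature in Celsius."""
    T25 = 298.15
    inv_T = 1.0 / T25 + math.log(R / R25) / beta
    return 1.0 / inv_T - 273.15

r = ntc_resistance(100.0)
print(round(r))                       # resistance drops steeply with T
print(round(ntc_temperature(r), 2))   # round-trips back to 100.0
```

The steep, monotonic resistance-temperature curve is what makes the "temperature only" result of figure 4 easy to verify: any pressure cross-sensitivity would show up as an apparent temperature error.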
3.8 Conclusions
Pressure sensors based on direct sensing thick film resistors have been demonstrated to have several advantages over conventional diaphragm based pressure sensors. These advantages include:
- Elimination of proof pressure limits
- Elimination of diaphragm fatigue concerns
- Higher accuracy with lower calibration costs
- Ease of temperature sensor integration
The sensors are ideally suited for high pressure, high reliability applications requiring high accuracy and low cost. CTS is currently engaged in design validation of direct sensing diesel common rail pressure sensors.
Automotive Pressure Sensors Based on New Piezoresistive Sense Mechanism
Craig Ernsberger
905 West Blvd. North
Elkhart, IN 46514
USA
[email protected]
Multilayer Ceramic Amperometric (MCA) NOx-Sensors Using Titration Principle B. Cramer, B. Schumann, H. Schichlein, S. Thiemann-Handler, T. Ochs, Robert Bosch GmbH Abstract The removal of nitrogen oxides from the exhaust emissions of combustion processes containing substantial levels of oxygen is a challenging venture, especially in lean burn spark ignition and diesel engines for automobiles, trains and ships, and in coal/oil-fired power stations. We have developed multilayer ceramic amperometric (MCA) NOx-sensors made of an oxygen ion conducting ceramic such as yttrium stabilised zirconia, using a new operation mode, the titration mode, which measures the consumption of hydrogen reducing NOx. The operation temperature of around 800°C is controlled by an online impedance measurement. The integrated platinum heater is driven by pulse width modulation. MCA NOx-sensors performed well under static and dynamic conditions. The titration mode was compared to a standard reduction mode in which the nitrogen oxides are directly reduced electrochemically. Both operation modes show O2-cross sensitivity.
1
Introduction
In order to reach low emission levels, it is essential to have a sensor that can detect NOx at concentrations below 100ppm, even in the presence of oxygen concentrations up to 20%, with high accuracy and low temperature dependence. A NOx-sensor combined with a suitable catalyst and a smart engine control will make it possible to meet future exhaust gas legislation, especially in diesel cars with their higher NOx emissions, which operate far away from the stoichiometric point where a three way catalyst is applicable. The market for diesel cars in Europe will probably overtake the gasoline car market in the near future. Thus, diesel car emissions will come into focus.
2
Sensor Element
2.1
Functional Principle
The titration principle of the MCA NOx-sensor is based on the well known regeneration reaction in NOx storage catalysts, where reducing agents such as hydrogen and carbon monoxide reduce the nitrogen oxides. Oxygen is pumped out of an inner chamber to maintain a constant electrochemical reduction of water and carbon dioxide to hydrogen and carbon monoxide as reducing agents. The re-oxidation of these components to water and carbon dioxide at another location in the sensor, by pumping in oxygen, is controlled by the level of nitrogen oxides present in the chamber. Figure 1 shows the cross section of a MCA NOx-sensor. Within a body of yttrium stabilized zirconia (YSZ), three adjacent chambers and a reference chamber are positioned, containing electrodes of noble metal alloys such as platinum gold or platinum rhodium, or pure platinum with YSZ. The reference chamber is connected to ambient air. The first chamber is separated from the exhaust gas by a porous diffusion barrier (DB). The inner pump electrode (IPE) together with the outer pump electrode (OPE) forms an electrochemical pumping cell which allows oxygen to be pumped into or out of the first chamber according to the applied voltage. The voltage is increased by a controller as long as the voltage of the Nernst cell, formed by the Nernst electrode (NE) and the reference electrode (RE), is below a given level, e.g. 500mV. Thus, a very small oxygen partial pressure of 10⁻⁷ bar is established in the first chamber. The nitrogen oxides pass the chamber nearly unaffected, due to the non-catalytic behaviour of the platinum gold electrode. In the third chamber, water and carbon dioxide are reduced to hydrogen and carbon monoxide at the titration gas electrode (TGE). These diffuse into the second chamber, where hydrogen and carbon monoxide are re-oxidised. This re-oxidation is driven electrochemically by the potential on the NOx-measuring electrode (NOE).
In the absence of nitrogen oxides the oxygen ion current is only determined by this potential. If nitrogen oxides are present, they re-oxidise the reduction gases and the oxygen ion current decreases. The following reactions can take place:
Fig. 1. Cross section of a MCA NOx-sensor
(1)
(2)
Therefore, the current measured at the NOx measuring electrode decreases linearly with the NOx concentration. The pumping current of the IPE is a measure of the oxygen concentration. It is also possible to read out the Nernst voltage between OPE and RE to obtain a very precise signal near the stoichiometric point (λ=1).
2.2
Design
The sensor is realised in thick film screen printing technology. Based on CAD layouts, masks and screens were produced. Five zirconia sheets were printed with noble metal alloy pastes for the heater and the electrodes. Alumina containing pastes were used for electrical insulation. The chambers were punched out of the sheets, and the diffusion barrier was printed with a zirconia paste filled with an inorganic pore former. The printed sheets were laminated at elevated temperature and pressure before singulation and sintering between 1200°C and 1500°C, during which the pore former and organic components like the binder and solvents were burned out. Overall, the manufacturing process consists of approximately 70 steps. Figure 2a shows a sensor element before sintering: the outer pump electrode is surrounded by a meander for additional temperature measurement and compensation of the voltage drop in the supply wires. The electrodes are preconditioned after sintering to achieve sufficient O2-pumping capability.
The two black lines at the front are the exhaust gas inlets. The dimensions of the element are 2mm x 4mm x 60mm, fitting into a standard UEGO housing with an 18mm thread (see figure 2b).
Fig. 2. a) Sensor head before sintering (left) b) Sensor in operation (right)

3
Electronics
For its operation, the MCA NOx-sensor requires an electronic circuit interfacing the sensor element with the control unit of the engine. The potentiostatic circuit controls the voltages of the inner electrodes versus the reference electrode (RE) by applying a pump current to the outer pump electrode. The potentials of the NOx-measuring electrode and the titration gas electrode are held constant with regard to the reference electrode. The circuit transforms the electrode currents, which are in the microampere range for the NOx measuring electrode and the titration gas electrode, into voltages for display and signal processing. By applying an additional AC current at a fixed frequency to the Nernst cell and evaluating the AC voltage response, one obtains the impedance of the electrolyte, which is a logarithmic measure of the inverse temperature. The internal heater is driven by pulse width modulation (PWM) at a given voltage (12V). The duty cycle of the PWM is varied by a controller between 10% and 90%. Thus, the temperature of the sensor is kept constant and independent of temperature, composition and velocity variations of the exhaust gas. Figure 3 shows the block diagram of the electronic circuit. A programmable event controller allows the correct timing of the different functions. The measurement of the electrode currents and the impedance is synchronised with the heater pulse in order to ensure minimum interference between the signals and heater operation.
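The temperature control loop described above can be sketched as follows: the electrolyte impedance is mapped to a temperature through an Arrhenius-type relation (ln Z proportional to 1/T, as stated in the text), and a controller adjusts the PWM duty cycle within the 10%..90% range. The calibration constants and PI gains below are illustrative assumptions, not Bosch's implementation:

```python
import math

def impedance_to_temp_c(z_ohm, z_ref_ohm=250.0, t_ref_k=1073.15,
                        e_over_k=12000.0):
    """Arrhenius-type electrolyte model: ln(Z/Z_ref) = (E/k)*(1/T - 1/T_ref),
    i.e. the logarithm of the impedance is proportional to the inverse
    temperature. All constants are illustrative, not the sensor's."""
    inv_t = 1.0 / t_ref_k + math.log(z_ohm / z_ref_ohm) / e_over_k
    return 1.0 / inv_t - 273.15

class HeaterPI:
    """Minimal PI controller mapping the temperature error onto a PWM
    duty cycle, clamped to the 10%..90% range given in the text."""
    def __init__(self, kp=0.004, ki=0.0005, setpoint_c=800.0):
        self.kp, self.ki, self.setpoint_c = kp, ki, setpoint_c
        self.integral = 0.0

    def update(self, temp_c, dt_s=0.1):
        err = self.setpoint_c - temp_c
        self.integral += err * dt_s
        duty = 0.5 + self.kp * err + self.ki * self.integral
        return min(0.9, max(0.1, duty))  # clamp to 10%..90%
```

With these constants, the reference impedance corresponds to the 800°C setpoint, and a cold or overheated element drives the duty cycle to its respective limit.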
Fig. 3. Block diagram of electronic circuit

4
High Purity Material
Due to the enormous ratio between the heater supply currents (several amperes) and the signal currents (µA), it is important to have an excellent heater insulation, even at 800°C. Otherwise, the signal currents of the NOx measuring electrode would be completely overwhelmed by cross-talk from the strong heater currents.
Fig. 4. Heater insulation resistance (logarithmic)
In thick film technology, insulation is a big challenge: since the alumina insulation layer has a thickness of only 30µm, the material must have an extremely high insulation resistance. The insulation resistance of alumina with no alkali impurities can be enhanced further by doping with heavy alkaline earths. The standard sheets used for UEGO sensors contain 1000ppm titania, which also diffuses into the insulation material. Therefore, the standard sheet material was replaced by a high purity material. These high purity materials and the corresponding manufacturing processes had to be improved to reach the same mechanical stability and density after sintering, which could not be taken for granted at the start of the development. A comparison between the different insulation systems is shown in figure 4. The column height represents the insulation resistance Ri,DC on a logarithmic scale. It clearly shows the distinct influence of high purity materials on heater insulation.
5
Sensor Operation
MCA NOx-sensor performance has been tested under static and dynamic conditions in the standard reduction mode as well as in the advanced titration mode: first in synthetic gas in a furnace at a flow rate of 200mm3/min, subsequently in a propane gas burner with a flow rate of 5m3/h and an engine-like gas composition at 200°C to 300°C. The engine testing was performed in the exhaust pipe of a VW 2.0 litre SULEV (Super Ultra-Low Emission Vehicle) engine at the Bosch test bench at Schwieberdingen near Stuttgart. The NOx-sensor was located downstream of a three way catalyst and in parallel with a gas analyser. The thermo-mechanical stability of the element still has to be tested in long term test bench operation.
Fig. 5. Signal current versus NO-dosage a) O2-cross sensitivity (left) b) mathematically compensated (right)
6
Results
6.1
Linearity and Sensitivity
Both operation modes show an O2 cross-sensitivity (0-10% O2) of around 100ppm NOx equivalent in the raw data, which is below the target value. Figure 5 shows the signal current versus the NO dosage in synthetic gases. The NO concentration was increased stepwise (100ppm steps) and the oxygen concentration was varied from 0% to 18% at a sensor temperature of 840°C. The carrier gas was humid nitrogen. The dependence is nearly linear with a sensitivity of about 4nA/ppm. It has a parabolic oxygen dependence, which can be compensated as shown in figure 5b.
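A linear calibration with parabolic O2 compensation of the kind described above might be sketched as follows. Only the ~4nA/ppm sensitivity comes from the text; the zero-NOx current and the O2 coefficients are hypothetical calibration constants:

```python
def nox_ppm(i_meas_na, o2_percent, i0_na=2000.0, sens_na_per_ppm=4.0,
            a1=5.0, a2=-0.2):
    """Invert an assumed sensor model
        I = I0 - sens * c_NOx + a1 * O2 + a2 * O2**2
    for the NOx concentration in ppm. Only the ~4 nA/ppm sensitivity is
    taken from the text; I0 and the parabolic O2 coefficients a1, a2 are
    hypothetical calibration constants."""
    i_no_o2 = i_meas_na - (a1 * o2_percent + a2 * o2_percent ** 2)
    return (i0_na - i_no_o2) / sens_na_per_ppm
```

With these constants a 400nA drop of the (O2-corrected) current corresponds to 100ppm NOx, mirroring the linear dependence in figure 5.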
6.2
Lean and Rich Conditions
The results of the gas burner measurements are shown in figure 6. Even in the rich region a NOx-signal could be obtained. The NOx stimulus has an amplitude of 250ppm with an offset of 250ppm; the frequency is 50mHz. The oxygen concentration varies over time from 10% oxygen excess through the stoichiometric point to 5.5% and 8% oxygen-equivalent propane excess, respectively, and back to 10% oxygen excess. The current IO2 of the IPE shows the oxygen concentration.
Fig. 6. Periodic NOx excitation in propane gas burner under rich and lean conditions
6.3
Dynamic Measurements
The comparison of the sensor signal with the signal of a gas analyser (Horiba NOx chemiluminescence detector) is presented in figure 7. The raw data can be seen in the first 20 seconds of the plot. After applying a filter (Kaiser window, Fp=2.5Hz, Fs=5Hz), the sensor signal and the gas analyser signal agree nearly identically. To facilitate the comparison of the two signals, the analyser's phase shift was corrected. The oscillation of the NO concentration in the SULEV engine is a result of oscillations of the combustion inside the cylinders of the engine.
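The Kaiser-window filtering mentioned above can be illustrated with a windowed-sinc FIR design. Only the edge frequencies Fp = 2.5Hz and Fs = 5Hz come from the text; the sampling rate (100Hz), tap count and beta value are illustrative assumptions:

```python
import numpy as np

def kaiser_lowpass(num_taps=201, f_pass_hz=2.5, f_stop_hz=5.0,
                   fs_hz=100.0, beta=6.0):
    """Windowed-sinc FIR low-pass using a Kaiser window with the edge
    frequencies quoted in the text (Fp = 2.5 Hz, Fs = 5 Hz). The
    sampling rate, tap count and beta are illustrative assumptions."""
    fc = 0.5 * (f_pass_hz + f_stop_hz) / fs_hz     # normalised cutoff
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h = 2.0 * fc * np.sinc(2.0 * fc * n)           # ideal low-pass taps
    h *= np.kaiser(num_taps, beta)                 # apply Kaiser window
    return h / h.sum()                             # unity gain at DC
```

Applied with `np.convolve(signal, kaiser_lowpass(), mode="same")`, such a filter passes the slow NO oscillations while suppressing components above the stopband edge.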
Fig. 7. Sensor signal versus analyser

7
Conclusions
A new MCA NOx-sensor has been built and a novel operation mode was tested. Instead of using the direct electrochemical reduction of NOx, the sensor measures the “consumption” of the reducing agents by NOx, which is accompanied by a decrease of the oxygen ion current. This mode is not influenced by a mutual contamination of the electrodes during the sintering process.
8
Acknowledgement
We would like to thank our colleagues in the EU's 5th framework project SMART who helped us with discussions, electrochemical characterisation of the materials, material analysis and spectroscopy, and sensor testing on test benches.
Berndt Cramer, B. Schumann, H. Schichlein, S. Thiemann-Handler, T. Ochs
Robert Bosch GmbH, FV/FLC
Postfach 10 60 50
70049 Stuttgart
Germany
[email protected] Keywords:
exhaust gas sensor, amperometric, NOx, titration, solid electrolyte, oxygen partial pressure, platinum, gold, hydrogen
Comfort and HMI
Infrared Carbon Dioxide Sensor and its Applications in Automotive Air-Conditioning Systems M. Arndt, M. Sauer, Robert Bosch GmbH Abstract In this paper, we present the first carbon dioxide sensor designed for automotive applications. The sensor is based on the spectroscopic measurement principle. It includes a new robust micromachined infrared gas detector and a corresponding, newly developed ASIC. First application studies show its suitability for automatic vehicle air-management systems and for leak detection in R744 air conditioning systems.
1
Introduction
Today, gas sensing in passenger car applications is mainly restricted to two areas: exhaust monitoring in vehicles with catalytic converters using chemical sensors such as lambda probes, and determination of exterior air quality for the control of the outside-air intake in upper and middle class vehicles using metal-oxide gas sensors [1, 2]. In the future, new applications like natural gas sensing or hydrogen sensing in alternative fuel vehicles will arise [3, 4]. Further, the optimization and enhancement of mobile air-conditioning systems will generate opportunities for gas sensors, and in particular for carbon dioxide sensors. Examples are air-management systems which balance outside air and re-circulated air, or sustainable mobile air-conditioners using R744. In such systems, carbon dioxide will be used as a target gas for ventilation control and leak detection. As carbon dioxide is difficult to measure reliably using chemical sensor elements, other working principles will be used. The most promising working principle here is the measurement of the wavelength specific infrared absorption caused by carbon dioxide molecule oscillations, using a spectroscopic sensor. In the next sections, we describe the first carbon dioxide sensor developed for automotive applications and measurement results obtained by using the sensor in an air-management system and in R744 leak detection.
2
Spectroscopic Carbon Dioxide Sensor
The carbon dioxide sensor presented in this paper utilizes the principle of spectroscopic gas analysis. The gas concentration is measured via the wavelength specific reduction of electromagnetic radiation within the infrared range. The Lambert-Beer equation (1) describes this correlation:

I = I0 · exp(−k · λ · c)     (1)
Here, I0 is the intensity of the incident light, I is the intensity after passing through the gas, k is the extinction coefficient, λ is the length of the absorption path and c is the gas concentration. In order to compensate for sensor degradation, two measurements are made at different wavelengths: one at a wavelength influenced by the carbon dioxide content, and a second at a wavelength unaffected by atmospheric gases, which acts as a reference for the sensor signal. The radiation at the two wavelengths is transformed by an infrared detector into thermal voltages, which are then used in the evaluation of the gas concentration (2). (2)
R is the calculated ratio, UCO2 and Uref are the thermal voltages, and UCO2,0ppm is the thermal voltage at 0ppm CO2 partial pressure, used for scaling (see also figure 2).
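Given the normalised two-channel ratio, inverting the Lambert-Beer law yields the concentration. A minimal sketch, in which the lumped coefficient k·λ is a hypothetical value chosen so that a ~20% signal reduction corresponds to roughly 30.000ppm, consistent with figure 2 (the paper's exact normalisation formula (2) is not reproduced here):

```python
import math

def co2_ppm_from_ratio(r, k_l_per_ppm=7.4e-6):
    """Invert the Lambert-Beer law, I/I0 = exp(-k*lambda*c), for the gas
    concentration c given the normalised ratio r = I/I0. The lumped
    coefficient k*lambda (per ppm) is a hypothetical calibration
    constant, not a value from the paper."""
    return -math.log(r) / k_l_per_ppm
```

A ratio of 1.0 (no absorption) maps to 0ppm, and a ratio of 0.8 maps to roughly 30.000ppm with the assumed coefficient.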
2.1
Thermopile Detector Element
The essential part of the spectroscopic carbon dioxide sensor is a novel infrared detector made by surface micromachining. It is hermetically sealed by a capping wafer on top of a thermopile wafer. Different gas specific optical filter chips (e.g. reference and carbon dioxide) can be mounted on top of this structure. The detector can either be mounted without housing by chip-on-board or flip-chip technology, or it can be mounted with a housing. For automotive applications the detector is mounted in a plastic housing and can be processed on standard SMD production lines (figure 1).
Fig. 1. Thermopile detector element
The main aspects of this infrared detector are:
- two-channel infrared thermopile detector
- reference channel for drift compensation
- high sensitivity (85 V/W)
- hermetically sealed, micro structured chip
- integrated thermistor for temperature compensation
- self test ability
- robust, automotive capable and low cost detector
In figure 2 the analogue voltages of the reference and carbon dioxide detector signals, both in a range of 1mV, are shown. Additionally the normalized sensor signal is included (see eq. (2)). In the presence of a carbon dioxide level of 30.000ppm, the detector produces a usable signal change of about 20%.
Fig. 2. Analogue voltage signal of the detector

2.2
Sensor
Apart from the detector element, the sensor consists of a thermal radiant source and a reflector for directing the radiation onto the detector. The signal acquisition is carried out by an ASIC, which also controls the source. The possible combination with a micro-controller offers a high degree of flexibility in the signal preconditioning. This sensor design results in a small, stable, flexible and inexpensive sensor.
Fig. 3. Housing of the CO2 sensor
The automotive housing of the CO2 sensor is shown in figure 3. In order to achieve a very quick gas exchange, a minimized internal space was one of the major design goals for the sensor housing. The gas permeates into the housing through a water resistant but gas permeable membrane.
Fig. 4. Sensor signal
The sensor communicates via a digital or an analogue interface. The digital interface provides the sensor with far higher functionality, e.g. different operation modes.
Infrared Carbon Dioxide Sensor and its Applications in Automotive Air-Conditioning Systems
The sensor characteristic is shown in figure 4. During the measurement the CO2 concentration was successively increased from 0 to 30.000ppm. As expected, the sensor signal decreases by 20% (compare figure 2). Further, it can be seen that steps of about 200ppm can be resolved.
Tab. 1. Specification of the CO2 sensor

To summarize the characteristics of the CO2 sensor, table 1 lists the relevant specification data.

3

Applications and Measurements
Using this sensor, first measurements related to two automotive applications have been carried out. The first application is an air-management system based on the carbon dioxide level in the cabin air, the second application is the detection of leaks in an air-conditioning system using R744 as a refrigerant.
3.1
Vehicle Cabin Air-Management System
In building climate control, so-called demand controlled ventilation systems are in use today. In such systems, the occupancy of a building is estimated by measuring the carbon dioxide level inside the building [5]. Based on this estimate, the outside-air supply of the building is adapted accordingly. Thus, the power consumption of the air-conditioning system is reduced by up to 50%. Similar systems, which balance outside air and re-circulated air, will be applied in vehicles to optimize the power consumption of mobile air-conditioners while ensuring good cabin air quality. The power consumption of the air conditioning system is minimised if a high fraction of cabin air is re-circulated and only a small fraction of outside air is taken into the cabin. Thus, it would be advantageous to completely re-circulate the cabin air. Unfortunately, this leads to a degradation of the cabin air and a related increase of the carbon dioxide concentration due to passenger breathing; finally, unwanted levels would be reached. The rate at which the increase occurs depends on the number of occupants, the cabin volume and the leak tightness of the cabin (see figure 5).
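The occupancy-dependent rise described above can be illustrated with a single-zone mass balance. All parameter values below (per-person CO2 generation, cabin volume, leak flow) are illustrative assumptions, not the paper's data:

```python
def cabin_co2(minutes=30.0, occupants=2, volume_m3=3.0,
              gen_m3_per_min=0.0003, leak_m3_per_min=0.02,
              c_out_ppm=700.0, c0_ppm=700.0, dt_min=0.1):
    """Euler integration of the single-zone mass balance
        V * dC/dt = n * S * 1e6 + Q * (C_out - C)
    with C in ppm, the per-person CO2 source S and the leakage flow Q in
    m3/min. Returns the final concentration. All numbers are
    illustrative assumptions, not measurements from the paper."""
    c, t = c0_ppm, 0.0
    while t < minutes:
        source_ppm = occupants * gen_m3_per_min * 1e6  # breathing source
        c += (source_ppm + leak_m3_per_min * (c_out_ppm - c)) / volume_m3 * dt_min
        t += dt_min
    return c
```

Re-running the model with one to four occupants reproduces the qualitative behaviour of figure 5: more passengers lead to a faster rise towards unwanted levels.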
Fig. 5. Increase of the carbon dioxide level in the cabin of an upper class vehicle (varying occupation)
Therefore, an air-management system based on the measurement of the carbon dioxide level in the cabin must be implemented. Figure 6 shows the principle setup of such a system, consisting of ventilation ducts terminated by an outside-air intake, a recirculation-air intake and several air outlets. A recirculation flap is used to switch between outside and re-circulated air. The position of the recirculation flap is controlled by an electronic control unit (ECU). The ECU is connected to a carbon dioxide sensor and controls the position of the recirculation flap according to the carbon dioxide level in the cabin. A fan accelerates the air towards the interior of the car [6].
Fig. 6. Principle setup of an automotive automatic ventilation system
First tests of an automatic ventilation system were carried out using the carbon dioxide sensor described above. A fall of the carbon dioxide level below 1.000ppm was used to start a recirculation cycle. When the carbon dioxide level reached 2.500ppm due to breathing activity, the recirculation cycle was stopped and outside air (at approx. 700ppm) was taken in until the level fell below 1.000ppm again. Figure 7 shows the corresponding course of the carbon dioxide level over several recirculation/outside-air cycles. It can be seen that the period of the saw-tooth like carbon dioxide level changes depends on the occupancy of the car.
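The two trigger thresholds define a simple hysteresis (two-point) control of the recirculation flap, which might be sketched as follows (the thresholds are those used in the test above; the class and method names are illustrative):

```python
class RecirculationControl:
    """Two-point (hysteresis) control of the recirculation flap:
    switch to outside air above 2500 ppm CO2, back to recirculation
    below 1000 ppm, as in the test described in the text."""
    def __init__(self, low_ppm=1000.0, high_ppm=2500.0):
        self.low, self.high = low_ppm, high_ppm
        self.recirculating = True

    def update(self, co2_ppm):
        if self.recirculating and co2_ppm >= self.high:
            self.recirculating = False      # take in outside air
        elif not self.recirculating and co2_ppm <= self.low:
            self.recirculating = True       # back to recirculation
        return self.recirculating
```

The hysteresis band prevents rapid flap chatter around a single threshold and produces the saw-tooth concentration course seen in figure 7.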
Fig. 7. Carbon dioxide level in a car with automatic ventilation system (a) 1 occupant b) 2 occupants c) 3 occupants d) 4 occupants). The trigger thresholds were 1.000ppm and 2.500ppm carbon dioxide in air
The corresponding recirculation and outside-air periods and the time fractions are shown in figure 8. It can be seen that a high recirculation time fraction can be reached if the car is occupied by only one or two passengers. In the case of three or more passengers, the recirculation time fraction must be decreased in order to secure good air quality.
Fig. 8. Measured recirculation and outside-air periods and time fractions at various occupations
The results show the feasibility of an air-management system for automotive applications. With such a system, low power consumption and thus high efficiency of a mobile air conditioning system can be achieved while securing good indoor air quality. Because carbon dioxide is measured as the lead parameter, the system reacts to the occupation of the car and thus maintains air quality independently of the number of passengers. The system will contribute to the optimization of vehicle fuel consumption.
3.2
Detection of R744 Leaks
Triggered by the planned EU legislation on fluorinated greenhouse gases, automotive air-conditioning systems using the refrigerant R744 (carbon dioxide) are presently being developed. As the refrigerant circulates under high pressure in these systems, leaks may occur through which carbon dioxide would enter the cabin [7]. As carbon dioxide can be neither seen nor smelled by the passengers but has a negative effect on passenger condition, a carbon dioxide sensor is necessary to detect leaks early enough to initiate countermeasures. A carbon dioxide sensor of the type described above, but with an enhanced measurement range (0ppm - 50.000ppm), was used to investigate the carbon dioxide concentrations at various positions in the car following a leak of the air conditioning unit. The following leak simulation was performed:
Fig. 9. R744 leak measurement in a vehicle. CO2 discharge inside the air conditioning unit (rate: 10.4 g/s) over a time of 25s. Fresh-air flushing after 25s
Carbon dioxide was discharged over a time of 25s inside the air conditioning unit at a rate of 10.4 g/s, and the carbon dioxide level was monitored at three positions inside the front area of the car cabin (left side floor, right side floor and centre ceiling). After 25s the windows were opened and the car was flushed with fresh air to simulate countermeasures against the leak. Figure 9 shows the course of the carbon dioxide concentration. It can be seen that high concentrations (up to 45.000ppm) develop quickly during discharge in the front floor area of the car. At the ceiling of the car the peak concentration is lower (10.000ppm) and appears later than in the front area. In our experiment the peak concentration was reached after flushing of the car had been started. It is presumed that flushing leads to increased circulation of the cabin air, which in turn homogenises the carbon dioxide concentration throughout the cabin. The experiment shows that carbon dioxide sensing will be necessary in R744 air conditioning systems, as high concentrations of carbon dioxide, which cannot be smelled or seen by the passengers, will appear in case of an evaporator leak.
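Leak detection logic built on such a sensor might combine an absolute limit with a rate-of-rise criterion, since the measurement shows the concentration climbing very quickly near the leak. Both thresholds below are illustrative assumptions, not values from the paper:

```python
from collections import deque

class LeakDetector:
    """Flags a possible R744 leak when the cabin CO2 level exceeds an
    absolute limit or rises unusually fast over a short window.
    Thresholds and timing are illustrative assumptions."""
    def __init__(self, abs_limit_ppm=20000.0, rise_limit_ppm_per_s=500.0,
                 window_s=5, sample_rate_hz=1):
        self.abs_limit = abs_limit_ppm
        self.rise_limit = rise_limit_ppm_per_s
        self.dt = 1.0 / sample_rate_hz
        self.buf = deque(maxlen=int(window_s * sample_rate_hz) + 1)

    def update(self, co2_ppm):
        self.buf.append(co2_ppm)
        if co2_ppm >= self.abs_limit:
            return True                        # absolute limit exceeded
        if len(self.buf) >= 2:
            span_s = (len(self.buf) - 1) * self.dt
            rate = (self.buf[-1] - self.buf[0]) / span_s
            if rate >= self.rise_limit:
                return True                    # suspiciously fast rise
        return False
```

The rate criterion allows an alarm well before the absolute limit is reached, which matters given the sub-10s reaction time discussed in the conclusion.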
4
Conclusion
Based on the measurements carried out in this study, key requirements for automotive carbon dioxide sensors can be determined. For air-management systems, a high resolution of about 200ppm and a good accuracy over the full automotive temperature range will be necessary to realise an effective control of the carbon dioxide level inside the car. The measurement range should be between 0ppm and 5.000ppm. For leak detection, a short reaction time below 10s and a measurement range between 5.000ppm and 30.000ppm will be necessary. This measurement range and reaction time will be sufficient to detect leaks of the system before unwanted concentrations can develop. For both applications, good long term stability and the ability to measure absolute concentrations are mandatory. The spectroscopic sensor described in this paper is able to fulfil both sets of requirements and can thus be used for air-management systems as well as for leak detection in R744 systems.
References
[1] I. Simon, N. Barsan, M. Bauer, U. Weimar, “Micromachined metal oxide gas sensors: opportunities to improve sensor performance”, Sensors and Actuators B: Chemical, Vol. 73, Issue 1, 25 February 2001, pp. 1-26
[2] I. Simon, “Thermal conductivity and metal oxide gas sensors: micromachining as an opportunity to improve sensor performance”, Aachen: Shaker, 2004; Tübingen, Univ., Diss., 2003, ISBN 3-8322-2410-6
[3] I. Simon, M. Arndt, “Thermal and gas-sensing properties of a micromachined thermal conductivity sensor for the detection of hydrogen in automotive applications”, Sensors and Actuators A: Physical, Vols. 97-98, 1 April 2002, pp. 104-108
[4] M. Arndt, “Micromachined thermal conductivity hydrogen detector for automotive applications”, Proceedings of IEEE Sensors 2002, Vol. 2, 12-14 June 2002, pp. 1571-1575
[5] R. Wetzel, “Bedarfsgerechte Lüftung mit neu entwickelten Mischgas Sensorsystem”, Fortschritts-Berichte VDI, Reihe 8, Nr. 1014, Düsseldorf: VDI Verlag, 2003
[6] G. Wiegleb, C. Stein, V. Hülsekopf, “CO2-Konzentration – Sensorsystem zur Überwachung des Fahrzeug-Innenraums”, ATZ 7-8/2004, Jahrgang 106, pp. 688-693
[7] M. Fischer, “Simulation of CO2 propagation into vehicle cabin following an Interior Heat Exchanger leakage failure based on experimentally derived mass flow data”, VDA Alternate Refrigerant Wintermeeting, 13.-14.2.2003, Saalfelden, Austria, www.vda-wintermeeting.de
Dr. Michael Arndt, Dr. Maximilian Sauer Robert Bosch GmbH Dept. AE/SPE2 Tübingerstr. 123 72762 Reutlingen Germany
[email protected] [email protected] Keywords:
gas sensor, infrared detector, carbon dioxide, automotive, air-conditioning, R744, demand controlled ventilation
The role of Speech Recognition in Multimodal HMIs for Automotive Applications S. Goronzy, R. Holve, R. Kompe, 3SOFT GmbH Abstract The importance of speech dialogue systems in automotive environments has drastically increased in the past years. More and more applications need to be controlled by the user, and using speech interfaces to do so can improve safety during driving. However, there are special requirements on speech dialogue in such environments. This paper outlines some of them and proposes solutions. As the major topic, design issues concerning the integration of speech into graphical user interfaces are discussed and the GUIDE design tool is introduced. GUIDE was developed as a tool for designing GUIs and was recently extended to allow the simultaneous integration of speech dialogue. As we will discuss in the paper, such an integrated design is extremely important for the development of natural and intuitive human machine interfaces.
1 Introduction
The number of infotainment applications in cars has increased drastically in the past few years, as has the possibility to control certain properties and settings of the car itself. While offering a much higher level of comfort, the numerous new functionalities such as climate control, navigation systems and many more put an additional burden on the user, since their operation becomes more and more complicated. One key technology to alleviate this problem is automatic speech recognition. Speech-enabled interfaces allow the user to keep his/her eyes on the road and hands on the wheel while at the same time issuing potentially complicated commands simply by voice. This has led to the opinion that speech recognition in cars is a must in order to ensure safety during driving. However, due to the limited computing resources in automotive systems, the speech dialogue capabilities are often rather limited and require the user to learn pre-defined sets of words that the system can understand at different steps in the dialogue. An additional weakness of many state-of-the-art systems is that the speech dialogue is designed independently from the rest of the human machine interface (HMI), usually the graphical user interface (GUI), yielding inconsistent system behaviour that potentially confuses the driver. It is therefore of major importance to apply an integrated design of GUI and speech dialogue to ensure consistent system behaviour and enable intuitive interaction. This article gives a brief overview of speech recognition and the particular challenges it faces in automotive environments. It outlines technologies that are necessary for the design of a well-accepted speech dialogue interface. This includes the sensible application of speaker adaptation techniques to improve speech recognition accuracy, which in turn allows the vocabulary size to be increased considerably. The usage of confidence measures (CMs) is also described. These allow an automatic evaluation of the quality of a speech recognition result using additional knowledge sources and can trigger intelligent system reactions such as "sorry, I didn't understand you". The automatic detection of out-of-vocabulary (OOV) words (i.e. words spoken by the user but unknown to the system) is another important issue, because users often inadvertently utter OOV words. Without the capability to detect such OOVs, a misrecognition will take place and the following system action can be very confusing for the user. The focus of this paper is the presentation of an HMI design tool that allows an integrated design of GUI and speech dialogue, yielding a consistent and therefore intuitive overall HMI. The design tool allows not only the specification of the design and behaviour of the HMI but also its simulation, which enables the immediate evaluation of the HMI behaviour at early design stages.
This integrated design of GUI and speech dialogue ensures the consistency and compatibility of the overall HMI and can thus help to develop speech-enabled HMIs that are "better" from the usability point of view. Such a tool-aided integrated design furthermore allows automatic code generation to a considerable extent, making the software engineering process much more efficient. In particular, change management can be handled easily. This paper is organised as follows: section 2 gives an overview of speech dialogue systems, including brief explanations of some key technologies such as speaker adaptation, CMs and OOV word detection. Section 3 describes the GUIDE toolkit and section 4 its extension for an integrated design of GUI and speech dialogue. Finally, section 5 summarises the paper.
2 Overview of Speech Dialogue Systems
As mentioned before, the integration of speech dialogue capabilities into HMIs has several advantages. Especially in automotive environments, and basically in all environments where the user's eyes and hands are occupied by other tasks, speech dialogue enables easy control of devices and applications. Commands can be uttered by voice, and system output is also given by (pre-recorded or synthesised) speech. Apart from freeing the user from the necessity to push certain buttons and look at the system feedback on the screen, speech dialogue systems allow the user to concentrate on his/her task. Furthermore, speech dialogue helps to develop more natural interfaces: more than one command can be contained in one utterance, and shortcuts can be easily defined. Jumping between different applications can also be handled more easily by speech. In the following we explain how such speech dialogue systems work in general. Figure 1 shows an overview of a speech dialogue system. First of all, the words the speech recognition (SR) system is supposed to recognise have to be defined in the vocabulary. For each of these words a pronunciation has to be provided. The pronunciations are usually stored together with the words in the vocabulary, and the whole is often called the dictionary. A pronunciation is a sequence of symbols from a finite phoneme set (phonemes are the "basic sounds" of a language) that describes how a word is pronounced. For each language a specific phoneme set exists, and for each of the phonemes a statistical model (a Hidden Markov Model, HMM) is built. The HMMs are trained on large amounts of speech data. Usually speech of many different speakers is used; the resulting acoustic models are then called speaker-independent (SI) models, since they more or less reflect the average way of pronouncing the different phonemes.
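The relationship between vocabulary, pronunciations and phoneme set described above can be illustrated with a minimal sketch. The words and phoneme symbols below are invented for illustration and are not taken from any particular recogniser:

```python
# Minimal sketch of a recogniser dictionary: each vocabulary word maps to a
# pronunciation, i.e. a sequence of symbols drawn from a finite phoneme set.
# Words and phoneme symbols are invented for illustration.
PHONEME_SET = {"r", "eI", "d", "i", "oU", "s", "t", "Q", "p", "l"}

DICTIONARY = {
    "radio": ("r", "eI", "d", "i", "oU"),
    "stop":  ("s", "t", "Q", "p"),
    "play":  ("p", "l", "eI"),
}

def is_valid_entry(word: str) -> bool:
    """An entry is valid if every pronunciation symbol is in the phoneme set."""
    return all(ph in PHONEME_SET for ph in DICTIONARY[word])

print({w: is_valid_entry(w) for w in DICTIONARY})
```

In a real system each phoneme symbol would in turn be backed by a trained HMM, so the dictionary links the word level to the acoustic-model level.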
The speech itself always has to be pre-processed to remove redundancies and extract feature vectors that represent the information necessary for recognition. During the recognition process, the speech recogniser tries to determine the sequence of words that was most probably spoken, based on the HMM set and given the current utterance. Additionally, in continuous SR systems (i.e. systems that can recognise sentences or phrases instead of just single words) a grammar or a language model defines the set of possible sentences that can be used in the current application. Without such a grammar, each word in the vocabulary would be equally probable at each moment in time. If at each time frame the probability for every word had to be computed, the search space would soon become huge for vocabulary sizes of several thousand words. After a sentence or command has been recognised, a dialogue component is necessary that is able to interpret the recognised sequence of words and initiate
appropriate system action. This includes the joint interpretation of the recognised sentence together with other potential input from the user, such as gesture or haptic input. To give an example, after "Please play CD3 in random order" has been recognised, the command has to be "translated" into a system command that starts the CD player, selects CD3 and plays the songs in random order. The dialogue component is also responsible for initiating system feedback, which could e.g. be a speech output or a graphical output. Despite the fact that SR systems can recognise human speech and thus promise natural communication with the device, current systems are limited in the number of words they can recognise at a time. In order to ensure satisfying recognition rates, developers often restrict the active vocabulary, i.e. the set of words the SR system can understand e.g. in CD-player mode, to a minimum. The smaller the vocabulary, the easier it is to distinguish the words, even if they are spoken by different speakers or in noisy environments. However, this means that the user has to remember which words can be recognised for which application. This is quite far from a natural interface. Another disadvantage of current solutions is that speech commands often simply consist of a one-to-one mapping of the menu items that appear in the GUI. As a result the user has to follow the often deep and non-intuitive menu structure to finally issue the wanted command.
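The effect of the grammar mentioned above can be illustrated with a toy sketch: without a grammar every word sequence over the vocabulary is a candidate, while the grammar licenses only a tiny subset. The vocabulary and successor rules below are invented for illustration:

```python
import itertools

# Toy illustration of why a grammar shrinks the search space: without it,
# every word sequence over the vocabulary is a candidate; with it, only the
# sequences the grammar licenses need to be scored.
VOCAB = ["play", "stop", "CD", "three"]
GRAMMAR = {                      # allowed successor words per predecessor
    "<s>": ["play", "stop"],     # sentence start
    "play": ["CD"],
    "CD": ["three"],
}

def licensed(seq):
    """True if the grammar allows this word sequence."""
    prev = "<s>"
    for w in seq:
        if w not in GRAMMAR.get(prev, []):
            return False
        prev = w
    return True

all_seqs = list(itertools.product(VOCAB, repeat=3))
ok = [s for s in all_seqs if licensed(s)]
print(len(all_seqs), len(ok))   # 64 candidate sequences, only 1 licensed
```

The search effort thus grows with the grammar, not with the full vocabulary raised to the sentence length.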
Fig. 1. Overview of a Speech Dialogue System
Such an interface is neither natural nor time-saving, nor does it increase safety. Some techniques that help to alleviate this problem to a certain extent, such as speaker adaptation, CMs and OOV detection, are explained in the following sections. In general there is a lot of variability in the speech of different speakers, so speech recognition, being inherently a very difficult problem, becomes particularly difficult in automotive environments. There are many different types of background noise arising from the driving car, such as highway noise at different speeds, wipers, rain etc., that are problematic. One way to address this problem is to use noisy speech databases for training. For automotive applications in particular, such databases exist that were recorded in automotive environments, cf. [1], [2], so the HMMs can be trained with noisy speech data and are thus able to recognise speech in noisy environments. However, the speech recogniser will always be faced with noise that was not contained in the training database. "Noise" like the stereo playing or other people in the car speaking also harms recogniser performance. Secondly, applications such as navigation systems ideally require the SR component to recognise several tens of thousands of city names at the same time. Other applications require the system to recognise words that cannot be known at the time the system is designed, such as CD and song titles. The pronunciations for such dynamic content need to be generated online when the user inserts the new CD. Tools for this automatic grapheme-to-phoneme conversion exist [3]. However, the bigger the vocabulary becomes when adding new content, the more difficult it is for the recogniser to distinguish between words that are pronounced very similarly. Apart from echo cancellation techniques, which try to reduce the amount of noise in the speech signal and thus increase the signal-to-noise ratio, speaker adaptation techniques are used to adapt the acoustic models to the current speaker and also to the current environment.
2.1 Speaker Adaptation
Speaker adaptation techniques are a means to adapt especially the acoustic models to the current speaker in order to improve recognition rates. If speech data from the current speaker can be gathered, it can be used to adapt the acoustic models to his/her voice. In PC-based systems adaptation is often conducted offline and supervised. Offline means that the user has to speak predefined text (i.e. it is known what the user is saying, and the adaptation is thus supervised). These sentences are recorded, afterwards the adaptation process is explicitly started, and the statistical parameters of the HMMs are re-learnt or re-estimated. After this has finished, the user can use the system with the models now adapted to his/her voice. This scenario is hardly usable in the automotive domain, where a lengthy adaptation procedure before the system can be used is not acceptable. As a result, the adaptation has to be conducted online while the driver is using the speech dialogue system to control different applications. Adaptation can be conducted continuously while the driver is using the system, so that the acoustic models are always adapted
to the current situation, which to a certain extent also includes adaptation to the background noise. In this case it is not known which commands the driver is using. Instead, the initial recogniser (i.e. the recogniser using the SI models) has to be used to recognise the commands. Using acoustic models that are well adapted to the user and the environment can improve the recognition rate by more than 20%, cf. [4]. Higher recognition performance in turn allows the vocabulary size to be increased. For an application, a larger vocabulary means that more alternative words can be allowed for the same system commands, thus offering more flexibility to the user, because he/she does not have to learn a certain set of words.
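The core idea of adapting acoustic models toward the current speaker can be sketched in a drastically simplified form. Real systems adapt the Gaussian mixture parameters of the HMMs (e.g. via MAP or MLLR); here a single mean vector per model is interpolated toward the speaker's observed feature vectors, and all values are invented for illustration:

```python
# Drastically simplified sketch of online speaker adaptation: the speaker-
# independent (SI) mean of a model is interpolated toward feature vectors
# observed from the current speaker. Real systems adapt full HMM Gaussian
# mixtures; the feature values and weight here are invented.

def adapt_mean(si_mean, speaker_frames, weight=0.2):
    """Shift the SI mean toward the average of the speaker's feature frames."""
    if not speaker_frames:
        return list(si_mean)          # no speaker data yet: keep SI model
    dims = len(si_mean)
    spk_mean = [sum(f[d] for f in speaker_frames) / len(speaker_frames)
                for d in range(dims)]
    return [(1 - weight) * m + weight * s for m, s in zip(si_mean, spk_mean)]

si = [0.0, 0.0]
frames = [[1.0, 2.0], [3.0, 2.0]]     # features observed from the driver
print(adapt_mean(si, frames))         # moves from [0.0, 0.0] toward [2.0, 2.0]
```

Applying such an update continuously while the driver speaks corresponds to the unsupervised online adaptation described above; the small interpolation weight keeps single misrecognised utterances from dominating the models.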
2.2 Confidence Measures (CMs)
In order to avoid adapting the acoustic models to misrecognised words, CMs can be used. CMs try to determine the probability that a word was correctly recognised. This is done by computing additional confidence features during and after the core recognition process (the Viterbi search). These features carry further information that would be too costly to integrate directly into the Viterbi search. After normalisation, a neural network or another statistical classifier can be used to learn a mapping between these features and the two classes 'correctly recognised' and 'misrecognised'. This requires an annotated database. A meaningful feature that indicates misrecognitions is e.g. the speaking rate, since it is known that speakers who speak very fast are often misrecognised. Another good indicator is the difference in probability between the first-best and the second-best recognised word. If the difference is very small, the recognition result is not very clear, whereas if the difference is very large, the recogniser was "sure" about the first-best word. Phoneme durations of the recognised words can also be a good indicator for misrecognition: if the durations deviate a lot from the average durations determined on the training data, this might be caused by a wrong assignment of phoneme boundaries during recognition, which means that a misrecognition took place. There are plenty of possible features that are used in current systems, cf. [4]. If a word is finally classified as misrecognised, it is first of all not used for adaptation (since we only know that it was misrecognised, we still do not know what the correct result would have been). Furthermore, depending on where in the current dialogue the misrecognition took place, an appropriate system reaction like "sorry, I did not understand you" could be initiated. This is often more comfortable for the user than simply executing a misrecognised command.
However, this requires a lot of fine tuning since the CM could reject a correctly recognised word, which is of course not wanted.
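One of the confidence features named above, the score difference between the first- and second-best hypotheses, can be sketched as follows. A real system would combine many such features with a trained classifier; the scores and the acceptance threshold here are invented for illustration:

```python
# Sketch of one confidence feature from the text: the margin between the
# first-best and second-best recognition hypotheses. Real systems feed many
# such features into a trained classifier; scores and threshold are invented.

def margin_confidence(nbest):
    """nbest: list of (word, score) pairs; returns first-vs-second-best margin."""
    scores = sorted((s for _, s in nbest), reverse=True)
    return scores[0] - scores[1]

def accept(nbest, threshold=0.3):
    """Accept the top hypothesis only if the margin is large enough."""
    return margin_confidence(nbest) >= threshold

clear   = [("radio", 0.9), ("ratio", 0.3)]    # recogniser was "sure"
unclear = [("radio", 0.52), ("ratio", 0.48)]  # two hypotheses nearly tied
print(accept(clear), accept(unclear))         # True False
```

A rejected result would then neither be executed nor used for adaptation, as described above; the threshold is exactly the fine-tuning knob mentioned in the text, since setting it too high rejects correctly recognised words.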
2.3 Out-of-Vocabulary (OOV) Words
Often it is not sufficient to decide whether a word was misrecognised, as is done by CMs. Sometimes users inadvertently utter words that are not included in the system vocabulary and can therefore not be recognised. Also, the active vocabulary of a recogniser might depend on the dialogue step. That means that some words, e.g. city names, can only be understood when the user is controlling the navigation system, but not when he/she is controlling the CD player. If a city name is uttered then, the SR system will 'recognise' the word from the CD-player vocabulary that best matches the spoken city name. Of course this will be a misrecognition, and even repeating the city name will not help. This behaviour can be very confusing for users, since they are usually not familiar with the underlying vocabulary structure of the dialogue system. Here it could be very helpful to tell the user "sorry, but this word is not in the (current) vocabulary", so that the user does not end up repeating the word over and over again without success. Furthermore, the knowledge that the user uttered an OOV word could indicate to the dialogue when to display or output a help message or ask clarifying questions. OOV words can be detected by e.g. using special OOV word models. Such models are trained on words that are known not to be in the vocabulary. If the score for these word models is higher than for words in the vocabulary, the user has most probably uttered an OOV word. Another possibility is to have a phoneme recogniser (i.e. a speech recogniser that can only recognise phoneme sequences, not whole words) running in parallel to the SR system. Any word can be composed of the phonemes in the phoneme set, so if there is a phoneme sequence that matches the uttered word better than any word from the vocabulary, it might be an OOV word. Again, this requires a lot of fine tuning to prevent the SR system from classifying words that are in the vocabulary as OOV.
A more detailed overview of these topics, including speaker adaptation, CMs and OOVs, can be found in [4].
3 Tool-Aided Integrated Design of Speech-Enabled HMIs
Although speech input is very helpful in many situations, it is not the most appropriate means for all problems. Turning the volume of the radio up or down might be more intuitive using a knob than a speech command, since it is not clear what size the steps for increasing or decreasing the volume should have. Also, not all system feedback can be appropriately output by speech; e.g. reading a long list of more than 10 city names to the user does not seem very sensible, because the user will already have forgotten the first city when the last one is read. Thus an intuitive speech-enabled HMI uses a well-adjusted combination of GUI and speech dialogue. 3SOFT has much experience in specifying and implementing automotive HMIs. This experience motivated the development of a specialised tool for the design of GUIs. This tool is currently being extended to enable the design of HMIs including speech dialogue. The following sections briefly explain the philosophy of the toolkit and its recent extension for speech dialogue. Section 3.1 outlines the main capabilities and advantages of GUIDE, before the main difficulties in integrating speech dialogue into HMIs, and how they were solved, are explained in more detail in the subsequent sections.
Fig. 2. The GUIDE View Editor

3.1 The GUIDE HMI Design Tool
When the HMI for e.g. an infotainment system needs to be developed, several steps are involved. First of all, a graphical layout needs to be specified, defining the colours, effects etc. Usually this also involves the design of the different menus for the later application using a special graphical tool. The possible transitions between the different menus are often specified in UML using a different editor. The accompanying texts to be displayed, such as menu items (and the related translations for different languages), are kept in yet another document, as is any other information needed. Thus the specifications for the HMI are spread over different documents, making it extremely hard to keep them consistent when changes need to be taken into account. Even
small changes in button size or menu text require changes to several files. It has been shown that this is a major source of software errors [5]. In the automotive industry, where the tool is currently mainly used, the problem often is that manufacturers want to have different suppliers for HMI components but with the same common look & feel. With every supplier using different HMI design tools, this common look & feel is often not achieved. A further disadvantage when no common tool is used is that even after full specification using all documents, a prototype implementation is necessary in order to simulate and thus evaluate the complete HMI. GUIDE is the solution to the above-mentioned problems. The GUIDE (GUI Designer) toolkit developed at 3SOFT is an HMI design tool that allows the consistent specification of HMIs using just one tool for all involved tasks. It thus ensures consistent specifications throughout the whole development phase. All entities are combined in one model, so that changes made in one part of the specification are immediately reflected in all other parts without explicitly editing several files. As a result, change management can be handled easily and a consistent specification can be ensured. In GUIDE, views and widgets are defined in a WYSIWYG graphical editor, which makes them easy to work with. A screenshot of the view editor is shown in Figure 2. Figure 3 shows two example views for a CD/MP3 and a tuner application designed with GUIDE.
Fig. 3. Views and Widgets
The interaction between views is defined in GUIDE using a UML-compliant statechart editor, which can be seen on the left-hand side of Figure 5. All information about the views, widgets, state transitions etc. is stored in one common database in XML format. This data can serve as the complete specification of the HMI on the one hand, and on the other hand it can be directly used as a basis for automatic code generation for the target. All data needed by the HMI are stored in properties inside the GUIDE toolset. Each defined widget can be assigned global properties (like the height and width of a view, defined globally for all views) and constant properties (defined just for the currently selected widget, like e.g. the focussed, pressed or visible state of a button). Every widget can use both global and constant properties, which facilitates the creation of a common look & feel. Properties can be enabled to support variants. One property can
have different values for each variant. Variants allow different skins to be specified in one model. If properties are defined as 'global', they are reusable in the whole model. E.g. the list of currently available radio stations is held in a property. This allows a flexible abstraction of the real data sources in the target while creating the specification. An example of a property table is shown in Figure 4. Also shown in Figure 4 is a table for the assignment of events to widgets. Each widget can react to event notifications, such as a pushed button or a speech command.
Fig. 4. Properties and Events
To be independent of any later implementation details, 3SOFT introduced the event mechanism in GUIDE. All notifications to the HMI and from the HMI to the rest of the system are modelled as events. If the user has pressed the 'Enter' button, an ENTER_PRESSED event is generated. To notify the HMI of an incoming telephone call, a TELEPHONE_INCOMING event is generated. Which event causes a widget's reaction can be defined for each widget separately. Because the GUIDE events are part of the model, as are the graphical design, the state transitions, menu items etc., they can be invoked immediately in a simulation to see the result while specifying, without the necessity of having a prototype implementation ready. Figure 5 shows such a simulation view. Due to the common format used to store all model-related data, the automatic generation of code becomes possible to a very large extent (50-80%).
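An event mechanism in the spirit of the one described for GUIDE can be sketched as follows. The class and handler names are our own illustration, not GUIDE's actual API; only the event names ENTER_PRESSED and TELEPHONE_INCOMING are taken from the text:

```python
# Sketch of a named-event mechanism: all notifications to the HMI are events,
# and each widget declares which events it reacts to. Class and handler names
# are illustrative, not GUIDE's actual API.

class Widget:
    def __init__(self, name, handlers):
        self.name = name
        self.handlers = handlers        # {event_name: callback}
        self.log = []

    def notify(self, event):
        """React only to events this widget has a handler for."""
        if event in self.handlers:
            self.handlers[event](self)

def enter_pressed(widget):
    widget.log.append("activated")

def incoming_call(widget):
    widget.log.append("show call screen")

button = Widget("ok_button", {"ENTER_PRESSED": enter_pressed})
popup  = Widget("phone_popup", {"TELEPHONE_INCOMING": incoming_call})

for w in (button, popup):
    w.notify("ENTER_PRESSED")           # only the button reacts

print(button.log, popup.log)            # ['activated'] []
```

Because the sender only emits named events and never calls widgets directly, the same events can later come from a simulation, from real buttons or, as section 4 shows, from speech commands.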
4 Speech-Enabled HMIs
There are many speech dialogue systems on the market that also provide SDKs for the development of application-specific dialogues. Often these dialogues are then designed by a supplier different from the HMI designer. That means that the speech dialogue is developed independently from the GUI. As a result, the behaviour of the device or application differs depending on whether it is controlled by manual controls or by speech. This can cause the following problems:
- Interfaces that are not truly multimodal in the sense that the user can freely mix button clicks with speech commands. If a dialogue was started by speech and a button is used in between, the speech dialogue is stopped and the complete set of commands needs to be repeated haptically.
- Command words for the speech dialogue that are different from what is displayed as a menu item on the display.
- System responses that differ depending on whether a certain command was issued by buttons or by speech. In applications like a CD changer, six physical buttons exist to select the CDs 1-6, but only CD1-CD3 are available as speech commands, i.e. CDs 4-6 cannot be chosen by speech, so that one of the CDs 1-3 is played instead.
- While it is often clear to users where they currently are in the graphical dialogue, they often get lost in the speech dialogue because of insufficient feedback.
- Speech output from the speech dialogue system that is played simultaneously with speech output from the navigation system, so that the user is not able to understand either of them.
- System questions that are to be answered by "yes" or "no" but result in jumps to completely different parts of the dialogue if misrecognised as words like "radio", "service", etc.
Fig. 5. Simulation of GUI with state chart editor on the left
4.1 Speech Dialogue in GUIDE
These were just a few of the problems that arise if the speech dialogue is designed and developed independently from the rest of the HMI. They are thus a good indication of the need for an integrated development of speech and GUI. When an interface is to become speech-enabled, one should of course be free to choose any supplier for speech recognition. GUIDE fully supports the integrated design of GUI and speech, independently of the actual speech recognition system supplier. There are 'trivial' cases where one speech command can be used instead of pushing one button, such as hotkeys that allow users to switch between different applications. In GUIDE these cases are covered by simply allowing 'speech-hotkey' events that are processed in exactly the same way as a button-pressed event. However, there are more complicated cases. If several subsequent inputs are required from the user, as is usually the case when entering a destination city for the navigation (navigation -> destination -> Erlangen), this one-to-one mapping is no longer given. The user might say "navigate from Erlangen to Munich via Ingolstadt without using the highway". This spoken command comprises three or more menu steps in the graphical equivalent. It is then the task of the interpretation component of the dialogue in GUIDE to map the speech commands to appropriate system events. In GUIDE, these are events with appropriate parameters, e.g. NAVIGATION(DEST: Munich, VIA: Ingolstadt, ROUTEOPTION: NO_HIGHWAY).
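The mapping from one utterance to a single parameterised event can be sketched as follows. The crude keyword scan is our own simplification for illustration; GUIDE's actual interpretation component would work on the recogniser's grammar-based semantic parse:

```python
# Sketch of mapping one spoken sentence to a single parameterised event, as
# in the NAVIGATION(DEST: ..., VIA: ..., ROUTEOPTION: ...) example. The slot
# extraction is a crude keyword scan for illustration only.

def interpret(utterance):
    words = utterance.lower().split()
    event = {"name": "NAVIGATION", "params": {}}
    if "to" in words:
        event["params"]["DEST"] = words[words.index("to") + 1].capitalize()
    if "via" in words:
        event["params"]["VIA"] = words[words.index("via") + 1].capitalize()
    if "without" in words and "highway" in words:
        event["params"]["ROUTEOPTION"] = "NO_HIGHWAY"
    return event

ev = interpret("navigate to Munich via Ingolstadt without using the highway")
print(ev["params"])
# {'DEST': 'Munich', 'VIA': 'Ingolstadt', 'ROUTEOPTION': 'NO_HIGHWAY'}
```

The point is that the single utterance yields one event carrying all parameters, instead of three button-style events replayed one after the other.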
Fig. 6. GUI transition for user utterance: "Navigate to Erlangen"
The dialogue is also responsible for asking clarifying questions if the user input could not be interpreted correctly (e.g. because of low confidence levels or identified OOVs). However, the overall behaviour of the system should be exactly the same as if the buttons were used; otherwise the user would be confused. As a result, the state transitions of the state chart defining the menu behaviour can be equivalent. But since three (or more) commands were actually given in one sentence, it is not necessary, or even unwanted, to process the three commands one after the other, each time showing the
same screen as if buttons were used. Still, the final screen or the final confirmation is supposed to be the same. An example of the 'final' screen after the user uttered "Navigate to Erlangen" is given in Figure 6. The GUI 'jumped' from the radio to the navigation input screen, but 'Erlangen' is already displayed as the destination, since it was contained in the speech command. As the street, 'center' is displayed, since this is the default if no street name is specified.
4.2 Ensuring Consistency between GUI and Speech Dialogue
In GUIDE, each state and/or widget is equipped with an additional editor for specifying the vocabulary and grammar for that particular state. States/widgets can inherit the vocabulary and grammars from parent states, but can also define vocabulary and grammar that are valid for this state only. In this way the availability of certain commands that are valid in all dialogue steps concerning an application can be ensured. In order to ensure consistency between the graphical menu items and the speech commands, the automatic generation of a base vocabulary for each state or widget is possible in GUIDE. Such a base vocabulary consists of the textual commands of the GUI; these should be available as speech commands in any case. If the textual commands are changed later, the base vocabulary is changed accordingly. Abbreviations are treated separately. Simply providing the names/commands from the menu would not, however, lead to a natural dialogue, and it is thus possible to extend the vocabulary by alternative expressions to enable a more natural interaction. Just like the properties described in section 3.1, which can be global and constant, the dialogue can have an overall, global behaviour, such as 'confirmation strategy is always very explicit' or 'explicit confirmation is only used when absolutely necessary'. On the other hand, there might be very state-specific confirmation strategies, e.g. in situations where PIN numbers are entered by the user. Other global properties could be e.g. whether speaker adaptation should be used. The confidence level that is necessary to accept an utterance, on the other hand, could be state-specific, as explained above, since different applications require different security levels.
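The state-wise vocabulary with inheritance described above can be sketched as a walk up the state hierarchy. The state names and words below are invented for illustration, loosely following the navigation example:

```python
# Sketch of per-state vocabulary with inheritance: a state's active
# vocabulary is its own words plus everything inherited from its parents.
# State and word names are invented for illustration.

STATES = {
    "root":       {"parent": None,         "vocab": {"help", "back"}},
    "navigation": {"parent": "root",       "vocab": {"destination", "route"}},
    "city_input": {"parent": "navigation", "vocab": {"Erlangen", "Munich"}},
}

def active_vocabulary(state):
    """Union of this state's vocabulary and all ancestors' vocabularies."""
    vocab = set()
    while state is not None:
        vocab |= STATES[state]["vocab"]
        state = STATES[state]["parent"]
    return vocab

print(sorted(active_vocabulary("city_input")))
```

Globally valid commands like "help" live at the root and are thus usable in every dialogue step, while city names are only active where they make sense, which is exactly the behaviour motivated in section 2.3's OOV discussion.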
4.3 Dialogue History and Multimodality in GUIDE
When the state hierarchy of the GUI is closely followed, another advantage is that modelling the dialogue history becomes relatively easy, since all parameters are kept in the global data pool, which can also be accessed by the speech dialogue component. That means that if the user is currently entering something
that corresponds to a state very deep in the state hierarchy, ambiguities can be resolved by analysing the already entered parameters. Dialogue history is an extremely important factor if natural dialogue is desired. It is very unnatural if, when one parameter is changed, e.g. the destination in the navigation application, all other parameters (such as route options etc.) have to be re-entered too. By closely coupling the speech and haptic input in the GUIDE model, a truly multimodal interaction can be ensured. The user can jump between speech and haptic input, also within a dialogue turn. The events, whether resulting from a haptic or a voice input, are always processed in the same way. In addition to speech input, speech dialogue systems also use speech output (either pre-recorded speech or text-to-speech synthesis). So for each transition it can be specified whether, when an event was triggered by speech or by a button, the output should be given graphically and/or verbally.
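The global data pool idea can be sketched in a few lines: parameters entered so far persist, so changing one parameter does not force the user to re-enter the others. The parameter names reuse the navigation example; the pool structure itself is our own illustration, not GUIDE's actual data model:

```python
# Sketch of a global parameter pool for dialogue history: previously entered
# parameters persist, so changing one (e.g. the destination) keeps the rest.
# The pool structure is illustrative, not GUIDE's actual data model.

pool = {"DEST": "Erlangen", "ROUTEOPTION": "NO_HIGHWAY"}

def update(pool, **changes):
    """Apply only the changed parameters; everything else is kept."""
    merged = dict(pool)
    merged.update(changes)
    return merged

after = update(pool, DEST="Munich")   # user changes just the destination
print(after)                          # ROUTEOPTION survives the change
```

Whether the destination change arrives as a speech event or a button event makes no difference, since both update the same pool.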
4.4 Usability Testing in GUIDE
Another main advantage of using such a tool for the design of speech-enabled HMIs is the fact that the HMI's behaviour can be tested immediately, since it can be simulated at any time during the design process. Inconsistencies within the dialogue, or between GUI and speech, can be discovered very quickly. Of course, such testing cannot completely replace the usability studies that are necessary at different steps during HMI development. But the tool can be used for usability tests without the need for a working prototype realised on the target. In traditional HMI development the application can often be tested only when implemented on the target. However, due to very strict time schedules, the results of usability tests that are conducted after implementation often do not find their way into the product. Using GUIDE, usability tests can be conducted at very early project stages, and thus there is a much higher chance that the results will be reflected in the final HMI. Even in later development stages, changes can be incorporated quickly, since changes in one aspect of the HMI model are immediately reflected in all other components of the system.
5 Summary and Outlook
In this paper an HMI design tool was presented that allows for the simultaneous specification, design and implementation of HMIs. We focused on the integrated design of GUI and speech dialogue, which is a very important topic in speech dialogue system design. Using GUIDE, the development of truly multimodal interfaces is easily possible. The different mechanisms and properties of GUIDE were presented. The special challenges to speech dialogue systems in automotive environments were also discussed, and proposals for how to alleviate these problems were given. The speech-enabled version of GUIDE is currently under development. The first aspects, as described in this paper, are already implemented; methods for dialogue strategy development will follow soon.
References
[1] http://www.speechdat.org/SP-CAR/
[2] http://www.speecon.com
[3] S. Goronzy, Z. Valsan, M. Emele, J. Schimanowski: The Dynamic, Multi-Lingual Lexicon in SmartKom, Eurospeech 2003, pp. 1937-1940, Geneva, Switzerland
[4] S. Goronzy: Robust Adaptation to Non-Native Accents in Automatic Speech Recognition, Springer Verlag, Lecture Notes in Artificial Intelligence, 2002, ISBN 3-540-00325-8
[5] C. Rosette: Elektronisch gesteuerte Systeme legen weiterhin zu. In: Elektronik Automotive, Heft 6/2002, p. 22f.
Dr. Silke Goronzy, Dr. Rainer Holve, Dr. Ralf Kompe 3SOFT GmbH, Frauenweiherstr. 14 91058 Erlangen Germany
[email protected] Keywords:
spoken dialogue, integrated tool-based UI design, GUIDE, speech recognition, speaker adaptation, confidence measures, OOV
Networked Vehicle
Developments in Vehicle-to-vehicle Communications

Dr. D. D. Ward, D. A. Topham, MIRA Limited
Dr. C. C. Constantinou, Dr. T. N. Arvanitis, The University of Birmingham

Abstract

The advanced electronic safety systems that will improve the efficiency and safety of road transport will rely on vehicle-to-vehicle and vehicle-to-infrastructure communications to realize their full potential. This paper examines the technologies that are presently in use for vehicle communications in the mobile environment, and indicates their limitations for achieving the range of functions proposed as part of future road transport developments. An area of research that shows considerable potential for these communication requirements is the use of mobile ad hoc networks, where the vehicles themselves are used to form a self-organizing network with minimal fixed infrastructure requirements. The development of this technology will be described, along with the technical issues that will need to be addressed in order to make effective use of it in the modern road transport system. In particular, a novel approach to the development of a framework of mobile ad hoc network routing protocols is described.
1 Introduction
Historically, both road vehicles and the electronic systems fitted to them have operated independently. Electronic systems have been developed and optimized to perform a specific function, such as engine management or anti-lock braking. Similarly the vehicles themselves have operated autonomously, without any knowledge of the surrounding environment except for that provided through the driver responding to traffic conditions, road signs and external command systems such as traffic signals. In the modern vehicle, many of the electronic systems are interconnected via a databus or network system. The most prevalent databus system is CAN (Controller Area Network), which is frequently used to network electronic modules that share responsibility for a particular set of global functions such as body control or powertrain management. The functionality that these systems can achieve through interoperation is frequently greater than could be
achieved by the modules acting independently. For example, functions such as traction control are implemented by interaction between the anti-lock braking system and the engine management system. Elements of the traction control functionality are to be found in both modules. However, the vehicles themselves are still autonomous. In the future, vehicles will interact, both with each other and with the transport infrastructure, to achieve additional enhancements to the safety and efficiency of road transport. These interactions rely on a means of networking the vehicles. The term “intelligent transportation systems” (ITS) describes a collective approach to the problems of enhancing safety, mobility and traffic handling capacity, improving travel conditions and reducing adverse environmental effects with a future aim of automating existing surface transportation systems.
Tab. 1. Summary of IVC applications
Communication requirements are central to any ITS infrastructure because they enable the transmission and distribution of the information needed for numerous control and coordination functions. For example, the European Commission has a stated aim to reduce fatalities on European roads by 50% by 2010. Much of this reduction will come from the deployment of advanced electronic safety systems. The “e-Safety” initiative [1] is conducting research into a number of areas where electronic systems can substantially enhance road safety. Many of the applications proposed can only realize their full potential through the use of vehicle-to-vehicle and/or vehicle-to-infrastructure communications. As highlighted above, an important subset of the communication requirements of an ITS is inter-vehicle communication (IVC), which is essential in realizing certain ITS applications as summarized in table 1. In order to meet the requirements of IVC, one solution is to implement a distributed communication network between the vehicles on the road. Mobile ad hoc networking is a candidate technology that can be employed to meet this requirement.
2 Introduction to ad hoc Networks
The term “ad hoc network” refers to a self-organizing network consisting of mobile communicating nodes connected by wireless links. Each node participating in an ad hoc network sends and receives messages and also acts as a router, forwarding messages to other nodes. Each node receives and analyses the addressing information in each message: the routing protocol then determines autonomously whether this message is retransmitted. Its decisions are generally based on information it continuously exchanges with its neighbours and measurements it might take of its environment. The networking capability arises because of the cooperative behaviour of nodes in forwarding messages for third-party nodes. In theory an ad hoc network does not require communication with a fixed infrastructure, since coordination within an ad hoc network is decentralized. Wireless LAN (WLAN) technology, as popularly used for networking of computers in offices and homes, is an example of a wireless communication technology that is operated in broadcast mode, i.e. where all nodes can communicate directly with a common point known as an access point. However, WLAN technology has the potential to be operated in an ad hoc mode, provided the network layer in each node is modified to provide routing functions, so that communication need not take place via the access point. Currently, WLAN nodes can be configured to communicate in an ad hoc mode where communication between nodes is achieved through broadcasting messages to nodes within their direct communication range only.
An important aspect of an ad hoc network is that when communication is required between nodes that are not within direct communication range of one another, messages can be forwarded using intermediate nodes. This is illustrated in figures 1 and 2.
Fig. 1. Ad hoc network communications example
Fig. 2. Ad hoc network communications example
In figure 1, node A wishes to transmit a message to node B. However, nodes A and B are not within each other’s communication range. In figure 2, an intermediate node C is introduced which is within the communication range of both nodes A and B. Now, A may transmit a message to B via node C using a “multi-hop” approach, A → C → B. The forwarding of messages in this way is achieved through the use of a routing protocol, the design of which is crucial in performing efficient, reliable and expedited message delivery. Although in principle messages can be forwarded over a long distance in an ad hoc network, there are a number of practical limitations. One such limitation is the delay introduced by the routing protocol, which could potentially accumulate to a significant level over long distances. For some of the safety-related applications listed in table 1, a significant delay may be unacceptable, which highlights the requirement for the routing protocol to be optimized to function efficiently within its target operational environment.
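The multi-hop delivery of figures 1 and 2 can be sketched as a small simulation: two nodes are linked when they are within radio range, and a route is discovered hop by hop. This is an illustrative model only; the coordinates, the 100 m range and the breadth-first search are assumptions for the sketch, not part of any protocol described in this paper.

```python
import math
from collections import deque

def neighbours(nodes, radio_range):
    """Build an adjacency map: two nodes are linked if within radio range."""
    adj = {n: set() for n in nodes}
    for a, (ax, ay) in nodes.items():
        for b, (bx, by) in nodes.items():
            if a != b and math.hypot(ax - bx, ay - by) <= radio_range:
                adj[a].add(b)
    return adj

def find_route(nodes, src, dst, radio_range):
    """Breadth-first route discovery over the ad hoc topology."""
    adj = neighbours(nodes, radio_range)
    queue, parent = deque([src]), {src: None}
    while queue:
        cur = queue.popleft()
        if cur == dst:
            route = []
            while cur is not None:
                route.append(cur)
                cur = parent[cur]
            return route[::-1]
        for nxt in adj[cur]:
            if nxt not in parent:
                parent[nxt] = cur
                queue.append(nxt)
    return None  # network partition: no multi-hop path exists

# Figures 1 and 2: A and B are 180 m apart with a 100 m radio range,
# so direct delivery fails, but intermediate node C bridges the gap.
nodes = {"A": (0, 0), "B": (180, 0), "C": (90, 0)}
print(find_route(nodes, "A", "B", 100))  # ['A', 'C', 'B']
```

Removing node C leaves A and B partitioned, and the discovery returns no route at all — the situation the fixed infrastructure discussed in section 3 is meant to bridge.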
An uncontrolled relaying of messages causes a phenomenon known as a “broadcast storm”, which is similar to an avalanche. For example, consider a message intended for a specific node four hops (or transmission radii) away, which is broadcast to 10 other nodes; each of these nodes broadcasts it to another 10 neighbours, and this process repeats four times. Thus, the message ends up being relayed approximately 10,000 times, most of which are unnecessary as only four message transmissions are actually needed. The “broadcast storm” is avoided by dynamically discovering a route to the destination through the cooperation of the intermediate nodes and by exploiting their knowledge of their neighbourhoods. The challenge for IVC is to achieve the successful delivery of a message when the nodes are moving at high speed, as in ITS applications, and where any route discovered will have a very short lifetime. Current networking technology performs satisfactorily with stationary or slow-moving nodes.
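The arithmetic of the broadcast-storm example can be checked with a short sketch. The branching factor of 10 and the 4-hop depth come from the text; the function names are illustrative.

```python
def flood_transmissions(branching, hops):
    """Total retransmissions when every receiver naively re-broadcasts."""
    return sum(branching ** h for h in range(1, hops + 1))

def routed_transmissions(hops):
    """With a discovered route, one transmission per hop suffices."""
    return hops

# The example above: ~10 neighbours per node, destination 4 hops away.
print(flood_transmissions(10, 4))  # 11110 relays -- of the order of the ~10,000 quoted
print(routed_transmissions(4))     # 4 transmissions actually needed
```

The gap between the two numbers is exactly the overhead that route discovery is designed to eliminate.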
3 Analysis of ITS Communications Technology
There are a number of radio communication technologies that could be considered for ITS applications. These include:
- Wide-area broadcasting, using analogue or digital techniques. Examples include the RDS (radio data system) transmissions superimposed on analogue FM radio broadcasts, and DAB.
- Cellular radio systems, such as GSM and 3G mobile telephones.
- Ad hoc networking techniques such as wireless LAN.
Wide-area broadcasting has the advantage that a single transmission can reach multiple vehicles, but the disadvantage that usually it is not possible to address an individual vehicle nor to verify that the information has been successfully received. Furthermore, it is highly inefficient in terms of spectrum utilization to support any reasonable number of communication sessions with independent vehicles. Broadcast services are usually, but not always, “free to air”. Broadcast services are therefore best suited to disseminating information that needs to be sent to a wide range of consumers, where the data volumes are such that discrimination can be made easily at the receiving node. An example of this is traffic messages transmitted using the RDS-TMC system to vehicle navigation systems, where the receiver in the vehicle applies a filter to the messages based on the vehicle’s intended route. Cellular radio systems have the advantage that a message can be transmitted to an individual vehicle, but the disadvantages that there is usually a latency associated with message delivery and that the services have to be paid for.
This communication latency arises fundamentally from the need to maintain and search a location register, such as the home and visitor registers in GSM. The GSM short messaging service (SMS) is popular for transmitting short items of information, but contrary to popular opinion delivery is not necessarily “instant”. Ad hoc networks appear to show the most promise for supporting the communication needs of ITS applications in the dynamic road traffic environment, and there is considerable interest in the global ITS community in the use of ad hoc networks for transport applications. In Europe, projects such as FleetNet [2] and CarTalk2000 [3] have used a self-organizing ad hoc radio network as the basis for communications. More recently, the PReVENT project [4] includes activities on wireless local danger warning and intersection warning. In the USA, activities are aimed at securing common frequency allocations for vehicle-vehicle communications in the 5.8GHz band alongside the allocations already reserved for DSRC. A key feature of many of these activities is that wireless LAN technology is being proposed as the basis for the communications. In order for ad hoc networks to be used effectively for IVC, the following technical challenges have to be overcome:
- Selection of an appropriate physical layer for radio communications. Wireless LAN technology has been used as a starting point, based on the industrial, scientific and medical (ISM) frequency allocations in the 2.4GHz and 5.8GHz radio bands. However, the ISM bands are unlicensed and may be used for a wide variety of applications. Long-term, dedicated radio bands for IVC may be required. Some bands have already been allocated (e.g. the 5.9GHz allocation for DSRC) but more may be needed in future. A dedicated communication standard, IEEE 802.11p, is currently under development. However, this addresses the physical layer only, i.e. single-hop communication issues in a vehicular environment.
- Determining the level of fixed infrastructure required. Although in theory ad hoc networks can exist with no fixed infrastructure, in practice fixed infrastructure will be required to communicate messages to the wider transport infrastructure and to fill in the gaps in communication that may exist during periods of low vehicular traffic density. The fixed infrastructure will always be needed, as wireless ad hoc networks cannot become arbitrarily large, since their data throughput would reduce to become arbitrarily small.
- Appropriate routing protocols to address the particular needs of IVC in the ITS context, which include (but are not limited to) defined message latency for safety-related applications, scalability, and the need for routing to be sensitive to the location of the vehicle and the direction of travel.
In the remainder of this paper the development of a routing protocol for vehicular ad hoc networks is considered.
4 Review of ad hoc Routing Schemes
Many routing protocols have been proposed for ad hoc networks with the goal of providing efficient routing schemes. These schemes are generally classified into two broad categories: topology-based routing and position-based routing [5].
4.1 Topology-based Routing
Topology-based routing protocols use information about the links that exist in the network to perform packet forwarding. Topology-based routing can be further broken down into three sub-categories: proactive, reactive and hybrid routing. Proactive protocols attempt to maintain a global view of network connectivity at each node through maintaining a routing table that stores routing information to all destinations, computed a priori. Routing information is exchanged periodically, or when changes are detected in the network topology. Routing information is thus maintained even for routes that are not in use. Proactive protocols are “high maintenance” in that they do not scale well with network size or with speed of changes in topology. Reactive (also known as on-demand) routing protocols, however, only create routes when required by the source node and are based on a route request, reply and maintenance approach. Route discovery is by necessity based on flooding (it is assumed that the identity of the nodes is known a priori). On-demand routing generally scales better than proactive routing to large numbers of nodes since it does not maintain a permanent entry to each destination node in the network and a route is computed only when needed. However, a drawback of on-demand protocols is the latency involved in locating the destination node, as well as the fact that flooding is expensive in terms of network resource usage and reduces the network data carrying capacity. Hybrid routing protocols, the third category of topological protocols, are those that combine both proactive and reactive routing, e.g. the zone
routing protocol (ZRP) [6, 7]. The ZRP maintains zones; within a zone proactive routing is used, whereas a reactive paradigm is used for the location of destination nodes outside a zone. The advantage of zone routing is its scalability since the “global” routing table overhead is limited by zone size and route request overheads are reduced for nodes outside the local zone. Further detailed reviews of topological routing techniques can be found in [8] and the references therein.
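The proactive/reactive trade-off of a ZRP-style hybrid can be sketched as a lookup decision: destinations within the zone radius are answered immediately from the proactive table, while destinations outside it fall back to reactive discovery. The table layout, the 2-hop zone and the function names are hypothetical, not taken from [6, 7].

```python
def zrp_lookup(routing_table, zone_radius, dst, send_route_request):
    """Zone routing sketch: answer from the proactive intra-zone table
    when the destination lies within zone_radius hops; otherwise fall
    back to a reactive route request (modelled here as a callback)."""
    entry = routing_table.get(dst)
    if entry is not None and entry["hops"] <= zone_radius:
        return entry["next_hop"]        # proactive: no discovery delay
    return send_route_request(dst)      # reactive: flood a route request

# Hypothetical 2-hop zone around this node.
table = {"B": {"next_hop": "B", "hops": 1},
         "C": {"next_hop": "B", "hops": 2}}
print(zrp_lookup(table, 2, "C", lambda d: None))              # 'B', answered locally
print(zrp_lookup(table, 2, "E", lambda d: "route via RREQ"))  # discovery triggered
```

This captures why zone routing scales: the proactive overhead is bounded by the zone size, and route requests are issued only for out-of-zone destinations.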
4.2 Position-based Routing
Unlike topology-based routing, position-based routing forwards packets based on position information, reducing and in some cases eliminating the overhead generated by frequent topology updates [9]. Position-based routing requires the use of a system that provides positioning information and a location service (LS) [10]. Although mobile nodes can disseminate their positioning information via flooding algorithms, a location service is important for scalability [11]. A location service helps a source node to detect the location of the destination node. A review of location services can be found in [5, 10]. There are two types of packet forwarding paradigms commonly used within position-based routing: restricted flooding and geographic forwarding (also referred to as “greedy forwarding”) [12]. In restricted flooding, a packet is flooded through a region that has been set up using the position of the source and destination nodes. Although restricted flooding is still affected by topology changes, the amount of control traffic is reduced by the use of position information, thus limiting the scope of route searches and reducing network congestion. When a route to the destination cannot be found, network-wide flooding of the route request message occurs, resulting in high bandwidth utilization and unnecessary network congestion. Geographic routing relies on the local state of the forwarding node to determine which neighbouring node is closest to the destination and should receive the packet, and is thus not affected by the underlying topology of the network. The selection of the neighbouring node depends on the optimization criteria of the algorithm. Even though geographic forwarding helps to reduce the routing overhead resulting from topology updates, the lack of global topology information prevents it from predicting topology holes [5].
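The greedy next-hop decision described above can be sketched in a few lines: hand the packet to the neighbour geometrically closest to the destination, but only if that neighbour makes progress. The coordinates and names are illustrative; a node whose every neighbour is farther from the destination than itself has hit a topology hole, which this sketch simply reports rather than recovers from.

```python
import math

def greedy_next_hop(own_pos, neighbour_positions, dest_pos):
    """Geographic (greedy) forwarding: pick the neighbour closest to the
    destination, provided it is closer to the destination than we are."""
    dist = lambda p: math.hypot(p[0] - dest_pos[0], p[1] - dest_pos[1])
    if not neighbour_positions:
        return None
    best = min(neighbour_positions, key=lambda n: dist(neighbour_positions[n]))
    # Topology hole: no neighbour is nearer the destination than this node.
    if dist(neighbour_positions[best]) >= dist(own_pos):
        return None
    return best

neigh = {"X": (50, 10), "Y": (120, 0)}
print(greedy_next_hop((0, 0), neigh, (200, 0)))  # 'Y'
```

Note that the decision uses only local state (the node's own position and its neighbour table), which is exactly why this paradigm is insensitive to global topology changes.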
5 IVC Operating Constraints
5.1 Operating Environment
The majority of ad hoc networking research in the development and comparison of routing protocols has evaluated performance based on a 2-dimensional rectangular plane where nodes change their speed and direction randomly. This differs from the mobility model required for an ad hoc IVC network in several ways. Firstly, the movements of vehicles are spatially restricted to the road structure, thus constraining the mobility pattern significantly. Secondly, the speed of vehicles is often much faster than the node speeds used in the literature. Thirdly and most importantly, the dynamic nature of vehicular traffic flow (i.e. traffic flow patterns and density) must be used in order to evaluate the performance of the routing protocol for the target applications. The effect of differing mobility models on the relative performance of routing protocols has been highlighted in [13, 14]. This emphasizes that a routing protocol modelled without emulating the movement characteristics and spatial constraints of the target application cannot be assumed to exhibit, in a different operational environment, the same quantitative results demonstrated in the literature. Another difference in mapping routing techniques to IVC is that no prior knowledge of the possible set of identifiers exists without maintaining either a centralized or a distributed database. As pointed out in [15], the possible number of identifiers can easily exceed a practical size and will be constantly changing, thus making it unmanageable to maintain such a database. Hence, the node ID must be considered to be a priori unknown. Since vehicles are increasingly being equipped with positioning systems (e.g. GPS), it can be assumed that future vehicles will be equipped with an accurate positioning system as standard, allowing vehicles to be addressed by position.
Vehicle ID must therefore consist of two fields, a geographical location field and a unique node identification number, as a minimum addressing requirement. In applications requiring data to be addressed to a specific destination, a vehicle ID can be discovered through the vehicle’s current position and maintained by each neighbouring node only for as long as necessary. In this way, both conventional distributed and centralized node ID database solutions are avoided completely.
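The two-field minimum address can be written down as a small structure. The field names and the example values are illustrative only; the text specifies just the two components, a geographical location and a unique node identification number.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VehicleAddress:
    """Minimum addressing fields suggested in the text: a geographical
    location plus a unique node identification number. Field names are
    illustrative, not taken from any standard."""
    latitude: float
    longitude: float
    node_id: int  # unique per vehicle; how it is assigned is left open here

# A neighbour keeps such an entry only for as long as the application needs it.
addr = VehicleAddress(latitude=52.45, longitude=-1.73, node_id=0x1A2B3C)
print(addr.node_id)
```

Because the address carries the position itself, a message can be directed towards a region first and resolved to a specific node only near the destination, which is what makes the centralized ID database unnecessary.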
5.2 Routing Protocol Considerations
The potential size of an IVC ad hoc network coupled with the dynamic nature of traffic flow excludes the use of a purely proactive protocol for the following reasons. Firstly, continuous changes in vehicle connectivity will result in constant routing update packets being transmitted, compromising routing convergence (by the time a vehicle receives routing update information, it may already be “stale”). Secondly, as a consequence of control traffic consuming network resources (bandwidth and processing), the delivery of application data will be restricted. Thirdly, as the number of vehicles increases, the size of the routing update packet will increase proportionally, placing extra demands on network resources. However, proactive protocols may be suitable at a local level for a restricted number of vehicles where timeliness of delivery is imperative to the application and the relative velocity between vehicles is low. A purely reactive protocol assumes that the identity of a node is a priori known, in order for it to address a message to a particular destination. However, the creation and maintenance of a vehicle ID database is likely to be prohibitively complex. Even if a vehicle had such information, it would initially have no knowledge of a path to the destination. Therefore, finding a path would delay transmission and, for applications where timeliness of delivery is imperative, this would not be acceptable. Although route caching can be supported, there will still be an initialization period before information is built up, and the freshness of this information may be short-lived due to continuous changes in network connectivity. This further limits the protocol’s scalability. Thus, a purely reactive routing protocol is unsuitable for IVC. For low-priority applications where delay is acceptable, a modified version of the reactive routing scheme, taking into account the addressing issues, may offer a suitable solution. However, for fast-flowing traffic, message delivery may not be possible if links are continuously changing.
A hybrid scheme using pure versions of both the reactive and proactive routing paradigms would not be suitable without modifying their methodologies to take into account position information, although it offers scalability advantages. It might be assumed that, since position-based routing delivers messages based on position and fulfils one of the addressing requirements of IVC, it would provide the best routing solution. However, in the case of restricted flooding, knowledge of a vehicle’s position and ID is assumed to be a priori known so that the message can be flooded to the area where the vehicle is expected to be located. If the vehicle cannot be found then this may result in network-wide flooding in order to find the required destination. This is clearly not acceptable for applications for which timeliness of delivery is imperative. The level of detail of geographical information required to support efficient restricted flooding must include not only the relative position of neighbouring vehicles, but also their direction of motion relative to the vehicular traffic flow and the message destination region. As will be seen in section 6, the justification for maintaining this level of geographical information complexity in the routing layer is dictated by the ITS applications themselves. For low-priority applications, where a vehicle has prior knowledge of the destination, this technique may be suitable, although network-wide flooding for unicast transmissions must be avoided since the potential control overhead in locating a route could tie up network resources unnecessarily. Geographic routing suffers from the requirement for a location service. Although the method used in routing the message to the destination is effectively stateless, the location service will be affected by the underlying connectivity and may delay the delivery whilst waiting for position information. The position information also needs to be accurate up to one hop away from the destination. The algorithmic complexity and maintenance overheads in implementing a location service can be high. A modified version of the geographic forwarding scheme may be appropriate for certain groups of applications, but the implementation of a location service is likely to be prohibitively complex for the range of application scenarios we are considering. The geocast scheme is a technique that can be applied for IVC applications where information is of relevance to vehicles in a particular region, such as incident warning.
6 Proposed IVC Routing Framework
Having investigated the requirements of the various ITS application scenarios in table 1, it became evident that they have quite different quality of service (QoS) demands, message delivery requirements and differing regions to which the data is relevant. The region to which the message is relevant for each application will be referred to as the routing zone of relevance (RZR), which is adapted from [16]. For example, ITS application scenarios such as vehicle platooning and cooperative driving will have a very low threshold in terms of acceptable communication delay, since any excess delay could mean the difference between the application either working as implemented, or potentially causing an accident. The RZR for these applications is considered to be in the near vicinity of the source vehicle. On the other hand, applications such as mobile vending services and traffic information systems are not critically dependent on communication delays and have a wide-area RZR. Thus, it is imperative to assign data priority depending upon the safety-related implications of the application. The message delivery requirements for the various application scenarios are also different; e.g. platoons may require group delivery; an incident warning, a broadcast to vehicles within a specific region; and information for a specific vehicle, such as a reply to a traffic information enquiry, unicast delivery.
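The per-application attributes just described (priority, RZR, delivery type) can be held in a simple profile table that a node consults when queueing outgoing packets. The application names and attribute values below are illustrative examples in the spirit of table 1, not a reproduction of it.

```python
# Illustrative mapping of ITS applications to the QoS attributes described
# in the text; names and values are examples, not taken from table 1.
APP_PROFILE = {
    "cooperative_driving": {"priority": "high", "rzr": "near_vicinity", "delivery": "multicast"},
    "incident_warning":    {"priority": "high", "rzr": "region",        "delivery": "geocast"},
    "traffic_info_reply":  {"priority": "low",  "rzr": "wide_area",     "delivery": "unicast"},
    "mobile_vending":      {"priority": "low",  "rzr": "wide_area",     "delivery": "geocast"},
}

def queue_order(apps):
    """Safety-related traffic is served before delay-tolerant traffic."""
    return sorted(apps, key=lambda a: APP_PROFILE[a]["priority"] != "high")

print(queue_order(["mobile_vending", "incident_warning"]))
# ['incident_warning', 'mobile_vending']
```

The point of the table is that priority, RZR and delivery type are assigned per application rather than per packet, so the routing layer can select its behaviour from a single lookup.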
In order to implement an IVC ad hoc network which meets the requirements of the application scenarios, the need to use different routing scheme paradigms was identified, the selection of which is dependent on the application and its specific priority rating, required RZR and message delivery requirements. It is assumed that an accurate positioning system will be universally deployed in future vehicles (e.g. GPS or Galileo) and that there is neither a centralized, nor distributed, database maintaining a list of vehicle identifiers, as discussed in section 5.1. The message delivery requirements of the IVC applications can be classified into three different categories. The first classification consists of those applications, such as incident warning or an approaching emergency vehicle warning, which require information to be broadcast to a geographic region. The required message delivery type used in this case is a geocasting delivery scheme. The second classification consists of applications such as a response to a traffic information request, where an expected RZR can be determined from the packet sent from the requesting vehicle, using a method similar to the “expected-zone” technique used in [17]. The response message will be specifically addressed to the requesting vehicle using a unicast delivery scheme. The third classification covers applications such as platooning or cooperative driving, where communication between a number of vehicles is required in order to coordinate manoeuvres between vehicles. The required delivery type is multicasting, also known as group delivery. The second and third classifications mentioned above will benefit from local connectivity information, specifically when the message is nearing its destination, and in the case where application communication is on a local scale and timeliness of delivery is an issue.
Maintaining network connectivity at the local level will aid delivery of unicast and multicast data for applications where timeliness of delivery is imperative as the message approaches its destination. Maintaining network connectivity information requires periodic exchanges of control packets. In a highly dynamic environment, where links are formed and broken frequently, the amount of control traffic required to maintain up-to-date connectivity prevents this type of protocol scheme from scaling well with an increase in network size [18]. However, knowledge of network connectivity is considered to be important for the implementation of the second and third classes of IVC applications mentioned previously, for two reasons. Firstly, local connectivity information is important for applications such as platooning, cooperative driving and any application requiring coordination between vehicles where timeliness of delivery is imperative. Secondly, local connectivity knowledge will help reduce the number of retransmissions required in order for a packet to find its destination within the RZR.
The latter applications require communication between specific vehicles, which are generally in the immediate vicinity of the source vehicle. Since these types of applications operate locally, we intend to maintain network connectivity information at the local level in zones called “local zones” (LZ) centred on each node. The size of the zone changes dynamically, depending on local vehicle traffic density, local mobility and local data traffic overhead. The above discussion leads to the conclusion that, to satisfy the communication routing requirements of the plethora of application scenarios we wish to consider, it is necessary to deploy a suite of routing protocols. This work is similar to the FleetNet project [2] in that a wide range of IVC applications has been examined, but unlike [2] the application requirements have been analysed in their totality in order to identify efficiencies through the synergies of their underlying protocol mechanisms. This has resulted in the definition of a routing framework intended to satisfy the message delivery requirements of a number of different classes of IVC applications. The framework utilizes a hybrid routing approach that combines a routing scheme which maintains connectivity data at the local level with a position-based routing scheme beyond the LZ. Figure 3 shows a schematic diagram of the proposed IVC routing framework.
Fig. 3. Proposed ITS routing framework
Within the ITS routing framework, independently of the message delivery type, when the source vehicle lies outside of the RZR the message is forwarded towards the RZR using a routing technique called perimeter vehicle greedy forwarding (PVGF). PVGF is based on the principles of greedy forwarding [10, 12, 19]. When the message reaches the RZR, the routing technique employed within the RZR changes depending upon the required message delivery scheme. This highlights the need for cross-layer communication, as opposed to strict protocol layering, in a vehicular environment. If the message type is geocast, a routing technique called distance deferral forwarding (DDF) is used to deliver the message within the RZR. However, if the message is addressed to a particular vehicle (or vehicles) within the RZR, the message is forwarded to the destination vehicle(s) using local zone routing (LZR) along with perimeter vehicle local zone routing (PVLZR). Both of these routing schemes utilize local connectivity information in order to locate the destination vehicle within the RZR. In the scenario where the source vehicle is a member of the RZR and addresses a message to a specific vehicle within the RZR, both LZR and PVLZR are used to route the message to the destination(s). The following section expands the routing framework presented in figure 3 and discusses its constituent algorithms in outline.
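The selection logic of the framework can be summarized in a single dispatch function. This is a sketch of the decision tree only: the scheme names (PVGF, DDF, LZR/PVLZR) come from the text, while the function signature and string encoding are illustrative assumptions.

```python
def select_routing(in_rzr, delivery_type):
    """Protocol selection following the framework described for figure 3:
    PVGF carries the message towards the RZR; inside the RZR the scheme
    depends on the required delivery type."""
    if not in_rzr:
        return "PVGF"          # perimeter vehicle greedy forwarding towards RZR
    if delivery_type == "geocast":
        return "DDF"           # distance deferral forwarding within the RZR
    if delivery_type in ("unicast", "multicast"):
        return "LZR+PVLZR"     # local zone routing variants locate the vehicle(s)
    raise ValueError(f"unsupported delivery type: {delivery_type}")

print(select_routing(False, "geocast"))  # 'PVGF'
print(select_routing(True, "unicast"))   # 'LZR+PVLZR'
```

The two-level decision (position relative to the RZR first, delivery type second) is what requires the routing layer to read application-level information, i.e. the cross-layer communication noted above.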
6.1
Protocol Properties
The IVC framework is currently being developed for the motorway (freeway) environment and assumes that the “hello” packet header will also contain road identification information, facilitating vehicle classification per road. Further classifications can optionally be added in order to classify vehicles at intersections. When a data packet m is transmitted by vehicle A, which can be either the source vehicle or a vehicle retransmitting m, the decision as to which type of routing protocol to apply depends upon the required type of message delivery (which is application dependent), as well as the location of A with respect to the RZR. For ITS applications where vehicle A is outside the RZR, such as a traffic interrogation request further along the motorway or a broadcast transmission addressed to all vehicles in a particular region, the PVGF protocol is used to forward m towards the RZR. Unlike greedy routing techniques [13, 19], in which recovery techniques are employed to route around topology holes, such recovery mechanisms are not required for a vehicular ad hoc network on a highway. If partitions (topology holes) occur in the network, or no appropriate vehicle class exists in the neighbour table to forward m in the direction of the RZR, the packet is stored in the neighbour waiting table until a vehicle meeting the required classification is detected. In this way the protocol takes advantage of the dynamic nature of vehicular traffic flow: it simply waits until it encounters an appropriate neighbour that can forward the message in the direction of the RZR. The method employed in dealing with topology holes is similar to that applied in [20]. Once the message has reached the RZR, or if the source vehicle is inside the RZR, the routing scheme changes according to the application delivery requirements, depending on whether unicast, multicast, geocast or anycast delivery is required within the RZR. Broadcasting inherently risks congesting the network through packet flooding, and many schemes have been developed to reduce this effect [21, 22]. Applications requiring information such as an accident warning are considered to have safety-related implications and hence require a high-priority delivery service along with high reachability within the RZR. A broadcast scheme that reduces retransmissions while allowing speedy delivery with a high penetration rate within the RZR is therefore imperative in order to prevent unnecessary usage of the transmission medium.
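A minimal sketch of the PVGF forwarding decision, assuming positions are one-dimensional coordinates along the motorway and ignoring vehicle classification, might read as follows; the function and table names are illustrative.

```python
def pvgf_next_hop(neighbours, my_pos, rzr_pos):
    """Greedy choice: pick the neighbour making the most forward
    progress towards the RZR; return None if no neighbour makes
    progress (a topology hole), so the caller parks the packet
    instead of routing around the hole."""
    best, best_progress = None, 0.0
    for node_id, pos in neighbours.items():
        progress = abs(my_pos - rzr_pos) - abs(pos - rzr_pos)
        if progress > best_progress:
            best, best_progress = node_id, progress
    return best

# Store-and-wait on a topology hole: the packet stays in the neighbour
# waiting table until a suitable neighbour is detected.
waiting_table = []
hop = pvgf_next_hop({"C": 80.0}, my_pos=100.0, rzr_pos=200.0)
if hop is None:
    waiting_table.append("m")  # retry on a later "hello" beacon
```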
The technique employed reduces the number of retransmissions through the use of a (re)transmission deferral time, which allows vehicles further away in the required forwarding direction to rebroadcast the packet first. Vehicles between the rebroadcasting vehicle and the source of the transmission then cancel their scheduled rebroadcasts of this packet. Thus, the further a receiving vehicle B is from the transmitting vehicle A, the lower its deferral time, so the vehicle farthest away retransmits m first. This scheme is discussed in more detail in [23]. When the message delivery type within the RZR is either unicast or multicast, the search for the destination utilises the local connectivity information maintained in the LZ. When the destination node is found within a node’s LZ, LZR is used to deliver m: the selection of the next-hop node is made depending on the location of the destination, and this decision is repeated at each node receiving m within the LZ until m reaches its destination. If, however, the destination does not exist in the LZ, then PVLZR is used. In PVLZR, m is forwarded to the node furthest away within the LZ, called the perimeter node; LZR is then used to deliver the message to this perimeter node. At the perimeter node, if the destination is not within its LZ, the above procedure is repeated. Otherwise, if the destination is found within the LZ, LZR is used to route m to its destination.
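The rebroadcast deferral described above can be captured by a simple decreasing function of distance; the linear form and parameter values here are illustrative, as the actual scheme is specified in [23].

```python
def deferral_time(distance, radio_range, t_max=0.010):
    """(Re)transmission deferral in seconds: a receiver farther from
    the transmitter waits less, so the farthest vehicle rebroadcasts
    first and intermediate vehicles cancel their scheduled
    rebroadcasts of the same packet."""
    d = min(max(distance, 0.0), radio_range)
    return t_max * (1.0 - d / radio_range)
```

A vehicle at the edge of the radio range rebroadcasts immediately, while one next to the transmitter waits the full t_max and will normally overhear the rebroadcast first and cancel its own.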
7
Conclusions and Future Work
Developments in road transport require a robust and scalable means of communicating information between road vehicles in order to implement and fully realize the benefits of ITS applications. Mobile ad hoc networks are seen as the most promising technology to fulfil the communications requirements of these applications. This paper has described the development of a routing protocol framework suitable for use in the road transport environment. Ongoing work will verify the performance of the protocols by conducting simulations using a microscopic traffic simulator combined with a network simulation tool in order to derive performance metrics for the proposed protocols.
References
[1] European Commission, “eSafety Initiative”, www.europa.eu.int/information_society/programmes/esafety/text_en.htm
[2] FleetNet project website, www.fleetnet.de
[3] CarTALK 2000 project website, www.cartalk2000.net
[4] PReVENT project website, www.prevent-ip.org
[5] M. Kasemann et al., “Analysis of a Location Service for Position-Based Routing in Mobile Ad Hoc Networks”, in Deutscher Workshop über Mobile Ad Hoc Networks, Ulm, March 2002.
[6] Z.J. Haas and M.R. Pearlman, “The Performance of Query Control Schemes for the Zone Routing Protocol”, in ACM SIGCOMM, 1998.
[7] M.R. Pearlman and Z.J. Haas, “Determining the Optimal Configuration for the Zone Routing Protocol”, IEEE Journal on Selected Areas in Communications, 17(8), Aug. 1999.
[8] E.M. Royer and C.K. Toh, “A Review of Current Routing Protocols for Ad Hoc Mobile Wireless Networks”, IEEE Personal Communications, 6(2), pp. 46-55, 1999.
[9] J. Tian, I. Stepanov and K. Rothermel, “Spatial Aware Geographic Forwarding for Mobile Ad Hoc Networks”, 3rd ACM Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc 2002), Lausanne, Switzerland, June 9-11, 2002.
[10] M. Mauve and J. Widmer, “A Survey on Position-Based Routing in Mobile Ad Hoc Networks”, IEEE Network, pp. 30-39, 2001.
[11] H. Hartenstein et al., “A Simulation Study of a Location Service for Position-Based Routing in Mobile Ad Hoc Networks”, Reihe Informatik, University of Mannheim, June 2002.
[12] B. Karp and H.T. Kung, “GPSR: Greedy Perimeter Stateless Routing for Wireless Networks”, in ACM/IEEE MobiCom, August 2000.
[13] J. Tian et al., “Graph Based Mobility Model for Mobile Ad Hoc Network Simulation”, in 35th Annual Simulation Symposium, San Diego, California, IEEE/ACM, April 2002.
[14] T. Camp, J. Boleng and V. Davies, “A Survey of Mobility Models for Ad Hoc Network Research”, Wireless Communications and Mobile Computing (WCMC), Special Issue on Mobile Ad Hoc Networking: Research, Trends and Applications, 2(5), pp. 483-502, 2002.
[15] L. Briesemeister and G. Hommel, “Overcoming Fragmentation in Mobile Ad Hoc Networks”, Journal of Communications and Networks, 2(3), pp. 182-187, 2000.
[16] W. Kremer, “Vehicle Density and Communication Load Estimation in Mobile Radio Local Area Networks (MR-LANs)”, in IEEE VTC, Denver, Colorado, USA, May 10-13, 1992.
[17] Y.-B. Ko and N.H. Vaidya, “Location-Aided Routing (LAR) in Mobile Ad Hoc Networks”, in 4th Annual International Conference on Mobile Computing and Networking (MobiCom ’98), October 1998.
[18] Z.J. Haas and S. Tabrizi, “On Some Challenges and Design Choices in Ad-Hoc Networks”, in MILCOM 98, Bedford, MA.
[19] I. Stojmenovic, “Position-Based Routing in Ad Hoc Networks”, IEEE Communications Magazine, 40(7), pp. 128-134, July 2002.
[20] L. Briesemeister, L. Schafers and G. Hommel, “Disseminating Messages Among Highly Mobile Hosts Based on Inter-Vehicle Communication”, in IEEE Intelligent Vehicles Symposium, 2000.
[21] M. Sun and T.H. Lai, “Location Aided Broadcast in Wireless Ad Hoc Networks”, in Proceedings IEEE WCNC, Orlando, FL, 2002.
[22] S. Basagni, I. Chlamtac and V.R. Syrotiuk, “Geographic Messaging in Wireless Ad Hoc Networks”, in Proceedings of the IEEE 49th Annual International Vehicular Technology Conference (VTC ’99), vol. 3, pp. 1957-1961, Houston, TX, May 16-20, 1999.
[23] D.A. Topham, D.D. Ward, Y. Du, T.N. Arvanitis and C.C. Constantinou, “Routing Framework for Vehicular Ad Hoc Networks: Regional Dissemination of Data Using a Directional Restricted Broadcasting Technique”, to be presented at WIT 2005, Hamburg, Germany, March 2005.
David Ward, Debra Topham
MIRA Limited
Watling Street
Nuneaton CV10 0TU
UK
[email protected]
[email protected]

Costas Constantinou, Theo Arvanitis
Electronic, Electrical and Computer Engineering
The University of Birmingham
Edgbaston
Birmingham B15 2TT
UK
[email protected]
[email protected]

Keywords: intelligent transportation systems, eSafety, road safety, vehicle to vehicle communications, ad hoc networks, routing protocols
Generic Remote Software Update for Vehicle ECUs Using a Telematics Device as a Gateway G. de Boer, P. Engel, W. Praefcke, Robert Bosch GmbH Abstract Exchanging the software in the ECUs during the lifetime of a vehicle will become a mandatory requirement in order to update the vehicle’s functionality, to cope with software errors and to avoid recalls. Remote software download will thus become a dominant aspect of the manufacturer’s vehicle maintenance service in the future. Considering the number of ECUs in the vehicle, a crucial issue is the flash procedure between the gateway and the ECU. There are many efforts towards standardizing the ECU interfaces and the communication protocols for software download; nevertheless, this standardization is neither finished nor well established. In this paper, we propose a very generic and flexible approach for the update of the vehicle’s software using a telematics device as a gateway between the remote download server and the local ECUs within the vehicle. This work has been carried out in the context of the EAST-EEA project within the ITEA research programme funded by the German Federal Ministry of Education and Research (BMBF) [2].
1
Scenario Description
Innovation in automotive electronics has become increasingly complex. As microprocessors become less expensive, more and more mechanical components and functions are being replaced by software-based functionality. As a result, high-end vehicles now contain more than 60 electronic control units (ECUs). Market forecasts predict that the amount of software in a vehicle doubles every two to three years. Considering this trend, maintaining ECU software will become a crucial aspect of the manufacturer’s vehicle maintenance service in the future. Exchanging the software in the ECUs during the lifetime of a vehicle will become a mandatory requirement in order to update the vehicle’s functionality, to cope with software errors and to avoid recalls. Mechanisms for remote management of the vehicle software, which allow access to the vehicle in the field, are becoming more and more important. A key use case is the remote update of ECU software. In this “software download” scenario, typically a powerful ECU with an Internet connection to the manufacturer plays the role of a gateway, receiving software from a remote maintenance server and distributing it to the affected ECUs in the vehicle via the local network, i.e. via the Controller Area Network (CAN) [5].
Fig. 1.
Remote Maintenance Scenario
Taking into account the number of ECUs in the vehicle, a crucial issue is the flash procedure between the gateway and the ECU. A standardized flash procedure, identical for each ECU, would be helpful in order to avoid high complexity at the gateway. There are many efforts towards standardizing the ECU interfaces and the communication protocols for software download; nevertheless, this standardization is neither finished nor well established. Even though some manufacturers have introduced a quasi-standard within some high-end vehicles based on the Keyword Protocol 2000 (KWP2000) [6], a generic protocol common to all ECUs is out of sight today. Because KWP2000 defines an optional set of diagnostic services but neither mandatory service sequences and associated state machines nor service parameter values, flash procedure variants with different security mechanisms, flash services and flash sequences remain the status quo, even within one vehicle. In this paper, we propose a generic mechanism for wireless remote update of ECU software (“software download”) using an infotainment head unit as a gateway between the remote download server and the local ECUs within the vehicle. The paper describes the basic principles of this solution and gives an overview of the applied technologies and standards. It describes the overall system architecture, including the software architecture of a validator system. This work has been carried out in the context of the EAST-EEA project within the ITEA research programme funded by the German Federal Ministry of Education and Research (BMBF).
2
Open Service Gateway Initiative (OSGi)
Some important new challenges posed by electronics in vehicles are the growing complexity of in-vehicle software, the shorter life-cycles of infotainment systems and applications, and the need to reduce the cost of software while significantly increasing its reliability and security. One of the most attractive and promising ideas addressing these new requirements is the platform defined by the Open Service Gateway Initiative (OSGi) [3]. The goal of the OSGi is to create open specifications for the delivery of multiple services over wide area networks to local networks and devices. Members of the OSGi come from the software, hardware and service provider businesses. OSGi concentrates on the complete end-to-end solution architecture from remote service provider to local device, and in particular on the specification of a service gateway that functions as a platform for many communications-based services. The specification defines an open architecture for software vendors, network operators and service providers to develop and deploy services in a harmonized way. An OSGi service gateway provides a general framework for building and managing various software applications. The service gateway enables, consolidates and manages Internet, voice, data and multimedia communications to and from the home, office and other locations. It also functions as an application server for high-value services such as telematics, energy management, control, security services, safety, health care monitoring, e-commerce, device control and maintenance. The gateway enables service providers to deliver services to client devices on the local network. In future multimedia systems in the vehicular environment, the devices and the central infotainment unit connected to a bus system can be seen as local network elements; in this manner a service provider could connect to the central infotainment unit and the devices to provide its service.
According to the contributors of OSGi, the service gateway will likely be integrated, in whole or in part, into existing product categories such as (digital and analogue) set-top boxes, cable modems, routers, alarm systems, consumer electronics and PCs. Key benefits of OSGi are:
- Platform independence: OSGi’s APIs can be implemented on a wide range of hardware platforms and operating systems
- Application independence: OSGi’s specifications focus on defining common implementation APIs
- Security: the specification incorporates various levels of system security features, ranging from digital signing of downloaded modules to fine-grained object access control
- Multiple services: hosting multiple services from different providers on a single service gateway platform
- Multiple local network technologies: e.g. Bluetooth, HAVi, HomePNA, HomeRF, IEEE-1394, LonWorks, powerline communication systems, USB, VESA (Video Electronics Standards Association)
- Multiple device access technologies: UPnP, Jini
- Installation of new services via remote connections, even into a running environment

In more detail, the OSGi provides a collection of APIs that define standards for a service gateway. A service is a self-contained component that performs certain functionality, usually written with its interface separated from its implementation, and that can be accessed through a set of defined APIs. These APIs comprise a set of core and optional APIs that together define an OSGi-compliant gateway.
Fig. 2: OSGi Service Gateway Components, Release 3
The core APIs address service delivery, dependency and life-cycle management, resource management, remote service administration, and device management. The essential system APIs are listed below.
- Device Manager: recognition of new devices and automatic download of required driver software
- HTTP Service: provides web access to bundles and servlets
- Log Service: logging service for applications and the framework
- Configuration Admin: configuration of bundles, maintenance of the configuration repository
- Service Tracker: maintains a list of active services
- User Admin: user administration, maintenance of data for authentication and user-related data, i.e. private keys, passwords, bio-profile, preferences
- Wire Admin: administration of local communication links
- Measurement: utility class for the consistent handling of measurements based on SI units
- Position: utility class providing an information object describing the current position, including speed and track information
- IO Connector Service: intermediate layer that abstracts communication protocols and devices
- Jini Service: provision of Jini to services within OSGi
- UPnP Service: provision of OSGi services in UPnP networks, UPnP device control

The optional set of APIs defines mechanisms for client interaction with the gateway and data for management and, in addition, some of the existing Java APIs, including Jini. Moreover, OSGi Service Platform 3 provides mechanisms supporting current residential networking standards such as Bluetooth and HAVi. A very attractive feature when dealing with remote administration services is the capability of the platform to be maintained independently of garage services. While the OSGi system was originally designed to provide powerful mechanisms for remotely manipulating and managing software on the embedded service gateway, this feature can be extended beyond the embedded device. With the strategy described in the following sections it is possible to perform software management on ECUs which are attached to the gateway with the same flexibility and security as within the gateway itself. This also comprises wireless remote access and thus avoids the return of the car to a maintenance station.
3
System Description
Making use of an OSGi infrastructure at the manufacturer’s site, the gateway is administrated from the remote infrastructure. This includes software download (push) and installation, the start of new software packages, and the uninstallation of software. In principle, an OSGi gateway can be implemented on an arbitrary system. In an automotive environment it is sensible to integrate the gateway into an existing powerful telematics system which already provides a wireless communication connection [1]. Modern telematics systems are sophisticated, software-intensive systems and typically integrate a broad range of infotainment devices, including communication features based on e.g. GSM/GPRS or UMTS. In our scenario, we use the head unit of the infotainment domain as the host for the remote OSGi gateway to the vehicle.
Fig. 3.
Main Components
The components of the remote software download scenario according to fig. 3 are listed below.
Head unit: a powerful telematics system hosting the Java-based OSGi service gateway. It is connected to the local vehicle network (CAN [5]) in order to access the ECUs in the vehicle. For software download and diagnosis purposes, this is done via a CAN transport layer software module according to ISO 15765 [7], accessed through the so-called Java Native Interface (JNI), which gives access to system interfaces outside the Java environment (e.g. lower network and device functions as well as operating system functions).
ECU: an electronic control unit possessing a flash loader and a CAN interface, which allows the download of new software into the flash memory of the ECU via the CAN transport protocol ISO 15765.
Remote Administration Server: the administration and maintenance server provides services for the remote administration of the gateway. This includes the push of new Java software into the gateway and its installation and start. The head unit is administrated from the infrastructure by means of the remote administration server, which may download, install, uninstall, start and stop so-called OSGi bundles on the gateway. An OSGi bundle is a jar archive and represents the deployment unit for shipping new services. Access to the gateway is performed through standardized OSGi APIs. The administration and maintenance server also provides services and software for the remote flash procedure: it maintains a database of “flash bundles” that may be downloaded to the HU.
A key issue is the content of the software package “flash bundle” (figure 4): it contains the software to be installed on the ECU plus related information about the flash process (e.g. configuration data) and, additionally, a slim Java application, the so-called reprogramming controller, which later controls the flash procedure of the ECU. The reprogramming controller contains dedicated KWP2000 service calls [6] and is tailored to the ECU to be flashed. This is a very generic approach that makes it possible to cope with different ECUs and flash loaders. Optionally, a GUI for visualization, tailored to the HU requirements, may be added.
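The constituents of a flash bundle can be modelled as a simple record; the field names below are an illustrative sketch, not the actual EAST-EEA bundle format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlashBundle:
    """Sketch of a flash bundle's contents as described above."""
    ecu_id: str                # ECU the bundle is tailored to
    flashware: bytes           # software image to install on the ECU
    config: dict               # flash-process configuration data
    controller: str            # reprogramming controller driving the KWP2000 sequence
    gui: Optional[str] = None  # optional visualization tailored to the HU

bundle = FlashBundle(
    ecu_id="ECU_A",
    flashware=b"\x12\x34",
    config={"block_size": 128},
    controller="ReprogControllerEcuA.jar",
)
```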
Fig. 4.
Flash Bundle
4
Downloading and Installing ECU Software
The installation of new ECU software is performed in several steps, as depicted in figure 5. First, based on standardized Java/OSGi mechanisms, a software package containing the new ECU software is downloaded to the HU using a TCP/IP connection (e.g. via GSM/GPRS). This is done in the following two steps: After mutual authentication of the remote administration service and the HU, the software configuration of the vehicle is identified at the administration side based on the authentication information. The configuration of each vehicle is stored in a central database and assigned to a unique vehicle ID. After authentication, the server selects (or builds) a proper flash bundle dedicated to the ECU to be flashed (in the example, ECU A). The flash bundle is transferred to the HU, where it is installed and activated within the OSGi framework by means of standardized services provided by the OSGi gateway.
Fig. 5.
Download Process
Second, triggered by a software element in the vehicle or from the maintenance site, the reprogramming controller is executed. It connects itself to the CAN transport layer module and drives the flash process according to the ECU flash loader specification. This process consists of the following steps:
- The reprogramming controller initiates the communication with the flash loader unit on the ECU using the CAN/KWP2000 protocol and handles the flash process. Since the reprogramming controller executes dedicated KWP2000 service calls according to the ECU flash loader protocol, it is possible to reprogram different ECUs regardless of their flash loader. Herein lies the key aspect of the generic flash process: ECUs having different requirements regarding their flash procedure are not handled by one bulky flash mechanism pre-installed on the HU and applicable to all presently known ECUs. Rather, they are treated at the most generic level of communication, the KWP2000 calls, and the software that handles the individual differences (the reprogramming controller and the configuration data) is transmitted together with the payload. This strategy even accommodates future ECUs whose flash procedures may differ from those of today, and provides maximum flexibility.
- The local ECU flash loader installs the downloaded software in the ECU.
- After installation of the ECU software, a notification about the installation result is given to the reprogramming controller and passed back to the server.
After successful installation of the ECU software, the reprogramming controller is terminated and the complete flash bundle is uninstalled and erased from the system. With this step, system resources are de-allocated and the interface to the ECU is removed.
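Putting the steps of this section together, the end-to-end procedure can be sketched as a simple driver; the class and method names are illustrative stand-ins for the server, head unit and ECU roles, not the actual system's interfaces.

```python
class Server:
    """Stand-in for the remote administration and maintenance server."""
    def select_bundle(self, vehicle_id):
        # Look up the vehicle's software configuration, pick a bundle.
        return {"target": "ECU_A", "flashware": b"\x12\x34"}

class HeadUnit:
    """Stand-in for the OSGi gateway on the head unit."""
    vehicle_id = "VIN-0001"

class Ecu:
    """Stand-in for an ECU with a flash loader behind ISO 15765."""
    def flash(self, bundle):
        return "ok"

def remote_update(server, hu, ecu):
    """Authenticate, download and activate the flash bundle on the HU,
    let the reprogramming controller drive the ECU flash loader, report
    the result, then uninstall the bundle to free resources."""
    steps = ["authenticate"]                  # mutual authentication of server and HU
    bundle = server.select_bundle(hu.vehicle_id)
    steps.append("install_bundle")            # OSGi install + start on the HU
    steps.append("report:" + ecu.flash(bundle))  # result passed back to the server
    steps.append("uninstall_bundle")          # bundle erased, ECU access removed
    return steps
```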
5
Summary
A flexible, secure and robust mechanism for wireless remote update of ECU software using an OSGi-compliant telematics device as a gateway between the remote download server and the local ECUs within the vehicle has been proposed. Since the reprogramming controller dedicated to a specific ECU is downloaded together with the ECU flashware in one software package, this solution averts the danger of incompatibilities between the HU and the ECU and reduces the complexity of the HU software: only a standardized CAN API has to be provided. Moreover, since the flash bundle is erased from the system after the flash process, access to the ECU is possible only temporarily, which improves system security. In view of the fact that the OSGi infrastructure may be used for software distribution, vehicle ECU updates can be controlled via an OSGi infrastructure and multiple vehicles can be updated in parallel.
References
[1] V. Vollmer, M. Simons, A. Leonhardi, G. de Boer, “Architekturkomponenten für zukünftige Multimediasysteme im Fahrzeug”, 24. Tagung “Elektronik im Kfz”, Haus der Technik, Essen, June 2004
[2] ITEA/EAST-EEA: Project information; www.itea-office.org
[3] Open Services Gateway Initiative: OSGi Service Platform Release 3; www.osgi.org
[4] Java 2 Platform, Standard Edition; java.sun.com
[5] ISO 11898: Road vehicles — Controller Area Network (CAN)
[6] ISO 14230: Road vehicles — Diagnostic systems — Keyword Protocol 2000
[7] ISO 15765: Diagnostics on CAN, Part 2: Network layer services
Dr. Gerrit de Boer, Dr. Werner Praefcke, Peter Engel
Robert Bosch GmbH, FV/SLH
P.O. Box 77 77 77
D-31132 Hildesheim
Germany
[email protected]
[email protected]
[email protected]

Keywords: remote software download, remote maintenance, telematics gateway, OSGi
High Performance Fiber Optic Transceivers for Infotainment Networks in Automobiles T. Wipiejewski, F. Ho, B. Lui, W. Hung, F.-W. Tong, T. Choi, S.-K. Yau, G. Egnisaban, T. Mangente, A. Ng, E. Cheung, S. Cheng, ASTRI Abstract We have developed fiber optic transceivers (FOTs) based on leadframe and molding technology for large-core PCS optical fiber systems. The transmitter module contains a VCSEL light source, an electronic driver chip and some passive electronic components. The operating wavelength of the laser chip is in the near infrared at 850nm. An internal control circuit stabilizes the optical output power to within 1dB variation over the entire operating temperature range from -40°C to +105°C. Eye diagrams taken at 500Mbps are wide open from -40°C to 105°C. The extinction ratio is larger than 10dB and the rise and fall times are on the order of 0.5ns. The FOT is highly reliable and stable over 1000 temperature cycles from -40°C to 125°C. For the receiver side we have developed high-speed MSM photodetectors. The large detector area relaxes the alignment tolerance for coupling to the core of the optical fiber. The MSM photodetector is capable of data rates of 3.2Gb/s. At this speed the sensitivity is better than -18dBm for the MSM photodetector co-packaged with a suitable transimpedance amplifier (TIA). The technology meets the requirements of current and future infotainment networking applications.
1
Introduction
Optical interconnects for telecom and datacom applications have been in widespread use for many years. We briefly discuss the historical background of fiber optic applications and their migration into automobiles.
1.1
Migration of Photonics Applications
Figure 1 shows the migration of photonics applications over the last decades. Since the 1980s fiber optics has been widely applied in long-haul telecom systems. These systems use quartz glass single mode fibers of 9µm core diameter to maximize distance and speed. In the early 1990s, fiber optics applications penetrated from telecom into datacom systems, where transmission distances are as short as 100m or even less. Multimode optical fibers of 50 or 62.5µm core diameter are employed instead of single mode fibers in order to relax the optical coupling tolerances. Transceiver packages migrated from butterfly packages to TO-can packages, which have much lower cost. The relaxed tolerances and higher manufacturing volumes enabled interconnects of lower cost compared to typical telecom systems.
Fig. 1.
Photonics applications migration from high performance driven telecom to datacom to low cost bitcom
Since the beginning of this millennium, new applications [1-4] have been emerging in the fields of automotive, industrial control, consumer electronics and optical backplane interconnects (Fig. 2). These applications are summarized as bitcom. They typically exhibit transmission distances from 10cm to 30m. Since the required bandwidth-distance product is smaller, one can use large-core fibers such as plastic optical fiber (POF) [5] and polymer cladded silica (PCS) fiber. The core diameters of these fibers are normally 0.2mm to 1mm. The large optical core of POF or PCS fibers relaxes the mechanical alignment tolerances of the optical components; as a result, passive alignment can be realized. Conventional electronics packaging technologies such as automatic pick-and-place and transfer molding can be employed in the mass production of optical transceivers. The manufacturing cost of optical interconnects can also be substantially reduced due to the relaxed mechanical tolerances. From telecom to datacom to bitcom, the typical cost of fiber optic transmitter components can be reduced by about a factor of ten, depending on specifications and requirements.
Fig. 2.
Short distance optical interconnect applications with their typical data rates: A) automotive infotainment and safety systems, B) industrial control systems, C) home networking, D) optical backplanes in high-end computers
1.2
Fiber Optics Technology for Automotives
Fiber optic networks were introduced into automobiles several years ago. MOST and IDB1394 are standards for the infotainment system [6-7]. Many European car manufacturers have already adopted the MOST system in their car models. The fiber optic transceiver production volume for automotive applications is already several million units per annum. Besides the infotainment system, fiber optics has also been introduced into the safety domain: the Byteflight system of BMW [8] pioneers fiber optic links in the airbag control system.
Tab. 1.
Benefits of fiber optics
The key benefits of fiber optic links are the absence of electromagnetic interference (EMI), both in terms of noise creation and susceptibility. In addition, fiber optic cables are thin, lightweight and highly flexible. The fiber optic connectors are robust and can easily carry high speed signals up to several hundred Mb/s (Tab. 1). A more detailed comparison between copper wires and optical fiber is shown in Table 2. The unshielded twisted pair (UTP) and shielded twisted pair (STP) cables are most commonly used, along with coaxial-type cables for other high speed applications. The EMI immunity, light weight and small bending radius of optical fibers are their main benefits for automotive applications.
Tab. 2. Comparison between copper wires and optical fiber.

1.3 Physical Layer of Infotainment Networks
We develop fiber optic transceiver (FOT) components for automotive applications. Figure 3 illustrates how these components would be placed in a typical ring architecture of the infotainment system. All infotainment devices such as the radio head unit, the DVD player, or the GPS navigation are connected by an optical fiber. A pair consisting of an FOT transmitter and an FOT receiver is placed in each device. Currently plastic optical fiber (POF) and 650nm LEDs are used in the fiber optic systems [1]. These systems exhibit a power budget of around 14dB and an operating temperature up to 85°C or 95°C. Therefore the current fiber systems are limited to the passenger compartment. For roof top implementations a maximum temperature of 105°C would be required, and for the engine compartment even up to 125°C. Polymer cladded silica (PCS) fiber is under consideration to overcome the current operating temperature limitations. This fiber can be operated even at 125°C.
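The power budget figures above can be turned into a simple link-margin calculation. The sketch below is illustrative only: the transmit power, receiver sensitivity, and loss figures are assumed example values, not numbers from this paper.

```python
def link_margin_db(tx_power_dbm, rx_sens_dbm, fiber_loss_db_per_m,
                   length_m, connector_loss_db, n_connectors):
    """Remaining margin of a fiber optic link in dB: the power budget
    (TX power minus receiver sensitivity) minus fiber and connector losses."""
    budget_db = tx_power_dbm - rx_sens_dbm
    losses_db = fiber_loss_db_per_m * length_m + connector_loss_db * n_connectors
    return budget_db - losses_db

# example: a 14 dB budget link over 10 m of POF with two in-line connectors
margin = link_margin_db(tx_power_dbm=-2.0, rx_sens_dbm=-16.0,
                        fiber_loss_db_per_m=0.15, length_m=10.0,
                        connector_loss_db=2.0, n_connectors=2)
```

With these assumed numbers the link retains 8.5dB of margin; a negative result would indicate that the link cannot close.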
Fig. 3. ASTRI transmitter and receiver components for the infotainment system in automobiles. The various infotainment devices (such as, for example, radio “A,” CD player “B,” navigation system “C”) are connected in a fiber optic ring.
Vertical-cavity surface-emitting lasers (VCSELs) are suitable light sources for the new PCS fiber. The VCSEL has a very narrow beam emission angle. Therefore the light output of the VCSEL can easily be coupled into the 200µm core of the PCS optical fiber. Since the VCSEL is a laser element, its maximum modulation speed is very high (>>GHz). VCSELs provide data transmission in the Gb/s range. Thus, optical systems with VCSELs are future proof for higher speed requirements in new systems. This is an advantage in particular for video type applications. Table 3 shows a comparison of the characteristics of VCSELs and other light sources used in optical fiber systems. VCSELs combine the advantages of LEDs, such as easy packaging, with the high performance of edge-emitting lasers. Thus, they are almost the ideal choice for short distance optical interconnects. Resonant-cavity LEDs (RCLEDs) are similar to conventional LEDs, but they can provide somewhat higher speed. Their fabrication complexity is considered equivalent to VCSELs. However, they are not yet widely used in optical communication applications. VCSELs, on the other hand, have been employed in datacom modules for many years. Their production volume is millions of units per annum. VCSELs have also shown good reliability results in various applications.
Tab. 3. Optical sources for optical fiber systems

1.4 Comparison of Physical Layer
A comparison between the VCSEL/PCS fiber system and the LED/POF system is shown in Tab. 4. The VCSEL/PCS fiber system exhibits a power budget of around 21dB and an operating temperature range from -40°C to 105°C. In the future, the operating temperature can potentially be increased to 125°C. The higher power budget enables the system designer to incorporate more in-line connectors per link. Thus, the system flexibility is greatly improved.
Tab. 4. Comparison between VCSEL/PCS fiber system and LED/POF system.
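The connector count argument can be made concrete with a short calculation. The per-connector loss, fiber loss, and safety margin below are assumptions for illustration; only the 21dB and 14dB power budgets come from the comparison above.

```python
import math

def max_inline_connectors(power_budget_db, fiber_loss_db,
                          connector_loss_db, safety_margin_db=3.0):
    """How many in-line connectors a link can tolerate before the
    power budget is exhausted."""
    usable_db = power_budget_db - fiber_loss_db - safety_margin_db
    return max(0, math.floor(usable_db / connector_loss_db))

# assumed: 3 dB total fiber loss and 2 dB loss per in-line connector
n_pcs = max_inline_connectors(21.0, 3.0, 2.0)  # VCSEL/PCS budget
n_pof = max_inline_connectors(14.0, 3.0, 2.0)  # LED/POF budget
```

Under these assumptions the 21dB system supports 7 in-line connectors versus 4 for the 14dB system, which is the flexibility gain the text refers to.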
The PCS fiber has the advantage of low optical loss at the standard VCSEL wavelength of 850nm. Standard POF is not suitable for this wavelength, since the optical loss is too high. Even at the loss minimum wavelength of 650nm the attenuation of POF is still relatively high. Another disadvantage is that the absorption minimum covers only a narrow spectrum. Therefore the loss of POF increases when the operating wavelength of the 650nm LED moves to longer wavelengths at higher temperatures. This limits the total power budget of the POF link. For telecom and datacom applications glass fibers are used because of their superior transmission performance in terms of speed and distance. The quartz glass single mode fiber has a very low optical loss of 0.2dB/km at 1550nm. At 1310nm the optical loss is 0.4dB/km and the chromatic dispersion is zero. Therefore transmission capacity in the Tb/s range has been demonstrated with single mode fibers. For shorter distances of hundreds of meters multimode fibers are suitable. Table 5 summarizes the different performance parameters for the various types of optical fiber. For automotive applications PCS fiber seems to be a good choice, because it also offers a small bending radius of 15mm or less.
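These attenuation figures translate into link loss as follows. The single mode fiber values (0.2 and 0.4dB/km) are from the text; the POF and PCS entries are typical literature values, assumed here for illustration only.

```python
# attenuation in dB/km; SMF values from the text, POF/PCS values are
# typical literature figures (assumptions, not from this paper)
ATTENUATION_DB_PER_KM = {
    ("POF", 650): 150.0,
    ("PCS", 850): 8.0,
    ("SMF", 1310): 0.4,
    ("SMF", 1550): 0.2,
}

def fiber_loss_db(fiber, wavelength_nm, length_m):
    """Total attenuation of a fiber span of the given length."""
    return ATTENUATION_DB_PER_KM[(fiber, wavelength_nm)] * length_m / 1000.0
```

Over a 10m automotive link, POF at 650nm would lose about 1.5dB while PCS at 850nm loses well under 0.1dB, one reason for the higher power budget of the PCS system.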
Tab. 5. Comparison of various optical fibers used in telecom, datacom, and bitcom

2 Fiber Optic Transceivers
We have developed small size VCSEL based fiber optic transceiver (FOT) modules with plastic packages for PCS fiber systems [9]. A photo of the FOT modules is shown in figure 4. The module on the left is the transmitter; on the right is the receiver module. The outer form factors of the two modules are mirror images of each other. The module dimensions of 9.7mm x 6.2mm x 3.6mm are very small compared to standard SFP modules. The maximum operating temperature of these modules is up to 105°C, much higher than that of typical datacom (85°C) or telecom (70°C) devices.
Fig. 4. High speed fiber optic transceiver (FOT) modules for a wide operating temperature range from -40°C to 105°C. Left: transmitter module (Tx), right: receiver module (Rx)

2.2 Output Power Stability
One challenge for the transmitter design is to stabilize the optical output of the VCSEL at high temperature. Telecom products use a thermo-electric cooler to stabilize the device temperature, but the cost would be too high for bitcom products. In our approach we use a monitoring photo detector (MPD) to receive part of the optical output and feed the photocurrent back into the laser diode driver (LDD). Figure 5 shows the electrical circuit diagram of the transmitter. The LDD has an automatic gain control function to adjust the bias condition of the VCSEL according to the feedback photocurrent. An additional external input signal can be used to level the average laser output power.
Fig. 5. Electrical circuit diagram of the VCSEL based fiber optic transmitter.
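The feedback loop described above can be sketched as a discrete-time control loop. Everything here is an idealized model with assumed numbers (a linear VCSEL/MPD response of 100µA per mA of bias, a 500µA target); the real LDD implements this in analog hardware.

```python
def apc_update(bias_ma, monitor_ua, target_ua, gain_ma_per_ua=0.005):
    """One automatic power control step: nudge the VCSEL bias so that the
    monitor photocurrent (proportional to optical output) approaches the target."""
    return bias_ma + gain_ma_per_ua * (target_ua - monitor_ua)

def run_apc(slope_ua_per_ma=100.0, target_ua=500.0, steps=60, bias_ma=1.0):
    """Iterate the loop against an assumed linear VCSEL/MPD response and
    return the settled monitor current."""
    for _ in range(steps):
        monitor_ua = slope_ua_per_ma * bias_ma
        bias_ma = apc_update(bias_ma, monitor_ua, target_ua)
    return slope_ua_per_ma * bias_ma
```

Because the loop regulates the monitor current rather than the bias itself, the output power stays constant even when the VCSEL efficiency (the slope) drifts with temperature.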
We have obtained a stable optical power output with less than 0.4dB change over a wide temperature range from -40°C to 105°C, as shown in figure 6. This excellent value enables a tight output power specification of the transmitter.
The tight power specification results in a better link margin for the transmission system.
Fig. 6. FOT optical output power variation over a temperature range from -40°C to 105°C.

2.3 Optical Coupling Tolerance
We have designed an optical coupling system to couple the VCSEL output power into the PCS fiber. We optimized the optical coupling design to achieve a long working distance with a wide lateral alignment tolerance. Figure 7 shows a schematic of the optical coupling scheme.
Fig. 7. Schematic optical coupling design.
A ray tracing model was built to calculate the coupling efficiency. The coupling efficiency is simulated for different VCSEL offset positions and fiber offset positions. By using the optimized coupling lens, the -3dB lateral coupling tolerance of the VCSEL transmitter to the PCS fiber is around ±100µm. This good value is obtained over a long longitudinal working distance of 500µm. Figure 8 displays some measurement results.
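A simple closed-form stand-in for the ray tracing result is a quadratic loss profile pinned to the quoted -3dB tolerance. This is an idealized model for illustration, not the actual simulated curve.

```python
def lateral_coupling_loss_db(offset_um, tol_3db_um=100.0):
    """Excess coupling loss vs. lateral offset, modeled so that the loss
    reaches exactly 3 dB at the specified -3dB tolerance."""
    return 3.0 * (offset_um / tol_3db_um) ** 2
```

The model reproduces the headline number (3dB at 100µm offset) and suggests that at half the tolerance the penalty is only 0.75dB, which is why the relaxed tolerance makes passive alignment practical.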
Fig. 8. Coupling tolerance of VCSEL transmitter to PCS fiber.

2.4 High Speed Performance
The current MOST infotainment system runs at a data rate of 25Mb/s. Since the data format is a bi-phase signal, the physical transmission rate is 50Mb/s. The VCSEL based FOT can easily meet the timing specifications of the 50Mb/s data link. The fast turn-on and turn-off signal transitions provide a low jitter value for the entire optical system.
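Bi-phase (Manchester) coding, which doubles the line rate as described above, can be sketched in a few lines. This is the generic textbook encoding, not necessarily the exact symbol convention MOST uses.

```python
def biphase_encode(bits):
    """Manchester / bi-phase coding: each data bit maps to two line symbols
    with a guaranteed mid-bit transition, so the physical rate is twice the
    data rate (25 Mb/s data -> 50 Mb/s on the fiber)."""
    line = []
    for b in bits:
        line.extend([1, 0] if b else [0, 1])
    return line
```

The guaranteed transition in every bit keeps the signal DC-balanced and makes clock recovery easy, at the cost of the doubled line rate noted in the text.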
Fig. 9. FOT eye-diagrams at 500Mb/s data rate for temperatures of (a) 25°C, (b) 95°C, and (c) 105°C.
The VCSEL FOT is also capable of much higher data rates in the Gb/s range. Other applications such as the IDB 1394 S400 standard require a data rate of 500Mb/s. Therefore we demonstrate the high speed performance of the FOT at the higher data rate of 500Mb/s. Figure 9 depicts the eye diagrams of the module at a 500Mb/s data rate for the different operating temperatures of 25°C, 95°C, and 105°C. The eye diagrams are wide open, indicating good and error-free transmission performance. The extinction ratio is greater than 10dB for all temperatures. The rise and fall times are on the order of 0.5ns.
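The two figures of merit quoted here, extinction ratio and rise time, are computed as follows. The 0.35/t_r bandwidth rule is a common first-order approximation, and the example power levels are assumed values, not measurements from this paper.

```python
import math

def extinction_ratio_db(p1_mw, p0_mw):
    """Ratio between the optical '1' and '0' power levels, in dB."""
    return 10.0 * math.log10(p1_mw / p0_mw)

def bandwidth_from_rise_time_ghz(rise_time_ns):
    """First-order rule of thumb: f_3dB ~ 0.35 / t_r."""
    return 0.35 / rise_time_ns

er = extinction_ratio_db(1.0, 0.1)      # assumed 1 mW '1', 0.1 mW '0' level
bw = bandwidth_from_rise_time_ghz(0.5)  # 0.5 ns edges
```

A 0.5ns edge thus corresponds to roughly 700MHz of analog bandwidth, comfortably above what a 500Mb/s eye requires.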
2.5 Reliability Test Results
Preliminary reliability studies on the VCSEL transmitters were carried out. These studies are important because the coefficient of thermal expansion (CTE) of the encapsulating materials is usually greater than that of the VCSEL by one or two orders of magnitude. The CTE mismatch causes thermal stress on the VCSEL during thermal cycling. The VCSEL reliability may be degraded if the stress is too high. Figure 10 shows some reliability test results for temperature cycling from -40°C to +125°C. The optical output power of the VCSEL transmitters is highly stable over 100 temperature cycles. The change in output power is less than 0.5dB. This excellent result shows that the package design and materials are well suited for high reliability.
Fig. 10. Reliability results of VCSEL transmitter for temperature cycling from -40°C to 125°C.
3 MSM-PD Receiver

3.1 Photodetector for Large Core Fiber
The coupling of fiber to receiver becomes more challenging when the fiber core diameter is larger. Figure 11 shows the photodetector technology and optical media used for various optical transmission systems from very short reach (<30m) to long haul (>40km). For low data rates below 1Gbps, the Si pin-PD is a good candidate for receiving light from POF, PCS, or MM-fiber. For high data rates up to 3.2Gbps or 10Gbps, GaAs or InGaAs pin-PDs are used due to their higher speed. However, their application is usually limited to SM-fiber or MM-fiber systems, since their aperture size is designed to be around 70µm; otherwise the RC time constant may limit the bandwidth.
Fig. 11. Photodetector technology and optical media used for various optical transmission systems.
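The RC limit mentioned above follows from f_3dB = 1/(2πRC). The 50Ω load below is an assumed typical value; 0.8pF is the MSM-PD capacitance reported later in this paper, and 5pF is the large-area pin-PD value estimated later in the text.

```python
import math

def rc_bandwidth_ghz(capacitance_pf, load_ohm=50.0):
    """RC-limited 3 dB bandwidth of a photodiode: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * load_ohm * capacitance_pf * 1e-12) / 1e9

bw_msm = rc_bandwidth_ghz(0.8)  # 0.8 pF MSM-PD into an assumed 50 ohm load
bw_pin = rc_bandwidth_ghz(5.0)  # 5 pF large-area pin-PD
```

With these assumptions the 0.8pF detector is RC-limited near 4GHz while the 5pF device falls below 1GHz, which is why large apertures and multi-Gb/s rates are hard to combine in a pin-PD.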
POF and PCS fiber are actively being developed for Gigabit Ethernet. Gigabit per second transmission over POF has been demonstrated by several groups [10-11]. The demand for large area, high bandwidth photodetectors is increasing. For POF/PCS systems transmitting at data rates from 1Gbps to 3.2Gbps, the GaAs metal-semiconductor-metal photodetector (MSM-PD) is an attractive solution.
3.2 MSM Photodetector Design
The MSM photodetector consists of back-to-back Schottky diodes that use an interdigitated electrode configuration on top of the active area. Figure 12 shows a cross-sectional view of the device. The bias voltage V is applied between electrode pairs to create a depletion region within the GaAs semiconductor.
Fig. 12. Schematic of an MSM-PD with metal electrodes of width w and spacing s.
The MSM-PD has a lower capacitance per unit area than a pin-PD, thus allowing a larger active area for the same bandwidth. Another advantage of GaAs MSM-PDs is the option to monolithically integrate them with transimpedance amplifiers, as their fabrication processes are compatible. However, one disadvantage is the smaller responsivity due to the contact shadowing effect of the metal fingers. We have fabricated MSM-PDs on 4” GaAs wafers as shown in Fig. 13. The aperture size is Ø250µm. This enables a loose alignment tolerance and high bandwidth. The chip dimensions are 730µm x 485µm. The chip thickness is 200µm.
Fig. 13. MSM-PD chip produced on a 4” wafer.
Figure 14 shows the drift time and RC time constant of an MSM-PD as a function of the finger spacing s.

Fig. 14. Drift time td, RC time tc, and resulting total time constant for an MSM-PD as a function of finger spacing s.
A larger finger spacing results in a drift-time-related speed limitation and also requires a higher bias voltage. For smaller finger spacings, the capacitance is the speed limiting factor of the MSM-PD.
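The trade-off can be reproduced with textbook formulas. All parameter values below are assumptions for illustration (GaAs saturation velocity of about 0.1µm/ps, a 50Ω load, capacitance anchored to the 0.8pF / 250µm / 2µm device reported in this paper); the paper's own curves come from a device-level model.

```python
import math

V_SAT_UM_PER_PS = 0.1                           # assumed GaAs saturation velocity
C_REF_PF, D_REF_UM, S_REF_UM = 0.8, 250.0, 2.0  # reference MSM-PD from the text

def drift_time_ps(s_um):
    """Carrier transit time across the electrode gap."""
    return s_um / V_SAT_UM_PER_PS

def rc_time_ps(s_um, d_um=250.0, load_ohm=50.0):
    """RC time constant; capacitance scales with area and roughly
    inversely with finger spacing (simplified model)."""
    c_pf = C_REF_PF * (d_um / D_REF_UM) ** 2 * (S_REF_UM / s_um)
    return load_ohm * c_pf  # ohms x pF gives picoseconds

def total_time_ps(s_um, d_um=250.0):
    """Combined response time (quadrature sum of the two contributions)."""
    return math.hypot(drift_time_ps(s_um), rc_time_ps(s_um, d_um))
```

With these assumptions the drift term grows linearly with s while the RC term shrinks, so the total response time has a minimum at an intermediate finger spacing, which is the optimum the figure illustrates.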
Since the capacitance of the MSM-PD is reduced by a factor of 0.28 in comparison to a pin-PD of the same diameter, the resulting speed at larger diameters is higher. Figure 15 shows a comparison of the time constants for a large area MSM-PD with finger spacing s=2µm and a pin-photodiode with an absorbing layer thickness of 2µm. The MSM detector is significantly faster for diameters of 150µm and above. For smaller diameters the drift time is more dominant, so the speed of the pin-diode is comparable to the MSM-PD.
Fig. 15. Comparison of time constant for large area MSM-PD and pin-PD.
3.3 Capacitance Measurement
The capacitance of the MSM-PD with 250µm diameter at different bias voltages is shown in Fig. 16. The capacitance values are around 0.8pF at bias voltages from 0.5V to 10V. For comparison, the capacitance value of a pin-PD with a diameter of 100µm is 0.8pF at 5V bias voltage. Extrapolating, the capacitance of a pin-PD with 250µm diameter would be 5pF at the same conditions.
Fig. 16. Capacitance of the MSM-PD at different bias voltages in comparison with pin-PD.
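The 5pF figure is a straightforward area extrapolation from the measured 100µm pin-PD; the sketch below reproduces it under the assumption of a constant depletion width.

```python
def scaled_capacitance_pf(c_ref_pf, d_ref_um, d_um):
    """Junction capacitance scales with active area, i.e. with diameter
    squared (constant depletion width assumed)."""
    return c_ref_pf * (d_um / d_ref_um) ** 2

c_pin_250 = scaled_capacitance_pf(0.8, 100.0, 250.0)  # pin-PD scaled 100um -> 250um
```

The 0.8pF reference scaled by (250/100)² = 6.25 gives exactly the 5pF quoted in the text.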
3.4 Coupling Tolerance Comparison
Figure 17 shows the comparison of coupling tolerances between the MSM-PD with an active diameter of 250µm and a pin-PD with an active diameter of 100µm. The MSM-PD has a 3dB bandwidth of 2.2GHz, whereas the pin-PD has a 3dB bandwidth of 2GHz. With the MSM-PD, significant improvements of the coupling tolerances of 80µm and 4000µm are obtained in the lateral and longitudinal axes, respectively.
Fig. 17. Comparison of coupling tolerances of MSM-PD and pin-PD to PCS fiber of 200µm core diameter.
3.5 High Speed Performance
The eye diagram and bit error rate curve of the MSM-PD packaged with a trans-impedance amplifier (TIA) are shown in Fig. 18. A PCS fiber is used to couple the optical source to the MSM-PD. The wavelength of the optical signal is 850nm. The signal is modulated at 3.2Gb/s with an average power of 10dBm. From the bit error rate curve, we interpolate a sensitivity of -18dBm at an error rate of 10^-12 for the PCS fiber system at 3.2Gb/s.
Fig. 18. Eye diagram and bit error rate of MSM-PD receiver coupled with PCS fiber. Wavelength = 850nm, data rate = 3.2Gb/s.
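Extrapolating a measured BER curve down to 10^-12 usually relies on the Gaussian-noise model of a binary receiver. The standard relation below is generic textbook material, not this paper's specific procedure.

```python
import math

def ber_from_q(q):
    """BER of an ideal binary receiver with Gaussian noise:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))
```

Q of about 6 corresponds to a BER near 1e-9 and Q of about 7 to near 1e-12, so a modest increase in signal-to-noise ratio buys three decades of error rate, which is what makes this kind of interpolation reliable.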
The MSM photodetector shows excellent high speed performance in large core optical fiber applications. The speed enables large alignment tolerances in current optical networks and provides the option for higher speed applications in the future. This includes video systems that require much larger bandwidths.
4 Conclusion
We have developed high performance fiber optic transceivers for infotainment networks in automobiles. The VCSEL based transmitter modules can operate at up to 500Mb/s over a wide ambient temperature range between -40°C and 105°C. The optical output power is stabilized to within 1dB variation. The extinction ratio is greater than 10dB. The small size leadframe style package provides loose alignment tolerances of ±100µm in the lateral axis for a working distance of 500µm to large core PCS optical fibers. The large alignment tolerances for the transmitter and in-line connectors enable very cost effective optical networks. The receiver for high speed large core optical fibers is based on MSM technology. The MSM receiver exhibits a sensitivity of -18dBm at 3.2Gbps for a PCS fiber link. These transceivers overcome the limitations of current fiber optics technologies and provide higher performance in terms of operating temperature, speed, and manufacturability. The VCSEL transmitter and the MSM based receiver modules meet the requirements of current applications and are future proof for new applications requiring higher data rates. Such systems, e.g. video links, will be introduced in the next generation of automobiles.
Acknowledgement
We gratefully acknowledge the great support from our suppliers and partners. This work was financially supported by funds from the Hong Kong Innovation and Technology Commission (ITC).
References
[1] E. Zeeb, “MOST-Physical Layer Standardization Progress Report and Future Physical Layer Activities,” Proc. 3rd Automotive LAN Seminar, pp. (3-3) – (334), Oct. 2003.
[2] H. Nakayama, T. Nakamura, M. Funada, Y. Ohashi, M. Kato, “780nm VCSELs for Home Networks and Printers,” Proc. 54th ECTC, Las Vegas, June 2004.
[3] W. Daum, W. Czepluch, “Reliability of Step-Index and Multi-Core POF for Automotive Applications,” Proc. 12th Intl. Conf. Polymer Optical Fiber, pp. 6-9, Sep. 2003.
[4] T. Nakamura, M. Funada, Y. Ohashi, M. Kato, “VCSELs for Home Networks – Application of 780nm VCSEL for POF,” Proc. 12th Intl. Conf. Polymer Optical Fiber, pp. 161-164, Sep. 2003.
[5] A. Weinert, “Plastic Optical Fibers,” Publicis MCD Verlag, 1999.
[6] www.mostcooperation.com
[7] www.firewire-1394.com/what_is_idb-1394.htm
[8] R. Griessbach, J. Berwanger, M. Peller, “Byteflight – neues Hochleistungs-Datenbussystem für sicherheitsrelevante Anwendungen,” ATZ/MTZ Automotive Electronics, Friedrich Vieweg & Sohn Verlagsgesellschaft mbH, January 2000.
[9] F. Ho et al., “Plastic Package VCSEL Transmitter for POF and PCS Fiber Systems,” Proc. 13th POF Conference, Nuremberg, Sep. 2004.
[10] T. Ishigure, Y. Koike, “Design of POF for Gigabit Transmission,” Proc. 12th Intl. Conf. Polymer Optical Fiber, pp. 2-5, Sep. 2003.
[11] O. Ziemann, J. Vinogradov, E. Bluoss, “Gigabit per Second Data Transmission over short POF Links,” Proc. 12th Intl. Conf. Polymer Optical Fiber, pp. 20-23, Sep. 2003.

Torsten Wipiejewski, F. Ho, B. Lui, W. Hung, F.-W. Tong, T. Choi, S.-K. Yau, G. Egnisaban, T. Mangente, A. Ng, E. Cheung, S. Cheng
ASTRI
5/F, Photonics Center, Hong Kong Science Park
Hong Kong
[email protected] Keywords:
VCSEL FOT, fiber optic transceiver, high speed photodetector, MSM photodetector, PCS optical fiber link, infotainment system
Components and Generic Sensor Technologies
Automotive CMOS Image Sensors

S. Maddalena, A. Darmont, R. Diels, Melexis Tessenderlo NV

Abstract
After penetrating the consumer and industrial world for over a decade, digital imaging is slowly but inevitably gaining market share in the automotive world. Cameras will become a key sensor in increasing car safety, driving assistance, and driving comfort. Automotive image sensors will be dominated by CMOS sensors, as the requirements differ from those of the consumer, industrial, and medical markets. Dynamic range, temperature range, cost, speed, and many others are key parameters that need to be optimized. For this reason, automotive sensors differ from other markets' sensors and need to use different design and processing techniques in order to achieve the automotive specifications. This paper will show how Melexis has developed two CMOS imagers to target the automotive safety market, and automotive CMOS imagers in general.
1 Introduction
For the past decade, CMOS and CCD image sensors have driven the creation of many new consumer applications like mobile camera phones. The introduction of imaging sensors in cars has suffered from a serious lag, because high-speed buses, high quality displays, and compact low-cost signal processing units and memories were not yet available. Also, previous generation camera sensors were not yet performing sufficiently well and did not sufficiently match the stringent automotive needs. Today, with the availability of the newly developed automotive camera sensors, Tier 1 suppliers can create revolutionary safety and comfort features for the car of tomorrow. Current camera ICs with high dynamic range, high sensitivity over a broad spectrum (both visible and near infrared), fast frame rate, non-blooming, low cost, and in-field testable capabilities are giving automotive designers the ability to offer drivers options previously not possible. Two different kinds of applications exist, both with clearly distinct requirements and needs: Human Vision Applications, typically for comfort systems
and Machine Vision Applications, typically safety critical and to be used in driver assistance systems. In the first application group, the task of the imager is to show one or several camera pictures to the driver. These applications usually need a small and autonomous colour sensor, most of the time generating an NTSC or equivalent signal for plug-and-play operation. In this set we find, for example, park assist, assisted overtaking (seeing the other-side road from another point of view, like the left mirror), and some blind-spot detection systems. A sub-set of the display set is night vision, where a black and white sensor of higher cost and higher resolution might be preferred. The second set of applications deals with image processing, usually in safety related applications like lane tracking, seat occupancy for smart airbags, automatic blind-spot detection, automatic cruise control, … In these applications it is crucial that the sensor is a snapshot, black and white + NIR, fully testable, highly reliable CMOS imager. As the connection is to be made to a computation unit and not a display, a parallel digital output is preferred to the common video standards. Although the Melexis sensors were designed to meet the demanding criteria of the safety critical applications and perfectly match the machine vision applications' needs, they can also easily be used to build prototypes of the first category (Human Vision). A good example to explain the need for an 'intelligent camera' in the "image processing" category is unintended lane departure. Such events were cited in 43% of all fatal accidents occurring in 2001. Rumble strips on the outside of the road have been proven to reduce lane departure accidents by 30-55%. Intelligent cameras could be applied to provide lane departure warning on nearly all roads. Once consumers realize the significance of an intelligent camera, it will become a 'must have' feature.
Existing systems just give a warning (e.g. a sound alarm, vibrations in the seat, a flashing light, …) but take no real action. The final goal is a car that can control the steering and braking based on intelligent camera and other sensor inputs, but this is still very far away. The intermediate step is smooth guidance through a combination of multiple sensor inputs. In this paper we will mostly discuss the sensors for machine vision (image processing) and safety related applications in particular. A CMOS imager in automotive applications must meet several goals: low price, good performance at high and low temperature, little or no ageing under the influence of the severe environmental factors found in motor vehicles, and rigorous testability in production and in the field.
As a reply to these demanding automotive requirements, Melexis has developed two CMOS snapshot camera chips for both inside and outside vision requirements. This paper will show how these two parts, the MLX75006 (CIF, 352x288 = 0.1Mpixels) and the MLX75007 (panoramic VGA, 750x400 = 0.3Mpixels), have met these challenges. The chip design ensures high sensitivity over a wide spectrum and an excellent dynamic range, all at a high frame rate and without CCD imager issues like blooming and difficult control. For surviving the harsh automotive conditions, an overmoulded plastic package with an integrated focus free glass lens stack has been developed. This compact chip-package-lens combination can meet the price expectations of the market, while allowing the necessary flexibility in field of view and NIR optimization. These devices are fully verified in production and can be tested for integrity in real time in the field.
2 Automotive Imager Requirements and Melexis Solutions
2.1. Technical and Commercial Requirements
As discussed in the Introduction, an automotive image sensor should comply with several different requirements in order to meet the automotive market demands:
1. Good image quality: high sensitivity over a wide spectrum and a wide dynamic range, all over a broad temperature range (-40ºC ... +105ºC) and under vibration (snapshot). Some applications also require colour.
2. Reliability: a camera designed for good test coverage, well tested in production, and allowing the user to do in-field self testing and real time integrity checks.
3. Compact system at optimum cost: a highly integrated camera with many internal features (ADC, …), easily programmable and preferably with a matched glass lens system, again at minimum size and cost but without compromising the automotive qualification requirements.
2.2. Solutions for Good Image Quality
Although not necessarily meaning the same thing, "good image quality" is required for both Machine and Human Vision Applications. Human Vision Applications put more emphasis on the aesthetic aspects of the image, while Machine Vision Applications focus more on sharpness, allowing edge and pattern detection; "seeing" is key for both. The sensor needs to be sensitive enough to give a "good" image under all circumstances. As "seeing" is key, Melexis uses a process optimized for NIR sensitivity, yielding the response curve shown below with the peak of sensitivity at a wavelength of 700nm and a high sensitivity up to 900nm. A high absolute sensitivity is achieved by using the relatively large 10µm x 10µm pixel size.
Fig. 1. Spectral response relative to 555nm
As light conditions can change drastically (full sunlight, shadowing, tunnels, snow and rain, night, etc.) and a good picture is required under all those circumstances, an automotive image sensor needs to have an extremely high dynamic range. Dynamic range is one of the most limiting factors of common consumer image sensors. These sensors never have to deal with dark objects in the shadows surrounded by light reflecting off a wet road with the sun low on the horizon. Consumer sensors are usually able to take pictures of very bright and very dark scenes, but not both at the same time. This is a must-have feature for automotive image sensors. CMOS image sensors offer the possibility of non-linear pixel responses. This non-linearity can be a piecewise linear response (also called multiple slopes), a logarithmic response, or a mix of different responses. Melexis' solution is the piecewise linear response, where the total integration time is distributed over all the pixels so that the brightest ones see a lower integration time than the darkest ones and thus do not saturate. Non-saturation allows detection of objects in very bright areas and thus extends the dynamic range to higher limits. As the darkest pixels see a normal linear response, their noise level remains low and the dynamic range close to the lowest limit is unchanged. Together, this results in a significantly extended dynamic range, exceeding 100dB. Exact dynamic range calculations can differ strongly, depending on the calculation methods and assumptions used. Optimistic calculations can reach up to 120-130dB, but one could wonder whether all noise sources and temperature effects are taken into account.
Fig. 2. Examples of high dynamic range pictures
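The multiple-slope idea can be illustrated with a toy transfer curve. The knee positions and slopes below are invented example values, not the Melexis pixel parameters.

```python
def piecewise_linear_response(signal, knees=(0.5, 0.8), slopes=(1.0, 0.25, 0.0625)):
    """Multiple-slope pixel transfer curve: linear below the first knee,
    progressively compressed above it so bright pixels do not saturate."""
    out, prev = 0.0, 0.0
    for knee, slope in zip(knees + (float("inf"),), slopes):
        segment = min(signal, knee) - prev
        if segment <= 0:
            break
        out += slope * segment
        prev = knee
    return out
```

Dark signals (below the first knee) are reproduced one-to-one, so their noise floor is untouched, while a signal far above the knees still maps to a finite, unsaturated output; that is exactly the behaviour that extends the dynamic range at the top end.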
As the automotive world requires temperature ranges from -40ºC up to at least +85ºC, the image sensor must be designed and tuned to fit this broad range. By optimizing both design and technology, the Melexis sensors offer a guaranteed operational optical quality level from -40ºC up to +105ºC. The physical limitation on good high temperature behavior is leakage. This effect is an exponential function of temperature and will degrade the image sensor's response in a non-uniform and non-predictable way, limiting the dynamic range and creating white spots randomly over the image map. The Melexis sensors include an internal "hot spot" rejection and general black level compensation to counteract some of the high temperature leakage effects and to exceed the +85ºC limit with good image quality.
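The exponential temperature dependence of leakage is often captured by a doubling rule. The ~7°C doubling temperature below is a common silicon rule of thumb, assumed here for illustration; it is not a Melexis figure.

```python
def dark_current(i_ref, t_ref_c, t_c, doubling_c=7.0):
    """Leakage (dark) current model: doubles every `doubling_c` degrees C
    relative to the reference current i_ref measured at t_ref_c."""
    return i_ref * 2.0 ** ((t_c - t_ref_c) / doubling_c)
```

Going from 25°C to 95°C with a 7°C doubling temperature multiplies leakage by 2^10, roughly a factor of 1000, which is why hot-pixel rejection and black level compensation become essential at the top of the automotive range.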
Fig. 3. The same scene at 25°C and at 105°C
Figure 3 compares two identical scenes at different temperatures. The first picture is taken at room temperature and must be compared to the second picture taken at 105°C, with "hot spot" cancellation and offset compensation, but with identical integration time and speed settings. In order to sustain the high quality level and avoid skewing of the images under all (automotive driving & vibration) conditions, a global shutter, where all pixels integrate and "freeze the image" at the same moment in time, is a key feature. Whereas for Human Vision Applications a global shutter requirement is probably overkill and not worth the disadvantages (a more complex pixel), for safety critical Machine Vision Applications a global shutter is a must. Whereas in a "rolling shutter" camera every pixel or line samples the image at a different moment in time, in a "global shutter" camera every pixel in the whole array integrates and samples (locks) the image at the same identical moment. Together with the possibility of very high frame rates, the Melexis cameras are capable of monitoring very fast moving objects, like a rotating fan, without any blurry effects. In order to design a good global shutter camera, one needs an efficient "memory" cell in every individual pixel and a sufficiently advanced technology to keep the image information stored in this cell during a complete read out cycle. Under extreme sunlight conditions, this is not evident for all global shutter sensors on the market. The parameter with which this limitation is measured is called "shutter efficiency" and its behaviour is strongly dependent on the pixel design itself.
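The geometric skew a rolling shutter introduces can be shown with a toy one-dimensional model: each row samples a moving edge at a different time. This is an illustrative simulation, not Melexis sensor behaviour.

```python
def capture(scene, shutter, rows=4, row_readout_time=1.0):
    """Sample `scene(row, t)` either all at t=0 (global shutter) or row by
    row at t = row * row_readout_time (rolling shutter)."""
    if shutter == "global":
        times = [0.0] * rows
    else:  # rolling
        times = [r * row_readout_time for r in range(rows)]
    return [scene(r, t) for r, t in zip(range(rows), times)]

# a vertical edge moving right at one column per time unit:
# its column position in any row at time t is simply int(t)
moving_edge = lambda row, t: int(t)

global_img = capture(moving_edge, "global")    # edge stays straight
rolling_img = capture(moving_edge, "rolling")  # edge is sheared row by row
```

The global shutter records the edge in the same column for every row, while the rolling shutter shears it diagonally, which is the misalignment the text says downstream software would otherwise have to compensate.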
Fig. 4. Some colour pictures
In Machine Vision Applications, complex algorithms constantly search for edge transitions, correlations, or other abrupt intensity changes. No human brain can help to interpret strange artefacts, and any extra software overhead to compensate for the potential misalignments due to a rolling shutter is preferably avoided. Therefore, for this group of applications, a global shutter is an absolute requirement. On the other hand, the colour option is a must for Human Vision Applications (due to the market requirements of exigent high end users) but not always necessary for Machine Vision (only required if colour offers a significant difference in machine interpretation). Melexis and its partners have developed colour filters that can be added on top of a black & white camera in order to make a true colour camera. We have chosen a repetitive colour pattern of RGB filters, known as the Bayer pattern. Each block of 2x2 pixels then corresponds to one three-colour pixel and can be used to reconstruct a colour picture. As shown in figure 4, a high fidelity can be reached.
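A minimal per-block Bayer reconstruction looks as follows. This is the simplest possible demosaicing (one RGB value per 2x2 block, greens averaged), not the actual Melexis colour pipeline.

```python
def bayer_block_to_rgb(block):
    """Reconstruct one RGB triple from a 2x2 Bayer block laid out as
    [[R, G], [G, B]] (the classic Bayer pattern); the two green
    samples are averaged."""
    (r, g1), (g2, b) = block
    return (r, (g1 + g2) / 2.0, b)
```

Real demosaicing interpolates across neighbouring blocks to recover full resolution, but the 2x2 grouping above is exactly the "block of 2x2 pixels equals one colour pixel" idea described in the text.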
2.3. Solutions for Automotive Reliability & Testability for Image Sensors
As for any other component that plays a key role in automotive safety applications, reliability is an important topic that has to be addressed during the design and production phases of an automotive image sensor. The Melexis strategy for automotive image sensors follows the same line as used for all other automotive components:
1. design for testability on the sensor level (the target is to efficiently reach a high test coverage on chip) and on the application level by using DFMEAs,
2. the automotive AEC-Q100 set of qualification tests,
3. 100% optical and electrical tests during production at different temperatures,
4. several built-in test modes and real-time integrity checks to allow the user to monitor the application in the field.
Getting high test coverage in a complex sensor chip like an imager is not straightforward. Therefore, special attention has been paid to this during the design cycles. The high test coverage is reached by using full electrical separation of some blocks, using advanced software tools to write optimized test patterns and, last but not least, using the continuous data stream of the sensor to incorporate information on internal nodes during operation. Together with a lead Tier 1 supplier, several DFMEA discussions were held on the application level, in order to make sure Melexis can offer an image sensor meeting the automotive standards.
Components and Generic Sensor Technologies
Before releasing the imager to automotive production, the standard automotive qualification tests are performed on the parts, all at the appropriate temperature levels. As only standard processes are used, no major problems are to be expected. However, the less standard colour filter technology and the integrated lens stack (see further) will also be qualified following the same guidelines. In order to reach the automotive target of end-of-line failure levels of a few ppm, the outgoing production parts need to be 100% tested. With a sensor designed for test and high coverage, an efficient and optimized test procedure can be installed. Here, Melexis uses the experience and know-how built up over several years of testing another safety-critical optical part, the MLX90255 linear optical array. Melexis is capable of performing a full optical and electrical test, both at wafer level and at final test, at any temperature between -40°C and +125°C with automated, high-volume test equipment.
Fig. 5.
Mass volume automated test equipment
Building a robust safety-critical application requires advanced integrity checks on all critical building blocks. As most machine vision applications require a powerful processing unit, and the built-in space is very limited, some systems are made as a so-called "two-box solution": one part consists of the imager, the lens and some discrete electronics placed at the critical sensing area, while the processing unit and voltage references are placed in another part inside the car. Such a system build requires not only good validation of the building blocks themselves, but also good monitoring of the communication lines. The Melexis imagers offer several features to allow the user to indeed build a robust application, even using the two-box concept. After completion of every image, several known test patterns are sent by the camera to the processor, allowing the user to validate both the uplink and downlink communication lines
(the command word used is also sent back). As modern image sensors integrated in standard CMOS technology comprise several other features, like an internal ADC, white spot correction, advanced programming features, etc., it is also important to have integrity verifications of this system-on-a-chip itself available. Mixed in after every line, and at the end of a full picture, known test patterns are sent by the camera, allowing the user to verify the integrity of several internal nodes, both in the analog and the digital domain of the imager itself. All these integrity modes run in real time and can be interleaved with normal operation with very high efficiency. However, if something should go wrong, additional, more advanced test modes are also available to the user to identify where exactly a problem might occur. To access these advanced test modes, the sensor has to be taken out of normal operation, but fast single-frame switching between test mode and normal mode is possible at all times. Although the expected failure rate of electronic sensors qualified for automotive usage (including image sensors!) is in the ppm range, full visibility of all key components, including the wiring, is crucial in order to build a robust final system. The variety of test modes implemented and made available with the Melexis imagers can guarantee this. Note, however, that first-generation safety-critical driver assistance systems will certainly not overrule the driver's decisions, but mainly provide assistance and help in potentially dangerous situations.
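The receiver-side use of these per-frame test patterns can be sketched as follows. This is a hypothetical illustration only: the actual pattern values, frame format and protocol of the Melexis imagers are not given in the paper, so all constants and names here are invented:

```python
# Hypothetical sketch of a processor-side integrity check: after each frame
# the camera appends known test patterns and echoes the command word; the
# processor compares both against expected values to validate the downlink
# (commands) and uplink (image data path). Pattern values are invented.

EXPECTED_PATTERNS = [0xAA, 0x55, 0xFF, 0x00]  # assumed, not the real values

def check_frame_integrity(trailer_words, echoed_command, sent_command):
    """Return True if the per-frame test patterns and the echoed command
    word match what the processor expects."""
    if echoed_command != sent_command:          # downlink (command) check
        return False
    return trailer_words == EXPECTED_PATTERNS   # uplink (data path) check

# Intact link vs. a corrupted test-pattern word
print(check_frame_integrity([0xAA, 0x55, 0xFF, 0x00], 0x12, 0x12))  # True
print(check_frame_integrity([0xAA, 0x55, 0xFE, 0x00], 0x12, 0x12))  # False
```

Because the patterns arrive with every frame, such a check can run continuously without interrupting normal operation, which mirrors the real-time interleaving described above.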
2.4. Solutions for Optimum Size & Cost
In the automotive world, besides performance and reliability, size and cost are also criteria of extreme importance. Using the experience and know-how from the optical linear array, Melexis has further developed, with its partners, the plastic overmoulded cavity package concept. This solution offers several advantages:
1. it is a well-known, standard packaging technology, relatively low priced and offering similar quality performance as several other plastic packages used in the automotive world;
2. although the cavity offers an easy path for the light to enter the sensor's sensitive optical area, all other parts are overmoulded, including some sensitive electronics and, of course, the bondwires and bondpads;
3. because the optical sensing area can be accessed freely, there is also the opportunity of focus-free lens mounting, giving several extra advantages to the end user in terms of size, ease of handling and, in the end, total cost.
Several years of experience with the plastic cavity concept used for the optical linear array have allowed us to elaborate it into a general image sensor package technology. Using standard processes and compounds, standard automotive quality levels can be reached. MSL level 3 is standard, but with the choice of an adequate compound, MSL level 1 can be the target. Currently, leadframe-based solutions like MQFP-type packages are widely accepted in the automotive world, but if QFN technology were to receive the same acceptance level as MQFP, the same cavity technology could be applied to leadless QFN types. One of the most critical points in the reliability of a package is the bondwires and bondpads. As these are overmoulded in exactly the same way as in any other standard automotive plastic package, no difference in quality is to be expected. As long as a sufficient guard band is kept between the cavity and the bondpad positions, the plastic cavity packaging concept gives reliability performance equal to that of standard plastic packages. As the sensitive non-optical electronics is covered with plastic, no extra metal layers are required for coverage. This allows for a cheaper sensor technology and can also improve some electrical behaviour like speed or matching. Also, a thick layer of plastic is a much more efficient optical sealant than thin metal layers.
Fig. 6.
MLX75006 (CIF sensor) in MQFP44 package, with and without Integrated Lens Stack
All optical systems require one or multiple lenses. This open package technology allows a lens to be mounted directly on the sensor. An integrated lens stack consisting of multiple lens interfaces, optimized for the so-called Panoramic VGA resolution of the MLX75007 image sensor, has been developed for this purpose. The lens offers a 60 degree field of view and F# = 2.5. The optical design is matched to our sensor's performance and optimized for a broad wavelength range up to the near infrared. Edge distortion is limited to a minimum, and an MTF of 40% at 50 lp/mm can typically be achieved over a broad wavelength range and field of view.
When mounting a lens using the plastic package or PCB board as a reference, extra means of adjustment are required due to the high build-up of tolerances. By placing the Integrated Lens Stack (ILS) directly inside the package, with the sensor area as the reference plane, and by controlling the tolerances well, we can offer a fully focus-free, integrated lens solution. Besides optimum size, the ease of processing also gives several advantages to the user:
1. no focusing is required by the end user, nor is it possible to get out of focus (by design);
2. the module (sensor + lens interface) can be completely tested by Melexis before shipping;
3. the target is to offer a completely hermetically and optically sealed solution, capable of withstanding standard soldering techniques, and as such to be a plug & play component, similar to a lot of other currently used sensors.
2.5. How to Design an Automotive Vision Application: "In Search of the Best Compromise"
Crucial to the end success of any (vision) application will be good judgement of all application needs and the correct choice of both hardware and software. Every building block has its specific advantages, but also its price; therefore, as always, designing well means making optimized and well-balanced compromises. As described in this paper, concerning the image sensor, several specification requirements can and should be weighed against each other: global shutter vs. rolling shutter, colour vs. black & white, pixel size and resolution. As described above, a global shutter is a very interesting feature for a camera, but it requires a complex pixel design. Therefore, if not required, a rolling shutter might be an interesting alternative, with probably a higher light sensitivity for a lower price. The same is valid for colour: nice to have, but its need should be carefully balanced and questioned in the total system, as it brings a significantly lower light sensitivity and dynamic range at an increased cost. Pixel size and resolution will largely determine the end cost of the imager, but can also impact the lens cost and software needs. Too large lenses will be prohibitively expensive for the mass-volume automotive market. Extremely high bandwidths and processing power due to megapixel image streams at very high frames per second are probably also prohibitive for both processor requirements and EMC qualification. For some specific applications, the "old-fashioned" CCD solution might even still be the best choice nowadays. For other applications, however, a CMOS global shutter image sensor will be required. A final automotive vision application will be the end result of a series of these balanced judgements: the right parts with the optimized software to get correct performance at an acceptable market price.
3
Conclusions
This paper has presented the main criteria with which an image sensor should comply if it is to be used in the highly demanding automotive world. Next to obvious parameters like image quality, size and cost, reliability and integrity verification means will also determine whether an automotive image sensor can be integrated in a safety-critical automotive application. Furthermore, the unique combination of the overmoulded open package with a focus-free integrated lens stack can successfully address the dimensional and cost requirements.
References
[1] A. Darmont, "CMOS Imaging: From Consumer to Automotive", MSTnews 6/04, pp. 43-44, December 2004.
[2] http://www.melexis.com/prodmain.asp_Q_family_E_MLX75007
Sam Maddalena, Arnaud Darmont, Roger Diels
Product Group Manager OptoElectronic Sensors
Melexis Tessenderlo NV
Transportstraat 1
3980 Tessenderlo
Belgium
[email protected],
[email protected],
[email protected] Keywords:
CMOS camera, automotive image sensor, high dynamic range, high sensitivity, fully programmable, near infra-red, advanced package, integrated lens, in-field test modes and integrity checks
A Modular CMOS Foundry Process for Integrated Piezoresistive Pressure Sensors

G. Dahlmann, G. Hölzer, S. Hering, U. Schwarz, X-FAB Semiconductor Foundries AG

Abstract

In this paper a CMOS foundry process is presented which allows the integration of piezoresistive pressure sensors with mixed-signal electronic circuits on a single chip. Based on a 1.0µm modular CMOS technology platform, two distinct MEMS process modules have been developed. The first of these process modules is based on silicon direct wafer bonding in a pre-CMOS process and is designed for the realisation of miniature absolute pressure sensors. The second is based on a conventional bulk micromachining post-CMOS process and allows the realisation of relative and absolute pressure sensors. Prototype pressure sensors have been fabricated and characterised as discrete devices. The measurements have shown that the sensors have state-of-the-art performance. A set of reliability tests has been carried out, showing that the device characteristics remain stable even under harsh environmental conditions.
1
Introduction
The use of semiconductor foundries to outsource IC production has rapidly gained importance in recent years, not least in the automotive industry. Wafer fabrication in a foundry offers considerable advantages. Foundries can provide a second source for IC manufacturers with in-house production facilities, enabling them to react more flexibly to an increase or decrease in customer demand and to manage their own fabrication capacity more efficiently. Furthermore, foundries have given smaller fabless companies, for whom the investment in production facilities would otherwise have been prohibitively expensive, the opportunity to access the market. Outsourcing is a routine procedure when it comes to the manufacturing of integrated circuits. For micromechanical sensors, however, outsourcing is still an exception, even though the advantages of an outsourcing strategy are equally compelling. An example of a class of devices where sensor manufacturers could benefit from additional flexibility and a potential reduction of fabrication
cost is piezoresistive pressure sensors. These devices are widely used, and there is a rapidly increasing variety of applications in the consumer goods, process control and automotive markets. Integrated pressure sensors, where the piezoresistive sensor and the signal conditioning electronics are placed on a single chip, are used for a number of automotive applications. Monolithic integration offers substantial advantages, such as a reduction of size and weight and the potential to reduce manufacturing cost. On the other hand, monolithic integration is not always possible, because harsh environmental conditions may require the electronics to be separated from the pressure sensor. Furthermore, the initial investment cost for technology development is very substantial, and it is only justified if the production quantities are high. As a consequence, today there are still relatively few applications where the integrated technology has replaced the conventional two-chip solution with a discrete pressure sensor and an ASIC. Manifold absolute pressure (MAP) sensors and barometric absolute pressure (BAP) sensors are examples where integrated pressure sensors are the industry standard today. However, only industry leaders such as Freescale, Bosch or Denso have managed to develop viable technologies for these applications. Only recently, with the advent of tyre pressure monitoring systems, have new players tried to enter the market for integrated pressure sensors. Most notably, Elmos presented their newly developed technology in 2004 [1]. Whether integrated pressure sensors can be used for new applications and enter new markets will depend on whether the technology can be made cost-effective compared to conventional technology. Outsourcing the fabrication of integrated pressure sensors to a semiconductor foundry offers excellent prospects towards making the technology more cost-effective.
With a foundry approach, a substantial part of the initial investment required for technology development can be saved. Moreover, the need to run and maintain a production line for a highly complex process is eliminated. With an outsourcing strategy, integrated pressure sensor technology could therefore become a viable alternative to the conventional two-chip solution, also for medium-volume applications.
2
Technological Platform - 1.0µm Modular CMOS Process
X-FAB is a semiconductor foundry with many years of experience in the development and volume production of integrated circuits. Besides the company's core
competence in mixed-signal IC manufacturing, there is also a great deal of expertise in the manufacture of microsystems. The technology portfolio comprises modular CMOS and BiCMOS technologies ranging from 0.35µm to 1.0µm, as well as SOI and MEMS technologies. For the development of integrated pressure sensors, the 1.0µm CMOS technology was selected as the platform. X-FAB's 1.0µm CMOS technology is a well-established technology, which is suitable for automotive, consumer and industrial applications. It has a modular architecture with a core process, which allows either 1.5V or 5V digital and basic analogue functionality. Around the core process there is a number of specific modules, which can be freely combined with each other. These include:
- up to 3 metal layers (including power metal 3)
- EPROM and EEPROM non-volatile memory
- advanced analogue options: poly-poly capacitors, bipolar transistors, high-ohmic resistors
- high voltage transistors
- optical window for photo diodes
- ESD protection
In order to facilitate circuit design, there is a comprehensive library of primitive devices and a set of more complex analogue and digital library cells. Most of the common EDA platforms are supported with high-precision device models. The technology has been used for the implementation of diverse automotive ICs, including sensor interfaces. For numerous applications it has been demonstrated that the reliability requirements of the automotive industry can be met.
3
MEMS Process modules for integrated Pressure Sensors
In this chapter, the process modules are described which allow the monolithic integration of piezoresistive pressure sensors in the 1.0µm CMOS technology. There are two distinct processes: a pre-CMOS process based on wafer bonding and a post-CMOS process based on bulk silicon micromachining.
3.1
Pre-process Module Based on Wafer Bonding Technique
The first process module is a pre-CMOS process module, which means that the MEMS process steps in which the pressure sensitive membrane is created are carried out prior to the CMOS process. The process flow is shown in figure 1.
Fig. 1.
CMOS process with wafer bonding pre-process module
The basic idea to manufacture pressure sensors using direct silicon wafer bonding was first presented by Peterson in 1988 [2]. The process starts with a silicon wafer, in which a cavity is created by anisotropic silicon etching. Next, a second wafer is bonded onto the first silicon wafer using silicon fusion bonding under vacuum. After annealing, the wafer stack is thinned back, first by grinding and then by chemical mechanical polishing (CMP) until only a thin layer of silicon remains over the cavity. This silicon layer over the cavity is the diaphragm of a piezoresistive pressure sensor. The vacuum inside the cavity is the reference pressure for the absolute pressure sensor. At this point, the MEMS pre-process is completed and the CMOS front end process begins. The n-well implantation into the p-type substrate is also used to create an n-type region on the top surface of the membrane. Next follows
field oxidation and definition of active areas. Subsequently, gate oxide is grown and the polysilicon gates are defined. This is followed by the p- and n-type implantations for source and drain, where the p-type implantation is used to create the piezoresistive elements on the membrane. After this, an interlayer dielectric is deposited and the contact openings are etched. Depending on the complexity of the circuit, up to three layers of metal can be added, which are separated by glass interlayers. Finally the passivation is deposited and structured in order to provide contacts to the bond pads. The principal advantage of this integrated pressure sensor technology is that by adding the MEMS pre-process module the core CMOS process is left unchanged. In this way it can be ensured that all other process modules are available and can be combined with the pressure sensor module. Furthermore, the device libraries with primitive devices, digital and analogue cells remain available as well.
3.2
Post-Process Module Based on Bulk Micromachining Process
The second process module is a post-CMOS process module, which means that the MEMS-specific process steps are carried out after the core CMOS process. The membranes are manufactured using a conventional bulk micromachining process, where the silicon wafer is etched anisotropically from the backside. The process flow is shown in figure 2. As for the pre-process module described above, the key focus of the process development was to leave the core CMOS technology unchanged in order to ensure compatibility with the other CMOS process modules. The process flow is hence very similar to the process flow described in chapter 3.1. The wafer material consists of a p-type epitaxial layer on an n-type silicon substrate. The process flow starts with n-well implantation and diffusion. The areas where the pressure sensor membranes will eventually be located are also n-doped, in order to provide the isolation for the p-type piezoresistors. Next come the field oxidation using a LOCOS process, the gate oxidation and the creation of the polysilicon gates. A second polysilicon layer, which is isolated from the first one, is optional and allows the realisation of poly-poly capacitors or resistors with a higher sheet resistance. Subsequently, the n-type and p-type source and drain areas are created by implantation and diffusion. The p-type piezoresistors for the pressure sensor are also created in this process step, together with the source and drain areas for the p-channel transistors. The metallisation system consists of up to 3 metal layers, which are insulated by glass interlayers. A thicker top metal layer is optional for high-power applications. The final
step of the CMOS process is the deposition of the passivation layer and the opening of the bond pads.
Fig. 2.
CMOS process with bulk micromachining post process module
The post-process then starts with the deposition of an oxide layer on the backside, which acts as the mask material for the bulk silicon etch. A front-to-backside alignment technique is used to pattern the backside of the wafer, where the mask openings are brought into line with the features on the front side. Finally, the membranes are created using time-controlled etching in potassium hydroxide (KOH) from the wafer backside. Optionally, a glass base wafer can be bonded to the pressure sensor wafer. This increases the stability of the pressure sensor and isolates the devices from thermo-mechanical packaging stress. In order to implement absolute pressure sensors, a solid glass wafer is bonded to the silicon wafer under vacuum. For a relative pressure sensor, the glass wafer contains openings in order to provide access to both sides of the membrane.
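The time-controlled nature of the KOH etch can be illustrated with a back-of-envelope sketch. The etch rate and wafer thickness below are invented for illustration, not X-FAB process data:

```python
# Back-of-envelope sketch of a time-controlled KOH etch: the etch is stopped
# after the time needed to leave the target membrane thickness. Etch rate and
# wafer thickness are illustrative assumptions, not X-FAB data.

def koh_etch_time_min(wafer_um, membrane_um, rate_um_per_min=1.0):
    """Minutes of etching to thin a wafer of wafer_um down to membrane_um."""
    return (wafer_um - membrane_um) / rate_um_per_min

# e.g. a 525 um wafer thinned to a 20 um membrane at ~1.1 um/min
print(koh_etch_time_min(525.0, 20.0, 1.1))  # roughly 459 minutes
```

This also shows why the process is called "time-controlled": membrane thickness uniformity depends directly on the stability of the etch rate and the timing.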
4
Prototype Fabrication and Assembly
Pressure sensor prototype wafers have been manufactured for both process modules with the aim of characterising discrete pressure sensors and carrying out preliminary reliability tests. Since the discrete pressure sensor does not require several of the CMOS process layers, a simplified process flow has been used to manufacture the prototype wafers. The sequence of layers and the associated characteristics, however, remain essentially unchanged. Pressure sensor dies have been designed and manufactured for various pressure ranges:
- Wafer bonding process module: absolute pressure sensor (APS), 3bar, 6bar, 15bar
- Bulk micromachining process module: relative pressure sensor (RPS), 0.5bar, 1bar, 6bar, 10bar, 15bar
For characterisation, the pressure sensors are packaged. In order to obtain valid characterisation data for the pressure sensor chip, the package is selected so as to cause minimal degradation of the sensor performance. Distinct packages are used for absolute and relative pressure sensors. Absolute pressure sensors are mounted in a standard DIP-MK8 ceramic package. For the relative pressure sensors a custom package has been developed, where the chip is mounted onto a ceramic plate that contains an opening to provide the second pressure connection. For both sensor types a silicone-based adhesive is used to attach the silicon sensor to the substrate. No further protective coatings are applied, which leaves the pressure sensor chip directly exposed to the atmosphere.
5
Characterisation
For all pressure sensor types, pressure- and temperature-dependent characteristics have been measured. The Wheatstone bridge was excited with a constant voltage of 5V and no external compensation was used. The temperature range was -40 to 125°C. The linearity error was defined as the maximum deviation from a best-fit straight line. The following characteristics have been measured for absolute pressure sensors fabricated using the wafer bonding process module:
Tab. 1.
Typical characteristics for absolute pressure sensors fabricated using wafer bonding process module.
For relative pressure sensors that were fabricated using the bulk micromachining process module, the measured characteristics are:
Tab. 2.
Typical characteristics for relative pressure sensors fabricated using bulk micromachining process module.
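The linearity error quoted in the tables is defined in the text as the maximum deviation from a best-fit straight line. As a hedged sketch of that computation (the data points below are invented, not values from the tables):

```python
# Sketch of the linearity-error definition used in the text: the maximum
# deviation of the measured bridge output from a least-squares best-fit
# straight line, expressed in % of the full-scale output span. Illustrative
# data only.

def linearity_error_pct(pressures, outputs):
    n = len(pressures)
    mx = sum(pressures) / n
    my = sum(outputs) / n
    # least-squares slope and intercept of the best-fit line
    slope = sum((x - mx) * (y - my) for x, y in zip(pressures, outputs)) \
            / sum((x - mx) ** 2 for x in pressures)
    intercept = my - slope * mx
    max_dev = max(abs(y - (slope * x + intercept))
                  for x, y in zip(pressures, outputs))
    fso = max(outputs) - min(outputs)  # full-scale output span
    return 100.0 * max_dev / fso

# Nearly linear bridge output (mV) over a hypothetical 0..6 bar sweep
print(linearity_error_pct([0, 2, 4, 6], [0.0, 40.1, 80.0, 119.9]))
```

A perfectly linear sweep yields 0%; real uncompensated dies show small but nonzero values of this figure.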
On the whole, the characteristics are comparable to the industry standard for uncompensated pressure sensor dies. The trade-off that results from using the CMOS layer sequence is that the piezo-coefficients are lower than in a discrete technology, where they can be optimised to yield maximum sensitivity. A further consequence of this constraint is that the sheet resistance of the piezoresistors is relatively low. This issue was addressed by adapting the resistor layout, in order to ensure that a practical value is achieved for the bridge resistance.
6
Reliability Tests
The reliability of a micromechanical sensor and the stability of its characteristics over time are very critical performance parameters. Thorough reliability testing and qualification schemes are required to ensure the correct functioning of the device over its lifetime. With micromechanical sensors, the dilemma is, however, that no industry standard for reliability testing exists as it does for integrated circuits. There is a vast number of applications requiring sensors to work under very diverse environmental conditions, which has made standardisation impractical. As a result, tests and test conditions vary from application to application and from manufacturer to manufacturer. For the pressure sensors that were manufactured using the MEMS process modules of the 1.0µm CMOS technology, reliability tests have been carried out which were derived from qualification specifications for integrated circuits by the Automotive Electronics Council (AEC). Two diverse environmental tests have been carried out with samples for both process options: a high temperature storage test and a temperature cycle test. As for the characterisation measurements, the package remains open during the stress tests and no protective coating is applied. The silicon chip is hence directly exposed to the atmosphere. The high temperature bake is carried out at 175°C for 500h; the temperature cycle test includes 500 cycles from -65°C to 150°C. The pressure sensor characteristics are measured before and after the stress test. The results are shown in the figures below.
Fig. 3.
Offset drift for absolute pressure sensors fabricated using wafer bonding process module after stress test - left: high temperature bake (45 devices) - right: temperature cycle test (45 devices)
For the pressure sensors that were manufactured using the wafer bonding technique, the offset drift after high temperature storage is 0.3% (typical) and 0.6% (maximum). After temperature cycling, the drift is 0.1% (typical) and 0.4% (maximum). The values are relative to the full-scale output voltage (FSO).
Fig. 4.
Offset drift for relative pressure sensors fabricated using bulk micromachining process module after stress tests - left: high temperature bake (45 devices) - right: temperature cycle test (75 devices)
For the relative pressure sensors that were manufactured using the bulk micromachining process module, the offset drift after the stress tests is slightly higher. After high temperature storage, a drift of 0.4% (typical) and 0.7% (maximum) was measured. After the temperature cycle test, the drift was 0.4% (typical) and 1% (maximum). Since the layer sequence and the sensor geometry are essentially the same in both cases, the increased offset drift is mainly attributed to the reduced stability of the bulk micromachined sensor. This sensor chip does not have a stable base but is directly attached to the ceramic substrate, which makes it more susceptible to thermo-mechanical stress.
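The drift figures quoted above are percentages of the full-scale output. A minimal sketch of that computation (all device values below are invented, not measurement data from the paper):

```python
# Hedged sketch of the offset-drift metric: the shift of the zero-pressure
# bridge offset after a stress test, expressed as a percentage of the
# full-scale output (FSO). Sample values are invented for illustration.

def offset_drift_pct_fso(offset_before_mv, offset_after_mv, fso_mv):
    """Offset drift in % of FSO between pre- and post-stress measurements."""
    return 100.0 * abs(offset_after_mv - offset_before_mv) / fso_mv

# A hypothetical population of 4 devices, 100 mV FSO, after cycling
before = [1.0, -0.5, 0.2, 0.8]
after  = [1.4, -0.2, 0.3, 0.7]
drifts = [offset_drift_pct_fso(b, a, 100.0) for b, a in zip(before, after)]
print(max(drifts))  # worst-case drift of the population, in %FSO
```

Reporting both a typical and a maximum value over the device population, as the paper does, captures the spread of such a distribution.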
7
Pressure Sensor IPs
Semiconductor foundries generally provide some form of design support in order to enable their customers to use foundry processes more efficiently and to minimise time-to-market. Besides a detailed process specification and design rule documentation, X-FAB's design support also includes a comprehensive library of characterised circuit components, the so-called design kit. In the design kit, the circuit designer can find primitive devices, such as transistors or diodes, as well as analogue and digital cells, e.g. operational amplifiers, comparators, logic gates, etc. Moreover, there are IP blocks for more complex circuit sub-functions, such as memory blocks. For all library elements, models are available, and most of the common EDA tools are supported. In this way, the design of integrated circuits can be made considerably more efficient. The designer can compose the circuit from well-characterised components and re-use working sub-circuits, instead of developing everything from scratch. In this way, expensive and time-consuming design iterations can be kept to a minimum. Since this concept has proven to be highly beneficial for the development of integrated circuits, X-FAB has extended the idea to integrated pressure sensors. Thus, a discrete pressure sensor becomes an IP block which can be incorporated into a customer ASIC design. In this way, the sensor, its signal conditioning electronics and ASIC functionality can be combined in a system-on-chip. X-FAB provides GDS data and comprehensive characterisation data for all of the above described CMOS pressure sensors. Furthermore, in-house expertise in finite element modelling and sensor design allows the development of customer-specific pressure sensors. The practicality of developing device models of pressure sensors for EDA platforms is currently being assessed.
8
Conclusions
Based on a 1.0µm CMOS technology platform, two process modules have been developed for the manufacture of integrated piezoresistive pressure sensors. Prototype sensors have been fabricated and characterised. The results show that the performance is comparable to state-of-the-art discrete pressure sensors. Preliminary reliability tests have been carried out, and it has been shown that the device characteristics remain stable. With the new technology it is possible for the first time to manufacture integrated pressure sensors in a semiconductor foundry. Sensor IPs are available for a set of pressure ranges and can be built into customer IC designs. With this approach, it is anticipated that the development cost and time-to-market for integrated pressure sensors can be significantly reduced, which could make these devices attractive for an increasing number of applications.
References
[1] R. Bornefeld, W. Schreiber-Prillwitz, H.V. Allen, M.L. Dunbar, J.G. Markle, I. van Dommelen, A. Nebeling, J. Raben, "Low-Cost, Single-Chip Amplified Pressure Sensor in a Moulded Package for Tyre Pressure Measurement and Motor Management", in Advanced Microsystems for Automotive Applications, Springer, Berlin, 2004, pp. 39-50
[2] K. E. Peterson, P. Barth, et al., "Silicon fusion bonding for pressure sensors", in Proceedings of the Solid-State Sensor and Actuator Workshop, Hilton Head, 1988, pp. 144-147
Dr. Gerald Dahlmann, Gisbert Hölzer, Siegfried Hering, Uwe Schwarz
MEMS Process Development
X-FAB Semiconductor Foundries AG
Haarbergstr. 67
D-99097 Erfurt, Germany
[email protected]

Keywords:
MEMS, integrated pressure sensor, system-on-chip, foundry process, bulk micromachining, wafer bonding
High Dynamic Range CMOS Camera for Automotive Applications

W. Brockherde, C. Nitta, B.J. Hosticka, I. Krisch, Fraunhofer Institute for Microelectronic Circuits and Systems
A. Bußmann, Helion GmbH
R. Wertheimer, BMW Group Research and Technology

Abstract
We have developed a CMOS camera featuring high sensitivity and a high dynamic range, which makes it highly suitable for automotive applications. The camera exhibits a 768x576 pixel resolution and a dynamic range of 118dB at 50fps. The measured noise equivalent exposure is 66pJ/cm2 at a wavelength of 635nm, which corresponds to 4.9mlx at an integration time of 20ms. The centrepiece of the camera is a novel CMOS image sensor chip integrated in a standard 0.5µm CMOS technology. The image sensor employs the multi-exposure principle and was used to build an automotive camera, which contains the CMOS image sensor, camera electronics, a 2/3 inch lens and an IEEE 1394 FireWire interface. This interface enables the transfer of user-defined sub-frames of 512x256 pixels, which exhibit a 118dB dynamic range at 50fps, or a 64dB dynamic range when acquired at a single integration time.
1
Introduction
In numerous applications image sensors are required which exhibit high sensitivity and a high dynamic range. One of the most demanding applications is automotive vision. Among the most crucial specifications for this application (and the most difficult to fulfil) are the dynamic range of about 120dB and the sensitivity, which should be better than 0.1lx [1]. Based on our experience with automotive CMOS image sensors, we have set the following minimum specifications: a dynamic range of 120dB, a signal-to-noise ratio corresponding to 8bit, and a noise equivalent power of 50µW/m2 at 635nm wavelength and 20ms integration time [2, 3]. To realize this, we have developed CMOS image sensors based on a multi-exposure approach. The 1st generation design featured 256x256 pixel resolution in 1µm CMOS technology and yielded a dynamic range of 120dB at 50fps [2, 3].
In this paper, we report on an improved high-dynamic range sensor realized in 0.5µm CMOS technology. The spatial resolution has been increased from 65k pixels to 442k pixels while the noise equivalent exposure (NEE) has been drastically improved. We have retained the multi-exposure approach, which enables the realization of linear imager characteristics in the entire brightness range and yields excellent image quality and contrast superior to imagers with nonlinear characteristics.
2
Circuit Description
The basic architecture of the presented image sensor is depicted in figure 1. The readout circuitry is located at the bottom of the array and enables parallel column readout while sequentially accessing all rows. The column and row shift registers enable sub-frame readout if so desired.
Fig. 1.
Image sensor architecture
Each pixel contains the 3-transistor pixel circuit shown in figure 2 [2]. The photodiode is based on the pn-junction formed between an n+-diffusion and the p-substrate.
The column readout amplifier with correlated double sampling (CDS) capability is shown in more detail in figure 3. The CDS cancels the effects of reference voltage variations, all offset voltages, and low-frequency noise. However, it doubles the high-frequency white-noise power.
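Both properties of CDS, offset cancellation and white-noise doubling, can be illustrated numerically. The following sketch (illustrative values only, not device data from the paper) subtracts a reset sample from a signal sample and checks the statistics of the difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000            # number of simulated pixel reads
offset = 0.05          # fixed amplifier offset (V), assumed value
sigma = 1e-3           # white-noise RMS per sample (V), assumed value
signal = 0.3           # photo-signal accumulated between samples (V)

# CDS takes two samples per pixel read: the reset level and the signal level.
reset_sample = offset + sigma * rng.standard_normal(n)
signal_sample = offset + signal + sigma * rng.standard_normal(n)
cds_out = signal_sample - reset_sample

print(cds_out.mean())             # ~0.3 V: the offset is cancelled
print(cds_out.var() / sigma**2)   # ~2.0: white-noise power is doubled
```

The variance of the difference of two independent noise samples is the sum of the two variances, which is exactly the factor-of-two noise penalty mentioned in the text.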
Fig. 2. Pixel circuit (left)
Fig. 3. Column readout amplifier with CDS (right)

3
Pixel Circuit Design
Besides the area and the MTF, the most critical parameter of the pixel circuit is its noise, because it defines the dynamic brightness range, the signal-to-noise ratio, and the noise equivalent power, i.e. the minimum detectable irradiance at the photodiode. The noise referred to the photodiode consists of photon shot noise, dark current noise, reset noise, and noise of the readout electronics. Hence, the noise depends on the photocurrent Iphoto, the dark current IS, the total photodiode capacitance CP, and the readout noise. Although both CP and IS depend on the geometry of the pn-junction photodiode, it is the photodiode capacitance which is much more dependent on area. For small-area photodiodes, the dark current is mostly generated at the sidewall parts of the junction, above all at the corners [4]. This means that the dependence of the dark current on the area is rather weak, especially for small-area diodes. The photodiode capacitance, on the other hand, is much less affected by the sidewall parts, so that it exhibits a rather heavy dependence on the area of the bottom planar part of the junction. If VREF is the reference voltage used for the reset operation, S the sensitivity of the photodiode, APD the photodiode area, IS the dark current of the photodiode, CP the photodiode capacitance, and tint the integration time, then we can calculate the dynamic range as
(1)
and the maximum signal-to-noise ratio as (2)
where the additional noise term is the total noise of the readout electronics referred to the photodiode. Note that the reset noise is doubled due to CDS. The noise equivalent power (NEP), which represents the noise equivalent input irradiance, is given by (3)
and is related to the noise equivalent exposure (NEE) as NEP=NEE/tint. A further important measure is the maximum voltage swing at the photodiode which is given by (4)
For the 0.5µm CMOS process used here we have assumed VREF=2V (for a power supply voltage of 3.3V) and tint=20ms (i.e. 50fps) and experimentally found S=0.2A/W at 635nm. Moreover, the dark current measurements confirmed that this current is nearly constant for small-area photodiodes, though it tends to rise with the photodiode area. Experimentally, we have found that the value IS=3.5fA is a good approximation for small-area photodiodes. If we assume that we can design a sufficiently low-noise operational amplifier for the CDS stage and that the dark-current contribution is not dominant and can be neglected, we can plot the isochor diagram shown in figure 4.
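The claim that the dark-current term can be neglected is easy to make plausible: the charge integrated from IS over one frame is tiny compared to the charge of a full voltage swing. In the sketch below, CP=10fF is the design target from the text, while taking the full reset voltage VREF=2V as the swing is a simplifying assumption.

```python
q = 1.602e-19       # elementary charge (C)
I_s = 3.5e-15       # dark current (A), value from the text
t_int = 20e-3       # integration time (s)
C_p = 10e-15        # photodiode capacitance (F), upper design target
V_swing = 2.0       # assumed maximum swing ~ VREF (V), rough upper bound

dark_electrons = I_s * t_int / q    # charge leaked per frame, in electrons
full_well = C_p * V_swing / q       # charge of a full swing, in electrons
print(round(dark_electrons))        # 437 electrons
print(dark_electrons / full_well)   # ~0.0035: dark charge is negligible
```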
Fig. 4.
Isochor diagram for tint=20ms (parameter: APD)
As we plan to use the multi-exposure approach (i.e. several different integration times) again, the dynamic range with a single integration time must be at least 55dB: the isochor diagram then yields CPmin=1.5fF. On the other hand, a SNRmax equivalent to 8bit yields CPmin=5.5fF while NEPmax=50µW/m2 yields e.g. APDmin=37µm2 for CP=5.5fF and APDmin=87µm2 for CP=35fF for tint=20ms. The diagram shows that APDmin=50µm2 (i.e. fill factor of 50% at a pixel pitch of 10µm) is perfectly feasible and the capacitance CP should not exceed 10fF.
4
Realization and Measurements
A CMOS imager has been designed using the above optimization procedure and fabricated in a standard 0.5µm CMOS technology featuring p-substrate, n-well, single poly, and triple metal layers (see figure 5). The chip realizes the multi-exposure method mentioned above [2, 3]. It uses up to 4 different integration times; the number of integration times can be defined by the user. The minimum integration time is 20µs, while the maximum integration time is 20ms for a single integration at 50fps. The integration times can be interlaced, yielding e.g. 4 integrations within 20ms with 16ms for the longest integration time. The light sensitive area contains 768x576 pixels, while the total pixel count is 769x604. The on-chip voltage gain can be switched between 1 and 7.
Fig. 5.
Chip photomicrograph
The measured NEP was 33µW/m2 (i.e. 41 noise electrons) for tint=20ms. This is due to the small capacitance CP and the optimized low noise of the readout electronics. The NEE is then 66pJ/cm2, which yields a sensitivity of 4.9mlx at 20ms integration time and 635nm wavelength. The remaining technical data can be found in the table below.
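These figures are mutually consistent and can be re-derived from the reported NEP alone. The sketch below does so; the photopic luminous efficiency V(635nm) ≈ 0.217 used for the lux conversion is an assumed standard-table value, not a number from the paper.

```python
NEP = 33e-6          # W/m^2, measured noise equivalent irradiance
t_int = 20e-3        # s, integration time
S = 0.2              # A/W, photodiode sensitivity at 635 nm
A_pd = 50e-12        # m^2, photodiode area (50 um^2)
q = 1.602e-19        # C, elementary charge

NEE = NEP * t_int                       # noise equivalent exposure (J/m^2)
noise_e = NEP * A_pd * S * t_int / q    # equivalent noise electrons
E_v = NEP * 683 * 0.217                 # lx: 683 lm/W times assumed V(635 nm)

print(NEE * 1e12 / 1e4)   # 66 pJ/cm^2
print(round(noise_e))     # 41 noise electrons
print(E_v * 1e3)          # ~4.9 mlx
```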
Fig. 6.
Camera demonstrator
5
Camera
To demonstrate the performance of the image sensor presented above, we have developed a camera demonstrator (see figure 6). The camera contains a processor which generates composite images exhibiting a dynamic range of 118dB from images acquired at different integration times. It features a 2/3 inch lens and an IEEE 1394 FireWire interface. This interface enables the transfer of user-defined sub-frames containing 512x256 pixels and featuring a 118dB dynamic range at 50fps. Full-size frames can be transferred at 50fps, exhibiting a 64dB dynamic range when acquired at a 20ms single integration time. Figure 7 shows a single image of a video sequence taken on a highway at sunset in high-dynamic range mode with 4 different integration times.
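The multi-exposure recombination can be sketched with a simple "longest unsaturated exposure" rule: for every pixel, keep the longest integration time that did not saturate and rescale it to a common linear scale. This is only an illustration of the principle; the paper does not describe the processor's actual algorithm at this level, and all numeric values below are assumed.

```python
import numpy as np

def compose_hdr(frames, t_ints, sat_level=0.95):
    """Combine frames taken at different integration times into one
    linear HDR image using the longest-unsaturated-exposure rule."""
    frames = np.asarray(frames, dtype=float)
    t_ints = np.asarray(t_ints, dtype=float)
    order = np.argsort(t_ints)[::-1]          # longest exposure first
    t_max = t_ints.max()
    hdr = frames[order[0]] * (t_max / t_ints[order[0]])
    saturated = frames[order[0]] >= sat_level
    for i in order[1:]:
        scaled = frames[i] * (t_max / t_ints[i])   # keep the response linear
        hdr = np.where(saturated, scaled, hdr)     # overwrite saturated pixels
        saturated &= frames[i] >= sat_level        # still saturated? try shorter
    return hdr

# Two pixels, two exposures (20 ms and 2 ms): the bright pixel saturates
# in the long exposure and is recovered from the short one, scaled by ~10.
print(compose_hdr([[0.5, 1.0], [0.05, 0.6]], [20e-3, 2e-3]))  # ~[0.5, 6.0]
```

Scaling by the exposure ratio preserves the linear characteristic the text emphasizes; the ratio of longest to shortest integration time (20ms/20µs = 1000) extends the single-exposure range by about 60dB.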
Fig. 7.
Image sample acquired in high-dynamic range mode
6
Summary and Conclusions
We have presented a high-sensitivity, high dynamic range CMOS image sensor fabricated in a 0.5µm CMOS technology. We have shown how the pixel layout and circuit design can be optimized for a given technology. When carefully optimized, the sensitivity of CMOS image sensors can match or even surpass that of CCD image sensors while exhibiting a superior dynamic range. Our optimization has yielded an image sensor which is highly suitable, among others, for automotive applications including night vision.
References
[1] B. Höfflinger, "Vision chips make driving safer", Europhotonics, June/July 2001, pp. 49-51
[2] M. Schanz, C. Nitta, T. Eckart, B.J. Hosticka, and R. Wertheimer, "A high dynamic range CMOS image sensor for automotive applications", Proc. of the 25th European Solid-State Circuits Conference, 21-23 September 1999, Duisburg, Germany, pp. 246-249
[3] M. Schanz, C. Nitta, A. Bußmann, B.J. Hosticka, and R. Wertheimer, "A high dynamic range CMOS sensor for automotive applications", IEEE Journal of Solid-State Circuits, vol. 35, no. 7, pp. 932-938, July 2000
[4] H.I. Kwon, I.M. Kang, B.-G. Park, J.D. Lee, and S.S. Park, "The analysis of dark signals in the CMOS APS imagers from the characterization of test structures", IEEE Trans. on Electron Devices, vol. 51, no. 2, pp. 178-183, Feb. 2004
[5] W. Brockherde, A. Bußmann, C. Nitta, B.J. Hosticka, and R. Wertheimer, "High-Sensitivity, High-Dynamic Range 768 x 576 Pixel CMOS Image Sensor", Proceedings of the 30th European Solid-State Circuits Conference, ESSCIRC 2004, Leuven, Belgium, Sept. 2004, pp. 411-414
Werner Brockherde, Christian Nitta, Bedrich Hosticka, Ingo Krisch
Fraunhofer Institute for Microelectronic Circuits and Systems
Finkenstr. 61, D-47057 Duisburg, Germany
[email protected]
[email protected]
[email protected]
[email protected]

Arndt Bußmann
Helion GmbH
Bismarckstr. 142, D-47057 Duisburg, Germany
[email protected]

Dr. Reiner Wertheimer
BMW Group Research and Technology
80788 Munich, Germany
[email protected]

Keywords:
CMOS camera, high-dynamic range camera, image sensor, automotive camera
Performance of GMR-Elements in Sensors for Automotive Application

B. Vogelgesang, C. Bauer, R. Rettig, Robert Bosch GmbH

Abstract
This paper presents an introduction to the principle of the GMR effect and investigations into three types of GMR sensors. Attention is directed to the automotive applications ABS/ESP, engine management systems and transmission control. A sensitive GMR element with an integrated circuit has been developed that fulfils all requirements of the automotive environment for incremental speed sensors. The measurements of the sensors were performed using both active (magnetic) and passive (steel) impulse wheels in a temperature range of -40°C to 170°C. In addition to these investigations, simulations of the magnetic field have been performed to define the magnetic circuit. In order to quantify a direct system advantage, the functionality is discussed in comparison to products in mass production based on Hall and AMR technology.
1
Introduction
The requirements of sensor technology in the automotive sector have continuously risen in the last few years. In particular within the area of vehicle dynamics and engine management systems, active sensors based on magnetic principles are an established commodity. The contactless principle of these sensors leads to a very high robustness in the automotive environment. The system requirements, e.g. for higher temperature stability and increasing mounting tolerances, are continuously rising. Fundamental changes in the automotive market and the ever increasing number of sensors in a vehicle have led to the fact that cost is the most important criterion for success in the sensor market; nevertheless, cost must be evaluated as total system cost. In the past the magnetic sensors mounted in a motor vehicle have been passive inductive sensors, but the trend is moving towards active magnetic sensors, whose measurement principle is based on the Hall effect or on the Anisotropic MagnetoResistive (AMR) effect.
Current developments in the field of GMR (giant magneto resistive) sensor technology show important functional advantages over conventional sensor principles. This is due to an improved sensitivity and temperature stability, which will be crucial for future applications. The higher costs at the start of GMR production are offset by the improved function and robustness, which reduce the required system tolerances and thus the total system cost. Increased robustness through smart integration of compatible technologies and full supply-chain test optimization can lead to a significant improvement in overall quality, reducing the total cost of ownership.
2
The GMR Effect
The strong dependence of the electrical resistance on an applied magnetic field is called the giant magnetoresistive (GMR) effect. It results from the magnetic coupling of adjacent ferromagnetic layers separated by thin non-ferromagnetic layers. This effect can be used for different GMR sensor structures, which differ in the stack design. In our investigations we examined multilayer (ML), multilayer with integrated hard magnetic bias layer (HMB) and spin-valve (SV) structures, which are described in the following sections.
Fig. 1. Sketch of the GMR effect. Antiparallel (left) and parallel (right) magnetization of a GMR multilayer.

2.1
Multilayer
GMR multilayer (ML) consist of a stack with alternating layers of ferromagnetic and non-ferromagnetic materials (see figure 1) with layer thicknesses in the range of a few nanometers. The magnetic coupling of the ferromagnetic layers depends on the layer thickness of the non-ferromagnetic spacer layer.
For the GMR multilayer an appropriate thickness is chosen to generate an antiferromagnetic coupling of the adjacent ferromagnetic layers. In the parallel coupling state, only one type of electron spin is scattered at the interfaces, resulting in a low resistance of the stack, whereas in the antiparallel coupling state both types of electrons scatter at the correspondingly magnetized layers, resulting in a high resistance of the whole stack. The characteristic curve of the multilayer stack arising from the magnetic behaviour explained above is shown in figure 2. The picture on the right hand side of figure 2 shows a TEM image of the investigated multilayer structure. These GMR multilayers show the highest sensitivity in the magnetic field range of 10 to 20mT or -10 to -20mT. Therefore an external bias magnet is used to provide a magnetic field that shifts the working point into this area. The advantages of multilayer structures are their simple layer system, their high GMR effect level and their high stability against perturbing magnetic fields. Possible applications for multilayers in the automotive sensor area are e.g. incremental speed sensors, position sensors and current sensors.
Fig. 2. Characteristic curve (GMR vs. magnetic field) of a multilayer structure (left) and TEM picture of a multilayer stack (right)

2.2
Spin-Valves
Alternative GMR stacks suitable for sensor applications are spin-valves (SV). They consist of two ferromagnetic layers with a nonmagnetic spacer layer, which is thick in comparison to ML spacer layers in order to decouple the ferromagnetic layers. A typical SV structure is depicted in figure 3 a. One magnetic layer is referred to as the reference layer and the other as the free layer. The alignment of the reference layer is defined by the coupling with the antiferromagnet. The anti-parallel state occurs when the magnetization direction of the free layer is changed by external fields. Spin-valve structures have much lower GMR values than multilayer structures, but they feature low coercivities and high sensitivities (figure 3 b). The reference layer is selected such that it shows a large uniaxial anisotropy. The stability of this layer can be enhanced by adding a synthetic antiferromagnet (SAF) as shown in figure 3. The pinned layer is magnetically biased in the direction of the easy axis by means of the exchange bias effect. Exchange bias describes the pinning of the magnetization of a ferromagnetic layer by an adjacent antiferromagnet.
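The angular behaviour of a spin-valve is commonly modelled by a cosine dependence of the resistance on the angle between the free-layer and reference-layer magnetizations. The sketch below uses this textbook model; the base resistance and GMR ratio are assumed illustrative numbers, not device data from the paper.

```python
import math

def spin_valve_resistance(theta_deg, r0=1000.0, gmr=0.08):
    """Textbook spin-valve model: R is lowest for parallel alignment
    (theta = 0) of free and reference layer and highest for antiparallel
    alignment (theta = 180 deg)."""
    theta = math.radians(theta_deg)
    return r0 * (1.0 + 0.5 * gmr * (1.0 - math.cos(theta)))

print(spin_valve_resistance(0.0))     # 1000.0 ohms, parallel (low) state
print(spin_valve_resistance(180.0))   # ~1080 ohms, antiparallel (high) state
```

The smooth, nearly hysteresis-free cosine characteristic is what makes spin-valves attractive for the angle measurements mentioned in the text.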
Fig. 3.
Typical spin-valve structure a) and its characteristic curve (GMR vs. magnetic field) b)
The advantages of this type of spin-valve structure are its high, adjustable sensitivity, its negligible hysteresis compared to MLs and its stability against perturbing magnetic fields of more than 100mT. Its applications in the automotive sensor area can be extended to angle measurements.
3
GMR Sensor Structures
For a benchmark of GMR sensors against Hall and AMR sensors, a GMR ASIC was developed to process the GMR structure signals and provide a current interface for incremental speed sensors (see figure 4 b). Discrete and integrated versions were both investigated to verify the feasibility of integrating the technological processes of the GMR layers and the CMOS without loss of functionality.
3.1
Discrete Sensor
Figure 4 a depicts the layout of a GMR multilayer sensor bridge for a discrete sensor. The four resistors of the Wheatstone bridge are arranged in a gradiometer geometry to avoid signals arising from interfering magnetic fields.
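The interference rejection of the gradiometer arrangement can be illustrated with a toy model: diagonally opposed bridge resistors sense the field at the same position, so a homogeneous field shifts both half-bridge midpoints equally and the differential output stays zero. The linearized GMR element and all numeric values below are assumptions for illustration only.

```python
def bridge_output(r1, r2, r3, r4, v_supply=5.0):
    """Wheatstone bridge: voltage between the two half-bridge midpoints."""
    return v_supply * (r2 / (r1 + r2) - r4 / (r3 + r4))

def gmr_element(h_mT, r0=1000.0, sens=5.0):
    """Toy linearized GMR element around its bias point (assumed values,
    ohms per mT)."""
    return r0 + sens * h_mT

def gradiometer(h_a, h_b):
    """Gradiometer bridge: resistors at position A sit diagonally opposite
    each other, likewise for position B, so only the field difference
    h_b - h_a produces an output."""
    return bridge_output(gmr_element(h_a), gmr_element(h_b),
                         gmr_element(h_b), gmr_element(h_a))

print(gradiometer(3.0, 3.0))   # 0.0: a homogeneous field is rejected
print(gradiometer(2.0, 4.0))   # nonzero: a field gradient gives a signal
```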
In the discrete version the sensor is made up of the GMR sensor bridge and the ASIC. The two separate chips are connected via bond wires in a SOIC8 package in a stacked-die geometry.
Fig. 4. GMR sensor bridge layout with multilayer resistors arranged in a Wheatstone bridge configuration a). After signal processing the sensor triggers a pulse at every period via a current interface b).

3.2
Integrated Sensor
For robust handling and a smaller outline of the GMR sensor, an integrated version has also been realized, as shown in figure 5. The integration is achieved by depositing the GMR layers on top of empty but pre-processed areas on the left and right hand side of the integrated circuit.
Fig. 5. Integrated GMR sensor: The GMR meanders are deposited on the left and right hand side of the integrated circuit a), the integrated GMR sensor in a SOIC8 package b) and a package similar to PSSO4.

4
Magnetic Simulation
In order to use GMR sensors in combination with steel wheels, a suitable bias magnet is mandatory. The aim must be to develop a magnetic circuit in which the GMR element works close to maximum sensitivity in the middle of the operating range. Due to the nonlinear characteristic curve of GMR elements, the requirements on the magnetic field distribution of a back-bias magnet are considerably higher than for back-bias magnets for Hall sensors.
Fig. 6.
FEM model of a toothed steel wheel.
A valuable tool for designing an appropriate magnet is the numerical simulation of magnetic fields. This is done with simulation tools that couple the finite element method (FEM) with the boundary element method (BEM). Figure 6 shows a finite element model of a toothed steel wheel.
Fig. 7.
Signal characteristic of a GMR bridge signal at an air gap of 6.0mm.
After the generation of a finite element mesh, the calculation of the electromagnetic fields is performed and the data is transferred and evaluated. Together with the known characteristic curve of the GMR elements, the sensor signals can be computed. Depending on the signal results, the magnetic circuit is changed and optimized in an iterative process. Figure 7 shows a GMR bridge signal with a nearly offset-free signal characteristic.
5
Application
5.1
Automotive System Requirements
In this chapter we focus on three different automotive applications: wheel speed sensors, engine speed sensors and rotational speed sensors for transmission control. The geometrical, thermal, chemical and electrical system requirements differ depending on the target application. The three applications have the following in common: the environmental influences are challenging due to operation in the engine compartment; sensors based on magnetic principles have become established because of their contactless, robust and thus very reliable function; and differential measurement principles are used wherever possible to reject incident magnetic interference. Wheel speed sensors are placed in the wheel hubs, either inside or outside the wheel bearings. Their task is to determine the rotational velocity of the wheels for systems like ABS, ESP, etc. Each wheel and axle assembly is equipped with a toothed tone wheel or magnetic encoder that rotates with the wheel near the sensor. As the wheel rotates, the magnetic field around the sensor fluctuates, so that it works as an incremental sensor. An additional back-bias magnet is placed behind the speed sensor when a tone wheel is used. The output signal is transmitted via a two-wire cable to the corresponding control unit. Different impulse wheels are in use to generate the magnetic field fluctuations (figure 8): magnetic encoders with 32 to 54 pole pairs (usually 48) and axial or radial magnetization, and toothed tone wheels with 46 or 48 teeth. The speed sensor is attached to the hub near the brake disk, resulting in high temperature requirements of -40°C to 150°C. The air gap requirements vary depending on the design of the hub, up to 3.5mm. Furthermore, the wheel speed sensor must be able to work with frequencies up to 4500Hz. Sensors which are able to detect the direction of rotation are increasingly in demand.
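The frequency requirement follows directly from the incremental principle: the output frequency is the number of increments per revolution times the wheel's rotational frequency. The sketch below illustrates this; the 48 pole pairs come from the text, while the 2m rolling circumference is an assumed illustrative value.

```python
def sensor_frequency_hz(v_kmh, increments_per_rev=48, circumference_m=2.0):
    """Output frequency of an incremental wheel speed sensor: increments
    per revolution times wheel revolutions per second. The rolling
    circumference is an assumed value for illustration."""
    wheel_rev_per_s = (v_kmh / 3.6) / circumference_m
    return increments_per_rev * wheel_rev_per_s

print(round(sensor_frequency_hz(250)))  # 1667 Hz at 250 km/h
```

Even at high vehicle speeds the resulting signal frequency stays well below the 4500Hz the sensor must handle, leaving margin for smaller wheels and denser encoders.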
Fig. 8.
Incremental speed detection with magnetic encoders (a) and toothed tone wheels (b).
To characterize the performance of different sensors with respect to their suitability for wheel speed applications, measurable parameters have to be found. The characteristic parameters discussed in this paper are: air gap, duty-cycle and jitter. For engine management, incremental speed sensors are used to measure the speed of the crankshaft. Both magnetic encoders and toothed tone wheels are used to generate the fluctuations of the magnetic field for the sensor. These are radially magnetized encoders with 60-2 pole pairs and toothed tone wheels with 60-2 teeth. The required temperature range is similar to the one for the wheel speed application, from -40°C to 150°C, with a similar air gap range of 0mm to 3.5mm. The frequencies that can occur on the output signal of the sensor can be as high as 10kHz. A digital voltage output is required for sensors in engine management systems. The recognition of the direction of rotation is a feature that will be used in future systems for the crankshaft. Rotational speed sensors for transmission control operate almost exclusively with metal tone wheels: toothed or perforated with holes. The diameter and the number of teeth or holes vary widely depending on the transmission in which the sensor is applied. Therefore different sensor types can only be compared in reference to a defined impulse wheel, and the results cannot easily be transferred to every impulse wheel used in different transmissions. The temperature requirements in the transmission barely differ from those of the preceding applications; the range covers -40°C to 150°C. In contrast, the air gap and the frequency ranges are greatly increased, to 5.5mm and 12kHz respectively. In addition to direction-of-rotation recognition, vibration recognition is the most desirable new feature for transmission sensors.
Performance of GMR-Elements in Sensors for Automotive Application
For a comprehensive evaluation, the overall benefit for the system must be taken into account. However this will not be considered here, because it is outside the scope of this paper.
Tab. 1. Summarized requirements for incremental speed sensors in wheel speed, crankshaft or transmission application.

5.2
GMR Sensor Performance
In the wheel speed sensor application the temperature and air gap requirements are, as mentioned before, the most critical parameters. GMR structures appear to be of great advantage for this application due to their excellent temperature stability and their enhanced sensitivity compared to AMR and Hall sensors. Therefore the focus of the investigations for the wheel speed application has been the air gap dependence on temperature when a standard magnetic encoder is used. Figure 9 shows two different types of wheel speed sensors. A bottom-read sensor with an integrated back-bias magnet for steel wheel application is displayed on the left hand side; a side-read sensor and a wheel bearing with an integrated magnetic encoder are displayed on the right hand side.
Fig. 9.
Wheel speed sensors for ABS, ESP, ... systems.
For crankshaft applications, on the other hand, the parameters repeatability and phase shift are of most importance. For this reason, investigations for this application were made by measuring the 360° repeatability and the phase shift of the sensors over frequency and air gap at different temperatures. A radially magnetized encoder with 60-2 pole pairs was used as a reference wheel for the measurements. In figure 10 a) an example for the crankshaft application with a 60-2 tooth tone wheel and an attached incremental speed sensor is displayed. In transmission applications the requirements for the air gap are constantly growing due to the increasing mechanical clearance of the gear wheels. Therefore the air gap is the only parameter discussed here. It was measured in front of a rotating perforated disc wheel with 40 holes at room temperature. Figure 10 b) shows an automated transmission and its control module with an integrated rotational speed sensor.
Fig. 10. Speed sensor for crankshaft speed detection a) and rotational speed sensor for transmission control b).
In the following, the performance of a GMR speed sensor based on a gradiometer principle is demonstrated in comparison with other sensor technologies. Both discrete (2 chips: GMR bridge and ASIC) and integrated (1 chip) GMR sensors were investigated and showed the best performance. The ability to integrate GMR processes with CMOS processes could be verified. The integrated version is much more suitable for automotive application: it has smaller dimensions and its packaging is much more reliable than that of a discrete two-chip version. The summary of the air gap benchmark is listed in table 2. These values were obtained using a small sample size and are for development purposes only. Air gap is the most important parameter, because it can be an enabler for cost reduction in the system due to reduced precision requirements on mechanical parts. The GMR sensors are compared to common Hall and AMR sensors available on the sensor market. The air gaps for both GMR sensor types (spin-valve and multilayer) show a great improvement compared to traditional sensor technologies. The expected low temperature dependence of the air gap was confirmed for all three applications.
Tab. 2.
Maximum air gaps of different sensor types for different applications are determined from measurements with the same impulse wheel for each application.
The crankshaft measurement results for the 360° repeatability and the phase shift, especially for GMR sensors with spin-valve structures, are outstanding. The dependence of these parameters on temperature and frequency is very weak. In summary, the investigated GMR sensors show high potential for applications in the automotive area due to their enlarged air gap, low jitter and low temperature dependence. New developments in the automotive sector, like corner modules with integrated wheel speed sensors, as well as trends to reduce the size of wheel bearings and thus the magnetic stimulation, demand speed sensors with enlarged air gaps. This may lead to cost reductions in the systems and, as a result, boost the introduction of the new GMR technology into the market.
References
[1] V. Gussmann, D. Draxelmayer, J. Reiter, T. Schneider, R. Rettig, "Intelligent Hall Effect based Magnetosensors in Modern Vehicle Stability Systems", Convergence 2000, Detroit, 2000-01-CO58
[2] J. Marek, H.-P. Trah, Y. Suzuki, I. Yokomori, "Sensors for Automotive Technology", Wiley-VCH, Weinheim, 2003
[3] B. Vogelgesang, C. Bauer, R. Rettig, "Potenzial von GMR-Sensoren in Motor- und Fahrdynamiksystemen", Sensoren und Messsysteme 2004, Ludwigsburg, 2004
[4] D. Hammerschmidt, E. Katzmaier, D. Tatschl, W. Granig (Infineon Technologies Austria AG), J. Zimmer (Infineon Technologies AG), B. Vogelgesang, R. Rettig (Robert Bosch GmbH), "Giant magneto resistors – sensor technology & automotive applications", SAE 2005, in press
Dr. Birgit Vogelgesang, Dr. Christian Bauer, Dr. Rasmus Rettig
Robert Bosch GmbH
Business Unit Chassis Systems
Robert-Bosch-Allee 1
D-74232 Abstatt, Germany
[email protected]
[email protected]
[email protected]

Keywords:
GMR, giant magnetoresistive, sensor, speed sensor, automotive sensor, magnetoresistive sensor, sensor application
360-degree Rotation Angle Sensor Consisting of MRE Sensors with a Membrane Coil

T. Ina, K. Takeda, T. Nakamura, O. Shimomura, Nippon Soken Inc.
T. Ban, T. Kawashima, Denso Corp.

Abstract
Progress in electronic vehicle control has created the need for contactless detection of rotation angles up to 360 degrees, for example for engine control valves or vehicle steering. Conventional contactless sensors that can detect rotation angles up to 360 degrees have been optical or magnetic rotary encoders, which are too large and expensive for use in automobiles. The authors have developed a small, low-cost sensor that is suited for mass production and able to detect rotation angles up to 360 degrees by combining MRE sensors with a membrane coil.
1
Background of Sensor Development
Progress in electronic vehicle control technology has raised the necessity for detection of rotation angles over a wider range with a higher degree of precision. In particular, crank angle sensors and steering wheel angle sensors are required to detect absolute angles up to 360 degrees. To meet this requirement, the authors have been engaged in the development of a rotation angle sensor that can linearly detect absolute angles up to 360 degrees. Typical rotation angle sensors available today are shown in Fig. 1. Optical sensors are sensitive to ambient conditions (temperature and staining), though they can detect angles more accurately than the other two types of sensors. Resolvers are large and expensive, since they are constructed of wire-wound coils. Due to these disadvantages, both sensor types are intrinsically unsuitable for automotive use. Two-phase MRE sensors, on the other hand, consist of a pair of rotary magnets that produce a parallel magnetic field. The angle of the magnetic field is detected by two magnetism-sensing elements arranged with their sensitive faces shifted by 90 or 45 degrees. Sensors of this type were considered suitable for development into inexpensive, high-accuracy rotation angle sensors for automotive use, since they can easily be integrated into single chips for miniaturization.
Fig. 1. Principle and disadvantages of conventional rotation angle sensors
The biggest problem with MRE sensors available today is that their angle detection range is limited to 0 to 180 degrees. The authors applied the principle of the two-phase MRE sensor to develop a new sensor that can detect rotation angles up to 360 degrees.
2 Angle Detection Principle of Conventional MRE Sensors
The angle detection principle of currently available MRE sensors is shown in Fig. 2. A parallel main magnetic flux is produced between a pair of opposed rotary magnets, and its rotation angle is detected by COS and SIN bridge MREs. Since these bridge MREs are arranged with their phases shifted by 45 degrees from each other, the outputs of the SIN and COS bridges vary at twice the rotation angle of the main flux, as shown in Fig. 2. To obtain a sensor output proportional to the rotation angle, the ARCTAN of these two outputs is calculated. Since this principle is based on the ratio of the two bridge outputs, it enables high-accuracy angle detection almost independently of ambient temperature. However, since MRE sensors detect angles with a 180-degree period, they cannot detect a full 360-degree rotation; in other words, this principle cannot distinguish between two angles differing by 180 degrees. This is because the MRE is a resistive element that in principle cannot identify the direction of magnetic flux. This fact had been a barrier to upgrading this type of sensor to one that can detect rotation angles up to 360 degrees.
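The 180-degree ambiguity described above can be illustrated with a short simulation; this is a sketch assuming ideal sinusoidal bridge outputs, with variable names chosen for illustration rather than taken from the paper:

```python
import math

def mre_angle(v_sin: float, v_cos: float) -> float:
    """Recover the magnet angle from the two bridge outputs.

    The bridge outputs vary at twice the mechanical angle
    (sin 2θ, cos 2θ), so ARCTAN recovers 2θ modulo 360 degrees and
    the mechanical angle is only known modulo 180 degrees.
    """
    two_theta = math.degrees(math.atan2(v_sin, v_cos)) % 360.0
    return (two_theta / 2.0) % 180.0

# Two mechanical angles 180 degrees apart give identical outputs:
for theta in (30.0, 210.0):
    v_sin = math.sin(math.radians(2 * theta))
    v_cos = math.cos(math.radians(2 * theta))
    print(round(mre_angle(v_sin, v_cos), 1))  # prints 30.0 both times
```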
Fig. 2. Angle detection principle of conventional MRE sensor
3 New MRE Sensor Capable of Detecting Rotation Angles up to 360 Degrees

3.1 Principle of Rotation Angle Detection up to 360 Degrees and Problems to Be Solved in New Sensor Development
The new method the authors devised for detecting rotation angles up to 360 degrees using the MRE sensor principle is outlined in Fig. 3. In this method, an auxiliary coil is wound around an MRE in a single direction. When energized, the auxiliary coil produces an auxiliary magnetic flux, which acts on the main magnetic flux produced by the rotary magnets to form a compound magnetic flux. The MRE detects this compound flux, whose direction differs from the one it detected before the auxiliary coil was energized. With the auxiliary magnetic flux set along the 0-degree direction in Fig. 3, the compound flux shifts clockwise with respect to the main flux if the direction of the main flux lies within 0 to 180 degrees; conversely, it shifts counterclockwise if the direction of the main flux lies within 180 to 360 degrees. The compound magnetic flux thus changes its phase relative to the main flux clockwise or counterclockwise depending on the direction of the main flux. The MRE detects this change in the direction of the compound flux to determine in which range, 0 to 180 degrees or 180 to 360 degrees, the phase angle of the main flux lies with respect to the reference angle (0). The auxiliary coil is energized (ON) for a predetermined time and de-energized (OFF) for another predetermined time. While the power to the auxiliary coil is OFF, the phase angle of the main flux is detected and two candidate rotation angles are calculated. The auxiliary coil is then energized to identify the direction of the main flux according to the method described above and thus determine the correct rotation angle. In this way, the new method can detect rotation angles over the full 360 degrees.
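A small vector model makes the clockwise/counterclockwise behaviour concrete. The sketch below treats the main flux as a unit vector and adds an auxiliary vector along the 0-degree axis; the 0.2 magnitude ratio is an assumed illustrative value, not a figure from the paper:

```python
import math

def compound_shift(theta_deg: float, aux_ratio: float = 0.2) -> float:
    """Signed change (degrees) in the detected flux direction when the
    auxiliary coil adds a small field along the 0-degree axis.

    aux_ratio is the auxiliary/main flux magnitude ratio (assumed).
    Negative means a clockwise shift, positive counterclockwise.
    """
    t = math.radians(theta_deg)
    # main flux unit vector plus auxiliary vector along 0 degrees
    x, y = math.cos(t) + aux_ratio, math.sin(t)
    compound = math.degrees(math.atan2(y, x)) % 360.0
    # wrap the difference into (-180, 180]
    return (compound - theta_deg + 180.0) % 360.0 - 180.0

def resolve_360(candidate_deg: float, shift: float) -> float:
    """Pick the true angle from the two 180-degree candidates."""
    return candidate_deg if shift < 0 else candidate_deg + 180.0

# Clockwise shift for 0..180 deg, counterclockwise for 180..360 deg:
print(compound_shift(60.0) < 0, compound_shift(240.0) > 0)  # True True
```

The sign of the shift alone is enough to select between the two candidate angles obtained while the coil is off.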
Fig. 3. Principle of rotation angle detection up to 360 degrees
The new method involved two problems. The first was that winding a wire around an MRE would increase the coil size, and hence the sensor size and cost. The second was that the direction of the compound magnetic flux is difficult to identify at certain phase angles of the main magnetic flux. The second problem is discussed in more detail with reference to the upper right section of Fig. 3. For a main magnetic flux with a phase angle near 90 or 270 degrees, the main and auxiliary fluxes produce a comparatively large phase difference, so the compound flux changes its phase angle sharply and the direction of the main flux can easily be identified. For a main flux with a phase angle near 0 or 180 degrees, however, the phase angle of the main flux is nearly equal to that of the auxiliary flux. In this case, the compound flux does not change its phase angle significantly, making it difficult to identify the direction of the main flux. To solve the second problem, we had to devise a method that makes the compound flux direction change steeply even in angle ranges where the phase difference between the main and compound flux remains almost unchanged.
Fig. 4. Method for improving identification performance in angle ranges near 0 and 180 deg.

3.2 Solution to the Second Problem
We took the following measures to solve the second problem. As shown in the left half of Fig. 4, changes in the direction of the main magnetic flux could easily be identified in rotation angle ranges near 90 and 270 degrees, while they were difficult to identify in ranges near 0 and 180 degrees, each shifted by 90 degrees from the former. If the MRE sensor could be rotated by 90 degrees together with the auxiliary coil, the characteristics would shift by 90 degrees, making the change in the direction of the main magnetic flux easy to identify in the ranges near 0 and 180 degrees. We therefore concluded that the angle ranges with small phase-difference change could be eliminated by shifting the ranges with large phase-difference change by 90 degrees. In practice, it was difficult to physically rotate the sensor during rotation angle detection, so we put this idea into practical use by designing the required function into an electronic circuit.
Fig. 5. Signal switching in bridge circuit

Fig. 6. Detectable angle range of bridge circuits
We used the symmetry of the sensor's bridge circuit to rotate the MRE sensor artificially. The MRE sensor consists of four symmetrical MRE elements whose phases are shifted by 90 degrees from each other, as shown in Fig. 5. In the SIN bridge circuit (1), an electric current is supplied through the top and bottom terminals and the output is taken from the right and left terminals. Since the four MRE elements are identical in construction, the current can instead be supplied through the right and left terminals, positioned at an angular distance of 90 degrees from the top and bottom terminals, with the output taken from the top and bottom terminals (a method we call "alternating energization"). Alternating energization artificially rotates the sensor by 90 degrees. In the same manner, we can alternately energize another pair of COS bridges whose phase is shifted by 45 degrees from the SIN bridges, giving four drive patterns in total. Fig. 6 shows the ranges that allow easy rotation angle detection for each of the four bridge circuits (1 through 4 in Fig. 5) over the whole range of 0 to 360 degrees. For example, SIN bridge 1 covers the angle ranges around 90 and 270 degrees, and the other three bridge circuits are assigned the remaining ranges. Together, the angle ranges of the four bridge circuits cover the whole range of 0 to 360 degrees.
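Selecting among the four drive patterns can be sketched as a simple lookup on the candidate angle. The even 45-degree slicing used below is an illustrative assumption; Fig. 6 of the paper defines the real assignment:

```python
def select_drive_pattern(candidate_deg: float) -> int:
    """Choose one of the four bridge energization patterns (1..4).

    The candidate angle from the ARCTAN step (known modulo 180 deg) is
    mapped to the pattern whose easy-identification range contains it.
    Mapping each 45-degree slice to one pattern is assumed here for
    illustration only.
    """
    return int((candidate_deg % 180.0) // 45.0) + 1

print([select_drive_pattern(a) for a in (10, 60, 100, 150)])  # [1, 2, 3, 4]
```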
Fig. 7. Producing symmetric magnetic field by spiral coil
The structure of the SIN and COS bridge circuits described above covers the whole angle range of 0 to 360 degrees. To realize this detection model, an auxiliary coil is required that generates the auxiliary magnetic flux shown in Fig. 4. Fig. 7 shows this structure: the winding is laid out as an octagonal spiral in a plane on the MRE sensor. We employed this structure for the following reason. When an electric current is applied to the two-dimensionally arranged spiral coil, magnetic flux is generated radially over the coil plane according to the right-hand rule, as shown by the arrows in Fig. 7. The square-shaped spiral coil in Fig. 7b can generate the auxiliary magnetic flux for the SIN bridge in Fig. 5, and similarly the coil shown at the right of Fig. 7b can generate the auxiliary flux for the COS bridge. This magnetic flux pattern gave us the same characteristics as those shown in Fig. 4 without physically rotating the auxiliary coil by 90 degrees. By laying the octagonal spiral coil (which serves the same function as the two square-shaped spiral coils in Fig. 7b) orthogonally with respect to all MRE elements, we could identify the direction of the compound magnetic flux at all times in response to the phase angle change of the main flux. Since the coil could be laminated onto the MRE sensor in a planar manner, the dimensions of the new sensor remained almost the same as those of the base sensor. This solved the first problem at the same time: we avoided any increase in sensor size and manufacturing cost.
3.3 Newly Devised Sensor
As discussed in the previous section, we could identify the direction of the main magnetic flux over the entire angle range from 0 to 360 degrees by devising "an auxiliary spiral coil of an octagonal shape" and "alternating energization," and we made a prototype MRE sensor. The construction of the new sensor is shown in Fig. 8 and a photograph of a prototype sensor chip in Fig. 9. We formed an insulation layer after forming the two bridge MREs on a silicon wafer, and then laminated onto the sensor an auxiliary coil made by etching an aluminum foil into an octagonal spiral. We set the number of coil turns between 16 and 33 after taking into account the dimensions of the MRE and the distance between adjacent coil elements. The resistance of the 16-turn coil was approximately 50 Ω, so the sensor could be operated from a 5 V supply with 100 mA pulses at a 10% duty cycle.
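The quoted drive figures can be checked with one line of arithmetic, assuming an ideal resistive coil:

```python
# Quick check of the coil drive figures quoted above, assuming an
# ideal 16-turn resistive coil:
V_SUPPLY = 5.0   # V, supply voltage
R_COIL = 50.0    # ohm, measured coil resistance
DUTY = 0.10      # 10% duty cycle of the energization pulses

i_pulse = V_SUPPLY / R_COIL        # peak pulse current: 0.1 A = 100 mA
p_avg = V_SUPPLY * i_pulse * DUTY  # average coil dissipation: 50 mW

print(f"{i_pulse * 1e3:.0f} mA pulse, {p_avg * 1e3:.0f} mW average")
```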
Fig. 8. Construction of the new sensor
As already discussed, a rotation angle is detected according to the following steps:
1. With no power applied to the auxiliary coil, a candidate rotation angle within a range of 0 to 180 degrees is detected and calculated.
2. According to the range within which the candidate angle falls, the applicable MRE alternating energization pattern is selected from the four patterns described above and applied.
3. The auxiliary coil is then activated, and the change in the MRE sensor output is detected to identify the direction of the compound magnetic flux.
4. Based on the direction thus identified, the actual rotation angle within a range of 0 to 360 degrees is determined.
Fig. 9. Photograph of the new sensor chip

3.4 Sensor Evaluation Result
The evaluation of the prototype sensor showed that the direction of the main magnetic flux could be identified over the entire angle range, as expected. As shown in Fig. 10, the angle range in which the sensor output changes when the auxiliary coil is energized shifts by 45 degrees according to the energization pattern of the two bridge circuits of the MRE sensor. In addition, the output changes from positive to negative or vice versa according to the direction of the main magnetic flux, making it possible to identify that direction. It was confirmed that the combination of these bridge circuits and the four energization patterns enables detection of rotation angles from 0 to 360 degrees.
Fig. 10. Characteristics of the new sensor
4 Conclusion
We devised a unique MRE sensor consisting of a combination of an octagonally shaped auxiliary spiral coil and MRE sensor bridges that are energized alternately. This sensor is very small in size and can detect absolute angles from 0 to 360 degrees. Based on the research results, we will develop an angle sensor that can be used for engine control and vehicle steering control.
5 Acknowledgement
The authors would like to express their gratitude to Dr. Kunihiko Hara, Senior Executive Director of Nippon Soken, Inc., for his helpful advice in completing this study and paper.
References
(1) Hans Theo Dorisen, Klaus Durkopp (2003). Mechatronics and drive-by-wire systems: advanced non-contacting position sensors. Control Engineering Practice 11 (2003) 191-197
(2) Ichiro Shibasaki (2003). Properties of InSb Thin Films Grown by Molecular Epitaxy and Their Applications to Magnetic Field Sensors. IEEJ Trans. SM, Vol. 123, No. 3, 2003
(3) Masanori Hayase, Takamasa Hatano, Takeshi Hatsuzawa (2002). Remote Power Sourceless Encoder Using Resonant Circuit with Loop-type Coil. IEEJ Trans. SM, Vol. 122, No. 3, 2002
(4) Yoshiyuki Watanabe, Takashi Mineta, Seiya Kobayashi, Toshiaki Mitsui (2002). 3-Axis Hall Sensor Fabricated by Microassembly Technique. T. IEEJ Vol. 122-E, No. 4, 2002
(5) Osamu Shimoe, Yasunori Abe, Yukimasa Shonowaki, Shigenao Hashizume (2002). Magnetic Compass Using Magneto-resistive Device. Hitachi Metals Technical Review Vol. 18 (2002)
Toshikazu Ina, Kenji Takeda, Tsutomu Nakamura, Osamu Shimomura Nippon Soken Inc. 14 Iwaya, Shimohasumi-cho Nishio-shi Aichi-pref. 445-0012 Japan
[email protected] Takao Ban, Takashi Kawashima Denso Corp. 1-1, Showa-cho Kariya-shi Aichi-pref. 448-8661 Japan
Low g Inertial Sensor based on High Aspect Ratio MEMS

M. Reze, J. Hammond, Freescale Semiconductor

Abstract
This paper presents a new single-axis low-g inertial sensor based on a High Aspect Ratio MEMS (HARMEMS) transducer combined with a 0.25 µm SmartMOS™ mixed-signal ASIC in a single 16-pin Quad Flat No-lead (QFN) package. The high-gate-density digital signal processing (DSP) block is used for filtering, trim and data formatting, while the associated non-volatile memory holds the device settings. The micro-machined transducer features an overdamped mechanical response and a high signal-to-noise ratio, which makes the device ideal for Vehicle Stability Control (VSC) or Electrical Parking Brake (EPB) applications, where high sensitivity and a small zero-g acceleration output error are required.
1 Market Perspective

1.1 Vehicle Stability Control
Since its debut on the roads in 1995, the electronic stability program/control (ESP/ESC) has evolved and is now recognized by the industry and several governments to offer a major safety benefit. Several international studies [1] have demonstrated through extensive data collection that ESC significantly reduces the risk of a crash and helps save thousands of lives annually. Accordingly, several car manufacturers in Europe and in the US have introduced this equipment as standard on some of their car lines. ESP is a further improvement to the anti-lock braking system (ABS) and traction control system (TCS). Its basic function is to stabilize the vehicle when it starts to skid by applying differential braking force to individual wheels and reducing engine torque. This automatic reaction is engineered for improved vehicle stability, especially during severe cornering and on low-friction road surfaces, by helping to reduce over-steering and under-steering [2]. To implement ESP functionality, additional sensors must be added to the ABS system: a steering wheel angle sensor, a yaw rate sensor and a low-g acceleration sensor that measure the vehicle's dynamic response.
According to Strategy Analytics [3], the highest growth potential for accelerometers is in VSC applications, with a worldwide system demand of 14.4 million units in 2007 and up to 17 million units in 2009, as shown in figure 1. In terms of regional market penetration, ESP has not taken off in North America as it has in Europe or Japan: for 2004, nearly 38% of European passenger cars will be equipped, compared to 22% in the US. But things are changing. Surveys show that SUVs are more subject to rollover or loss of steering control in difficult driving conditions, and Original Equipment Manufacturers (OEMs) are promoting their systems directly to consumers. The prospects for growth are therefore strong: 57% of the cars sold in Europe and the US by 2009 should have ESP included (36% on a worldwide basis).
Fig. 1. Vehicle stability control market demand [Source: Strategy Analytics]

1.2 Electrical Parking Brake (EPB)
First introduced in the luxury car segment, this function is becoming more and more popular, and medium-segment cars are now delivered with it as well. The basic principle of EPB is that the mechanical connections to the rear callipers are replaced with electric connections and actuators. This is a first step toward full "brake by wire" functionality. It simplifies assembly and service and brings several interesting new features: it improves brake pedal feel by reducing pedal travel compared with hydraulic brake systems, it improves interior spaciousness by eliminating the parking brake lever or pedal, it provides the anti-lock brake (ABS) function during dynamic parking brake application, and it enables an automatic hill-hold feature that prevents the car from rolling back on a hill. The system uses information from various sensors inside the vehicle, such as the wheel speed sensors and a lateral low-g accelerometer used to detect the car's angle with respect to the ground.
1.3 Sensor Requirements
To address the two types of application mentioned above, the low-g accelerometer needs to fulfill several requirements. As it is critical to detect very small accelerations, the sensor needs a high-sensitivity output and high accuracy (low noise, small zero-g acceleration shift over temperature). Furthermore, the device needs to be immune to the parasitic high-frequency content present at the chassis level of the car. Low-energy signals with a large frequency bandwidth can be found there, from a few hundred Hz during normal driving conditions to a few kHz due to shocks coming from the road (gravel, etc.). All frequencies above 1 kHz must be filtered out to avoid corrupting the sensor response. By definition, an inertial sensor is highly sensitive to acceleration of any origin, since the micromachined sensing element is based on a seismic mass moving relative to a fixed plate. The sensor output signal is typically cleaned of parasitic high frequencies by electronic low-pass filtering; a sensor with an overdamped transducer, which eliminates this unwanted higher-frequency acceleration content mechanically, provides an additional benefit. A sensor acceptable for these applications will have a sensitivity of 1 to 1.5 V/g for a 5 V or 3.3 V ratiometric power supply, allowing it to measure ±1.7 g of acceleration (1 g being the earth's gravity). Sensitivity error and cross-axis sensitivity should be less than 4%. The zero-g acceleration error needs to be below 125 mg over the full temperature range (-40°C to +125°C), and output noise should be below 700 µg/√Hz.
2 System-in-package Technologies
One of the most common methods of sensing acceleration is to measure the displacement of a seismic mass, translated into a variable-capacitance measurement. The sensing element is a mechanical structure formed from semiconductor materials and manufactured using surface micromachining techniques. Moving plates (the seismic mass) are deflected from a rest position when subjected to acceleration, and the change in distance between the moving and fixed plates is a measure of the acceleration. The arms that support the moving plates behave as springs, and the air squeezed between the plates damps the movement. All Freescale accelerometers consist of a surface-micromachined capacitive sensing element and a control ASIC for signal conditioning (conversion, amplification and filtering) contained in a plastic integrated-circuit package. A new low-g accelerometer has been developed using an innovative transducer technology focused on increasing the thickness of the structural layer to improve performance. Since the height of the device's movable structure is much larger than the gap spacings and feature widths, the technology is known as HARMEMS (High Aspect Ratio MEMS), the aspect ratio being that between trench depth and air gap.
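The displacement-to-capacitance principle can be sketched with the textbook parallel-plate model; the plate area, gap and displacements below are made-up round numbers for illustration, not Freescale's actual geometry:

```python
# Illustrative parallel-plate model of differential capacitive sensing.
EPS0 = 8.854e-12   # F/m, vacuum permittivity

def differential_cap(x: float, area: float = 1e-7, gap: float = 2e-6) -> float:
    """Differential capacitance (F) when the seismic mass moves by x metres.

    One gap narrows while the other widens, so the difference of the
    two capacitances carries the sign and magnitude of the displacement.
    """
    c_plus = EPS0 * area / (gap - x)   # gap that narrows
    c_minus = EPS0 * area / (gap + x)  # gap that widens
    return c_plus - c_minus            # signal the ASIC converts and filters

# Small displacements give a nearly linear differential signal:
for x_nm in (0, 50, 100):
    print(f"{x_nm} nm -> {differential_cap(x_nm * 1e-9) * 1e15:.2f} fF")
```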
2.1 Transducer: HARMEMS
The HARMEMS transducer utilizes differential capacitive sensing to translate acceleration into a capacitance which can then be processed by sensor circuitry.
Fig. 2. High aspect ratio MEMS or HARMEMS
The term 'high aspect ratio' refers to the width of the key mechanical features in the transducer, such as the spring portion of the mass-spring system or the gap between movable and fixed capacitor plates [4]. The technology delivers this high aspect ratio through a combination of a 25 µm thick SOI layer and narrow trenches defined by deep reactive ion etching (DRIE) (see fig. 2). The HARMEMS process flow includes an SOI substrate with a buried thermal oxide layer, the 25 µm SOI layer, a field oxide for electrode isolation, a field nitride, polysilicon interconnect from above, and aluminum alloy bond pads. Hermetic sealing is accomplished via wafer bonding with glass frit (see fig. 3).
Fig. 3. Cross section of high aspect ratio MEMS (HARMEMS) with wafer-bonded, above-vacuum hermetic sealing
The high aspect ratio of the technology, combined with the higher-than-vacuum hermetic sealing possible with glass frit wafer bonding, provides an overdamped mechanical response. In figure 4, the HARMEMS mechanical response is compared with that of a thin, underdamped polysilicon MEMS ('Poly-MEMS') device which has been in production for several years. The Poly-MEMS device is excited to resonance (for this design, above 10 kHz). By contrast, the HARMEMS device exhibits no resonance, but rather a cut-off frequency below 1 kHz; its designed resonant frequency is between 1 and 5 kHz. In fairness, it is also theoretically possible to build a high aspect ratio polysilicon MEMS device, but that is not the device measured here. The larger sensor capacitance per unit area possible with a HARMEMS process flow leads to an increased capacitance change versus acceleration (see table 1, fig. 5). As an example, with the thicker capacitor layer, the base capacitance of a 25 µm HARMEMS transducer increases by more than 10 times compared to a 2 µm polysilicon MEMS transducer, while the ratio of mass to spring constant remains constant. At the same time, the vertical spring constant of the HARMEMS device is 100 times higher than that of the Poly-MEMS device, offering substantially increased resistance to process and in-use vertical stiction. Any achievable increase in mass-to-spring ratio is amplified by the thicker mechanical layer, further improving the signal-to-noise performance of the HARMEMS transducer.
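The contrast between the two dynamic responses follows from the standard second-order mass-spring-damper model. The sketch below uses illustrative resonant frequencies and damping ratios, not measured HARMEMS or Poly-MEMS parameters:

```python
import math

def mag_response(f_hz: float, f0_hz: float, zeta: float) -> float:
    """|H(f)| of a second-order accelerometer, normalized to its DC gain.

    f0_hz is the resonant frequency and zeta the damping ratio
    (zeta < 1: underdamped with a resonance peak; zeta > 1: overdamped).
    """
    r = f_hz / f0_hz
    return 1.0 / math.sqrt((1 - r * r) ** 2 + (2 * zeta * r) ** 2)

# Underdamped Poly-MEMS-like device: sharp gain peak at resonance.
print(round(mag_response(10_000, 10_000, 0.05), 1))  # ~10x gain at f0
# Overdamped HARMEMS-like device: response already rolled off, no peak.
print(round(mag_response(10_000, 3_000, 2.0), 3))
```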
Fig. 4. Comparison of normalized dynamic response of a traditional thin (underdamped) polysilicon MEMS device vs overdamped HARMEMS

Tab. 1. Comparison of 25 µm HARMEMS and 2 µm Poly-MEMS
For a fixed transducer die area, this enables improved noise performance (see fig. 5). Using the same circuit, a HARMEMS accelerometer demonstrates a better than 50% decrease in power spectral density. It should be noted that the HARMEMS transducer in these data also has a larger mass-to-spring ratio than the Poly-MEMS device to which it is compared.
Fig. 5. Comparison of measured normalized spectral density (proportional to noise) for Poly-MEMS and HARMEMS low g transducers
Finally, a small total error tolerance of the system is enabled, again by the high aspect ratio of the MEMS process. Thicker capacitor plates mean less out-of-plane deformation of the sensor structure due to package stress over the automotive temperature range (-40°C to +125°C). In addition, the improved signal-to-noise ratio available in HARMEMS translates to a lower gain applied to the transducer signal in the sensor system, so errors in the transducer, ASIC or package are reduced, making for a tighter total error in the product system.
2.2 ASIC: SmartMOS™
SmartMOS™ is Freescale's family of mixed-signal technologies engineered to combine precision analog, high-speed CMOS logic, and high-voltage power transistor capabilities on a single chip. This technology is well suited to applications operating in the harsh electrical environments often found in automotive systems. SmartMOS™ is an analog CMOS technology based on mature dual-gate (0.25 µm minimum feature) logic. Most common electronic functions can be implemented, from voltage regulators, A-to-D converters and op-amps to E2PROM and MCU cores. The process has voltage-scalable analog CMOS devices with breakdown voltages up to 80 V, and lateral and vertical pnp devices can complement high-gain npn devices (beta > 100). Its unique trench-based isolation eliminates the need for intra-device and inter-device spacing for voltage support, thereby maximizing analog and power device shrink. Its high-density logic (~25 kgates/mm2) allows integration of complex state machines or DSPs with many parametric trimming options. Full digital signal conditioning can be implemented, which brings advantages such as programmability (filters, acceleration range, etc.) and autodiagnostics that can be initiated periodically during operation (self-test).
2.3 Packaging: Quad Flat No Lead 16 pin
Packaging of a MEMS structure is a vital process, as it directly influences the final characteristics of the product (mechanical stress, shock transmission, etc.), its reliability and its cost.
Fig. 6. Stacked die approach
One of the accelerometer-specific requirements is that the sensing element must be protected from the plastic material injected during molding. This is solved by wafer-level packaging, in which the transducer is hermetically sealed with a glass frit by a silicon cap (fig. 3). Performed in a clean-room environment at wafer level, this process ensures that the sensing element remains free of any particles that might disturb it, and with such protection the sensing element can go through all the other process steps (scribing, packaging) without damage. Figure 6 shows the hermetically sealed sensing element and the control ASIC mounted on the lead frame, with wire bonding already complete. Compared with the current generation of devices in production, which are assembled side by side, the new accelerometer uses a stacked-die approach. The advantages of this configuration are the smaller volume of the overall package (6 × 6 × 1.98 mm³) and the improved manufacturing cycle time, as fewer process steps are needed.
3 Product Description
The single X-axis low-g sensor depicted in fig. 7 is manufactured using Freescale's system-in-package approach, combining a HARMEMS transducer and a SmartMOS™ ASIC in a QFN package:
Fig. 7. Multichip approach based on separate sensing element and control IC

3.1 ASIC Architecture Description
Voltage Regulator
Separate internal voltage regulators supply fixed voltages to the analog and digital circuitry. The voltage regulator module includes a power monitor which holds the device in reset following power-on until internal voltages have stabilized sufficiently for proper operation. The power monitor asserts internal reset when the external supply voltage falls below a predetermined level. A separate voltage reference provides a stable voltage which is used by the sensing element interface.
Oscillator
An internal oscillator operating at a nominal frequency of 2 MHz provides a stable clock source and is factory trimmed for best performance. A clock generator block divides the 2 MHz clock as needed by the other blocks. In the event of oscillator failure, an internal clock monitor provides a fault signal to the control logic, and an error code is then transmitted in place of acceleration data.
Programmable Data Array
A 256-bit programmable data array allows each device to be customized. The array interface incorporates parity circuitry for fault detection along with a locking mechanism to prevent unintended changes. Portions of the array are reserved for factory-programmed trim values.
Control Logic
A control logic block coordinates a number of activities within the device. These include: Post-reset device initialization; Self-test; Operating mode selection; Data array programming; Device support data transfers
SPI
A serial peripheral interface (SPI) port is provided for communication with the device. The SPI is a fully bidirectional port used for all configuration and control functions. Acceleration output is provided as 10-bit data.
Self-test Interface
The self-test interface provides a mechanism for applying a calibrated voltage to the sensing element. This deflects the proof mass, causing the reported acceleration results to be offset by a specified amount.

Σ-∆ Converter

A 16-bit sigma-delta converter provides the interface between the sensing element and the digital signal processing block. The output of the Σ-∆ converter is a 1-bit data stream at a nominal frequency of 1 MHz.
Digital Signal Processing Block
A digital signal processing (DSP) block is used to perform all filtering and correction operations. The DSP operates at the frequency of the Σ-∆ converter. Each device is factory programmed to select the acceleration range and filter characteristics for the device.
A temperature sensor with 8-bit resolution provides input to the digital signal processing block; device temperature is incorporated into a correction value applied to each acceleration result. Low-pass filtering occurs in two stages: the serial data stream generated by the Σ-∆ converter is decimated and converted to parallel values by a sinc filter, and the parallel data are then processed by an infinite impulse response (IIR) low-pass filter. A selection of low-pass filter characteristics is available; the cut-off frequency (fc) and the rate at which acceleration samples are produced by the device (ts) vary depending upon which filter is chosen. Power consumption is also affected, as higher sample rates require higher DSP clock frequencies, which in turn require more supply current. Several other functions could be implemented to give the device even more flexibility: more channels could be added by multiplying the number of Σ-∆ converters to monitor other axes, yielding XY and XYZ accelerometers, and an analog output can be provided by adding a 10-bit digital-to-analog converter (DAC) at the output of the DSP.
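The two-stage filtering chain can be sketched as a boxcar decimator followed by a first-order IIR stage. The decimation factor and filter coefficient below are made-up illustrative values, and a real sinc/CIC decimator has more stages than this single averaging pass:

```python
# Sketch of the two-stage low-pass chain described above.
def sinc_decimate(bitstream, factor=256):
    """Average each block of `factor` one-bit samples into one value."""
    return [sum(bitstream[i:i + factor]) / factor
            for i in range(0, len(bitstream) - factor + 1, factor)]

def iir_lowpass(samples, alpha=0.1):
    """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# A constant 25% duty bit stream decodes to a steady 0.25 level:
bits = ([1, 0, 0, 0] * 64) * 40     # enough bits for 40 decimated samples
decimated = sinc_decimate(bits)     # each decimated value is 0.25
filtered = iir_lowpass(decimated)   # IIR output settles toward 0.25
print(round(decimated[0], 2), round(filtered[-1], 2))  # 0.25 0.25
```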
Fig. 8. Zero-g output error of a QFN low-g device vs temperature (-40°C to 125°C)

3.2 Electrical Performance
Figures 8 and 9 show the benefit of the HARMEMS transducer in the output voltage error and sensitivity versus temperature of an analog-output QFN low-g accelerometer. The uncompensated zero-g output voltage varies by 103mV from -40°C to 125°C, or equivalently 86mg. With on-chip temperature compensation, the compensated zero-g output voltage varies by only 19mV over
Components and Generic Sensor Technologies
temperature. Dividing by the device sensitivity of 1.2V/g, this equates to 16mg. The sensitivity variation is below 1% over -40°C to 125°C.
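The conversion from output-voltage error to equivalent acceleration error is simply a division by the device sensitivity, as a quick check confirms:

```python
def voltage_error_to_mg(v_error_mv, sensitivity_v_per_g=1.2):
    """Convert a zero-g output voltage error (mV) to an offset in mg."""
    return v_error_mv / sensitivity_v_per_g

print(round(voltage_error_to_mg(103)))  # uncompensated: 86 mg
print(round(voltage_error_to_mg(19)))   # compensated: 16 mg
```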
Fig. 9. Normalized sensitivity of a QFN low-g device vs temperature (-40°C to 125°C)

4 Conclusion
We have presented a new low-g accelerometer based on a high-aspect-ratio sensing element. The sensitivity has been increased by reducing the distance between the fixed and the moving elements. A high signal-to-noise ratio and an overdamped frequency response have been achieved by increasing the thickness of the structural layer. This makes the device perfectly suited for Vehicle Dynamic Control applications. Combined with a 0.25µm mixed-signal ASIC, it contains a fully digital signal conditioning path allowing the implementation of programmable parameters such as filters or acceleration ranges. Thanks to the flexibility offered by this multichip approach and by this type of ASIC architecture, a full family of single-, dual- and tri-axis low-g sensors fulfilling harsh automotive requirements will be developed.
Acknowledgements
The HARMEMS technology described herein is the result of a fruitful collaboration between CEA-LETI and Freescale. The authors acknowledge the work and effort of the entire team at LETI in Grenoble, France, led by Bernard
Diem, with significant contributions from Sophie Giroud, Denis Renaud, and Hubert Grange. The authors gratefully acknowledge the contributions and significant effort of the Freescale technology and product development teams for their work on HARMEMS, sensor packaging, and low g sensors. In particular, we recognize the work of Bishnu Gogoi, Ray Roop, Jan Vandemeer, Dan Koury, Gary Li, Arvind Salian, and Dave Mahadevan.
References
[1] Research study conducted by the University of Iowa
[2] William B. Ribbens, Understanding Automotive Electronics, Sixth Edition, SAE
[3] Strategy Analytics, Automotive Sensor Demand 2002–2011, Market Forecast, October 2004
[4] B. Diem, P. Rey, S. Renard, S. V. Bosson, H. Bono, F. Michel, M. T. Delaye and G. Delapierre, "SOI 'SIMOX'; from bulk to surface micromachining, a new age for silicon sensors and actuators", Sensors and Actuators A: Physical, Volume 46, Issues 1-3, January-February 1995, Pages 8-16
Matthieu Reze
Freescale Semiconducteurs SAS
Sensor Product Division
BP 72329
31023 Toulouse Cedex 1, France
[email protected]

Jonathan Hammond
Freescale Semiconductor Inc
Sensor Product Division
2100 E. Elliot Road
Tempe, AZ 85284, United States of America
[email protected]

Keywords: low g accelerometer, HARMEMS, DRIE etching, surface micromachining
Realisation of Fail-safe, Cost Competitive Sensor Systems with Advanced 3D-MEMS Elements
J. Thurau, VTI Technologies Oy

Abstract
Developing MEMS structures is a first milestone on the way to a successful final product. Industrialization reveals whether the concept is ready for mass production with an acceptable yield, meeting commercial targets as well as the requirements for reliability and performance over lifetime. By combining the robust bulk-micromachining technology from today's high-volume series production with modern DRIE technology into its 3D-MEMS technology, VTI has developed its own concept to condense high performance into smaller dimensions. The next generation of low-g sensors will utilize this platform for single- as well as multiple-axis sensor systems. A new accelerometer family is created within one housing, so that for any measurement requirement it is only necessary to install the appropriate sensor component at the same PCB position. The main targets are to meet fail-safe requirements, bring system costs down, reduce size and enable new applications in the automotive environment. This article shows how to close the link between excellent sensor element development and the integration into applications, where sensor components need to fulfil advanced criteria for modern sensor systems.
1 Today's Market Situation
It was like religion in the past: there were two different approaches to micromachined sensors on the market – surface micromachining vs. bulk micromachining. Each had its dedicated technologies, leading to dedicated advantages, which were used for different classes of acceleration sensors – as shown in table 1.
Tab. 1. Comparison of different micromachining approaches
Sometimes in real life two different approaches converge, leading to excellent synergy. In the motion-MEMS market the classic categories seem to be merging more and more (figure 1): surface micromachining is getting thicker and bulk micromachining is achieving better aspect ratios, so that in the end there is common growth where a border line existed before – a development that benefits customers. More performance at a better price and in a smaller size is necessary to satisfy the increasing demand for motion sensors, driven by higher penetration rates in automotive safety and comfort systems as well as by the numerous commodity, sports and fitness applications identified by market research companies. To meet the volume requirements VTI has invested heavily in a second clean room building, both for volume increase and for redundancy purposes. The new second clean room is kept totally separate, with its own supplies, also from a risk point of view.
Fig. 1. Market trend: merging of motion MEMS technologies
Based on the given knowledge VTI has identified its critical success factors to meet the market demand. The following metrics were defined to measure the development results for the next generation accelerometers:
Performance: equal or better, to meet dedicated application requirements
Robustness: excellent vibration and shock endurance stability
Small size: fit into size-reduced housings and applications
Cost: help market penetration of performance applications
Communication: enhance system performance
Quality (MTBF and PPM): meet reliability and fail-safe application requirements
2 3D MEMS Structures
VTI developed its own SIMAS process to meet the requirements of modern motion and pressure sensing systems. Starting from today's robust bulk-micromachining technology, which is in stable high-volume series production, and combining it with Deep Reactive Ion Etching (DRIE) increases the ratio of signal quality to silicon area. With this approach – the combination of KOH wet etching and DRIE etching – real three-dimensional structures are achieved, called 3D-MEMS. Depending on the product, Silicon On Insulator (SOI) wafers are used to achieve exact dimensions of thin structures on one side of the wafer – e.g. for torsional springs.
Fig. 2. SIMAS process – left: DRIE etching for mass structuring – right: KOH wet etching for high performance springs
The SIMAS process also uses a more sophisticated technique to generate springs. Starting from wafers several 100µm thick – as in today's VTI MEMS structures – trenches are etched into the wafers by DRIE, achieving an aspect ratio that gives much better area utilization than KOH etching.
After separating the proof mass from the frame by literally "cutting" the wafer with DRIE, the springs are etched by VTI's legendary KOH wet etching in single-crystal silicon, leading to most exact dimensioning in height and shape. This is necessary to achieve the excellent shock resistance of the sensors of several 10,000g that is required for most applications. For this spring shaping process all other structures are covered by a photo mask so that just the springs are etched "through small windows" (figure 2). The result is a large proof mass on the given silicon area with exactly dimensioned, sensitive springs. After processing the middle wafer as shown above, the sensing elements are sealed hermetically with upper and lower capping wafers, leading to wafer-level / chip-scale packaged MEMS elements. The anodic glass wafer bonding technology used for this process is the same as in today's mass production. During this 3-wafer stacking process the medium in the sensing element is set to vacuum or a controlled pressure, depending on the required behavior of the final product. For structuring electrically isolated partitions on the same element surface, a vertical 3D glass isolation technique is used. In combination with the thick horizontal glass layers, a structure with low parasitic capacitance is achieved. These sensor elements are typically contacted by SMD technology or wire bonding.
Fig. 3. 3-axis sensing element structure – 3D-MEMS – realized with SOI and DRIE
An example of SOI wafer utilization is the three-axis concept, where the insulator layer is used for dimensioning the height of the rotational spring. Due to the DRIE etching, appropriate aspect ratios are achieved to realize an excellent utilization of the silicon area – creating smaller sensing elements (figure 3).
The more sophisticated technique to achieve exact spring thickness is the utilization of KOH in combination with DRIE etching, as in the single-axis accelerometer elements (figure 4).
Fig. 4. left: new single-axis sensing element – right: 3D-MEMS proof mass structured by DRIE and springs by KOH wet etching
In all structures shown, the measurement capacitances are formed between the released proof mass in the middle wafer and electrodes on the capping wafers. The main advantage is a design that is stable against misalignment of DRIE etch trenches. This avoids one of the critical elements of DRIE finger structures. Due to the thick middle wafer, several 100µm thick, the proof mass has a significant mass advantage, leading to excellent signal-to-noise ratios. With the described new SIMAS process the following design targets have been met, compared to today's designs in mass production:
Reduction of sensing element size by 65%
Enhancement of capacitance dynamics by 50%
Improvement of relative capacitive sensitivity by 100%
Advancement of mechanical sensitivity by 200%
Improvement of static linearity by reducing the stray capacitance and by introducing a parallel-plate movement
Improvement of dynamic linearity by new damping structures (vibration rectification effect)
The process was set up so that it can be used as a platform technology for new products, which can be introduced as flexibly as ASICs in C-MOS technology today. This reduces the lead time for new MEMS elements on this platform significantly. Furthermore, VTI's excellent wafer capping process can easily be utilized to achieve extremely good vacuum for high-Q products, even for extremely fragile structures. This makes the new platform flexible for upcoming new products.
3 Housing Concept – One Size Fits All
Coming from today’s combination concept of SCA610 x-axis, SCA620 z-axis as well as SCA1000 dual axis sensor family designed for the same footprint on the PCB, it was a more than natural requirement to apply the same requirement to the new sensor generation also. In a first approach it was considered to have two different housings for single axis and multi axes products. After detailed review it was decided that the same housing would be the best approach for all products for this platform. This added approximately 1mm in length and width to the single axis originally planned footprint, what is acceptable for automotive applications. The advantage is an economy of scale production with the same housing for all related products, no matter whether it is single, dual or three axes accelerometer or even other MEMS products with the same highly automated production line for excellent yield and PPM figures.
Fig. 5. Flexible housing platform
Mixing existing sensor elements and ASICs with next-generation ones is even possible, enabling an easy transfer from generation to generation. This flexibility in introduction makes change management much more reliable for the specific application. Within the same housing, with uniform layout and pinout, a variety of products will be available, so that the application-tailored performance, the number of axes and the safety level can be adapted to the requirements.
4 Packaging – Performance vs. Price
In automotive applications temperature requirements of -40°C up to 150°C lead to extraordinary requirements in MEMS packaging. Best results were
achieved with pre-molded packages, where the CTE mismatch between the housing plastic and the silicon MEMS is best buffered by silicone gel and adhesives. This leads to uniform behavior over temperature as well as negligible hysteresis effects. The disadvantage of the higher cost of this more sophisticated packaging method is compensated by a stable, high yield of the packaging process as well as by better-performing products.
Fig. 6. Basic structure of sensing element and ASIC in premolded housing
The chosen Dual Flat Lead (DFL) concept is based on a lead-frame premolded package. The sensing element as well as the ASIC are picked and placed by a die bonder and positioned on silicone adhesive. The open cavity is filled with silicone gel and protected with a stainless steel lid, which is used for laser-marked identification with the product name as well as a unique serial number for traceability. Both appear in clear text as well as in 2D matrix code for automatic reading. The product is produced on a high-performance, automated line with an integrated test area.
Fig. 7. New DFL component housing (7.6 x 8.6 x 3.3 mm³) and soldering meniscus due to 0.3mm excess leads
The pins of the housing were chosen differently from the earlier SCA610 family housing, where gull wings were considered necessary to meet the thermal shock reliability requirements. The housing for the next generation was prototyped in two versions – one with gull wings and one with flat leads underneath the housing. Thermal shock comparison tests with traditional tin-lead solder as well as lead-free solder have shown that the results with the flat leads are fully acceptable for the harsh automotive reliability
requirements. Compared to the standard QFN housing, the pins realized here have excess leads of 0.3mm (figure 7). The advantage is that the surface of the pins is increased and, even more importantly, that a clear meniscus builds up during the soldering process, which improves the reliability of the solder joint. In addition, the solder joint and meniscus can be inspected automatically by visual inspection. The chosen housing concept is therefore dedicated to the reliability requirements of the harsh automotive environment. Due to the premolded housing, the performance of the sensor is the same before and after the soldering process. This is important to avoid any over-temperature tests in the application at end of line. At this point it needs to be pointed out that VTI is pursuing housing solutions with over-molded low-cost QFN housings as well. Those are so far dedicated to lower-performance applications.
5 Signal Conditioning
For the next generation of acceleration sensors VTI has changed from mainly analogue ASICs with digital parameter programming towards more digital signal processing. The chosen technology, with 0.35µm structures and a supply voltage of 3.0V to 3.6V, has led to higher integration, which was utilized to realize more features as well as a modern SPI interface and, for the single-axis accelerometer, an additional PWM interface. The sensing element, with its symmetrical dual-capacitor structure for best common-mode suppression, is read out by advanced sigma-delta conversion in an analogue interface block. This is done very similarly to today's ASICs; the signal conditioning, however, is performed digitally. This gives several degrees of freedom regarding the resolution, which can be varied between 8 and 16 bits depending on application demand. Low resolution comes with a fast update frequency for applications with higher cut-off frequencies, whereas high resolution is dedicated to slow applications like inclination measurement. A fast SPI interface with up to 8MHz clock frequency and strong driving capabilities is realized to transfer the information to the application controller. The SPI interface has the same format for all types of sensors – single, dual and three axes – to achieve the platform character with full interchangeability of motion products within the application. For the single-axis accelerometer, a Pulse Width Modulation (PWM) output is implemented as an alternative to the SPI interface. It can also be used as an analogue interface after rectification. A classic analogue output was considered but found to be no longer state of the art.
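As an illustration, an application controller could recover acceleration from such a PWM output by measuring the duty cycle. The mapping below (50% duty = 0 g, linear to full scale) is an assumed encoding for the sketch, not VTI's specified one.

```python
def pwm_duty_to_accel(high_time_us, period_us, full_scale_g=1.5):
    """Map a measured PWM duty cycle to acceleration in g.

    Assumes 50% duty corresponds to 0 g, with 0% / 100% duty at
    minus / plus full scale (hypothetical encoding).
    """
    duty = high_time_us / period_us
    return (duty - 0.5) * 2.0 * full_scale_g
```

Under these assumptions, a 750 µs high time in a 1000 µs period reads as 0.75 g on a 1.5 g range.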
Fig. 8. ASIC structures
The strength of VTI's current acceleration sensors is that temperature compensation isn't needed to fulfil the offset stability criteria of standard automotive applications. This strength was kept. For special types with highest offset stability, the ASICs incorporate a temperature compensation that can be calibrated on demand to achieve enhanced offset stability for upcoming applications.
6 Safety
The good old news first: 3D-MEMS structures from VTI guarantee that in-range failures and sticking are impossible, because the bulky mass structure etched into single-crystal silicon doesn't have an in-range position where electrostatic adhesion might be more powerful than the low acceleration forces present in low-g applications. Single-crystal silicon does not show any creep or other deterioration over lifetime. For applications which have to work quickly after power-up, VTI can guarantee that there isn't any start-up drift. This is quite important for failure recognition algorithms as well as for low-power applications where the power is turned on only for a short time to save energy. An old friend can be found again in the single-axis sensor – the electrostatically forced self-test. In this test a voltage is applied to one of the capacitors, deflecting the proof mass to one side and driving the output to the rail. The self-test was modified so that the mass can be deflected in both directions in the SCA800
series. A control was implemented to release the mass automatically after a pre-defined threshold is reached. After a successful self-test the result can be read out via SPI. If the self-test fails, the sensor recognizes this by logic and a "signal not valid" status is set.
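The self-test sequence described above – deflect, check against a threshold, release, report via status – can be sketched as follows. The function names and threshold handling are hypothetical, standing in for the ASIC's internal logic.

```python
def run_self_test(apply_test_force, read_deflection, threshold_g):
    """Electrostatically deflect the proof mass and verify the response.

    `apply_test_force(on)` and `read_deflection()` stand in for the
    ASIC's internal controls; returns a status string as it would be
    read out over SPI.
    """
    apply_test_force(True)
    deflection = read_deflection()
    apply_test_force(False)            # release the mass once checked
    if abs(deflection) >= threshold_g:
        return "SELF_TEST_OK"
    return "SIGNAL_NOT_VALID"          # failure flagged in the status word
```

A proof mass that fails to deflect (e.g. a broken interconnection) never crosses the threshold, so the status word reports the failure to the application controller.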
Fig. 9. Self-test sensing element signal of the single-axis SCA800 family
The ASIC diagnosis is designed so that malfunctions of the sensing element as well as of the interconnections are detected. In combination with a straightforward sensor structure, this leads to negligible field failure rates as well as extremely low FIT figures for system safety calculations.
7 Meeting the Application
Today’s main application is still the Electronic Stability Control (ESC) requiring single axis vehicle lateral sensing direction. For this application a X-axis sensor SCA810 or a Z-axis sensor SCA820 can be used depending on the orientation of the PCB in the car and the Gyroscope type. For the measurement of longitudinal acceleration one additional axis is required so that a best fit would be a dual axis sensor – applications are 4x4-ABS, Electronic Parking Brake (EPB) and Hill Start Assist (HSA). Adding the vertical axis, rollover applications are addressed – leading to three axis sensor requirements. A trend is seen that all axis measurements are integrated into an Inertial Measuring Unit (IMU). The most challenging feature that was realized with the 3-axis sensor is the combination of different g-ranges. For HSA and EPB application a very good resolution for low g values – corresponding with low degrees of car up- or down-hill slopes are one extreme – a sensor with 0,2g would be sufficient for this application. The ESC standard full scale range is with 1,5g in the middle of the range but rollover application as well as airbag supporting functions ask for 3,0g to 5,0g ranges. To measure all of those with one sensing element that can be rotated and configured freely in the 3 dimensional room was the chal-
Realisation of Fail-safe, Cost Competitive Sensor Systems with Advanced 3D-MEMS Elements
lenge. It was achieved by excellent over-acceleration behavior of the sensing elements as well as due to sophisticated signal conditioning structures realized by linear mapping.
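Serving several full-scale ranges from one sensing element amounts to a linear mapping of the raw reading onto each range's digital scale. A minimal sketch follows; the 12-bit output width and clipping behaviour are assumed examples, not the device's specified implementation.

```python
def map_to_range(accel_g, full_scale_g, bits=12):
    """Linearly map an acceleration onto a signed digital code for the
    selected full-scale range, clipping at the range limits."""
    clipped = max(-full_scale_g, min(full_scale_g, accel_g))
    max_code = 2 ** (bits - 1) - 1
    return round(clipped / full_scale_g * max_code)

# The same 0.1 g reading resolved on differently configured ranges:
code_hsa = map_to_range(0.1, 0.2)   # fine 0.2 g range for slope sensing
code_esc = map_to_range(0.1, 1.5)   # standard 1.5 g ESC range
```

The narrow HSA range spends far more codes on the same physical signal, which is why the combination of ranges within one element matters for resolution.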
Fig. 10. Application acceleration directions
First results with single-axis SCA810 prototypes are very promising in terms of stability over temperature. Figure 11 shows offset stability on the left and sensitivity error on the right. Both are a factor of 4 better than for today's sensors. Even adding additional variance for potential lot-to-lot variation, the new technology proves its superiority for dedicated high-performance applications.
Fig. 11. First results with single-axis SCA810: left: offset over temperature – right: sensitivity error over temperature
8 Outlook / Summary
Starting with VTI’s current extremely robust bulk micromachining technology, combining it with higher efficiency DRIE etching technology and for some products with SOI wafer base material, VTI has created its 3D-MEMS Technology which combines the advantages of all MEMS technologies. With
this technology core, modern ASICs for signal conditioning and appropriate housing technologies for best application fit, a robust sensor platform with high reliability was realized. What is most important for commercialization – stable production with appropriate yield – can thus be achieved from the beginning, thanks to the moderate change of technology. For automotive applications this guarantees that, after design-in, the technology can be produced in appropriate volume.
References
[1] Tuomo Lehtonen, Jens Thurau, VTI Technologies Oy, "Monolithic Accelerometer for 3D Measurements", AMAA Proceedings 2003
[2] J. Käppi, University of Technology, Finland; J. Syrjärinne, Nokia Mobile Phones, Finland; J. Saarinen, Tampere University of Technology, Finland, "MEMS-IMU Based Pedestrian Navigator for Handheld Devices", The Institute of Navigation GPS 2001, Salt Lake City, USA
[3] Sebastian Bütefisch, Axel Schoft, Stephanus Büttgenbach, "Three-Axes Monolithic Silicon Low-g Accelerometer", Journal of Microelectromechanical Systems, Vol. 9, No. 4, December 2000
[4] R. Puers, S. Reyntjens, "Design and processing experiments of a new miniaturized capacitive triaxial accelerometer", Sensors and Actuators A 68, 1998
[5] Gang Li, Zhihong Li, Congshun Wang, Yilong Hao, Ting Li, Dacheng Zhang and Guoying Wu, "Design and fabrication of a highly symmetrical capacitive triaxial accelerometer", Journal of Micromechanics and Microengineering, 11, 2001
[6] M. A. Lemkin, M. A. Ortiz, N. Wongkomet, B. E. Boser, and J. H. Smith, "A 3-axis surface micromachined ΣΔ accelerometer", ISSCC Dig. Tech. Papers, pp. 202-203, Feb. 1997
[7] Heikki Kuisma, "Inertial Sensors for Automotive Applications", Transducers '01, Munich, Germany, 2001
[8] Heikki Kuisma, Tapani Ryhänen, Juha Lahdenperä, Eero Punkka, Sami Ruotsalainen, Teuvo Sillanpää, Heikki Seppä, "A Bulk Micromachined Silicon Angular Rate Sensor", Transducers '97, Chicago, 1997
Jens Thurau
VTI Technologies Oy
Rennbahnstrasse 72-74
60528 Frankfurt
Germany

Keywords: accelerometer, 3D-MEMS, automotive application, DRIE, bulk-micromachining, signal conditioning, safety
Intersafe
eSafety for Road Transport: Investing in Preventive Safety and Co-operative Systems, the EU Approach
F. Minarini, European Commission
Introduction
It is now almost three years since the eSafety initiative was launched in Europe jointly by industry and the European Commission. This initiative aims at bridging the gap between technology developments and their actual implementation in the market by fostering the introduction of new Information and Communication Technologies (ICT) and systems in future motor vehicles and infrastructure. The eSafety initiative therefore aims to improve road safety and efficiency through intelligent vehicle safety systems. When the eSafety initiative was launched in April 2002, the EU had 15 Member States. On 1st May 2004, 10 new countries joined, creating a European Union with 25 members and a population of 445 million. This brings new urgency for all modes of transport, but in particular for road transport and road safety. The European Commission, together with the European governments, has launched several initiatives to improve road safety and make the transport sector more sustainable overall. One example is the adoption in September 2001 of the Transport White Paper, which for the first time established a target to halve fatalities by 2010. In the EU of 25 members there were 50,000 fatalities in 2002 (source: Eurostat). For 2010 we target a first reduction to 25,000 fatalities, which is an ambitious but not impossible target. Several actions have been deployed at the political level, mainly based on strengthening enforcement rules and improving driver education and information, for example through prevention campaigns. These measures are proving successful and the number of fatalities is decreasing. Nevertheless, to further reduce the number of fatalities and reach the 50% reduction, we developed an integrated approach in which the traditional "3Es" approach (Education, Enforcement and Engineering) is extended with a 4th "E", represented by "eSafety".
Fig. 1. Evolution of fatalities in Europe

1 Integrated Safety Systems
As we know that between 90 and 95% of accidents are due to the human factor, and that in almost 75% of cases human behaviour is solely to blame, it is clear that our failings as drivers represent a significant safety risk to ourselves and other road users. It is in this framework that, within the eSafety initiative, Advanced Driver Assistance Systems and intelligent active safety have a major role to play in reducing the number of accidents and their impacts. It is with this purpose that the Commission has proposed a strategic objective on "eSafety for road and air transport" in the Information Society Technologies priority thematic area of Framework Programme 6. As a result of the first call for proposals, which closed at the end of April 2003, a key integrated project in the area of preventive safety and Advanced Driver Assistance Systems (ADAS) was selected for funding.
2 The PREVENT Project
This project is part of the Integrated Safety Initiative supported by EUCAR, which groups together other projects funded by the IST programme, such as AIDE (on Human
Machine Interface), EASIS (on electronic architecture for ADAS in vehicles), GST (for online safety services) and a project funded by DG RTD called APROSYS (on passive safety). PREVENT focuses on the use of ICT to improve vehicle safety. It will develop, test and evaluate safety-related applications using advanced sensors and communication devices integrated into on-board systems for driver assistance. The project also links with national and European programmes.
Fig. 2. Different activities and areas of preventive safety in PREVENT
PREVENT integrates different activities and areas of preventive safety. It is organised as a matrix, with applications to be developed in vertical and horizontal sub-projects in the following safety function fields:
- Safe Speed and Safe Following
- Lateral Support and Driver Monitoring
- Intersection Safety
- Vulnerable Road Users and Collision Mitigation
and cross-functional fields:
- Code of Practice for developing and testing ADAS
- Use of digital maps
- Sensors and sensor data fusion
The PREVENT consortium has 51 partners from industry, public authorities, RTD institutes, universities and public-private organisations. In particular, 12 car manufacturers and 16 suppliers are partners in the consortium. The total cost of the project is approximately 55 million euros, and we are contributing a grant of 29.8 million euros to the budget. The project will last four years, ending in January 2008. It will produce several new applications, improve existing ones, widely disseminate activities and results, liaise with other relevant research programmes and, at the end, hold a public exhibition where more than 20 prototypes integrating the project results will be presented and tested.
3 Towards Co-operative Systems
The European Commission has a long history of research on the use of information and communication technologies for road and vehicle safety. Under the European Framework Research and Development Programmes, projects have been funded which have developed and demonstrated traffic telematics systems aimed at making transport safer, more efficient, more effective and more environmentally friendly. Many of these systems were aimed at improving the transport infrastructure, while others were based in the vehicles themselves. Mostly, the systems developed by these projects have operated autonomously or stand-alone. Although they hold great potential to improve road safety and efficiency, there are limits to what can be achieved by systems based solely on the road or solely in the vehicle, e.g. dealing with distant threats or anticipating road difficulties with time margins compatible with the driver's response time. This requires another class of systems whose intelligence is distributed between vehicles and roads. As the capacity and flexibility of information technology and communications increase, and costs decrease, it becomes feasible to develop co-operative systems in which vehicles communicate with each other and with the infrastructure. In this way co-operative systems will greatly increase the quality and reliability of information, support and protection available to road users, and the cost-effectiveness of applications. In spring 2004 three expert meetings were organised by Unit C.5 (ICT for Transport and the Environment) of DG Information Society. In these, experts in the field of transport telematics were invited to express their views on what the objectives and priorities should be in the area of co-operative telematics systems for improving the safety and management of road transport. This was intended to provide the basis of a contribution to the IST Work Programme development for RTD projects in the Sixth Framework Programme for 2005-2006.
The selected experts came from relevant public sector bodies with responsibility for the road infrastructure and from the vehicle industry. The following definition of co-operative systems was agreed during the expert meetings: "Road operators, infrastructure, vehicles, their drivers and other road users will co-operate to deliver the most efficient, safe, secure and comfortable journeys. The vehicle-vehicle and vehicle-infrastructure co-operative systems will contribute to these objectives beyond the improvements achievable with stand-alone systems." Some aspects of such systems have already been investigated in previous Framework Programmes and recent projects, but it is sensible to make co-operative systems a stronger focus of R&D in the future, as vehicles are increasingly equipped with wireless communications, location detection, increased computing power and a multifunctional Human Machine Interface. Taking into account the results of the consultation process and the ongoing research initiatives, the requirements were developed for projects to be funded in the IST Work Programme 2005-2006 under the strategic objective "eSafety – Co-operative Systems for Road Transport". The main objectives are safety and efficiency. Such systems will enhance the support available to drivers and other road users and will provide for greater transport efficiency by making better use of the capacity of the available infrastructure and by managing varying demands. They will also, and primarily, increase safety by improving the quality and reliability of information used by advanced driver assistance systems and by allowing the implementation of advanced safety applications.
The research will focus on advanced communication concepts, open interoperable and scalable system architectures, advanced sensor infrastructure, dependable software, robust positioning technologies and their integration into intelligent co-operative systems that support a range of core functions in the areas of road and vehicle safety as well as traffic management and control. This call for proposals was published in December 2004 and will close in March 2005; the available budget is 82 million euros. The European funded projects on co-operative systems are expected to start at the end of the year. We believe that innovative concepts, technologies and systems will be developed, tested and widely disseminated, bringing the level of European excellence in the area of ICT for road and vehicle safety a stage forward and thus contributing, with the support of the eSafety initiative, to the ambitious goal of halving road fatalities by 2010.

Fabrizio Minarini
European Commission
Information Society Directorate General
Avenue de Beaulieu 31
1049 Brussels
Belgium

Intersafe
A New European Approach for Intersection Safety – The EC-Project INTERSAFE
K. Fürstenberg, IBEO Automobile Sensor GmbH
B. Rössler, Volkswagen AG

Abstract
Intersection safety is a challenging subject due to the complexity of the heterogeneous environment. Furthermore, it is one of the most important areas under discussion with respect to the huge number of accidents which occur at intersections. Therefore, the European project INTERSAFE was established within the Integrated Project PReVENT. INTERSAFE follows two approaches. The first is a bottom-up approach using state-of-the-art sensors – laser scanner and video – and infrastructure-to-vehicle communication. An innovative strategy to identify static and dynamic objects based on accurate positioning at the intersection will be presented. The second is a top-down approach based on a driving simulator, with which different sensor configurations and communication methods can be evaluated and dangerous scenarios can be investigated. These two approaches will be introduced. The communication methods will be described, as well as the results of a detailed accident analysis based on selected European countries.
1 Introduction
In the 6th Framework Programme of the European Commission, the Integrated Project PReVENT includes Intersection Safety. The INTERSAFE project was created to generate a European approach to increase safety at intersections. The project started on the 1st of February 2004 and will end in January 2007. The partners in the INTERSAFE project are:
- Vehicle manufacturers: BMW, VW, PSA, RENAULT
- Automotive suppliers: TRW Conekt, IBEO
- Institutes / SMEs: INRIA / FCS
The main objective of the INTERSAFE project is to improve safety and to reduce (and, in the long term, avoid) fatal collisions at intersections. The objective will be achieved by a combination of sensors for the detection of crossing traffic and all other objects at the intersection, as well as sensors for localisation of the host vehicle when approaching and traversing the intersection. Furthermore, there will be communication between the host vehicle and the infrastructure to exchange additional information about traffic, weather, road conditions, etc. A basic approach will be realised on a test vehicle with existing on-board sensors and off-the-shelf communication modules. In parallel, an advanced approach will develop driver warning strategies using a driving simulator to evaluate and specify the needs for an extended intersection safety system.
2 INTERSAFE Concept & Vision
The INTERSAFE project realises two different approaches in parallel. The first approach is a bottom-up approach based on two laser scanners, one video camera and vehicle-to-infrastructure communication. All these state-of-the-art devices will be installed on a VW test vehicle, as shown in figure 1 and figure 2. The laser scanners will be used for object detection and the video camera for road marking detection. Highly accurate vehicle localisation is performed by fusion of the outputs of the video and laser scanner systems. The Laserscanner system tracks and classifies obstacles and other road users.
Fig. 1. Bottom-up approach with state-of-the-art sensors on a VW test vehicle
Fig. 2. Fields of view of two ALASCA® sensors and one video camera used in the INTERSAFE approach
Furthermore, some communication modules will be installed at selected intersections in public traffic to realise communication between the vehicle and the traffic lights. This approach will result in a basic intersection system, which can be evaluated in public traffic on the selected intersections.
Fig. 3. Top-down approach in the BMW driving simulator
The second approach is a top-down approach, based on a BMW driving simulator (see figure 3). The driving simulator allows the analysis of dangerous situations, independent of any restricted capabilities of the sensors for environmental detection. The results of this approach will be used to define an advanced intersection safety system, including requirements for advanced on-board sensors. The concept of INTERSAFE is shown in figure 4. Based on object detection, road marking detection and navigation based on natural landmarks (realised by matched information of the Laserscanners and the video camera in the basic approach), as well as a detailed map of the intersection, a static world model is built. As a result of this model, all objects and the position of the ego vehicle are known precisely.
Fig. 4. INTERSAFE concept & vision
In a second step, a dynamic risk assessment is performed. This is based on object tracking & classification, communication with the traffic management, as well as the intention of the driver. As a result of the dynamic risk assessment, potential conflicts with other road users and the traffic management can be identified. Consequently, the intersection safety system is able to support the driver at intersections. In the INTERSAFE project, the consortium is mainly focused on stop sign assistance, traffic light assistance, turning assistance and right of way assistance. The difference between the two approaches (A-ISS = Advanced Intersection Safety System and B-ISS = Basic Intersection Safety System) lies in their time to market and complexity. The architecture and the warning strategies will probably be the same.
3 Communication
Car-to-infrastructure communication offers additional information to approaching vehicles. Thus, it is possible to bring more safety to signalised intersections by communicating with the traffic signal system (TSS). Several functions can benefit from this communication. One aim of the INTERSAFE project is to prevent the driver from overlooking a red light at intersections. Communication with the TSS enables the on-board application to estimate the remaining time before the light signal changes. With this information an appropriate warning or intervention strategy can be developed that assists the driver. Besides the traffic light violation warning, a comfort system can be realised that gives a speed recommendation to a driver approaching the intersection. This recommendation for reaching the green light enables better traffic flow and shorter stopping times at intersections. The above-mentioned functions can be realised using unidirectional communication from the traffic light to the car, which is the first approach in the INTERSAFE project. Extending this technique to bidirectional communication offers additional possibilities for driver assistance and driver comfort systems. Once the approaching cars communicate bidirectionally with the TSS, the following functions can be realised:
- Intelligent traffic light control: as an extension of the conventional induction loops, the TSS “knows” further in advance how many cars are approaching the intersection at a specific time.
- Traffic light priority for emergency vehicles.
- Extended perception: when the approaching cars “inform” the TSS of their arrival (as for the intelligent traffic light control), the information can be routed to all other cars to extend their survey area, even “around the corner”.
Figure 5 shows the communication facilities for communication with a traffic signal system.
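The green-light speed recommendation described above can be illustrated with a small sketch. The function name and the legal speed band are hypothetical; the idea is simply that, knowing the distance to the stop line and the start/end of the next green phase, one can bound the constant speeds that let the vehicle arrive while the light is green.

```python
def green_window_speeds(distance_m, t_green_start_s, t_green_end_s,
                        v_min=5.0, v_max=13.9):
    """Return a (low, high) band of constant speeds in m/s that lets a
    vehicle at distance_m from the stop line arrive during the green
    phase, which starts t_green_start_s from now (0 if already green)
    and ends t_green_end_s from now. Returns None if no speed inside
    the legal band [v_min, v_max] works."""
    # Arriving no later than the end of green bounds the speed from below ...
    lo = distance_m / t_green_end_s if t_green_end_s > 0 else float("inf")
    # ... arriving no earlier than the start of green bounds it from above.
    hi = distance_m / t_green_start_s if t_green_start_s > 0 else v_max
    lo, hi = max(lo, v_min), min(hi, v_max)
    return (lo, hi) if lo <= hi else None
```

For example, 200 m from the stop line with green from 10 s to 30 s from now, any speed between about 6.7 m/s and the legal maximum reaches the green phase; if no admissible speed exists, the driver is simply advised to stop.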
Fig. 5. Car-to-infrastructure communication facilities

3.1 Technological Basis
In accordance with the activities in the United States (Vehicle Safety Consortium, VSC) and in Europe (Car-to-Car Communication Consortium, C2C-CC) in the field of car-to-car and car-to-infrastructure communication, the technological basis is IEEE 802.11a, also known as Wireless LAN. One goal of these activities is to obtain an exclusive frequency band within the 5 GHz range for safety-relevant applications, as already realised in the US (frequency band from 5.850 GHz to 5.925 GHz). Communication with the TSS should use the same band for its applications.
3.2 Communication Properties
Broadcasting the relevant information from the TSS to the cars within range of the radio link seems suitable for the realisation of unidirectional communication with the TSS. A maximum range of 200 m should be sufficient: a vehicle driving at a speed of about 70 km/h will receive the transmitted data more than 10 s before arriving at the intersection. On streets with many traffic lights equipped with communication modules, the range can, of course, be shorter. An update of the transmission every 100 ms should be adequate for intersection-related applications. Based on the initial maximum range of about 200 m, there seems to be no need for additional repeaters or multi-hops. Nevertheless, once results from test sites in city areas with many possible occlusions are available, a need for multi-hop and repeaters might emerge.
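The warning-time figure quoted above follows directly from range and speed; a one-line sketch (hypothetical function name) makes the arithmetic explicit:

```python
def warning_time_s(radio_range_m: float, speed_kmh: float) -> float:
    """Time between first reception of TSS data at the maximum radio
    range and arrival at the intersection, assuming constant speed and
    line-of-sight reception."""
    return radio_range_m / (speed_kmh / 3.6)  # km/h -> m/s
```

With a 200 m range at 70 km/h this yields about 10.3 s, consistent with the "more than 10 s" stated in the text.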
As for all safety-related applications with wireless communication, the integrity of the data from the TSS is important. Media access and the handling of priorities will be realised as specified in IEEE 802.11e. To guarantee that the transmitted data originates from the TSS, an authentication mechanism with certification is needed: transmitted messages are digitally signed before sending and, when received, are verified using the public key read from the certificate. For comfort system applications at traffic signal systems, or other applications that require bidirectional communication, these techniques have to be extended. But as mentioned before, in a first step INTERSAFE will focus on the simpler unidirectional communication. Nevertheless, an extension will not affect the standards and the specifications stated above.
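The authentication idea can be sketched as follows. Note this is a simplification: the text describes asymmetric signatures verified with a public key from a certificate, whereas this illustration uses a shared-secret HMAC (Python standard library) purely to show the sign/verify flow; all names and the key are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared key standing in for the TSS's signing credentials.
TSS_KEY = b"demo-key-not-for-deployment"

def sign_tss_message(payload: dict, key: bytes = TSS_KEY) -> dict:
    """Serialise the signal-phase payload and attach an authentication tag."""
    body = json.dumps(payload, sort_keys=True)
    tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify_tss_message(msg: dict, key: bytes = TSS_KEY) -> bool:
    """Recompute the tag over the received body; reject tampered messages."""
    expected = hmac.new(key, msg["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])
```

A receiving vehicle would only feed signal timings into the warning strategy after verification succeeds; in the real system the tag would be a digital signature checked against the TSS certificate.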
3.3 INTERSAFE System Architecture
Within the INTERSAFE project a prototype intersection with communication will be built up. In the beginning, a standard PC will replace the TSS controller board in order to have unrestricted access to all required data (e.g. signal times); today's TSS controllers do not offer that possibility with sufficient accuracy. The PC will be equipped with an IEEE 802.11a WLAN card to realise the communication. A GPS timestamp will ensure the synchronisation of the sent and received data. With this setup, the communication possibilities for safety-related applications at intersections will be evaluated.
Fig. 6. Communication system
Figure 6 shows the system architecture schematically. Following a successful demonstration of the system's functional efficiency, a reengineering of a standard TSS controller can be considered in order to demonstrate the feasibility in real traffic situations.
4 Accidentology – Relevant Scenarios
Based on a detailed accident analysis of intersections in selected European countries, the relevant scenarios which have to be addressed by an intersection safety system were determined. The three most important scenarios, covering more than 60% of the accidents at intersections, were identified. The strategy of the applications in INTERSAFE focuses on warning the driver if a dangerous situation is predicted. Thus, a warning has to be generated only a few seconds before a potential crash.
Fig. 7. CASE VEHICLE (A) drives with an initial speed of 0 to 60 km/h. The final speed is 0 km/h and the final position is the stop line at the road sign. The DRIVER'S INTENTION (A) is to stop and cross the road, to stop and turn left or right, or not to stop. The ROAD SIGNS could be a stop sign (mainly), a traffic light or a give-way sign. The OPPONENT VEHICLE (B) drives with a constant speed of up to 40 km/h from right to left or vice versa.
Fig. 8. CASE VEHICLE (A) drives with an initial speed of 0 to 60 km/h. The ROAD SIGNS could be a stop sign (mainly), a traffic light or a give-way sign. The OPPONENT VEHICLE (B) drives with a constant speed of up to 60 km/h from right to left or vice versa, or turns left/right. The DRIVER'S INTENTION (B) is to stop and cross or not to stop.
Fig. 9. CASE VEHICLE (A) drives with an initial speed of 0 to 60 km/h. The final speed is 0 km/h and the final position is a stop in the centre of the intersection. The DRIVER'S INTENTION (A) is to turn left. There are no ROAD SIGNS for the case/opponent vehicle. The OPPONENT VEHICLE (B) drives with a constant speed of up to 40 km/h from the opposite direction.
The three most important scenarios, covering more than 60% of the accidents at intersections, are taken as the reference for the requirements.
5 Requirements
As a result of the relevant scenarios from the previous chapter and the warning strategy, requirements for the sensor system are formulated. INTERSAFE is focusing on stop sign assistance, traffic light assistance, turning assistance and right of way assistance. Sensor systems should have a medium range of up to 80 m, with a very wide field of vision of about ±125° around the front of the vehicle. They should have the ability to localise the vehicle accurately in position and orientation. Of course, automotive requirements such as robustness to weather and lighting conditions are under consideration.
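The range and field-of-vision requirement can be expressed as a simple geometric test. This sketch (hypothetical function name; vehicle frame with x forward and y to the left assumed) checks whether a detected point lies inside the required sensing zone:

```python
import math

def in_sensor_coverage(x_m, y_m, max_range_m=80.0, half_fov_deg=125.0):
    """True if a point in the vehicle frame (x forward, y left) lies
    within the required coverage: up to 80 m range and within +/-125
    degrees of bearing around the front of the vehicle."""
    rng = math.hypot(x_m, y_m)
    bearing_deg = math.degrees(math.atan2(y_m, x_m))
    return rng <= max_range_m and abs(bearing_deg) <= half_fov_deg
```

A crossing vehicle 30 m to the side (bearing 90°) is inside the zone, while an object directly behind the host (bearing 180°) is outside it.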
Fig. 10. Field of vision for turning into a road with priority (left) and for turning off a road with priority (right) (not to scale)
The sensor systems of the bottom-up approach, which will be built up by spring 2005, will be applied to the three relevant scenarios.
6 Conclusion
The proposed solution to realise the INTERSAFE system is based on challenging technical objectives. The consortium is convinced it will be able to fulfil the requirements to support the driver at intersections. The basic system with on-board sensors will provide a solution which can be tested on selected intersections. The advanced system will provide knowledge about future needs of sensors and new opportunities to support the driver in more critical driving situations as well.
Acknowledgements
INTERSAFE is a subproject of the IP PReVENT. PReVENT is part of the 6th Framework Programme, funded by the European Commission. The partners of INTERSAFE thank the European Commission for supporting the work of this project.
References
[1] Buchanan, A.: Lane Keeping Functions. 12th Symposium e-Safety, ATA EL 2004, Parma, Italy.
[2] Lages, U.: Laser Sensor Technologies for Preventive Safety Functions. 12th Symposium e-Safety, ATA EL 2004, Parma, Italy.
[3] Fuerstenberg, K.: Object Tracking and Classification for Multiple Active Safety and Comfort Applications using a Multilayer Laserscanner. Proceedings of IV 2004, IEEE Intelligent Vehicles Symposium, June 2004, Parma, Italy.
[4] Heenan, A.; Shooter, C.; Tucker, M.; Fuerstenberg, K.; Kluge, T.: Feature-Level Map Building and Object Recognition for Intersection Safety Applications. Proceedings of AMAA 2005, Conference on Advanced Microsystems for Automotive Applications, March 2005, Berlin, Germany.
[5] Hopstock, M. D.; Ehmanns, D.; Spannheimer, H.: Development of Advanced Assistance Systems for Intersection Safety. Proceedings of AMAA 2005, Conference on Advanced Microsystems for Automotive Applications, March 2005, Berlin, Germany.
Kay Ch. Fürstenberg
Research Management
IBEO Automobile Sensor GmbH
Fahrenkrön 125, 22179 Hamburg, Germany
[email protected]

Bernd Rössler
Volkswagen AG
Group Research, Electronic Systems
Letter box 11/17760, D-38436 Wolfsburg, Germany
[email protected]

Keywords: intersection safety, communication, relevant scenarios, video, laserscanner
Feature-Level Map Building and Object Recognition for Intersection Safety Applications
A. Heenan, C. Shooter, M. Tucker, TRW Conekt
K. Fürstenberg, T. Kluge, IBEO Automobile Sensor GmbH

Abstract
Accidents at intersections happen when drivers perform inappropriate manoeuvres. Advanced sensor systems will enable the development of Advanced Driver Assistance Systems (ADAS) which can assess the potential for a collision at a junction. Accurate localisation of the driver's vehicle and path prediction of other road users can be fused with traffic signal status and other information. The ADAS system can use this fused data to assess the risks to the driver and other road users of potentially hazardous situations and warn the driver appropriately. The accurate localisation of the host vehicle is achieved by utilising individual sensors' feature-level maps of the intersection. The INTERSAFE project will independently use video and Laserscanner sensing technologies for localisation and then fuse the individual outputs to improve the overall accuracy. The Laserscanner system will also be used to track and classify other road users and obstacles, providing additional data for the path prediction and risk assessment part of the application.
1 Introduction
European accident statistics show that up to a third of fatal and serious accidents occur at intersections. The INTERSAFE project aims to reduce and ultimately eliminate fatal collisions at intersections. The project will explore the accident prevention and mitigation possibilities of an integrated safety system by creating vehicle demonstrators that provide the driver with turning assistance and infrastructure status information. Furthermore, the effectiveness of the safety system will be examined for higher-risk scenarios through its implementation and testing in a simulator. The sensor technologies being used in INTERSAFE are video and Laserscanner. Figure 1 shows an example intersection with a vehicle and the sensors' fields of view overlaid. This paper will focus on the sensor technology challenges within the INTERSAFE project, namely, algorithm development for vehicle localisation, fusion of the outputs of the video and Laserscanner systems, and the use of the Laserscanner system to track and classify obstacles and other road users. The nature of the problem faced by the INTERSAFE project means that a very high accuracy in the localisation of the host vehicle in the intersection is required. Improvements to current automotive sensor technologies are needed to achieve this required level of accuracy, both for map creation and for real-time localisation within intersections. The INTERSAFE system's accuracy will require the fusion of high-level localisation data from two independent but complementary sensor technologies.
Fig. 1. Example intersection with a vehicle and the sensors' fields of view overlaid
As each of the two sensing technologies used in INTERSAFE can detect different types of features, they will have their own feature-level maps of the intersection, containing only features detectable by, or relevant to, themselves. Each sensor will match sensor data with the feature-level map to provide an estimate of the position of the vehicle within the intersection. The estimates of vehicle position from the two sensors will be fused to provide a single estimate of the host vehicle position. The host position estimate will be used to determine the host vehicle location on a high-level feature map of the intersection. The high-level map contains data (such as the position of lanes, stop lines, etc.) relevant to the risk assessment and collision avoidance algorithms. In the future, the maps could be transmitted to the host vehicle as it approaches an intersection that requires extra driver assistance. However, this is beyond the scope of the INTERSAFE project. The feature-level maps will be created semi-automatically. Intersection data reported by the sensors, along with any relevant vehicle dynamics and GPS data, will be recorded whilst driving through the intersection from the various available approach roads (fig. 2). Special reference features will be added to the intersection during data logging to allow the map building algorithms to determine the orientation of each logged section and to aid in linking the logs from different directions. The special features must be detectable by both sensing technologies to enable the two sensor-level maps to be referenced from the same point on the intersection. The data logs can then be post-processed to build up separate feature-level maps of the intersection for each sensor.
Fig. 2. Video sensor datalogger screen shot
After the feature-level maps have been created, they can be edited to remove the specially added features that would not appear on the actual junction, and any features that may not be useful to the sensors for localisation or target discrimination. The Laserscanner system will also use the map to remove all measurements at fixed obstacles in the current range profile set, with the remaining objects being classified and reported to the INTERSAFE system as road users and potential threats. The sensors will be synchronised during operation using the video frame synchronisation signal to trigger the scan point of the Laserscanner system. This should simplify the host vehicle localisation task that fuses the data from the two sensor systems.
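The core of matching sensor data against a feature-level map can be sketched as follows. This is a toy illustration (hypothetical names), not the project's algorithm: each observed feature, expressed in a rough world-frame guess of the vehicle pose, is associated with its nearest map feature, and the mean residual gives a correction to the position estimate. A real system would gate, weight and filter these residuals and also estimate orientation.

```python
import math

def match_and_localise(observed, feature_map, gate_m=1.0):
    """Associate each observed feature (x, y) with its nearest map
    feature and return the mean (dx, dy) correction to the vehicle
    position estimate, or None if no association passes the gate."""
    dx_sum = dy_sum = 0.0
    n = 0
    for ox, oy in observed:
        # Nearest-neighbour association against the feature-level map.
        mx, my = min(feature_map,
                     key=lambda f: math.hypot(f[0] - ox, f[1] - oy))
        if math.hypot(mx - ox, my - oy) <= gate_m:
            dx_sum += mx - ox
            dy_sum += my - oy
            n += 1
    if n == 0:
        return None
    return dx_sum / n, dy_sum / n
```

If every observation is offset by the same amount, the returned correction is exactly that offset, which is the intuition behind feature-based localisation.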
2 Video Sensor Feature-Level Data Collection and Map Building
The TRW Conekt automotive video sensor for lane departure warning applications [1] will be modified and used in the INTERSAFE project to sense the position of intersection features (typically visible road markings) relative to the host vehicle. The positions of features will be compared with a video sensor feature-level map built up on previous visits to the intersection in question. Data from this comparison, combined with data from other vehicle sensors, can be used to calculate the localisation of the host vehicle in the junction. The features detected and stored in the map by the video sensor for the INTERSAFE application must have detectable information in both the lateral and longitudinal directions for unambiguous localisation of the host vehicle to be possible.
2.1 Sensor Suite Specifications
Sensor: Wheel Speed ×2
Sense Data: Wheel speed
Output type: Digital TTL
Output details: Output range: 0.3 V low, 5.0 V high; 44 pulses / wheel revolution

Sensor: BEI MotionPak™
Sense Data: Yaw rate, pitch rate, roll rate (°/sec); lateral, longitudinal and vertical acceleration (g)
Output type: Analogue
Output details: Max range: ±500°/sec (rates), ±10 g (accelerations); configured range: ±100°/sec (rates), ±2 g (accelerations); range output: ±2.5 Vdc (rates), ±7.5 Vdc (accelerations)

Sensor: Correvit® SL
Sense Data: Outputs are configurable; see data sheet. Channel 1: longitudinal distance DL; Channel 2: angle ϕ (°)
Output type: Digital TTL
Output details: Pulse voltage: 0–5 V; Channel 1: 340 pulses/m (configurable to 160–750 pulses/m); Channel 2: 50 Hz/° (FM, 10 kHz carrier wave, ±2 kHz)

Sensor: Afusoft Raven 6
Sense Data: Latitude (degrees, minutes, decimal minutes, North or South); longitude (degrees, minutes, decimal minutes, East or West); true course over ground (°); speed over ground (km/h)
Output type: Serial
Output details: NMEA 0183 standard format

Sensor: National Semiconductor greyscale HDR CMOS progressive scan imager
Sense Data: Raw video image
Output type: Cameralink
Output details: Resolution 640 × 480; enabled field of view 22° (vertical), 54° (horizontal); intra-frame dynamic range variable 62–110 dB; inter-frame dynamic range 120 dB; frame rate 30 Hz
The Correvit SL and BEI MotionPak sensors are used to improve the accuracy of the feature-level map creation task. It is envisioned that a standard wheel-speed and yaw-rate sensor (as available on most current high-end vehicles) will yield sufficient accuracy for the localisation task.
2.2 Feature Extraction
The existing TRW video lane sensor is used to measure the relative position of lane markings in front of the vehicle. The lane markings are detected in the video image using proprietary feature extraction techniques. Line tracing and fitting algorithms are used to extract parametric line descriptions from the raw edge data. Corrections are then made to the line parameters to compensate for inaccuracies caused by optical effects. Feature extraction from the video images is performed differently for map creation and localisation. For map creation, the raw image is logged with the vehicle dynamics data and post-processed to allow optimised line parameter estimates. For the online localisation (line tracking and matching), the line parameters are determined using optimised image processing techniques which run in real time on the system PC. Using parameterised lane markings reduces the amount of data which needs to be processed by the association and tracking algorithms.
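The line-fitting step above is proprietary, but the underlying idea of turning raw edge points into a parametric line description can be illustrated with an ordinary least-squares fit (hypothetical function name; a stand-in, not TRW's algorithm):

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b to edge points extracted from
    a lane marking; (a, b) is the parametric line description that
    replaces the raw point set downstream."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

Replacing hundreds of edge pixels per marking with two parameters is what keeps the association and tracking stages cheap.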
Fig. 3. Video sensor image and detected lines

2.3 Map Creation
During map creation, lane marking images and vehicle dynamics data will be logged for each approach road to the intersection. Each log is processed to generate a map containing the absolute position of each of the lane markings. The absolute positions of lane markings are determined by fusing the lane marking positions relative to the vehicle with the vehicle dynamics data using an extended Kalman filter (EKF). The maps generated for the different approaches to the junction are then merged. This merging is performed automatically, as known reference markers are placed in or near the intersection. Limited manual editing of the map will then take place to produce a clean, low-level feature map (fig. 4) which can be used in the online video localisation system.
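The predict/update fusion at the heart of the EKF mentioned above can be shown in its simplest scalar form. This sketch (hypothetical names; one state dimension, not the project's full EKF) propagates the state with an odometry increment and then blends in a marking observation via the Kalman gain:

```python
def kalman_1d(x, p, u, q, z, r):
    """One predict/update cycle of a scalar Kalman filter:
    x, p  - prior state estimate and its variance
    u, q  - odometry increment (from vehicle dynamics) and its noise
    z, r  - marking position observation and its noise.
    Returns the updated (state, variance)."""
    # Predict: propagate the state with the vehicle-dynamics increment.
    x_pred = x + u
    p_pred = p + q
    # Update: the Kalman gain weights the measurement by relative certainty.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

The variance always shrinks in the update step, which is why repeatedly fusing dead-reckoning with marking observations keeps the map positions from drifting.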
Fig. 4. Video feature map
Fig. 5. Schematic description of the ALASCA®
3 Laserscanner Obstacle Detection
The Automotive Laserscanner (ALASCA® for short) was developed to integrate Laserscanner technology into automobiles. It combines a 4-channel laser range finder with a scanning mechanism and a robust design suitable for integration into the vehicle. The object data, such as contour or velocity, are useful for many applications in the automotive area.
3.1 Principle of Operation
The main units of the ALASCA® are shown in figure 5. A laser diode sends out short light pulses (red ‘light’ in fig. 5) and a rotating mirror deflects the infrared beam. A target around the scanner/car reflects the infrared beam (blue ‘light’ in fig. 5), and the mirror directs the returning light to the receiver. This operation takes only a few hundred nanoseconds. The time elapsed during this procedure is a measure of the distance to the target. The angular resolution is supplied by an angle encoder. The measurement values are compensated to minimise the effects of temperature and of highly reflective targets, to obtain accurate and robust distance values. With the new measurement technology, the ALASCA® is able to detect two echoes per measurement and generate two distance values for each laser shot and layer (fig. 6). With this feature the ALASCA® is able to detect targets behind raindrops, fog or spray when driving behind other vehicles. The Laserscanner can measure through a dirty cover, recognise heavy soiling and report a malfunction of the Laserscanner system.
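The time-of-flight principle described above reduces to one formula: the pulse travels to the target and back at the speed of light, so the distance is half the round-trip time multiplied by c. A minimal sketch (hypothetical function name):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Convert a measured pulse round-trip time into target distance:
    the pulse covers the range twice (out and back)."""
    return C * round_trip_s / 2.0
```

At the sensor's maximum range of 80 m the round trip takes roughly 530 ns, which is why the receiver electronics must resolve nanosecond-scale timing to reach centimetre range resolution.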
Fig. 6. Example for double-echo evaluation
The four scan planes give the ALASCA® a vertical field of view of approx. 3.2 degrees. In combination with the evaluation of the scan data (e.g. removing scan data resulting from measurements on the ground, rain or snow), the scanner system compensates for the pitch movements of the vehicle without losing contact with the tracked targets.
Fig. 7. ALASCA® scan data, 150° scan area and video frame of this scene
Fig. 7 shows a perspective view with a scan area of 150 degrees. One ALASCA® is mounted at the 0 m position (red circle). The 4 layers are shown in 4 different colours. The second echoes (called the B-channel) are shown in light colours. The red line shows the zero-degree position of the vehicle coordinate system. The oncoming black car is to be found to the left of the orange circle in the scan data picture. The silver car in front is part of the way along the red line. The red dots (in the orange circles in fig. 7) are distance values that are processed and marked as ground data. The scan pre-processing marks measurements from dirt and rain as well. The object tracking and classification algorithms categorise the objects on both sides as background objects.
3.2 Technical Data ALASCA®
- Range: 0.3 m up to 80 m (30 m on really dark targets)
- Range resolution: ±2 cm
- Horizontal field of view: up to 240 degrees (depending on the mounting position)
- Scan frequency: 10 to 40 Hz
- Vertical field of view: 3.2 degrees, subdivided into 4 layers
- Horizontal angular resolution: 0.25 degree to 1 degree (depending on the scan frequency)
- Interfaces: ARCnet / Ethernet, CAN
- Eye safe (laser class 1)
- Waterproof to IP66 (even as a stand-alone unit), no external moving parts
- Electrical power consumption: 14 W

3.3 The Laserscanner Fusion System
In previous projects, scan data fusion was very processing-intensive. The Laserscanner gathers the range profile over a certain time while the mirror rotates and the host vehicle is usually moving. This leads to a shift in the distance, with respect to the vehicle coordinate system, of actually equidistant targets that are measured at different angles, due to the time within the scan. Every measurement is therefore shifted to a common time base with respect to the host vehicle's movement. Moreover, two Laserscanners do not usually measure the same object at the same time; if a fused scan simply combined the scan data directly, the same object would appear at different places in the vehicle coordinate system. Taking this into account, the ALASCA® scan data fusion is based on synchronised Laserscanners to provide a consistent fused scan. Fig. 8 shows a fusion system with two ALASCA® sensors, a fusion ECU and the vehicle control unit.
Fig. 8. Fusion system with vehicle control
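The time-base correction described above can be sketched as follows. This is a deliberately simplified model (hypothetical names; straight-line motion only, rotation neglected): a target measured earlier in the scan is shifted by the forward distance the host has travelled since that measurement.

```python
def compensate_scan(points, v_mps, t_ref):
    """Shift raw measurements (t, x, y), taken while the host drives
    straight ahead at v_mps, to the common time base t_ref: subtract
    the forward travel of the vehicle between the measurement time t
    and t_ref (yaw motion is neglected in this sketch)."""
    return [(x - v_mps * (t_ref - t), y) for t, x, y in points]
```

With this correction, a static wall measured at the start and at the end of one mirror revolution lands at the same x position in the vehicle frame; the full system additionally compensates rotation from the yaw-rate signal.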
3.4 Scanner Synchronisation
The first step of scan data fusion with ALASCA® sensors is the synchronisation of the Laserscanners. The ECU periodically sends signals (with the desired rotation frequency) to both Laserscanners. The Laserscanners adapt their rotation frequency to the synchronisation frequency and align the angle of the rotating mirror with the sync signal. For this purpose, a fine-tuning motor speed controller is integrated into the Laserscanner. The accuracy of synchronisation (i.e. the time difference between the synchronisation signal and the zero-crossing of the direction of view) is about ±2 ms. The synchronisation ensures that both Laserscanners measure an object at almost the same time. Other sensors (such as a video camera) could be synchronised by the ECU as well.
3.5
Laserscanner Data Fusion
The two ALASCA® sensors send raw data (distance and angle measurements) to an ECU. The fusion module translates the scans into the vehicle coordinate system and fuses the two scans. Thanks to the Laserscanner synchronisation, there is no need for time shifting due to the movement of the host vehicle.
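Fusing the two synchronised scans then reduces to a static coordinate transformation per sensor, given each scanner's mounting pose on the vehicle. A minimal sketch, with mounting parameters and function names assumed for illustration:

```python
import math

def scan_to_vehicle_frame(scan, mount_x, mount_y, mount_yaw):
    """Transform raw polar measurements (distance, angle) from one
    scanner's frame into the common vehicle coordinate system."""
    c, s = math.cos(mount_yaw), math.sin(mount_yaw)
    pts = []
    for dist, ang in scan:
        xs, ys = dist * math.cos(ang), dist * math.sin(ang)
        pts.append((mount_x + c * xs - s * ys,
                    mount_y + s * xs + c * ys))
    return pts

def fuse_scans(scan_left, scan_right, pose_left, pose_right):
    """A fused scan is simply the union of both synchronised scans
    expressed in vehicle coordinates."""
    return (scan_to_vehicle_frame(scan_left, *pose_left)
            + scan_to_vehicle_frame(scan_right, *pose_right))
```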
Fig. 9.
Object tracking using Laserscanners: road users are tracked crossing an intersection
Intersafe
3.6
Object Tracking and Classification
By comparing the segment parameters of a scan with the predicted parameters of known objects from the previous scan(s), established objects are recognised. Unrecognised segments are instantiated as new objects, initialised with default dynamic parameters. The tracking process is usually divided into three sub-processing steps, as shown in figure 10.
Fig. 10. The tracking process
For estimating the object state parameters, the Kalman filter is well known in the literature as an optimal linear estimator and is used in various applications [2]. A simplified Kalman filter, the alpha-beta tracker, is often used instead [3]. Here the Kalman filter was chosen, as it allows for more complex dynamic models, which are necessary for precise object tracking. We evaluated our data with different association methods, such as the nearest neighbour and the global nearest neighbour method [4]. Object classification is based on the object outlines (static data) of typical road users, such as cars, trucks/buses, poles/trees, motorcycles/bicycles and pedestrians. Additionally, the history of the object classification and the dynamics of the tracked object are used to support the classification performance [5, 6]. In a simple algorithm, road users are classified by their typical angular outline using only the geometric data [7]. The object's history and its dynamic data are additionally necessary to enable a robust classification [8]. In case there is not enough information for an object classification, a hypothesis is generated based on the object's current appearance. This temporary assignment remains valid as long as no limiting parameter is violated; nevertheless, the classification is checked every scan to verify the assignment of the specified class [9].
An environmental model supports the selection of a suitable class. The understanding of the traffic situation may further improve the classification results.
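To make the prediction/association/update cycle concrete, here is a one-dimensional alpha-beta tracker — the simplified Kalman variant mentioned above — together with a nearest-neighbour association step. This is a textbook sketch, not the filter actually used in INTERSAFE; the gains and names are illustrative.

```python
class AlphaBetaTracker:
    """Minimal alpha-beta tracker for one coordinate of an object state
    (position x, velocity v); a simplified, fixed-gain Kalman filter."""

    def __init__(self, x0, v0=0.0, alpha=0.5, beta=0.1):
        self.x, self.v = x0, v0
        self.alpha, self.beta = alpha, beta

    def predict(self, dt):
        """Propagate the state with a constant-velocity model."""
        self.x += self.v * dt
        return self.x

    def update(self, z, dt):
        """Correct the prediction with the associated measurement z."""
        r = z - self.x                  # innovation
        self.x += self.alpha * r
        self.v += self.beta * r / dt
        return self.x

def nearest_neighbour(prediction, measurements):
    """Associate a predicted position with the closest measurement."""
    return min(measurements, key=lambda m: abs(m - prediction))
```

Tracking a target moving at a constant 1 m/s sampled every 0.1 s, the position and velocity estimates settle on the true values after a few dozen scans.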
4
Laserscanner Feature-Level Map Building
GPS-based localisation can be sufficient in an environment with a free view of the sky, but it is still neither a very accurate nor a very reliable means of localisation. In particular in urban areas, where most intersections are located, localisation using GPS systems is not satisfactory. Therefore other strategies are under development which enable a highly accurate and at the same time reliable localisation. The approach within INTERSAFE focuses on localisation based on natural landmarks. In a first stage, a map of the intersection is generated by moving the Laserscanner across the intersection, taking the host vehicle's movement into account [10]. The obtained grid map is used to semi-automatically mark the natural landmarks, such as posts or trees. The landmarks are registered in a feature-level map, as shown in fig. 11.
Fig. 11. Laserscanner feature map (left) and video reference picture (right)
After a feature-level map has been generated, it is possible to recognise some of the landmarks registered in it. With that, every vehicle equipped with the Laserscanner is able to localise itself with respect to the recognised landmarks at the intersection. First results show that a very accurate localisation at the intersection is possible.
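The pose estimate from matched landmarks has a closed-form least-squares solution in 2D (the Kabsch/Horn alignment). The sketch below assumes the landmark correspondences are already established; it illustrates the principle and is not the INTERSAFE code.

```python
import math

def localise_from_landmarks(observed, mapped):
    """Estimate the vehicle pose (x, y, yaw) in the map frame from matched
    landmark pairs: `observed` in the vehicle frame, `mapped` the same
    landmarks in the feature-level map."""
    n = len(observed)
    ox = sum(p[0] for p in observed) / n
    oy = sum(p[1] for p in observed) / n
    mx = sum(p[0] for p in mapped) / n
    my = sum(p[1] for p in mapped) / n
    # correlation terms of the centred point sets
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(observed, mapped):
        ax, ay, bx, by = ax - ox, ay - oy, bx - mx, by - my
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    yaw = math.atan2(sxy - syx, sxx + syy)     # rotation vehicle -> map
    c, s = math.cos(yaw), math.sin(yaw)
    # vehicle origin expressed in map coordinates
    return mx - (c * ox - s * oy), my - (s * ox + c * oy), yaw
```

With three or more well-spread landmarks this recovers the full planar pose; robustness against mismatched landmarks would additionally require outlier rejection (e.g. RANSAC).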
5
Conclusion
The INTERSAFE project answers the need for increased safety at intersections on European roads. The INTERSAFE approach presented in this paper focuses on existing Laserscanner and video technologies and the modifications needed to achieve the required accuracy of host vehicle localisation. Additional information about the road users at the intersection is provided for assistance applications. Using the two sensors and infrastructure-to-vehicle communications, collision avoidance applications (relevant to common intersection accidents) are under development. The INTERSAFE project will build a demonstrator vehicle that will allow the evaluation of the developed applications in real-life situations, as well as a simulator to evaluate driver reaction to the ADAS in situations that cannot be assessed on real intersections (e.g. full collision, high speed). Initial results show that the very high level of accuracy required for the INTERSAFE application is achievable through the use of individual sensor feature maps and the fusion of the output of the two sensors.
Acknowledgements INTERSAFE is a subproject of the IP PReVENT. PReVENT is part of the 6th Framework Programme, funded by the European Commission. The partners of INTERSAFE thank the European Commission for supporting the work of this project.
References
[1] Buchanan, A.; Tucker, M.: "A Low Cost Video Sensor for Lane Support". ITS World Congress 2002, Chicago.
[2] Welch, G.; Bishop, G.: An Introduction to the Kalman Filter. http://www.cs.unc.edu, 2001.
[3] Gavrila, D. M.; Giebel, J.: Shape based Pedestrian Detection and Tracking. Proceedings of IV 2002, IEEE Intelligent Vehicles Symposium, Versailles.
[4] Blackman, S.: Design and Analysis of Modern Tracking Systems. Artech House, London, 1999.
[5] Willhoeft, V.; Fuerstenberg, K. Ch.; Dietmayer, K. C. J.: New Sensor for 360° Vehicle Surveillance. Proceedings of IV 2001, IEEE Intelligent Vehicles Symposium, Tokyo.
[6] Fuerstenberg, K. Ch.; Willhoeft, V.: Pedestrian Recognition in urban traffic using Laserscanners. Proceedings of ITS 2001, 8th World Congress on Intelligent Transport Systems, Sydney, Paper 551.
[7] Fuerstenberg, K. Ch.; Hipp, J.; Liebram, A.: A Laserscanner for detailed traffic data collection and traffic control. Proceedings of ITS 2000, 7th World Congress on Intelligent Transport Systems, Turin, Paper 2335.
[8] Fuerstenberg, K. Ch.; Dietmayer, K. C. J.; Willhoeft, V.: Pedestrian Recognition in Urban Traffic using a vehicle based Multilayer Laserscanner. Proceedings of IV 2002, IEEE Intelligent Vehicles Symposium, Versailles, Paper IV-80.
[9] Dietmayer, K. C. J.; Sparbert, J.; Streller, D.: Model based Object Classification and Object Tracking in Traffic Scenes from Range Images. Proceedings of IV 2001, IEEE Intelligent Vehicles Symposium, Tokyo, Paper 2-1.
[10] Weiss, T.: Globale Positionsbestimmung eines Fahrzeugs durch Fusion von Fahrzeug- und GPS-Daten zur Erstellung einer digitalen Referenzkarte. Diplomarbeit, Universität Ulm, 2004.

Adam Heenan, Carl Shooter, Mark Tucker
TRW Conekt, Technical Centre
Stratford Road, Solihull, B90 4GW
United Kingdom
[email protected]
[email protected]
[email protected]

Kay Fürstenberg, T. Kluge
IBEO Automobile Sensor GmbH
Fahrenkroen 125
22179 Hamburg
Germany
[email protected]

Keywords:
video, laserscanner, intersection, map-building, safety, object recognition, sensor-level features
Development of Advanced Assistance Systems for Intersection Safety
M. Hopstock, Dr. D. Ehmanns, Dr. H. Spannheimer, BMW Group Research and Technology

Abstract
The major task of the project InterSafe, which is funded by the European Commission, is to reduce the number of accidents in intersection scenarios. To address this problem, the BMW Group chose a top-down approach to develop advanced safety systems for intersections. The relevant scenarios of turning, crossing and ignoring traffic lights were identified through the analysis of intersection accidents. Based on this, the controller is developed within the realistic environment of the BMW Group dynamic driving simulator. This allows the requirements for the surveillance systems to be determined in parallel with the development of the other components, such as the human-machine interface and the controller algorithm. Furthermore, the virtual environment allows the evaluation to take place with real test drivers in critical situations without endangering them. The result of the project will be an assistance system that enhances safety at intersections.
1
Motivation of a Top-Down Approach
The effectiveness of active safety systems depends heavily on novel sensors, which determine the reliability of situation detection. The development of safety systems often follows a classical bottom-up process driven by the capabilities of key technologies. In such a process there is a risk of not considering all possible situations in the design of active safety systems [1]. In order to support both system function and new technology, a top-down approach was chosen by BMW Group Research and Technology. This strategy allows the needed interaction between the desired function (reduction of accidents) and the technical realisation (real system). The process starts with the analysis of relevant accidents (i.e. those with significant reduction potential) in order to define the required functions of the system. Then the technical feasibility of assistance systems can be determined using simulation tools or a driving simulator. Finally, sensor requirements complete the development. In contrast, a bottom-up approach starts with the definition of a new sensor technology and then derives the manner in which it could be used to reduce accidents.
Fig. 1.
2
Approaches for system development [2]
Statistical Analysis
Several sources can be used for accident analysis. A first approach is based on national statistics (for Germany: DESTATIS, the German Federal Statistical Office). These include all accidents recorded by the police at the accident site, but contain only a few relevant parameters. Additionally, in-depth studies (for Germany: GIDAS, the German In-Depth Accident Study) can be used to obtain detailed information. Compared to national statistics they comprise relatively few reports, but these are compiled in great detail by specially trained investigation teams on site. Since these in-depth data are a well-defined subset of the national statistics, they can be regarded as a representative sample. In Germany, there were 354,534 accidents with at least minor injuries in 2003. The distribution of pre-crash situations (accident types) per road type is displayed in figure 2. The intersection-related situations are turn across path (type 2) and turn into/straight crossing path (type 3), which together account for 36% of all accidents. This indicates the importance of system development in this field.
Fig. 2.
Distribution of pre-crash situations per road type [3]
Analysing the main causes in figure 3 shows that disregarding the right of way clearly leads the ranking. Adding errors made while turning, a large group (≈1/3) of preventable mistakes can be addressed and potentially avoided by supporting the driver with advanced assistance systems for intersection safety.
Fig. 3.
Distribution of main accident causes [4]
3
Accident Analysis
After defining the relevant accident situations, it is important to analyse some representative cases in detail to obtain information on the initiating manoeuvres. This helps to interpret and understand why a critical situation (near-collision) in a particular case finally resulted in an accident (collision). One method of analysing a situation is to change relevant parameters such as velocity, deceleration and steering angle and to study the resulting changes in the accident sequence. Suitable simulation systems are required for this, such as PC-Crash© or Matlab/Simulink©. Reconstruction data are an essential input, as the parameters for avoidance algorithms should be realistic; analysing an actual accident can provide them. An appropriate tool is PC-Crash©. This programme is based on a 3D impact model, which allows the calculation of multiple collisions and the visualisation of the sequence, as shown by example in figure 4.
Fig. 4.
Reconstruction of an accident situation created with PC-Crash© [5]
Since PC-Crash© can only provide reconstruction data, a sufficient tool for system development, algorithm testing and a first performance test is a 2D simulation (e.g. based on Matlab/Simulink©). Stylised cars and road layouts represent the scenario and allow the effects of different assistance systems and driver reactions to warnings (e.g. no reaction, or braking/countersteering) on the sequence to be determined, as can be seen in figure 5. On the left, a severe left-turn accident is shown; in the middle it is mitigated by braking; on the right, avoidance is enabled by an appropriate intervention of the assistance system.
Fig. 5.
2D simulation results [5]
This detailed interpretation and simulation of accidents is the basis for the development of the safety application itself.
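The effect of a driver reaction on the accident sequence, as in figure 5, can be reproduced with a very coarse point-mass simulation. The geometry, thresholds and function name below are invented for illustration and are far simpler than the Matlab/Simulink© models described here.

```python
def simulate(d_host, v_host, d_other, v_other, zone=4.0,
             a_brake=0.0, t_react=0.0, dt=0.01):
    """Left-turn-across-path toy model: both vehicles approach a shared
    conflict zone of length `zone` metres. d_* are distances to the zone
    (m), v_* speeds (m/s); the other driver may brake with a_brake
    (m/s^2) after a reaction time t_react (s) following a warning.
    Returns 'collision', 'mitigated' (impact below 20 km/h) or 'avoided'."""
    t, s1, s2, v2 = 0.0, -d_host, -d_other, v_other
    while t < 30.0:
        s1 += v_host * dt
        if t >= t_react:
            v2 = max(0.0, v2 - a_brake * dt)
        s2 += v2 * dt
        if 0.0 <= s1 <= zone and 0.0 <= s2 <= zone:
            return 'collision' if v2 > 20 / 3.6 else 'mitigated'
        if s1 > zone and s2 > zone:
            return 'avoided'
        t += dt
    return 'avoided'
```

In this toy setup, an unbraked opponent at 13 m/s meets the turning host inside the conflict zone, while emergency braking (6 m/s²) triggered 0.5 s after a warning lets it stop short of the zone.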
4
Scenario Selection and System Development
As shown in the previous section, an assistance system shall prevent the driver from committing crucial mistakes that inevitably cause critical situations. Within the previously mentioned accident types turn across path and turn into/straight crossing path there exist several different scenarios. As the systems shall not only address single scenarios, similar ones have been grouped to be covered by the same assistance function. The most relevant basic scenario groups are therefore:
Fig. 6.
Left turn path (and collision with oncoming traffic)
Fig. 7.
Straight crossing path (and collision with lateral traffic)
Fig. 8.
Red-light crossing (and collision with other road users)
The red-light crossing scenario is not classified within any accident type and can involve multiple opponents such as pedestrians, other cars or railway trains. Even though it is not regarded as an accident type in the statistics, accidents caused by not adhering to traffic lights are also worth analysing, and the system development for assistance at traffic lights can benefit from the other situations. The systems will have to work within these basic scenarios and similar ones. These scenarios will be used to develop the system functionality, the controller algorithm, the sensor system requirements and the human-machine interface (HMI). The development process of the intersection safety systems follows the top-down approach. First, the system functionality has to be defined. The controller algorithm itself will be derived from a possible function. In order to cover all addressed scenarios, the first controller design will be developed independently of the sensor systems available in the near future. Simulation will be used to test the algorithms; the first implementation and testing will be done with Matlab/Simulink©, so no further knowledge of the sensor system is needed. The design of the HMI is directly connected to the system functionality. The main focus is to inform and warn the driver in such a manner that the driver can react quickly and appropriately. This task is quite complex: the relatively high speed requires fast reactions, so the HMI must provide an intuitively comprehensible warning. The HMI concept will be tested in the driving simulator.
Fig. 9.
Impression of virtual surrounding
The specification for the driving simulation itself can also be derived from the intersection scenarios. When approaching the intersection, the kinaesthetic and optical feedback has to be realistic at low speeds. Braking to a complete stop at intersections reaches values of up to -4 m/s² in standard situations, which differs significantly from braking in highway traffic. Thus, a dynamic driving simulator has to be used in order to give a realistic impression of the driving manoeuvres, including, for example, the final jolt when coming to a stop.
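The -4 m/s² figure can be related to stopping time and distance with elementary kinematics; a small sketch (the function name is assumed):

```python
def stopping_profile(v0_kmh, decel):
    """Time (s) and distance (m) to brake to a complete stop from
    v0_kmh (km/h) under a constant deceleration decel (m/s^2, > 0)."""
    v0 = v0_kmh / 3.6          # convert to m/s
    return v0 / decel, v0 * v0 / (2.0 * decel)
```

From 50 km/h, a constant -4 m/s² stop takes about 3.5 s over roughly 24 m, a manoeuvre the motion platform has to render convincingly.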
The optical impression is typically based on an urban environment with houses, streets, signs, etc. Therefore, the shape of these geometric elements has to be modelled in detail. Both aspects were considered while adapting the BMW driving simulator for the development of assistance systems for intersections. Figure 9 shows an example of the virtual image while approaching an intersection. Within this virtual surrounding, test persons can evaluate the system. The key to system functionality is the sensing technology. On the one hand, the movement of relevant vehicles in the surroundings has to be considered, and on the other hand, signs and traffic lights. Within intersection scenarios, sight obstructions cause problems for reliable surveillance. To solve this problem, two technologies will be considered: autonomous onboard sensors as well as communication systems (car-to-car, car-to-infrastructure). Starting with the hypothesis of ideal sensors, the roadmap from existing technology up to ideal sensors will be examined inversely, so that the system functionality resulting from given sensor capabilities can be identified. Thus, the minimum sensor requirements can be derived. Additional communication technologies will be investigated with regard to their potential. The final result of the development will be a matrix that shows the system functionality depending on the surveillance capability. In order to evaluate these systems, test drivers will have to assess them. This assessment can only be done in a reproducible environment. Since the developed system only applies to critical situations, a virtual environment ensures testing without endangering the test persons in real traffic. Within the driving simulator, the combination of a risk-free and realistic environment is possible. Following the top-down approach, the simulator also offers the advantage of ideal sensor modelling.
Within a virtual world, functionality and driver behaviour can be tested in almost any traffic situation, including non-ideal weather conditions such as fog or snow and a limited view into crossing streets.
5
Conclusion
The major goal of the InterSafe project is the reduction of intersection-related accidents. To accomplish this goal, the BMW Group develops active safety systems using a top-down approach. This starts with an accident analysis to identify critical traffic scenarios. Based on the identified scenarios of turning at and crossing an intersection, the algorithm development begins with simulation studies. This allows the development of surveillance technologies, such as autonomous onboard sensors or communication systems, in parallel with the controller development. The system requirements can be derived from the simulation studies. The major tool during the development phase is the dynamic BMW Group driving simulator. It allows the evaluation of complete systems, including sensing technology, controller and human-machine interface, within a virtual environment. Major advantages of this approach are the ability to test critical scenarios with test persons without endangering them, as well as the reproducibility. In the end, the effect of the complete system can be assessed as a function of the surveillance detection capability.
References
[1] Meitinger, K.-H.; Ehmanns, D.; Heißing, B.: "Systematische Top-Down-Entwicklung von Kreuzungsassistenzsystemen". VDI-Berichte 1864, 10/2004.
[2] Meitinger, K.-H.; Ehmanns, D.; Heißing, B.: "Kreuzungsassistent". Doktorandenkolloquium, 11/2003.
[3] DESTATIS – Statistisches Bundesamt, Wiesbaden 2004: Verkehrsunfälle – Strukturdaten 2003.
[4] GIDAS – TU Dresden und MH Hannover: Datenbank 2004.
[5] Hopstock, M.: "Aktive Sicherheit und Unfallanalyse". Diplomarbeit, BMW Group, 06/2004.
Matthias Hopstock, Dr. Dirk Ehmanns, Dr. Helmut Spannheimer
BMW Group Forschung und Technik GmbH
80788 München
Germany
[email protected]
[email protected]

Keywords:
intersection safety, driver assistance, accident analysis, driving simulator
Appendix A List of Contributors
List of Contributors Abele, J. 49 Adomat, R. 185 Andersson, G. 227 Arndt, M. 323 Arvanitis, T.N. 353 Ban, T. 447 Bauer, C. 435 Baum, H. 49 Becker, L.-P. 71 Bennett, J. 289 Beutner, A. 169 Bodensohn, A. 149 Brockherde, W. 425 Buettner, C. 243 Buhrdorf, A. 289 Bußmann, A. 425 Cheng, S. 381 Cheung, E. 381 Choi, T. 381 Constantinou, C.C. 353 Cramer, B. 311 Dahlmann, G. 413 Darmont, A. 401 de Boer, G. 371 Debski, A. 71 Degenhardt, D. 71 Diebold, J. 185 Diels, R. 401 Dietmayer, K. 197 Dobrinski, H. 289 Egnisaban, G. 381 Ehmanns, D. 521 Engel, P. 371 Ernsberger, C. 299 Färber, G. 3 Fürstenberg, K. 197, 215, 493, 505 Geduld, G. 185 Geißler, T. 49 Ghosh, S. 169 Goronzy, S. 335 Graf, Th. 61 Hammond, J. 459
Hanson, C. 243 Haueis, M. 149 Hedenstierna, N. 227 Heenan, A. 505 Hering, S. 413 Hillenkamp, M. 71 Ho, F. 381 Hoetzel, J. 115 Hoffmann, I. 79, 159 Holve, R. 335 Hölzer, G. 413 Hopstock, M. 521 Hosticka, B.J. 425 Hung, W. 381 Ina, T. 447 Justus, W. 197 Kai, K. 257 Kämpchen, N. 197 Kawashima, T. 447 Kerlen, C. 49 Kibbel, J. 197 Klug, M. 185 Kluge, T. 505 Knoll, P.M. 85 Kolosov, O. 289 Kompe, R. 335 Kormos, A. 243 Krisch, I. 425 Krüger, S. 23, 49 Kvisterøy, T. 227 Lüdtke, O. 289 Lui, B. 381 Mäckel, R. 149 Maddalena, S. 401 Mangente, T. 381 Matsiev, L. 289 Minarini, F. 487 Mitsumoto, M. 257 Möhler, N. 169 Mühlenberg, M. 97 Nakagawa, K. 257 Nakamura, T. 447 Ng, A. 381
Nitta, C. 425 Ochs, T. 311 Pelin, P. 227 Polychronopoulos, A. 169 Praefcke, W. 371 Pulvermüller, M. 149 Rettig, R. 435 Reze, M. 459 Rotaru, C. 61 Rüssler, B. 493 Sans Sangorrin, J. 115 Sauer, M. 323 Schäfer, B.-J. 85 Schamberger, M. 185 Schichlein, H. 311 Schier, J. 269 Schlemmer, H. 129 Schreiber, T. 149 Schulz, R. 197 Schulz, W.H. 49 Schumann, B. 311 Schwarz, U. 413 Shimomura, O. 447 Shooter, C. 505 Sohnke, T. 115 Solzbacher, F. 23 Spannheimer, H. 521 Takeda, K. 447 Thiem, J. 97 Thiemann-Handler, S. 311 Thurau, J. 473 Tong, F.-W. 381 Topham, D.A. 353 Tucker, M. 505 Uhrich, M. 289 Vogel, H. 129 Vogelgesang, B. 435 Ward, D.D. 353 Wertheimer, R. 425 Willig, R. 269 Wipiejewski, T. 381 Yau, S.-K. 381 Zhang, J. 61
Appendix B List of Keywords
List of Keywords 3D-MEMS 459, 473 4w 269 α-Si 129 ACC 159, 185, 257, 269 acceleration sensor 269 accelerometer 43, 459, 473 active safety 185 ad hoc networks 353 adaptive cruise control 185, 257 adaptive cruise control 257 aeronautics 129 air-conditioning 323 algorithm 227 amperometric 311 angular velocity sensor 269 APIA 185 assistant 159 automotive 129, 323 automotive application 459, 473 automotive camera 425 automotive image sensor 401 automotive MEMS 289 automotive sensor 435 bias 227 blind spot detection 71 braking distance 185 bulk micromachining 413, 459, 473 camera 159 camera-based 71 carbon dioxide 323 CMOS camera 401, 425 color image processing 61 communication 493 competitive analysis 23 confidence measures 335 construction areas 61 cross-system application 269 decision system 169 deep etching 43 demand controlled ventilation 323 deployment 23 differentiation 23
digital 227 distronic 185 DRIE 459, 473 DRIE etching 459 drive-by-wire 185 driver assistance 61 driver assistance accident analysis 521 driver assistance systems 59, 71, 97, 1185 driving simulator 521 dual-band camera 129 EAS 269 emergency braking 185 eSafety 353 ESP 227, 269 exhaust gas sensor 311 fiber optic transceiver 381 FM-pulse doppler 257 FMCW 257 foundry process 413 frequency modulated continuous wave 257 FSRA 257 full speed range ACC 185, 257 fully programmable 401 gas sensor 323 giant magnetoresistant 435 GMR 435 gold 311 GUIDE 335 gyro 227 gyroscopes 43 HARMEMS 459 headlamp IR sensor 185 headlight detection 71 HHC 269 high dynamic range 401 high sensitivity 401 high speed photodetector 381 high-dynamic range camera 425 HIS 185 hydrogen 311 image fusion 129 image processing 71 image sensor 185, 425
in-field test modes and integrity checks 401 inertial sensor cluster 269 inertial sensors 43 infotainment system 381 infrared sensor 185, 323 innovation networks 23 integrated lens 401 integrated pressure sensor 413 integrated tool-based UI design 335 integration 269 intelligent cruise assistance 97 intelligent transportation systems 353 intersection 493, 505, 521 key IR sensor 185 KIS 185 lane departure warning 185 lane detection 197 lane keeping support 169, 185 laser 159 laserscanner 493, 505 LDW 185 LIDAR 197 life time prediction 149 LKS 185 long range radar 185 low g accelerometer 459 low speed following 257 LSF 257 LWIR 129 magnetoresistive sensor 435 map-building 505 market 23, 43 market forecast 43 MEMS 227, 413 micro system technology 289 microbolometer 129 micromechanics 149 microsystems application 23 mid-range sensor 185 millimeter-wave radar 257 modular concept 269 MSM photodetector 381 multi-use of sensors 185
multilayer laserscanner 197 navigation 227 near infra-red 401 NIR 129 NOx 311 object recognition 505 oil analysis 149 oil condition 289 oil level sensor 289 OOV 335 OSGi 371 oxygen partial pressure 311 passive 159 passive safety 185 PCS optical fiber link 381 perception layer 169 platinum 311 polarization-twisting cassegrain 257 pre-crash detection 185 pre-crash safety 257 precrash-sensing 115 prediction 23 production capacities 23 pulse doppler 257 R744 323 radar 115, 159, 185 relevant scenarios 493 reliability 149 remote maintenance 371 remote software download 371 road safety 353 roadway detection 197 rollover 227 ROM 269 RoSe 269 routing protocols 353 safety 459, 473, 505 safety systems 185 SDM 227 sensor 435 sensor and actuator control 169 sensor and data fusion 23 sensor application 435
sensor fusion 97 sensor technology 185 sensor-level features 505 short range radar sensor 185 signal accuracies and characteristics 269 signal conditioning 459, 473 silicon surface micromachining 269 situation analysis 115, 169 solid electrolyte 311 speaker adaptation 335 speech recognition 335 speed sensor 435 spoken dialogue 335 stability 227 stereo 159 stereo from motion 71 stop-and-go support 185 stopping distance 185 surface micromachining 459 SWIR 129 system design 169 system monitoring 149 system-on-chip 413 technological challenges 23 telematics gateway 371 thermal imager 129 titration 311 tuning fork 289 uncooled CMT 129 VCSEL FOT 381 vehicle to vehicle communications 353 video 159, 493, 505 video-based 71 vision based systems 185 vision enhancement 185 VSC 227 wafer bonding 413 yaw rate sensor 269 yellow lane markings 61