Cognitive Systems Monographs Volume 1 Editors: Rüdiger Dillmann · Yoshihiko Nakamura · Stefan Schaal · David Vernon
Paolo Arena and Luca Patanè (Eds.)
Spatial Temporal Patterns for Action-Oriented Perception in Roving Robots
Rüdiger Dillmann, University of Karlsruhe, Faculty of Informatics, Institute of Anthropomatics, Robotics Lab., Kaiserstr. 12, 76128 Karlsruhe, Germany
Yoshihiko Nakamura, Tokyo University, Fac. Engineering, Dept. Mechano-Informatics, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
Stefan Schaal, University of Southern California, Department Computer Science, Computational Learning & Motor Control Lab., Los Angeles, CA 90089-2905, USA
David Vernon, Khalifa University, Department of Computer Engineering, PO Box 573, Sharjah, United Arab Emirates
Editors

Dr. Paolo Arena
Università di Catania, Dipto. Ingegneria Elettrica Elettronica e dei Sistemi, Viale Andrea Doria 6, 95125 Catania, Italy
E-Mail: [email protected]

Dr. Luca Patanè
Università di Catania, Dipto. Ingegneria Elettrica Elettronica e dei Sistemi, Viale Andrea Doria 6, 95125 Catania, Italy
E-Mail: [email protected]
ISBN 978-3-540-88463-7
e-ISBN 978-3-540-88464-4
DOI 10.1007/978-3-540-88464-4
Cognitive Systems Monographs ISSN 1867-4925
Library of Congress Control Number: 2008936216
© 2009 Springer-Verlag Berlin Heidelberg
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typeset & Cover Design: Scientific Publishing Services Pvt. Ltd., Chennai, India.

Printed on acid-free paper

springer.com
Preface
The basic principles guiding sensing, perception and action in biological systems seem to rely on highly organised spatial-temporal dynamics. In fact, all biological senses (vision, hearing, touch, etc.) process signals coming from different parts distributed in space, and these signals also show a complex time evolution. As an example, the mammalian retina builds a parallel representation of the visual world embodied in layers, each of which represents a particular detail of the scene. These findings clearly indicate that visual perception starts at the level of the retina and is not confined to the higher brain centres. Although vision remains the most useful sense for guiding everyday actions, the other senses, first of all hearing but also touch, become essential particularly in cluttered conditions, where visual percepts are partially obscured by the environment. Efficient use of hearing can be learnt from acoustic perception in animals such as insects: crickets use this ancient sense more than all the others to perform a vital function, mating. Motion in living systems likewise derives from highly organised neural networks driving limbs and other parts of the body in response to sensory inputs: a huge number of muscle cells are controlled in real time and in an adaptive fashion. Biological locomotion generation and control were demonstrated to be efficiently modelled using the paradigm of the Central Pattern Generator (CPG) as well as the decentralised control approach (Walknet). In locomotion, different and differently specialised neuronal assemblies behave in a self-organised fashion, so that, as a consequence of certain stimuli, particular patterns of neural activity arise, which are adequately sent to peripheral fibres to generate rhythmic leg motion and its control. Recent results have also shown that nonlinear and complex dynamics in cellular circuits and systems can efficiently model both sensing and locomotion. After analysing and being involved in such topics, researchers from various scientific fields and from different European laboratories came up with the idea of joining their efforts to try to move up from the modelling of sensing and locomotion to that of perception. The present volume collects the most significant results of the European research project called "SPARK", whose aim was to develop completely new sensing-perceiving-moving artefacts inspired by the basic principles of living systems and based on the concept of "self-organization". The project activities were carried out by scientists from six institutions: the Institute for Biological Cybernetics of the University of Bielefeld (Germany), the Institute of Perception, Action and
Behaviour, University of Edinburgh (UK), the Instituto Pluridisciplinar, Universidad Complutense de Madrid (Spain), and the companies ANALOGIC s.l. from Budapest (Hungary) and Innovaciones Microelectronicas s.l. (ANAFOCUS) from Seville (Spain). The Consortium was coordinated by the Dipartimento di Ingegneria Elettrica Elettronica e dei Sistemi of the University of Catania (Italy). The main scientific objective of the project was to ground the concept of sensory-motor integration and control on the paradigm of complex spatial-temporal dynamics, leading to the formalisation of novel bio-inspired control strategies. As far as perception is concerned, current perceptual schemes are often based on information derived from visual routines. Since real-world images are too complex to process for perceptual needs with traditional approaches, more computationally feasible algorithms are required to extract the desired features from the scene in real time, so as to proceed efficiently with the subsequent action. Traditional, fully representational approaches have to be replaced by more parsimonious and flexible strategies. The focus of the project was to answer, in a methodologically novel way, the key question in action-oriented perception: how can perception provide the necessary information for motor behaviour? In our spatial-temporal approach, perception is considered the result of a dynamic pattern-forming process, in which a particular pattern evolves within a spatial-temporal structure from the information derived from the sensors. This pattern concisely represents the environment information. Recent results in neurobiology and psychology have shown that perception is based on internal representations that combine aspects of sensory input and motor output in a holistic, unified way. These representations can be modelled by recurrent neural networks combined with spatial-temporal nonlinear patterns. The pattern internally representing the environment directly influences the particular associated motor behaviour. Patterns of cellular activity are quite common in nature. For example, Turing-like patterns are commonly employed in mathematical biology to represent the dynamics of morphogenesis, the process underlying the geometrical growth of organisms from initial gastrulation to adult shape (more is said about this below; a minimal pattern-forming sketch also follows this paragraph). Plateau potentials (patterns) of neural activity are common in lower invertebrates such as molluscs, selecting particular locomotion patterns from given sensory information. These results make it possible to formalise a new paradigm for active perception based on principles borrowed from ecological psychology, synergetics and dynamical systems, following Haken's and Kelso's approaches to perception and action. The project also aimed at introducing a new architecture for action-oriented perception, trying to emulate living beings, in which some actions, like escape or feeding, are inherited, while others have to be learnt from the environment, subject to the capabilities of the senses. The active collaboration among the partners resulted in the introduction of a new general model for action-oriented perception. The target biological architecture taken into account was the insect brain.
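To make the pattern-forming process mentioned above concrete, here is a minimal sketch of a discretised reaction-diffusion system. It uses the Gray-Scott model, a standard two-species system whose steady states form Turing-like patterns; the model choice, lattice size, rate constants and explicit Euler integration are illustrative assumptions of ours, not the equations or parameters actually used in the SPARK project.

```python
import numpy as np

# Minimal Gray-Scott reaction-diffusion sketch on a periodic lattice.
# All parameters are illustrative, not taken from the SPARK project.
N = 128                                  # lattice size
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065  # diffusion and reaction rates

u = np.ones((N, N))
v = np.zeros((N, N))
c = slice(N // 2 - 5, N // 2 + 5)        # seed a perturbed patch so a
u[c, c], v[c, c] = 0.50, 0.25            # pattern can nucleate
v += 0.01 * np.random.random((N, N))

def laplacian(a):
    """Five-point Laplacian with periodic boundary conditions."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

for _ in range(10000):                   # integrate until a pattern stabilises
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1.0 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v
# 'v' now holds a stationary spatial pattern: an emergent, concise
# 'representation' of the conditions that drove the dynamics.
```

In the perceptual reading sketched in this preface, the sensory stage would set the initial and boundary conditions of such a system, and the emerged pattern would serve as the codified environment representation.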
Insects were selected because, though often regarded as relatively simple reactive agents, they are nevertheless capable of interestingly complex perception for action, for example a range of impressive navigation tasks that require memory and multi-sensory integration. Their sensory systems contain a multiplicity of specialised receptors that perform intricate filtering even at the periphery, both through their anatomical arrangement and through processes of thresholding, spatial and temporal filtering, adaptation, etc. (a toy sketch of such peripheral filtering follows).
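As an illustration of the kind of peripheral processing just mentioned (our own sketch; the time constant and threshold are arbitrary), adaptation can be modelled as a slowly tracking baseline subtracted from the raw signal, followed by thresholding, so the 'receptor' signals changes rather than absolute stimulus levels.

```python
import numpy as np

def adaptive_receptor(raw, tau=20.0, threshold=0.1):
    """Toy peripheral filter: adaptation (high-pass) plus thresholding.

    A leaky baseline tracks the raw stimulus; the output is the
    rectified, thresholded deviation from that baseline, so constant
    stimuli fade out while changes are signalled.
    """
    baseline, out = 0.0, []
    for x in raw:
        baseline += (x - baseline) / tau   # slow adaptation
        deviation = x - baseline           # change detection
        out.append(deviation if deviation > threshold else 0.0)
    return np.array(out)

# A step stimulus produces only a transient response, as in many
# adapting mechanoreceptors.
stimulus = np.concatenate([np.zeros(50), np.ones(200)])
response = adaptive_receptor(stimulus)
```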
Many multimodal interactions between these sensory systems are observed, for example between visual, olfactory and mechanosensory wind detectors in following odour trails; between auditory and visual reflexes; and in multimodal cues for flight stabilisation. In summary, insects are not so complex as to prevent careful modelling, yet not so simple that no interesting and useful insights can be gained for new classes of perceptual machines. Starting from biological studies on the insect nervous system, a scheme for an insect brain architecture was introduced and, consequently, a new artificial perceptual model was conceived. This is a hierarchical structure, which takes inspiration from previous work on environmentally mediated perception, but enhances the existing models in several respects, reflecting each individual partner's expertise. Parallel precognitive behaviours, acting as hardwired systems, cooperate to provide suitable actions to drive the robot from the very beginning of its navigation. Higher structures provide an ever-increasing level of adaptation, including plasticity and learning. In particular, a proto-cognitive correlation layer is devoted to detecting time-varying causal correlations among the behaviours, so as to learn to anticipate one behaviour through another. A further layer provides "Representations" of the environment. These are conceived as emergent dynamical flows in complex nonlinear dynamical cellular systems, in the form of Turing patterns. They provide an abstract and concise representation of the environment as an emerging pattern, whose codified version is used to learn, in an unsupervised way, the suitable modulation of the proto-cognitive behaviours of the lower levels. Further memory models save successful sequences of modulated behaviours for future exploitation. Such a new and complex model needs to be assessed starting from its single elements. Moreover, each of the single blocks has to be studied from the neurobiological, mathematical and engineering points of view. This book provides a thorough analysis of the whole model through the investigation of its single components: basic pre-cognitive behaviours (including locomotion), proto-cognitive behaviours, the correlation block, and the representation layer. All these constitute the various blocks of a perceptual architecture that is expected to lead to a general architecture for action-oriented perception in roving robots. Although the main focus is methodological, some effort was devoted to providing realistic test cases and, as a consequence, the project delivered several robot prototypes able to implement the perceptual routines. The test beds for the experiments were a legged and a wheeled robot prototype, in which the perception task was focused on solving the locomotion/navigation problem in a cluttered environment, fusing together the different kinds of sensing capabilities to carry out specific tasks. The robots integrate low-level responses from sensors: they learn to create an iconic, abstract and simplified world representation in the form of a dynamic pattern and generate, as a result, a proper action in the form of a motor pattern. Action selection results not from an "if-then" process, but from a spatial-temporal self-organisation, represented through a pattern of activity. The book is organised in three parts.
Part I, Principles of insect perception, reports a thorough survey of the state of the art in perception. Here perception is considered in the sense of perception for action, which, at a high level, aims at describing how sensory signals are transformed into actions that, by affecting the environment, are transformed back into sensory events. The traditional approach seems to assign primary relevance to
the existence of a representation stage within the perceptual system, with the aim of building a kind of internal model useful for producing an action. This model would also be able to predict the effect of all (or part of) the sensory events, so as to react also to traces of previously experienced situations. The first chapter analyzes where and when the notion of representation is needed, i.e. which kinds of behaviour really require internal models. It turns out that most action-relevant perceptual behaviours would not need, in principle, any internal model. Moreover, in most cases a complete description of the environment is not needed, but only the part relevant for the action to be executed. This has several links with the direct view of Gibsonian perception and the associated notion of affordance. A different point of view considers perception as a transformation, in the sense that the agent transforms the sensory signals to give rise to motor output. According to this view, there is no need for an internal model. This approach matches well with the known fact that biological sensory systems perform much more than mere sensing: they process signals in sometimes very complex ways. From this perspective, perception is active. Vision cannot be considered a "passive visual scene recording", but rather a large collection of special-purpose algorithms; haptics is an active handling, used also to discover object qualities. The same consideration applies to the motor stage: body configuration and motor systems are tightly connected to the perceptual stage, and indeed they contribute to attaining high perceptual capabilities. Once again, Gibson's concept of direct perception is a paradigm in this sense. This approach can be extended by hypothesizing parallel transformations for multiple sensory-perceptual-motor loops, each one and all together closed through the environment. This closely resembles Rodney Brooks' approach. The existence of this kind of parallel sensory motor pathways was found in the insect brain architecture, which, in recent decades, has progressively begun to be unraveled. Therefore this approach can be taken into account for the design of the basis of an insect-brain-inspired architecture. This structure should be constituted of merely sensory motor capabilities that represent basic, reflex-driven behaviours. Examples of these are locomotion and the related sensory systems, like mechanosensors, environmental sensors and antennae, and the olfactory, visual and sound systems. Building upon this, the possibility of scaling up to intelligent behaviours can be addressed by exploiting and further transforming information from the parallel sensory motor loops, leading to the emergence of context-dependent new solutions. This could include the possibility, for example, of predicting one sensory motor loop by using another one, deciding about termination or maintenance of a given behaviour in the face of certain environmental contingencies, creating motivation-driven complex behaviours, and some forms of non-elemental learning. Once again, work on insects can answer these questions: current knowledge of insect behavioural and physiological neuroscience can help to derive models able to show also the emergence of complex behaviours. That is why most of the biologically relevant part of the book is devoted to insects, and details of the basic physiology of their nervous system are analyzed.
An insect-brain-inspired control architecture has been derived. This is basically a block diagram, in which some blocks are known in a certain detail, some others with less accuracy, and some others remain practically unknown. The latter are exactly
those involved in higher perceptual competencies: here nonlinear dynamics, plasticity and learning are involved, in ways whose details are largely unknown, leaving room for working hypotheses. Here traditional dynamical systems theory cannot help us very much; the new tools deriving from complex and self-organising systems, chaotic dynamics and bio-inspired learning strategies can be exploited, and the hard problem of how to co-ordinate multiple parallel sensory motor loops can find a plausible solution. Part II, Cognitive Models, develops upon what is presented in the previous part. Here higher cognitive capabilities are discussed; they require internal representations of the agent, contextualized within its environment and subject to its current motivation. Starting from an introduction to behaviour-based and knowledge-based approaches, a bottom-up approach for cognitive control is introduced, referring in particular to how cognitive behaviours can be reached by grounding them on reactive systems. This original approach is based on the introduction of so-called situation models, which develop upon the network structure for locomotion (Walknet) introduced in the first part of the book. According to this approach, cognition is defined as the capability of planning ahead; this cannot be separated from low-level behaviours, and is tightly related to the presence of a "body that perceives" and to the peculiar environment situation. Since the approach is mostly based on the theory of special Recurrent Neural Networks (RNNs), mathematical details are also reported to formally assess the network's capabilities in learning and representing dynamic situations. The formal treatment also includes topics that, even if not yet observed in living beings, nevertheless help in dealing with real sensors, in particular in situations with considerable noise. To this end, a probabilistic approach is introduced for modelling the sensory and motor layers. In addition to bacterial chemotaxis, a short-term memory is introduced into the sensory motor loop, leading to the concept of memotaxis. This strategy was experimentally tested on a real robot, demonstrating a great improvement in really adverse situations. The second part of the book reports other approaches to modelling reactive and precognitive behaviours, among which is the so-called weak-chaos control strategy, inspired by W. Freeman's theories based on experiments on the rabbit olfactory bulb. Perception is modeled as a process in which spatial-temporal patterns emerge as stable limit cycles from a chaotic attractor, representing the whole reservoir of all perceptual information, controlled through the sensory stage. The particular emerging cycle acts as a fingerprint of the environment situation as recorded through the sensors. A meaning is then assigned through the association of a successful action, learned using a simple reward-based method. A simple application to navigation control shows the suitability of the approach for learning general action maps in real time on board a robot prototype. Another efficient approach to building a cognitive architecture is represented by a correlation layer. This was realised using a network of spiking neurons, endowed with a simple spike-based learning algorithm. The approach allows a high-level sensor to be trained using information from a lower-level one, as the sketch below illustrates.
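A minimal sketch of such spike-based correlation learning follows (a toy pair-based STDP rule; the time constants, learning rates and spike trains are illustrative assumptions of ours, not the network used in the project): the synapse from the high-level channel (e.g. a distance sensor) is strengthened whenever its spikes reliably precede those driven by the low-level channel (e.g. a contact sensor), so the earlier cue comes to anticipate the later one.

```python
import numpy as np

def stdp_weight(pre_spikes, post_spikes, a_plus=0.05, a_minus=0.055,
                tau=20.0, w=0.0, w_max=1.0):
    """Toy pair-based STDP: potentiate when a presynaptic spike precedes
    a postsynaptic one, depress otherwise. Spike times are in ms."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:      # pre before post -> potentiation
                w += a_plus * np.exp(-dt / tau)
            elif dt < 0:    # post before pre -> depression
                w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, w_max))

# The distance sensor consistently fires ~10 ms before contact, so the
# synapse grows: distance alone eventually suffices to trigger avoidance.
pre = np.array([100.0, 300.0, 500.0])   # distance-driven spikes
post = pre + 10.0                       # contact-driven spikes
w = stdp_weight(pre, post)
```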
Examples are reported in which avoidance based on distance sensors is learned through contact sensors, or in which a visual object is recognised as a target based on lower-level, simpler sensor information. This efficient real-time approach is very useful in many cases: for example, the network could be continuously trained to cope with a serious problem like sensor degradation. An efficient solution to the homing problem is also given, exploiting the capabilities of the RNNs
introduced in the previous chapters. The nest position constitutes the equilibrium of the network, which is able to relax to this position starting from any initial condition, represented by the current pose of the robot searching for home. An excellent degree of robustness against noise and partially obscured landmarks increases the suitability of the strategy. A further capability is added to the approach: selecting reliable landmarks among many landmark candidates, and filtering out unreliable (i.e. moving) ones. The last chapter of Part II presents a high-level approach to robot perception, based on the theory of complex dynamics. Nonlinear partial differential equations, discretised on a spatial lattice, are used as a complex system in which steady-state solutions emerge from initial conditions; the latter are the signals coming from the sensory stage. The emerging solutions are thus associated with a particular environment portrait. Such solutions being equilibrium points of a dynamical system, each is associated with a particular basin of attraction (i.e. the set of all environment conditions leading to the emergence of that solution). A suitable learning at the sensor level, using ad hoc designed neurons, contributes to shaping the geometry of each basin of attraction. In this way, at the end of the learning stage, all the environment conditions that require the same action are associated with the same basin of attraction, leading to the emergence of a solution associated (through reward-based learning at the efferent stage) with that action. The emerging solutions have been shown to be formally equivalent to the so-called Turing patterns, the well-known morphogenetic patterns described by Alan Turing in 1952, which arise in spatial-temporal dynamical systems known as reaction-diffusion systems. The efficiency of the Turing pattern approach to perception was demonstrated through experiments on roving robots. Based on these results and on all the other models for the parallel sensory motor loops (sensory systems, locomotion networks, correlation layer and memory), a complete control architecture for action-oriented perception, including cognitive capabilities, is reported at the end of Chapter 7. This is called the SPARK architecture. Here the parallel sensory motor pathways, looping through the environment and acting at the lowest level as pre- and proto-cognitive behaviours, are coordinated by higher layers of increasing complexity, whose capabilities are incrementally learned, guided by a motivation describing the robot mission. The main higher layers are a correlation layer for anticipation and a representation layer, where emerging patterns are used to modulate basic behaviours. This architecture enables a robot to exploit the capabilities of the basic behaviours during the initial stages of the learning phase, to perform basic and safe actions. The higher layers incrementally allow the robot to attain more complex behaviours that better satisfy the robot motivation. At the end of the learning phase, Turing patterns embed a coded representation of the environment, together with a suitable modulation of the basic behaviours that best matches the robot mission. From this point of view, within this layer a representation of the environment is incrementally formed. The architecture introduced is in full accordance with the arguments treated in Part I of the book. In fact, the abstract (pattern-based) model built is strictly dependent on the robot mission, and so is not a general one.
Moreover, while an internal model is not needed for the realisation of the basic (reflex-based) behaviours, a more complex, context-dependent representation of the environment is needed to gain cognitive capabilities. These are incrementally built upon the basic behaviours and help the robot to solve the given mission with
ever-increasing efficiency. Also in this case the suitability of the approach is attested by a number of experimental results, in which the added value given by the representation layer is confirmed by the robot's capability to escape from being trapped in local minima, unavoidable without an intelligent, context-dependent modulation of the basic behaviours. During the thorough study that led first to the introduction of the insect brain model and then to the SPARK cognitive architecture, it was observed that a number of perception-relevant details of the insect brain are still lacking, in particular regarding the role played by the Mushroom Bodies (MB) and the Central Complex (CX). Progressively uncovering the architecture and the functional role of these neural centres will be precious for progressively refining the SPARK architecture. This is a topic of ongoing intensive research that is expected to have an impact on the capabilities of future cognitive robotic architectures. The last part of the book, Part III, Software/Hardware cognitive architecture and experiments, is related to the practical, but no less important, part of the project activities. In particular, the software and hardware architectures in which the cognitive model was implemented are reported. These include the new visual sensor/processor used for the implementation of complex visual routines. Both the architectures and the visual perceptual algorithms are reported in detail. The SPARK board, a custom hardware board based on a powerful FPGA plus additional memory and a large number of I/O pins, is also described in detail. Finally, the main actors of the project are described: the various robotic platforms that were used to implement the various building blocks that constitute the complex cognitive model. A number of experimental results are finally reported, whose related multimedia material can be found on the SPARK web page: www.spark.diees.unict.it
March 2008
Paolo Arena
Luca Patanè
Acknowledgements
The editors would like to thank all the partners who contributed their expertise to the success of the project, with professional and proactive collaboration. Sincere gratitude is expressed to the project reviewers, Prof. Gurvinder S. Virk and Prof. Ronald Tetzlaff, for their steering comments, which greatly contributed to the research activity during the 36 months of research. For the great importance he gave to the project, special thanks are addressed to the EC Project Officer, Dr. Hans-Georg Stork, for his continuous, constructive comments and encouragement, which were essential to our research project. We think that without such support from the EC we would never have been able to work together in such a way, enjoying ourselves while working hard. Special thanks are also addressed to Prof. L. Fortuna, Dean of the Engineering Faculty of the University of Catania, who, through his continuous support, encouraged the formulation and submission of the SPARK project proposal, supervising and stimulating with new ideas all the steps of the research activity during the project development.
Contents
Part I: Principles of Insect Perception

1 Perception for Action in Insects
B. Webb, J. Wessnitzer
1.1 Introduction
1.2 The Traditional View
1.3 Perception as Transformation
1.4 Closing the Loop
1.4.1 Active Perception
1.4.2 Dynamical Systems Theory and Perception
1.4.3 Dynamics and Networks
1.4.4 Further Bio-inspired Architectures for Perception-Action
1.5 Predictive Loops
1.6 Perception for Action in Insects
1.7 Basic Physiology and the Central Nervous System
1.8 Higher Brain Centres in Insects
1.8.1 The Mushroom Bodies (Corpora Pedunculata)
1.8.2 The Central Complex
1.9 Towards 'Insect Brain' Control Architectures
1.10 Conclusion
References

2 Principles of Insect Locomotion
H. Cruse, V. Dürr, M. Schilling, J. Schmitz
2.1 Introduction
2.2 Biological Systems
2.3 Sensors
2.3.1 Mechanosensors
2.3.2 Environmental Sensors
2.4 Leg Controller
2.4.1 Swing Movement
2.4.2 Stance Movement
2.5 Coordination of Different Legs
2.6 Insect Antennae as Models for Active Tactile Sensors in Legged Locomotion
2.7 Central Oscillators
2.8 Actuators
2.9 Conclusion
References

3 Low Level Approaches to Cognitive Control
B. Webb, J. Wessnitzer, H. Rosano, M. Szenher, M. Zampoglou, T. Haferlach, P. Russo
3.1 Introduction
3.2 Sensory Systems and Simple Behaviours
3.2.1 Mechanosensory Systems
3.2.2 Olfactory Systems
3.2.3 Visual Systems
3.2.4 Audition
3.2.5 Audition and Vision
3.3 Navigation
3.3.1 Path Integration
3.3.2 Visual Homing
3.3.3 Robot Implementation and Results
3.4 Learning
3.4.1 Neural Model and STDP
3.4.2 Non-elemental Associations
3.4.3 Associating Auditory and Visual Cues
3.5 Conclusion
References
Part II: Cognitive Models

4 A Bottom-Up Approach for Cognitive Control
H. Cruse, V. Dürr, M. Schilling, J. Schmitz
4.1 Introduction
4.2 Behavior-Based Approaches
4.3 A Bottom-Up Approach for Cognitive Control
4.4 Representation by Situation Models
4.4.1 Basic Principles of Brain Function
4.4.2 Recurrent Neural Networks
4.4.3 Memory Systems
4.4.4 Recurrent Neural Networks
4.4.5 Applications
4.4.6 Learning
4.5 Towards Cognition, an Extension of Walknet
4.5.1 The Reactive and Adaptive Layer
4.5.2 Cognitive Level
4.6 Conclusions
References

5 Mathematical Approach to Sensory Motor Control and Memory
M.G. Velarde, V.A. Makarov, N.P. Castellanos, Y.L. Song, D. Lombardo
5.1 Theory of Recurrent Neural Networks Used to Form Situation Models
5.1.1 RNNs as a Part of a General Memory Structure
5.1.2 Input Compensation (IC) Units and RNNs
5.1.3 Learning Static Situations
5.1.4 Dynamic Situations: Convergence of the Network Training Procedure
5.1.5 Dynamic Situations: Response of Trained IC-Unit Networks to a Novel External Stimulus
5.1.6 IC-Networks with Nonlinear Recurrent Coupling
5.1.7 Discussion
5.2 Probabilistic Target Searching
5.2.1 Introduction
5.2.2 The Robot Probabilistic Sensory-Motor Layers
5.2.3 Obstacles, Path Complexity and the Robot IQ Test
5.2.4 First Neuron: Memory Skill
5.2.5 Second Neuron: Action Planning
5.2.6 Conclusions
5.3 Memotaxis Versus Chemotaxis
5.3.1 Introduction
5.3.2 Robot Model
5.3.3 Conclusions
References

6 From Low to High Level Approach to Cognitive Control
P. Arena, S. De Fiore, M. Frasca, D. Lombardo, L. Patanè
6.1 Introduction
6.2 Weak Chaos Control for the Generation of Reflexive Behaviours
6.2.1 The Chaotic Multiscroll System
6.2.2 Control of the Multiscroll System
6.2.3 Multiscroll Control for Robot Navigation Control
6.2.4 Robot Navigation
6.2.5 Simulation Results
6.3 Learning Anticipation in Spiking Networks
6.3.1 The Spiking Network Model
6.3.2 Robot Simulation and Controller Structure
6.3.3 Spiking Network for Obstacle Avoidance
6.3.4 Spiking Network for Target Approaching
6.3.5 Navigation with Visual Cues
6.4 Application to Landmark Navigation
6.4.1 The Spiking Network for Landmark Identification
6.4.2 The Recurrent Neural Network for Landmark Navigation
6.4.3 Simulation Results
6.5 Conclusions
References

7 Complex Systems and Perception
P. Arena, D. Lombardo, L. Patanè
7.1 Introduction
7.2 Reaction-Diffusion Cellular Nonlinear Networks and Perceptual States
7.3 The Representation Layer
7.3.1 The Preprocessing Block
7.3.2 The Perception Block
7.3.3 The Action Selection Network and the DRF Block
7.3.4 Unsupervised Learning in the Preprocessing Block
7.3.5 The Memory Block
7.4 Strategy Implementation and Results
7.5 SPARK Cognitive Architecture
7.6 Behaviour Modulation
7.6.1 Basic Behaviors
7.6.2 Representation Layer
7.7 Behaviour Modulation: Simulation Results
7.7.1 Simulation Setup
7.7.2 Learning Phase
7.7.3 Testing Phase
7.8 Conclusions
References
Appendix I - CNNs and Turing Patterns
Appendix II - From Motor Maps to the Action Selection Network
Part III: Software/Hardware Cognitive Architecture and Experiments

8 New Visual Sensors and Processors
L. Alba, R. Domínguez Castro, F. Jiménez-Garrido, S. Espejo, S. Morillas, J. Listán, C. Utrera, A. García, Ma.D. Pardo, R. Romay, C. Mendoza, A. Jiménez, Á. Rodríguez-Vázquez
8.1 Introduction
8.2 The Eye-RIS Vision System Concept
8.3 The Retina-Like Front-End: From ACE Chips to Q-Eye
8.4 The Q-Eye Chip
8.5 Eye-RIS v1.1 Description (ACE16K Based)
8.5.1 Interrupts
8.6 Eye-RIS v1.2 Description (Q-Eye Based)
8.6.1 Digital Input/Output Ports
8.7 NIOS II Processor
8.7.1 NIOS II Processor Basics
8.8 Conclusion
References

9 Visual Algorithms for Cognition
Á. Zárandy, Cs. Rekeczky
9.1 Global Displacement Calculation
9.2 Foreground-Background Separation Based Segmentation
9.2.1 Temporal Foreground-Background Separation
9.2.2 Spatial-Temporal Foreground-Background Separation
9.3 Active Contour Algorithm
9.4 Multi-target Tracking
9.5 Conclusions
References

10 SPARK Hardware
L. Alba, P. Arena, S. De Fiore, L. Patanè
10.1 Introduction
10.2 Multi-sensory Architecture
10.2.1 Spark Main Board
10.2.2 Analog Sensory Board
10.3 Sensory System
10.4 Conclusion
References

11 Robotic Platforms and Experiments
P. Arena, S. De Fiore, D. Lombardo, L. Patanè
11.1 Introduction
11.2 Robotic Test Beds: Roving Robots
11.2.1 Rover I
11.2.2 Rover II
11.3 Robotic Test Beds: Legged Robots
11.3.1 MiniHex
11.3.2 Gregor III
11.4 Experiments and Results
11.4.1 Visual Homing and Hearing Targeting
11.4.2 Reflex-Based Locomotion Control with Sensory Fusion
11.4.3 Visual Perception and Target Following
11.4.4 Reflex-Based Navigation Based on WCC
11.4.5 Learning Anticipation via Spiking Networks
11.4.6 Landmark Navigation
11.4.7 Turing Pattern Approach to Perception
11.4.8 Representation Layer for Behaviour Modulation
11.5 Conclusion
References

Index
Author Index
List of Contributors
Paolo Arena, Sebastiano De Fiore, Mattia Frasca, Davide Lombardo, Luca Patanè
Department of Electrical, Electronic and System Engineering, University of Catania, I-95125 Catania, Italy
[email protected],
[email protected],
[email protected],
[email protected],
[email protected]

Barbara Webb, Jan Wessnitzer, Matt Szenher, Markos Zampoglou, Thomas Haferlach, Hugo Rosano
Institute of Perception, Action and Behaviour, University of Edinburgh
[email protected],
[email protected]

Holk Cruse, Malte Schilling, Josef Schmitz
University of Bielefeld, Department of Biological Cybernetics and Theoretical Biology, P.O. Box 100131, D-33501 Bielefeld, Germany
[email protected],
[email protected],
[email protected]
Volker Dürr
University of Cologne, Institute of Zoology, Weyertal 119, D-50931 Köln, Germany
[email protected]

Manuel G. Velarde, Valeri A. Makarov, Nazareth P. Castellanos, Yong-Li Song
Instituto Pluridisciplinar, Universidad Complutense de Madrid, Paseo Juan XXIII 1, 28040 Madrid, Spain
[email protected]

L. Alba, R. Domínguez Castro, F. Jiménez-Garrido, S. Espejo, S. Morillas, J. Listán, C. Utrera, R. Romay, C. Mendoza, A. Jiménez, Á. Rodríguez-Vázquez
AnaFocus (Innovaciones Microelectrónicas S.L.), Av. Isaac Newton 4, Pabellón de Italia, Planta 7, PT Isla de la Cartuja, 41092 Sevilla, Spain
[email protected],
[email protected]

Ákos Zárandy, Csaba Rekeczky
AnaLogic Computers Kft., Vahot u. 6, H-1119 Budapest, Hungary
[email protected],
[email protected]
Part I
Principles of Insect Perception
1 Perception for Action in Insects

B. Webb and J. Wessnitzer
Institute of Perception, Action and Behaviour, University of Edinburgh
[email protected],
[email protected]
Abstract. We review the concept of 'perception for action', contrasting the traditional view of perception as internal representation with the idea of transformation in a closed-loop system. This introduces recent approaches using active perception, dynamical systems theory, action-based agent architectures and consideration of the role of predictive loops. We then apply these ideas to insect behaviour and neurophysiology, with particular attention to higher brain centres. We propose an insect-brain control architecture for robotics.
1.1 Introduction

The idea of 'perception for action' has received wide and varied discussion. Our aim in this chapter is to present a general guide to this discussion, highlighting certain issues, and then to examine in more depth what can be learnt about these issues from insect behaviour and neuroethology. It is probably worth specifying at the outset that we do not intend to address issues of perceptual awareness, qualia or consciousness. We are interested in perception from the point of view of explaining how animals do it and how robots could use it. This in itself is one meaning of perception for action, i.e. we are interested in perception in terms of how it can be used (to control action) rather than as an end in itself, as it is commonly discussed within psychology, for example. We can think of the general problem of perception, cognition, and action of an agent, a robot or animal, in terms of a loop through the system and the environment (Fig. 1.1), where A is a 'transfer function' describing how sensory events are transformed into actions, and E a 'transfer function' describing how the motor output of the system, and any external disturbance, is transformed by the environment into new sensory inputs.

Fig. 1.1. General issue of perception for action: A is a transfer function (via the agent) from sensors to motors, and E a transfer function (via the environment) from motor output, plus external disturbances, to sensors
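To make Fig. 1.1 concrete, here is a minimal closed-loop sketch (entirely illustrative: the one-dimensional 'environment' state and the proportional agent are our own arbitrary choices): A maps the sensed value to a motor command, and E maps the motor command, plus a disturbance, back into the next sensory input.

```python
import random

def agent_A(sense):
    """Agent transfer function: sensory input -> motor output.
    Here just a proportional rule pushing the sensed value towards 0."""
    return -0.5 * sense

def environment_E(state, motor):
    """Environment transfer function: motor output (plus an external
    disturbance) -> new environmental state, hence new sensory input."""
    disturbance = random.uniform(-0.1, 0.1)
    return state + motor + disturbance

state = 5.0                    # initial environmental state
for step in range(50):         # the perception-action loop of Fig. 1.1
    sense = state              # sensors read (part of) the state
    motor = agent_A(sense)
    state = environment_E(state, motor)
```

The point of the loop view is that behaviour is a property of the whole system: changing E changes what the very same A appears to 'do'.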
1.2 The Traditional View

The traditional view in Artificial Intelligence and Cognitive Science focuses on the agent A and decomposes the 'transfer function' into perceiving → thinking → acting. The purpose of perception, thus conceived, is to deliver or update an internal model of the environment E (or some part of E) that can be manipulated to determine actions. Or, in different words, the perceptual process produces an internal state that represents E, and some process on that state will produce the appropriate action. The former description
tends to imply that the internal model is symbolic and can be manipulated by formal inference rules; the latter is more neutral with respect to the nature of the state and the processing. However, both imply that there is necessarily some central stage, between sensing and acting, where the (action-relevant) contents of the environment are represented. An issue that thus arises is whether and how the environment needs to be 're'presented within the agent. It would seem in many cases that a process on the sensory input itself could suffice to produce the appropriate action. The idea of an internal model was introduced by Craik [28] explicitly for the purpose of explaining how intelligent agents are able to act in ways that depend on more than the current sensory input, e.g. to make a plan to obtain something they currently cannot sense (Fig. 1.2). In this sense, a defining function for the model or internal representation is that it can be used 'off-line'; so a simple activation trace of previous sensory stimuli is not sufficient to count as an internal model of the environment. However, as several authors (e.g. [52, 86]) have noted, this distinctive and interesting sense in which there is a claim that brains 'represent' is not always clearly understood, because of the common use of 'representation' to refer to any internal state that causally co-varies with an external state, such as the activity of a brain neuron that is proportional to the velocity of visual motion, or a non-linear transform of the voltage level in a robot sensor. This latter 'causal' sense of representation has different explanatory force to the intentional sense of representation, in which something (the internal model) stands in for something else (the currently absent sensory stimuli). A resulting problem is the blurring of the important issue of determining which behaviours actually require internal models. It could be said that the motivation of Newell and Simon [129] in introducing their 'symbol system hypothesis' was to distinguish 'cognitive' (or intelligent) behaviours that seem to require internal models from mere 'perception-action' control¹. Following this view, one (but not the only) function of perception is to generate an internal model, for those tasks that require it.
¹ Current criticisms of this approach often elide the distinction by re-defining 'cognition' and 'intelligence' to include the most basic forms of 'perception-action' tasks, many of which can, of course, be accomplished without internal models (e.g. [6]).
[Fig. 1.2, schematic panels: A: m = f(s(t)); B: m = f(s(t), s(t−1), ...); C: m = f(s(t), i(t)); D: m = f(s(t), s(t−1), ..., i(t), i(t−1), ...); E: m' = f(s'(t), i(t)); F: m = f(s(t), s(t−1), ..., s'(t), s'(t−1), ..., i(t), i(t−1), ...)]
Fig. 1.2. A illustrates a purely reactive system, i.e. the motor output is determined entirely by the current sensory input; applying the same input always produces the same output. B illustrates a system with memory, such that the output depends on the history of sensory input (e.g. the system might show adaptation). C illustrates a system in which there are some intrinsic variables (e.g. motivational factors) whose current value, jointly with current sensory input, will determine the output. In D the system in C is dependent on the history of both sensory and intrinsic variables. So far, we do not need to introduce a representational account. In E, the system is operating 'off-line', using a stand-in s'(t) for the missing sensory input s(t) to produce an imagined output m'. In F, the internal representation, the real sensory input, and the intrinsic variables are combined to determine the output.
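The distinctions of Fig. 1.2 can be restated as function signatures. In the following schematic sketch the concrete function bodies are placeholders of ours, chosen only to make it runnable; what matters is which arguments each system needs.

```python
def reactive(s_t):
    """A: m = f(s(t)). The same input always yields the same output."""
    return -s_t

def with_memory(s_history):
    """B: m = f(s(t), s(t-1), ...). Output depends on input history,
    e.g. adaptation: respond to the change, not the level."""
    baseline = sum(s_history) / len(s_history)
    return -(s_history[-1] - baseline)

def with_intrinsic(s_t, i_t):
    """C: m = f(s(t), i(t)). Intrinsic variables (e.g. motivation)
    jointly determine the output with the current input."""
    return -s_t * i_t

def offline(s_stand_in, i_t):
    """E: m' = f(s'(t), i(t)). A stand-in s'(t) replaces the missing
    sensory input to produce an 'imagined' output m'. Only here does a
    representational account become necessary."""
    return with_intrinsic(s_stand_in, i_t)
```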
However, in practice this view became distorted in two ways: first, by treating the generation of an internal model as the only function of perception, indeed as a definition of what perceptual systems are doing [111]; and second, by often studying this problem in isolation from any particular task, hence assuming that a general-purpose and veridical model is required, i.e., the aim of perception was understood to be reconstruction of the external cause of the sensory input. In the field of computer vision, for example, a textbook definition of the perception problem [160] is solving the inverse function from the input stimulus to the environmental situation that produced it. An implicit widespread acceptance of this view is found in much of the neurophysiology of perception, where the approach to the whole problem is described in terms of decoding [152]. In reaction to this view, it is often pointed out that perceptual processing cannot be well understood in isolation from the task. This is one interpretation of what is meant by the phrase 'perception for action'. Although often presented as a 'radical' insight, in fact
much traditional cognitive science does start from the definition of what kind of model is required for a particular task, and then looks to see how the perceptual system can deliver it. Or, in other words, how the perceptual system can extract task-appropriate information from the environment. For example, the same textbook (Russell and Norvig) lists several computer vision applications and notes: "None of these applications requires extraction of complete descriptions of the environment". This task-based view can encompass both 'high-level' tasks, such as deliberative planning (where most detail of the sensory input may be irrelevant), and 'low-level' tasks, like immediate avoidance of a looming object (where again most details of the sensory input may be irrelevant). This action-relevant view of perception is often associated with the notion of 'affordances'. Gibson defined affordances as "what [the environment] offers the animal, what it provides or furnishes, either for good or ill". Affordances are thus the opportunities for action in a given situation, and Gibson proposed that these are what we perceive, rather than physical properties of objects and environments [45]. The term is now so widely used that its precise meaning is not clear. (Existing models of affordances tend to address a very limited action set, and basically reduce to the idea of using the success of some action as a means of classifying sensory input; see e.g. [26, 27], and the sketch below.) An argument that is sometimes presented, usually by those who consider high-level tasks to be important, is that because sensory input has to be used in many tasks, perception does need to deliver a 'general purpose' model of the world, hence reconstruction is the right way to understand its purpose, e.g. [198]. However this seems to miss the point, as it just puts the issue of how to get the task-relevant information back one step: it now has to be extracted from the internal world model instead of from the world. It is a valid point that we, as cognitive agents, do seem to be (sometimes) able to re-utilise internally stored sensory information for novel tasks, but the extent to which we can do so, and the extent to which any other animal can do so, is far from clear; and thus it seems unjustified to take this as a general characterisation of the basic functioning of perceptual systems.
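The parenthetical remark above, that existing affordance models largely reduce to classifying sensory input by the success of an action, can be illustrated with a toy sketch (entirely our own; the simulated 'world', the features and the perceptron learner are arbitrary assumptions):

```python
import numpy as np

# Sensory snapshots are labelled not by object category but by whether
# the action 'drive through' succeeded when it was attempted.
rng = np.random.default_rng(0)
X = rng.random((200, 4))          # past sensory snapshots (4 features)
succeeded = X[:, 0] > 0.5         # hidden ground truth of this toy world

w, b = np.zeros(4), 0.0
for _ in range(20):               # simple perceptron on action outcomes
    for x, y in zip(X, succeeded):
        if ((x @ w + b) > 0) != y:
            w += (1.0 if y else -1.0) * x
            b += 1.0 if y else -1.0

def affords_traversal(sensed):
    """'Perceive' the affordance: does this input predict success?"""
    return (sensed @ w + b) > 0
```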
1.3 Perception as Transformation A different criticism of the traditional approach is that to view perception as only, or even mostly, a problem of reconstructing the environment is to miss the essential point that the actual purpose of perception is to transform the sensory signal. The agent does not want to reconstruct the cause of the signal, but rather it wants to use the signal to produce some motor output (irrespective of whether this is a direct reaction or some distant plan). It is possible, but far from given, that building an internal model will be necessary to produce the right output. Given that action or task-relevant transformation is the purpose, we can argue that perceptual systems are constructed so as to carry out this transformation - even, that the ‘output’ of perception is action, as there is no other clear end-point of the perception process. A simple example is provided by the results of training or evolving neural networks to control robots in tasks like obstacle avoidance or discrimination of targets - the resulting networks can successfully produce appropriate actions but resist any decomposition of the network’s operation as construction and
Perception for Action in Insects
7
manipulation of an internal model, or even as ‘extracting information’ by which to choose an action [132, 82]. One aspect of perception that this viewpoint helps to emphasise is that much of the processing can occur at a very peripheral stage. For example, the physical layout of the sensors may already represent task specific demands, a principle discussed by Wehner [193] as ‘matched filtering’ in biological systems (see examples in chapter 3). Similarly, as the motor interface determines the end-point of perception, its design can be critical to understanding the process: for example it may simplify any co-ordinate transformation required from the sensors to the actuators, or eliminate some control problems (an impressive illustration of this is provided by ‘passive walkers’ [23]). Interestingly, there are several lines of evidence that suggest close involvement of the motor system in our ability to perceive [81], for example the suggestion that motor production of language is involved in recognition of phonemes [104, 105] or the ’mirror neuron’ system in which observing and performing an action appear to activate the same neurons [153]. This is also related to Gibson’s more radical views of ‘direct’ perception — that from the sensory array, invariants are extracted predicting the function or nature (i.e. the affordance) of an object. Gibson argues these invariants are extracted without any complex cognitive processing but rather that perceptual systems ‘resonate’ to the invariant structure, are ‘tuned’ to it. Invariants occur because sensory information is constrained by the way it arises during interaction with the environment, for example, the flow-fields that occur during locomotion, which can be used directly for control [163]. Tuning to such higher-order invariants in the perceptual array can help explain a wide variety of adaptive responses [19], such as catching a ball. Gibson was never specific about what actual mechanisms might be required for ‘resonance’ but the notion is suggestive of both ‘matched filtering’ and of dynamical systems theory (discussed below). As perception is used in multiple tasks and for multiple purposes, it does not make sense to think of it as a single transformation, but as multiple parallel transformations. This echoes Brooks’ [15] well known arguments for horizontal rather than vertical decomposition of complex behavioural capabilities. As usefully summarised in the critique by Kirsh [89] this view is that: 1. Behaviour can be partitioned into a set of task-oriented activities, each with their own sensing and control requirements. 2. There is a partial ordering among these, so a system can be constructed in layers from simpler to more complex activities. 3. Information is available in the world so internal models are unnecessary. 4. Smart sensors can be used to sample the relevant subset of the world. 5. A remaining hard problem is how to co-ordinate the activities. An interesting extension of this issue to higher level cognition is found in the idea that - insofar as thinking involves the construction of models for off-line reasoning those models are effectively simulations of lower-level sensorimotor capabilities. Thus, it is argued, we consistently use body-dependent analogies in even our most abstract thought [95]. This usefully suggests a route for ‘scaling up’ from basic sensorimotor competence to intelligent behaviour ([51, 29] - see section 2.5). There is currently much
8
B. Webb and J. Wessnitzer
interest in examining how language use might emerge via categorisation of sensorymotor contingencies [173]. ’Cognition is not a phenomenon which can successfully be studied while marginalising the roles of body, world and action’ [19].
1.4 Closing the Loop All the above is still only about the sensors → A → motors part of the system. A further, interrelated critique of the traditional approach is its tendency to ignore the closed-loop nature of the perceptual process [144]. For many tasks this is a sufficiently tight loop that the performance of the behaviour cannot be properly understood unless it is regarded as a whole. The paradigm case is a simple feedback system where (again) no internal model manipulation is required. But more generally, almost all real cognitive tasks depend in various ways on this loop. Examples from a high-level perspective are things like actively seeking information, and using things in the world to help thinking e.g. the use of spatial arrangements of objects to aid reasoning [90]. 1.4.1
Active Perception
Active vision is a very successful example of the application of this principle. Ballard [4] discusses in particular the advantages of a visual system that has gaze control, in allowing it to search, use specific movements to simplify processing, and to use exocentric co-ordinate frames. This can lead to some very efficient algorithms. Ballard comments “it may be that the visuo-motor system is best thought of as a very large amount of distinct special-purpose algorithms where the results of a computation can only be interpreted if the behavioural state is known”. A popular example (again originating with Gibson) is optic flow as a cue to self motion. The heading of a moving observer can be estimated by localising the ‘focus of expansion’ (FOE). The FOE is the motion in the optic array expanding out of a singular point along the direction of travel. However, optic flow needs to be distinguished from retinal flow. Eye or head movements induce visual motion on the retina. Retinal flow is thus composed of translational (body movement) and rotational (eye or head movement) elements. In [96], a large body of research from psychophysical studies analysing retinal flow is reviewed, suggesting the visual system combines a multitude of sensory signals for retinal flow analysis including: • efference copies of motor commands, • proprioceptive signals, • visual depth cues. Knowledge of depth perception is thought to be useful for separating translational and rotational flow. The motion of distant points could be used for estimating rotation whereas nearby points could be used for obtaining translational information. On the other hand, motion parallax is a strong cue for depth. Note that this process involves several loops, including internal ones (efference copies), and also several modalities. There is also evidence of ‘direct’ tuning to flow fields, for example in pigeons [205]. Van den Berg and Beintema [10, 8] describe a model combining a template tuned to pure observer rotation, with another sensitive to the derivative of the first template to rotation.
Perception for Action in Insects
9
Auditory perception also involves action, for example in the disambiguation of forward-back sound directions by head movements. Gaver [43] argued for ‘ecological acoustics’ i.e. the importance of interpreting auditory perception as the perception of events, rather than sounds. Rosenblum [158] investigated the auditory equivalent of ‘time-to-contact’ judgements [100, 190] which can be derived from simple cues without independent calculations of distance and speed. Rosenblum et al. [159] looked at the auditory affordance of ‘reachability’ of objects and found subject’s distance judgements in this natural task were much better than expected from previous studies of static auditory distance perception. Tactile perception is also a system that strongly depends on action, as the study of ‘haptics’ has shown. Far more information about object properties - such as weight, texture, consistency, temperature - are obtained from active handling than passive touch, and recognition of the function of objects is greatly improved if they can be manipulated [99]. Turvey [184] has suggested a dynamical systems approach to haptics, looking particularly at how mechanoreceptor information produced when wielding an object can act as invariants to provide size and shape information. Metta and Fitzpatrick [121] have examined how visual learning of categories in a robot can be enhanced by active manual manipulations, such as pushing objects to see how they react. O’Regan and Noe [139] propose that interactions are what constitute perception and that sensorimotor contingencies are what give perception its experiential character — for example, that vision is a mode of exploration of the world that is mediated by knowledge, on the part of the perceiver, of what [they] call sensorimotor contingencies. A sensorimotor contingency is a law or set of laws that describes the relation between self-actions and resulting changes in sensory input. According to this view, perception is an ongoing activity of exploration rather than abstraction of perceptual experience into a final interpreted percept. This view has been implemented in an approach which analyses the laws linking motor outputs and sensory inputs with the aim of discovering information about the structure of the world [145], when no information about the devices making up the body is known a priori. Sensorimotor contingencies contain properties related to the structure of the physical world and an organism’s body which is situated and embodied in the environment. The algorithm aims at discovering and subdividing these properties into parameterisable subgroups. Philipona et al. have shown that the notion of space can thus be generated with a sensorimotor rather than a purely sensory approach. Another sense in which ‘active perception’ is sometimes used is to emphasise that the neural systems involved in perception are not just passive receivers, but actively interact with the input. This is often characterised as ‘top-down’ processes that shape perception by previous knowledge and expectations. Within psychology it is sometimes described as a process of ‘hypothesis testing’ or inference [47, 154] in which different interpretations of the sensory input are (unconsciously) considered for coherence and fit to the data. More recent versions of this idea use Bayesian approaches to describe perception as a process of statistical (rather than logical) inference e.g. [88]. 
An alternative view of this issue, less dependent on notions of symbolic reasoning, is that of perception as a dynamic system with intrinsic activity that is perturbed by the input. Interestingly, this is a possible means of reconciling constructivist and Gibsonian views.
10
1.4.2
B. Webb and J. Wessnitzer
Dynamical Systems Theory and Perception
The dynamical systems view [6, 147, 7, 171] of cognition sees closing the sensorimotor loop as a paramount concern, and emphasises that this introduces real time considerations into the system. This is more fundamental than saying that the process must operate under time pressure; rather it stresses that the sensing and acting are both continuous processes so their interaction via A and E cannot (in general) be treated as a sequence of computational steps around the loop. This “Radical embodiment” [19] view suggests a different approach to traditional analyses of behaviour, drawing on dynamical system concepts such as state spaces, stability, attractors, limit cycles and chaos. As proposed in [147], the dynamical hypothesis is the claim that: cognitive agents are dynamical systems, and can be scientifically understood as such. This is presented as an alternative to the “symbol system hypothesis” of Newell and Simon, although it has rather different force, as ‘being dynamic’ is not a property that distinguishes cognitive from other systems. The point, rather, is that an computational/symbol-processing account may not suffice to explain what is really going on. There has been a rapid increase in attempts to use non-linear dynamics in the analysis of psychological data [53, 62] including EEG analysis, psychophysics [48], movement control [87, 150], language [130], memory [21], organisations [53], and development [182]. Guastello [53] suggests that several older theoretical perspectives in psychology can be interpreted in terms of dynamics, including Gestalt psychology (which focused on emergent properties in perception), and Piagetian development stages, which he describes as a selforganising process involving approach-avoidance dynamics. Gibsonian approaches to perception also fall fairly naturally within this approach. The main methodological developments in this area are the use of non-linear approaches to time-series analyses, rather than the more typical static group comparisons used in psychological statistics. Heath [2000] provides an overview of these methods, which essentially aim to recover a transfer function in the form of a polynomial, e.g. using non-linear system identification methods or gradient descent approaches. These methods have been more widely applied in analysis of neural data (reviewed in [91, 1]) but can also be used for behavioural data, e.g. reaction times. Heath notes that there is still a problem of interpreting the transfer function once it has been derived, as it may not be easy to determine the underlying mechanisms that produce it. One possibility is to investigate different models of psychological processing by comparing the transfer functions they produce with those obtained from data sets. A common emphasis in these approaches is the issue of identifying deterministic chaotic processes that might underly apparent random noise. This may be investigated by graphical methods, such as phase plots, recurrence plots, Poincar´e plots, to identify attractors, limit cycles and so on. Glazier et al. [46] describe the use of these methods within sports biomechanics, for example. Heath also describes the use of quantitative indices, i.e. attractor dimensionality and Lyapunov components. Though interesting, it is difficult to apply these methods to complex psychological processing, and as yet their use in psychology is not widespread. Kelso [87] describes behaviour as a self-organising pattern, emerging from interaction of subsystems. 
He argues that this is not merely a high level description of what could be an underlying representational mechanism, but rather points to a different
Perception for Action in Insects
11
mechanism to the usual representational account, one that could be directly implemented in brain dynamics. It suggests that perception, learning, action and memory arise as metastable spatial-temporal activity patterns, produced by cooperative interactions among neural assemblies. For example, reorganisation of topographic maps in motor cortex (after amputation) is related to the temporally correlated activity among neurons. Neural assemblies may be thus constituted and destroyed dynamically, depending on changes of sensory input, thus producing transitions between patterns of activity. The overall behaviour can thus be described by (relatively simple) differential equations; the potential function describes an attractor landscape, that alters with (small number of) control parameters. To date, this has been most successively applied to simple rhythmic motor tasks, such as gait transitions. As an example of a more cognitive task, van Rooij et al. [155] provide a dynamical system theory account of imagined action (a judgement of affordance) to account for hysteresis, enhanced contrast and critical boundaries in decision making. Based on Tuller’s [183] account of speech categorisation, they use the equation: V (x) = kx − 12 x2 + 14 x4 in which varying k produces a function with one or two minima, and the decision trajectory is taken to be a random walk on this landscape; with k dependent on the stimulus and on previous decisions. This work is characteristic of the approach, which is to find a relatively simple non-linear function to characterise behavioural trajectories and then compare the predictions of this model to actual behaviour. If successful, the collective variable V (x) is “assumed to be a non-reducible description of the task specific system”. Very similar approaches can be found in work on collective behaviours of groups of animals e.g. [30] in which a simple differential equation describing arrival/departure dynamics can be shown to predict self-organising aggregation behaviour. McFarland and Boesser [117] present a highly simplified dynamical account of the interaction of basic motivations in animals (e.g. hunger vs. thirst) to account for behaviour choice. A dynamical description of some problem domain does not necessarily account for the underlying mechanisms, i.e. neural processes, but it may well be of use to understand the problem domain itself. This criticism of the dynamical approach is that if high-level causal interactions (between system components, agent-environment, etc.) are described, explanation for underlying neural processes are omitted. The underlying mechanism may then be engineered in an appropriate fashion (evolutionary methods are one popular approach). Eliasmith [34] sees the natural role of dynamical systems theory as one of describing higher-level and temporal behaviours of connectionist networks and suggests that dynamical models and connectionist approaches should be merged in an attempt to achieve more complete explanations. 1.4.3
Dynamics and Networks
An obvious complement to the above methods of analysing psychological data using non-linear dynamics is the development of models of neural dynamics, in particular in the range of network architectures that use recurrence, in contrast to the standard feedforward architectures which do not usually close the realtime loop. There is a great deal of work in this area; the following will highlight some examples, with particular focus on those that have been used in perception for action applications, and without
12
B. Webb and J. Wessnitzer
attempting cover the many theoretical studies of dynamic network behaviours that have been undertaken in the last two decades. Hopfield nets [77], one of the best known recurrent net architectures, use full interconnectivity of the units. Used in pattern recognition tasks, a partial input activating the net will result in it settling in an attractor state that represents the completed input, i.e. acting as an associative memory (where Hebbian learning is used to store the memories). Many variant architectures for associative memory have been constructed, including bidirectional associative nets [92, 137] which have two interconnected layers; Boltzmann machine (statistical rather than deterministic change of weights to avoid settling into local minima). Li and Dayan [103] discuss the potential advantages of using non-symmetric connections between units (in particular distinguishing inhibitory and excitatory units) which is more biologically plausible than the standard symmetric connectivity. They argue this has a particular advantage for selective amplification of particular signals, through the dynamics of delayed inhibitory feedback. Another well known architecture was introduced by Elman [36] using recurrent connections to an extra hidden layer and using backpropagation methods. This can be generalised to the idea of multiple copy layers that record previous instants in time, and be trained using back-propagation through time, or real time recurrent learning. These architectures have been used extensively for sequence learning, particularly in language. Hochreiter and Schmidhuber [69] have introduced an augmented version that attempts to overcome some limitations of this architecture, i.e. difficulty of learning over long intervals for sequence learning. Hoshino [78, 79] uses reciprocally connected networks for features and objects to investigate mechanisms of recognition and attention. The system operates in a ‘randomly itinerate state’ with recognition expressed as a dynamic phase transition to a basin attractor that has been learnt through Hebbian processes. The attractor dynamics can be sufficient for attention focus without explicit top-down signals. The same method can be used as model of multimodal interaction. The method is only tested on simple simulated input, but tries to reproduce psychophysical characteristics of crossmodal facilitation. Tani and Ito [181] propose a simple architecture in which the lower level is a recurrent neural net that receives inputs and generates motor commands, and encodes multiple sensory-motor reflexes or primitives. External ‘control neurons’ connect to all the neurons in the lower level, and their modulation shifts the network from one primitive to another. These control neurons are then embedded in a higher level recurrent network that generates appropriate motor sequences [140]. These levels are (loosely) compared to spinal cord, brainstem and cortex. Grossberg and Carpenter [50, 18, 17] have presented a series of models based on ‘Adaptive Resonance Theory’ (ART - essentially bidirectional nets) and used them to model various aspects of perceptual psychology. They argue that this system can “can self-organize, self-stabilize, and self-scale a sensory recognition code in response to an arbitrary, possibly infinite, list of binary input patterns” and provides a complementary mechanism to error based learning. 
The basic idea is that when top-down and bottom-up signal patterns match, positive feedback leads to amplification and prolongation of the activation sufficient to allow adaptive learning. Mismatch activates an orienting system
Perception for Action in Insects
13
that resets the top-down expectation, to either find a match, or recruit new cells to learn the new information. Another view is interest in synchrony as a neural processing mechanism, in particular that large scale synchrony may enable global influences on local processing. Engel et al. [37] review some of the evidence for this view: “We discuss recent evidence that predictions about forthcoming events might be implemented in the temporal structure of neural activity patterns. Specifically we focus on spatiotemporal patterns of ongoing activity that translate the functional architecture of the system and the prestimulus history into dynamic states of anticipation”. One proposal, advanced particularly by Freeman [170, 42] is that the critical mechanism for dynamic neural computation is ‘control of chaos’. Based particularly on multielectrode studies of activity patterns in the olfactory bulb, he proposed that it showed a base state of chaotic activity that bifurcates, due to sensory input, to a selected limit cycle attractor. This behaviour can be intepreted as the emergence of an attractor that ”enslaves” the cortical system by acting as an ”order parameter” [55] or the ”emergence of order out of chaos” [151]. Analysing electroencephalographic signals (EEG) from extracranial and intracranial measurements, the focus is on spatial patterns of amplitude modulation of EEG waves in the gamma spectrum (20-80 Hz). These patterns can switch rapidly (due to the chaotic base state) and reflect not simply the stimuli features, but relate to attention, expectancy and learning factors controlled by the limbic system. Freeman argues that as adaptive self-organising systems: “Nervous system dynamics is a self-organized process constrained by the requirement that the system anticipate and incorporate the immediate consequences of its own output within the larger constraints of regulating its well-being and the long-term optimisation of its chances for survival. This is subsumed in J. J. Gibson’s [1979] theory of “affordances”. He thus takes this to be the mechanism by which brains create ‘meaning’ from sensory information. Korn and Faure [91] review the current status of experimental evidence for chaotic brain processes. Some modelling approaches to ‘control of chaos’ also reviewed in [62]. Harter and Kozma [60] have adopted this approach for a robot navigation task. They propose a ‘KIV’ architecture based on the limbic system, consisting of four subsystems: what (sensory/perception); where (orientation/memory); why (goals/drives/value systems); and how (motor/actions). Each involves a similar form of aperiodic dynamics, e.g. to form cognitive maps in the where system, or to control goal-oriented motor output. Eliasmith and Anderson [35] present a systematic view of how different attractor networks can be constructed from spiking elements, and how the dynamics can be controlled. This is a very promising approach that has been applied to several biological control problems, such as integrating multiple sensory sources to obtain an estimate of velocity for path integration [25]. Another interesting development in this area is the idea of combining a recurrent/ chaotic/non-linear network that responds to input with a linear readout system that can learn to classify the patterns. Two versions of this architecture were developed independently as Echo-state networks [85] and Liquid state machines [109]. In the latter, a recurrent microcircuit of spiking neurons acts as an unbiased analog memory. The
14
B. Webb and J. Wessnitzer
resulting ‘liquid state’ x(t) is transformed by a readout map into some target output f (x(t)). Only the readout synapses need to be trained. 1.4.4
Further Bio-inspired Architectures for Perception-Action
In the previous section we discussed several architectures that have been proposed as general accounts of how biological systems might obtain perception for action. Here we summarise a number of alternative approaches that also draw strongly on the ideas of action-oriented perception and closed loop control, and claim to be based (more or less closely) on biology. Brooks’ subsumption architecture was briefly characterised above. This was an extremely influential architecture in robotics, proposed as an alternative to the (then) standard paradigm of a central, task-independent representation [14]. Brooks argued that building more complex behaviours by gradually adding layers of sensorimotor loops was also more biologically justified (as following evolution) although in general a close connection to any particular biological system was not claimed. In the earlier roving robots, biology inspired the tasks rather than the implemented control structures; in more recent work on the COG project [16], more specific attempts to model particular brain systems have been attempted, for example in hand-eye co-ordination and visual attention mechanisms. Emerging quite directly from Brooks’ approach and associated ‘animat’ research has been further investigation of the problem of behaviour co-ordination, which in the subsumption architecture is basically a pre-wired priority ordering between the layers. Typically this is formulated as the ‘action selection’ problem [185], in which the current goals of the system, combined with the current sensory situation, are used to determine which behaviour to activate. Maes [110] provided an early example of a method for flexible action selection and sequencing that did not require explicit planning, but rather was “an emergent property of activation inhibition dynamics amongst the different actions the agent can take”. Different actions are linked as successors, predecessors, or conflictors in a non-hierarchical network, and activation from external situation variables or internal goal variables spreads through the network, with the first action to reach threshold being executed. Though appealing, this approach has not been very successful in application to real robot control. Another class of ‘action selection’ mechanisms uses fusion of behaviours rather than arbitration between behaviours. A well known method, of which there are now several variants, was introduced by Rosenblatt and Payton [157]. Behaviours are represented at a ’fine-grained’ level that corresponds fairly directly to motor commands. All these elements evaluate their relevance to the current situation and goals, and produce a weighted vote for which action to execute. This allows the system to find compromises between behaviours, rather than one behaviour always dominating another. Tyrrell [185] suggested an extension of this architecture, with a hierarchy from simple motor commands, to higher level actions, which he demonstrated in simulation to produce higher competence than Rosenblatt’s original scheme or Maes approach. Another well-known approach to ‘action fusion’ is the ‘potential field method’, used, for example, by [2] in his ‘AuRA’ architecture. Each behaviour produces a vector expressing its desired motion, and these outputs are combined using vector summation and normalisation.
Perception for Action in Insects
15
More recently Gurney, Prescott and Redgrave [149] have proposed an action selection model closely based on the neural architecture of the mammalian basal ganglia. One population of cells encodes the salience of possible actions, due to sensory or other inputs, and the output structures gate the candidate actions by a reduction of inhibition of the winning channels. They note several properties of this brain structure that will contribute to its efficacy as a switching mechanism, e.g. local inhibition that enhances the most active channels, dopamine modulation of selectivity, and a feedback signal that scales the output relative to the number of active channels. They have demonstrated the use of this mechanism for switching between five different behaviours in a robot. Meyer et al. [122] have combined this model with rat-inspired navigation, and also use the dopamine signal to perform actor-critic reinforcement learning of the saliencies, in their ‘Psikharpax’ project to build an artificial rat. A rather different approach to the problem is presented by Sch¨oner, who uses dynamical systems methods [164]. Behaviours are described by differential equations relating a particular behavioural variable (e.g. speed or heading) to its time derivative via a function that describes the task constraints. Typically these describe attractors or repellors in the state space, which are combined additively (possibly with different weights); and the final behaviour is determined by the evolution in time of the solution. An interesting principle used in this method is that the relative domination of different behaviours is determined by differences in the timescales of their dynamics. A more radical approach is suggested by Seth [169], who points out that it may be mistaken to assume a one-toone association of externally observable behaviours - which are a joint product of the agent, environment and observer - with mechanisms within the agent. He demonstrates that a Braitenberg style vehicle with several direct sensorimotor loops can be evolved to produce many features of apparent ‘action selection’ including suitable balance of persistence and switching, prioritisation, opportunism, etc. Arbib has presented an influential view of ‘action oriented perception’ in the form of a network of competitive or co-operative schemas. ‘Schema’ is not well defined, but essentially means a routine, process or control system that has an action-specific function, such as recognising prey or avoiding a predator. Schemas perform neural rather than symbollic computation, and in Arbib’s work they are closely based on biological brain circuits. They can operate in parallel - “there is no one place in the brain where an integrated representation of space plays the sole executive role in linking perception of the current environment to action”; and new schemas arise by modulating existing schemas. This approach has been used in simulation and robot models of frog attraction, avoidance and detour behaviours. Arbib also distinguishes ‘instinctive’ and ‘reflective’ behaviours, suggesting only the latter requires explicit representation, and that there may be an important shift in brain architecture between the special purpose machinery that supports the former and the general purpose machinery (e.g. cortex) that supports the latter. Another interesting architecture has been proposed by Verschure [186]. A reactive control layer implements pre-wired reflexes. 
Simple sensory events (unconditioned stimuli, US) activate internal state (IS) neurons, which activate motor neurons to perform an unconditioned response (UR). There is a winner-take-all mechanism at the motor stage that selects between actions, with pre-wired interactions between internal
16
B. Webb and J. Wessnitzer
state neurons to resolve conflicts. An adaptive layer is then added, such that the internal state adds the effects of the US and more complex sensory events (conditioned stimuli, CS). The latter inputs have an adaptable weight matrix, that changes according to the difference between the actual and the predicted state of the CS. As a result, the system learns to react to CS. A third ‘contextual control’ layer maintains a short-term memory of CS events and their associated actions, and stores the sequence in long-term memory when a goal is achieved. Then matching of the stored sensory events to current sensory events enables LTM to take over behavioural control if a segment match exceeds a threshold. An important result of this extra control layer is that it changes the behavioural feedback experienced by the system (by repeating successful behaviours) and this indirectly leads to improvement of the perceptual learning of the CS stimuli.
1.5 Predictive Loops So far we have described two kinds of ‘loopiness’ in perception-action systems: the loop through the world, and internal, recurrent connections. It is worth noting here that, in a simple feedback loop system, from the system’s point of view there is no distinction between A and E, because the system cannot see its motor output other than by any effect it has on its sensors. We can simply say the pattern of neural activity at one point in the loop (e.g. the sensors) gets transformed via A and E into new activity (von Foerster, 1985, cited in [146]). If there is a disturbance, the system has no way of distinguishing whether that disturbance occurred in A or E. To provide more sophisticated behaviour than feedback requires additional loops. Loops of particular interest, as they hint towards more cognitive levels of behaviour, are those that allow the agent to predict future sensory input. In some sense, even the principal of matched filtering involves prediction, in that the sensors are adapted (through evolution) to respond directly to task relevant stimuli from the environment. Similarly, a simple memory trace can be thought of as a minimalist prediction of what the next input should be if nothing changes in the world. However, more interesting are the predictions that one sensory input can make about another; and the predicted sensory consequences of actions. In principle, as discussed below, either of these forms of prediction could be hard-wired; but they also correspond to two forms of learning, i.e. CS-US associations, and reinforcement learning. If instead of considering the system as a single loop, but instead (as discussed above) considering multiple parallel loops from sensing, via agent, motors, and environment back to sensing, it is rarely the case that the various sensory inputs are independent. Another way to look at this is that other sensors provide the ‘context’ for any particular sensor. There is ample evidence that the activity of one sensory channel can affect other channels, e.g. by gating, over-riding, or modulating the response. This can happen at many levels in the processing, e.g. at the sensory periphery, centrally, or at the motor output. A simple predictive example is insect antennae: contact of an obstacle by the antennae is likely to be followed by contact with the foot. Woergoetter and Porr [200] suggest that by implementing a principle of disturbance minimisation, one feedback loop (the reflexive lifting of a foot in collision) can be subsumed by another loop, which causes
Perception for Action in Insects
17
the action to occur in advance (i.e. raising the foot in response to the antennae contact). Their learning scheme has the advantage over standard reinforcement learning approaches that the system does not require a ‘actor-critic’ distinction, or prior definition of what is ‘reward’ and ‘punishment’ but simply produces whatever associations will minimise the disturbances within the feedback loops. A complementary form of prediction is provided by the notion of ‘efference copy’ or in control theoretic terms a forward model. In this case, there is an additional internal loop, from the motors to the sensors, which emulates the transfer function E. Such a loop can be used in several ways to make control more sophisticated: • for internal feedback since sensory feedback control may be too slow for rapid movements, • priming the sensory system, anticipate and inhibit the sensory consequences of movements, • provide an estimate feedback and compare to actual sensory feedback and thereby providing appropriate reinforcement for motor learning, and • state estimation combining the model’s prediction with sensory feedback, thereby allowing the evaluations of sensorimotor counterfactuals. This concept is playing an increasing role in neuroscientific explanations of motor control, context dependent action, and cognition [203, 202]. Several authors have proposed internal modelling as a unifying framework for understanding cognition, e.g. [20, 68, 51, 29]. It provides a re-grounding for the idea of internal models discussed above, as elaborated by Grush [52]. He suggests that during overt sensorimotor interaction with the environment, internal neural circuits receive efference copies and run in parallel, learning to predict the expected sensory feedback. These neural models may then be used off-line to estimate or simulate the outcomes of various possible actions. Although much of the interest in such systems is associated with this possible ‘off-line’ use, for example as an explanation of mental imagery (e.g. [197], [93]), it should be noted that the functions discussed above occur in ongoing behaviour, and would seem to be needed for competent control even in ‘minimally cognitive’ systems such as insects or more-than-reactive robots.
1.6 Perception for Action in Insects So far we have provided a theoretically oriented review of the issue of perception for action. The aim has been to show that the notion goes well beyond the idea that perceptual processes deliver action-relevant internal representations of the external environment. Rather, it emphasises that perception is a process of transformation, within closed loops. In particular: • sensory systems have evolved, developed or adapted to respond directly to task relevant stimuli; • there are multiple parallel connections between sensors and motors, rather than a central location where ‘perception’ occurs; • that interaction, i.e. a closed loop of acting and sensing, is critical and changes the nature of the problem to be solved;
18
B. Webb and J. Wessnitzer
• dynamical systems theory has been advanced as an alternative explanatory framework for behaviour to input-output computation; • that recurrent neural architectures have inherent dynamics that are shaped by the sensorimotor context - perception is not a feed-forward process; • prediction between sensory systems and from motor output to expected sensory input are critical mechanisms for obtaining more complex behaviours (rather than action selection). Although many of these ideas have received wide discussion, to date this is almost exclusively in the context of human/vertebrate behaviour and neuroscience. However, if the principals hold, they should be equally discoverable in invertebrate systems. An insect’s sensorimotor system is a product of evolution, development, learning and adaptation which work on different timescales to improve the animal’s behavioural performance. Thus, sensorimotor systems are well adapted to the particular ecological niche of animals. Insects, compared to vertebrates, are relatively simple yet highly successful in their ecological niches displaying complex behaviours supported by sophisticated sensorimotor systems. Insects exhibit rich and interesting behavioural repertoires yet have a central nervous system (CNS) and genome of a size that can be studied thoroughly and systematically. General principles of perception for action may therefore be potentially graspable in these systems before they can be understood in vertebrates. Many insects are capable of impressive orientation, homing and navigation tasks. These abilities are important for a large behavioural repertoire, e.g. the search for food, mates, nests, or reproduction sites. The need for navigation varies from species to species and many different behaviours have been observed. Some may orient themselves to specific stimuli, such as food odours or the auditory call of a potential mate. However, stimuli associated with food or reproduction are not necessarily of immediate importance to homing or generally spatial navigation which take advantage of more accurately identifiable landmarks or reference points, i.e. celestial cues including the sun or sky polarisation patterns. Many species of bees and wasps have been observed to fly in ever expanding circles around their new hive or nest identifying objects and their relation with the hive or nest opening. A particularly impressive and interesting example of navigation of the digger wasp Ammophila is described in [3]. Atkins notes: . . . Ammophila females make similar reconnoitering flights after completing a nest. However, these wasps disguise the nest opening before they leave and return with their prey on foot, often over a considerable distance. This remarkable behaviour of a digger wasp returning to her nest site with a caterpillar prey is reported in more detail in [38]: The Ammophila proceeded straight down between two rows of peas with her caterpillar slung beneath her. When she reached the end of the garden, about twenty feet away, she made a right angle and followed a plow furrow for another five feet. Then she ascended the far side of the furrow and entered a patch of weeds where, with scarcely a hesitation, she dropped her caterpillar and began to dig.
Perception for Action in Insects
19
Such impressive behaviour is of obvious significance to cognitive scientists and neurobiologists. How does the wasp find her way back to nest after long hunting and foraging excursions? Does she relate landmarks and the relative distances between them? A wide range of insects navigate using path integration. On leaving some significant starting point, an animal updates an accumulator constantly adjusting an estimate of distance and direction from the origin, a homing vector. The capability for path integration is mostly studied in the desert ant Cataglyphis, e.g. desert ant navigation [194] and ant odometry in the third dimension [201]. These insects are renowned for returning to their nests after long and tortuous foraging excursions. This behavioural feature is explained by path integration, or dead reckoning, particularly important in unfamiliar terrain. However, the accuracy of path integration decreases with distance travelled due to cumulative errors [13]. In familiar surroundings, navigation by using landmarks may override path integration. Path integration then becomes a back-up strategy that is used when navigation by landmarks fails [22]. Landmark navigation can increase the accuracy of navigation by breaking excursions into segments defined by landmarks. Pattern recognition in insects, as discussed in [64], could mean that sensory recognition of landmarks is a process of linking visual memories to certain motor or behavioural patterns. Could it be a process of retino-topic, image-based or feature-based, matching as proposed in [207] or is there evidence that these animals build cognitive maps [9]? The aim of the following sections is to survey and give an overview of insect nervous systems, and to consider what general common features or principles emerge, following Weiner’s guideline that ‘generality must be discovered, it cannot simply be declared’ [195].
1.7 Basic Physiology and the Central Nervous System An insect’s external anatomy can be divided into the head, the thorax, the abdomen and appendages. Regarding the internal anatomy, only the central nervous system is of concern in this report. Like many arthropods, insects have a nervous system with a dorsal brain and a ventral nerve cord consisting of segmental ganglia extending through the thorax to the abdomen. Ganglia are large aggregates of neurons connected by commissures. The insect brain consists of two head ganglia: the supraesophageal and the subesophageal ganglia. The supraesophageal ganglion consists of three major parts: the proto-, the deuto- and the tritocerebrum. Each part controls a certain spectrum of the insect’s activities. The protocerebrum is largely associated with vision; innervating the compound eyes and the ocelli (simple eyes only capable of perceiving light, darkness and movement, i.e. changes in light intensity). The protocerebrum also hosts higher brain centres, such as the mushroom bodies and the central complex. The deutocerebrum processes information collected by the insect’s antennae. It consists of two distinct neuropils, the antennal lobe (AL) and the antennal mechanosensory and motor centre (AMMC) [75]. The tritocerebrum is said to innervate the visceral system which controls the internal organs within the abdominal and thoracic cavities. The subesophageal ganglion innervates the insect’s mouthparts and salivary glands. In the thorax, pairs of thoracic ganglia control wing and leg movements, usually one for each pair of legs. In the abdomen, abdominal ganglia innervate a large number of sensory receptors located
20
B. Webb and J. Wessnitzer Protocerebrum
Supraesophageal ganglion Deutocerebrum Tritocerebrum Subesophageal ganglion Thoracic ganglia
Commissures
Abdominal ganglia
Fig. 1.3. The nervous system of a generalised insect. The brain (supraesophageal ganglion) consists of proto-, deuto- and tritocerebra and is posteriorly connected to the subesophageal ganglion which in turn connects to the ventral nerve cord linking the thoracic and abdominal ganglia. From [196] (Fig. 1).
on the insect’s back end, such as cerci, genitalia, etc. Insect nervous systems exhibit decentralisation: many overt behaviours (e.g. feeding, locomotion, mating,...) are, to varying extents, controlled by ganglia instead of the brain. The brain may stimulate or inhibit activities in ganglia. Fig. 1.3 illustrates the nervous system of a generalised insect. Insects have hundreds of thousands of receptor cells, varying substantially in numbers both within and across various sensory modalities. An animal’s main sensory receptor cells include photoreceptors, mechanoreceptors and chemoreceptors. Photoreceptors to consider here include compound eyes and occelli, found in a variety of insect species. Compound eyes are made up of many facets, called ommatidia. Smell (olfaction) and taste (gustation) rely on chemoreceptors detecting molecules. Mechanoreceptors provide tactile information and proprioceptive cues to the organism, and modified mechanoreceptors are responsible for the transduction of auditory signals. Each of these modalities will be further discussed in later chapters; the following outlines some important common properties. Within sensory modalities receptor cells may be divided into subtypes. Such subtypes are tuned to different qualities, more narrow bands of the stimulus spectrum. For example, olfactory perception combines information received from receptor cells
Perception for Action in Insects
21
differentially sensitive to specific airborne molecules. Similarly, visual perception is integrating information from receptor cells differentially sensitive to specific wavelengths of light. In the honeybee Apis mellifera, the majority of receptor cells (ultraviolet, blue, green receptors) are said to have secondary sensitivities at certain wavelength regions where the other receptor types absorb maximally [119]. This multiplicity of receptor cells may have several reasons as discussed in [31]: (a) extend the range and resolution of sampling by increasing surface area; (b) extend the range of sensors that are discriminating subsets of stimulus qualities, intensities or temporal dynamics; (c) increase sensitivity and accuracy of resolution through response summation; (d) increase robustness of system despite damage to individual sensors; (e) compensate for nonfunctioning developmental stages of sensors; (f) enable the formation of specialised central processing centres with different behavioural functions. Receptor cells are flexible; neuromodulation can, for example, influence a receptor cell’s sensitivity [11] and consequently this flexibility in sensitivity throughout a nervous system can have implications on an organism’s behaviour [12]. In insects, there are profound differences in neural organisation associated with both sex [175] and social experience [66]. For a good introductory review, see [24]. Receptor cells have evolved into intricate filters focusing on particular information right at the periphery of the sensory system (e.g., [94, 172]). In this way, signal processing is often done mechanically through specific setups of receptor cells and an introductory account on these ‘matched filters’ can be found in [193]. A receptor cell transduces the stimulus energy into a change in its membrane potential. These signals encode the detection of specific stimuli and their intensities through adaptation and amplification; elementary signal transduction modules exhibit computational primitives, including: threshold operations, low- and high-pass filtering, saturation, flip-flop behaviour, etc. At the circuit level, common properties, such as divergence and convergence, feedback, receptive fields forming sensory maps, can be observed; signal transductions serve specific purposes, filtering input in space and time in order to extract functionally important stimulus features. Insect visual systems exemplify this ability to filter out task-specific information from the environment.
1.8 Higher Brain Centres in Insects The insect’s protocerebrum contains many complex and little understood neuropils (anatomically distinct dense networks of axons and dendrites). Two architecturally distinctive neuropils, the mushroom bodies and the central complex, have been widely investigated for their apparent role in controlling more complex behaviours. The next sections will review their anatomy and neuroarchitecture, and their possible behavioural functions. 1.8.1
The Mushroom Bodies (Corpora Pedunculata)
1.8.1.1 Anatomy and Connectivity The mushroom bodies are a pair of large and distinctively (mushroom-)shaped neuropils in the insect brain. In honeybees, they together contain around 340,000 neurons (the
22
B. Webb and J. Wessnitzer
bee brain contains approximately 960,000 neurons occupying 1 mm3 [120]). Though smaller in many other insects, they still comprise a significant proportion of the total brain neurons. The mushroom bodies in most insects have a similar and characteristic neuroarchitecture: namely, a tightly-packed, parallel organisation of thousands of neurons, called Kenyon cells. The mushroom bodies are further subdivided into several distinct regions: the calyces, the pendunculus, and the lobes. The dendrites (inputs) of the Kenyon cells have extensive branches in the calyces2 , and the axons (outputs) of the Kenyon cells run through the pendunculus before bifurcating, diverging and extending to form the typical α - and β -lobes3 . Synaptic interconnections between Kenyon cell axons have been reported [65]. The structure is thus summarised by [165]: • ‘the input and the output of activity due to extrinsic fibres [i.e., neurons with cell bodies external to the mushroom bodies] takes place in separate neuropil areas (calyces and lobes)’. • ‘The Corpora pedunculata represent a multi-channel system of intrinsic fibres [i.e., neurons with cell bodies internal to the mushroom bodies] with respect to the afferent [input] and efferent [output] pathways of extrinsic fibres’. Note also that there is considerable divergence (1:50) from a small number of extrinsic input neurons onto the large number of Kenyon cells, and considerable convergence (100:1) from the Kenyon cells onto extrinsic output neurons4. In most insect species, the mushroom bodies receive significant olfactory input. Interneurons relay odourant information from the olfactory receptors, via the antennal lobe in the deutocerebrum, to the mushroom body calyces. In the cockroach and locust, tactile and gustatory input have also been reported. Some Hymenoptera, e.g., bees and wasps, have substantial connections from the optic lobes on both sides of the brain to the calyces of both mushroom bodies [49]. The absence of direct visual input in other species does not necessarily mean visual information is not integrated. Rather, it may be preprocessed in other areas of the protocerebrum before feeding into the mushroom bodies [107]. Neurons in the calyx of the cockroach have been reported to respond to antennal movement; these signals may be proprioceptive in origin [126]. There is also evidence of possible input from circadian clock neurons in Drosophila [67]. Sensory afferents exhibit ordered connections with the calyces of the mushroom bodies. For example, terminals of input neurons onto the calyces of the cockroach and of the honeybee exhibit function-specific distribution patterns [131] and the calyces show a modality-specific topography [74]. At least some of this ordering is preserved by the Kenyon cells. Mushroom bodies in honeybees are subdivided into several sensory compartments [127] and modular structures in the mushroom bodies of the cockroach are reported in [124]. A recent Drosophila study has shown that Kenyon cells are subdivided regularly regarding their gene expression, suggesting extensive parallel compartmentalisation of function [156]. 2 3 4
The calyces may be a later evolutionary addition of a specific input structure to a more primitive mushroom body architecture [177]. There are exceptions to this description of the typical Kenyon cell e.g., not all Kenyon cells bifurcate [39]. These ratios are estimates based on data from [98].
Perception for Action in Insects
23
The activities of extrinsic neurons in the output regions of the mushroom bodies can be classified as (i) sensory, (ii) movement-related and (iii) sensorimotor [135]. A large majority exhibit responses to multiple sensory stimuli and therefore confirm the suggestion that the mushroom bodies participate in sensory integration. For example, in the cricket a single neuron was reported to respond to auditory, visual and wind stimuli [162]. These neurons also exhibit changes in activity and sensitivity levels which can be related to specific stimuli and stimuli combinations. Electrical recordings of single neuron and field potential activities also indicate that processing of sensory stimuli can last from seconds to sometimes minutes after stimulation [65]. This is likely to involve recurrent feedback: it has been shown that extrinsic neurons provide direct recurrent feedback to the calyces [206] and indirect feedback via other areas [127]. However, despite some examples of motor-related activity in extrinsic output neurons (see below), there is little evidence of direct connections to descending neurons. The majority of output pathways target a variety of other protocerebral regions [84, 101, 102]. A good account of the output connections of the mushroom bodies to other parts of the brain can be found in [177]. 1.8.1.2 The Mushroom Bodies Are Secondary Pathways Evidence suggests that mushroom bodies do not form the only sensorimotor pathway for any modality. Most sensory areas in the brain have direct connections to premotor areas, and thus to descending neurons. The same sensory areas also supply afferents to the mushroom bodies (either directly or indirectly) as part of a secondary pathway as shown in Fig. 1.4 [126]. For example, in most insects, the olfactory system consists of two parallel pathways as depicted in Fig. 1.5. The main olfactory pathway is the connection between the antennal lobe and the superior lateral protocerebrum (medial and outer cerebral tract). The mushroom bodies are part of a secondary pathway (inner cerebral tract). Convergence in the superior lateral protocerebrum of antennal lobe outputs and mushroom body outputs provides a locus for comparison of information processed in these neuropils [102]. Similarly to the olfactory pathway, a secondary pathway via the mushroom bodies has been reported for the conditioning of the proboscis extension reflex in the bee [57]. A direct connection from the antennal lobe to the motor neurons of the mouthparts has a secondary parallel pathway branching off via the mushroom bodies. Genetic and developmental impairment of mushroom bodies confirm the hypothesis of parallel secondary pathways. Impaired Drosophila can perceive, but cannot remember, odours and their overall behaviour was described as ‘normal’ [65]. 1.8.1.3 Functional Roles of the Mushroom Bodies The size, distinctive architecture, range of inputs and outputs, and parallel pathway arrangement raise obvious questions about the functional role of the mushroom bodies in insect behaviour. We will here discuss the main current hypotheses. A role in pattern recognition The parallel channels of intrinsic Kenyon cells perform specific processing functions on the mushroom body input. 
1.8.1.3 Functional Roles of the Mushroom Bodies

The size, distinctive architecture, range of inputs and outputs, and parallel pathway arrangement raise obvious questions about the functional role of the mushroom bodies in insect behaviour. We will here discuss the main current hypotheses.

A role in pattern recognition

The parallel channels of intrinsic Kenyon cells perform specific processing functions on the mushroom body input. Heisenberg [63] speculated that the large divergence of sensory afferents onto the calyces of the mushroom bodies may form matrices where each element (Kenyon cell) could be specific for unique relationships of excitations in the primary sensory channels feeding the mushroom bodies, indicating a particular stimulus situation.
Fig. 1.4. Evidence suggests that mushroom bodies do not form the only sensorimotor pathway for any modality. Sensory areas in the brain have direct connections to premotor areas, and thus to descending neurons in motor areas. The same sensory areas also supply afferents to the mushroom bodies (either directly or indirectly) as part of a secondary pathway. This schematic overview of the connections is drawn after [126]. From [196] (Fig. 3).
The dendrites of extrinsic (output) neurons, invading the lobes, vary in size and arborisation patterns. Thus each imposes different characteristic filter parameters on signal transmission from the Kenyon cells, depending on how many and which of the Kenyon cells it interacts with [49]. In support of this idea, some extrinsic output neurons have been shown to respond with changes in sensitivity and activity levels to a certain modality only when also presented with another [162]. The topographical relationships between efferent dendrites extending across Kenyon cell axons, and how these axons represent afferent projections in the calyces, are discussed in [178]. Kenyon cells may also act as delay lines [165, 127], which could provide a mechanism for recognising temporal patterns in the input. Schuermann [165] suggests that “the peculiar form of the Corpora pedunculata due to the pedunculi and lobes is . . . a special arrangement of neurons for transposition of synaptic distances into sequential activation or inhibition of nerve fibres”. Some extrinsic neurons have been reported to increase or decrease their activity depending on some preceding stimulus [162]. Note that pattern recognition also has a role in learning, as further discussed below - sparse coding of sensory inputs is said to “make neurons more selective to specific patterns of input, hence making it possible for higher areas to learn structure in the data” [138]. More generally, Ito et al. [84] caution that the learning and memory deficiencies observed after disrupting or ablating the mushroom bodies could indicate that their critical role is in preprocessing of signals (such as detecting complex spatiotemporal patterns) for subsequent learning, rather than that they are themselves the central site in which learning and memory take place.
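Heisenberg’s ‘matrix’ idea and the sparse-coding argument can be illustrated with a toy calculation: project a dense input vector through a sparse random connection matrix and keep only the most strongly driven cells active, so that each active cell signals one particular combination of input channels. This is our own minimal sketch; the population sizes are chosen for illustration, not taken from anatomy.

```python
import numpy as np

rng = np.random.default_rng(1)

N_PN, N_KC, K_ACTIVE = 50, 2000, 20   # illustrative sizes, not anatomical counts

# Each Kenyon cell samples a sparse random subset of projection neurons, so its
# activity signals one particular combination of input channels.
W = (rng.random((N_KC, N_PN)) < 0.1).astype(float)

def kenyon_code(pn_activity):
    drive = W @ pn_activity
    # Keep only the K most strongly driven cells active (sparsening).
    thresh = np.partition(drive, -K_ACTIVE)[-K_ACTIVE]
    return (drive >= thresh).astype(float)

odour_a, odour_b = rng.random(N_PN), rng.random(N_PN)
code_a, code_b = kenyon_code(odour_a), kenyon_code(odour_b)
overlap = (code_a * code_b).sum() / K_ACTIVE
print(f"active KCs: {int(code_a.sum())}, overlap between odours: {overlap:.2f}")
```

Because only a small fraction of cells is active for any input, two similar inputs yield codes that overlap far less than the inputs themselves, which is what makes such a representation useful for downstream discrimination and learning.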
Fig. 1.5. Hypothesised integration stages of odour perception with other sensory modalities (redrawn from [102]). Odours are represented by spatiotemporal patterns in the glomeruli of the antennal lobe. The inner antennocerebral tract projects to the calyces and continues into the superior lateral protocerebrum, whereas the medial and outer antennocerebral tracts project directly into the superior lateral protocerebrum. In the mushroom bodies, odours are placed into a multimodal context and efferents relay this information into the superior lateral protocerebrum. There, antennal lobe and mushroom body outputs converge, and it is speculated that the relevance of odours is assessed with respect to the multimodal context, and that they are integrated and discriminated before feeding into premotor areas.
A specific example of how the mushroom bodies perform sophisticated recognition is given by studies of the olfactory pathway in locusts [97, 143, 142]. Different odours produce distinctive spatio-temporal patterns in the antennal lobe. Laurent and Perez-Orive hypothesise that the Kenyon cells help disentangle the spatio-temporal codes by operating as coincidence detectors selective for correlations in the input spike trains. Kenyon cells receive direct excitatory input from antennal lobe projection neurons, but also indirect inhibitory inputs from the same neurons via lateral horn interneurons, arriving shortly after the excitation. Thus the integration time for the Kenyon cells is limited to short time windows [41], making them highly sensitive to precise temporal correlations. The suggestion is that while antennal lobe processing supports basic olfactory learning, more subtle odour discrimination tasks require the mushroom bodies [58]. Evidence from bees supports this interpretation: bees with disrupted mushroom bodies were capable of distinguishing dissimilar odourants, but could not distinguish similar ones [174].
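The effect of delayed feedforward inhibition on the integration window can be reproduced in a few lines with a leaky integrator: each excitatory input is cancelled by matched inhibition a few milliseconds later, so the potential only builds up appreciably when inputs arrive nearly together. The time constants below are illustrative choices, not values fitted to locust data.

```python
import numpy as np

def kc_response(spike_times, tau_m=5.0, inh_delay=3.0, dt=0.1, t_max=50.0):
    """Leaky integrator driven by excitation, with matched feedforward
    inhibition arriving inh_delay ms later (illustrative parameters)."""
    t = np.arange(0.0, t_max, dt)
    drive = np.zeros_like(t)
    for s in spike_times:
        drive[t >= s] += 1.0              # excitatory step from an input spike
        drive[t >= s + inh_delay] -= 1.0  # delayed inhibition cancels it
    v = np.zeros_like(t)
    for i in range(1, len(t)):
        v[i] = v[i - 1] + dt * (-v[i - 1] / tau_m + drive[i - 1])
    return v.max()

# Near-coincident spikes summate within the short effective window...
print(kc_response([10.0, 11.0]))
# ...whereas the same two spikes arriving 8 ms apart barely do.
print(kc_response([10.0, 18.0]))
```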
A role in integrating sensory and motor signals

Although most discussion of ‘integration’ in the mushroom bodies focuses on the combination of sensory inputs, it should be recalled that they also receive recurrent feedback, and that extrinsic neurons show motor-related responses. In the cockroach Periplaneta americana, some mushroom body extrinsic neurons are reported to exhibit activities for 100-2000ms preceding the onset of locomotion [136]. This is much earlier than activity seen in descending neurons, which usually precedes movement by only 10-200ms. The timing also precludes the response being due to feedback from proprioceptive sensors during movement. Okada et al. [135] report extrinsic neuron responses which are selective to the directions of turning behaviours. These researchers propose that the mushroom bodies participate in the integration of sensory stimuli and motor signals, possibly playing a role in initiating and maintaining motor action, or patterning motor output [125]. Supporting evidence is that the mushroom bodies play an important role in the termination of active walking phases in Drosophila [112]. However, ablations of the mushroom bodies did not prevent spontaneous locomotory behaviour; hence it is thought that an indirect pathway involving the mushroom bodies converges with more direct pathways for hierarchical integration and modulation of behaviour, perhaps including the central complex, discussed below. Furthermore, extrinsic mushroom body neurons of cockroaches have been reported to discriminate self-administered antennal grooming from externally imposed antennal stimulation [125]. A fascinating possibility is that the mushroom bodies are more generally involved in integrating efference copies with reafferent signals, thus enabling insects to learn to discriminate self-stimulation from environmental stimulation [192].

Neural substrates for learning

It is widely acknowledged that the mushroom bodies play some important role in associative learning and memory, with evidence from a variety of insect species, insect behaviours, and experimental manipulation methods. For example, mushroom body ablation has been reported to impair short-term and long-term memory of courtship conditioning in Drosophila [116]; genetic mutants with structural α-lobe defects have shown deficiencies in long-term memory [141]. The mushroom bodies of the cockroach Periplaneta americana have been linked to visual place memory [126]. There is a positive correlation between the volume of the mushroom bodies and sophistication of social behaviour in Hymenoptera (c.f., [161]), and mushroom body volume increases are linked to behavioural development [134], e.g., learning the location of the hive when commencing foraging [40].
However, olfactory learning has been the main focus of investigations to date. Kenyon cells show striking structural plasticity, shedding their fibres and growing new ones on various occasions during an insect’s lifetime [63]. In [44], evidence suggesting the mushroom bodies in Drosophila are the site of a memory trace is reviewed and evaluated. Gerber et al. [44] elaborate on criteria for a memory trace (presence of neural plasticity in the mushroom bodies, the sufficiency and necessity of this plasticity for memory, and whether memory is abolished when input or output to the mushroom bodies is blocked during testing) and conclude that localising the olfactory associative memory trace to the Kenyon cells of the mushroom bodies is a reasonable working hypothesis. Indeed, the Kenyon cells of the Drosophila mushroom bodies have been identified as a major site of expression for a number of “learning” genes (for a review, see [33]). According to Dubnau et al. [33], Hebbian processes underlying olfactory associative learning may reside in Kenyon cell dendrites. However, synaptic plasticity underlying olfactory memory has also been located in the synapses connecting Kenyon cells to extrinsic neurons [168]. Based on the mushroom bodies’ functional anatomy, particular odours are assumed to be represented by specific subsets of Kenyon cells. For any odourant to become a predictor of a given reinforcing event (e.g., sucrose reward), the output synapses of particular sets of Kenyon cells should be modified such that extrinsic mushroom body neurons could then mediate a conditioned response (e.g., approach or avoidance). In [115], the response of one particular extrinsic mushroom body neuron, Pe1, was studied extensively in the context of non-associative and associative learning. It was shown to change response in various conditioning paradigms, and hypothesised to be an element for short-term acquisition of an associative olfactory memory in the honeybee. In the honeybee, the role of appetitive reinforcement has been shown to have a neural substrate, an identified neuron called VUMmx1 located in the subesophageal ganglion [56, 57, 120, 118], which arborises into the antennal lobe, the mushroom bodies and the lateral horn. Stimulation of this neuron has the same effect as presentation of a sucrose reward for classical conditioning of the proboscis extension reflex [56, 57]. In the fruitfly, the dorsal paired medial (DPM) neurons arborise throughout the mushroom body lobes. In genetic studies these neurons have been shown to play a modulatory role (said to provide negative reinforcement) in olfactory learning and memory formation [188, 189].
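The working hypothesis just described (sparse Kenyon cell odour codes whose output synapses onto extrinsic neurons are strengthened when a value signal, such as VUMmx1 activity, coincides with presynaptic activity) amounts to a three-factor learning rule. The following toy implementation is our own; the population size, learning rate and stand-in odour codes are arbitrary choices.

```python
import numpy as np

N_KC = 200
w = np.zeros(N_KC)            # KC -> extrinsic-neuron output synapse weights

def sparse_code(seed, k=10):
    """Stand-in for a Kenyon-cell odour representation: k active cells."""
    idx = np.random.default_rng(seed).choice(N_KC, size=k, replace=False)
    code = np.zeros(N_KC)
    code[idx] = 1.0
    return code

odour_plus, odour_minus = sparse_code(10), sparse_code(11)

LR = 0.5
for _ in range(5):
    # Pair one odour with a reward signal (VUMmx1-like), the other without.
    for code, reward in ((odour_plus, 1.0), (odour_minus, 0.0)):
        w += LR * reward * code   # three-factor rule: activity x value signal

print("response to rewarded odour:  ", w @ odour_plus)    # grows with training
print("response to unrewarded odour:", w @ odour_minus)   # stays at zero
```

After training, the extrinsic-neuron readout responds to the reinforced odour but not to the unreinforced one, which is the minimal behaviour required for a conditioned response.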
1.8.2 The Central Complex
While the exact function of the mushroom body is not certain, it is clear that it has major roles in spatiotemporal sensory processing and learning. By contrast, the role of the central complex remains rather more elusive, but seems to be largely concerned with (pre-)motor processing, control of locomotion, and possibly path integration.

1.8.2.1 Anatomy and Connections

The central complex has a midline-spanning position in the insect brain and a highly regular neuroarchitecture. The common structure of the central complex among arthropod species is reviewed and discussed in [108]. It is situated between the two brain hemispheres and consists of a group of four distinct but interconnected neuropils: the protocerebral bridge, the upper division of the central body (in Drosophila also termed the fan-shaped body), the lower division of the central body (in Drosophila also termed the ellipsoid body), and the noduli.
These neuropils are connected via columnar interneurons. The neuroarchitecture is defined by 16 columns, 8 in each brain half, and a characteristic connection pattern between the two brain hemispheres, as shown in Fig. 1.6, which has been found in all species studied so far [74]. The central complex is connected to many other protocerebral regions (but only very few connections to the mushroom bodies have been reported, c.f., [84, 114]). The protocerebral bridge is considered the main input region [59], with connections from visual regions particularly dominant in locusts and honeybees. Recent research identified memory traces for visual features in the central body of Drosophila [106]. Innervations by ocellar interneurons have been reported in a number of insect species (c.f., [70]), and also inputs from the polarisation vision pathway [74]. Although significant visual input has been shown in a number of insect species, blind insects also have pronounced central bodies, which suggests that vision is not the only sensory modality processed by the central complex. Evidence suggests that mechanosensory input is provided to the ellipsoid body from the ventral nerve cord via the lateral accessory lobes in the locust Schistocerca gregaria [72]. Detailed anatomical studies of the central complex suggest that the anatomy of the central body is well-suited to overlay inputs (received via the protocerebral bridge) from both brain hemispheres [59, 108]. Insects that perform sophisticated asymmetric leg movements (e.g., weaving, comb-building) appear to have a large or elaborate central complex [176]. The lateral accessory lobes, as shown in Fig. 1.6, are major output targets of the central complex and interact with the ventral nerve cord through ascending and descending neurons; there is also feedback from the lateral accessory lobes to the central body (or fan-shaped body) [72]. Neurons from both lateral accessory lobes were also reported to be reciprocally connected in the silk moth Bombyx mori [123].

1.8.2.2 Functional Roles of the Central Complex

Almost two decades ago, it was proposed that the central complex, with its precise left-right fiber geometry, functions as a well-balanced system of inhibitory and excitatory interactions between the two brain halves [70]. Behavioural and genetic studies of Drosophila have associated the central complex with functions related to higher locomotor control, including initiation and modulation of behaviour and controlling the persistence of behaviours; an excellent review can be found in [179]. It has been proposed that the neurophysiological mechanisms responsible for generating and controlling such highly organised locomotor activity imply a ‘higher decision centre’ organising behavioural activity [113].

Producing behavioural activity patterns

Neural activity in the central complex has been linked to the expression of behaviour. Some neurons in the central complex of Drosophila have been reported to represent thoracic motor patterns [71]. Activity changes in the connections between the lateral accessory lobes and the central body were linked to initiation and termination of flight in Schistocerca gregaria [72].
Distinct activity patterns (measured by staining methods) for different behavioural situations were reported in [5], not only for flight and walking, but also for different visual stimuli. Bausenwein et al. reported that activity labeling did not reflect the intensity of sensory stimulation, but rather distinct activity patterns for different (visually controlled) behavioural situations, and suggested that these can therefore be thought of as central representations of behavioural activity.

Locomotor coordination and regulation

Central complex mutants exhibit a range of locomotion defects. Mutant fruitflies with a disrupted protocerebral bridge were unable to fly and showed abnormal walking and turning behaviours. The protocerebral bridge, and connections from the protocerebral bridge to other regions of the central complex, were reported necessary for maintaining, but not for initiating, locomotor activity [114]. The protocerebral bridge is involved in regulating and optimising walking speed by controlling step length [179]. In Drosophila, the central complex has generally been found to up-regulate walking activity, whereas the mushroom bodies have been shown to down-regulate or suppress locomotor behaviour [112].

Orientation

The overlay of signals in this mirror-symmetrical neuropil has been shown to play an important role in goal-directed motion. Mutant flies with partially interrupted fan-shaped and ellipsoid bodies along the midline of the brain were unable to compensate for existing body-side asymmetries, whereas flies with an intact central complex could deal with such asymmetries [179]. These mutant flies were unable to approach targets in a straight line. Furthermore, flies with an intact central complex continue to move towards a target despite the target becoming temporarily invisible and/or a second target distracting the flies [180]. However, some central complex mutations (ellipsoid body and/or fan-shaped body) cause flies to quickly lose their bearings. The inability of mutants to maintain a bearing towards a target also suggests a role of the central complex in resolving conflicts between sensorimotor pathways competing for behavioural expression.

A centre for path integration

The central complex has recently been proposed as a promising candidate for a centre of path integration. Vitzthum et al. [187] have identified neurons in the central complex of the locust Schistocerca gregaria sensitive to polarised light - polarised skylight patterns are an important stimulus providing compass information [194] for many insects. Additional visual inputs to the central complex (e.g., estimates of distance or rotation from optical flow), given its apparent importance in controlling locomotory behaviour, strengthen the argument for a role of the central complex in path integration [73]. Some inputs to the protocerebral bridge are said to originate from the accessory medulla, the circadian pacemaker in the brains of cockroaches and flies [76, 67]. Orientation using celestial cues requires adjustment for their positional changes over the day; thus these circadian inputs may serve time compensation [74]. Note that such a role in path integration is indeed quite consistent with the idea that the central complex is a higher locomotion control centre.
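Functionally, path integration only requires accumulating heading (e.g., a polarisation-compass bearing) times distance into a running home vector. The sketch below shows this functional computation only; the columnar neural implementations proposed in [61, 199, 128] distribute the same quantity over rows of direction-tuned elements.

```python
import numpy as np

def integrate_path(steps):
    """steps: iterable of (heading_radians, distance) pairs, e.g. a compass
    bearing from skylight polarisation plus an odometric distance estimate."""
    position = np.zeros(2)
    for heading, dist in steps:
        position += dist * np.array([np.cos(heading), np.sin(heading)])
    return -position          # home vector: points back to the start

outbound = [(0.0, 3.0), (np.pi / 2, 4.0)]   # 3 units east, then 4 north
print(integrate_path(outbound))             # -> approximately [-3, -4]
```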
[Fig. 1.6 depicts the following flow: sensory input → protocerebral bridge → central body (upper division) → central body (lower division) → lateral accessory lobes → descending and ascending neurons (to and from the thoracic ganglia).]
Fig. 1.6. Schematic diagram showing the neuroarchitecture of the central complex (redrawn after [73]). The anatomy of the central complex, with regular columnar overlay between sides, is well-suited to coordinate inputs from both brain halves. From [196] (Fig. 4).
1.9 Towards ‘Insect Brain’ Control Architectures

Based on the preceding information about insect nervous systems, we here propose a general ‘insect brain’ control architecture for obtaining adaptive behaviour in robots. Our basic motivation for making this proposal is that insects work - they have a range of competences that far exceeds current robot capabilities. It therefore seems sensible to take a close look at insect designs when devising solutions to problems in robotics. This is not a new suggestion, but to date the focus has been on specific sensorimotor systems, rather than on the overall organisation or integration of behaviours. This has led to the largely mistaken view that insect control mechanisms can be simply described (sometimes simply dismissed) as a collection of reactive behaviours, and that we must look to mammalian nervous systems, or other engineering approaches such as hybrid control, if we want to ‘scale up’ to complex robot behaviour. Like many mistaken views, it has an element of truth, in that the perceptual systems of insects did not evolve to build an internal model of the environment for general action, but rather evolved for particular tasks relevant to the animal and its particular ecological niche. Thus, as discussed in section 2.3, there are many parallel domain-specific sensorimotor pathways. These pathways form the basis of our proposed architecture, and there are already a number of successfully implemented robot systems based on such pathways that we can draw upon. Behavioural decomposition and distributed control are features of the nervous systems of insects (and also of vertebrates, c.f., [148]) that resemble the behaviour-based approach to robot control. Behaviour-based systems (in particular Brooks’ subsumption architecture) aim at achieving higher levels of competence by adding behavioural modules incrementally.
[Fig. 1.7 schematic, showing converging and diverging tracts between visual, tactile and chemosensory inputs, the mushroom bodies (premotor integration and discrimination, with an efferent copy), the central complex, the ganglia, and proprioceptive input.]
Fig. 1.7. An outline of the proposed model is drawn schematically. Reflexive (domain-specific) sensorimotor pathways form the basis of this architecture. For illustration purposes, three direct pathways are shown for different modalities. Sensorimotor pathways for other modalities could be added (or, for a robot architecture, substituted). Indirect secondary pathways (mushroom bodies) achieve context generalisation, associative learning, and modulate reflexive sensorimotor pathways. Additional central coordination (central complex) for integration and persistence of behaviours is necessary and may also be used for path integration. Recurrent connections may serve short-term memory (in both mushroom bodies and the central complex). Efferent copies and forward models will be used to modulate sensory processing according to expected reafference. From [196] (Fig. 5).
It has been noted that these architectures often fail to scale well as layers are added. In fact, looking at the insect brain, rather than finding ever more layers of ‘behavioural modules’, there appear instead to be two qualitatively different ‘association areas’ (the mushroom bodies and the central complex) that act to modulate the direct sensorimotor loops in several important ways.
We thus propose that the next element of our architecture will be an indirect secondary pathway, which forms a parallel route for sensory inflow. This can be used to place information from various sensory modalities into context, to improve on reflexive behaviours by learning to adapt to and anticipate reflex-causing stimuli, or to learn to substitute one stimulus for another in guiding action. There is good evidence that the mushroom bodies play such a modulatory role in the insect brain, and thus their neural architecture provides a model for designing an associative memory capable of multimodal sensory integration and of modulating underlying reflexes in a heterarchical, context-dependent manner. Mathematical descriptions of the ionic currents and the resulting computational properties of Kenyon cells (the intrinsic mushroom body neurons) of the honeybee have already been developed [83, 204]. The mapping of sensory neurons onto Kenyon cells shows high divergence, suggesting a form of sparse coding, quite possibly serving the recognition of unique relationships in primary sensory channels or domain-specific sensorimotor loops (as discussed in section 1.8.1.3). Recently, system models of the mushroom bodies for odour classification based on this characteristic neuroarchitecture have been proposed [80, 133]. However, this is not simply a feedforward system but involves multiple recurrent connections that are likely to be essential to its function, perhaps providing a mechanism for short-term memory (and resonating sensory events). Associative memory and learning also benefit from value systems, which in insects appear to be implemented in a small number of neurons with many connections throughout the brain. These signal salient sensory events (such as appetitive reward) and are involved in the plasticity of the mushroom bodies. These value systems report activity in reflex loops, helping to establish useful CS-US (conditioned-unconditioned stimulus) relationships. The third element of our proposed architecture is inspired by the central complex, which forms another parallel processing stream superimposed on more direct sensorimotor loops. The central complex has a remarkable neuroarchitecture, and many of its functions in coordinating locomotory behaviour are well described. Central integration of sensory and motor signals is needed to coordinate behaviours, to overlay signals from the two brain hemispheres, and to maintain or switch between behaviours. The central complex may provide inspiration for solving problems of conflict resolution between behavioural modules competing for control of a limited set of effectors. In vertebrate neuroscience, the basal ganglia have been associated with this problem of ‘action selection’ [54] - the central complex might have a similar function in insects. The central complex may also be involved in path integration, which requires the association of several modalities, in particular polarised light or other compass sensing, and visual or proprioceptive measures of distance. Some computational models of neural networks capable of path integration [61, 199] share features with the central complex, in particular rows of neural elements with specific columnar and global connection patterns [128].
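As an illustration of what ‘action selection with persistence’ could mean computationally, the following sketch picks the behaviour with the highest salience but grants the currently active behaviour a small bonus, producing hysteresis rather than rapid dithering between options. This is a generic selection scheme of our own devising, not a model of central complex circuitry.

```python
def select_action(saliences, current=None, persistence_bonus=0.2):
    """Pick the behaviour with the highest salience; the bonus for the
    currently active behaviour yields persistence. Values are illustrative."""
    best, best_val = None, float("-inf")
    for name, s in saliences.items():
        if name == current:
            s += persistence_bonus
        if s > best_val:
            best, best_val = name, s
    return best

saliences = {"forward walk": 0.6, "turn left": 0.55, "groom": 0.1}
current = select_action(saliences)                 # -> 'forward walk'
saliences["turn left"] = 0.7
# Stays with 'forward walk' (0.6 + 0.2 > 0.7): switching needs a clear winner.
print(select_action(saliences, current=current))
```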
The final and critical loop of our architecture is provided by efference copies of motor commands, which can in principle provide predictions of expected sensory events. This is vital contextual information for distinguishing external disturbances from reafference.
Such mechanisms play a role in priming sensory systems to anticipate and inhibit the sensory consequences of movements. In this way, an estimate of feedback can be compared to actual sensory feedback and thereby provide appropriate reinforcement for learning. State estimation can combine the model’s prediction with sensory feedback, thereby allowing the evaluation of sensorimotor counterfactuals. Neural correlates of reward predictors exhibit different reward-related responses depending on whether rewards are expected [120]. Evidence suggests that similar reward-like reference signals also exist in mammalian brains [166, 167]. It is a very interesting question to determine the extent to which accurate prediction - which implies the use of forward models [192] - actually occurs in insect systems, and whether this might be an additional function implemented in the mushroom bodies or other specific neural centres.
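The core computation is simple to state: subtract the forward model’s prediction of reafference from the actual sensory signal, and what remains is the externally caused component, which can also serve as a learning signal. A minimal sketch follows, with a deliberately trivial (linear) forward model standing in for whatever mapping the nervous system implements.

```python
def forward_model(motor_command, gain=1.0):
    """Hypothetical forward model: predicts the sensory consequence
    (reafference) of a motor command; here simply a linear mapping."""
    return gain * motor_command

def exafference(sensed, motor_command):
    # Subtracting the efference-copy prediction leaves the externally
    # caused component of the sensory signal.
    return sensed - forward_model(motor_command)

motor = 0.8
self_generated = forward_model(motor)   # what the movement should feel like
external_push = 0.3
sensed = self_generated + external_push
print(exafference(sensed, motor))       # -> 0.3, the disturbance alone
```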
1.10 Conclusion

Insects solve difficult behavioural tasks with miniature brains. We have reviewed the current state of neurobiological research on insect nervous systems to identify essential elements of their control architecture. Although much remains uncertain and speculative, some interesting general features nevertheless emerge. In particular, parallel direct sensorimotor loops (or ‘behaviours’) are supplemented by specific brain areas that serve integrative functions relating to context-dependence, learning, and smooth coordination. Moreover, these areas have distinctive neural architectures, preserved across all insect species, that suggest particular ways in which these functions might be implemented. Attempting to copy these systems should be productive for robot research; and should also contribute to extending biological understanding of the organisation of behaviour in insects [191, 32].
References

1. Abarbanel, H., Rabinovich, M.: Neurodynamics: nonlinear dynamics and neurobiology. Current Opinion in Neurobiology 11, 423–430 (2001)
2. Arkin, R.: Integrating behavioral, perceptual, and world knowledge in reactive navigation. Robotics and Autonomous Systems 6, 105–122 (1990)
3. Atkins, M.D.: Introduction to insect behavior. Macmillan Publishing Co., Inc., New York (1980)
4. Ballard, D.: Animate vision. Artificial Intelligence 48, 57–86 (1991)
5. Bausenwein, B., Mueller, N., Heisenberg, M.: Behavior-dependent activity labeling in the central complex of Drosophila during controlled visual stimulation. Journal of Comparative Neurology 340, 255–268 (1994)
6. Beer, R.: A dynamical systems perspective on agent-environment interaction. Artificial Intelligence, 173–215 (1995)
7. Beer, R.: Dynamical approaches to cognitive science. Trends in Cognitive Science 4(3), 91–99 (2000)
8. Beintema, J., van den Berg, A.: Heading detection using motion templates and eye velocity gain fields. Vision Research 38, 2155–2179 (1998)
9. Bennett, A.: Do animals have cognitive maps? Journal of Experimental Biology 199, 219–224 (1996)
10. van den Berg, A., Beintema, J.: Motion templates with eye velocity gain fields for transformation of retinal to head centric flow. NeuroReport 8, 835–840 (1997)
11. Birmingham, J.: Increasing sensor flexibility through neuromodulation. Biological Bulletin 200, 206–210 (2001)
12. Birmingham, J., Tauck, D.: Neuromodulation in invertebrate sensory systems: from biophysics to behavior. Journal of Experimental Biology 206, 3541–3546 (2003)
13. Bisch-Knaden, S., Wehner, R.: Local vectors in desert ants: context-dependent landmark learning during outbound and homebound runs. Journal of Comparative Physiology 189, 181–187 (2003)
14. Brooks, R.: Intelligence without reason. In: Proceedings of IJCAI 1991 (1991)
15. Brooks, R.: Intelligence without representation. Artificial Intelligence 47, 139–159 (1991)
16. Brooks, R.: From earwigs to humans. Robotics and Autonomous Systems 20(2-4), 291–304 (1997)
17. Carpenter, G., Grossberg, S.: Adaptive Resonance Theory, pp. 87–90. MIT Press, Cambridge (2003)
18. Carpenter, G.A., Grossberg, S.: The art of adaptive pattern recognition by a self-organizing neural network. Computer 21(3), 77–88 (1988)
19. Clark, A.: Embodiment: from fish to fantasy. Trends in Cognitive Sciences 3(9), 345–351 (1999)
20. Clark, A., Grush, R.: Towards a cognitive robotics. Adaptive Behavior 7, 5–16 (1999)
21. Clayton, K., Frey, B.: Inter- and intra-trial dynamics in memory and choice. In: Nonlinear Dynamics in Human Behavior. World Scientific, Singapore (1996)
22. Collett, T., Collett, M.: Path integration in insects. Current Opinion in Neurobiology 10, 757–762 (2000)
23. Collins, S., Ruina, A., Tedrake, R., Wisse, M.: Efficient bipedal robots based on passive-dynamic walkers. Science 307, 1082–1085 (2005)
24. Comer, C., Robertson, R.: Identified nerve cells and insect behavior. Progress in Neurobiology 63, 409–439 (2001)
25. Conklin, J., Eliasmith, C.: A controlled attractor network model of path integration in the rat. Journal of Computational Neuroscience 18(2), 183–203 (2005)
26. Cos, I., Hayes, G.: Behaviour control using a functional and emotional model. In: Proceedings of the 7th Conference on the Simulation of Adaptive Behaviour. The MIT Press, Edinburgh (2002)
27. Cos-Aguilera, I., Hayes, G., Canamero, L.: Using a SOFM to learn object affordances. In: Proceedings of the 5th Workshop on Physical Agents (WAF 2004), Girona, Catalonia, Spain (2004)
28. Craik, K.: The nature of explanation. Cambridge University Press, Cambridge (1943)
29. Cruse, H.: The evolution of cognition - a hypothesis. Cognitive Science 27, 135–155 (2003)
30. Deneubourg, J.L., Lioni, A., Detrain, C.: Dynamics of aggregation and emergence of cooperation. Biol. Bull. 202(3), 262–267 (2002)
31. Derby, C., Steullet, P.: Why do animals have so many receptors? The role of multiple chemosensors in animal perception. Biological Bulletin 200, 211–215 (2001)
32. Delcomyn, F.: Insect walking and robotics. Annual Review of Entomology 49, 51–70 (2004)
33. Dubnau, J., Tully, T.: Gene discovery in Drosophila: new insights for learning and memory. Annual Review of Neuroscience 21, 407–444 (1998)
34. Eliasmith, C.: Computation and dynamical models of mind. Minds and machines 7, 531–541 (1997)
35. Eliasmith, C., Anderson, C.: Neural engineering - computation, representation, and dynamics in neurobiological systems. The MIT Press, Cambridge (2003)
36. Elman, J.: Finding structure in time. Cognitive Science 14(2), 179–211 (1990), http://www.isrl.uiuc.edu/~amag/langev/paper/elman90findingStructure.html
37. Engel, A., Fries, P., Singer, W.: Dynamic predictions: oscillations and synchrony in top-down processing. Nature 2, 704–716 (2001)
38. Evans, H.E.: Wasp farm. Anchor Natural History Books. Anchor Press/Doubleday and Company, New York (1973)
39. Fahrbach, S.: Structure of the mushroom bodies of the insect brain. Annual Review of Entomology 51, 209–232 (2006)
40. Fahrbach, S., Giray, T., Farris, S., Robinson, G.: Expansion of the neuropil of the mushroom bodies in male honeybees is coincident with initiation of flight. Neuroscience Letters 236, 135–138 (1997)
41. Farivar, S.: Cytoarchitecture of the locust olfactory system. Ph.D. thesis, California Institute of Technology (2005)
42. Freeman, W.: A neurobiological theory of meaning in perception. I. Information and meaning in nonconvergent and nonlocal brain dynamics. International Journal of Bifurcation and Chaos 13(9) (2003)
43. Gaver, W.: What in the world do we hear? An ecological approach to auditory source perception. Ecological Psychology 5, 1–29 (1993)
44. Gerber, B., Tanimoto, H., Heisenberg, M.: An engram found? Evaluating the evidence from fruit flies. Current Opinion in Neurobiology 14, 737–744 (2004)
45. Gibson, J.: The ecological approach to visual perception. Houghton Mifflin, Boston (1979)
46. Glazier, P., Davids, K., Bartlett, R.: Dynamical systems theory: a relevant framework for performance-oriented sports biomechanics research. Sportscience 7 (2003)
47. Gregory, R.: Perceptions as hypotheses. Philosophical Transactions of the Royal Society of London, B 290, 181–197 (1980)
48. Gregson, R.: n-Dimensional non-linear psychophysics. Erlbaum, Mahwah (1992)
49. Gronenberg, W.: Subdivisions of hymenopteran mushroom body calyces by their afferent supply. Journal of Comparative Neurology 436, 474–489 (2001)
50. Grossberg, S.: Neural networks and natural intelligence. MIT Press, Cambridge (1988)
51. Grush, R.: The emulation theory of representation: motor control, imagery, and perception. Behavioral and Brain Sciences (in press, 2003)
52. Grush, R.: In defense of some ‘cartesian’ assumptions concerning the brain and its operations. Biology and Philosophy 18, 53–93 (2003)
53. Guastello, S.: Nonlinear dynamics in psychology. Discrete dynamics in nature and society 6, 11–29 (2001)
54. Gurney, K., Humphries, M., Wood, R., Prescott, T., Redgrave, P.: Testing computational hypotheses of brain systems function: a case study with the basal ganglia. Network: Computation in Neural Systems 15, 263–290 (2004)
55. Haken, H.: Synergetics: an introduction. Springer, Berlin (1983)
56. Hammer, M.: An identified neuron mediates the unconditioned stimulus in associative olfactory learning in honeybees. Nature 366, 59–63 (1993)
57. Hammer, M.: The neural basis of associative reward learning in honeybees. Trends in Neuroscience 20(6), 245–252 (1997)
58. Hammer, M., Menzel, R.: Multiple sites of associative odor learning as revealed by local brain microinjections of octopamine in honeybees. Learning and Memory 5, 146–156 (1998)
59. Hanesch, U., Fischbach, K.F., Heisenberg, M.: Neuronal architecture of the central complex in Drosophila melanogaster. Cell Tissue Research 257, 343–366 (1989)
60. Harter, D., Kozma, R.: Navigation and cognitive map formation using aperiodic neurodynamics. In: From Animals to Animats 8: The Eighth International Conference on the Simulation of Adaptive Behavior (SAB 2004), pp. 450–455. MIT Press, Cambridge (2004)
61. Hartmann, G., Wehner, R.: The ant’s path integration system: a neural architecture. Biological Cybernetics 73, 483–497 (1995)
62. Heath, R.: Nonlinear dynamics: techniques and applications in psychology. Lawrence Erlbaum Associates, Mahwah (2000)
63. Heisenberg, M.: Central brain function in insects: genetic studies on the mushroom bodies and central complex in Drosophila. In: Neural Basis of Behavioural Adaptations. Fortschritte der Zoologie, vol. 39, pp. 61–79. Gustav Fischer Verlag, Stuttgart (1994)
64. Heisenberg, M.: Pattern recognition in insects. Current Opinion in Neurobiology 5, 475–481 (1995)
65. Heisenberg, M.: What do the mushroom bodies do for the insect brain? An introduction. Learning and Memory 5, 1–10 (1998)
66. Heisenberg, M., Heusipp, M., Wanke, C.: Structural plasticity in the Drosophila brain. Journal of Neuroscience 15, 1951–1960 (1995)
67. Helfrich-Foerster, C.: Neurobiology of the fruit fly’s circadian clock. Genes, Brain and Behavior 4, 65–76 (2005)
68. Hesslow, G.: Conscious thought as simulation of behaviour and perception. Trends in Cognitive Sciences 6(6), 242–247 (2002)
69. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9, 2451–2471 (1997)
70. Homberg, U.: Structure and function of the central complex in insects. In: Arthropod brain: its evolution, development, structure and functions, pp. 347–367. Wiley, NY (1987)
71. Homberg, U.: The central complex in the brain of the locust: anatomical and physiological characterisation. In: Elsner, N., Roth, G. (eds.) Brain-Perception-Cognition. Thieme, Stuttgart (1990)
72. Homberg, U.: Flight-correlated activity changes in neurons of the lateral accessory lobes in the brain of the locust Schistocerca gregaria. Journal of Comparative Physiology A 175, 597–610 (1994)
73. Homberg, U.: In the search of the sky compass in the insect brain. Naturwissenschaften 91, 199–208 (2004)
74. Homberg, U.: Multisensory processing in the insect brain. In: Methods in Insect Sensory Neuroscience. CRC Press, Boca Raton (2005)
75. Homberg, U., Christensen, T., Hildebrand, J.: Structure and function of the deutocerebrum in insects. Annual Review of Entomology 34, 477–501 (1989)
76. Homberg, U., Reischig, T., Stengl, M.: Neural organisation of the circadian system of the cockroach Leucophaea maderae. Chronobiol. Int. 20, 577–591 (2003)
77. Hopfield, J.: Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the USA 79, 2554–2558 (1982)
78. Hoshino, O.: Dynamic interaction of attractors across multiple cortical networks as a neural basis for intersensory facilitation. Connection Science 14, 345–375 (2002)
79. Hoshino, O.: Coherent interaction of dynamical attractors for object-based selective attention. Biological Cybernetics 89, 107–118 (2003)
80. Huerta, R., Nowotny, T., Garcia-Sanchez, M., Abarbanel, H., Rabinovich, M.: Learning classification in the olfactory system of insects. Neural Computation 16, 1601–1640 (2004)
81. Hurley, S.: Perception and action: alternative views. Synthese 129, 3–40 (2001)
82. Husbands, P., Harvey, I., Cliff, D., Miller, G.: The use of genetic algorithms for the development of sensorimotor control systems. In: Proceedings of the PerAc 1994 Conference. IEEE Computer Society Press, Los Alamitos (1994)
83. Ikeno, H., Usui, S.: Basic computational properties of Kenyon cell in the mushroom body of honeybee. Neurocomputing 32, 167–172 (2000)
84. Ito, K., Suzuki, K., Estes, P., Ramaswami, M., Yamamoto, D., Strausfeld, N.: The organisation of extrinsic neurons and their implications in the functional roles of the mushroom bodies in Drosophila melanogaster Meigen. Learning and Memory 5, 52–77 (1998)
85. Jaeger, H.: Adaptive nonlinear system identification with echo state networks. In: Proc. NIPS 2002 (2002)
86. Keijzer, F.: Representation in dynamical and embodied cognition. Cognitive Systems Research 3, 275–288 (2002)
87. Kelso, J.: Dynamic patterns: the self-organisation of brain and behaviour. MIT Press, Cambridge (1995)
88. Kersten, D., Yuille, A.: Bayesian models of object perception. Current Opinion in Neurobiology 13, 1–9 (2003)
89. Kirsh, D.: Today the earwig, tomorrow man? Artificial Intelligence 47, 161–184 (1991)
90. Kirsh, D.: The intelligent use of space. Artificial Intelligence 73, 31–68 (1995)
91. Korn, H., Faure, P.: Is there chaos in the brain? II. Experimental evidence and related models. Comptes Rendus Biologies 326, 787–840 (2003)
92. Kosko, B.: Adaptive bidirectional associative memories. Applied Optics 26, 4947–4960 (1987)
93. Kosslyn, S., Ganis, G., Thompson, W.: Neural foundations of imagery. Nature Reviews 2, 635–642 (2001)
94. Labhart, T., Meyer, E.: Detectors for polarized skylight in insects: a survey of ommatidial specialisations in the dorsal rim area of the compound eye. Microscopy Research and Technique 47, 368–379 (1999)
95. Lakoff, G., Johnson, M.: Metaphors We Live By. University of Chicago Press, Chicago (1980)
96. Lappe, M., Bremmer, F., van den Berg, A.: Perception of self-motion from visual flow. Trends in Cognitive Sciences 3(9), 329–336 (1999)
97. Laurent, G.: Olfactory processing: maps, time and codes. Current Opinion in Neurobiology 7, 547–553 (1997)
98. Laurent, G.: Olfactory network dynamics and the coding of multidimensional signals. Nature Reviews Neuroscience 3(11), 884–895 (2002)
99. Lederman, S., Klatzky, R.: Haptic aspects of motor control. In: Jeannerod, M. (ed.) Handbook of Neuropsychology. Action and Cognition, vol. 11. Elsevier Science Publishers, Amsterdam (1996)
100. Lee, D.: A theory of visual control of braking based on information about time-to-collision. Perception 5, 437–459 (1976)
101. Li, Y., Strausfeld, N.: Morphology and sensory modality of mushroom body extrinsic neurons in the brain of the cockroach. Journal of Comparative Neurology 387, 631–650 (1997)
102. Li, Y., Strausfeld, N.: Multimodal efferent and recurrent neurons in the medial lobes of cockroach mushroom bodies. Journal of Comparative Neurology 409, 647–663 (1999)
103. Li, Z., Dayan, P.: Computational differences between asymmetrical and symmetrical networks. Network 10, 59–77 (1999)
104. Liberman, A., Cooper, F., Shankweller, D., Studdert, M.: Perception of the speech code. Psychological Review 74, 431–461 (1967)
105. Liberman, A., Mattingly, I.: The motor theory of speech perception revised. Cognition 21, 1–36 (1985)
106. Liu, G., Seiler, H., Wen, A., Zars, T., Ito, K., Wolf, R., Heisenberg, M., Liu, L.: Distinct memory traces for two visual features in the Drosophila brain. Nature 439, 551–556 (2006)
107. Liu, L., Wolf, R., Ernst, R., Heisenberg, M.: Context generalisation in Drosophila visual learning requires the mushroom bodies. Nature 400, 753–756 (1999)
108. Loesel, R., Naessel, D., Strausfeld, N.: Common design in a unique midline neuropil in the brains of arthropods. Arthropod Structure and Development 31, 77–91 (2002)
109. Maass, W., Natschlaeger, T., Markram, H.: Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation 14, 2531–2560 (2002)
110. Maes, P.: Learning behaviour networks from experience. In: Proceedings of the First European Conference on Artificial Life. MIT Press, Cambridge (1992)
111. Marr, D.: Vision: a computational investigation into the human representation and processing of visual information. Freeman Publishers, New York (1982)
112. Martin, J., Ernst, R., Heisenberg, M.: Mushroom bodies suppress locomotor activity in Drosophila melanogaster. Learning and Memory 5, 179–191 (1998)
113. Martin, J.R., Ernst, R., Heisenberg, M.: Temporal pattern of locomotor activity in Drosophila melanogaster. Journal of Comparative Physiology A 184, 73–84 (1999)
114. Martin, J.R., Raabe, T., Heisenberg, M.: Central complex substructures are required for the maintenance of locomotor activity in Drosophila melanogaster. Journal of Comparative Physiology A 185, 277–288 (1999)
115. Mauelshagen, J.: Neural correlates of olfactory learning paradigms in an identified neuron in the honeybee brain. Journal of Neurophysiology 69(2), 609–625 (1993)
116. McBride, S., Giuliani, G., Chol, C., Krause, P., Correale, D., Watson, K., Baker, G., Siwicki, K.: Mushroom body ablation impairs short-term memory and long-term memory of courtship conditioning in Drosophila melanogaster. Neuron 24, 967–977 (1999)
117. McFarland, D., Bösser, T.: Intelligent Behavior in Animals and Robots. MIT Press, Cambridge (1993)
118. Menzel, R.: Searching for the memory trace in a mini-brain, the honeybee. Learning and Memory 8, 53–62 (2001)
119. Menzel, R., Blakers, M.: Colour receptors in the bee eye - morphology and spectral sensitivity. Journal of Comparative Physiology A: Sensory, Neural, and Behavioural Physiology 108(1), 11–33 (1976)
120. Menzel, R., Giurfa, M.: Cognitive architecture of a mini-brain: the honeybee. Trends in Cognitive Sciences 5(2), 62–71 (2001)
121. Metta, G., Fitzpatrick, P.: Better vision through manipulation. Adaptive Behavior 11, 109–128 (2003)
122. Meyer, J.A., Guillot, A., Girard, B., Khamassi, M., Pirim, P., Berthoz, A.: The psikharpax project: Towards building an artificial rat. Robotics and Autonomous Systems 50, 211–223 (2005)
123. Mishima, T., Kanzaki, R.: Physiological and morphological characterisation of olfactory descending interneurons of the male silkworm moth, Bombyx mori. Journal of Comparative Physiology A 184, 143–160 (1999)
124. Mizunami, M., Iwasaki, M., Nishikawa, M., Okada, R.: Modular structures in the mushroom body of the cockroach. Neuroscience Letters 229, 153–156 (1997)
125. Mizunami, M., Okada, R., Li, Y., Strausfeld, N.: Mushroom bodies of the cockroach: activity and identities of neurons recorded in freely moving animals. Journal of Comparative Neurology 402, 501–519 (1998)
126. Mizunami, M., Weibrecht, J., Strausfeld, N.: Mushroom bodies of the cockroach: their participation in place memory. Journal of Comparative Neurology 402, 520–537 (1998)
127. Mobbs, P.: The brain of the honeybee Apis mellifera. I. The connections and spatial organization of the mushroom bodies. Philosophical Transactions of the Royal Society B 298, 309–354 (1982)
128. Mueller, M., Homberg, U., Kuehn, A.: Neuroarchitecture of the lower division of the central body in the brain of the locust (Schistocerca gregaria). Cell Tissue Research 288, 159–176 (1997)
129. Newell, A., Simon, H.: Computer science as empirical enquiry: symbols and search. Communications of the Association for Computer Machinery 19, 113–126 (1976)
130. Nicolis, S., Tsuda, I.: On the parallel between Zipf’s law and 1/f processes in chaotic systems possessing coexisting attractors: a possible mechanism for language formation in the cerebral cortex. Progress of Theoretical Physics 82, 254–274 (1989)
131. Nishikawa, M., Nishino, H., Mizunami, M., Yokohari, F.: Function-specific distribution patterns of axon terminals of input neurons in the calyces of the mushroom body of the cockroach, Periplaneta americana. Neuroscience Letters 245, 33–36 (1998)
132. Nolfi, S., Floreano, D.: Evolutionary Robotics. The Biology, Intelligence, and Technology of Self-organizing Machines. MIT Press, Cambridge (2000)
133. Nowotny, T., Huerta, R., Abarbanel, H., Rabinovich, M.: Self-organization in the olfactory system: one shot odor recognition in insects. Biological Cybernetics 93, 436–446 (2005)
134. O’Donnell, S., Donlan, N., Jones, T.: Mushroom body structural change is associated with division of labor in eusocial wasp workers (Polybia aequatorialis, Hymenoptera: Vespidae). Neuroscience Letters 356, 159–162 (2004)
135. Okada, R., Ikeda, J., Mizunami, M.: Sensory responses and movement-related activities in extrinsic neurons of the cockroach mushroom bodies. Journal of Comparative Physiology A 185, 115–129 (1999)
136. Okada, R., Sakura, M., Mizunami, M.: Distribution of dendrites of descending neurons and its implications for the basic organisation of the cockroach brain. Journal of Comparative Neurology 458, 158–174 (2003)
137. Okajima, K., Tanaka, S., Fujiwara, S.: A heteroassociative memory network with feedback connection. In: Caudill, M., Butler, C. (eds.) Proc. IEEE First International Conference on Neural Networks, pp. 711–718. IEEE, Los Alamitos (1987)
138. Olshausen, B., Field, D.: Sparse coding of sensory inputs. Current Opinion in Neurobiology 14, 481–487 (2004)
139. O’Regan, J.K., Noe, A.: A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences 24, 939–1031 (2001)
140. Paine, R.W., Tani, J.: Motor primitive and sequence self-organization in a hierarchical recurrent neural network. Neural Networks 17, 1291–1309 (2004)
141. Pascual, A., Preat, T.: Localization of long-term memory within the Drosophila mushroom body. Science 294, 1115–1117 (2001)
142. Perez-Orive, J., Bazhenov, M., Laurent, G.: Intrinsic and circuit properties favor coincidence detection for decoding oscillatory input. Journal of Neuroscience 24, 6037–6047 (2004)
143. Perez-Orive, J., Mazor, O., Turner, G., Cassenaer, S., Wilson, R., Laurent, G.: Oscillations and sparsening of odor representations in the mushroom bodies. Science 297, 359–365 (2002)
144. Pfeifer, R., Scheier, C.: Understanding intelligence. The MIT Press, Cambridge (1999)
145. Philipona, D., O’Regan, J., Nadal, J.P., Coenen, O.: Perception of the structure of the physical world using unknown multimodal sensors and effectors. In: Advances in Neural Information Processing Systems (2004)
146. Porr, B., Woergoetter, F.: Inside embodiment - what means embodiment to radical constructivists? Kybernetes 34, 105–117 (2005)
147. Port, R., van Gelder, T. (eds.): Mind as motion: explorations in the dynamics of cognition. A Bradford Book. The MIT Press, Cambridge (1995)
148. Prescott, T., Redgrave, P., Gurney, K.: Layered control architectures in robots and vertebrates. Adaptive Behavior 7, 99–127 (1999)
149. Prescott, T.J., Gurney, K., Montes-Gonzalez, F., Humphries, M., Redgrave, P.: The robot basal ganglia: Action selection by an embedded model of the basal ganglia, pp. 349–356. Plenum Press, New York (2002)
150. Pressing, J.: Referential dynamics of cognition and action. Psychological Review 106, 714–747 (1999)
151. Prigogine, I.: From being to becoming: time and complexity in the physical sciences. Freeman, NY (1980)
152. Rieke, F., Warland, D., de Ruyter van Steveninck, R., Bialek, W.: Spikes: exploring the neural code. MIT Press, Cambridge (1997)
153. Rizzolatti, G., Craighero, L.: The mirror neuron system. Annual Review of Neuroscience 27, 169–192 (2004)
154. Rock, I.: In defense of unconscious inference. Wiley, New York (1997)
155. van Rooij, I., Bongers, R., Haselager, W.: A non-representational approach to imagined action. Cognitive Science 26, 345–375 (2002)
156. Rosay, P., Armstrong, D., Wang, Z., Kaiser, K.: Synchronized neural activity in the Drosophila memory centres and its modulation by amnesiac. Neuron 30, 759–770 (2001)
157. Rosenblatt, J.K., Payton, D.W.: A fine-grained alternative to the subsumption architecture for mobile robot control. In: Proc. of the IEEE Int. Conf. on Neural Networks, vol. 2, pp. 317–324. IEEE Press, Washington (1989)
158. Rosenblum, L.: Acoustical information for controlled collisions. In: Schick, A. (ed.) Contributions to Psychological Acoustics. Bibliotheks- und Informationssystem der Carl von Ossietzky Universitaet Oldenburg, Oldenburg (1993)
159. Rosenblum, L., Wuestefeld, A., Anderson, K.: Auditory reachability: an affordance approach to the perception of sound source distance. Ecological Psychology 8, 1–24 (1996)
160. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Prentice Hall, Englewood Cliffs (1995)
161. Rybak, J., Menzel, R.: Anatomy of the mushroom bodies in the honey bee brain: the neuronal connections of the alpha-lobe. Journal of Comparative Neurology 334, 444–465 (1993)
162. Schildberger, K.: Multimodal interneurons in the cricket brain: properties of identified extrinsic mushroom body cells. Journal of Comparative Physiology A 154, 71–79 (1984)
163. Schoener, G., Dijkstra, T., Jeka, J.: Action-perception patterns emerge from coupling and adaptation. Ecological Psychology 10, 323–346 (1998)
164. Schöner, G., Dose, M., Engels, C.: Dynamics of behaviour: theory and applications for autonomous robot architectures. Robotics and Autonomous Systems 16, 213–245 (1995)
165. Schuermann, F.W.: Bemerkungen zur Funktion der Corpora pedunculata im Gehirn der Insekten aus morphologischer Sicht. Experimental Brain Research 19, 406–432 (1974)
166. Schultz, W., Dayan, P., Montague, P.: A neural substrate of prediction and reward. Science 275, 1593–1599 (1997)
167. Schultz, W., Dickinson, A.: Neuronal coding of prediction errors. Annual Review of Neuroscience 23, 473–500 (2000)
168. Schwaerzel, M., Monastirioti, M., Scholz, H., Friggi-Grelin, F., Birman, S., Heisenberg, M.: Dopamine and octopamine differentiate between aversive and appetitive olfactory memories in Drosophila. Journal of Neuroscience 23(33), 10495–10502 (2003)
169. Seth, A.: Evolving action selection and selective attention without actions. In: Pfeiffer, R. (ed.) From Animals to Animats 5, Proc. of 5th Intl. Conf. on Simulation of Adaptive Behavior. MIT Press/Bradford Books (1998)
170. Skarda, C., Freeman, W.: How brains make chaos in order to make sense of the world. Behavioral and Brain Sciences 10, 161–173 (1987)
171. Smithers, T.: On behaviour as dissipative structures in agent-environment interaction systems. In: Ritter, H., Cruse, H., Dean, J. (eds.) Prerational Intelligence: Adaptive Behavior and Intelligent Systems Without Symbols and Logic, vol. 2, pp. 243–257. Kluwer Academic Publishers, Dordrecht (2000)
172. Stange, G., Stowe, S., Chahl, J., Massaro, A.: Anisotropic imaging in the dragonfly median ocellus: a matched filter for horizon detection. Journal of Comparative Physiology A 188, 455–467 (2002)
173. Steels, L.: Synthesising the origins of language and meaning using co-evolution, self-organisation and level formation. In: Evolution of Human Language. Edinburgh Univ. Press (1996)
174. Stopfer, M., Bhagavan, S., Smith, B., Laurent, G.: Impaired odour discrimination on desynchronization of odour-encoding neural assemblies. Nature 390, 70–74 (1997)
175. Strausfeld, N.: Structural organization of male-specific visual neurons in Calliphorid optic lobes. Journal of Comparative Physiology A 169, 379–393 (1991)
176. Strausfeld, N.: A brain region in insects that supervises walking. Progress in Brain Research 123, 273–284 (1999)
177. Strausfeld, N., Hansen, L., Li, Y., Gomez, R., Ito, K.: Evolution, discovery, and interpretations of arthropod mushroom bodies. Learning and Memory 5, 11–37 (1998)
178. Strausfeld, N., Li, Y.: Representation of the calyces in the medial and vertical lobes of cockroach mushroom bodies. Journal of Comparative Neurology 409, 626–646 (1999)
179. Strauss, R.: The central complex and the genetic dissection of locomotor behaviour. Current Opinion in Neurobiology 12, 633–638 (2002)
180. Strauss, R., Pichler, J.: Persistence of orientation toward a temporarily invisible landmark in Drosophila melanogaster. Journal of Comparative Physiology A 182, 411–423 (1998)
181. Tani, J., Ito, M.: Self-organization of behavioral primitives as multiple attractor dynamics: A robot experiment. IEEE Trans. on Systems, Man, and Cybernetics Part A: Systems and Humans 33, 481–488 (2003)
182. Thelen, E., Smith, L.: A dynamic systems approach to the development of cognition and action. MIT Press, Cambridge (1994)
183. Tuller, B., Case, P., Ding, M., Kelso, J.: The nonlinear dynamics of speech categorization. Journal of Experimental Psychology: Human Perception and Performance 20, 3–11 (1994)
184. Turvey, M.T.: Dynamic touch. American Psychologist 5, 1134–1152 (1996)
185. Tyrrell, T.: Computational mechanisms for action selection. Ph.D. thesis, Department of Artificial Intelligence, University of Edinburgh (1993)
186. Verschure, P., Voegtlin, T., Douglas, R.: Environmentally mediated synergy between perception and behaviour in mobile robots. Nature 425, 620–624 (2003)
187. Vitzthum, H., Mueller, M., Homberg, U.: Neurons of the central complex of the locust Schistocerca gregaria are sensitive to polarised light. The Journal of Neuroscience 22(3), 1114–1125 (2002)
188. Waddell, S., Armstrong, D., Kitamoto, T., Kaiser, K., Quinn, W.: The amnesiac gene product is expressed in two neurons in the Drosophila brain that are critical for memory. Cell 103, 805–813 (2000)
189. Waddell, S., Quinn, W.: Flies, genes and learning. Annual Review of Neuroscience 24, 1283–1309 (2001)
190. Wann, J.: Anticipating arrival: Is the tau margin a specious theory? Journal of Experimental Psychology: Human Perception and Performance 22, 1031–1048 (1996)
191. Webb, B.: Can robots make good models of biological behaviour? Behavioral and Brain Sciences 24, 1033–1050 (2001)
192. Webb, B.: Neural mechanisms for prediction: do insects have forward models? Trends in Neurosciences 27(5), 278–282 (2004)
193. Wehner, R.: ‘Matched filters’ - neural models of the external world. Journal of Comparative Physiology 161, 511–531 (1987)
194. Wehner, R.: Desert ant navigation: how miniature brains solve complex tasks. Journal of Comparative Physiology A 189, 579–588 (2003)
42
B. Webb and J. Wessnitzer
195. Weiner, J.: On the practice of ecology. Journal of Ecology 83, 153–158 (1995) 196. Wessnitzer, J., Webb, B.: Multimodal sensory integration in insects - towards insect brain control architectures. Bioinspiration and Biomimetics 1, 63–75 (2006) 197. Wexler, M., Kosslyn, S., Berthoz, A.: Motor processes in mental rotation. Cognition 68, 77–94 (1998) 198. Wilson, M.: Six views of embodied cognition. Psychological Bulletin and Review 9, 625– 636 (2002) 199. Wittmann, T., Schwegler, H.: Path integration - a network model. Biological Cybernetics 73, 569–575 (1995) 200. Woergoetter, F., Porr, B.: Temporal sequence learning, prediction and control - a review of different models and their relation to biological mechanisms. Neural Computation 17, 245–319 (2005) 201. Wohlgemuth, S., Ronacher, B., Wehner, R.: Ant odometry in the third dimension. Nature 411, 795–798 (2001) 202. Wolpert, D., Ghahramani, Z.: Computational principles of movement neuroscience. Nature Neuroscience 3, 1212–1217 (2000) 203. Wolpert, D., Ghahramani, Z., Jordan, M.: An internal model for sensorimotor integration. Science 269, 1880–1882 (1995) 204. Wuestenberg, D., Boytcheva, M., Gruenewald, B., Byrne, J., Menzel, R., Baxter, D.: Current- and voltage-clamp recordings and computer simulations of Kenyon cells in the honeybee. Journal of Neurophysiology 92, 2589–2603 (2004) 205. Wyley, D., Bischof, W., Frost, B.: Common reference frame for neural coding of translational and rotational optic flow. Nature 392, 278–282 (1998) 206. Yusuyama, K., Meinertzhagen, I., Schuermann, F.W.: Synaptic organization of the mushroom body calyx in Drosophila melanogaster. Journal of Comparative Neurology 445, 211– 226 (2002) 207. Zeil, J., Hofmann, M., Chahl, J.: Catchment areas of panoramic snapshots in outdoor scenes. Journal of the Optical Society of America A 20, 450–469 (2003)
2 Principles of Insect Locomotion

H. Cruse1, V. Dürr2, M. Schilling1, and J. Schmitz1

1 University of Bielefeld, Department of Biological Cybernetics and Theoretical Biology, P.O. Box 100131, D-33501 Bielefeld, Germany
{holk.cruse,malte.schilling,josef.schmitz}@uni-bielefeld.de
2 University of Cologne, Institute of Zoology, Weyertal 119, D-50931 Köln, Germany
[email protected]
Abstract. Walking animals can deal with a large range of difficult terrain and can use their legs for other purposes, such as sensing or object manipulation. This is possible although the underlying control system is based on neurons, which are considered to be quite sloppy and slow computational elements. Important aspects of this control system are error tolerance and the capability of self-organization. This chapter concentrates on insect walking behaviour. Apart from some references to relevant morphology, it addresses behavioural investigations which are paralleled by software simulations to allow a better understanding of the underlying principles. Furthermore, hints to neurophysiology and to hardware simulations are given. Characteristic properties of the control system are its decentralized architecture, its heavy reliance on internal as well as sensory feedback, and its exploitation of the physics of the body.
2.1 Introduction

Locomotion, i.e. changing place, is the basis of all actions. Terrestrial locomotion in artificial systems is usually based on wheels or caterpillar tracks. However, both methods require a more or less flat terrain which, at least in the case of wheels, has to be constructed as an artifact in the form of roads or trails. Animals, even "simple" insects, survive in any terrain and do not require artificial conditions. They walk over stony or sandy ground, climb in branches, and surmount obstacles of arbitrary form and height as well as deep gaps that may be wider than their body length. Many animals can also use their legs for swimming. Furthermore, legs are used for complex manipulations like feeding, body cleaning or the transportation of objects. In insects and crustaceans, leg movement is controlled by a comparatively small brain that uses computational elements, the neurons, which are, compared to modern computers, slower and less exact by many orders of magnitude. On the other hand, the control system must be extremely reliable and robust, must work successfully even under adverse circumstances, e.g., after the loss of legs, and must do so with low energy consumption. Thus, the underlying control strategies are of high interest not only for the biologist doing basic research, but also for the engineer when it comes to constructing artificial walking systems. Conversely, constructing walking systems using software simulations or building robots is also of interest for the biologist: like a software simulation, such a "hardware simulation" is a critical tool to formulate and test hypotheses that cannot directly be investigated using the biological system alone.
At first sight, walking seems to be a not very interesting behaviour, because it appears to be fairly automatic: we do not have to think consciously about moving the joints when walking. Nevertheless, walking in a natural environment requires considerable "motor intelligence" and can be regarded as a paradigm for the control of behaviour in general. First of all, walking, like almost all behaviour, has to deal with redundancy. In most biological systems for motor control, particularly those concerned with walking, the number of degrees of freedom is normally larger than that necessary to perform the task. This requires the system to select among different alternatives according to some, often context-dependent, optimization criteria. As a consequence, the system usually has to have some autonomy. Autonomy, as understood here, does not simply mean energy autonomy, as it often does in robotics. Rather, we have the literal meaning in mind, i.e., not being dependent on commands given by an external system, such as an operator. Thus, an autonomous system follows self-contained rules and makes its own decisions, and the experimenter does not have direct control of the important inputs to the motor system. Furthermore, such systems must adapt to complex, often unpredictable environments. Sensorimotor connections give rise to a "loop through the real world", and as a result, the properties of the environment affect the behaviour of the system. These experimental and theoretical difficulties make the study of motor mechanisms especially challenging, because such mechanisms illustrate to a high degree the task of integrating influences from the environment, mediated through peripheral sensory systems, with central processes reflecting the state and needs of the organism. In a walking insect, at least 18 joints (i.e., three per leg) have to be controlled. Because the environment may change drastically from one step to the next, and even the geometrical properties of the body may change, the control of walking is anything but a trivial task.
2.2 Biological Systems

Whereas insects and crustaceans usually use six or more legs for walking, other animals are confined to four legs (most mammals) or two legs (e.g., birds, some primates); tetrapod and bipedal locomotion can, however, also be observed in insects, e.g., mantids [172] and cockroaches [100, 97, 28], for a review see [96]. Systems using fewer legs require a more complex controller because holding the body upright against gravity is more difficult with a small number of legs. This refers not only to static stability, but includes the computationally more difficult problem of dynamical stability. Another disadvantage is that the loss of a leg can less easily be compensated for when the system is endowed with only a few legs. Although many animals have body appendages that can be used for walking, only a few species have been investigated in sufficient detail. Among crustaceans, these are the crayfish and the lobster (with essentially four pairs of walking legs, i.e. eight legs). Among the six-legged insects, these are the stick insect [14, 108, 47, 17], the cockroach [75, 199, 195, 127, 152] and, to a minor extent, the locust [33]. Some investigations of ants, e.g., [210], and, more recently, of Drosophila [193], are also of interest. Among four-legged animals, these are the dog and, mainly, the cat. Studies concentrating on neurophysiology but not so much on behaviour have also been done with mice, rats and newts. The two-legged
animal studied is, of course, man. However, most studies concentrate on standing, on the control of upright posture, or on handicapped people, e.g., paraplegic patients. The lower-level control problems, i.e. the control and properties of individual muscles in the context of behaviour, have, however, been studied most intensively in man [124] and in the cat [156]. Recently, the overwhelming importance of muscular properties for the control problem in general has also been shown for insects (see Actuators, Section 2.8). In the remainder, this chapter, like the previous one, mainly concentrates on studies of insects, focusing on topics relating to locomotion. Insects have been studied intensively not only because they are simpler to experiment with, but also because they are assumed to be constructed in a simpler way and therefore, hopefully, to be easier to understand than vertebrates. The body of an insect, as is the case for all arthropods, is highly segmented and essentially consists of a head, a thorax and an abdomen. Each of these parts consists of several primary segments, some of which, however, are fused. This segmentation is reflected by the morphology of the central nervous system (CNS), a fact that eases the neurophysiological investigation of the walking behaviour of these animals. Basically, the whole central nervous system consists of an anteriorly and dorsally located brain (head ganglia) and a double chain of ventral segmental ganglia. The latter are interconnected within the segments by commissures and between the segments by connectives. The organization of the CNS of the stick insect is depicted in Fig. 2.1. Due to their rich endowment with sensory systems, most insects possess a large brain located in the head capsule, in which the olfactory and mechanosensitive antennal inputs and the visual inputs are processed. Situated ventrally in the head capsule is the subesophageal ganglion, which serves the mouthparts. The thorax contains three sections, the prothorax, the mesothorax and the metathorax, each equipped with one pair of legs. Each thorax segment is served by its own ganglion; the left and right hemiganglia are fused into one thoracic ganglion. However, the bilateral partitioning is preserved internally, as was revealed by neuroanatomical studies of several insect species (stick insect: [136], cockroach: [109], locust: [197, 161]). Most important for the investigation of the walking system of insects is the fact that each hemiganglion contains all neurons necessary for processing sensory information from the adjacent leg and all inter- and motoneurons for the production of a cyclic motor output to the muscles of the adjacent leg. Each abdominal segment also contains its own ganglion, except for the first few, which may be fused with the metathoracic ganglion, and the terminal ganglion, which is again a ganglion fused from several segments. The main morphological characteristic of arthropods, specifically insects, is their exoskeleton. The muscles are therefore arranged inside the tube-like skeleton (for details on muscular properties see Section 2.8: Actuators). The geometry of the leg is shown in Fig. 2.2. It usually contains five segments: coxa, trochanter, femur, tibia and tarsus. In the stick insect, as in many other insects, trochanter and femur form one rigid segment, whereas, e.g., in the cockroach this joint is moveable to some extent.
The coxa-trochanter (β-) and femur-tibia (γ-) joints are simple hinge joints with one degree of freedom, corresponding to elevation and extension of the tarsus, respectively. The thorax-coxa (α-) joint, which connects the leg to the body,
Fig. 2.1. Sketches of the body (a) and of the organization of the CNS (b) of an insect; (a) adapted from [7].
is more complex, but most of its movement is in a rostrocaudal direction around a fixed axis described by the Euler angles φ and ψ. The additional degree of freedom, allowing changes in the alignment of this axis, is little used in normal walking, so the leg can be considered as a manipulator with three degrees of freedom for movement in three dimensions.

Insects Use Feet with Adhesive Structures: The foot, or tarsus, is a morphological structure that is extremely important for walking in insects, but has only recently been studied in more detail [106, 95, 167]. The tarsus consists of a chain of up to five segments connected by passive joints (Fig. 2.3 a). The joints are flexed when a tendon driven by the retractor unguis muscle is shortened. The antagonistic force is produced by the elastic properties of the tarsal joints. Each tarsal segment is equipped with different adhesive structures. In the case of the stick insect there are, from proximal to distal, four Euplantulae, one Arolium and two claws (Fig. 2.3 b-e). The surface of the Euplantulae is covered with small hair-like structures of about 5 µm length, whereas the Arolium has a smooth surface that is covered with some fluid. Only the Arolium, but not the Euplantulae, is able to adhere to a glass surface. When the Arolium is covered with paint and the animal stands on six legs on a glass plate, about 40 times the body weight can
Fig. 2.2. a) Schematic plot of insect leg morphology and the approximate position of some mechanoreceptors, front view. b) Joint between thorax and coxa, front view. c), d) Two close-ups showing the dorsal hair plate situated at the dorsal base of the coxa (view from anterior). Forward movement of the coxa would move the hairs further below the membrane of the thoracic-coxal joint.
be held when the animal is loaded parallel to the surface of the substrate, whereas only a force of about one body weight can be produced when the animal is loaded in a direction perpendicular to the surface. Exploiting these structures enables the insect to adopt a walking strategy different from that of most robots. Usually it is assumed that a walking system should adopt a statically stable position at every moment in time (disregarding dynamic cases). Static stability is then defined in such a way that the vertical projection of the centre of gravity must fall inside the support polygon spanned by the legs on the ground. This is a particular problem for insects because the centre of gravity is often situated near the base of the hind legs. Therefore, lifting a hind leg bears the danger of reaching an unstable position. Kindermann [135] could show that this is actually the case in about 90% of the hind leg steps when a stick insect walks on a horizontal glass plate, and that the duration of the "unstable" situation is longer than the time the animal would need to fall and touch the ground with its body. In a control experiment, he could show that tarsal contact of the front or middle legs is indeed necessary to avoid falling: when the animal has to walk from the glass plate onto a surface covered with fine sand, it indeed falls over after the last hind leg has lifted off the glass plate. This may be due to the fact that the hind legs are still loaded because the front legs cannot take over the load by pulling the body
Fig. 2.3. Tarsus morphology of Carausius morosus. a) Side view of the tarsus; the tendon of the retractor muscle is schematically indicated by dotted lines. b) View from below; left: claws and Arolium, right: four pairs of Euplantulae. c) Claws and Arolium, d) Euplantulae, e) surface of the Euplantulae.
to the sandy ground. Interestingly, in many cases the animal decreased its walking speed before the hind leg was lifted, or stopped forward walking completely. This indicates the capability to predict the occurrence of an unstable situation.
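The static-stability criterion used in this argument, i.e. that the vertical projection of the centre of gravity must fall inside the support polygon, is easy to state operationally. The following sketch (plain Python, with purely illustrative foot and centre-of-mass coordinates, not measured data) computes the convex hull of the feet on the ground and tests whether the projected centre of mass lies inside it; lifting a hind leg then flips the result, as in Kindermann's observations.

# Minimal static-stability test: is the vertical projection of the
# centre of mass (COM) inside the support polygon of the feet?
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(pts):
    # Andrew's monotone chain; returns the hull in counter-clockwise order
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def statically_stable(feet, com):
    # COM must lie strictly inside every edge of the CCW support polygon
    hull = convex_hull(feet)
    n = len(hull)
    return all(cross(hull[i], hull[(i+1) % n], com) > 0 for i in range(n))

# illustrative foot positions (x, y) in mm, body axis along x
feet = [(15, 10), (15, -10), (0, 12), (0, -12), (-15, 5), (-15, -5)]
com = (-12, 0)                     # centre of mass near the hind-leg bases
print(statically_stable(feet, com))                                 # True
print(statically_stable([f for f in feet if f != (-15, 5)], com))   # False: left hind leg lifted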
2.3 Sensors

Two types of sensors will be treated separately: (a) sensors that monitor the state of the body (for example joint angles, joint velocities, or forces, e.g., those developed by a muscle against the direction of gravity), for short proprioceptors or "body sensors", and (b) sensors that provide information about the situation outside the body, for example to detect obstacles, for short exteroceptors or "environmental sensors". Biological sensors usually differ from technical solutions insofar as the former in general give mixed information: position and velocity and/or acceleration are usually transmitted by one channel, whereas they are separated in technical systems. On the other hand, in biology a signal representing one value, e.g., a joint angle, is often represented not by one single channel, but by a number of parallel channels, each of which covers only a given, e.g., angular, range ("range fractionation", [5, 147]), often represented by a topological arrangement of the dendrites [209, 150]. Because technical sensors appear "cleaner" in a physical sense, it is no problem to mix them in order to copy biological
solutions. However, except for software simulations, we are not aware of specific robot applications of these principles. Related to this, the multitude of biological sensors is usually not dealt with, although it appears to be an important basis for the self-regulation and self-adjustment of biological systems, as has been shown by recent simulation studies, e.g., Linder [143].
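To make the notion of range fractionation concrete, the following toy example (our own construction, not taken from the literature cited above) encodes a joint angle into several overlapping channels with bell-shaped tuning curves and recovers the angle as an activity-weighted average of the channels' preferred angles.

import math

def channel_response(angle, center, width=30.0):
    # bell-shaped tuning curve of one sensory channel (degrees)
    return math.exp(-((angle - center) / width) ** 2)

centers = [0, 30, 60, 90, 120, 150, 180]   # preferred angles of the channels

def encode(angle):
    # each channel reports only its own part of the angular range
    return [channel_response(angle, c) for c in centers]

def decode(activities):
    # population decoding: activity-weighted average of preferred angles
    total = sum(activities)
    return sum(a * c for a, c in zip(activities, centers)) / total

acts = encode(75.0)
print([round(a, 2) for a in acts])
print(round(decode(acts), 1))   # close to 75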
2.3.1 Mechanosensors
Let us begin with the "body sensors". Important sensors used by biological systems monitor joint position, joint velocity and joint acceleration [113]. It is typical for biological systems that there is redundant information: there are parallel arrangements of the same type of sensor as well as of different types that monitor essentially the same variable (e.g., joint position). In biology there are also contact sensors and load sensors. Load sensors are much less investigated in biology. However, recent investigations both in insects and in mammals concentrate on the functional role of load/force sensors (which measure stress in the exoskeleton or in the muscular tendons) and their contribution to the control of leg movement. Negative feedback to control muscle force and positive feedback used to maintain and stabilize the stance movement have been reported [87]. Interestingly, most insects do not have special gravity receptors as they are known from mammals and many other animals. Instead, walking insects exploit the different activations of the feedback loops that control leg joints or other body appendages [201, 8]. Depending on the direction of gravity, different joint actuators (i.e. muscles) require different activation to hold the body in an appropriate position. These differences can be used to gain information on the direction of the gravity vector. Why, then, do other animals use specific gravity receptors? Six-legged insects not only have at least three legs on the ground to form a support tripod, but in addition have different adhesive structures at their feet (see above) that increase stability dramatically, allowing them to walk on vertical walls and even hang from a horizontal ceiling. We would speculate that the indirect method mentioned above for determining the direction of the gravity vector is sufficient for these cases, but is too imprecise and too slow for swimming or flying animals or for walking animals with only four or two legs, where the reaction to changes in gravity direction has to be immediate. Therefore, direct gravity sensors may be employed in the latter cases. Mechanosensors found in insect legs can functionally be separated into position sensors and load sensors. Position sensors monitor joint angles, but as biological sensors have mixed properties, they may also transmit information on the velocity and/or acceleration of joint movements. Load sensors are distributed over the exoskeleton (cuticle) of the leg [163] and monitor strain within the cuticle, functionally very much like technical strain gauges.

2.3.1.1 Position/Movement Sensors

Morphologically, joint position and movement sensors come in different forms. The most obvious ones are the hair plates, groups of sensory hairs that are situated at joints in such a way that during movement of the joint the individual sensory hairs are bent by the soft joint membrane. The farther the joint is moved, the more each individual hair is bent and the larger the number of hairs being bent. Such hair plates can be found
at the thoracic-coxal (α-) joint and at the coxa-trochanter (β-) joint (Fig. 2.2). Another important type of position sensor is the chordotonal organ. This is formed by sensory cells situated within partly elastic, partly rigid structures that span the joint. Movement of the joint leads to elongation or relaxation of the elastic part and of the sensitive structures of the sensory cells. An intensively investigated example is the femoral chordotonal organ [14], which monitors the femur-tibia (γ-) joint. Chordotonal organs, with in part complex mechanical structures, can also be found at the thorax-coxa (α-) joint [121]. Other internal and functionally quite similar sense organs are the strand receptor organs [27]. Furthermore, there are multipolar sensory cells situated near the joint membranes [12]. Chordotonal organs and hair plates have been intensively studied and are known to contribute to leg movement via negative (resistance reflexes) and positive (assistance reflexes) feedback. Leg contact with the substrate could be measured by exploiting signals from position sensors or load sensors, or by individual large sensory hairs that are distributed over the cuticle (some can be seen in Fig. 2.2).

2.3.1.2 Load Sensors

Two basic types of receptors detect forces in insect legs. Campaniform sensilla are sense organs that monitor forces via the strains that occur in the cuticle [163]. Multipolar receptors found in direct association with muscle tendons (apodemes) can signal the tensions developed by some leg muscles and potentially encode external loads [110, 12, 148]. However, little is known about the effects of apodeme receptors in posture and locomotion [12], whereas some studies are available on the role of campaniform sensilla. Campaniform sensilla monitor forces as strains in the exoskeleton and can encode force increases and decreases. A campaniform sensillum consists of a sensory neuron whose dendrite inserts into a cap at the surface of the cuticle (Fig. 2.4). Forces applied to the exoskeleton, or developed by contractions of leg muscles, generate strains that produce mechanical distortion of the cap and discharge of the sensillum. In the legs, individual receptors are directionally sensitive, and their responses are correlated with the orientation of the cuticular cap. The discharges of sensilla depend upon their location and the vector direction of the imposed or self-generated forces [206, 73, 37, 129]. In contrast to muscle apodeme or tendon organs, the coupling of campaniform sensilla to muscle tension is indirect, and the distribution of strain depends upon joint position [170]. Campaniform sensilla are found in groups whose arrangement is similar, but not identical, in different insects [163, 112, 122] (Fig. 2.4). The largest number of groups is found on the trochanter (Groups 1-4 in cockroaches, Groups 1-3 in stick insects; [112, 77]). Another group (fCS) is found on the proximal femur, adjacent to the trochanterofemoral joint [13, 179], which is functionally a fused joint in stick insects but mobile in cockroaches. Smaller aggregations of sensilla are found on the tibia, and individual or pairs of receptors occur on the tarsal segments [134, 160]. In the tibial group of sensilla of cockroaches, the receptor caps have two mutually perpendicular orientations (proximal and distal subgroups; [189, 207]).
Experiments in which forces were applied in directions and magnitudes that mimicked loading in upright posture showed that proximal sensilla respond to force increases and distal sensilla discharge when forces decline [168, 170]. Discharges during releases from applied forces are evident in studies of campaniform sensilla in
Fig. 2.4. Anatomy of campaniform sensilla fields in insects. (A) A campaniform sensillum consists of a sensory neuron whose dendrite terminates in a small cuticular cap in the exoskeleton and whose axon projects to the central nervous system. The caps show consistent orientations within a group, as in this scanning electron micrograph of posterior trochanteral sensilla of a stick insect. (B) Campaniform sensilla are found in groups at discrete locations in the leg (photo of cockroach hindleg). Most groups are found on the trochanter and a smaller number of receptors are found on the femur and tibia. (C, D) In stick insects, a large group is located on the proximal femur (fCS), opposite the posterior trochanteral sensilla (pCS). (taken from [204])
other insects [77, 151] and regularly occur in crustacean mechanoreceptors (cuticular stress detectors [146]), but the mechanisms underlying these responses are presently unknown. These studies showed, however, that the system has active signals both for the loading and for the unloading of the leg. In summary, campaniform sensilla are sense organs that are specialized to encode forces in the legs and can provide the system with detailed data about the direction, rate and magnitude of loads [169]. Interestingly, the grouped campaniform sensilla are found near the base of the legs [10, 179, 116], where the strain is largest due to the long lever arm. This is different from most technical solutions, where force sensors are usually mounted near the tip of the leg. As for the position sensors, negative and positive feedback effects have been described (e.g., [6, 14, 44, 155, 156, 176, 177, 208]). Schmitz and Stein [182] described the convergence of the reflex effects of load and position sensors on the leg motoneuron pools, and Stein and Schmitz [191] reported that load and position signals are already preprocessed by presynaptic inhibition at the level of the primary afferents.
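A crude functional caricature of the two tibial subgroups can be written as two half-wave rectified rate-of-change channels, one active when the force on the leg rises and one when it falls. This is a deliberate simplification of our own (real sensilla show direction-specific and history-dependent dynamics), intended only to illustrate the idea of separate loading and unloading signals:

def load_channels(force_trace, dt=0.01):
    # split a force time course into 'loading' and 'unloading' signals,
    # mimicking proximal and distal subgroups that respond to force
    # increases and decreases, respectively (half-wave rectified rate
    # of change; a functional caricature, not a biophysical model)
    loading, unloading = [], []
    for f0, f1 in zip(force_trace, force_trace[1:]):
        df = (f1 - f0) / dt
        loading.append(max(df, 0.0))     # active while force rises
        unloading.append(max(-df, 0.0))  # active while force falls
    return loading, unloading

# a leg being loaded at stance onset and unloaded before lift-off
force = [0, 2, 5, 8, 9, 9, 9, 7, 4, 1, 0]
up, down = load_channels(force)
print(up)
print(down)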
2.3.2 Environmental Sensors
Seen from the viewpoint of a human or a cat, visual sensors appear to be the most important ones for delivering information about the current environmental situation. Although many insects have highly developed eyes, recent findings have shown that their antennae (feelers) are much more important for the investigation of the nearby environment [24, 84, 200, 171]; for a review see [190]. It has been argued that actively moved antennae provide a faster and more robust solution for detecting external objects. The extraction of (in particular 3D) information from vision sensors is notoriously slow and prone to all kinds of problems. In contrast, the exploration of mechanical contact with the antennae (which, in evolutionary terms, can be considered as specialized
legs) gives fast and reliable information with much less computational load (for more information see Section 2.6: Insect antennae). Other environmental sensors (acoustic, olfactory) exist, of course, but appear not to be of direct relevance for the control of leg movement and will therefore not be considered here. However, they certainly play a crucial role in course control and orientation/navigation.
2.4 Leg Controller

The control of walking has to deal with two basic problems: the control of the movement of the individual leg and the spatio-temporal coordination of the different legs. We will first address the former question. The step cycle of a walking leg can be divided into two functional states, swing and stance. These two states differ strongly with respect to their control requirements, particularly because one concerns an open kinematic chain with no or hardly any mechanical coupling to other legs (swing), whereas the other concerns parallel closed kinematic chains whose movement is governed by mutual mechanical coupling (stance). Since the control requirements change drastically with a transition between the two states, the transition events, i.e. touch-down and lift-off of the leg, have been the focus of many behavioural and physiological studies. The anterior (rostral) transition point, the one at touch-down of the leg, is where the transition from swing to stance occurs in a forward walking animal; it is called the anterior extreme position (AEP). The posterior (caudal) transition point, where the leg lifts off the ground and the stance-swing transition occurs, is called the posterior extreme position (PEP). AEP and PEP depend on the walking situation, which becomes particularly evident if an animal changes its direction of heading or its orientation relative to gravity. Cruse [41] investigated walking on a horizontal plane, on a small horizontal path, walking up a vertical path, and walking while hanging from a horizontal beam. Dean [65] simulated upward and downward walking by running the animal on a treadwheel that was decelerated or accelerated by a torque motor. Both authors showed that the AEP and PEP depend on walking conditions. In the following, different aspects of the task of controlling a walking leg will be considered separately, for example the swing movement, the stance movement, and height control. For each aspect a specific controller module will be described that can be used to simulate this property. In a later step these elements will be put together to form a complete system (termed Walknet, Fig. 2.13) appropriate for the simulation of six-legged walking.
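The functional division of the step cycle can be summarised as a two-state controller that switches from stance to swing at the PEP and from swing back to stance at touch-down near the AEP. The sketch below is an illustrative reconstruction, reduced to one dimension and with invented thresholds and velocities; it is not code from Walknet.

def step_cycle(x0, aep=20.0, pep=-20.0, v_stance=-2.0, v_swing=5.0,
               ground_contact=lambda x: x >= 18.0, n_steps=40):
    # toy two-state step-cycle controller: leg position x (mm, along the
    # body axis) moves backwards during stance and forwards during swing
    x, state, trace = x0, "stance", []
    for _ in range(n_steps):
        if state == "stance":
            x += v_stance
            if x <= pep:                             # PEP reached: lift off
                state = "swing"
        else:
            x += v_swing
            if x >= aep and ground_contact(x):       # touch-down near the AEP
                state = "stance"
        trace.append((state, round(x, 1)))
    return trace

for s, x in step_cycle(0.0)[:12]:
    print(s, x)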
2.4.1 Swing Movement
Not only the locations of AEP and PEP, but also the form of the swing trajectory can vary. Cruse and Bartling [49] analysed the swing trajectory in different walking situations, such as walking freely on a horizontal plane, walking tethered on a treadwheel, or walking on a slippery surface (Fig. 2.5). Application of inverse kinematics allows the time courses of the different joint angles to be computed (see [49], Fig. 2.6). How can such a swing movement be controlled? To answer this question, neurophysiological studies have to be performed (see below). In addition, the behaviour of the animal can be observed first, and then
Fig. 2.5. Measured trajectories of swing movements of front, middle, and hind leg. Upper panel: side view (x-z plane), lower panel: top view (x-y plane). Direction of movement is from left to right.
quantitative hypotheses concerning the underlying control structure can be formulated on the basis of simulations. In robots, swing movements are traditionally controlled so as to follow a predefined trajectory. In contrast, biological solutions are assumed to require high flexibility, even during fast swing movements (e.g., retargeted reaching movements, [85]). Many biologically inspired simulations and some robot controllers are therefore based on artificial neural networks. This is not only because they are conceptually nearer to biological systems than any other solution, but also because they allow for different methods of off-line and on-line learning, and for error tolerance. Generally, the task of finding a network that produces a swing movement seems to be easier than finding a network to control the stance movement, because a leg in swing is mechanically uncoupled from the environment and, due to its small mass, essentially uncoupled from the movement of the other legs. As mentioned above, the leg is considered to contain three hinge joints, the thorax-coxa (α-) joint, the coxa-trochanter (β-) joint and the femur-tibia (γ-) joint. Thus, the control network must have at least three output channels: one for each leg joint. Earlier results [43, 71] have shown that the end position of the swing movement of middle and hind legs is determined by the actual position of the front and middle leg, respectively. In other words, the swinging leg uses the position of its anterior neighbour as a target. In principle, this requires the computation of direct and inverse kinematics, but Dean [62] could show that a simple, three-layered feedforward network can solve the task sufficiently well. This "target net" receives as input the three actual angles α, β and γ of the anterior leg and provides as output the desired angles αt, βt and γt of the targeting leg (Fig. 2.6). In other words, this net directly associates the desired final joint angles at touch-down with the current joint angles of a rostral leg, such that the tarsi of the two legs arrive at the same position [62]. There is no explicit calculation of either tarsus position. Physiological recordings from local and intersegmental interneurons [31] support the hypothesis that a similar approximate algorithm is implemented in the nervous system of the stick insect.
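What the target net computes implicitly can be spelled out explicitly. The sketch below is our own illustration, with invented segment lengths and a deliberately simplified joint geometry (α rotates the leg plane about a vertical axis, β elevates the femur, γ is the femur-tibia angle): it computes the tarsus position of the front leg by forward kinematics and then searches for middle-leg angles that bring the middle tarsus to the same point, i.e. the direct-plus-inverse kinematic computation that the network of Dean [62] approximates without ever representing a tarsus position.

import math
from itertools import product

def forward(alpha, beta, gamma, base, l_femur=10.0, l_tibia=12.0):
    # simplified three-hinge-joint leg (illustrative geometry, invented
    # lengths); gamma = pi corresponds to a fully stretched knee
    r = l_femur * math.cos(beta) + l_tibia * math.cos(beta + gamma - math.pi)
    z = l_femur * math.sin(beta) + l_tibia * math.sin(beta + gamma - math.pi)
    return (base[0] + r * math.cos(alpha),
            base[1] + r * math.sin(alpha),
            base[2] + z)

def target_angles(front_angles, front_base, middle_base, step_deg=5):
    # brute-force inverse kinematics: find middle-leg angles whose tarsus
    # position best matches the front-leg tarsus (the 'target')
    goal = forward(*front_angles, base=front_base)
    grid = [math.radians(d) for d in range(0, 181, step_deg)]
    best, best_err = None, float("inf")
    for a, b, g in product(grid, repeat=3):
        p = forward(a, b, g, base=middle_base)
        err = sum((pi - gi) ** 2 for pi, gi in zip(p, goal))
        if err < best_err:
            best, best_err = (a, b, g), err
    return best

front = tuple(math.radians(d) for d in (100.0, 30.0, 120.0))
angles = target_angles(front, front_base=(10.0, 5.0, 0.0),
                       middle_base=(0.0, 5.0, 0.0))
print([round(math.degrees(v), 1) for v in angles])   # approximate target angles

In the animal, of course, no such search is performed; the trained network maps the three input angles directly onto three output angles.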
Fig. 2.6. Modelling mechanically uncoupled leg movements with swing-net. a) Swing-net is an ANN consisting of three neuroids, the output of which sets the angular velocities of the TC- (dα/dt), CT- (dβ/dt) and FT-joint (dγ/dt). Each neuroid receives weighted input (solid circles) from external sensors that signal the current joint angles (α, β, γ) and obstacle contact (r1 to r4). A constant input models internal resting activation (1). Further inputs are the target angles (αt, βt, γt) from the target net. The control loop is closed via the leg itself, as movement changes the posture and, thus, the sensory input. The target net consists of six neuroids that receive sensory input signalling the current joint angles of the next anterior leg (α1, β1, γ1). b) Example trajectories of swing-net, simulating normal protraction (dotted line) and an avoidance reflex (dashed line) upon obstacle contact (arrow). The reflex trajectory matches experimental results, i.e. average tarsus positions at different times (solid circles with error bars; ± S.D.) after mechanical disturbance of a swing movement (data from [178]).
Given this hypothetical network for the determination of the endpoint of the swing movement, another net is required for the control of the actual swing movement. A simple, one-layered feed-forward net (Fig. 2.6, swing-net) with three output units and six input units can produce movements that closely resemble the swing movements observed in walking stick insects [49]. The inputs correspond to three angles defining the current leg configuration and three target angles defining the configuration desired at the end of the swing. In the simulation, the three outputs are set as the angular velocities of the joints, dα/dt, dβ/dt, and dγ/dt. During simulation, they are formally integrated (not shown in Fig. 2.6) to obtain the joint angles. In the animal, the movement of the leg itself can be considered as an integration of the muscle activities. The average frequency of action potentials during a motoneuron burst often shows a linear relation to the mean joint velocity, e.g., during running [199] and searching [195]. The actual angles resulting from the integration and any external disturbance are measured and fed back into the net. Through optimization, the network can be simplified to only 8 (front and middle leg) or 9 (hind leg) non-zero weights (for details see [50]). Despite its simplicity, the net not only reproduces the trained trajectories, it is able to generalize over a considerable range of untrained situations, demonstrating a further advantage of the network approach. Moreover, the swing-net is remarkably tolerant with respect to external disturbances. The learned trajectories create a kind of attractor to which a disturbed trajectory returns. This compensation for disturbances occurs
Fig. 2.7. Vector field representing the movement of the tarsus of a left front leg produced by the swing-net. Upper panel: projection of a parasagittal section, i.e., the x-z plane shown in the right figure (y = 12 mm). Lower panel: projection of a horizontal section, i.e., the x-y plane slightly below the leg insertion (z = -3 mm). Left is posterior, right is anterior. The average posterior extreme position (start of the swing movement) and the average anterior extreme position (end of the swing movement) are shown by an open square and a closed square, respectively.
because the system does not compute explicit trajectories, but simply exploits the physical properties of the world. The properties of this swing-net can be described by the 3D vector field in which the vectors show the movement produced by the swing-net at each tarsus position in the workspace of the leg. Fig. 2.7 shows the planar projections of one parasagittal section (a) and one horizontal section (b) through the workspace. The complete velocity vector fields are similar to the force vector fields shown by [22], the major difference being that force vectors would be expected to be related to joint acceleration rather than velocity. In a detailed study using genetic algorithms, Linder [143] could show that, for the middle leg, 7 non-zero weights are sufficient. Concentrating on six of these weights allows the swing-net to be interpreted as consisting of three negative feedback controllers, one for each joint, whereby the controllers of the thorax-coxa (α-) joint and of the coxa-trochanter (β-) joint are coupled via the 7th weight. All 18 weights showed small variability, with the exception of this 7th weight. It is responsible for varying the maximum height of the swing movement and might therefore be subject to a learning procedure that changes this weight depending on the ruggedness of the substrate. Levator reflex: The ability of swing-net to compensate for external disturbances permits a simple extension in order to simulate an avoidance behaviour observed in insects.
When a leg strikes an obstacle during its swing, it initially attempts to avoid the obstacle by briefly retracting and elevating and then renewing its forward swing from this new position [70, 88]. In the augmented swing-net, an additional input, similar to a tactile or force sensor, signals such mechanical disturbances (Fig. 2.6 a, r1). This unit is connected by fixed weights to the three motor units in such a way as to produce the brief retraction and elevation seen in the avoidance reflex. Quite interestingly, recent findings by Ebeling and Dürr [88] show that the avoidance reflex during swing movements of walking stick insects is independent of the kinematics of the swing movement. That is, it is context-insensitive, even if the swing direction differs considerably, for example during curve walking. Moreover, although the typical reflex action consists of combined retraction, levation and flexion, these default actions per joint were shown to occur independently of each other.
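In network terms the swing-net is small enough to write down almost literally. The following schematic re-implementation uses a simple diagonal weighting (each joint velocity driven by the difference between target and current angle) as a stand-in for the optimised weights of [50], which we do not reproduce here; the obstacle-contact input r1 adds a fixed retraction-and-elevation pattern, and integrating the three velocity outputs yields the joint angles.

import numpy as np

REFLEX = np.array([-1.0, 1.5, 0.0])   # invented retraction/elevation pattern (rad/s)

def swing_net_step(angles, targets, r1=0.0, w=3.0, dt=0.05):
    # one integration step: velocities from a weighted comparison of target
    # and current angles, plus the avoidance-reflex contribution r1 * REFLEX
    velocity = w * (targets - angles) + r1 * REFLEX
    return angles + dt * velocity            # integrate angular velocities

angles  = np.radians([ 60.0, 10.0, 140.0])   # current alpha, beta, gamma
targets = np.radians([110.0, 35.0, 100.0])   # supplied by the target net

for t in range(60):
    contact = 1.0 if t == 20 else 0.0        # brief obstacle contact mid-swing
    angles = swing_net_step(angles, targets, r1=contact)
print(np.degrees(angles).round(1))           # close to the target configuration

Because the controller is a fixed mapping from current state to velocity, a disturbance simply shifts the state to a new starting point from which the same vector field leads back to the target, which is the attractor property described above.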
Fig. 2.8. Leg searching movements of a stick insect walking on rough terrain. The experimental situation was such that a stick insect reached the edge of a bridge (front leg) or crossed a gap between two bridges (middle and hind leg). If foothold was lacking at the end of protraction, the leg performed characteristic searching movements that look different for each kind of leg. Average trajectories of the tibia-tarsus joint (heel) are plotted in a body-fixed coordinate frame, centred on the α-joint of the respective leg. Only the horizontal component of the movement is shown (x-y projection). (Fig. 1 from [86])
The described swing-net is able to produce, in a very simple way, swing movements as they can be found in freely walking animals. This neural network has been extended to account for a wider range of movements. Swing-net 2 [185] introduced a velocity factor which enforces steady movements by keeping the velocity constant. This leads to more natural movement dynamics and especially allows for searching movements, as can be found in the insect. An important adaptation of the swing movement can be observed in the stick insect when it does not find ground at the normal end of the swing movement. When the animal then steps into the hole, the leg performs a kind of search movement that consists of more or less regularly oscillating
movements (locust: [157]; cockroach: [74, 195]; stick insect: [131, 18, 80, 23]). In stick insects of the species Carausius morosus, searching movements were shown to be stereotypic in that the tarsus movements always follow characteristic trajectories [80]. Fig. 2.8 shows how these trajectories differ between front, middle and hind legs. In front legs, the tarsus loops are superimposed on a retraction of the thorax-coxa joint. In middle legs, this retraction component is missing; in hind legs it is reversed into a continued protraction. Thus, each leg appears to search progressively towards the body centre. Careful analysis of the leg kinematics revealed no sign of a switch from a swing movement to a searching movement. In fact, Dürr [80] showed that swing-net is able to model a swing movement with a terminal set of searching loops without the need for an additional controller (as had previously been suggested, e.g., by Espenschied et al. [90]). Rather, searching movements may be viewed as unterminated swing movements, with the oscillatory component reflecting the damping properties of the network controller.
2.4.2 Stance Movement
In insects, leg movement during stance can be divided into two subtasks: movement in the vertical direction (z-axis, see Fig. 2.7) and movement in the horizontal plane (x-y plane, see Fig. 2.7). The latter concerns forward and lateral movements; the former refers to the control of body height. Let us first look at the movements in the vertical direction. Measurements of the correlation between body height and vertical force in open-loop and closed-loop experiments indicate that body height is controlled by a proportional negative feedback system [58]. Dynamical studies in walking animals [58] have shown that the system reacts very fast (time constant smaller than 20 ms), but it remains open whether this reaction is due to passive viscoelastic properties or whether active elements contribute. Standing animals (stick insect: [133]; cockroach: [153]) have also been investigated, considering both the complete animal, i.e. with all legs contributing to height control, and individual legs [56]. Simulations of measurements of animals walking over obstacles of different shapes support the idea that each leg represents an independent height controller [42]; this could be shown directly for standing animals [56]. In addition, the angle of the thorax-coxa joint shows a proportional (= spring-like) change that depends on the form of the substrate [42]. In the complete controller structure for a walking animal, Walknet, height control is performed by a simple feedforward network (height net) that determines the required coxa-trochanter angle (β angle) individually in each leg, depending on the actual joint values of that leg. Considering leg movement in the x-y plane, stance trajectories can be described by a straight line when the animal walks tethered on a treadwheel, which is a mechanical necessity. This is different for free walking animals (Fig. 2.9), where stance trajectories are more or less curved when plotted in a body-fixed coordinate system [135]. To some extent, this corresponds to the body showing lateral oscillations even in straight walking. These oscillations result from the combined forces developed by the different legs that simultaneously perform a stance movement. These forces depend very much on the actual walking situation. Fig. 2.10 shows schematic time courses of the ground reaction
Fig. 2.9. Comparison of walking stick insects with Walknet: straight and curve walking. a) Stance trajectories of all six feet during forward walking in an unrestrained walking stick insect, Carausius morosus. Walking direction is toward the right. Solid circles mark the locations of the CT-joints. Trajectories are curved and vary from step to step. b) In comparison, stance trajectories of the Walknet simulation (top row, left column) are shorter and less variable, yet the curvature and the nearly connecting trajectories of middle and hind legs are in very good agreement with the stick insect data. Left and right columns show the impact of walking speed on stance trajectories. Middle and bottom rows depict the impact of altered yaw turning commands (yawref in Fig. 2.11). Turning direction is leftward, i.e. to the top of the diagrams. Trajectories are more variable than during straight walking and appear more curved.
Fig. 2.10. Schematic presentation of ground reaction forces measured in different walking situations: a) walking on a horizontal plane, b) walking on a horizontal path (width 30 mm), c) walking up a vertical path, d) walking when hanging from a horizontal beam. x-axis: positive forces propel the body in the forward direction; y-axis: positive forces point away from the body; z-axis: positive forces decrease the clearance, i.e. the distance between body and substrate.
forces for front, middle, and hind legs in different walking situations: walking on a horizontal plane, walking on a horizontal path where the tarsi grasp the margin of the substrate from the side, walking up a vertical path, and walking when hanging from a horizontal beam [41]. It is immediately obvious that the forces developed by the legs are quite different in different situations, which means that it is difficult to attribute a given function to a given
leg. Leg function rather appears to depend on the actual situation. For example, when walking on the horizontal plane, the front legs seem to act as feelers and do not provide significant forces, the middle legs seem to function like passive struts, and the hind legs appear to provide thrust. Quite in contrast, when walking up a vertical path, the front legs develop strong forces, in contrast to the hind legs. This also appears to be the case when walking on a heavy treadwheel [45]. It is not clear to what extent these different behaviours simply result from the changed geometrical situation with unchanged controller output (e.g., height control), and to what extent the controller adapts its output signals. Full, Blickhan and coworkers measured the ground reaction forces developed by walking cockroaches. They considered the total force produced by the complete animal [100, 101] and the forces produced separately by the single legs. The latter results were qualitatively similar to those found for the stick insect walking on a horizontal plane (Fig. 2.10 a), apart from the fact that in stick insects, and in this specific situation, the front legs produce much less force than in cockroaches. Ting et al. [194] used the single-leg forces to compute the position of the centre of pressure, which was shown to perform periodic movements around the centre of mass. The type of controller that governs motor output during stance is still disputed. Apart from position PD control, many different control principles like velocity control [68], force control, positive force feedback [165, 164] and positive velocity feedback [51] have been proposed. An attractively simple solution is discussed by Full and Blickhan (see [128], and the above references) and has been investigated by theoretical methods [175]. These results show that an extension of the spring-loaded inverted pendulum model [25], which can explain the mechanics of legged locomotion in the sagittal plane, can also be applied to insects using a sprawled leg posture. This lateral leg-spring model describes the stability of the walking animal in the horizontal plane. It could be shown that the passive properties of the model are sufficient to cope with disturbances such as a lateral push to the body, without the need for any active control system. Other results indicate that, at least in slow walking, sensory feedback plays an important role. In the stick insect, Bartling and Schmitz [6] measured the ground reaction forces while the animal walked along a horizontal path. In the critical experiment, the platform on which the leg stepped was moved by a short distance while the reaction forces were still being measured. The results are not in accord with the idea of positive feedback, but indicate that the disturbance is counteracted by a negative feedback system. However, this counteraction can only be observed during the dynamical part of the disturbance. The detailed response appeared to depend on the compliance of the substrate. To investigate this in more detail, Cruse et al. [53] looked at the reaction to disturbances in standing animals. The results were interpreted such that the joints are under integral position control when the substrate is very soft, but show properties of a proportional controller for stiffer substrates. If the substrate is stiff enough, the feedback system shows properties of a D-controller. As an aside, in these experiments a reflex was found that had also been observed in other animals (cockroach, locust; see [205]).
When a joint is stretched too far, a short step is performed, the function of which appears to be to move the joint away from its mechanical limits.
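Height control, characterised above as a proportional negative feedback system acting through the β angle of each leg, can be sketched in a few lines. The gain and the linearised geometry below are invented for illustration; the actual height net maps all joint angles of a leg nonlinearly onto the required β angle.

import math

def height_control_step(beta, clearance, desired=10.0, gain=0.4, l_femur=10.0):
    # per-leg proportional height controller: each leg corrects its own
    # beta angle from its own local clearance error; no information from
    # other legs is used (illustrative constants, linearised geometry)
    error = desired - clearance
    d_beta = gain * error / l_femur            # proportional correction of beta
    return beta + d_beta, clearance + l_femur * d_beta

beta, clearance = math.radians(20.0), 6.0      # body too low after a disturbance
for _ in range(8):
    beta, clearance = height_control_step(beta, clearance)
    print(round(clearance, 2))                 # converges towards 10 mm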
Force distribution problem: A completely different solution for the control of stance is the above-mentioned positive velocity feedback. This idea is supported by results of Bässler [11, 16] and Schmitz et al. [177] on stick insects as well as by results of DiCaprio and Clarac [78] and Sillar et al. [188] on crustaceans, who described the reversal of a reflex from a resistance reflex to an assistance reflex (positive feedback) when the animal changes from a passive to an active state. This idea can be applied to the complete animal: positive feedback could have the interesting advantage that coordination between the different joints of one leg, and even of neighbouring legs, is not necessary on the neuronal level. Instead, the mechanical coupling already present would suffice. Kinematic simulation studies using the principle of positive velocity feedback were successful [52]. Moreover, the same approach was successfully applied to a real planar, two-joint manipulator in a crank-turning task [183] and also proved feasible in a dynamics simulation of a walking 3D test leg [184]. This so-called sLPVF controller switches from positive feedback to compliant motion depending on whether the joint produces positive or negative mechanical power, respectively. This approach therefore allows for an extreme simplification of the control architecture. The solution can also cope with sudden changes in morphology (e.g., the loss of part of a leg). All the mentioned approaches look at the joints on a local level, without requiring knowledge about the state of another leg, and thereby circumvent the main difficulty of controlling the stance movement. The task of controlling the stance movements of all the legs on the ground actually poses several major problems. To avoid interaction forces between the legs, it is not enough to simply specify a movement for each leg on its own: the mechanical coupling through the substrate means that efficient locomotion requires coordinated movement of all the joints of all the legs in contact with the substrate. Thus, a total of 18 joints must be controlled simultaneously when all legs of an insect are on the ground. However, the number and combination of mechanically coupled joints varies from one moment to the next, depending on which legs are lifted. The control of rotational joints is a nonlinear task, particularly when the rotational axes of the joints are not orthogonal, as is often the case for insect legs and for the basal leg joint in particular. A further complication occurs when the animal negotiates a curve, which requires the different legs to move at different speeds. In walking machines, these problems are often solved using traditional, though computationally costly, methods, which consider the ground reaction forces of all legs in stance and seek to optimize some additional criteria, such as minimizing the tension or compression exerted by the legs on the substrate. Due to the nature of the mechanical interactions, and because of the search for a globally optimal control strategy, such algorithms require a single, central controller; they do not lend themselves to distributed processing. This makes real-time control difficult, even in the still simple case of walking on a rigid substrate. For a slow and inexact biological network this calculation poses an even larger problem. Further complexities arise in more complex, natural walking situations, making a solution difficult even with high computational power.
These occur, for example, when an animal or a machine walks on a slippery surface or on a compliant substrate, such as the leaves and twigs encountered by stick insects. Any flexibility in the suspension of the joints further raises the number of degrees of freedom that must be
considered, increasing the complexity of the computation. Further problems for an exact, analytical solution occur when the length of leg segments changes during growth or their shape changes through injury. In such cases, knowledge of the geometrical situation is incomplete, making an explicit calculation difficult, if not impossible. Despite the evident complexity of these tasks, they are mastered even by insects with their "simple" nervous systems. Hence, there has to be a solution that is fast enough for on-line computation to be possible even for slow neuronal systems. How can this be done? Several authors [29] have pointed out that some relevant parameters do not need to be explicitly calculated by the nervous system because they are already available in the interaction with the environment. This means that, instead of an abstract calculation, the system can directly exploit the dynamics of the interaction and thereby avoid a slow, computationally exact algorithm. To solve the particular problem at hand, we propose to replace a central controller with distributed control in the form of local positive displacement feedback [50], as introduced above. Compared to earlier versions [51], this change permits the stance-net to be radically simplified. The positive displacement feedback occurs at the level of single joints: the position signal of each joint is fed back to control the motor output of the same joint (Fig. 2.11, stance-net). How does this system work? As a thought experiment, consider an initially stationary system in which one joint begins to move actively. Then, because of the mechanical connections, all other joints begin to move passively, but in exactly the proper way to maintain the mechanical integrity of the system. Thus, the movement direction and speed of each joint do not have to be computed explicitly, because this information is already provided by the physics. In each joint, the angular change can be registered by appropriate sense organs (Sect. 2.3), and this value can then be used as a command to move this joint in the following time step by the same amount. This corresponds to a positive displacement feedback that transforms the passive movement into an active movement. This command does not provide a mathematically exact solution. Rather, it provides an approximation that becomes better the smaller the chosen time step is. Of course, if a single, random joint is selected for the active movement, the direction in which the body moves will probably not be straight forward, but will vary depending upon the joint chosen. Such irregular movements are avoided because (i) at the start all legs initiate forward, propulsive forces (as is the case in insects [57]) and (ii) a global supervisory system (see below) controls the walking direction. There are, however, several problems to be solved. The first is that positive displacement feedback using the raw position signal may lead to an exponential increase in movement speed, as opposed to the nearly constant walking speed that is usually desired. This problem can be solved by introducing a kind of band-pass filter into the feedback loop. The effect is to make the feedback proportional to the angular velocity of the joint movement, not the angular position. In the simulation, this is done by feeding back a signal proportional to the angular change over the preceding time interval. The second problem is that using positive displacement feedback for all three leg joints leads to unpredictable changes in body height.
Body height of the stick insect is controlled by a distributed system in which each leg acts like an independent, proportional controller with nonlinear characteristics (walking animal: [42, 58]; standing animal: [56]).
Fig. 2.11. Modelling joint coordination during stance, using positive displacement feedback. The stance-net consists of two output units that set the angular velocity of the TC-joint (dα/dt) and FT-joint (dγ/dt) during stance. As the CT-joint mainly affects the height of the body relative to its feet, it is controlled by a separate ANN (height-net in Fig. 2.13 b). The stance-net receives sensory input from proprioceptors signalling angular velocities, i.e., closing a positive displacement feedback loop. It also receives central commands that determine the velocity of forward translation (vref · vmod) and yaw rotation (yawref). Both of these signals are subtracted from a corresponding sensory input signal, thus closing negative feedback loops. A 'walking on' signal sets a minimum retraction velocity (via a threshold operator 'max') to stabilise retraction against disturbances of the positive feedback signal dα/dt. In addition, one of the coordination influences can be modelled on this level: coordination rule 5 can be partially modelled by adding a co-activation signal to the sensory input. Passive movements imposed on the TC- or FT-joint are sensed by the stance-net and transformed into an active movement, implementing a local assistance reflex.
However, maintaining a given height via negative feedback appears at odds with the proposed local positive feedback for forward movement. How can both functions be fulfilled at the same time? To solve this problem we assume that during walking positive displacement feedback is provided for the α joints and the γ joints (Fig. 2.11, stance-net), but not for the β joints. The β-joint is the major determinant of the animal's clearance, i.e., the body height. A third problem inherent in using positive displacement feedback is the following. Let us assume that a stationary insect is pulled backward by gravity or by a brief tug from an experimenter. With positive feedback control as described, the insect should then continue to walk backwards even after the initial pull ends. This has never been observed. Therefore, we assume that a supervisory system exists that is not only responsible for switching the entire walking system on and off, but also specifies walking direction (normally forward for the insect). This influence is represented by applying a small, positive input value (Fig. 2.11, "walking on") that replaces the sensory signal whenever it is larger than the latter (the box "max" in Fig. 2.11, stance-net).
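Taken together, these assumptions suggest a per-leg update of roughly the following form. This is a loose sketch of the scheme in Fig. 2.11, with all gains, signs, and the explicit height controller invented for illustration; the published stance-net and height-net are ANNs, not these formulas.

    def stance_step(sensed, refs, walking_on=0.05, k_height=0.5):
        """One illustrative update of the stance controller for a single leg.

        sensed: dict with angular velocities 'dalpha', 'dgamma', the body
                height 'height' and the yaw rate 'yaw_sens'.
        refs:   dict with central commands 'v_ref', 'yaw_ref', 'height_ref'.
        """
        # alpha joint: positive velocity feedback, with the 'walking on'
        # signal replacing the sensory value whenever it is larger ('max')
        dalpha = max(walking_on, sensed['dalpha'])
        # superimposed negative feedback loops for translation and yaw
        dalpha -= (sensed['dalpha'] - refs['v_ref'])
        dalpha -= (sensed['yaw_sens'] - refs['yaw_ref'])
        # gamma joint: plain positive velocity feedback
        dgamma = sensed['dgamma']
        # beta joint: no positive feedback; negative feedback on body height
        dbeta = -k_height * (sensed['height'] - refs['height_ref'])
        return dalpha, dbeta, dgamma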
To test whether this stance controller is able to contribute in a sensible way to walking, a simulation of the complete walker has to be performed in the future, in contrast to the swing-net, which can be tested independently. Before this can be done, two additional questions have to be considered: the control of changes between the states of swing and stance, and the spatio-temporal coordination of the quasi-rhythmic movement of (neighbouring) legs. However, it can already be stated that the principle of positive velocity feedback to couple different legs without neuronal connection is very much related to the coupling of different individuals (e.g., ants) to carry a common load without requiring a central command. Successful application of this principle would mean that the control of legs in stance could be achieved by an extremely decentralised control architecture, as there are only independent joint controllers that are coupled solely by the mechanics of the legs. Coordination of different legs is, however, necessary at a higher level, as will be shown below (see Coordination of different legs, Sect. 2.5).
We have now discussed the control of a leg during swing and during stance, but not yet addressed the important question of how the system can switch between these two states. The targeting behaviour clearly indicates that information concerning the anterior extreme position, i.e., the point where the leg may switch from swing to stance, is provided by the anterior leg (or possibly the antenna in the case of the front leg). However, as this information may not be reliable and is not used in all cases, additional information may be required. An extreme case occurs in curve walking, where the stance direction of the front legs is rotated by some 70° or more [83]. Although the distance between the front leg lift-off position (PEP) and the middle leg touch-down position (AEP) is much larger than during straight walking, the scatter of the AEP remains very small. Ebeling and Dürr [88] have interpreted this as an indication that the targeting mechanism is still at work, thus causing the small scatter, but with a target location that is counter-rotated with respect to the front leg stance direction.
The transition from swing to stance is usually assumed to be simply triggered by a ground contact sensor and/or a load sensor. This means that, as soon as the leg is under load, it switches to stance mode [201, 76, 2]. This has been shown by Wendler [201], who replaced the tibia and part of the femur by an artificial prosthesis. The animal continued to perform appropriate swing and stance movements. Neurophysiological results supporting this view [2] will be described in detail below. However, there might be a problem because, as mentioned in the case of avoidance reflexes, loading the leg leads to a levator reflex, for example, but not to a change from swing to stance. Experimental results with stick insects have shown that there is also an internal, "motivational" state that is used to interpret sensory input during swing: the same physical stimulus is interpreted as an obstacle or as ground, depending on the level of "swing motivation". How is a "swing motivation" produced? Experimental results showed that it is neither simply the position of the leg nor the time elapsed since the beginning of the swing. Experimental results [52] suggest that it is the distance between front and middle leg which determines the "motivation" of the middle leg to accept a stimulus as ground contact.
A neuronal circuit that is able to describe these effects is given by Cruse et al. [52]. What is actually known concerning the switch from stance to swing? This switch happens as soon as a given position (the posterior extreme position, PEP) is reached. This has been shown for insects, crustaceans and the cat, and is used in most robots.
Fig. 2.12. Selector-net. This net determines whether swing-net or stance-net has access to the motor output. Inputs: GC, ground contact; PEP, threshold determining the posterior extreme position.
However, the loading conditions of the leg also play an important role (see Chapter 8). A leg under load is prevented from beginning a swing [155, 13]. Furthermore, a sudden deloading can elicit a swing movement, provided that a given position has been reached [46]. How could these sensory inputs — load on the leg and position of the leg — influence the state of the leg? As only limited knowledge is available concerning the underlying neuronal system [17], a simple circuit has been proposed based on theoretical considerations [55]. This "Selector-net" receives two inputs, ground contact (or load) and leg position (Fig. 2.12). Both inputs influence, with opposite signs, two units that represent the swing mode and the stance mode, respectively. In the simple version described here, these units take only Boolean values (0 or 1). Each of these units receives positive feedback onto itself, which stabilizes the decision made, swing or stance, to some extent. The outputs of these units control whether swing-net or stance-net determines the actual motor output. Fig. 2.13 b shows how these modules, which, apart from the PEP-net, have been described above, are connected. In terms used for neuronal networks, this arrangement could be called a system containing two expert nets, swing-net and stance-net, and a gating net, the selector-net. In a later expansion ("Analogous Selector", see Schilling et al. [174]), the activations of these two units take analogue values and might also be interpreted as representing swing motivation and stance motivation, respectively.
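In its Boolean form, the selector-net can be paraphrased in a few lines. The weights below are illustrative assumptions; only the structure (opposite-signed sensory inputs and self-excitation of two mutually competing units) is taken from the text.

    def selector_step(swing, stance, ground_contact, past_pep, w_self=0.6):
        """One Boolean update of the selector-net (illustrative weights).

        ground_contact: 1 if the leg senses ground contact or load, else 0.
        past_pep:       1 if the leg position has passed the PEP threshold.
        The two inputs act with opposite signs on the swing and stance
        units; positive self-feedback stabilises the current decision.
        """
        swing_drive = w_self * swing - ground_contact + past_pep
        stance_drive = w_self * stance + ground_contact - past_pep
        new_swing = 1 if swing_drive > stance_drive else 0
        return new_swing, 1 - new_swing

    # Gating: motor_output = swing * swing_net(...) + stance * stance_net(...)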
2.5 Coordination of Different Legs

How could these leg controllers, which essentially consist of a swing-net, a stance-net and a selector-net, be coupled? A possible solution for the coordination of those legs that are actually performing a stance movement has been described above. Another important task is to coordinate the changes from stance to swing, because this transition dramatically influences the stability of the walking system.
Fig. 2.13. Implementation of coordination rules in Walknet. a) Movements of each leg are controlled by separate single-leg controllers (boxes labelled as in Fig. 2.17). Each leg receives information from its ipsilateral and contralateral neighbours (dotted arrows), affecting the likelihood of a stance-swing transition. Moreover, global commands from a higher control level (dashed arrows) model descending information from the brain. b) Each single-leg controller (e.g., R2) consists of several ANN modules. The swing-net and stance-net generate leg movements during protraction and retraction, respectively. Their outputs set the joint angular velocities of the three leg joints (TC, CT and FT, equivalent to α, β and γ). They are gated by the selector-net in a mutually exclusive manner. The state of the selector-net depends on sensory input from the leg itself (half-circles inside the grey area indicate local sensory information) and on the summed effects of the PEP-net, coding the distance to the normal PEP, and weighted information corresponding to coordination rules 1 to 3 (dotted arrows). Thus, rules 1 to 3 affect the stance-swing transition. Rule 4 acts via sensory information from the next anterior leg (dotted half-circles) that is transformed into a target posture by the target-net (see also Fig. 2.6 a). Thus, rule 4 affects the swing-stance transition. The height-net controls the body clearance and affects only the CT-joint during stance. Except for the input to the target-net, all sensory inputs are local. The stance-net receives three global commands (dashed arrows), controlling body velocity (v), yaw (y) and forward walking (w).
Traditional solutions apply – in the case of a hexapod – a fixed alternating tripod gait: the front and hind legs of one side move together with the middle leg of the other side. This has often been termed the typical insect gait (Fig. 2.19). It should be mentioned here that the term tripod gait is used in two ways differing in detail. According to the strictest definition, at each moment in time three legs are lifted and three legs are on the ground. This means that swing duration and stance duration are the same for a given walking speed, but depend on walking speed. In contrast, in adult stick insects, for example, swing duration is always the same, independent of walking speed. Therefore, Graham [107] defined the tripod gait such that three legs swing together. Thus, in the stick insect tripod there are periods where all six legs are on the ground. Many
insects show an intermediate situation: swing duration depends on walking speed, but is shorter than stance duration. Therefore, in these cases too there are periods where all six legs are on the ground. Different from any of these cases, slowly walking insects use the so-called tetrapod gait (Fig. 2.19): at least four legs are on the ground at any moment in time [107, 108].
The terms tripod gait and tetrapod gait may be misleading. As has been shown by Graham [107], there are no separate categories of gaits. Rather, there is a continuum depending on walking speed. The higher the speed (and probably the smaller the load), the more the tetrapod gait resembles a tripod gait. There is another problem with the terms tripod and tetrapod gait: both imply that the cyclic movement of the legs can be described by fixed phase relations, and therefore fixed phase shifts have been used in the control of walking robots. However, in walking animals such fixed gait patterns can only be observed in continuously undisturbed walking situations. In very slow walking — think of a grazing horse, for example — no fixed phase values can be observed. A closer inspection has shown that gait controllers in animals are better described as free gait controllers. Nevertheless, such a controller produces an apparently regular gait when the walking speed is not too slow and when no disturbances occur.
The basic results supporting this view have long been known. v. Holst showed for the control of fish fins, for the legs of walking dogs [114] and for insects [115] that each leg or fin comprises an oscillator with its own stepping period, and that these oscillators can be coupled more or less strongly. This has later been shown for the legs of stick insects [201] and lobsters [36]. Strong coupling leads to fixed phase relations, termed absolute coordination by v. Holst, whereas weak coupling still shows a preferred phase value, but other phase values may also occur ("relative coordination").
How are these legs coupled? The underlying control system has been studied in detail for insects, crayfish and the cat. There are different local rules that are active between directly neighbouring legs. Three such rules have been described for the crayfish and five similar rules, differing in detail, for the stick insect (see Fig. 2.2; [47]); for the cat, four mechanisms have been described [60]. Fig. 2.14 shows the effect of a brief interruption of the stance movement of one leg in the crayfish. The leg resumes normal coordination by shortening or prolonging the swing and/or stance movements of some of the neighbouring legs. To illustrate the evaluation procedure, two responses to disturbances of legs 3 and 4 are shown.
Fig. 2.14. Two plots showing the behaviour of crayfish leg movement after a brief disturbance (prolongation of stance of leg 4). Depending on the timing of the disturbance, shortening and prolongation (a) and shortening (b) of steps of neighbouring legs can be observed. Legs are numbered from 2 to 5. Upward lines show swing movements. The abscissa denotes time.
The situations in which prolongation of a swing occurs are presented in Fig. 2.16 in a form similar to a phase-response curve. The sketches below the abscissa symbolize the rhythmic movement of the two legs in a normally coordinated walk (solid lines). The values on the abscissa are given as absolute values rather than as relative phase, as is usual in a phase-response curve. The ordinate does not show the absolute duration, but rather the difference relative to the swing duration of a normal step. Thus, the zero value corresponds to an unchanged swing.
Fig. 2.15. Two examples of phase response curves from data for legs 3 and 4. Reference leg is leg 4 (a) or leg 3 (b). Ordinate shows change in duration of swing (see bold lines in the lower inset figures). Abscissa shows beginning of measured swing relative to reference point (t0).
Leg Coordination in Crayfish: A Simple Case

How can a system be constructed to produce a stable spatiotemporal pattern and, at the same time, to tolerate disturbances? To investigate the properties of such a system, its reaction to different disturbances, such as brief interruptions of the stance movement of one leg, has been observed. As the situation in crayfish appears to be simpler than in the stick insect, we will first concentrate on leg coordination in crayfish.
In Fig. 2.15 a, leg 4 is chosen as the reference leg, with the reference point (t0) being the end of the swing movement. As depicted in Fig. 2.15 a, the changes in swing duration are very large when the moment at which the swing of leg 3 would have ended, had it not been influenced, occurs just after the AEP of the posterior leg. The prolongation of the swing duration decreases with decreasing lag between the start of the swing and the reference point. Had the slope of the points shown in Fig. 2.15 a reached -1.0, the prolongation would have completely compensated for the wrong phase value. Two such deviating steps are schematically shown below the abscissa (dashed lines). Fig. 2.15 b depicts results in which the duration of the swing is either shortened or remains almost the same. In this figure, leg 3 is chosen as the reference leg, with the reference point (t0) being the end of the stance movement. The duration of the swing seems to be prolonged to some extent when the posterior leg starts its protraction "too early". The duration of the swing is clearly shortened when it is started "too late" relative to a normal step (see dashed lines in the schema below the abscissa).
Fig. 2.16. Coordination between ipsilateral legs in crayfish. a) Rostrally directed influence prolongs the swing movement; b) caudally directed influence shortens the swing movement. The left part of each figure shows the body and the range of movement of two neighbouring legs (upward is anterior). The right part indicates the movement of the reference leg; the posterior leg in (a), the anterior leg in (b). The behaviour of the other leg is schematically shown by plotting several traces with different phase shifts. The wedges and arrows indicate the coordinating influences.
The remainder of the compensation is accomplished by changing the duration of the following stance movement, as indicated by the schematic below the abscissa. The collection of a large data set led to the interpretation that two coordinating rules are active in the crayfish. One is directed rostrally: it prolongs the swing of a leg as long as its posterior neighbour performs a stance (see Fig. 2.15 a). The other is directed caudally: when the anterior leg is close to the end of its stance or at the beginning of its swing phase, this second influence has the effect of ending the swing of the posterior leg and beginning its stance (see Fig. 2.15 b). This is summarized in Fig. 2.16. A third influence, acting between each contralateral pair of legs, will not be considered here (see [47]).

Rules for Leg Coordination between Legs in Stick Insects

For the stick insect, six different coupling rules have been found in behavioural experiments. These are summarized in Fig. 2.17 a. One influence (6 in Fig. 2.17 a) serves to correct errors in leg placement; another (5) distributes the propulsive force among the legs. These will not be considered in the following section. Only the remaining four are used in the present model. The beginning of a swing movement (PEP) is shifted by three rules arising from ipsilateral legs (Fig. 2.17 b): (1) a forward-directed inhibition during the swing movement prevents lift-off of the next anterior leg, (2) a forward-directed excitation soon after the beginning of active retraction supports lift-off of the next anterior leg, and (3) a rearward-directed influence depending upon the position of the next rostral leg increases the chance of lift-off with increasingly posterior position. Influences (2) and (3) are also active between contralateral legs. The end of the swing movement (AEP) in the animal is modulated by a single, caudally directed influence (4) depending on the position of the next rostral leg. This rule is responsible for the targeting behaviour, i.e., the placement of the tarsus at the end of a swing close to the tarsus of the adjacent rostral leg. Influence (4) affects the touch-down location set by the swing-net, as described above. Influences 1, 2 and 3 are implemented as incoming scalar values that are simply summed and shift the threshold characteristic (PEP). The characteristic determines how much a standard stance movement is shortened or prolonged; this is illustrated by the summation element in the selector-net of Fig. 2.13 b.
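As a sketch, the summed rule influences and the resulting shift of the PEP threshold could look as follows. The weights and the sign convention (legs assumed to retract toward smaller position values) are assumptions for illustration only.

    def pep_threshold(pep_default, rule1, rule2, rule3, w=(1.0, 1.0, 1.0)):
        """Shift the PEP threshold by the summed coordination influences.

        rule1 inhibits lift-off while the anterior neighbour swings
        (prolonging stance); rule2 and rule3 promote lift-off. With legs
        retracting toward smaller position values, inhibition lowers the
        threshold and excitation raises it.
        """
        return pep_default - w[0] * rule1 + w[1] * rule2 + w[2] * rule3

    def ready_to_swing(leg_position, pep_default, rule1, rule2, rule3):
        # the stance ends once the leg has retracted past the (shifted) PEP
        return leg_position <= pep_threshold(pep_default, rule1, rule2, rule3)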
Fig. 2.17. a) Summary of the coordination rules operating between the legs of a stick insect [47]. The leg controllers are labelled R and L for right and left legs and numbered from 1 to 3 for front, middle, and hind legs. The different mechanisms (1 to 6) are explained in the text. b) – d) Illustration of the effects of rules 1, 2 and 3, respectively.
The strength of the different coordination influences depends on the current behavioural context [83, 81]. The inter-leg influences are mediated in two parallel ways. The first pathway comprises the direct neural connections between the step pattern generators. The second pathway arises from the mechanical coupling among the legs: the activity of one step pattern generator influences the movements and loading of all legs and therefore influences the activity of their step pattern generators via sensory pathways. This combination of mechanisms adds redundancy and robustness to the control system of the stick insect [69].
In a modelling study using Walknet, Kindermann [135] studied the complete six-legged walking system by investigating its behaviour in different, specific situations. To permit the system to control straight walking and to negotiate curves, he introduced a supervisory system that, in a simple way, simulates the optomotor mechanisms for course stabilisation that are well known from insects and have also been applied in robotics. This system can also compensate for unbalanced coupling factors or other inequalities between right and left legs. The supervisory system uses information on the rate of yaw ("yawsens", Fig. 2.11 b, stance-net), such as visual movement detectors might provide.
Fig. 2.18. Simulated walk by the basic six-legged system with negative feedback applied to all six β joints and positive displacement feedback applied to all α and γ joints, as shown in Fig. 2.11. Movement direction is from left to right (arrow). Leg positions are illustrated only during stance and only for every second time interval in the simulation. Each leg makes about five steps. Upper part: top view; lower part: side view. (a) Straight walking (yawref = 0). (b) Curved walking (yawref ≠ 0).
It is based on negative feedback of the deviation between the desired turning rate and the actual change in heading over the last time step. The error signal controls additional inputs to the α joints of the front and hind legs, with magnitudes proportional to the deviation and opposite signs for the right and left sides. In earlier versions, this bias was given to the front legs only; much better behaviour is obtained when the bias is also given to the hind legs. With this addition and yawref set to zero, the system moves straight (Fig. 2.18 a), with small side-to-side oscillations in heading such as can be observed in walking insects [135]. To simulate curve walking (Fig. 2.18 b), a small positive or negative bias is added to the reference value, which determines curvature and turning direction. However, as mentioned above, this simulation did not apply adaptive coupling strengths, in contrast to what has been found in real animals.
Finally, we have to address the question of how walking speed is determined when such a positive displacement feedback controller is used. Again, a central value is assumed which represents the desired walking speed vref. This is compared with the actual speed, which could be measured by visual inputs or by monitoring leg movement. The error signal is subject to a nonlinear transformation and then multiplied with the signals providing the positive feedback for all α and γ joints of all six legs (Fig. 2.11 b, stance-net). Therefore, another way to describe the function of the stance-net is the following: the complete system is regarded as a negative feedback system controlling walking speed (vref). Within this system, the positive feedback signals provide the gain factors for the individual joints. Like the earlier version [51], the presented model shows proper coordination of the legs for walks at different speeds on a horizontal plane.
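A compact way to express these two supervisory loops is sketched below. The gain, the crude rectifying nonlinearity, and the function name are assumed for illustration and are not taken from Kindermann's implementation.

    def supervisory_commands(yaw_ref, heading_change, v_ref, v_actual, k=0.5):
        """Yaw and speed supervision as described in the text (sketch).

        Yaw: the deviation between the desired turning rate and the actual
        heading change biases the alpha joints of front and hind legs,
        with opposite signs on the two body sides. Speed: the speed error,
        passed through a nonlinearity (here a simple rectifier), multiplies
        the positive feedback signals, acting as a joint gain.
        """
        yaw_error = yaw_ref - heading_change
        bias_right = +k * yaw_error        # added to right front/hind alpha joints
        bias_left = -k * yaw_error         # opposite sign on the left side
        gain = max(0.0, v_ref - v_actual)  # scales all positive feedback signals
        return bias_left, bias_right, gain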
Steps of ipsilateral legs, i.e., legs on the same side of the body, are organized in triplets forming "metachronal waves", which proceed from back to front, whereas contralateral legs, i.e., the legs on opposite sides of each segment, step approximately in alternation. With increasing walking speed, the typical change in coordination from the tetrapod to a tripod-like gait is found. For slow and medium velocities the walking pattern corresponds to the tetrapod gait, with four or more legs on the ground at any time and diagonal pairs of legs stepping approximately together; for higher velocities the gait approaches the tripod pattern, with front and rear legs on each side stepping together with the contralateral middle leg. This is possible because swing duration is constant [108]. Simulation results are shown in a "bar code" plot (bars indicate stance, gaps indicate swing) in Fig. 2.19. Fig. 2.20 shows an example of a tetrapod coordination pattern. With higher speed, stance duration shortens; this gradually changes the gait from a tetrapod to a tripod coordination. For a set velocity, the coordination pattern is very stable. For example, when the movement of one leg is briefly interrupted during the power stroke, the normal coordination is resumed immediately at the end of the perturbation. Furthermore, the model can cope with obstacles higher than the normal body clearance. It continues walking when a leg has been injured such that, for example, half of the tibia is removed.
Unexpectedly, the following interesting behaviour was observed in the simulation. A massive perturbation, for example clamping the tarsi of three legs to the ground, can make the system fall (Fig. 2.21). Although this can lead to extremely disordered arrangements of the six legs, the system was always able to stand up and resume proper walking without any help (for details see [135]). This means that the simple solution proposed here also eliminates the need for a special supervisory system to rearrange leg positions after such an emergency. Further improvements of Walknet have been introduced by Roggendorf [173], who tested the limits of the system when walking over obstacles at different walking speeds and included a comparison with other solutions. Recently, Linder [142] tested the ability of hexapods to self-organize leg controllers and their coordination rules from scratch, applying a biologically inspired, simple fitness function but using a sufficiently complex environment the system has to cope with. In this investigation, Linder could show that pure mechanical coupling of the legs is already sufficient to produce a reasonably well-coordinated gait.
Apart from these coordination rules that are directly derived from biology, there are other interesting solutions that have been invented by engineers and are only partly based on biological information. The robot Hamlet [91] uses a kind of proximity function that determines the lifting of a leg. Another quite simple but effective method has been proposed by Porta and Celaya [162]. According to this rule, a leg starts a swing if it is nearer to its physical extreme position than its neighbouring legs, provided no neighbour is swinging. This leads to good walking behaviour as long as the walker moves at high speed. To also cope with slow walking speeds, Roggendorf added the following expansion: a leg can only swing if its position is behind a given threshold.
If the leg is behind the threshold but the PC (Porta and Celaya) condition is not fulfilled, the leg votes for a smaller walking speed, depending on its distance to its physical extreme position. The leg with the slowest vote determines the overall walking speed. With this expansion, Roggendorf [173] showed that the behaviour is even better, i.e., more stable when walking over irregular ground, than when the stick insect rules 1-4 are applied.
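A direct reading of the Porta and Celaya rule, together with Roggendorf's speed vote, might look like this. The Leg record and the linear vote are illustrative assumptions; only the decision logic follows the text.

    from dataclasses import dataclass

    @dataclass
    class Leg:
        position: float       # current retraction position
        dist_to_limit: float  # distance to the physical extreme position
        swinging: bool

    def may_start_swing(leg, neighbours, threshold):
        """Porta and Celaya's rule plus Roggendorf's position threshold."""
        if any(n.swinging for n in neighbours):
            return False                  # no neighbour may be swinging
        if leg.position > threshold:
            return False                  # Roggendorf: not yet far enough back
        # swing only if closer to the extreme position than all neighbours
        return all(leg.dist_to_limit < n.dist_to_limit for n in neighbours)

    def voted_speed(legs_with_neighbours, threshold, v_max):
        """Legs that are behind the threshold but may not yet swing vote for
        a smaller speed; the slowest vote wins (Roggendorf's expansion)."""
        votes = [v_max * leg.dist_to_limit
                 for leg, nbrs in legs_with_neighbours
                 if leg.position <= threshold
                 and not may_start_swing(leg, nbrs, threshold)]
        return min(votes, default=v_max)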
Fig. 2.19. Coordination rules 1 to 4 are sufficient to generate typical insect gaits. Based on a model proposed by Dean [63, 64, 66, 67], a precursor model of Walknet [149] incorporated the leg coordination rules of Fig. 2.17. Like all subsequent versions, this model showed velocity-dependent gait changes that are characteristic of insect walking. a) If the reference walking velocity is high, the system generates a tripod gait with alternating tripods of L2-R1-R3 (dashed lines 1) and L1-L3-R2 (dashed lines 2). The black bars mark the stance periods of each leg, i.e., when a given leg is on the ground. b) If the reference walking velocity is low, the system generates a tetrapod gait in which at least four legs are always on the ground. The tetrapod gait is characterised by a back-to-front wave of stance movements on the right (dashed lines 1) and left side (dashed lines 2). Identical footfall patterns can also occur repeatedly in a tetrapod gait (dashed lines 3 in b), but they typically drift from one step to the next. Back-to-front waves can also be found in a tripod gait (dashed lines 3 in a), but the stance periods of adjacent legs overlap less than in the tetrapod gait.
Curve walking is a special problem that has been investigated in most detail in stick insects [125, 126, 83, 82]. Implementation of the results into Walknet is still in progress, but it should be mentioned that the principle of positive velocity feedback control addressed earlier has proven to be a good basis for the control of curve walking, while at the same time avoiding unnecessary forces across the body. Another way to avoid such forces is to apply force feedback, which, however, is a complex task. But even the search for common results among the many studies on turning and curve walking is a complex task.
Fig. 2.20. Leg coordination using a tetrapod gait. Leg movement is shown such that an upstroke describes a swing and a downstroke describes a stance movement. For one leg, the left middle leg, a second trace (bold lines) illustrates the effect of the coordinating influences. This trace shows the actual PEP threshold value of this leg (Fig. 2.13 b). The transition from stance to swing is elicited as the leg reaches this position. The legs are labelled R and L for right and left and numbered from 1 to 3 for front, middle, and hind legs [52, 135].
Fig. 2.21. Righting behaviour. (a) By clamping the tarsi to the ground (arrowheads), the system is made to fall, leading to disordered arrangement of the legs (b). Nevertheless, the system stands up without help and resumes proper walking (c). The large arrow indicates walking direction. Upper panels: top view; lower panels: side view.
One reason for this may be that different studies have analysed turning in very different behavioural contexts, including the optomotor response to visual large-field motion [211, 126, 59, 83], visual tracking of small objects [139], spontaneous turns [127] or particular behaviours such as mating [93], defence behaviour [39] or turning on a beam [94]. Moreover, the locomotor state (static vs. dynamic locomotion) and movement constraints (tethered vs. free walking, natural vs. unnatural inertia) varied among these studies. Whereas ants and flies have been reported not to change the tripod gait during walking [210, 193], stick insects change leg coordination considerably [82]. Perhaps the one common aspect of all studies on turning is that the stance direction changes considerably and differently in all legs. In stick insects it is clear that a turn is initiated by the strong effort of the front legs, while the kinematic changes of the other legs lag behind those of the front legs [83]. This strongly suggests that the changes observed during curve walking do not occur at the same
rate but are orchestrated in a way that separates primary active events from secondary events that may be due to either passive or active effects.
2.6 Insect Antennae as Models for Active Tactile Sensors in Legged Locomotion

For the coordination of leg movements during hexapod walking, sensory feedback accounts for important adaptive properties. It is beneficial to the walking system as a whole if the legs exchange information about their current state, e.g., via the coordination rules in Fig. 2.17. Also, if a leg hits an object during protraction, it would be most efficient to 'tell' the neighbouring leg about the obstacle. During walking, sensory information is gathered by the walking legs.
In evolutionary terms, insects have more legs than the six they use for walking. For a number of reasons, biologists consider all articulated appendages of the insect body, with the exception of the wings, to be transformed, specialised legs. These include all mouthparts (mandibles, maxillae and labium) and the antennae, or feelers. The close relation between legs and antennae can be demonstrated in stick insects, which, in certain experimental circumstances, will regenerate a leg to replace an amputated antenna [61]. The idea is that, during evolution, the legs of the head region proved not to be necessary for maintaining posture or propulsion and, therefore, were free to take on other functions, such as chewing or contact sensing. The process of adopting these new functions was paralleled by gradual changes in morphology and movement pattern. Thus, the non-walking legs adapted to the specific circumstances that the species encountered.
Today, all insects carry a pair of antennae that are densely covered with various kinds of sensory hairs, including mechanoreceptors for tactile sensing, but also chemosensory and thermosensory hairs (the only exception is the very primitive order Protura, the members of which have neither eyes nor antennae, but fairly long front legs). Two main types of insect antennae can be distinguished: the segmented antenna of the Entognatha and the flagellar antenna of the Ectognatha [123] (for illustrations see, e.g., Seifert [186], Fig. 166 and 167). All higher insects, including virtually all commonly known species, carry flagellar antennae that consist of three functional segments. This type of antenna has only two active joints, one moving the scape relative to the head and another moving the pedicel relative to the scape. The long flagellum, which carries most of the sensory hairs, can only be moved and bent passively. Like walking legs, antennae are also equipped with proprioceptors such as hair plates, stretch receptors and chordotonal organs. The most intensely studied of these is Johnston's organ, a chordotonal organ specialised to sense vibration signals. In honey bees [79], mosquitos [104] and fruit flies [145, 105], Johnston's organ functions as an ear that allows hearing of other insects' wingbeats. For a recent review of the neurobiology of insect antennae, see Staudacher et al. [190].
Depending on the phylogenetic heritage and ecological requirements, the morphology of the antennae differs considerably among species (see Fig. 2.22). Stick insects and crickets have rather long antennae and actively move them during locomotion (Fig. 2.23 A). Because active movement of a long appendage raises the probability of making contact with surrounding obstacles, this behaviour is likely to improve obstacle detection.
Fig. 2.22. Various kinds of flagellar antennae. Short, evenly thick flagellar segments, e.g., as in cockroaches, are characteristic of the setiform antenna (a). The filiform antenna, e.g., in carabid beetles, is more slender and longer (b). The moniliform flagellum of termites and many basal forms of insects contains rounded segments (c). In the serrate antenna, e.g., in Elateridae, the segments are toothed (d). Each tooth can be strongly elongated and may protrude in a single or in two directions, as in the one- and two-sided pectinate antennae of sphingid moths and some Tipulidae (e, f). The flagellum of the clavate antenna is distally swollen, forming a club shape (g, h), as in some beetles. The typical geniculate antenna is found in hymenopterans (i). The leaf-shaped lamellate antenna is found in scarabaeid beetles (k). "Needle-shaped" antennae with an unsegmented flagellum are found in cicadas (l). (Fig. 167 from [186])
Accordingly, assuming that information from the antennae descends to the thoracic and leg joints, active antennal movements may improve locomotion efficiency on rough terrain. In the stick insect species Carausius morosus, rhythmic movements of the antennae are tightly coupled to the stepping rhythm of the walking legs [84]. From the movement trajectories of legs and antennae, it is possible to predict the probability that an obstacle will be touched by an antenna before one of the legs reaches it. This is a necessary condition for exploiting tactile information to adapt ongoing leg movements. Not only is the predicted curve confirmed experimentally, it also corresponds nicely to the climbing performance of Walknet: the probability for the antennae to detect an obstacle increases in the height range where the local leg reflexes and coordination rules of Walknet can no longer guarantee climbing success. Thus, the movement pattern of the antennae is suitable to reliably detect obstacles that require a change of strategy. Meanwhile, behavioural experiments on a number of species have confirmed that many walking insects use antennal tactile information to incline their body axis in order to adapt to detected obstacles (stick insect: [86] and Fig. 2.23 C; cockroach: [171]). Furthermore, at least in stick insects, the typical antennal movement pattern during walking changes as soon as the animal steps into a gap and engages in searching movements [80], or if the animal turns [83]. In this situation, the antennae support the searching effort of the front legs in the anterior region, above the area searched by the legs. Also, antennal function in the tactile localisation of objects is revealed by fast re-targeting reactions of a protracting front leg ([85], Fig. 2.23 D).
Fig. 2.23. Active exploration and use of antennae in stick insect locomotor behaviour. a) Antennal movements of the stick insect Carausius morosus during walking are rhythmical and of fairly regular appearance. Trajectories of the antennal tips are drawn on a sphere that is centred on the head and oriented as shown by the inset. The trajectory of the left antenna (right sphere) is drawn as a mirror image, to match the orientation of the right antenna. b) Abduction phases of the antennae (first and fourth row of black bars) are coupled to the pattern of swing movements of the ipsilateral walking legs. Diagonal dotted lines indicate the back-to-front sequence (a metachronal wave) of leg swing movements and antennal abduction. LA and RA: left and right antenna, respectively; L1-L3: left front, middle and hind leg; R1-R3: right legs. c) The tactile sense is exploited for efficient locomotion. Head trajectories of stick insects walking towards a rectangular obstacle (hatched area, side view) show how the insect rises to climb the edge (walking direction: left to right). Stick insects with intact antennae (left) detect the obstacle earlier (triangles = mean position at first contact) and climb the obstacle with more clearance (circles = position when reaching twice the average walking height) than antennectomised animals (right). Sighted (top) and blind (bottom) animals behave the same. d) Tactile antennal cues can cause rapid re-targeting of an ongoing front leg swing movement. Side view (left) and top view (right) of a stick insect walking sequence towards a vertical pole. Three stick figures show the body axis, the right front leg and the tarsus position (circles) at the times of lift-off, first antennal contact with the pole (open circle) and leg contact with the pole (solid line: tarsus trajectory; dotted line: trajectory of the antennal tip). A normal swing movement would have continued as indicated by the dashed arrow. Instead, no more than 60 ms after antennal contact, the swing movement is re-directed to grasp the pole. (Fig. 18 from [190], showing data from [84] (A, B), [86] (C) and [85] (D))
Experiments on other insects and crustaceans have demonstrated the tactile use of antennae in wall-following [35] and in obstacle localisation and climbing [203, 159, 200]. In general, antennae are used during locomotion for orientation relative to wind direction [19, 144, 26] and gravity [9, 118], for the detection of potential dangers [192, 38, 103], for active exploration of the space ahead [84, 119, 154, 159] and for tracking of objects [117]. A recent review by Staudacher et al. [190] shows that the range of behaviours in which antennal tactile information is important is remarkable.
All of these studies suggest that active tactile sensors may also be useful for application on walking machines, for the following four reasons. First, their control is rather simple but effective. Second, owing to the fixed morphology, little information processing is required to retrieve the 3D location of an obstacle, in particular when compared to visual input. This reduces computational cost. Third, the gathered information is always relevant to leg movement control, because only the working range of the legs is sampled. Fourth, they work reliably irrespective of light and other environmental conditions. Although several walking machines make use of passive tactile sensors for obstacle detection [28], active tactile sensing has not yet been implemented with nearly the same effort as visual systems, in spite of the fact that tactile sensors might be sufficient or even advantageous in many applications. To date, promising mechanical engineering studies on active, insensitive probes [196, 130, 198] suggest that robust sensing solutions may be possible even with few sensors on the probe. A tactile distance-sensing technique based on a single vibration detector has been proposed by [140]. So far, however, only few studies have tested tactile sensors on moving robots. A preliminary attempt was made by [4], who mounted movable antennae on an underwater ambulatory robot inspired by an eight-legged lobster. The antennae of this robo-lobster can be positioned in one of four postures, but active movement is not used in the sensing process. More recently, [141] applied robot feelers to switch between different alternative locomotion behaviours. The cockroach wall-following behaviour [35] has been mimicked by the legged robot 'Sprawlette' [40], which uses passive antennae, equipped with a set of strain gauges, to control the distance to the wall.
From an engineering point of view, the main drawback of active antennae is the low sampling density, particularly when compared to vision. However, if the main purpose of this sensor is to guide a leg, it can be argued that the movement pattern only needs to be adapted to the stepping pattern of the ipsilateral front leg and/or to characteristic features of the environment. Non-orthogonal joint axes narrow the action range of the antenna, because the torus-shaped workspace of a two-jointed manipulator narrows with decreasing angle between the axes [138]. Given that both joints move sinusoidally through their entire action range, an appropriate choice of axis orientation significantly decreases the movement effort for optimal obstacle detection, while at the same time increasing positioning and sampling accuracy. Thus, useful technical application of active tactile sensors will invariably be linked to an appropriate choice of kinematics and an efficient movement strategy.
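The dependence of the workspace on the angle between the two joint axes can be made concrete with a few lines of forward kinematics. Segment lengths, the tilt angle, and the frame conventions below are purely illustrative; the sketch only shows how sweeping two revolute joints with non-orthogonal axes traces a narrowed, torus-like surface.

    import numpy as np

    def antenna_tip(scape_angle, pedicel_angle, axis_tilt_deg=60.0,
                    l_scape=0.2, l_flagellum=1.0):
        """Tip position of a two-jointed antenna (illustrative geometry).

        The head-scape joint rotates about the z-axis; the scape-pedicel
        joint rotates about an axis tilted by axis_tilt_deg against it.
        Sampling both angles over their full ranges yields the torus-like
        workspace; decreasing axis_tilt_deg visibly narrows it.
        """
        c1, s1 = np.cos(scape_angle), np.sin(scape_angle)
        r1 = np.array([[c1, -s1, 0.0], [s1, c1, 0.0], [0.0, 0.0, 1.0]])
        # unit vector of the second joint axis, tilted in the x-z plane
        t = np.radians(axis_tilt_deg)
        k = np.array([np.sin(t), 0.0, np.cos(t)])
        # Rodrigues' formula for a rotation about k by the pedicel angle
        K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
        c2, s2 = np.cos(pedicel_angle), np.sin(pedicel_angle)
        r2 = np.eye(3) + s2 * K + (1.0 - c2) * (K @ K)
        # scape along x, flagellum along the doubly rotated x direction
        return r1 @ (np.array([l_scape, 0.0, 0.0])
                     + r2 @ np.array([l_flagellum, 0.0, 0.0]))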
2.7 Central Oscillators

As has been mentioned in Sect. 2.4, the network shown in Fig. 2.13 produces rhythmic motor output without having any central oscillators. The reason for this is the
interaction with the environment resulting from embodiment and situatedness. Thus, central oscillators are not required to control rhythmic movements of walking legs. But even if a central oscillator were used to control the basic rhythmic movement of a leg, the problem of coordinating the different joints of one leg during walking would remain. This is not a trivial task because, depending on the specific walking situation, there is considerable variation between the time course of the basic rhythm of swing and stance and the activation of the different joint muscles (for a brief review see [48]). On the other hand, there are biological experiments indicating that neural oscillators do exist. Detailed studies have shown that separate oscillators can be found for each leg joint (e.g., [34], reviewed by [17]). This raises the question of what the biological function of such oscillators might be. Based on a Brown half-center oscillator, Cruse [48] proposed a simple network that explains several findings observed in insects walking in different situations (see below).
Fig. 2.24. a) Circuit for the control of an antagonistic structure. As an example, one retractor (stance) muscle and one protractor (swing) muscle are used. LPF: low-pass filter; HPF: high-pass filter. w represents a negative weight, and therefore an inhibitory connection between the two antagonistic parts. Nonlinear characteristics represent rectifiers. b) Simulation of the behaviour resulting from sensory input of varying strength.
This network can also explain results that some authors have interpreted as indicating the existence of a central oscillator that, as such, drives the rhythmic motor output. This network (Fig. 2.24 a) describes the sensory-motor connection of each joint in more detail than do the swing-net and stance-net; sensory input and motor output are not organized as one channel as shown in Fig. 2.13, but as two antagonistic channels that drive the two antagonistic muscles of one joint. Furthermore, the two parallel antagonistic channels are connected via high-pass filtered mutual inhibition (for details see [48]). This circuit is assumed to exist for each joint (for possible connections between the joints of one leg, see Fig. 2.13). What are the properties of such an antagonistic structure? As an example, in Fig. 2.24 b retractor and protractor muscles move the thoracic-coxal joint (α-joint in Fig. 2.2). After central activation is switched on (Fig. 2.24 b, arrow), sensory input drives rhythmic motor output like that of the swing-net and stance-net in Fig. 2.13. In the simulation, clear transitions between activations of agonist and antagonist are found even when the input values are very small
(Fig. 2.24 b, third stance period or sixth swing period) or when the input shows soft transitions, as in the case of a sinusoidal wave (not shown). These sharp transitions result from the inhibitory connections that form a kind of winner-take-all system. Interestingly, the strength of the initial activation of the agonist depends on the strength of activation of the antagonist in the preceding half-cycle. A small retractor activity (Fig. 2.24 b, symbol *) leads to a small rebound effect in the following protractor activation, and a large retractor activity leads to a large rebound excitation in the following protractor burst (Fig. 2.24 b, symbol **).
This rebound effect may be a simple way to explain a number of findings described in the literature. Pearson [155] showed that, when cockroaches have to drag a load, the excitation of retractor muscles is increased. Furthermore, protractor excitation increases during swing and, probably as a kinematic consequence of increased protraction velocity, swing duration decreases. Similarly, in crayfish, increasing load during stance leads to decreased swing duration [54]. It is easy to imagine that loading the animal excites stance muscles via direct sensory feedback mechanisms (positive and negative feedback have been discussed; see also Sect. 2.4.2: Stance movement), whereas swing muscles cannot receive such direct input because the legs are lifted off the ground during swing. However, the increased excitation during swing could be explained by the rebound effect discussed here, which excites swing muscles more when their antagonists showed a higher excitation during the preceding stance. Schmitz et al. [181, 180] found that the average velocity of the swing movement increases when stick insects walk uphill and decreases when they walk downhill. Again, this phenomenon could be qualitatively explained by the assumption proposed here. Walking uphill requires a higher excitation of stance muscles compared to walking on a horizontal plane. This should lead to a rebound effect on swing muscles. The effect was most obvious in the hind legs, which indeed produce most of the downward and rearward directed forces during uphill walking [41]. Walking downhill requires small propulsive forces during stance, if not forces acting against the walking direction. Therefore, the rebound effect should be small or even negative, which should lead to a decrease of swing muscle excitation, as has been described by Schmitz and coworkers [181, 180]. Furthermore, different shapes of swing trajectories that depend on the form of the substrate can be explained by this account [185].
In conclusion, the rebound effect, which results from the high-pass filtered mutual inhibition (Fig. 2.24), does not produce a continuous oscillation as such, but only influences the next half-cycle. The network may therefore be regarded as a 'soft oscillator'. This system is advantageous because its "predictive" property is based on actual, local knowledge and therefore avoids the potentially poor prediction of a central oscillatory system with a strict inherent rhythm. According to this idea, the quasi-rhythmic motor output observed in walking is not based on a fixed internal "world model" in the form of a traditional central oscillator, but on "reality", i.e., on direct sensory information, while the oscillator properties do not influence the rhythm itself, but only the strength of the motor output. It should be mentioned here that oscillations have been found in the artificial situation where sensory feedback has been cut off [34].
In agreement with the model simulation, these oscillations are slower than the normal walking rhythm.
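A minimal numerical sketch of this 'soft oscillator' is given below. The filter constant, the weight, and the discrete high-pass are assumptions; only the structure, two rectified channels coupled by high-pass filtered mutual inhibition, follows Fig. 2.24. Because the high-pass output of a decaying burst is negative, the inhibitory weight transiently turns into excitation of the antagonist, which is exactly the rebound discussed above.

    import numpy as np

    def soft_oscillator(sens_pro, sens_ret, dt=0.01, tau=0.2, w=2.0):
        """Two antagonistic channels with high-pass filtered mutual inhibition.

        sens_pro, sens_ret: arrays of antagonistic sensory drive.
        Returns the rectified protractor and retractor outputs.
        """
        n = len(sens_pro)
        pro, ret = np.zeros(n), np.zeros(n)
        hp_pro, hp_ret = np.zeros(n), np.zeros(n)
        a = tau / (tau + dt)                  # first-order high-pass coefficient
        for i in range(1, n):
            p2 = pro[i - 2] if i >= 2 else 0.0
            r2 = ret[i - 2] if i >= 2 else 0.0
            # high-pass filtered copies of each channel's recent output
            hp_pro[i] = a * (hp_pro[i - 1] + pro[i - 1] - p2)
            hp_ret[i] = a * (hp_ret[i - 1] + ret[i - 1] - r2)
            # mutual inhibition (negative weight w), then rectification;
            # a falling antagonist burst makes hp_* negative -> rebound
            pro[i] = max(0.0, sens_pro[i] - w * hp_ret[i])
            ret[i] = max(0.0, sens_ret[i] - w * hp_pro[i])
        return pro, ret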
How are all the joints of the six legs coordinated during walking? For the swing movement this question could, at least in part, be answered by using the "half-cycle" principle: it can serve to simplify the adaptation of the swing movement to differently shaped substrates and to different load situations. During stance, the question can be answered by the mixture of positive and negative feedback explained in Sect. 2.4.2. However, other questions remain open. For example, there exist only hypotheses concerning the control of the switch from levator activation to depressor activation during the second part of the swing [158]. This concerns the question of whether, in addition to these local, joint-based oscillators, a superordinate oscillator is necessary (or at least helpful) to control the rhythmic change between swing and stance (recall that the movement in the joints can be quite independent of these overall states).
Although positive and negative properties of central pattern generators have been discussed with respect to the behaviour in unpredictable situations [48], we do not argue that central oscillators are inappropriate for the control of rhythmic motor output in general. Apart from predictable situations, central systems may also be successfully applied when the mechanical device has few degrees of freedom, for example only one joint (possibly a wing or a fin) or a simple planar two-joint leg, because in these cases the critical problem mentioned above, namely the coordination of the joints with the swing/stance rhythm, can easily be solved.
One of these approaches will be briefly reviewed here. Büschges, Schmitz and co-workers investigated which sensory signals from the leg are able to entrain each of the three joint oscillators [34] such that coordinated transitions between the two step phases – swing and stance – become possible. Most powerful in inducing phase transitions are the load sensors of the leg. Signals from specific subgroups of the trochanterofemoral campaniform sensilla play a significant role in timing the activity of motoneurons supplying the individual leg joints, through effects upon central pattern generating networks [2, 204, 3]. Sensory signals from the femoral campaniform sensilla (fCS) are able to induce transitions in activity between tibial motoneuron pools. A posterior or upward directed increase in load on the femur initiates flexor tibiae activity and terminates firing in the extensor tibiae [1]. Furthermore, signals from the fCS are able to reset the rhythm of reciprocal bursting in extensor and flexor tibiae motoneurons in the otherwise deafferented mesothoracic segment. There is, however, a considerable amount of variability in this resetting effect. Additional experiments using the semi-intact single-middle-leg walking preparation [92] indicated that this stimulus contributes to maintaining flexor tibiae motoneuron activity during the stance phase [1]. Other experiments have shown that sensory signals from the trochanteral campaniform sensilla (trCS) affect the timing of leg motoneuron activity at the more proximal thoraco-coxal joint. An increase of strain on the leg, signalled by the trCS and applied during bursts of the protractor coxae, terminates the protractor activity in an otherwise deafferented leg. This stimulus also initiates retractor coxae activity, resulting in a resetting of the rhythmic activity of both motoneuron pools [2].
Similar resetting effects in the bursting of cockroach leg motoneurons were found by Pearson in response to pressure on the trochanter [155]. Furthermore, in stick insects, ablation of the trCS in the middle leg in a semi-intact preparation produces a massive deterioration of the intra-leg coordination
during walking. Two conclusions can be drawn from these results: (i) sensory signals from the trochanteral campaniform sensilla affect the timing of the central pattern generator governing the thoraco-coxal motoneurons; (ii) signals from the trochanteral receptors play a primary role in coordinating the activities of the thoraco-coxal motoneurons with the movements of the distal leg joints [2]. These findings indicate that, when the locomotor system is active, load signals from the leg are utilized to support the initiation of specific phases of motoneuronal activity in the locomotor cycle, i.e., load signals assist in the generation of flexor tibiae and retractor coxae motoneuron activity during stance.
Fig. 2.25. This diagram summarizes all known sensory influences on the timing of intra- and inter-joint coordination during walking in the single middle leg. In the figure, the individual joint networks are depicted by two neurons showing mutual interaction. Motoneurons are depicted by squares. Filled symbols denote active elements/neurons; open symbols denote inactive elements/neurons. Sensory influences on the central networks are either excitatory (+) or inhibitory (-). For further details see text. (Figure adapted from [2])
Fig. 2.25 depicts a hypothetical reflex chain that is able to generate appropriate switching between the two states, stance and swing, in a stick insect middle leg. The basis of this system is that all three leg joints have their own central rhythm-generating networks, which are not centrally coupled [34] but are coordinated by sensory signals. The step cycle can be described as a finite state machine with four transitions; the sequence of transition triggers runs from the first (1, first row, left) to the fourth (4, last row, right), as shown in the figure:

1. During the swing phase in the single-middle-leg preparation, the levator trochanteris (Lev) and extensor (Ext) motoneurons [2, 3] are active, as well as the protractor coxae (Pro) motoneurons. Extension of the FT-joint is sensed by the femoral chordotonal organ (fCO), which excites the depressor trochanteris (Dep) and inhibits the Lev motoneurons, causing a depression at the CT-joint. This leads the tarsus of the leg to touch the ground [111, 32].

2. The tarsus touching the ground causes an increase of the load on the leg, which is sensed by the trochanteral and femoral campaniform sensilla (trCS and fCS). trCS signals excite the retractor coxae (Ret) and inhibit the protractor coxae (Pro) motoneurons. If the leg were not restrained, this would cause it to move backward at the TC-joint. The fCS would excite the flexor tibiae (Flx) and inhibit the extensor tibiae (Ext) motoneurons, which would cause the leg to pull on the ground [1, 2].
3. During stance, the flexion movement of the FT-joint, sensed by the fCO, would reinforce its own movement by assisting flexor activity and inhibiting the extensor (positive feedback, first part of the active reaction) [15, 16, 177, 137].

4. At a particular position of the FT-joint, the flexor activity (Flx) terminates and the extensor (Ext) starts to fire again (second part of the active reaction). Flexion of the FT-joint, signaled by the fCO, would activate Lev and inactivate Dep [32, 111]. Elevating the leg would decrease the load on the leg, which would be sensed by the trCS and fCS [1, 2]. This time the fCS would assist ongoing Ext activity; signals from the trCS would activate the Pro and inactivate the Ret during the execution of the swing phase [46, 2].

It has to be mentioned that this model describes only the timing of transition points within the step cycle; it models neither the magnitude of muscle activity nor trajectory generation. Coordinating influences from other legs are also not considered. Moreover, the activity phases of the different joint muscles are appropriate only for front legs or for sideways-walking middle legs. However, as already implemented in Walknet, the stance–swing generator of the single leg is taken as a sensory-triggered reflex chain, where the swing-to-stance transition is triggered by ground contact and the stance-to-swing transition by reaching the PEP.

Central pattern generators may nonetheless be sensible in situations other than the generation of walking movements. In emergency situations, a central system may be used to replace the "sensory-driven oscillator". As explained earlier, the system shown in Fig. 2.24 can directly be used to act as a central oscillator if the value of the central excitation is chosen high enough. A dramatic case of such an emergency could be the loss of one or several sensors due to an injury. A less dramatic case, but for biological systems probably equally important, occurs when fast rhythms have to be produced, as in a fast-walking cockroach. Fast here means relative to the time delays resulting from slow neuronal processing: if sensory feedback is too slow, it may not be able to contribute to the production of the rhythmic output. Although, as argued above, such a central system might be inaccurate in the case of external disturbances, it may be better to use this approximate information than exact information that comes too late. (Note that this argument may not be relevant for an artificial electronic system, because here the transmission of signals is fast enough.)

However, as an alternative to using central oscillators as active devices to control motor output, they may be used in a more passive way, that is, for predictive purposes. One way is to change sensory thresholds in a given time window [72]. Moreover, central oscillators may be used on a longer time scale to detect long-term deviations (e.g., in the case of sensory drift) by providing expectation values that can be compared with the sensory input. If a long-term deviation is detected, this information can be used to readjust the system, for example via error backpropagation mechanisms [132].
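To make the switching logic explicit, the four sensory-triggered transitions of Fig. 2.25 can be cast as a small state machine. The sketch below is a deliberately coarse caricature: the state names, threshold values and normalised sensor readings ("fco" for FT-joint extension, "load" for campaniform sensilla output) are hypothetical choices, and, exactly as in the model itself, the magnitude of motoneuron activity and inter-leg influences are ignored.

```python
# Sensory-triggered step cycle of a single leg (after Fig. 2.25).
# Each state activates one motoneuron pool per joint; each transition
# is triggered by a proprioceptive event.

STATES = {
    # state        : (TC-joint, CT-joint, FT-joint) active motoneurons
    "swing":       ("Pro", "Lev", "Ext"),
    "touch-down":  ("Pro", "Dep", "Ext"),   # fCO signals FT extension
    "stance":      ("Ret", "Dep", "Flx"),   # load on trCS/fCS
    "lift-off":    ("Pro", "Lev", "Ext"),   # active reaction, leg unloading
}

def next_state(state, sensors):
    """sensors: dict with normalised fCO extension ('fco', 0=flexed,
    1=extended) and leg load ('load', 0=unloaded)."""
    if state == "swing" and sensors["fco"] > 0.9:        # (1) FT extended
        return "touch-down"
    if state == "touch-down" and sensors["load"] > 0.5:  # (2) ground contact
        return "stance"
    if state == "stance" and sensors["fco"] < 0.1:       # (3/4) FT flexed
        return "lift-off"
    if state == "lift-off" and sensors["load"] < 0.1:    # (4) leg unloaded
        return "swing"
    return state

# One pass through a hand-crafted sequence of sensor readings:
state, trace = "swing", []
for fco, load in [(0.5, 0.0), (1.0, 0.0), (1.0, 0.8), (0.05, 0.8), (0.05, 0.0)]:
    state = next_state(state, {"fco": fco, "load": load})
    trace.append(state)
# trace == ['swing', 'touch-down', 'stance', 'lift-off', 'swing']
```

Note that "swing" and "lift-off" drive the same motoneuron pools and differ only in which sensory event triggers the next transition, mirroring the second part of the active reaction in item 4 above.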
2.8 Actuators

In robots, actuators are typically implemented either by electric motors or, if high forces are required, by pneumatic or hydraulic devices. Biological systems use muscles. At
first sight, using muscles seems to be a poor solution from an engineering point of view. Muscles show highly nonlinear force–length and force–velocity characteristics (e.g., the Hill equation; for details and illustrations of these characteristics see Biewener [21], Figs. 2.3 and 2.4), including hysteresis effects and nonlinear damping properties. A combination of active parallel elastic elements and passive series elasticity further complicates the picture. To separate active and passive properties, most neuromechanical models [202] first transform the neural activity profile into a variable called muscle activation (activation dynamics). This variable is independent of joint geometry and determines the time course of isometric contractions, i.e. contractions without associated changes in muscle length. Muscle activation can be viewed as a scaling factor in the range of 0 to 1. It scales the characteristics of the active muscle properties, in particular the active force–length curve (see Biewener [21], Fig. 2.3) and the active force–velocity curve (see Biewener [21], Fig. 2.4). Together with the passive force–length curve, which is due to the strain of a nonlinear spring, and the passive damping properties of various tissues (e.g., the joint membrane cuticle of arthropod joints, Fig. 2.26), the scaled active muscle properties determine the course of isotonic contractions. A final important aspect of muscular actuators is that muscles can only actively operate in one direction (pull). Two antagonistic muscles are therefore required to move one joint, which in turn allows joint stiffness to be varied by co-contraction of the antagonists.

However, as has recently been demonstrated by simulation, muscle properties provide essential advantages on three levels.

First advantage: if unexpected disturbances occur, these need not be dealt with by application of traditional feedback loops; the muscle alone can already act as a shock absorber when the limb unexpectedly hits a rigid obstacle. This is valuable for the mechanical maintenance of the walking system itself, but may also be helpful if the outside object is an object to be manipulated or, even more so, a human being. It is often argued that this property could also be obtained using "soft" feedback controllers, but there is a critical difference: postural reflexes based on sensory information always require some time to be activated, whereas the mechanical properties of the muscle apply immediately. To describe this effect, the term "preflex" has been established [30]. Conversely, the stiffness of the joint can easily be increased by coactivation of both antagonistic muscles.

Second advantage: systems driven by muscular mechanics show unexpected properties at the level of the whole system. Experiments and simulation studies have shown that a fast-walking cockroach, when pushed sideways, is stabilized against this massive disturbance without any need for sensory feedback [98, 175, 187, 99]. In fact, the response to sensory input would come much too late. During walking, the body is suspended in an elastic mechanical system built up by the active and passive muscles, which stabilizes body position in an inverted-pendulum-like fashion. Even if sufficiently fast sensors were available, the same effect could only be achieved at considerable computational cost. Such properties of dynamic mass–spring systems were already exploited in Raibert's famous monopod hopping machines.
Fig. 2.26. The knee joint of an insect illustrates that the two segments of arthropod joints are connected by a membrane of soft, elastic cuticle (arthrodial membranes: dotted regions). The muscles, in this case the extensor and flexor of the tibia, connect to long tendons made of stiff cuticle, the so-called apodemes. These apodemes connect to the arthrodial membranes, not to the stiff cuticle of the other segment. Thus, there is an elastic material between the muscle and the segment to be moved. (Fig. 122 from [186])

However, only recently could it be shown that these effects can be exploited for the control of four-legged and six-legged walking systems [89]. A third advantage is that the possibility to store potential energy can also be exploited in order to decrease the total energy consumption.

For applications, there is of course the problem of how muscle-like properties can be obtained in artificial systems. Apart from very basic research investigating different chemical substances that are, to the best of our knowledge, not yet ready for application, there are three lines of solution to this problem. Hu et al. [120] investigate an actuator that consists of a serial connection between an elastic spring and an electric motor, where the motor is used to control the elasticity (compliance) of the spring. A similar arrangement of springs and motors is used in the robot TarryIIb [184]. A very different approach is to use "muscle wires" as actuators. These are wires made from shape memory alloys (SMA) that contract when heated, usually by electric current, developing high forces. Similar to muscles, they show some elasticity. As they can develop force only during contraction, an antagonistic arrangement is necessary. An important advantage is their extremely small weight, particularly when compared with electric motors plus the necessary gears; the disadvantages are the high amount of electric energy necessary to activate them and the long time necessary to cool them down after activation. Advanced prototypes have been built by Ayers [4] for underwater robots. A third line of interest is the application of McKibben muscles and related devices. These are elastic tubes that contract when filled with high-pressure air; their behaviour is even more similar to that of muscles, and a further advantage is their small weight. The disadvantages are that high-pressure air must be supplied, a high number of valves has to be controlled, and many tubes are necessary to guide the high-pressure air to the muscles. Prototypes of walking machines driven by pneumatic actuators have been built by Dillmann, Berns and co-workers [20] and by Quinn et al. [166].
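As a rough illustration of the muscle model sketched in this section – an activation in [0, 1] scaling active force–length and force–velocity curves, plus a passive parallel spring – consider the following sketch. The curve shapes and constants are generic Hill-type textbook choices in the spirit of [21, 202], not the parameters of any particular muscle.

```python
import numpy as np

def muscle_force(act, l, v, f_max=1.0):
    """Hill-type force of a single muscle (it can only pull).
    act : activation in [0, 1], output of the activation dynamics
    l   : muscle length, normalised to optimal length
    v   : contraction velocity, normalised (positive = shortening)
    """
    fl = np.exp(-((l - 1.0) / 0.45) ** 2)         # active force-length (bell)
    if v >= 0.0:                                   # concentric: Hill hyperbola
        fv = max((1.0 - v) / (1.0 + 4.0 * v), 0.0)
    else:                                          # eccentric: crude constant
        fv = 1.3                                   # force above isometric
    f_pas = 5.0 * max(l - 1.0, 0.0) ** 2           # passive parallel elasticity
    return f_max * act * fl * fv + f_pas

def joint_torque(act_flx, act_ext, l_flx, l_ext, omega, r=1.0):
    """Net torque of an antagonistic pair around one joint; omega is the
    joint's flexion velocity and r a (hypothetical) moment arm."""
    f_flx = muscle_force(act_flx, l_flx, +omega)   # flexor shortens as joint flexes
    f_ext = muscle_force(act_ext, l_ext, -omega)   # extensor is being stretched
    return r * (f_flx - f_ext)

print(joint_torque(0.5, 0.5, 1.0, 1.0, omega=0.0))  # balanced co-contraction: 0.0
print(joint_torque(0.5, 0.5, 1.0, 1.0, omega=0.2))  # negative: resists imposed flexion
```

Note how the force depends directly on the current length and velocity: an imposed stretch changes the output immediately, without any sensing or control delay – the "preflex" property discussed above – and co-activating both muscles of the pair raises joint stiffness without changing the net torque.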
2.9 Conclusion

Control of walking means that a large number of degrees of freedom (e.g., at least 18 joints in a six-legged insect, or 17 DoFs in the case of the biped robot Johnnie [102]) are
to be controlled simultaneously. Traditional engineering solutions have applied a centralized, hierarchically structured control architecture. This structure is usually somewhat less centralized when applied to multilegged systems, as the cyclic movement of each leg is controlled by a separate pattern generator which is coupled to those of the other legs. Sensors are used only to a minor extent (e.g., to detect ground contact). Many biological investigations have shown, however, that sense organs provide critical information that is exploited by the biological control systems. This appears to be particularly necessary when walking in very cluttered and unpredictable environments. Many of these experimental findings have been summarized in a simulation termed Walknet, which is based on artificial neural nets. In Walknet, computation is extremely simplified by taking into account the loop through the world instead of using explicit computation, a property often characterized by the terms embodiment and situatedness. As mentioned, there are also "internal states", which means that a given sensory input can be responded to in different ways depending on the current form of that internal or "motivational" state. These states, described as the swing state and the stance state, are determined by competitive modules which operate on a longer time scale than lower-level modules and provide these lower-level modules with a kind of limited protection against sensory input. As these states themselves are sensory-driven, the whole system may still be termed a reactive system, though in a broader sense than usual. The six individual leg controllers are connected neuronally via local coordination rules and mechanically via the substrate. The resulting "free gait" controller allows a high degree of adaptability to difficult terrain, such as obstacles or large gaps, and the negotiation of tight turns. Taken together, there is no central control unit. Instead, local modules responsible for the control of "minibehaviours" cooperate in a self-organizing fashion.
References

1. Akay, T., Bässler, U., Gerharz, P., Büschges, A.: The role of sensory signals from the insect coxa-trochanteral joint in controlling motor activity of the femur-tibia joint. J. Neurophysiol. 85, 594–604 (2001)
2. Akay, T., Haehn, S., Schmitz, J., Büschges, A.: Signals from load sensors underlie interjoint coordination during stepping movements of the stick insect leg. J. Neurophysiol. 92, 42–51 (2004)
3. Akay, T., Ludwar, B., Göritz, M., Schmitz, J., Büschges, A.: Segment specificity of load signal processing depends on walking direction in the stick insect leg muscle control system. Journal of Neuroscience 27, 3285–3294 (2007)
4. Ayers, J.: A conservative biomimetic control architecture for autonomous underwater robots. In: Ayers, J., Davis, J.L., Rudolph, A. (eds.) Neurotechnology for biomimetic robots. MIT Press, Cambridge (2002)
5. Baldi, P., Heiligenberg, W.: How sensory maps could enhance resolution through ordered arrangements of broadly tuned receivers. Biol. Cybern. 59, 313–318 (1988)
6. Bartling, C., Schmitz, J.: Reaction to disturbances of a walking leg during stance. J. Exp. Biol. 203, 1211–1233 (2000)
7. Bässler, U.: Das Stabheuschreckenpraktikum. Franckh, Stuttgart (1965)
8. Bässler, U.: Proprioreceptoren am Subcoxal- und Femur-Tibia-Gelenk der Stabheuschrecke und ihre Rolle bei der Wahrnehmung der Schwerkraftrichtung. Kybernetik 2, 168–193 (1965)
9. Bässler, U.: Zur Bedeutung der Antennen für die Wahrnehmung der Schwerkraftrichtung bei der Stabheuschrecke Carausius morosus. Kybernetik 9, 31–34 (1971)
10. Bässler, U.: Zur Beeinflussung der Bewegungsweise eines Beines von Carausius morosus durch Amputation anderer Beine. Kybernetik 10, 110–114 (1972)
11. Bässler, U.: Reversal of a reflex to a single motoneuron in the stick insect Carausius morosus. Biol. Cybern. 24, 47–49 (1976)
12. Bässler, U.: Sense organs in the femur of the stick insect and their relevance to the control of position of the femur-tibia-joint. J. Comp. Physiol. 121, 99–113 (1977a)
13. Bässler, U.: Sensory control of leg movement in the stick insect Carausius morosus. Biol. Cybern. 25, 61–72 (1977b)
14. Bässler, U.: Neural basis of elementary behavior in stick insects. Springer, Heidelberg (1983)
15. Bässler, U.: Afferent control of walking movements in the stick insect Cuniculina impigra. I. Decerebrated animals on a treadband. J. Comp. Physiol. A 158, 345–349 (1986)
16. Bässler, U.: Functional principles of pattern generation for walking movements of stick insect forelegs: The role of the femoral chordotonal organ afferences. J. Exp. Biol. 136, 125–147 (1988)
17. Bässler, U., Büschges, A.: Pattern generation for stick insect walking movements - multisensory control of a locomotor program. Brain Res. Rev. 27, 65–88 (1998)
18. Bässler, U., Rohrbacher, J., Karg, G., Breutel, G.: Interruption of searching movements of partly restrained front legs of stick insects, a model situation for the start of the stance phase? Biol. Cybern. 65, 507–514 (1991)
19. Bell, W.J., Kramer, E.: Search and anemotactic orientation of cockroaches. J. Insect Physiol. 25, 631–640 (1979)
20. Berns, K., Albiez, J., Kepplin, V., Hillenbrand, C.: Airbug - insect-like machine actuated by fluidic muscle. In: Berns, K., Dillmann, R. (eds.) Proc. 4th Int. Conf. Climbing and Walking Robots (CLAWAR 2001), pp. 237–244. Professional Engineering Publishers, London (2001)
21. Biewener, A.A.: Animal locomotion. Oxford University Press, Oxford (2003)
22. Bizzi, E., Giszter, S.F., Loeb, E., Mussa-Ivaldi, F.A., Saltiel, P.: Modular organization of motor behavior in the frog's spinal cord. Trends Neurosci. 18, 442–446 (1995)
23. Bläsing, B.: Adaptive locomotion in a complex environment: simulation of stick insect gap crossing behaviour. From animals to animats 8, 173–182 (2004)
24. Bläsing, B., Cruse, H.: Mechanisms of stick insect locomotion in a gap crossing paradigm. J. Comp. Physiol. A 190, 173–183 (2004)
25. Blickhan, R., Full, R.J.: Similarity in multilegged locomotion: Bouncing like a monopode. J. Comp. Physiol. A 173, 509–517 (1993)
26. Böhm, H., Heinzel, H.G., Scharstein, H., Wendler, G.: The course-control system of beetles walking in an air-current field. J. Comp. Physiol. A 169, 671–683 (1991)
27. Bräunig, P., Hustert, R., Pflüger, H.J.: Distribution and specific central projections of mechanoreceptors in the thorax and proximal leg joints of locusts. I. Morphology, location and innervation of internal proprioceptors of pro- and metathorax and their central projections. Cell Tissue Res. 216, 57–77 (1981)
28. Brooks, R.A.: A robot that walks: Emergent behaviors from a carefully evolved network. Neural computat. 1, 253–262 (1989)
29. Brooks, R.A.: Intelligence without reason. In: Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI 1991), Sydney, pp. 569–595 (1991)
30. Brown, I.E., Loeb, G.E.: A reductionist approach to creating and using neuromusculoskeletal movement. In: Winters, M.J., Crago, E.P. (eds.) Biomechanics and neural control of movement, pp. 148–163. Springer, Heidelberg (2000)
31. Brunn, D.E., Dean, J.: Intersegmental and local interneurons in the metathorax of the stick insect Carausius morosus that monitor middle leg position. J. Neurophysiol. 72, 1208–1219 (1994)
32. Bucher, D., Akay, T., Dicaprio, R.A., Büschges, A.: Interjoint coordination in the stick insect leg-control system: The role of positional signaling. J. Neurophysiol. 89, 1245–1255 (2003)
33. Burrows, M.: The Neurobiology of an Insect Brain. Oxford University Press, Oxford (1996)
34. Büschges, A., Schmitz, J., Bässler, U.: Rhythmic patterns in the thoracic nerve cord of the stick insect induced by pilocarpine. J. Exp. Biol. 198, 435–456 (1995)
35. Camhi, J.M., Johnson, E.N.: High-frequency steering maneuvers mediated by tactile cues: Antennal wall-following in the cockroach. J. Exp. Biol. 202, 631–643 (1999)
36. Chasserat, C., Clarac, F.: Interlimb coordinating factors during driven walking in Crustacea. A comparative study of absolute and relative coordination. J. Comp. Physiol. 139, 293–306 (1980)
37. Cocatre-Zilgien, J.H., Delcomyn, F.: Modeling stress and strain in an insect leg for simulation of campaniform sensilla responses to external forces. Biol. Cybern. 81, 149–160 (1999)
38. Comer, C.M., Parks, L., Halvorsen, M.B., Breese-Terteling, A.: The antennal system and cockroach evasive behavior. II. Stimulus identification and localization are separable antennal functions. J. Comp. Physiol. A 189, 97–103 (2003)
39. Copp, N., Jamon, M.: Kinematics of rotation in place during defense turning in the crayfish Procambarus clarkii. J. Exp. Biol. 204, 471–486 (2001)
40. Cowan, N.J., Ma, E.J., Cutkosky, M., Full, R.J.: A biologically inspired passive antenna for steering control of a running robot. In: Dario, P., Chatila, R. (eds.) Robotics Research. The Eleventh International Symposium. Springer, Wien (2003)
41. Cruse, H.: On the function of the legs in the free walking stick insect Carausius morosus. J. Comp. Physiol. 112, 235–262 (1976a)
42. Cruse, H.: The control of body position in the stick insect (Carausius morosus), when walking over uneven surfaces. Biol. Cybern. 24, 25–33 (1976b)
43. Cruse, H.: The control of the anterior extreme position of the hindleg of a walking insect, Carausius morosus. Physiol. Entomol. 4, 121–124 (1979)
44. Cruse, H.: The influence of load, position and velocity on the control of leg movement of a walking insect. In: Gewecke, M., Wendler, G. (eds.) Insect Locomotion, pp. 19–26. Parey, Hamburg (1985a)
45. Cruse, H.: Which parameters control the leg movement of a walking insect? I. Velocity control during the stance phase. J. Exp. Biol. 116, 343–355 (1985b)
46. Cruse, H.: Which parameters control the leg movement of a walking insect? II. The start of the swing phase. J. Exp. Biol. 116, 357–362 (1985c)
47. Cruse, H.: What mechanisms coordinate leg movement in walking arthropods? Trends Neurosci. 13, 15–21 (1990)
48. Cruse, H.: The functional sense of central oscillations in walking. Biol. Cybern. 86, 271–280 (2002)
49. Cruse, H., Bartling, C.: Movement of joint angles in the legs of a walking insect, Carausius morosus. J. Insect Physiol. 41, 761–771 (1995)
50. Cruse, H., Bartling, C., Dean, J., Kindermann, T., Schmitz, J., Schumm, M., Wagner, H.: Coordination in a six-legged walking system. Simple solutions to complex problems by exploitation of physical properties. In: Maes, P., et al. (eds.) From Animals to Animats, vol. 4, pp. 84–93. The MIT Press/Bradford Books, Cambridge (1996)
51. Cruse, H., Bartling, C., Dreifert, M., Schmitz, J., Brunn, D.E., Dean, J., Kindermann, T.: Walking: a complex behaviour controlled by simple networks. Adapt. Behav. 3, 385–418 (1995)
52. Cruse, H., Kindermann, T., Schumm, M., Dean, J., Schmitz, J.: Walknet - a biologically inspired network to control six-legged walking. Neural Networks 11, 1435–1447 (1998)
53. Cruse, H., Kühn, S., Park, S., Schmitz, J.: Adaptive control for insect leg position: controller properties depend on substrate compliance. J. Comp. Physiol. A 190, 983–991 (2004)
54. Cruse, H., Müller, U.: A new method measuring leg position of walking crustaceans shows that motor output during return stroke depends upon load. J. Exp. Biol. 110, 319–322 (1984)
55. Cruse, H., Müller-Wilm, U., Dean, J.: Artificial neural nets for controlling a 6-legged walking system. In: Meyer, J.A., Roitblat, H., Wilson, S. (eds.) From animals to animats, pp. 52–60. MIT Press, Cambridge (1993a)
56. Cruse, H., Riemenschneider, D., Stammer, W.: Control of body position of a stick insect standing on uneven surfaces. Biol. Cybern. 61, 71–77 (1989)
57. Cruse, H., Saxler, G.: Oscillations of force in the standing legs of a walking insect (Carausius morosus). Biol. Cybern. 36, 159–163 (1980)
58. Cruse, H., Schmitz, J., Braun, U., Schweins, A.: Control of body height in a stick insect walking on a treadwheel. J. Exp. Biol. 181, 141–155 (1993b)
59. Cruse, H., Silva Saavedra, M.G.: Curve walking in crayfish. J. Exp. Biol. 199, 1477–1482 (1996)
60. Cruse, H., Warnecke, H.: Coordination of the legs of a slow-walking cat. Exp. Brain Res. 89, 147–156 (1992)
61. Cuénot, L.: Régénération de pattes à la place d'antennes sectionnées chez un phasme. Comptes Rendus Acad. Sci. Paris 172, 949–952 (1921)
62. Dean, J.: Coding proprioceptive information to control movement to a target: simulation with a simple neural network. Biol. Cybern. 63, 115–120 (1990)
63. Dean, J.: A model of leg coordination in the stick insect, Carausius morosus. I. A geometrical consideration of contralateral and ipsilateral coordination mechanisms between two adjacent legs. Biol. Cybern. 64, 393–402 (1991a)
64. Dean, J.: A model of leg coordination in the stick insect, Carausius morosus. II. Description of the kinematic model and simulation of normal step patterns. Biol. Cybern. 64, 403–411 (1991b)
65. Dean, J.: Effect of load on leg movement and step coordination of the stick insect Carausius morosus. J. Exp. Biol. 159, 449–471 (1991c)
66. Dean, J.: A model of leg coordination in the stick insect, Carausius morosus. III. Responses to perturbations of normal coordination. Biol. Cybern. 66, 335–343 (1992a)
67. Dean, J.: A model of leg coordination in the stick insect, Carausius morosus. IV. Comparisons of different forms of coordinating mechanisms. Biol. Cybern. 66, 345–355 (1992b)
68. Dean, J., Cruse, H.: Evidence for the control of velocity as well as position in leg protraction and retraction by the stick insect. In: Heuer, H., Fromm, C. (eds.) Generation and modulation of action patterns, vol. 15, pp. 263–274. Springer, Heidelberg (1986)
69. Dean, J., Cruse, H.: Motor pattern generation. In: Arbib, M. (ed.) Handbook for Brain Theory and Neural Network, pp. 696–701. Bradford Book/MIT Press, Cambridge (2003)
70. Dean, J., Wendler, G.: Stick insects walking on a wheel: Perturbations induced by obstruction of leg protraction. J. Comp. Physiol. 148, 195–207 (1982)
71. Dean, J., Wendler, G.: Stick insect locomotion on a walking wheel: interleg coordination of leg position. J. Exp. Biol. 103, 75–94 (1983)
72. Degtyarenko, A.M., Simon, E.S., Norden-Krichmar, T., Burke, R.E.: Modulation of oligosynaptic cutaneous and muscle afferent reflex pathways during fictive locomotion and scratching in the cat. J. Neurophysiol. 79, 447–463 (1998)
73. Delcomyn, F.: Insect locomotion on land. In: Herreid, C.F., Fourtner, C.R. (eds.) Locomotion and Energetics in Arthropods, pp. 103–125. Plenum, New York (1981)
74. Delcomyn, F.: Activity and structure of movement-signalling (corollary discharge) interneurons in a cockroach. J. Comp. Physiol. 150, 185–193 (1983)
75. Delcomyn, F.: Motor activity during searching and walking movements of cockroach legs. J. Exp. Biol. 133, 111–120 (1987)
76. Delcomyn, F.: Walking in the American cockroach: the timing of motor activity in the legs during straight walking. Biol. Cybern. 60, 373–384 (1989)
77. Delcomyn, F.: Activity and directional sensitivity of leg campaniform sensilla in a stick insect. J. Comp. Physiol. A 168, 113–119 (1991)
78. Dicaprio, R.A., Clarac, F.: Reversal of a walking leg reflex elicited by a muscle receptor. J. Exp. Biol. 90, 197–203 (1981)
79. Dreller, C., Kirchner, W.H.: Hearing in honeybees: localization of the auditory sense organ. J. Comp. Physiol. A 173, 275–279 (1993)
80. Dürr, V.: Stereotypic leg searching-movements in the stick insect: Kinematic analysis, behavioural context and simulation. J. Exp. Biol. 204, 1589–1604 (2001)
81. Dürr, V.: Context-dependent changes in strength and efficacy of leg coordination mechanisms. J. Exp. Biol. 208, 2253–2267 (2005)
82. Dürr, V.: Context-dependent changes in strength and efficacy of leg coordination mechanisms. J. Exp. Biol. 208, 2253–2267 (2005a)
83. Dürr, V., Ebeling, W.: The behavioural transition from straight to curve walking: kinetics of leg movement parameters and the initiation of turning. J. Exp. Biol. 208, 2237–2252 (2005)
84. Dürr, V., König, Y., Kittmann, R.: The antennal motor system of the stick insect Carausius morosus: anatomy and antennal movement pattern during walking. J. Comp. Physiol. A 187, 131–144 (2001)
85. Dürr, V., Krause, A.: The stick insect antenna as a biological paragon for an actively moved tactile probe for obstacle detection. In: Berns, K., Dillmann, R. (eds.) Climbing and Walking Robots - From Biology to Industrial Applications. Proc. 4th Int. Conf. Climbing and Walking Robots (CLAWAR 2001), Karlsruhe, pp. 87–96. Professional Engineering Publishing, Bury St. Edmunds (2001)
86. Dürr, V., Krause, A., Schmitz, J., Cruse, H.: Neuroethological concepts and their transfer to walking machines. Int. J. Robotics Res. 22, 151–167 (2003)
87. Duysens, J., Clarac, F., Cruse, H.: Load-regulating mechanisms in gait and posture: Comparative aspects. Physiol. Rev. 80, 83–133 (2000)
88. Ebeling, W., Dürr, V.: Perturbation of leg protraction causes context-dependent modulation of inter-leg coordination, but not of avoidance reflexes. Journal of Experimental Biology 209, 2199–2214 (2006)
89. Ekeberg, O., Blümel, M., Büschges, A.: Dynamic simulation of insect walking. Arthropod Structure & Development 33, 287–300 (2004)
90. Espenschied, K.S., Quinn, R.D., Beer, R.D., Chiel, H.J.: Biologically based distributed control and local reflexes improve rough terrain locomotion in a hexapod robot. Robot. Autonom. Sys. 18, 59–64 (1996)
91. Fielding, M.R., Dunlop, R.: Exponential fields to establish inter-leg influences for omnidirectional hexapod gait. In: Berns, K., Dillmann, R. (eds.) Proc. 4th Int. Conf. Climbing and Walking Robots (CLAWAR 2001), pp. 587–594. Professional Engineering Publishers, London (2001)
92. Fischer, H., Schmidt, J., Haas, R., Büschges, A.: Pattern generation for walking and searching movements of a stick insect leg. I. Coordination of motor activity. J. Neurophysiol. 85, 341–353 (2001)
93. Franklin, R., Bell, W.J., Jander, R.: Rotational locomotion by the cockroach Blattella germanica. J. Insect Physiol. 27, 249–255 (1981)
94. Frantsevich, L., Cruse, H.: Leg coordination during turning on an extremely narrow substrate in a bug, Mesocerus marginatus (Heteroptera, Coreidae). J. Insect Physiol. 51, 1092–1104 (2005)
95. Frazier, S.F., Larsen, G.S., Neff, D., Quimby, L., Carney, M., Dicaprio, R.A., Zill, S.N.: Elasticity and movements of the cockroach tarsus in walking. J. Comp. Physiol. A 185, 157–172 (1999)
96. Full, R.J.: Integration of individual leg dynamics with whole body movement in arthropod locomotion. In: Beer, R., Ritzmann, R.E., McKenna, T. (eds.) Biological Neural Networks in Invertebrate Neuroethology and Robotics, pp. 3–20. Academic Press, San Diego (1993)
97. Full, R.J., Blickhan, R.: Locomotion energetics of the ghost crab. J. Exp. Biol. 130, 155–175 (1987)
98. Full, R.J., Koditschek, D.E.: Templates and anchors: neuromechanical hypotheses of legged locomotion on land. J. Exp. Biol. 202, 3325–3332 (1999)
99. Full, R.J., Kubow, T.M., Schmitt, J., Holmes, P., Koditschek, D.E.: Quantifying dynamic stability and maneuverability in legged locomotion. Integ. and Comp. Biol. 42, 149–157 (2002)
100. Full, R.J., Tu, M.S.: Mechanics of six-legged runners. J. Exp. Biol. 148, 129–146 (1990)
101. Full, R.J., Tu, M.S.: Mechanics of a rapid running insect: two-, four- and six-legged locomotion. J. Exp. Biol. 156, 215–231 (1991)
102. Gienger, M., Löffler, K., Pfeiffer, F.: A biped robot that jogs. In: Proc. IEEE Int. Conf. Robotics Automation, vol. 4, pp. 3334–3339 (2000)
103. Gnatzy, W., Heußlein, R.: Digger wasp against crickets. I. Receptors involved in the antipredator strategies of the prey. Naturwiss. 73, 212–215 (1986)
104. Göpfert, M.C., Briegel, H., Robert, D.: Mosquito hearing: sound-induced antennal vibrations in male and female Aedes aegypti. J. Exp. Biol. 202, 2727–2738 (1999)
105. Göpfert, M.C., Robert, D.: The mechanical basis of Drosophila audition. J. Exp. Biol. 205, 1199–1208 (2002)
106. Gorb, S.N., Jiao, Y., Scherge, M.: Ultrastructural architecture and mechanical properties of attachment pads in Tettigonia viridissima (Orthoptera, Tettigoniidae). J. Comp. Physiol. A 186, 821–831 (2000)
107. Graham, D.: A behavioural analysis of the temporal organisation of walking movements in the 1st instar and adult stick insect (Carausius morosus). J. Comp. Physiol. 81, 23–52 (1972)
108. Graham, D.: Pattern and control of walking in insects. Adv. Insect Physiol. 81, 31–140 (1985)
109. Gregory, G.: Neuroanatomy of the mesothoracic ganglion of the cockroach Periplaneta americana (L.). II. Median neuron cell body groups. Phil. Trans. R. Soc. Lond. B 306, 191–218 (1984)
110. Guthrie, D.M.: Multipolar stretch receptors and the insect leg reflex. J. Insect Physiol. 13, 1637–1644 (1967)
111. Hess, D., Büschges, A.: Role of proprioceptive signals from an insect femur-tibia joint in patterning motoneuronal activity of an adjacent leg joint. J. Neurophysiol. 81, 1856–1865 (1999)
112. Hofmann, T., Bässler, U.: Anatomy and physiology of trochanteral campaniform sensilla in the stick insect, Cuniculina impigra. Physiol. Entomol. 7, 413–426 (1982)
113. Hofmann, T., Koch, U.T., Bässler, U.: Physiology of the femoral chordotonal organ in the stick insect, Cuniculina impigra. J. Exp. Biol. 114, 207–223 (1985)
114. Holst, E.v.: Die relative Koordination als Phänomen und als Methode zentralnervöser Funktionsanalyse. Erg. Physiol. 42, 228–306 (1939)
115. Holst, E.v.: Über relative Koordination bei Arthropoden. Pflügers Arch. 246, 847–865 (1943)
116. Höltje, M., Hustert, R.: Rapid mechano-sensory pathways code leg impact and elicit very rapid reflexes in insects. J. Exp. Biol. 206, 2715–2724 (2003)
117. Honegger, H.W.: A preliminary note on a new optomotor response in crickets: Antennal tracking of moving targets. J. Comp. Physiol. A 142, 419–421 (1981)
118. Horn, E., Bischof, H.J.: Gravity reception in crickets: the influence of cercal and antennal afferents on the head position. J. Comp. Physiol. A 150, 93–98 (1983)
119. Horseman, B.G., Gebhardt, M.J., Honegger, H.W.: Involvement of the suboesophageal and thoracic ganglia in the control of antennal movements in crickets. J. Comp. Physiol. A 181, 195–204 (1997)
120. Hu, J., Pratt, J., Chew, C., Herr, H., Pratt, G.: Adaptive virtual model control of a bipedal walking robot. In: Proc. IEEE Int. Symp. Intel. Sys., pp. 245–251 (1998)
121. Hustert, R.: Proprioceptor responses and convergence of proprioceptive influence on motorneurones in the mesothoracic thoraco-coxal joint of locusts. J. Comp. Physiol. 150, 77–86 (1983)
122. Hustert, R., Pflüger, H.J., Bräunig, P.: Distribution and specific central projections in the thorax and proximal leg joints of locusts. III. The external mechanoreceptors: The campaniform sensilla. Cell Tissue Res. 216, 97–111 (1981)
123. Imms, A.D.: On the antennal musculature in insects and other arthropods. Quart. J. Microsc. Sci. 81, 273–320 (1939)
124. Inman, V.T., Ralston, H.J., Todd, F.: Human walking. Williams & Wilkins, Baltimore (1981)
125. Jander, J.P.: Untersuchungen zum Mechanismus und zur zentralnervösen Steuerung des Kurvenlaufs bei Stabheuschrecken (Carausius morosus). Ph.D. thesis, University of Köln, Germany (1982)
126. Jander, J.P.: Mechanical stability in stick insects when walking straight and around curves. In: Gewecke, M., Wendler, G. (eds.) Insect Locomotion, pp. 33–42. Paul Parey, Hamburg (1985)
127. Jindrich, D.L., Full, R.J.: Many-legged maneuverability: Dynamics of turning in hexapods. J. Exp. Biol. 202, 1603–1623 (1999)
128. Jindrich, D.L., Full, R.J.: Dynamic stabilization of rapid hexapedal locomotion. J. Exp. Biol. 205, 2803–2823 (2002)
129. Kaliyamoorthy, S., Zill, S.N., Quinn, R.D., Ritzmann, R.: Finite element analysis of strains in the Blaberus cockroach leg segment while climbing. In: Intelligent Robots and Systems IEEE/RSJ Proceedings, vol. 2, pp. 833–838 (2001)
130. Kaneko, M., Kanayma, N., Tsuji, T.: Active antenna for contact sensing. IEEE Trans. Robot. Autom. 14, 278–291 (1998)
131. Karg, G., Breutel, G., Bässler, U.: Sensory influences on the coordination of two leg joints during searching movements of stick insects. Biol. Cybern. 64, 329–335 (1991)
132. Kawato, M., Gomi, H.: The cerebellum and VOR/OKR learning models. Trends Neurosci. 15, 445–453 (1992)
133. Kemmerling, S., Varju, D.: Regulation of the body-substrate-distance in the stick insect: Step responses and modelling the control system. Biol. Cybern. 44, 59–66 (1982)
134. Kendall, M.D.: The anatomy of the tarsi of Schistocerca gregaria Forskål. Z. Zellforsch. 109, 112–137 (1970)
135. Kindermann, T.: Behavior and adaptability of a six-legged walking system with highly distributed control. Adapt. Behav. 9, 16–41 (2002)
136. Kittmann, R., Dean, J., Schmitz, J.: An atlas of the thoracic ganglia of the stick insect, Carausius morosus. Phil. Trans. R. Soc. Lond. B 331, 101–121 (1991)
137. Knop, G., Denzer, L., Büschges, A.: A central pattern-generating network contributes to "reflex-reversal"-like leg motoneuron activity in the locust. J. Neurophysiol. 86, 3065–3068 (2001)
138. Krause, A.F., Dürr, V.: Tactile efficiency of insect antennae with two hinge joints. Biol. Cybern. 91, 168–181 (2004)
139. Land, M.F.: Stepping movements made by jumping spiders during turns mediated by the lateral eyes. J. Exp. Biol. 57, 15–40 (1972)
140. Lange, O., Reimann, B., Saenz, J., Dürr, V., Elkmann, N.: Insectoid obstacle detection based on an active tactile approach. In: Witte, H. (ed.) Proc. Int. Symp. Adapt. Motion Anim. Mach. (2005)
141. Lewinger, W.A., Harley, C.M., Ritzmann, R.E., Branicky, M.S., Quinn, R.D.: Insect-like antennal sensing for climbing and tunneling behavior in a biologically-inspired mobile robot. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2005), Barcelona, April 18-22 (2005)
142. Linder, C.: Self-organization in a simple task of motor control based on spatial encoding. Adaptive Behavior (2005)
143. Linder, C.R.: Self organisation in a simple task of motor control. In: Hallam, B., et al. (eds.) From animals to animats, vol. 7, pp. 185–194. MIT Press, Cambridge (2002)
144. Linsenmair, K.E.: Die Windorientierung laufender Insekten. Fortschr. Zool. 21, 59–79 (1973)
145. Manning, A.: Antennae and sexual receptivity in Drosophila melanogaster females. Science 158, 136–137 (1967)
146. Marchand, A.R., Leibrock, C.S., Auriac, M.C., Barnes, W.J.P., Clarac, F.: Morphology, physiology and in vivo activity of cuticular stress detector afferents in crayfish. J. Comp. Physiol. A 176, 409–424 (1995)
147. Matheson, T.: Range fractionation in the locust metathoracic femoral chordotonal organ. J. Comp. Physiol. A 170, 509–520 (1992)
148. Matheson, T., Field, L.H.: An elaborate tension receptor system highlights sensory complexity in the hind leg of the locust. J. Exp. Biol. 198, 1673–1689 (1995)
149. Müller-Wilm, U., Dean, J., Cruse, H., Weidemann, H.J., Eltze, J., Pfeiffer, F.: Kinematic model of stick insect as an example of a 6-legged walking system. Adapt. Behav. 1, 155–169 (1992)
150. Newland, P.L.: Morphology and somatotopic organisation of the central projections of afferents from tactile hairs on the hind leg of the locust. J. Comp. Neurol. 312, 493–508 (1991)
151. Newland, P.L., Emptage, N.J.: The central connections and actions during walking of tibial campaniform sensilla in the locust. J. Comp. Physiol. A 178, 749–762 (1996)
152. Noah, J.A., Quimby, L., Frazier, S.F., Zill, S.N.: Force detection in cockroach walking reconsidered: discharges of proximal tibial campaniform sensilla when body load is altered. J. Comp. Physiol. A 187, 769–784 (2001)
153. Noah, J.A., Quimby, L., Frazier, S.F., Zill, S.N.: Sensing the effect of body load in legs: responses of tibial campaniform sensilla to forces applied to the thorax in freely standing cockroaches. J. Comp. Physiol. A 190, 201–215 (2004)
154. Okada, J., Toh, Y.: The role of antennal hair plates in object-guided tactile orientation of the cockroach (Periplaneta americana). J. Comp. Physiol. A (2000)
155. Pearson, K.G.: Central programming and reflex control of walking in the cockroach. J. Exp. Biol. 56, 173–193 (1972)
156. Pearson, K.G.: Common principles of motor control in vertebrates and invertebrates. Ann. Rev. Neurosci. 16, 265–297 (1993)
157. Pearson, K.G., Franklin, R.: Characteristics of leg movements and patterns of coordination in locusts walking on rough terrain. Int. J. Robotics Res. 3, 101–112 (1984)
158. Pearson, K.G., Iles, F.J.: Nervous mechanisms underlying intersegmental co-ordination of leg movements during walking in the cockroach. J. Exp. Biol. 58, 725–744 (1973)
159. Pelletier, Y., McLeod, C.D.: Obstacle perception by insect antennae during terrestrial locomotion. Physiol. Entomol. 19, 360–362 (1994)
160. Petryszak, A., Fudalewicz-Niemczyk, W.: External proprioceptors on the legs of insects of higher orders. Acta Biologica Cracoviensia 36, 13–22 (1994)
161. Pflüger, H.J., Bräunig, P., Hustert, R.: The organization of mechanosensory neuropiles in locust thoracic ganglia. Phil. Trans. R. Soc. Lond. B 321, 1–26 (1988)
162. Porta, J.M., Celaya, E.: Efficient gait generation using reinforcement learning. In: Berns, K., Dillmann, R. (eds.) Proc. 4th Int. Conf. Climbing and Walking Robots (CLAWAR 2001), pp. 411–418. Professional Engineering Publishing, London (2001)
163. Pringle, J.W.S.: Proprioception in insects. II. The action of the campaniform sensilla on the legs. J. Exp. Biol. 15, 114–131 (1938)
164. Prochazka, A., Gillard, D., Bennett, D.J.: Implications of positive feedback in the control of movement. J. Neurophysiol. 77, 3237–3251 (1997a)
165. Prochazka, A., Gillard, D., Bennett, D.J.: Positive force feedback control of muscles. J. Neurophysiol. 77, 3226–3236 (1997b)
166. Quinn, R.D., Nelson, G.M., Bachmann, R.J., Ritzmann, R.E.: Toward mission capable legged robots through biological inspiration. Autonomous Robots 11, 215–220 (2001)
167. Radnikow, G., Bässler, U.: Function of a muscle whose apodeme travels through a joint moved by other muscles: why the retractor unguis muscle in stick insects is tripartite and has no antagonist. J. Exp. Biol. 157, 87–99 (1991)
168. Ridgel, A.L., Frazier, S.F., Dicaprio, R.A., Zill, S.N.: Active signaling of leg loading and unloading in the cockroach. J. Neurophysiol. 81, 1432–1437 (1999)
169. Ridgel, A.L., Frazier, S.F., Dicaprio, R.A., Zill, S.N.: Encoding of forces by cockroach tibial campaniform sensilla: implications in dynamic control of posture and locomotion. J. Comp. Physiol. A 186, 359–374 (2000)
170. Ridgel, A.L., Frazier, S.F., Zill, S.N.: Dynamic responses of tibial campaniform sensilla studied by substrate displacement in freely moving cockroaches. J. Comp. Physiol. A 187, 405–420 (2001)
171. Ritzmann, R.E., Pollack, A.J., Archinal, J., Ridgel, A.L., Quinn, R.D.: Descending control of body attitude in the cockroach Blaberus discoidalis and its role in incline climbing. J. Comp. Physiol. A 191, 253–264 (2005)
172. Roeder, K.D.: The control of tonus and locomotor activity in the praying mantis (Mantis religiosa L.). J. Exp. Zool. 76, 353–374 (1937)
173. Roggendorf, T.: Comparing different controllers for the coordination of a six-legged walker. Biol. Cybern. 92, 261–274 (2005)
174. Schilling, M., Cruse, H., Arena, P.: Hexapod walking: an expansion to Walknet dealing with leg amputations and force oscillations. Biological Cybernetics 96(3), 323–340 (2007)
175. Schmitt, J., Garcia, M., Razo, R.C., Holmes, P., Full, R.J.: Dynamics and stability of legged locomotion in the horizontal plane: a test case using insects. Biol. Cybern. 86, 343–353 (2002)
176. Schmitz, J.: Load-compensating reactions in the proximal leg joints of stick insects during standing and walking. J. Exp. Biol. 183, 15–33 (1993)
177. Schmitz, J., Bartling, C., Brunn, D.E., Cruse, H., Dean, J., Kindermann, T., Schumm, M., Wagner, H.: Adaptive properties of "hard-wired" neuronal systems. Verh. Dt. Zool. Ges. 88, 165–179 (1995)
178. Schmitz, J., Dean, J., Kindermann, T., Schumm, M., Cruse, H.: A biologically inspired controller for hexapod walking: simple solutions by exploiting physical properties. Biol. Bull. 200, 195–200 (2001)
179. Schmitz, J., Dean, J., Kittmann, R.: Central projections of leg sense organs in Carausius morosus (Insecta, Phasmida). Zoomorphol. 111, 19–33 (1991)
180. Schmitz, J., Kamp, A., Kindermann, T., Cruse, H.: Adaptations to increased load in a control system governing movements of biological and artificial walking machines. In: Blickhan, R., Nachtigall, W. (eds.) BIONA reports 13: Motor System (2000)
181. Schmitz, J., Schumann, K., Kamp, A.v.: Mechanisms for self-adaptation of posture and movement to increased load. Abstract Viewer/Itinerary Planner, Society for Neuroscience, Washington (CD-ROM, Program No. 368.7) (2000)
182. Schmitz, J., Stein, W.: Convergence of load and movement information onto leg motoneurons in insects. J. Neurobiol. 42, 424–436 (2000)
183. Schneider, A., Cruse, H., Schmitz, J.: A biologically inspired active compliant joint using local positive velocity feedback (LPVF). IEEE Trans. Systems Man Cybern. Part B: Cybernetics 35, 1120–1130 (2005a)
184. Schneider, A., Cruse, H., Schmitz, J.: Switched local positive velocity feedback controllers: Local generation of retraction forces and inter-joint coordination during walking. In: Witte, H. (ed.) 3rd International Symposium on Adaptive Motion in Animals and Machines (AMAM 2005), Ilmenau, September 25-30 (2005b)
185. Schumm, M., Cruse, H.: Control of swing movement: influences of differently shaped substrate. J. Comp. Physiol. A 192 (2006)
186. Seifert, G.: Entomologisches Praktikum, 2nd edn. Thieme, Stuttgart (1975)
187. Seipel, J.E., Holmes, P., Full, R.J.: Dynamics and stability of insect locomotion: a hexapedal model for horizontal plane motions. Biol. Cybern. 91, 76–90 (2004)
188. Sillar, K.T., Skorupski, P., Elson, R.C., Bush, B.M.H.: Two identified afferent neurones entrain a central locomotor rhythm generator. Nature 323, 440–443 (1986)
189. Spinola, S.M., Chapman, K.M.: Proprioceptive indentation of the campaniform sensilla of cockroach legs. J. Comp. Physiol. 96, 257–272 (1975)
190. Staudacher, E., Gebhardt, M.J., Dürr, V.: Antennal movements and mechanoreception: neurobiology of active tactile sensors. Adv. Insect Physiol. 32, 49–205 (2005)
191. Stein, W., Schmitz, J.: Multimodal convergence of presynaptic afferent inhibition in insect proprioceptors. J. Neurophysiol. 82, 512–514 (1999)
192. Stierle, I.E., Getman, M., Comer, C.M.: Multisensory control of escape in the cockroach Periplaneta americana. I. Initial evidence from patterns of wind-evoked behavior. J. Comp. Physiol. A 174, 1–11 (1994)
193. Strauss, R., Heisenberg, M.: Coordination of legs during straight walking and turning in Drosophila melanogaster. J. Comp. Physiol. A 167, 403–412 (1990)
194. Ting, L.H., Blickhan, R., Full, R.J.: Dynamic and static stability in hexapedal runners. J. Exp. Biol. 197, 251–269 (1994)
195. Tryba, A.K., Ritzmann, R.E.: Multi-joint coordination during walking and foothold searching in the Blaberus cockroach. I. Kinematics and electromyograms. J. Neurophysiol. 83, 3323–3336 (2000)
196. Tsujimura, T., Yabuta, T.: A tactile sensing method employing force/torque information through insensitive probes. In: Proc. IEEE Int. Conf. Robotics Automation 1992, pp. 1315–1320 (1992)
197. Tyrer, N., Gregory, G.: A guide to the neuroanatomy of locust suboesophageal and thoracic ganglia. Phil. Trans. R. Soc. Lond. B 297, 91–123 (1982)
198. Ueno, N., Svinin, M.M., Kaneko, M.: Dynamic contact sensing by flexible beam. IEEE/ASME Trans. Mechatronics 3, 254–264 (1998)
199. Watson, J.T., Ritzmann, R.E.: Leg kinematics and muscle activity during treadmill running in the cockroach, Blaberus discoidalis. I. Slow running. J. Comp. Physiol. A 182, 11–22 (1998)
200. Watson, J.T., Ritzmann, R.E., Zill, S.N., Pollack, A.J.: Control of obstacle climbing in the cockroach, Blaberus discoidalis. I. Kinematics. J. Comp. Physiol. A 188, 39–53 (2002)
201. Wendler, G.: Laufen und Stehen der Stabheuschrecke Carausius morosus: Sinnesborstenfelder in den Beingelenken als Glieder von Regelkreisen. Z. vergl. Physiol. 48, 198–250 (1964)
202. Zajac, F.E.: Muscle and tendon: properties, models, scaling, and application to biomechanics and motor control. Critical Reviews in Biomed. Eng. 17, 359–411 (1989)
203. Zeil, J., Sandeman, R., Sandeman, D.C.: Tactile localisation: the function of active antennal movements in the crayfish Cherax destructor. J. Comp. Physiol. A 157, 607–617 (1985)
204. Zill, S., Schmitz, J., Büschges, A.: Load sensing and control of posture and locomotion. Arthropod Structure & Development 33, 273–286 (2004)
205. Zill, S.N.: Load compensatory reactions in insects: Swaying and stepping strategies in posture and locomotion. In: Beer, R.D., Ritzmann, R.E., McKenna, T. (eds.) Biological Neural Networks in Insect Neuroethology and Robotics, pp. 43–68. Academic Press, New York (1993)
206. Zill, S.N., Moran, D.T.: The exoskeleton and insect proprioception. III. Activity of tibial campaniform sensilla during walking in the American cockroach, Periplaneta americana. J. Exp. Biol. 94, 57–75 (1981a)
207. Zill, S.N., Moran, D.T.: The exoskeleton and insect proprioception. I. Responses of tibial campaniform sensilla to external and muscle-generated forces in the American cockroach, Periplaneta americana. J. Exp. Biol. 91, 1–24 (1981b)
208. Zill, S.N., Moran, D.T., Varela, F.G.: The exoskeleton and insect proprioception. II. Reflex effects of tibial campaniform sensilla in the American cockroach, Periplaneta americana. J. Exp. Biol. 94, 43–55 (1981)
209. Zill, S.N., Underwood, M.A., Rowley, J.C., Moran, D.T.: A somatotopic organization of groups of afferents in insect peripheral nerves. Brain Res. 198, 253–269 (1980)
210. Zollikofer, C.P.E.: Stepping patterns in ants. I. Influence of speed and curvature. J. Exp. Biol. 192, 95–106 (1994)
211. Zolotov, V., Frantsevich, L., Falk, E.M.: Kinematik der phototaktischen Drehung bei der Honigbiene Apis mellifera. J. Comp. Physiol. A 97, 339–353 (1975)
3 Low Level Approaches to Cognitive Control

B. Webb¹, J. Wessnitzer¹, H. Rosano¹, M. Szenher¹, M. Zampoglou¹, T. Haferlach¹, and P. Russo²

¹ Institute for Perception, Action and Behaviour, University of Edinburgh
  [email protected], [email protected]
² Department of Electrical, Electronic and System Engineering, University of Catania, I-95125 Catania, Italy
  [email protected]
Abstract. This chapter describes progress towards the building of a biologically inspired cognitive artifact. It first discusses the implementation of individual perception-action links based on insect capabilities, and then how these may be combined in more complex, multimodal control. We describe several neural network implementations of insect-based methods of navigation, and present preliminary results of modelling associative learning capabilities based on the insect mushroom bodies.
3.1 Introduction

In Chapter 1 we addressed the issue of active perception from psychological and biological perspectives. We suggested that it is most productive to consider it as the task-specific transformation of sensory input into action. This not only implies action-oriented processing (such as matched filtering and the extraction of affordances) but also involves closed loops in which actions continually change the input. 'Proto'-cognitive behaviours involve interactions between loops, learnt substitution of one loop for another, internal loops, and predictive loops: internal representations may be introduced at this level in the form of forward models. Higher cognitive capabilities such as planning may require more complex forms of internal representation and memory. We observed that nearly all these issues are relevant to insect behaviours, such as their ability to combine different sensorimotor loops smoothly, to learn, to navigate, to control a complex multi-degree-of-freedom body, and so on. Insects thus constitute a useful model for understanding many basic mechanisms in the context of perception for action. We proposed a general architecture for achieving cognition in robots, based on the insect brain. The main features are the use of parallel sensorimotor transformation loops that control individual behaviours, coordinated and modulated by higher-level loops that involve multimodal inputs, learning and prediction. We noted in particular the neural architecture of the mushroom bodies as an interesting insect biological system to explore for learning capabilities. This chapter provides more biological background and describes the development of models for pre-cognitive and proto-cognitive capabilities, derived directly from
insect neurobiology and implemented on robots. There is a hierarchy of behaviours to be discussed:

• Basic pre-wired behaviours, e.g. sound localisation and visual reflexes; these may still involve specialised filtering, classification, active perception, some adaptability, and the solution of significant motor control problems.

• Multimodal and protocognitive behaviours, e.g. navigation behaviours such as homing and path integration, which must smoothly combine multiple sensors and require memory, but do not need a cognitive map.

• Associative learning, e.g. of signals predicting other stimuli, enabling anticipatory behaviour, substitution of one sensory control for another (e.g. visual guidance towards a sound source), and cancellation of expected re-afference by association with efference copy.
3.2 Sensory Systems and Simple Behaviours

3.2.1 Mechanosensory Systems
Mechanoreceptors are distributed over the whole body of most insects and, depending on their position, may serve a variety of sensory functions: proprioception, equilibrium, gravity sensing, perception of air currents, audition, etc. Two major mechanosensory appendages, the antennae and the cerci, are briefly described in this section, with particular attention to their role in behaviour and possible multimodal interactions.

3.2.1.1 Antennae

All insects have specialised sensors on their heads in the form of antennae. The antennae of the cockroach Periplaneta americana are highly segmented, consisting of more than 150 segments, each covered with hard cuticle. The antennae are flexible at the intersegmental folds and highly sensitive to touch, as well as carrying many chemical receptors. Muscle control allows the cockroach to sweep its antennae (sometimes extending up to 1.3 times the body length [10]) independently. Antennae are thus active tactile sensors, and their potential use in robot locomotion is discussed in [20], where a tight coupling between the stick insect's walking rhythm and the rhythmic movement of the antennae is observed. It is argued that long antennae allow insects to actively search for obstacles in their path; the active movement increases the probability of detecting obstacles and odours. Tactile learning in the honeybee has been reported in [25], showing that honeybees can, through antennal exploration, learn and discriminate the physical characteristics of surfaces. Antennal tracking responses to horizontally moving objects in the visual field of the cricket were reported in [58]. Tactile stimulation of the antennae can cause escape responses in cockroaches and crickets, and their role in steering behaviour and wall-following is discussed in [10]. Receptors from the antennae innervate the deutocerebrum, which consists of two distinct neuropils, the antennal lobe and the antennal mechanosensory motor centre [57], and links olfactory and mechanosensory inputs, integrating cues from these modalities [181]. Descending neural pathways from the antennal lobe to thoracic motor centres
play an important role in escape responses [7]. The antennal wall-following and escape system of the cockroach Periplaneta americana is modelled on a robot in [12, 13]. In [178], the integrated roles of visual and mechanosensory cues during antennal escape responses are investigated.

3.2.1.2 Cerci

Escape responses can be elicited by many different sensory modalities, visual or mechanosensory. In some groups of insects, the abdomen bears a pair of terminal appendages called the cerci, carrying hair cells sensitive to air flow, particularly the air front of a pouncing predator. The circuitry known to control this wind-mediated escape behaviour lies in the thoracic ganglia, abdominal ganglia, and abdominal nerve cord. Although it has been shown that decapitated cockroaches are still able to exhibit escape responses, descending inputs from the brain or subesophageal ganglion play a role in modulating the thoracic circuitry [130]. As might be expected for an escape response, it is fast: the cricket needs only on the order of 60–100 ms to detect the stimulus and to compute and execute the avoidance behaviour. This wind-mediated escape has been investigated and modelled on a robot in [12].

In the cricket, hundreds of tiny filiform hairs populate each cercus. These filiform hairs are mechanoreceptors that are highly sensitive to the direction and dynamics of air currents. Each filiform hair is tuned to a particular range of wind directions and frequencies, deflection causing firing in the associated sensory neuron. The axons associated with each receptor neuron enter the Terminal Abdominal Ganglion (TAG), more specifically a neuropil called the cercal glomerulus. Here they form a continuous map of stimulus direction [69], i.e. they arborise in specific locations according to their directional tuning properties. Computation of stimulus direction by TAG interneurons can be explained by examining the anatomical overlap between their dendritic trees and the map formed by the sensory afferents [68, 104, 69]. The activity of the cercal sensory system is determined not only by the excitation of receptor cells but also by central mechanisms controlling information transmission depending on behavioural context [81], e.g. inhibition during flight. Interneurons project their axons from the cercal system towards motor neurons in the thoracic ganglia. In the cricket, four giant interneurons (GIs) effectively monitor wind stimuli from the quadrants of the animal's surroundings, each being receptive over roughly 90 degrees. There are other GIs, as well as non-giant interneurons, responsive to narrower ranges of the wind velocity spectrum.

The concepts of sensory maps and minimalist representations are well illustrated by the cercal system. The sensory field of the insect is represented as a map of wind directions within the cercal glomerulus, i.e. a functional map rather than a topographic projection: the afferents are structurally reorganised according to stimulus/response characteristics, independently of the relative receptor locations on the periphery. Unlike the retinotopic organisation of visual systems, the loss of a localised region of receptors on the cerci would not lead to a blind spot in the representation of the stimulus space. Interestingly, Kanou et al. [72] have shown that crickets can adapt the directionality of their cercal escape response after damage only if they have experienced reafferent input during free movement.
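The directional coding performed by the four GIs can be caricatured with a standard population-vector scheme: four broadly tuned channels whose preferred directions span the azimuth, with the stimulus direction recovered as the angle of the response-weighted vector sum. The rectified-cosine tuning and the decoding rule below are generic textbook assumptions, not a model of the actual interneuron biophysics.

```python
import numpy as np

# Hypothetical preferred directions of four 'giant interneurons',
# one per quadrant of the animal's surroundings
prefs = np.deg2rad([45.0, 135.0, 225.0, 315.0])

def gi_responses(wind_dir_deg):
    """Broad (rectified-cosine) tuning: each channel responds strongly
    over roughly 90 degrees around its preferred direction."""
    theta = np.deg2rad(wind_dir_deg)
    return np.maximum(np.cos(theta - prefs), 0.0)

def decode_direction(responses):
    """Population vector: weight each preferred direction by its channel's
    firing rate and take the angle of the resultant vector."""
    vec = np.sum(responses[:, None] *
                 np.stack([np.cos(prefs), np.sin(prefs)], axis=1), axis=0)
    return np.rad2deg(np.arctan2(vec[1], vec[0])) % 360.0

for wind in [0, 30, 100, 200, 290]:
    est = decode_direction(gi_responses(wind))
    print(f"wind {wind:3d} deg -> decoded {est:6.1f} deg")
```

With this geometry the decoded angle matches the stimulus angle, illustrating how a few coarse, overlapping channels can nevertheless support fine directional resolution.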
3.2.2 Olfactory Systems
Odours play a very important role in insect behaviour, e.g. in courtship and the tracking of sex-attractant pheromones, and in the search for food. Many insects are capable of following odour gradients despite turbulence in the air flow, which causes odour concentrations to fluctuate. A behavioural strategy called anemotaxis helps animals to navigate towards odour sources by moving upwind, using hair sensors to detect the wind direction and visual information to stabilise their course [2]. Odours are highly dispersed in the air, and the task of olfaction is the perception, identification and discrimination of odours. What makes this sensory modality different from others is the multi-dimensionality of odour space: odours are composed of a mix of odourants, resulting in virtually infinite combinations, and yet insects can learn and recognise specific odours and combinations thereof with great accuracy, even at low concentrations [37]. The task of olfaction is one of (i) classifying similar odours as the same, (ii) discriminating distinct odours from one another, and (iii) classifying odours carrying the same meaning (e.g. reward) [62]. The challenge is to understand the computational rules or algorithms used by the brain to encode, store and retrieve these complex and multidimensional stimuli. Interestingly, it appears that similar solutions to these problems in olfaction may have been adopted by different species and phyla, including vertebrates [54, 147].

The problem of constructing neural representations for odours is one of mapping a multidimensional odour space onto a two-dimensional sensory surface and thence the antennal lobe [82, 83]. Odourant receptor neurons (ORN) form the sensory surface, but their layout is neither receptotopic nor chemotopic. Different ORNs are tuned to different odourants, but the tuning is fairly broad, resulting in combinatorial activation patterns across several types of ORN. The ORN axons project into the antennal lobe (AL) of the deutocerebrum. From there, interneurons connect to higher centres in other parts of the brain, particularly the lateral protocerebral lobe and the mushroom bodies. The antennal lobe is further divided into areas processing very specific olfactory inputs, such as sex pheromones, plant and food odours, or CO2. Evidence of sex-specific neuroanatomical differences in the antennae and the AL for pheromone detection again illustrates the concept of matched filters [1]. The way odours are mapped in the brain is through the convergence of all ORN axons expressing the same OR gene onto the same glomerulus (or sometimes the same few glomeruli) [36].

In [35, 116], odour representation is thought of as spatio-temporal patterns in the antennal lobe which are distributed and combinatorial, and are not revealed simply by looking at mean firing rates [153]. The AL consists of two main types of neurons, excitatory projection neurons (PNs) and inhibitory local neurons (LNs). LNs are thought to synchronise groups of PNs by periodic inhibitory inputs. Odour-evoked synchronisation of subpopulations of neurons in the antennal lobe leads to oscillations in the field potential [97, 95], and these oscillations appear to be required for fine odour discrimination. Synchronisation of projection neurons may help to stabilise the odour response, but to what extent the temporal patterns themselves code for odour properties, reflect binding of the activated neuron population, optimisation of the response, or dynamic stimuli, remains to be seen [83, 93].
Fig. 3.1. Schematic representation of the olfactory processing pathway. (a) In the locust, ∼50000 ORN axons converge onto glomeruli, where the ∼830 projection neurons (PN) and ∼300 local neurons (LN) of the antennal lobe (AL) generate activation patterns feeding the ∼50000 Kenyon cells (KC) of the mushroom bodies (MB). The mushroom bodies are thought of as higher olfactory brain centres. (b) Schematic view of the interactions between local and projection neurons in the antennal lobe.
Although we do not model olfactory sensing in the present work, we note that this system illustrates many general principles of sensory pathways, e.g. feedforward and feedback inhibition, divergence and convergence, which are important for understanding neural coding [65]. The olfactory system can be considered a system for pattern classification operating in noisy, high-dimensional and rapidly changing environments. The network dynamics progressively converge (over repeated odour samplings) onto more easily classifiable representations by relating information across glomeruli and PNs [145]. The main features, as summarised in [120], are:
• normalisation and dynamic range compression of the input,
• selective amplification of foreground odourants against complex chemical backgrounds,
• compression and storage of information that defines classes of behaviourally relevant odours,
• construction of a transmission signal indicating the presence or absence of an identified odourant,
• enhancement of the signal-to-noise ratio,
• modification of the selectivity to form new classes by associative learning.
Various attempts have been made at modelling the complex spatio-temporal dynamics (e.g. [120, 121, 96, 122]), and this sensory modality is also increasingly exploited in robotics (e.g. [73, 64, 44]).
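Several of the coding principles described in this section, namely broad ORN tuning, convergence of all same-type ORNs onto a common glomerulus, and the resulting combinatorial glomerular patterns, can be made concrete in a toy Python model. Everything below is an illustrative assumption (the numbers of receptor types and neurons, the gamma-distributed tuning, the Poisson spiking) rather than data from any species.

import numpy as np

rng = np.random.default_rng(0)

N_RECEPTOR_TYPES = 20      # assumed number of OR genes / glomeruli
N_ORN_PER_TYPE = 50        # many ORNs of each type converge on one glomerulus
N_ODOURANTS = 100          # dimensionality of a toy "odour space"

# Broad, overlapping tuning of each receptor type across odourants
# (an assumption standing in for measured ORN response spectra).
tuning = rng.gamma(shape=2.0, scale=0.5, size=(N_RECEPTOR_TYPES, N_ODOURANTS))

def glomerular_pattern(odour):
    """Map an odour (mixture vector over odourants) to glomerular activation.

    Each ORN responds noisily according to its type's tuning; all ORNs of a
    type converge onto the same glomerulus, so averaging across them
    cancels receptor noise, one suggested benefit of the convergent wiring.
    """
    drive = tuning @ odour                      # per-type mean response
    orn = rng.poisson(np.outer(drive, np.ones(N_ORN_PER_TYPE)))
    return orn.mean(axis=1)                     # one value per glomerulus

# Two different sparse odour mixtures give distinct combinatorial patterns
odour_a = rng.random(N_ODOURANTS) * (rng.random(N_ODOURANTS) < 0.05)
odour_b = rng.random(N_ODOURANTS) * (rng.random(N_ODOURANTS) < 0.05)
pa, pb = glomerular_pattern(odour_a), glomerular_pattern(odour_b)
corr = np.corrcoef(pa, pb)[0, 1]
print(f"pattern overlap (correlation) between the two odours: {corr:.2f}")

The low pattern correlation between the two random odours illustrates the discriminability of the combinatorial code, while the averaging across converging ORNs of each type shows how the wiring itself improves the signal-to-noise ratio.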
3.2.3 Visual Systems
3.2.3.1 Background

Many important behaviours in an insect’s repertoire depend on vision, particularly on the detection of visual motion [139]: the optomotor response; the landing response; orientation towards small, fast-moving objects against the background [22]; distance measurement from parallax, including peering behaviour (moving the head from side to side); the centering response; and the visual regulation of flight speed. The optomotor response of flies estimates self-rotation from visual motion information and generates a compensatory torque to maintain flight stability. The centering response is the behaviour whereby bees flying through a narrow gap tend to fly through its centre [139], by balancing the optic flow on each side. Perception of visual cues also plays an important role in navigation strategies [38]. Path integration is simplified by adjusting flight speed so as to keep the optic flow constant [15]. Not only do insects use optic flow to measure distance [26], but pursuit behaviour provides compelling evidence that these animals are tuned to pick up other small moving objects in the environment. There is even evidence of predatory insects concealing their own movements whilst stalking other insects, moving in such a way as to appear stationary to their prey [140]. A fly’s compound eyes are made up of facets, or ommatidia. An ommatidium contains a facet lens and 8 photoreceptors arranged in a characteristic pattern: two at the centre, one on top of the other, and the other six placed around them. The number of ommatidia varies with species. The spatial resolution of an insect’s compound eyes depends not only on the number of ommatidia but also on the angular differences between the ommatidia themselves and between the photoreceptive elements within each ommatidium [89, 91]. The optic lobes are a large pair of structures on either side of the brain, adjacent to the ommatidia which make up the retina, as shown in Fig. 3.2. Each optic lobe consists of four distinct parts: the lamina, the outer and inner medulla, and the lobula. Visual images received by the facet eyes are retinotopically projected onto these underlying neuropils, i.e. the neighbourhood relationships are maintained. Photoreceptors in the retina signal temporal deviations from the ambient light level and feed these to the next layer of cells, the lamina. Lamina cells are thought to emphasise temporal change. Due to their small size, the medulla neuropils have not been investigated with respect to their response characteristics [16], but indirect evidence suggests that local detection of motion (between adjacent ommatidia) is performed there [6]. Franceschini et al. showed that stimulating just two photoreceptors in turn produces a detectable signal in subsequent neurons [28]. This first processing stage has been described computationally by the Hassenstein-Reichardt model [43], although it has been
Fig. 3.2. Schematic cross-section through a fly’s head illustrating the main components of the visual system. Adapted from [6].
suggested that what is known of the neuronal structure connecting the photoreceptors to the lobula plate tangential cells (LPTCs) does not equate to this model [53, 51, 52]. Higgins and co-workers have attempted to derive the functional organisation of a subset of identified retinotopic neurons feeding into the lobula plate. Their proposed model exhibits output properties that are consistent with current anatomical and physiological observations of real neurons, and also very close to the properties of the original Hassenstein-Reichardt model. The lobula plate, a posterior part of the lobula, hosts the tangential cells, complex motion-sensitive neurons. At this point in the optic lobe the retinotopic structure is abandoned, and information converges onto these tangential cells: with their large dendritic trees, the LPTCs pool the outputs of the many retinotopically organised local motion elements. Some may be more responsive to translational, vertical or horizontal motion, or indeed to highly complex visual flow fields [84]. LPTCs have been subdivided into various subclasses according to their response characteristics [6]: (1) preferred orientation (i.e. whether they respond to vertical or horizontal image motion); (2) prevalent electrical response mode, i.e. spiking or non-spiking; (3) projection area (e.g. to the contralateral brain hemisphere or to the ipsilateral side); and (4) spatial integration properties (i.e. whether the response increases as moving visual patterns grow, or whether they respond more to small moving patterns) [22]. Synaptic interactions between these integrating neurons increase optic flow specificity [59]. The LPTCs act as matched filters for the optic flow patterns produced by body rotations [85], but also for detecting smaller objects against the background [78]. Models of the fly’s visual system can be found, for example, in [74, 75, 76, 6]. Some LPTCs have been linked to specific visually guided behaviours [22]. Flight trajectories of flies tend to consist of straight flight sequences with abrupt changes
in heading called saccades. Expansion of the optic flow triggers saccades away from approaching visual features [152]. Tammero and Dickinson reported that flies exhibit fixed-amplitude saccades of ±90 degrees and that the response is directed away from the side whose optic lobe experiences the greater amount of visual motion. Saccadic movements are important for a variety of reasons [90]. First, resolution is lost if rotation blurs the image. Second, moving objects are easier to see if the retinal image of the background is stationary. Third, it becomes easier to obtain heading and distance from the flow fields resulting from pure translation (Fig. 3.3).
Fig. 3.3. Flow field patterns on an animal’s retina resulting from (a) pure translation, (b) pure rotation and (c,d) from combinations of both.
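Returning to the first processing stage discussed above, the Hassenstein-Reichardt scheme is easy to state computationally: each receptor’s delayed signal is multiplied with the undelayed signal of its neighbour, and the two mirror-symmetric products are subtracted. The Python sketch below is a minimal discrete-time version; the first-order low-pass used as the delay stage, the time constant, the sampling rate and the stimulus are all assumptions, not the fly’s parameters.

import numpy as np

def lowpass(x, dt, tau):
    """First-order low-pass filter, a stand-in for the EMD delay stage."""
    y = np.zeros_like(x)
    a = dt / tau
    for t in range(1, len(x)):
        y[t] = y[t - 1] + a * (x[t] - y[t - 1])
    return y

def reichardt_emd(left, right, dt=1e-3, tau=0.05):
    """Hassenstein-Reichardt correlation detector: each receptor's delayed
    (low-passed) signal is multiplied with the neighbour's direct signal,
    and the two mirror-symmetric products are subtracted."""
    return lowpass(left, dt, tau) * right - lowpass(right, dt, tau) * left

# A drifting sinusoidal brightness pattern seen by two receptors; the sign
# of the phase lag between them encodes the direction of motion.
dt, f = 1e-3, 2.0                     # assumed sampling step and temporal freq
t = np.arange(0.0, 2.0, dt)
phase_lag = 0.5                       # radians; set by velocity and spacing
for direction, lag in (("rightward", +phase_lag), ("leftward", -phase_lag)):
    left = 1.0 + np.sin(2 * np.pi * f * t)
    right = 1.0 + np.sin(2 * np.pi * f * t - lag)
    out = reichardt_emd(left, right, dt)
    print(f"{direction:9s} motion -> mean EMD output {out[500:].mean():+.3f}")

The sign of the time-averaged output flips with the direction of motion, which is exactly the property the LPTCs are assumed to pool over many such local detectors.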
As well as the main visual pathway, a number of other specialised visual sensory systems are found in insects. The fruitfly Drosophila has been shown to have specialised ommatidia for detecting polarised light, situated at the dorsal rim area of the compound eye [86], as do other insects including bees, crickets and locusts. The ocelli, an insect’s simple eyes, can only detect changes in light intensity over their large visual fields; they are used for controlling pitch and roll deviations [107] and are sometimes adapted for horizon detection [142]. This links back to the ideas of affordances and matched filters discussed in chapter 1: complex sensory stimuli are transformed by very specific arrangements of sensory pathways, tuned to specific visual features. Many visual behaviours involve integration with other sensory systems. Gaze control in the blowfly Calliphora is known to process several mechanosensory and visual cues
to keep balance during flight and to align the eyes with respect to the surroundings [49]. Frye and co-workers report on the importance of visuo-olfactory sensory integration for odour source localisation [33, 31]. In another paper, Frye and Dickinson report how the motor output reflects the linear superposition of visual and olfactory inputs [32]. There can also be substantial flexibility in visual ‘reflexes’. Evidence for operant loops in flies, allowing the flexibility to use and vary different motor outputs for visual steering behaviour, can be seen in [174, 175]: Drosophila can learn to control its flight course and contextualise sensory information in order to avoid punishment or receive reward.

3.2.3.2 Integrating Visual Control into Walking Controllers

Many of the visual behaviours described above require a turning response from the animal. While visual behaviour has been more extensively studied in flying insects, we are particularly interested in its application to walking robots based on six-legged insects. Assuming the visual system has supplied a correction or target direction for the animal to pursue, executing that response involves solving a complex 18 degree-of-freedom problem, controlling each of the three joints on six legs. It is not clear to what extent the precise trajectories have to be calculated by each leg, or how each thoracic segment (front, middle and hind legs) might interpret turning signals generated locally or by the brain. Modulation of leg coordination and leg trajectories during turns has previously been analysed using the insect’s response to a continuous visual flow [21]. In that experiment, insects were fixed on top of a movable ball in the middle of a rotating visual scene. It was suggested [21] that the front legs respond first to the visual stimulus, and that this might trigger the response of the other legs. Here we report on the results of experiments and models that explore the body trajectory and the different roles of the legs on different thoracic segments when a freely walking insect spontaneously turns towards a visual target. A complete account of these results is provided in [127]. Stick insects are known to be attracted to bush-shaped objects and to be strongly stimulated by vertical edges [71]. Adult stick insects (Carausius morosus) were placed in an arena (67 cm by 177 cm) surrounded by white walls (50 cm tall) to eliminate external visual stimuli. The visual target consisted of a black bar, 4.5 cm wide and 60 cm tall. The target was placed within the insect’s visual field in a different direction to its current heading, and no more than 30 cm away. The animal would reliably respond by turning to walk in this direction (Fig. 3.4). Just before the insect reached it, the target was quickly removed vertically and then placed in a different position, again no more than 30 cm away, inducing another turn. This could be repeated around 10 times before the insect shifted its attention to the walls or ceased walking. A second experiment used the same paradigm, but with the front tarsi of the stick insect temporarily blocked with dried paint. This prevents gripping of the substrate, minimising the directional forces of the front leg movements and thus revealing more clearly the contributions of the middle and rear legs to turning. A total of 24 turns for the intact insect and 19 turns for the insect with blocked tarsi were combined for the analysis. The turns were normalised to start in direction zero, to turn to a target angle of ‘one’, and to take the same time to complete.
The average time to complete a turn was 60 frames (2.4 seconds). The variables studied are shown in Fig. 3.10: the body angle θB at time t; and the direction of movement of the prothorax
θP and metathorax θM at time t, calculated with respect to a point in time n frames ahead of the current position, typically 10 frames ahead. This lag removed noise and intrinsic oscillations of the stick insect, but still tended to smooth fast changes in angle, particularly affecting those changing more abruptly, like θP .
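A minimal Python version of this direction estimate is sketched below; the lookahead of n = 10 frames follows the description above, while the toy trajectory and everything else are assumptions.

import numpy as np

def movement_direction(xy, n=10):
    """Direction of movement of a tracked point (e.g. the prothorax).

    The direction at frame t is taken from the displacement between the
    current position and the position n frames ahead, as in the analysis
    described above; larger n suppresses tracking noise and the insect's
    intrinsic oscillations but smooths abrupt changes in angle.
    """
    xy = np.asarray(xy, dtype=float)
    delta = xy[n:] - xy[:-n]
    return np.degrees(np.arctan2(delta[:, 1], delta[:, 0]))

# Toy trajectory: straight walk, then a sudden 90-degree turn
path = np.concatenate([np.column_stack([np.arange(30), np.zeros(30)]),
                       np.column_stack([np.full(30, 29.0), np.arange(1, 31)])])
theta = movement_direction(path, n=10)
print(theta[:5], theta[-5:])   # ~0 deg before the turn, ~90 deg after

On the toy path the estimate is 0 degrees before the turn and 90 degrees after, with the transition smoothed over the n-frame window.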
Fig. 3.4. Representative example of the paths stick insects follow when attracted to a black vertical bar. A zoomed section of a turn, shown on the right, represents a typical turn: here, the prothorax direction suddenly changes by almost 90 degrees. From [127] (Fig. 3).
Observed turning behaviour in the stick insect

Fig. 3.5 shows, in the upper three plots, the direction followed by the body, prothorax and metathorax during a turn. For comparison, the means are plotted together on the lower left graph. Despite the smoothing, it can still be clearly seen that at the beginning of the turn the prothorax direction θP changes within just a few time steps, and very early during the turn it is pointing towards the target. The direction of the front legs during stance, relative to the target, is also plotted in Fig. 3.5: it can be seen that the inner front leg in particular pulls towards the target. The metathorax, on the other hand, follows a smoother transition, similar to that of the body itself. The point of rotation of the body was calculated at each moment during the sequence, for rotations larger than 1 degree. Fig. 3.6 shows a normalised plot of the rotation positions. Most rotations accumulated between the mesothorax and the metathorax, laterally displaced from the body towards the positions where the legs contact the ground. These data suggest that the specific movements of the stick insect’s legs during turns result in the prothoracic segment following mostly straight lines, pointing most of the time towards the target, whereas the mesothorax and metathorax tend to follow curves, with a rotation point between the inner legs of the metathorax and mesothorax. This can also be seen in Fig. 3.4 and Fig. 3.15. Fig. 3.7 shows two typical turns made by the stick insect when the front tarsi are blocked and hence can make only a minimal contribution to the turn. Fig. 3.8 shows results analogous to those in Fig. 3.5, except for the middle bottom plot, which instead of showing front leg trajectories shows body directions for smooth turns only. For most sharp turns (bottom left) the body pointed to the target while the prothorax was still
Fig. 3.5. Ethological results. The bottom left plot shows the mean direction of the prothorax θP (solid line), the metathorax θM (dotted line) and the body θB (dashed line). The top left plot shows the progress of θB in more detail, with the standard deviation shown by the error bars; the top middle is that of the prothorax θP and the top right the metathorax direction θM. The bottom middle shows the direction the front legs follow relative to the initial heading, i.e. θB + θL; both legs are shown, inner front leg (◦) and outer (∗). The bottom right shows the changes in speed of the prothorax (◦) and metathorax (∗) relative to their velocity before the turn. From [127] (Fig. 5).
Fig. 3.6. A cumulative plot of the point of rotation during turns. The three black asterisks (∗) represent the three thoracic segments; the prothorax is encircled. Turns are to the right. From [127] (Fig. 6).
Fig. 3.7. Targeting with front leg tarsi blocked. From [127] (Fig. 8).
moving laterally; the prothorax therefore ends at a slightly larger angle than the body. For smooth turns the prothorax deviated only slightly from the body and metathorax directions, as shown in the middle bottom plot. In contrast with the results for the intact insect, the prothorax does not point consistently towards the target. The rotation point shown in Fig. 3.9 is the average over all turns. In sharp turns the inner hind leg is often arrested in position, or even moved in reverse; the rotation point in these situations moves almost on top of the insect’s body.

Analysis of turning behaviour

The individual leg trajectories used to achieve the body trajectories described above vary considerably between thoracic segments. Moreover, the individual leg speeds on the two sides of the turn differ greatly. The problem we address in the following section is how this complex pattern of leg behaviour might be achieved with a simple control model. Typically, turning in six-legged robots is controlled by simple techniques, similar to those for wheeled robots, that create a difference in speed between the two sides, which determines a point of rotation. If w is the distance between the wheels, vo the speed on the outer side of the turn and vi the speed on the inner side, the distance to the point of rotation is given by R = vo w/(vo − vi). The closer this point of rotation is to the centre of the robot, the sharper the turn. For legged robots there are alternative ways to control this difference in speed, for instance increasing the stepping frequency on one side. However, we are motivated to produce trajectories like those seen in the insect, and describing this motion in terms of rotations does not fit naturally. Rather, we focus on describing the trajectory of the prothorax, because it represents the most salient feature of stick insect turns: the prothorax tends to follow straight-line trajectories, i.e. θ̇η = 0 for the prothorax. This can be described by:
θη = arctan(tan(φ)/η),    θ̇B = |vP| sin(φ)/R    (3.1)
where η is the relative distance to the point of rotation R along the body, being 1 for the prothorax and 0 for the point of rotation. The left hand side of Fig. 3.10 shows these
Fig. 3.8. Ethological results for the insect when the front tarsi are blocked. The bottom left and bottom middle plots show the mean direction of the prothorax θP (solid line), the metathorax θM (dotted line) and the body θB (dashed line); the bottom middle shows results only for smooth trajectories (n=4). The top left shows the progress of θB in more detail, with the standard deviation shown by the arrows; the top middle is that of the prothorax θP and the top right the metathorax direction θM. The bottom right shows the changes in speed of the prothorax (◦) and metathorax (∗) relative to their velocity before the turn. From [127] (Fig. 7).
Fig. 3.9. A cumulative plot of the point of rotation during turns. The three black asterisks (∗) represent the three thoracic segments; the prothorax is encircled. Turns are to the right. From [127] (Fig. 9).
variables with respect to the target. For the prothorax η = 1; therefore its direction is always that of the target given the current position, θP = θη + θB = φ + θB = θT. Transforming this model of body motion into leg trajectories v is fairly straightforward: if L is the distance from the coxa to the tarsus, then it follows that
Fig. 3.10. Left: θT is the angle to the target, θB is the body angle and φ is the relative angle to the target. Right: the prothorax is translated by Bη=1 and rotated by ∆θB around η = 0. Bη is the translation followed by other points along the body. From [127] (Fig. 11).
v(θ̇B, θη) = [−Ly, Lx] θ̇B − θη. However, this equation for v implies that, in a kinematic model, all legs need to calculate at each point how much the body needs to rotate, and the rear legs need to know their position relative to the point of rotation. Furthermore, the rear leg direction will depend not only on φ but on arctan(tan(φ)/η). Alternatively, implemented in a dynamic model, the equations in (3.1) could be executed independently: the prothorax could follow trajectories according to φ = θη (η = 1), while the back of the body accounts for the rotation θ̇B. Moreover, at least some of the rotation could be a passive result of the trajectory of the prothorax. This further simplifies the calculation of leg trajectories in the prothorax, because if the front legs no longer need to compute θ̇B, their trajectory equation becomes simply v = −φ, as seen in Fig. 3.10.
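As a concrete reading of equation (3.1), the Python sketch below evaluates the local movement direction θη and the body rotation rate θ̇B for points at different positions η along the body; the unit values assumed for |vP| and R are ours.

import numpy as np

def turning_model(phi, eta, v_p=1.0, R=1.0):
    """Equation (3.1): local movement direction theta_eta for a point at
    relative position eta along the body (1 = prothorax, 0 = point of
    rotation), and body rotation rate theta_dot_B. |v_P| and R are given
    assumed unit values here."""
    theta_eta = np.arctan2(np.tan(phi), eta)   # arctan(tan(phi)/eta), eta > 0
    theta_dot_B = abs(v_p) * np.sin(phi) / R
    return theta_eta, theta_dot_B

phi = np.radians(30.0)                         # relative angle to the target
for eta in (1.0, 0.5, 0.1):
    th, w = turning_model(phi, eta)
    print(f"eta={eta:.1f}: direction {np.degrees(th):5.1f} deg, "
          f"rotation rate {w:.3f} rad/s")

For η = 1 the direction equals φ, so the prothorax heads straight for the target, while points closer to the rotation centre move increasingly sideways, matching the observed difference between the prothorax and metathorax trajectories.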
Fig. 3.11. Left: ideally, if the body were to rotate always around the inner hind leg, the MIL and the HOL would have specific orthogonal trajectories to follow. Right: graphical representation of the legs’ active role. The hind legs’ average speed rH is changed by ∆rH, resulting in a distance IARH to the axis of rotation. From [127] (Fig. 12).
However, purely passive rotation of the rear segments is not sufficient to account for the sharp rotations seen in the insect behaviour, particularly when the front tarsi have been blocked. Normally, when the body rotation is sharp, the inner hind leg is almost arrested near the AEP, and the body rotates around this point. The left-hand side of Fig. 3.11 shows how each leg should move to contribute to this rotation: specifically, the inner middle leg should move sideways, and the outer hind leg should move from front to back. Control could consist of both middle legs producing the same lateral force, while increasing the speed of the outer hind leg and decreasing the speed of the inner hind leg by an amount ∆α. The right-hand side of Fig. 3.11 shows how this affects the rotation point. This is easily translated into joint control by the observation that the alpha joint is particularly related to the backward and forward movement of the leg, while the gamma joint is related to its lateral movement (see Fig. 3.12). By introducing differential activation of the alpha joints in the rear legs, their relative speed can easily be controlled. Similarly, introducing a bias to the gamma joints in the middle legs will produce the required lateral movements.

Model

A robot simulation was programmed using the ODE libraries.¹ Fig. 3.12 shows the stick insect based robot and its leg geometry. The angles controlled are α for rostral-caudal movements, β for moving the leg up and down, and γ for movements towards and away from the body. It is assumed that the insect is able to sense directly the angular error between a visual target and its current heading (body direction). This can be derived from the retinal position of the target in the eye on the relevant side of the animal (with an appropriate adjustment if the head is not held at the same angle as the body). No explicit visual processing was implemented on the robot, just a single sensor, positioned at the centre front, which detects the target angle and sends this information to each thoracic segment. The robot’s control of leg movements is completely decentralised, and inter-leg coordination is based on the WalkNet model [17] (see chapter 2). Transitions between leg phases (stance and swing) result from simple excitatory and inhibitory rules between adjacent legs. We used rules 1-3 and 5b from [17]: Rule 1, a swinging leg inhibits the transition to swing in the anterior leg; Rule 2, a leg starting stance excites the anterior leg; Rule 3, a leg excites the caudal leg in proportion to the distance it has travelled. Rules 2 and 3 are also implemented between contralateral legs. Rule 5b prolongs stance according to the load that the leg is supporting. Additionally, we excited a leg when its load was low. Individual leg trajectories are determined by combining a velocity controller, which calculates the joint velocities required for a desired speed and direction of the leg during stance, with feedforward control as proposed in [134]. For the velocity controller, equation (3.2) gives the joint velocities Ȧ = [α̇ β̇ γ̇]ᵀ, where T(α, β, γ) is the tarsus position, J is the Jacobian matrix and v is the desired velocity of the tarsus:²

Ȧ = [α̇ β̇ γ̇]ᵀ = [J(T(α, β, γ))]⁻¹ v(θL, speed).    (3.2)

¹ Open Dynamics Engine, http://ode.org/
² In [126] it is shown that this mathematical solution can be emulated using a neural network.
Fig. 3.12. The stick insect based robot created using ODE libraries is shown on the left. The right hand side shows the leg geometry. From [127] (Fig. 1).
The height of the body (to compensate for gravity) was controlled locally as in [18], and the speed of all legs was kept constant in all experiments. Therefore, the only free parameter was the direction of the leg, θL, in the x–y plane. For the front legs, θL was always the direction of the target, i.e. θL = θφ. For the middle and hind legs it was always set to θL = 0, i.e. without the front leg influence they would always walk forwards (but see below). The three leg segments always stay in the same plane, as shown in Fig. 3.12, which allows us to solve equation (3.3) without further parameters:

Ȧ = [α̇ β̇ γ̇]ᵀ = [J(T(α, β, γ))]⁻¹ v(θL, speed).    (3.3)
The feedforward controller adjusted the joint velocity proportionally to the position error caused by external forces. The final angular joint velocity for each leg was thus a linear combination of the angular velocity calculated in equation (3.3) and the velocity error caused by external forces (in general, the effects of being pulled by the other legs). Therefore, each leg was capable of following a given direction, but at the same time tended to follow external forces. How much it followed external forces was controlled by a subordination parameter s = (sα, sβ, sγ) for each thoracic segment. For additional turning control, two more parameters were introduced, based on the results from the animal described above. These further adjusted the velocities of the middle and rear legs so as to increase the rotation. The activity of the gamma joints in the mesothorax was increased or decreased proportionally to the target angle, γ̇ = γ̇ ± kγ θφ: the gamma joint of the middle inner leg was excited, whereas that of the middle outer leg was inhibited, producing lateral forces from these legs. Similarly, the alpha joints in the metathorax were excited or inhibited proportionally to the target angle, α̇ = α̇ ± kα θφ: the alpha joint of the inner hind leg was inhibited and that of the outer hind leg was excited. As a result, the metathorax rotates towards the target whilst the mesothorax moves sideways, pivoting around the metathorax.
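To make the pieces concrete, the following Python sketch combines a numerical version of equation (3.3) with the turning biases just described, for a generic three-joint leg. The leg geometry, segment lengths and sample joint angles are assumptions; only the inverse-Jacobian step and the ±kα θφ / ±kγ θφ biases follow the text (gains from the final parameter set reported later in this section).

import numpy as np

def tarsus_position(angles, l1=0.5, l2=1.0, l3=1.2):
    """Forward kinematics T(alpha, beta, gamma) of a generic three-joint
    leg; segment lengths and joint conventions are illustrative, not the
    geometry of the robot in Fig. 3.12."""
    a, b, g = angles
    r = l1 + l2 * np.cos(b) + l3 * np.cos(b + g)   # in-plane extension
    z = l2 * np.sin(b) + l3 * np.sin(b + g)        # height
    return np.array([r * np.cos(a), r * np.sin(a), z])

def joint_velocities(angles, v, eps=1e-6):
    """Equation (3.3): A_dot = [J(T(alpha, beta, gamma))]^-1 v, with the
    Jacobian J estimated by central finite differences."""
    J = np.empty((3, 3))
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        J[:, i] = (tarsus_position(angles + d)
                   - tarsus_position(angles - d)) / (2 * eps)
    return np.linalg.solve(J, v)

# Stance velocity for walking straight ahead (theta_L = 0): the tarsus
# moves backwards relative to the body at constant speed.
angles = np.radians([10.0, -30.0, 60.0])
v = np.array([-1.0, 0.0, 0.0])
adot = joint_velocities(angles, v)

# Turning biases as described above: the alpha joints of the hind legs and
# the gamma joints of the middle legs are excited/inhibited in proportion
# to the target angle.
k_alpha, k_gamma, theta_phi = 1.25, 1.5, np.radians(20.0)
adot_hind_inner = adot + np.array([-k_alpha * theta_phi, 0.0, 0.0])
adot_hind_outer = adot + np.array([+k_alpha * theta_phi, 0.0, 0.0])
adot_mid_inner = adot + np.array([0.0, 0.0, +k_gamma * theta_phi])
print("straight stance:", np.round(adot, 3))
print("hind inner with turn bias:", np.round(adot_hind_inner, 3))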
Observed turning behaviour in the robot model

The robot model was made to turn at angles from 20 to 90 degrees in increments of 10, and due to the symmetry of the system all turns were made to the same side. Because gait coordination is probabilistic and a different pattern was found every time, three runs were taken for each angle, for a total of 24 turns. Runs were stopped once the body angle was within 5 degrees of the target and the metathorax was aligned with the prothorax in the same direction. Results from the simulation were analysed using the same approach as for the insect.
Fig. 3.13. Simulation results: see Fig. 3.5 for details. From [127] (Fig. 23).
Fig. 3.14. Simulation results. A cumulative plot of the point of rotation during turns. The three black asterisks (∗) represent the three thoracic segments; the prothorax is encircled. Turns are to the right. From [127] (Fig. 24).
Fig. 3.15. Comparison between simulation results and the insect: (a) just prothorax, (b) thoracic differentiation, (c) insect. Leg trajectories relative to the body show only the first third of the turn in all cases. Dark regions indicate stance and light regions swing. From [127] (Fig. 3.25).
In [127] we described the results of exploring the effects of the parameter values, which led us to choose the following final set of parameters for the model: kα = 1.25, kγ = 1.5, sγmeso = 0.3, sγmeta = 0.2, sαmeso = 0.3, sαmeta = 0.5. Testing the model under normal conditions, i.e. with the front tarsi unblocked, we also allowed the front legs to be partially influenced by the rear legs through non-zero subordination parameters, sαpro = 0.25 and sγpro = 0.15. Fig. 3.13 shows that the simulation now responds quickly to the target stimuli and holds the direction of the prothorax more steadily (top, centre). The behaviour of the metathorax is more similar to that of the body (bottom, left), indicating that sideways movement is reduced. The target angle is overshot sooner, meaning that the angular speed is increased thanks to the active role of the metathorax and mesothorax. The prothorax direction, particularly in the initial response, is smoother because the front legs now respond to external forces. The front legs point more directly towards the target than is seen in the insect, although, as in the insect, the outer leg has larger values than the inner leg (bottom, centre). The metathorax speed is reduced (bottom, right), although still not as much as in the insect. Furthermore, the rotation point moves to fall at the same lateral distance as the rear legs; this is shown in Fig. 3.14. In Fig. 3.15 typical leg trajectories are shown for the simulation and the insect.

Comparison to alternative models

An approach that has been proposed before is to control turning by introducing biases on every alpha joint in the body and having all legs follow every external force. For large curvatures the trajectories produced are similar to the insect’s, but this method fails to reproduce tight turns [79]. A typical trajectory for this reduced model, and the average rotation point for a set of experiments, is shown in Fig. 3.16. Because the rotation point is neither close to the body nor, most importantly, towards the rear, the trajectories are large and smooth and usually failed to hit the target. Some models, like WalkNet, incorporate other variables for the control of turning, for instance those related to positioning the AEP and PEP; however, the turning trajectories these produce follow the pattern shown in Fig. 3.16. Similar results were found if no feedforward control was used,
Fig. 3.16. Left: a typical trajectory for targeting at 60 degrees when no inverse Jacobian is used and turns are induced only by introducing biases in the alpha joints. Right: the average axis of rotation (AAR) for a set of turns, showing that the AAR does not get very close to the body. From [127] (Fig. 17).
and turning was controlled by directly changing the speed term of equation (3.3) on either side, or by introducing biases to the alpha joints as before. Again, the simulation always produced smooth trajectories, and the rotation point was usually away from the body and at the level of the mesothorax. We conclude from this that models that do not differentiate the control of the different thoracic segments are unable to produce insect-like turning behaviour.

3.2.4 Audition
3.2.4.1 Background

Insects use auditory responses for a wide range of behaviours, from the avoidance of bat ultrasound to the location of hosts by parasites. One of the most conspicuous and best studied behaviours is acoustic communication in Orthoptera, including grasshoppers and crickets, in which song signals are used to identify and locate mates. A historical account of the search for the neural centres of cricket and grasshopper song can be found in [24]. Substantial experimental evaluation of the responses of female crickets to different sound sources has been carried out, usually employing one of three methods: free walking in an arena (e.g. [146]); a fixed body with the legs or abdomen able to move and thus indicate attempted steering movements (e.g. [141]); or walking freely on a sphere in a ‘Kramer’ treadmill that compensates for the movement (e.g. [133, 164]). Predator avoidance, the most common auditory behaviour in insects, usually involves sensors that are highly sensitive to a broad range of ultrasound, with direct ‘command neuron’ connections to avoidance responses such as sharp steering, stalling of flight, etc. [60]. For mate finding or parasitisation, more accurate directionality is required, as the target has to be located, and the insect must also recognise the specific sound. For example, cricket males produce calling songs of a characteristic frequency and temporal pattern, and female crickets (and parasite flies) can find males using this cue alone. The song is produced by the male closing its wings, which rubs a file on one wing against a scraper on the other, with the sound being amplified by a resonant area of the wing. The carrier
frequency of the song corresponds to the rate at which the teeth of the comb pass over the plectrum, and is around 4-5 kHz for most cricket species. The temporal pattern consists of regularly repeated bursts of sound (syllables), corresponding to each closing of the wings, followed by a silent interval as the wings open again. These temporal patterns vary between cricket species, and thus provide a basis by which females might recognise conspecifics. Males also respond to the song of other males, leading either to the spacing out of individuals within an area or to aggressive interactions [92]. In some Orthoptera more elaborate communication occurs, in which a male and female ‘duet’, responding to one another’s song. Insect auditory systems have evolved independently many times. The majority are modified chordotonal (mechanical) proprioceptors, with the addition of a tympanum (thinned cuticle) and a tracheal sac [61, 177]. In some animals (e.g. Drosophila [39]) the antennae act as auditory receivers. In different species, ears appear in very different anatomical positions, e.g. the forelegs (crickets), the head (parasitoid flies, some beetles), or various positions on the body (mantids, grasshoppers). The transduction process is still unclear, and a comparative study of the current evidence is presented in [176]. Typically, the ear contains a small number of sensory neurons (1-100), whose axons form an auditory nerve that innervates the nearest ganglia. For some animals, the sensory neurons show differential sensitivity to different frequencies, i.e. frequency tuning. This seems to serve a broad separation of cues (e.g. to distinguish bat ultrasound, which triggers avoidance, from calling song, which triggers approach); as yet there is no evidence that more complex frequency processing (comparable to the vertebrate cochlea) occurs, and there is only an approximate tonotopic projection into the ganglia [50], although there is some evidence of inhibitory sharpening of the tuning of interneurons [148]. The sensory neurons also encode the intensity of the signal. For stationary signals, the firing rate appears to correspond to the sum of the signal energy in the frequency domain [40], but due to adaptation the response to temporally varying stimuli is more complex. There is also some evidence of range fractionation in intensity coding [99]. Localisation of sound is usually provided by peripheral anatomical structures and enhanced by neural processing. For example, the cricket’s ears are a pair of tympani, one on each foreleg, which are connected to each other and to a pair of spiracles on either side of the front part of the body by a set of tracheal tubes (Fig. 3.17). Because the cricket is small relative to the wavelength and distance of the sound it is trying to localise, there is little difference in the external amplitude of the sound at the left and right tympani. However, sound also reaches the internal surface of each tympanum from the other auditory ports, after delay and filtering in the tracheal tubes. The vibrations of the tympani are thus determined by the combination of the filtered, delayed and direct sounds [103]. Depending on the frequency of the signal and the direction of the sound, the phase of the delayed sounds will be shifted (relative to the direct sound) differentially for the two tympani, so that the amplitude of the summed signals will differ, even though the amplitude of the direct signals is similar. For a fixed frequency, the resulting amplitude difference indicates the direction of the sound.
This mechanism is a very effective means for detecting sound direction when the physical and processing capacities of the animal can support neither sound-shadowing nor phase-locked neural comparison, the two main cues for sound localisation in vertebrate systems. However, because
Fig. 3.17. The cricket’s ears are a pair of tympani, on each front foreleg, which are connected to each other and to a pair of spiracles on either side of the front part of the body by a set of tracheal tubes. Thus the tympani act as pressure difference receivers and have a directional response.
the delays are fixed, it functions effectively only around a particular frequency. This could potentially contribute to the carrier frequency selectivity found in female cricket behaviour. Similar inherent tuning to the direction of specific sound frequencies is achieved by a very different anatomical structure, consisting of eardrums linked by a flexible bridge, in the parasite fly that responds to the same male cricket songs [124]. In crickets and grasshoppers there is contralateral inhibition between auditory interneurons in the ganglia (e.g. in the cricket via the large omega neurons, ‘ON1’ [172]), enhancing the difference between the two sides and possibly also acting as a form of gain control [149]. One pair of identified ascending interneurons (‘AN1’) in the cricket’s prothoracic ganglion appears to be critical for phonotaxis [132]. AN1 respond best to sound at the calling song carrier frequency, and clearly encode the pattern of the song in their spiking response. Hyperpolarising one of the pair leads to a change in walking direction. There are a number of other auditory interneurons in the prothoracic ganglion, but their functional role in phonotaxis has not been so clearly characterised; some are known to be involved in ultrasound escape behaviour [118]. The ascending neurons project to the protocerebrum. The most comprehensive study of the role of cricket brain neurons in phonotaxis is provided by [131], who suggests a possible filtering circuit for syllable-rate recognition by the female. He identifies two main classes of auditory responsive cells: BNC1, which appears to get direct input from
AN1, and BNC2, which gets its input via BNC1. These neurons vary in their response to the pattern of sound. BNC1d appears to require a minimum syllable duration, near to that of the typical calling song, before it reaches threshold, which makes it a low-pass filter for the syllable rate, assuming a constant duty cycle. BNC2b appears to spike around once per syllable, which makes it high-pass, i.e. as the syllable rate decreases its firing rate also decreases. BNC2a shows a band-pass filtering effect, responding at somewhat less than a spike per syllable for normal rates but producing fewer spikes for slower or faster rates. Schildberger argues that the response of BNC2a reflects an ‘AND’ing of the outputs of BNC2b and BNC1d, producing a neural recognition signal for the appropriate sound pattern. A schematic illustration of the critical neural connections is given in Fig. 3.18.
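Schildberger’s proposal amounts to a band-pass filter built from the conjunction (‘AND’) of a low-pass and a high-pass element. The toy Python sketch below makes that logic explicit; the thresholds and the constant 50% duty cycle are our assumptions, not measured values.

def bandpass_response(syllable_ms, period_ms, song_ms=1000.0):
    """Toy version of the BNC filtering scheme described above.

    BNC1d-like unit: responds only if each syllable exceeds a minimum
    duration (low-pass in syllable rate, assuming constant duty cycle).
    BNC2b-like unit: fires roughly once per syllable (high-pass: its rate
    falls as the syllable rate falls).
    BNC2a-like unit: 'AND' of the two, giving a band-pass preference.
    All numbers are illustrative assumptions.
    """
    n_syllables = song_ms / period_ms
    bnc1d_ok = syllable_ms >= 15.0                  # assumed minimum duration
    bnc2b_rate = n_syllables / (song_ms / 1000.0)   # ~1 spike per syllable
    bnc2b_ok = bnc2b_rate >= 20.0                   # assumed rate threshold
    return bnc1d_ok and bnc2b_ok

# Constant 50% duty cycle, varying syllable rate
for rate_hz in (10, 15, 20, 25, 33, 50):
    period = 1000.0 / rate_hz
    responds = bandpass_response(syllable_ms=period / 2, period_ms=period)
    print(f"{rate_hz:2d} Hz: {'response' if responds else 'no response'}")

Only intermediate rates pass both tests, giving the band-pass preference attributed to BNC2a.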
Fig. 3.18. Schematic illustration of some critical identified neural connections in the auditory system of the cricket. Auditory interneurons in the prothoracic ganglion (AN1 and ON1) receive input from auditory receptors and connect to brain neurons (BN). The connections to motor neurons are not yet known.
Staudacher and Schildberger [144, 143] have described the properties of some descending neurons, many of which show a response to sound. The response to calling songs is typically ‘gated’ by whether or not the animal is walking. One of these neurons has a firing rate that correlates with the angular velocity of the animal, and another seems to be necessary and sufficient for the onset of walking. However, the evidence is not sufficient to determine with any clarity the output circuitry for phonotaxis.

3.2.4.2 Extensions to a Robotic Model of Phonotaxis

Our previous robot models of sound localisation in crickets combine an analog circuit that reproduces the pressure-difference receiver characteristics of the cricket ear with a computational model of a small circuit of spiking neurons capable of filtering for
the correct sound patterns and determining the correct turning response to reach the sound [123]. A redesign of the sensor to produce a more compact analog circuit was carried out (Fig. 3.19). The input to the circuit is via two microphones separated by a distance equivalent to a quarter of the wavelength of the carrier frequency (4.7 kHz) of the stimulus, i.e. 18 mm. The input from each microphone is delayed by a quarter of the period of the carrier frequency, i.e. 53 microseconds, and subtracted from the input to the other microphone. This mimics the transmission of sound through the cricket’s trachea and its combination at the tympani, as shown in Fig. 3.17.
Fig. 3.19. Schematic and picture of the robot ears circuit: (a) functional description of the ears circuit; (b) ears circuit and microphones.
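The delay-and-subtract operation performed by this circuit can be simulated directly. The Python sketch below uses the quarter-wavelength separation and quarter-period delay given above and treats the source as a distant pure tone; the sampling rate and far-field geometry are assumptions.

import numpy as np

C = 343.0                  # speed of sound (m/s)
F = 4700.0                 # carrier frequency (Hz)
SEP = C / F / 4.0          # mic separation: quarter wavelength (~18 mm)
DELAY = 1.0 / F / 4.0      # internal delay: quarter period (~53 us)

def ear_amplitudes(angle_deg, fs=1.0e6, dur=0.01):
    """Peak output of the two 'tympani' for a distant pure tone at F.

    Each side's output is its direct input minus the delayed input from
    the other side, mimicking sound travelling through the tracheal tube.
    A positive angle places the source on the right, so the right
    microphone receives the wavefront earlier.
    """
    t = np.arange(0.0, dur, 1.0 / fs)
    dt_ext = SEP * np.sin(np.radians(angle_deg)) / C   # external lead of right mic
    right_in = np.sin(2 * np.pi * F * (t + dt_ext / 2))
    left_in = np.sin(2 * np.pi * F * (t - dt_ext / 2))
    n = int(round(DELAY * fs))                         # internal delay in samples
    left_out = left_in[n:] - right_in[:-n]
    right_out = right_in[n:] - left_in[:-n]
    return np.abs(left_out).max(), np.abs(right_out).max()

for angle in (-60, -20, 0, 20, 60):
    l, r = ear_amplitudes(angle)
    print(f"source at {angle:+3d} deg: left {l:.2f}  right {r:.2f}")

The ear nearer the source produces the larger output, so comparing the two amplitudes yields the turning direction, as in the pressure-difference receiver of Fig. 3.17.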
Recent results from cricket biology have led to several significant revisions of the neural model used on the robot. First, it appears that the turning behaviour is not the output of pattern filtering mechanisms, but rather involves a fast, ‘reflex’ pathway in which every burst of sound produces a corresponding deviation in the course of the cricket within 50 ms (n.b. this is within one step cycle) [45]. However, this reflex is modulated by pattern filtering mechanisms on a slower time scale: the reflex is usually suppressed, but is ‘released’ over several seconds when the correct sound pattern is detected, and continues to be active for several seconds after the correct sound pattern ends [119]. Note that this fits the general architectural scheme we have discussed for the insect brain in chapter 1, i.e. a direct reflex loop modulated by higher processing. Some neurophysiological support for the existence of this fast loop has recently been published [63]. We have implemented a revised neural circuit that incorporates this fast reflex and its disinhibition by a slower loop [128] (Fig. 3.20). We suggest that there is a relatively direct, or ‘fast’, connection from the thoracic AN1 neurons to the motor control of turning. This connection is modulated by disinhibition via the brain neurons BNC1 and BNC2, which filter for the temporal pattern in the sound in the same way as before. Some interesting issues have been raised by initial tests of this model. It is difficult to obtain similar dynamics for the onset and offset of the response by varying the synaptic parameters within plausible ranges, suggesting that the modulation may require a different mechanism, such as neuromodulator release. It is not clear from the data so far
Fig. 3.20. An alternative hypothesis for cricket phonotaxis, in which fast connections from the auditory interneurons (AN1) drive motor responses, but only if recognition of the sound by the brain pathway (BNC1 and BNC2) has released inhibition of the response by the gating mechanism.
whether there is likely to be bilateral recognition, as in the current model, or whether the input from the two sides is combined in a single recognition process. That is, a number of predictions and issues for further experimentation on the cricket have been raised.

3.2.4.3 Resonant Filtering

As described above, male crickets and bushcrickets produce stereotyped, species-specific songs, and the female response depends on recognition of the temporal pattern of song pulses. However, the mechanism by which this recognition occurs is still a matter of debate [50]. Behavioural experiments on the bushcricket Tettigonia cantans [9] compared three alternative explanations for the female’s band-pass preference for the pulse rate in male song: a circuit involving separate high-pass and low-pass filters; autocorrelation; and resonance. A stronger response to songs at integer fractions of the preferred frequency, and to other non-natural songs that reinforce that rhythm, suggested that resonance is the best explanation. Resonance in single neurons can occur as a result of the intrinsic dynamics of membrane currents, and there is much interest in the significance of this property for neural processing. Izhikevich [66] proposed a simple model for a ‘resonate-and-fire’ neuron and showed that it has many interesting properties, including the capability of acting as a band-pass filter. The activity of the neuron is described by a complex variable z whose time evolution is determined by the following differential equation:

ż = I + (b + 2πiω)z    (3.4)
where I is the input current, ω is the resonant frequency, and b is the rate of attraction to rest. The neuron ‘fires’ when the imaginary part of z (considered as the membrane potential) exceeds a threshold athresh. This is simply a damped oscillator: an input pulse will cause the membrane potential to oscillate at the resonant frequency with gradually decreasing amplitude. The neuron fires if a peak of the oscillation exceeds the threshold; typically this requires two or more input pulses that occur either simultaneously or at a relative interval close to the resonant period, so that the second input arrives at the peak of the previously induced oscillation. In the original formulation by Izhikevich, firing is followed by a non-linear reset to a new value, but this was excluded in our model to simplify analysis, as it was not found to significantly affect the results. We implemented Izhikevich’s resonant neuron model, as described above, using the parameters b = −30, ω = 25 (the preferred pulse frequency for T. cantans) and athresh = 0.12, simulated with a time step of 1 millisecond. The input to the neuron, I, was modulated to represent the different song patterns tested on the bushcricket. In the interest of simplicity we neglected all details of input signal shape, transduction processes, synaptic dynamics and noise, and simply used square pulses; that is, the input was either I = 0 (for the gaps between sound pulses) or I ∈ {8, 9, 10, 11, 12}, representing variation in the sound amplitude of the pulse. The response measured is simply the number of times the neuron fires (crosses the threshold) during 1 second of the song pattern. The results presented below show this response averaged across the different input amplitudes, and scaled so that the maximum response corresponds to the maximum response seen in the behavioural data. In Fig. 3.21 it can be seen that this simple model closely reproduces the bushcricket behaviour.
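A minimal Python implementation of this model is sketched below, using the stated parameters (b = −30, ω = 25, athresh = 0.12, 1 ms time step) and square-pulse songs at a constant 50% duty cycle, as in the right panel of Fig. 3.21. Fixing the input amplitude at I = 10, rather than averaging over the five amplitudes, is our simplification.

import numpy as np

B, OMEGA, THRESH, DT = -30.0, 25.0, 0.12, 1.0e-3   # values from the text

def resonate_and_fire(I_trace):
    """Resonate-and-fire neuron of equation (3.4), forward-Euler
    integrated; a spike is counted at each upward crossing of the
    threshold by Im(z), and (as in the text) no reset is applied."""
    z = 0.0 + 0.0j
    spikes, above = 0, False
    for I in I_trace:
        z += DT * (I + (B + 2j * np.pi * OMEGA) * z)
        crossing = z.imag > THRESH
        if crossing and not above:
            spikes += 1
        above = crossing
    return spikes

def square_song(rate_hz, duty=0.5, amplitude=10.0, dur_s=1.0):
    """Square-pulse song: I = amplitude during pulses, 0 in the gaps."""
    t = np.arange(0.0, dur_s, DT)
    return amplitude * ((t * rate_hz) % 1.0 < duty)

for rate in (10.0, 12.5, 18.2, 25.0, 33.3, 50.0):
    print(f"{rate:5.1f} Hz song -> {resonate_and_fire(square_song(rate)):3d} spikes")

The spike count should peak near the 25 Hz resonance and fall away on either side, qualitatively reproducing the band-pass preference.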
Fig. 3.21. Phonotaxis response of the bushcricket (dark circles) compared with the firing rate of the resonate-and-fire neural model (light circles). Left: an 18 ms pulse played at different rates (for 67 Hz the pulse duration was 7 ms); bushcricket data replotted from [9]. Right: different rates with a constant 50% duty cycle; bushcricket data replotted from [117]. Figures from [162] (Figs. 1 and 4).
We can explain the effects of varying pulse and pause duration by considering the behaviour of the system as shown in the phase-plane portraits in Fig. 3.22. With a constant input, the behaviour is described by a logarithmic spiral, with a period of 1/ω, winding towards a fixed point zf = −I/(b + 2πiω) (marked as a square for I = 0 and as an asterisk for I = 10 in Fig. 3.22). For square-wave input, the system switches between spiralling around one fixed point and the other. In order to pass above the threshold and produce spikes, the system needs to maximise the radius of the spiral in the phase plane, which for a linear system corresponds to maximising its (phase) velocity ż. This means the switch between spirals needs to be timed so as to amplify, or at least sustain, ż. Geometrically, as the input is real, its effect is horizontal, and hence the switch will amplify ż most if it occurs when Re(z) ≈ Re(zf) and Im(z) < Im(zf) for the positive change, and Re(z) ≈ Re(zf) and Im(z) > Im(zf) for the negative change, where zf is the currently relevant fixed point. In other words, it should act in the appropriate direction at the top or bottom of the spiral, as shown by the arrows in Fig. 3.22a. Thus certain timing constraints on the input have to hold: (i) the dynamical system needs to evolve for (approximately) an integer number of complete cycles between onsets; (ii) the dynamical system needs to make (approximately) half a cycle (or half a cycle after an integer number of cycles) during each constant part of the input. This translates into constraints on frequency and duty cycle, e.g. both conditions are satisfied by songs at around 25Hz or 12.5Hz with a pulse
Fig. 3.22. Phase-plane portraits of the system dynamics. The square marks the fixed point for I = 0 and the asterisk the fixed point for I = 10. The system switches between anti-clockwise logarithmic spirals about these two points, and spikes occur when it crosses the threshold (dashed line) Im(z) = 0.12. The radius of activity, and hence the likelihood of spiking, increases if the changes in input (marked by diamonds for positive changes, circles for negative changes) occur at times that increase or sustain the phase velocity ż, as seen on the left for a square wave at 25 Hz with a 50% duty cycle. Otherwise they tend to act against the phase velocity, reducing the spiral radius, as seen on the right for a square wave at 12.5 Hz with a 50% duty cycle (note the different scale). This accounts for the dependence of the results on both the frequency and the duty cycle of the input pattern. From [162] (Fig. 5).
duration of around 20 ms (as seen in Fig. 3.22a), but the second condition is not satisfied by songs at 12.5 Hz with a pulse duration of around 40 ms (as seen in Fig. 3.22b). The results show that neural resonance in a single neuron is a possible mechanism for temporal pattern recognition in T. cantans. It could also play a role in pattern recognition in other species of Orthoptera, or indeed in other animals, and it shows that the resonant properties of neurons could have a direct role in perception. At the same time, detailed modelling of membrane currents is not necessary to reproduce the effects: a basic model of damped resonance is sufficient in this case. It is possible that the resonance could be a property of a neural circuit rather than of an individual neuron, although there is some evidence that an identified neuron in the auditory system of the cricket Teleogryllus oceanicus does show a resonant response [50]. It is interesting to note that the same model system, driven by a constant tonic input, could produce spikes at the pulse rate of the song, and thus could also serve as a pacemaker for the production of the song pattern by the male cricket.
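The two timing constraints can be checked directly for any square-pulse song. The Python sketch below does so for the resonant period 1/ω = 40 ms; the numerical tolerance placed on ‘approximately’ is our assumption.

def satisfies_constraints(rate_hz, duty, period_ms=40.0, tol=0.15):
    """Check conditions (i) and (ii) above for a square-pulse song.
    period_ms is the resonant period 1/omega; tol is an assumed tolerance
    on 'approximately'."""
    onset_interval = 1000.0 / rate_hz              # ms between pulse onsets
    pulse = duty * onset_interval                  # ms of sound per pulse
    cycles = onset_interval / period_ms            # (i) ~integer cycles
    halves = pulse / (period_ms / 2.0)             # (ii) ~odd half-cycles
    cond1 = abs(cycles - round(cycles)) < tol
    cond2 = abs(halves - round(halves)) < tol and round(halves) % 2 == 1
    return cond1 and cond2

for rate, duty in ((25.0, 0.5), (12.5, 0.5), (12.5, 0.25)):
    print(f"{rate:4.1f} Hz, duty {duty:.2f}:",
          "constraints met" if satisfies_constraints(rate, duty) else "not met")

As in the behavioural data, the 12.5 Hz song passes only with the shorter, 20 ms pulses.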
3.2.5 Audition and Vision
In section 3.2.3 one visual reflex described was the optomotor response: rotation of the entire visual field is usually the result of self-rotation, so unintentional self-rotation can be corrected by turning in the opposite direction in response to visual rotation signals. This reflex compensates for external disturbances and inaccuracies of the motor response, to maintain a straight trajectory. Like most insects, crickets exhibit this reflex. But if the cricket is at the same time performing sound localisation, as described in section 3.2.4, then the acoustic and visual mechanisms have conflicting aims. The auditory response tries to align the trajectory towards the sound source, while the visual response tries to correct for any change in the trajectory, thus counteracting the alignment attempted by the auditory system. Different solutions have been proposed to solve this sensory conflict [123, 159, 160], including: adding the outputs of the visual and auditory systems with different gains; having the auditory system output inhibit the visual system output; having the auditory system output inhibit the visual system input; subtracting the auditory system output from the visual system input; and having the auditory system output control behaviour via the visual control system. A comparison among these methods, as discussed in [160], showed that simple suppression of one sensory system by the other was a reasonably effective mechanism. However, a more suitable and flexible control method is to predict the amplitude and time-evolution of the visual stimulation that arises when an auditory response is made, and thus cancel its specific effects. It was first suggested by von Holst and Mittelstaedt in 1950 that biological systems may use an internal efferent copy of motor command signals to modulate sensory processing for predicted reafference. A similar idea was simultaneously proposed by Sperry, i.e. that sensory areas receive a corollary discharge corresponding to the expected feedback. Although this concept is often referred to in biology, there is surprisingly little direct neurophysiological evidence of connections from motor to sensory systems that could support this function. Moreover, biologists often
Fig. 3.23. The FeedForward compensation model
overlook the problem that the system must somehow calculate the expected feedback: in control theory terms, it must implement a forward model to be able to compensate for the expected disturbance [158]. The control strategy explored in this section consists of a dynamic system able to predict the external visual influences due to the acoustically driven motor command. The prediction is used to inhibit the optomotor system, so that it is smoothly combined with phonotaxis. Moreover, the system is realised as part of a network of spiking neurons, demonstrating that it is a plausible solution for the cricket, and is shown to be effective in controlling a mobile robot. 3.2.5.1 A Multisensory Control System The multisensory integration is based on feedforward compensation, a well-known method in industrial applications of control theory. It is a simple method to compensate for disturbances that are a priori known and accessible. In such cases, the correct identification of the plant allows an exact compensation of the noise. Here we consider the optomotor reflex as the plant and the phonotaxis system as the source of noise: acoustic sensing produces a series of actions that will change the motor output planned by the optomotor reflex. Hence it can be compensated by using a dynamic feedforward model of the plant to predict the visual sensory (reafferent) signal corresponding to any action caused by the phonotaxis behaviour (Fig. 3.23). Each pair of phonotactic motor signals and the corresponding visual stimulus forms an output-input couple, utilized to learn the required dynamics. This controller is embodied in a neural architecture based on the simple integrate-and-fire neuron models described by [80]. Synaptic effects are modelled as changes in conductance, and include facilitation and depression mechanisms to allow temporal filtering. Such techniques allow the construction of small networks with useful capabilities, such as selectivity to particular patterns, or copying of external dynamics. From these building blocks, a circuit was built to resemble the cricket neural system. It can be divided into four different subsystems (Fig. 3.24): A: The optomotor reflex senses the left/right virtual movement of the world and consequently drives the motor controller neurons to compensate the visual stimuli. B: The acoustic sensing is based on cricket physiology and is described in detail in [123].
Fig. 3.24. Schema of the cricket bioinspired neural network model. The whole network is constituted by several subsystems: the optomotor reflex (subsystem A), the acoustic sensing block (subsystem B), the phonotactic model (subsystem C), the feedforward compensator (subsystem D), the motor controllers and an internal source of noise.
C: The phonotactic reflex has two different pathways: the reactive one (AN1→Fast) and the recognition one (AN1→BN1→BN7→Gate→Fast) [128]. D: The feedforward compensator has been designed to take the outgoing signal from the phonotactic system and to compensate the corresponding reafferent optomotor signal. An internal source of noise is used to test the correct working of the optomotor reflex. The motor controller units are biased with random spikes, to represent disturbances that could occur during walking in crickets, due to asymmetries in the motor system or to external factors such as uneven terrain. The feedforward compensator (i.e. subsystem D in Fig. 3.24) integrates the responses of the phonotactic system and the optomotor reflex. It takes the driving commands from the phonotactic system and tries to predict the reafferent visual signal. The driving commands also stimulate the motor controllers and consequently the motor actuators. A sound recognized, e.g. from the left, produces an inhibition on the left motor controller and an excitation on the right one, making the robot steer left towards the sound source. There is a significant delay (a few hundred milliseconds) before the optomotor
reflex will sense this left turn. Without any compensation, the optomotor system would react with a correction to the right side, driven by the opto-clockwise (OC) neuron, annihilating phonotaxis. The feedforward compensator (forward-OC neuron and related synapses) will avoid this incorrect response. The Fast-left neuron will stimulate the forward-OC neuron with the same number of spikes used to drive the motor controller. If the parameters of the forward-OC neuron and its input and output synapses are tuned to predict the reafferent signal, this will counterbalance the OC neuron stimulation, holding the neuron at its resting membrane potential. Following the guidelines of classical identification, we assumed the feedforward model had a fixed dynamic structure, for which a number of parameters had to be optimally identified. The structure was fixed by the neuron/synapse structure already present in the complete model of Fig. 3.24. A subset of the parameters could be fixed a priori on the basis of some preliminary considerations. For example, appropriate synaptic delays could be directly estimated from a correlation analysis between the efferent signal (cause) and the reafferent signal (effect). There remained 6 neuron parameters and 6 parameters for each of the two synaptic connections to be optimised. In order to do so, data were gathered from experiments in which the roving Koala robot was allowed to control steering using only phonotaxis, i.e. with the synaptic connection of the optomotor system to the motors disabled. During these experiments the reafferent visual signals were recorded. These data were used to tune the parameters, first by hand and then by use of a genetic algorithm, with the aim of providing appropriate inhibition to keep the opto neuron at its resting potential during phonotactic turns, thus filtering out the phonotactic-to-motor ‘noise’. Further details of this process are given in [129]. Experimental setup The results obtained by using the feedforward compensation should allow concurrent use of both the phonotactic and optomotor reflexes. The visual noise introduced by the phonotactic behaviour will be efficiently filtered out without suppressing the optomotor system, which remains ready to react to other external environmental disturbances. To test this, we use a Koala robot, equipped with auditory and visual sensors based on insect sensory systems as described in [159]. A tether connects the robot to a PC running the neural simulation program and recording data. The auditory stimulus is a simulated cricket song. We carry out trials with the robot starting in one of three positions: from near the center of the room, facing the speaker from about 180 cm, and from half-way down each side wall, facing the opposite wall, about 160 cm from the speaker. For each position ten trials are recorded. The robot is stopped when it is about to hit the speaker or else one of the lab walls; a successful trial is counted when the center of the robot is within 30 cm of the speaker at the point it is stopped. The tracks are recorded using shaft encoders, which are sufficiently accurate for dead reckoning over the short paths considered in these experiments. For each recorded track, a ‘directness’ parameter is calculated [135] which combines the average heading of the robot with the time taken to complete the trial. A route at maximum speed straight towards the source would have a directness value of 1. Results for Phonotaxis only: Fig. 3.25 (left) shows the behaviour of the robot driven by phonotaxis only.
It performs successfully, reaching the target every time, with
directness = 0.38. The phonotaxis behaviour when random noise is introduced into the motor output is shown in Fig. 3.25 (right). The trajectories are more winding, but phonotaxis still guides the robot towards the loudspeaker despite the noise, so the auditory behaviour is robust. The directness is reduced (directness = 0.31). Adding the optomotor reflex without feedforward compensation: Fig. 3.26 (left) shows the trials carried out when the optomotor reflex is also enabled and a potential conflict with phonotaxis can occur. The robot does not manage to reach the target every time. The optomotor reflex allows the phonotaxis to work correctly when the angle towards the sound source is small, in which case it helps to keep the robot stabilised in the correct direction, hence the directness is slightly higher overall (= 0.40). Fig. 3.26 (right) shows the trials carried out when both the random noise and the optomotor reflex are enabled, but the forward model is not active. The trajectories show the conflict between the two sensory systems. Although both the phonotaxis and optomotor systems are working against the noise, the optomotor reflex also “corrects” phonotactic turns. The directness is very low (directness = 0.22). FeedForward compensator allows integration of Phonotaxis and Optomotor Reflex: The forward model behaviour is shown in Fig. 3.27 (left). Here the feedforward controller compensates the reafferent signal, correctly predicting optical signals induced by phonotaxis. The optomotor reflex no longer tries to “correct” these turns, leaving the phonotaxis to reach the target. Directness is improved (directness = 0.44), showing that the optomotor reflex is still active in stabilising the trajectory, but now without interfering with the phonotaxis. In the final experiment the multisensory integration capabilities of the forward model are verified when random noise is introduced in the motor system (Fig. 3.27 (right)). The prediction of the phonotactic turns is efficient: the trajectories look like those of the previous experiment, in which no disturbance was present. The optomotor reflex is able to compensate turns due to random noise, without conflicting with turns caused by phonotaxis. It means that phonotaxis is able to accomplish its
Fig. 3.25. Results with phonotaxis control only. The robot starts from three different positions. For each position ten trials are recorded. Left, the robot reaches the target every time. Right, the robot can also reach the target with noise added, but directness is reduced.
Fig. 3.26. Phonotaxis and optomotor are in additive mode. The robot is sometimes not able to reach the target, particularly when noise is added.
Fig. 3.27. The forward model correctly integrates the phonotaxis and optomotor systems. The robot reaches the target every time.
purpose, turning the robot towards the sound source. The value of directness is lower than in the experiments without noise (directness = 0.34), but improved with respect to the experiments with noise in which the phonotaxis and optomotor behaviours were not integrated by the forward model. The implemented feedforward controller showed the capability to correctly predict and filter out phonotaxis-induced optical noise, leaving the optomotor system ready to react to unknown disturbances, as observed in experiments with crickets. Because the feedforward compensator could be implemented in a realistic spiking neural network, it is a plausible mechanism to explain real cricket behaviour, and shows how biological systems might implement, and tune, this form of control. The results obtained here also open the way to the hardware design and implementation of the whole neural-controller system for integration in a complete autonomous machine.
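The essence of the scheme can be sketched in a few lines. The fragment below is an illustrative reconstruction, not the original spiking implementation: both the visual "plant" and the forward model are reduced to first-order lags with assumed time constants, and the phonotactic command is a simple step.

    def first_order(y, x, tau, dt):
        # One Euler step of the lag dy/dt = (x - y) / tau.
        return y + dt * (x - y) / tau

    dt, T = 0.01, 5.0
    tau_plant = 0.30   # assumed dynamics of the reafferent visual signal
    tau_model = 0.32   # slightly mistuned forward model (an assumption)
    reaff = pred = 0.0
    residuals = []
    for step in range(int(T / dt)):
        t = step * dt
        phono_cmd = 1.0 if 1.0 < t < 2.0 else 0.0             # a phonotactic turn
        reaff = first_order(reaff, phono_cmd, tau_plant, dt)  # visual consequence
        pred = first_order(pred, phono_cmd, tau_model, dt)    # predicted reafference
        residuals.append(reaff - pred)  # what the optomotor system actually sees

    # With a well-tuned forward model the residual stays near zero during the
    # phonotactic turn, leaving the optomotor reflex free to respond to genuine
    # external disturbances.
    print(max(abs(r) for r in residuals))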
3.3 Navigation
3.3.1 Path Integration
3.3.1.1 Background Path integration, a term coined by Mittelstaedt [105], is a navigational strategy that explains the ability of some animals to return home on a direct path after a long and tortuous foraging excursion. Even in environments completely lacking distinct landmarks, or in the dark, an animal can use sensing of distance and direction travelled to integrate its velocity vector over time and thus estimate where it is in relation to its starting position. Path integration is a strategy used by a wide range of animals, including rats [3, 77], dogs [136], humans [106], spiders [110] and desert ants [166]. The importance of path integration to the field of robotics as a navigation strategy is best exemplified by NASA’s Mars Rover of the Pathfinder mission [100]. The homing behaviour in the desert ant (Cataglyphis fortis) has been intensively studied and there is a rich corpus of experimental data available (e.g., [173, 4, 14]). To perform path integration, ants use the polarised light from the sun to derive their current compass heading [165]. Sahabot [88], developed at the University of Zurich, uses polarisation sensors modelled on those of the ant, but the underlying path integration system relies on the equations proposed in [105] rather than a neural implementation. Ronacher and Wehner [125] showed that even after removing all allothetic (external) cues desert ants can accurately estimate distance travelled, and recent evidence suggests they actually ‘count’ their steps to gauge distance travelled [170]. Ants thus rely on allothetic and idiothetic (internal) inputs to determine heading and distances travelled respectively. Several neural network models for path integration in ants have been developed. Among the best known are the models by Hartmann and Wehner [42] and Wittmann and Schwegler [171]. There are also several neural models of path integration in rats, usually designed as input to place cell systems based on the hippocampus (e.g., [8]). More recently, Vickerstaff and Di Paolo [156] used an evolutionary approach and found a compact network that implements the bi-component model suggested by [105]. However this result depended on the input to the network taking the form of a sinusoidal compass sensor response function. When using more biologically plausible sensor response functions, Vickerstaff and Di Paolo report a failure to evolve successful homing behaviour. We also take an evolutionary approach, but focus on how a population of neurons can encode (lower-dimensional) sensory information for path integration. 3.3.1.2 Evolving a Neural Network for Path Integration The agent used in the genetic algorithm is simply modelled as having a position (x, y) and heading θ on an unbounded two-dimensional plane. The sensors available to the agent approximate the response of interneurons, studied in the cricket [87], that receive input from a specialised region of the eye (also found in ants and bees) adapted to sense the polarised light patterns in the sky. We define a set of three ‘direction cells’ with preferred directions evenly distributed around 360◦. The activation of each cell is
Fig. 3.28. Response of three direction sensors with preferred directions of 60◦, 180◦ and 300◦. From [41] (Fig. 1).
calculated by performing the dot product of the preferred direction of the cell (h_p) and the actual heading (h_a) of the agent:

firing rate = (cos h_p, sin h_p) · (cos h_a, sin h_a) + ξ,  ξ ∼ N(0, σ)   (3.5)

where ξ is a Gaussian noise term with mean 0 and standard deviation σ. The ensemble of direction cells thus encodes the current heading of the agent as a population code, as shown in Fig. 3.28. Each agent has two turn effectors, for turning left or right. The input defines how sharp the turn will be. Currently the agent is maximally allowed a 20 degree turn per iteration. The agent is also driven forward at a constant speed. Sensors and effectors are connected by a neural network. Several standard neuron representations were explored, but the results below use two kinds: a standard sigmoid neuron:

o_i = 1 / (1 + e^(−a·I_i + b_s))   (3.6)

where the output o_i depends on the summed input I_i, a parameter a and bias b_s, and the input to a cell is calculated by summing up the output of all connected cells multiplied by the connection weights:

I_i = Σ_{j=1}^{n} w_ji o_j   (3.7)

where I_i is the input to cell i, w_ji is the connection weight from cell j to cell i and o_j is the output of cell j; and a ‘memory’ neuron for which the cell potential c of the neuron is updated as follows:

dc/dt = I_i   (3.8)
and the output of the neuron applies the sigmoid function to the cell potential:

o_i = 1 / (1 + e^(−c + b_c))   (3.9)
A biological equivalent of such a neuron, integrating its input over time, was described in the rat [23]. A genetically inspired neural encoding for the network structure and parameters was adopted based on [34], which allows a variable number of neurons and connections encoded as a list of integers. A neuron’s parameters and excitatory or inhibitory connections are encoded between a start marker and an end marker, as shown in Fig. 3.29. The genome size was limited to 500 parameters, which gave an approximate upper bound of 50 neurons. Localised tournament selection was used to determine the individuals from each generation chosen for reproduction with mutation and crossover. Fitness was evaluated during simulated journeys in which the agent is moved through two randomly generated waypoints, and must then home according to the output of its turn effectors as activated by its network. Each agent is evaluated until a maximum number of time steps m is reached, and at each cycle the distance to home dist_t is measured.
Fig. 3.29. Marker-Based Genetic Encoding: a chromosome consisting of integers is interpreted as a neural network. A start marker is any number that modulo k results in a remainder of 1 and an end marker is any number that modulo k results in a remainder of 2. k is typically chosen somewhere between 5 and 15. From [41] (Fig. 2).
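As an illustration of this decoding, the fragment below extracts gene segments from an integer chromosome using the marker rule described in Fig. 3.29. The value k = 10 and the example chromosome are arbitrary choices for illustration, not values from [34].

    def extract_genes(chromosome, k=10):
        # A gene is the run of integers between a start marker (value % k == 1)
        # and the next end marker (value % k == 2); other values outside a gene
        # are ignored.
        genes, current = [], None
        for v in chromosome:
            if v % k == 1:
                current = []                 # start marker opens a new gene
            elif v % k == 2 and current is not None:
                genes.append(current)        # end marker closes the gene
                current = None
            elif current is not None:
                current.append(v)
        return genes

    print(extract_genes([3, 11, 40, 57, 22, 9, 31, 8, 12]))  # -> [[40, 57], [8]]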
The fitness combines the inverse of the squared distances with penalties for spiralling (excessive difference of the agent’s heading θ from the nest heading ω) and network complexity as follows:

fitness = 1 / [ (1 + k_n)^{c_n} (1 + k_c)^{c_c} ∫_0^m dist_t² |θ − ω| dt ]   (3.10)
where k_n is a neuron penalty constant, k_c is a connection penalty constant, c_n is the neuron count and c_c is the connection count. Initial trials found no acceptable solutions even when letting the genetic algorithm run for a very long time. The genetic algorithm would reach local maxima and show hardly any improvement after that. This could be the result of the genetic encoding not allowing much change to the network topology without a momentary decline in fitness. In order to tackle the problem, the search space was reduced by predefining certain topological aspects of the neural network structures. The following constraints were set on the network topology:
• each direction cell had to be connected to at least one memory neuron,
• each direction cell had to be connected to at least one sigmoid neuron,
• sigmoid neurons had to be connected to turn effectors,
• a maximum of two turn effectors, one left and one right, were allowed.
3.3.1.3 Simulated and Robot Results A fit solution that was consistently evolved is depicted in Fig. 3.30. A typical simulation run is shown in Fig. 3.31. It is reasonably easy to understand how the network works, by analysing its activity in various situations, as shown in Fig. 3.32. Each direction cell
Fig. 3.30. Evolved network using direction (D) cells. Each direction cell has one memory (M) CTRNN cell and two sigmoid (S) neurons associated with it. For illustration purposes, the turn motors (represented by the left and right arrows) are duplicated and the recurrent connections of the memory neurons onto themselves are omitted. From [41] (Fig. 3).
Fig. 3.31. Example simulation run, showing the nest, the outbound path and the homebound path. From [41] (Fig. 4).
is directly connected via an excitatory connection to a memory neuron. As a result, by integrating its input, the memory neuron records the distance travelled in the direction signalled by that direction cell. The population of memory neurons therefore encodes the homing vector in a very natural way, similar to the way the population of direction cells encodes the current heading direction. Each direction cell and each memory cell is also connected to two sigmoid neurons, which in turn connect to the turning effectors. These thus become active only if both the direction cell and at least one of the memory neurons are active. During homing, the memory neurons thus act like repelling forces, turning the agent away from the direction that it travelled on its outbound path. We tested the network on a Koala robot to validate that the evolved network provides a plausible real world solution. The Koala (shown in Fig. 3.33) is equipped with 16 infra-red (IR) proximity sensors, which are utilised to detect obstacles, and a 22 MHz Motorola 68331 processor with 1 Mb of RAM. Robot motion is controlled by two DC motors configured in a differential drive setup. The current heading of the robot is determined by monitoring the changes in wheel encoder readings produced during robot motion. The derived heading is then used to activate the corresponding direction cells appropriately. Fig. 3.34 shows that the robot correctly locates the home position when subjected to zig-zag patterns of various lengths and directions. Additional paths were defined to test the global vector derivation ability of the network. Fig. 3.35 shows an outbound path measuring 7m in length, with the final beacon positioned only 1m from the home position. The robot is again shown to reliably locate the home position. The sequences of 3 left and 3 right turns reduce the cumulative error caused by the wheel encoders, which results in a position error of 14cm. A simple obstacle avoidance capability was added to the robot using the Koala IR sensors. The algorithm forced the robot to turn away from an obstacle, overriding the motor commands issued by the path integration network. The network however still receives input from the heading and distance sensors throughout the avoidance manoeuvre, and therefore updates the global vector appropriately. Thus when the robot encounters an obstacle, it first navigates around the obstacle before returning to its
Fig. 3.32. Examples of network activations for different situations. Bold lines indicate strong activations. (a) The agent is heading northeast and the top-right direction cell is strongly active. The sigmoid neuron connected to the left turn motor is the only neuron to receive strong inputs from both a memory neuron and the direction cell, thus causing the agent to turn left towards the nest. (b) The agent is heading northwest. This time, the top-left direction cell as shown in the left part of the figure is active and, in conjunction with the active memory neuron, it causes the agent to turn right. (c) When the agent is pointing towards the nest, the activations of both turn effectors cancel each other out, and the agent moves straight ahead. From [41] (Fig. 5).
Table 3.1. Evolved neuron parameters

Parameter   Evolved value
a           0.667
b_s         −4.372
b_c         −1.164
τ           135518

Table 3.2. Evolved connection weights

Weights     Excitatory value   Inhibitory value
wS-left     3.974              −3.974
wS-right    3.976              −3.976
wPOL-S      0.719              -
wPOL-M      0.012              -
wM-S        3.962              -
wM-M        10^−4              -
Fig. 3.33. The Koala robot used in the experiments. From [41] (Fig. 8).
homing behaviour. The ability of the robot to correctly home following an encounter with an obstacle on the homeward path was tested as shown in Fig. 3.36. The robot skirts the obstacle before following the altered global vector to the home position. The second obstacle avoidance test places two staggered obstacles in the homeward path of the robot. Fig. 3.36 shows that the robot correctly avoids both obstacles before following the corrected global vector to the home location. These results show that the network does update the homing vector as it is turning towards the home position after
Fig. 3.34. Robot path integration over zig-zag paths of different lengths: (A,C) 4m and (B,D) 6m. The errors were (A) 27cm, (B) 60cm, (C) 35cm, and (D) 42cm. From [41] (Fig. 10).
Fig. 3.35. Convoluted path. The error for this run was 14cm. From [41] (Fig. 11).
avoiding obstacles. The system accumulates error due to multiple heading adjustments when avoiding obstacles, and is at present not highly accurate for large or multiple detours, but could be improved with more accurate compass sensors.
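The core mechanism, direction cells feeding integrating memory neurons whose population encodes the home vector, can be sketched compactly. The fragment below is an illustrative reconstruction rather than the evolved network itself: it uses the three direction cells of Eq. 3.5 and pure integration (Eq. 3.8), and decodes the home vector directly instead of via the sigmoid/effector circuitry; step sizes and noise levels are arbitrary.

    import numpy as np

    preferred = np.deg2rad([60.0, 180.0, 300.0])   # preferred directions (Fig. 3.28)

    def direction_cells(heading, sigma=0.05):
        # Population code for the current heading (Eq. 3.5) with Gaussian noise.
        return np.cos(heading - preferred) + np.random.normal(0.0, sigma, 3)

    memory = np.zeros(3)     # memory neurons integrating their input (Eq. 3.8)
    heading, pos = 0.0, np.zeros(2)
    for step in range(400):  # a wandering outbound path at unit speed
        heading += np.random.uniform(-0.35, 0.35)
        memory += direction_cells(heading)
        pos += np.array([np.cos(heading), np.sin(heading)])

    # For three evenly spaced preferred directions, sum_k cos(h - p_k) cos(p_k)
    # equals (3/2) cos(h), so the memory population decodes (with factor 2/3)
    # into the outbound displacement; negating it gives the home vector.
    home_est = -(2.0 / 3.0) * np.array([np.sum(memory * np.cos(preferred)),
                                        np.sum(memory * np.sin(preferred))])
    print(home_est, -pos)   # the estimate should roughly match the true home vector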
Fig. 3.36. Robot path integration and obstacle avoidance experiments. The obstacles were only placed into the arena after the agent had reached the last waypoint and started its homeward journey. The errors were (A) 32cm, (B) 92cm, (C) 105cm, and (D) 108cm. From [41] (Fig. 12).
The overall biological plausibility of the evolved network is an open question. Investigations into the polarisation vision pathways of insects have suggested that the central complex, a neuroarchitecturally distinct neuropil in the insect brain, may be involved in functions of compass orientation and path integration (see [56] for a review). The central complex has a regular neuroarchitecture, which has been said to bear some similarities with models of path integration (cf. [111]). It hosts a map-like representation of e-vector orientations [46], and has connections with motor centres in the thoracic ganglia. In some insects it receives, besides polarisation-sensitive input, connections from circadian clock neurons, possibly involved in time compensation by adjusting compass bearings in relation to solar azimuth with time of day [56]. Given these developments in insect neuroscience, a future goal is to map more directly our models for path integration to the neuroarchitecture of the central complex.
3.3.2 Visual Homing
3.3.2.1 Background Many insects have been shown to be capable of navigation using visual homing. For example, we have demonstrated that crickets can learn to relocate an unmarked cool
spot on a heated arena floor (Fig. 3.37) using the surrounding visual scene [168], and will search in the corresponding location if the visual surroundings are rotated (Fig. 3.38). This capability allows desert ants [166] and wasps [154], among other insects, to relocate a hidden nest entrance from anywhere in the immediate surroundings. The problem of visual homing can be abstractly stated as follows. An agent, equipped with a monocular camera, is located at a point C. The goal is for the agent to move from C to a home position S, using a home image taken from location S and the current visual information from the camera (Fig. 3.39) to determine its movement. Visual homing is a short-range navigation method which can lead an agent to a position with accuracy, provided the majority of the scene visible from the home position is also visible from the current position. It is generally assumed that the agent has an omnidirectional image of the environment [29], which is true for most insects, and for robots can be obtained by using a spherical, parabolic or conical mirror located above a camera pointing upwards. Methods proposed for visual homing can be divided into two groups: feature-based methods, where specific features are extracted from the images and matched between the current and the home image; and image-based methods, where the image pixel values are used directly. Feature-based methods include the snapshot model [11, 163] and the Average Landmark Vector method [88, 109]. The most popular image-based method is Image Warping [30]. This works under the assumption that all landmarks are situated at equal distances from the home location. Using a greyscale one-dimensional horizon image taken from the current location, the model calculates all possible deformations of the image line, for all possible rotations and movement vectors. The agent then moves according to the movement vector that produces the image most closely resembling the home image. This approach has been shown to be effective and robust, but at the substantial computational cost of exhaustive search to find the best match. A simpler image-based method was proposed by Zeil et al. [180], who studied the properties of outdoor scenes for the purpose of image homing, using a panoramic camera in the natural habitat of a wasp that is known to locate its nest by visual homing. They captured a home position image, and similar images from a grid of positions around the nest. Each image was compared to the home image using the root mean square difference function:

RMS = sqrt( Σ_{i=1}^{M} Σ_{j=1}^{N} (I_C(i, j) − I_S(i, j))² / (M·N) )   (3.11)

where I_C(i, j) is the intensity function for pixel (i, j) in the current image, I_S(i, j) is the intensity function for pixel (i, j) in the home image, and M × N are the dimensions of the images. The three-dimensional function resulting from the (x, y) grid of current image positions and the corresponding RMS values is called the RMS difference surface. Zeil et al. found that the difference surface they measured was unimodal, i.e. movement towards the home position resulted in gradual reduction of the difference, reaching a global minimum in the target location, with no local minima whatsoever. The size of the area for which this held true (the ‘catchment area’) was up to 3 square meters. They suggested it is possible to take advantage of the unimodal property of the catchment
Fig. 3.37. Experimental setup used to test homing in crickets. (a) Schematic of the apparatus: a camera and computer (tracking software) mounted above a 30cm arena on a hot plate with a temperature gradient and a 6cm cool spot, maintained by pumped hot and cold water. (b) For the rotation trials, the wall of the arena is rotated by 180 degrees, changing the position of the visual cues, and creating a fictive target location relative to those cues. (c) This natural scene wallpaper is wrapped around the inside of the arena wall.
Fig. 3.38. Crickets search preferentially in the ‘fictive’ position of the cool spot as indicated by rotated cues or scenery. The plots show the time to reach position (seconds) under four conditions (artificial cues, no cues, natural scene, control) for three locations in the arena: O = original cool spot, F = fictive, R = random.
Fig. 3.39. The homing task is to move the agent from C in the direction of S. In the simplest case, the visual information is the relative location of the projection of landmark L; this allows the home direction to be calculated from simple geometry. From [179] (Fig. 1).
area to perform visual homing. The agent can move towards the home location by descending towards the point that gives the minimum RMS difference. Homing can thus be regarded as a function minimisation problem. Function minimisation by gradient descent (also known as steepest descent) in general works by determining the negative of the gradient of the function at the current position and moving in this direction until a line-minimum is reached [167]. However, the only information available to an insect or robot is the current RMS difference and the sequence of the inputs so far. This means that it cannot easily calculate the
gradient of the function at a given position. The strategy proposed by Zeil et al. [180], called ‘RunDown’, was for the agent to move in a random direction for a fixed distance, and then determine if the RMS difference has decreased, in which case it moves forward again in the same direction; otherwise, it turns 90 degrees before moving forward again. 3.3.2.2 Evolving a Gradient Descent Algorithm for Visual Homing In an attempt to find a more efficient way to take advantage of the properties of the RMS difference surface for the purpose of visual homing, we turned towards a biologically inspired approach for function optimisation. Organisms such as the nematode Caenorhabditis elegans perform chemotaxis using a single sensor to detect only the current concentration of the chemical substance [27], and thus solve an analogous problem to a robot trying to home using only the current RMS difference. In [27], it is demonstrated that the turning behaviour of C. elegans during chemotaxis can be modelled by:

dθ/dt = ω_bias + z_0 C(t) + z_1 (C(t) − C(t − 1))   (3.12)

where dθ/dt is the turning rate (or the angle turned in one time step), ω_bias, z_0 and z_1 are constant parameters, C(t) is the input at time t, and C(t) − C(t − 1) an estimate of the temporal gradient of the input. In most forms of gradient descent or ascent, a decreasing step size is desirable, so that the agent can approach the goal rapidly and then slow down to locate it with the desired accuracy. This can be implemented by making the forward movement at each time step proportional to C(t), i.e. C(t) multiplied by some constant U. To use this Taxis algorithm it is necessary to find appropriate values of the parameters ω_bias, z_0, z_1 and U. We used an evolutionary strategy to determine these parameters, using a simulated agent homing on RMS difference surfaces calculated from previously collected images in a robot lab environment (available at http://www.ti.unibielefeld.de/html/research/avardy/index.html), courtesy of Andrew Vardy (Memorial University, St. John’s). These sets were omnidirectional images of the same 3 x 5.1m area (an office room): three sets with the same pictures captured under different illumination levels (‘Original’, ‘Twilight’, ‘Night’), and another set where a number of chairs had been added to the room (‘Chairs’). The images corresponded to a real world grid with 30 cm distances between every two consecutive images. To obtain robust controllers, we created a variety of surfaces that used the home image from one data set compared with current images from another data set, as listed in Table 3.3. The fundamental property of the global minimum appearing at the home position held true for all surfaces (Fig. 3.40). Unimodality was observed in all combinations except those that used a home image from the ‘Chairs’ dataset, when some local minima were observed. The details of the evolutionary process used to optimise the four parameters (ω_bias, z_0, z_1, U) of the Taxis algorithm are given in [179]. Briefly, individuals were encoded by four real numbers. The selection method used was Stochastic Universal Sampling, and genomes were mutated (changing each parameter by a small random value drawn from a uniform distribution) with a probability of 0.6, and underwent crossover with the same probability. The population size was 200. The fitness was evaluated by running each individual for a maximum of 500 time steps on six different surfaces. A run was considered successful when the agent’s distance from the goal was less than a threshold D (= 0.1 m).
Fig. 3.40. Four characteristic root mean square (RMS) difference surfaces. A) RMS difference between a home image from the Original set and every image in the Original set. B) RMS difference between the same home image and the Chairs set. C) RMS difference between a home image from the Chairs set and the Chairs set. D) RMS difference between a home image from Chairs and the Original set. Each unit in the axes corresponds to 0.3 m. From [179] (Fig. 3).
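Computing such a surface is straightforward. The sketch below assumes a dictionary grid mapping (x, y) grid coordinates to equally sized grayscale numpy arrays; the names are illustrative, and image capture and unfolding are omitted.

    import numpy as np

    def rms_difference(current, home):
        # Eq. 3.11 for two grayscale images of equal shape.
        diff = current.astype(float) - home.astype(float)
        return np.sqrt(np.mean(diff ** 2))

    def difference_surface(grid, home_xy):
        # RMS difference of every grid image against the home image.
        home = grid[home_xy]
        return {xy: rms_difference(img, home) for xy, img in grid.items()}

    # If the surface is unimodal, min(surface, key=surface.get) is home_xy.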
The fitness score was 500 − (number of steps to reach home), with failure to reach home within 500 steps scored as 1. To reduce dependence of the evolved solutions on details of the simulated environment, we applied Jakobi’s ‘radical envelope of noise’ methodology [70], adding Gaussian noise to the RMS value, the agent’s turning angle and the displacement at each time step. 3.3.2.3 Results of Evolving the Simulated Agent The GA was able to come up with an efficient controller in about 250 generations. The controller parameters from three different runs of the GA were:

Controller   ω_bias    z_0       z_1       U (in 0.3m)
1            0.6490    0.9485    44.6288   0.6744
2            1.3821    −0.7951   37.0456   0.6492
3            103250    −0.7987   36.1076   0.9403
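One update step of this Taxis controller is easy to state in code. The sketch below plugs in the Controller 1 parameters from the table above (as w_bias, z0, z1, U); the stand-in surface, the units, and the behaviour of this particular toy run are assumptions for illustration, not results from [179].

    import math

    def taxis_step(x, y, theta, C_prev, surface,
                   w_bias=0.6490, z0=0.9485, z1=44.6288, U=0.6744):
        C = surface(x, y)                             # current RMS difference
        theta += w_bias + z0 * C + z1 * (C - C_prev)  # Eq. 3.12: klinokinesis + klinotaxis
        step = U * C                                  # step size shrinks near home
        return x + step * math.cos(theta), y + step * math.sin(theta), theta, C

    # Toy example on an artificial unimodal surface with its minimum at the origin:
    surface = lambda x, y: 0.1 * math.hypot(x, y)
    x, y, theta, C = 3.0, 3.0, 0.0, surface(3.0, 3.0)
    for _ in range(300):
        x, y, theta, C = taxis_step(x, y, theta, C, surface)
    print(x, y)   # expected to end up wandering near the minimum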
It is possible to directly interpret these parameters, following the discussion of Ferree and Lockery (1999). In all three controllers the bias ω_bias and the first-order term z_1
had the same sign. This meant that, for a certain range of values of C(t) − C(t − 1) < 0, the bias and the first-order term cancel each other out, resulting in a small or zero turning rate when the gradient is decreasing. As the values change to greater or smaller than that range, the agent is forced to turn at steeper angles. This causes a behaviour called klinotaxis, described as “a change in turning rate in response to the spatial gradient of a stimulus field” [27]. The zero-order term z_0 changes the turning rate proportionally to the input itself, producing klinokinesis: “a change in turning rate in response to the scalar value of a stimulus field” [27]. The sign of the zero-order term can be either positive or negative, because the input is always positive, and, since we are dealing with angles, reducing an angle towards 0 is the same as increasing it towards 360◦. As the agent moves towards the home position, klinotaxis becomes more and more important, while the role of klinokinesis is reduced to simply adjusting the effects of the first-order term. Note that C(t) − C(t − 1) is generally much smaller than C(t) itself, hence z_1 is correspondingly larger than z_0. In order to evaluate the homing abilities of the controllers, we performed 1000 runs of each controller on each of the nine surfaces, including the six on which the controllers were evolved, and the three not used in evolution. The RunDown algorithm was also tested on the same surfaces for comparison. The controllers were evaluated with respect to five different measures: the homing success rate; the mean number of steps; the total distance travelled; the total angle turned; and the homing precision achieved. A sample run of each controller appears in Fig. 3.41, and the homing success rate for each controller, for 1000 runs, appears in Table 3.3. The common factor for the surfaces where performance is below 100% is that they use the ‘Chairs’ dataset as the current image set. This implies that the true difficulty for homing comes when local optima appear. The RunDown algorithm is more seriously affected in these conditions, because its fixed turn size can lead it to become trapped, moving around a local minimum in square patterns. We found no direct link between the surfaces used to train the controllers and the number of steps needed to home; RunDown tended to travel further than the other controllers per homing run, but turn less (as might be expected). Further details and discussion of the measured results for each parameter are given in [179].
3.3.3 Robot Implementation and Results
A Koala robot (Fig. 3.42) was used for a real world comparison of the Taxis, RunDown and Image Warping algorithms. Images were captured using a Creative Labs Webcam pointing downward at a parabolic mirror manufactured by Kugler. The camera was supported on a rig consisting of three narrow pillars, which were visible in the image but did not appear to significantly affect any of the algorithms. The webcam connected via a USB port to a Dell Inspiron 7500 Laptop with an Intel Pentium III processor, running Mandrake Linux 8.2. All image processing, calculations and generation of movement commands were programmed in C on the Laptop. Motor commands were sent via a serial link to the built-in robot microprocessor for execution. The robot proved very consistent and accurate in its turning angle and movement distance, well within the noise parameters used in the simulation. It proved necessary to re-run the parameter optimisation procedure for the Taxis algorithm on the real robot. We explored a fast method for obtaining an estimate of
Table 3.3. Success rates for the evolved controllers on different surfaces created by combining home and current images from different sets. Those combinations marked * were used in the evolution process, the others only for evaluation.

Home image   Current image   Controller 1   Controller 2   Controller 3   Overall   RunDown
Original     Original*       100%           100%           100%           100%      100%
Original     Twilight        100%           100%           100%           100%      100%
Original     Night*          100%           100%           100%           100%      100%
Original     Chairs*         90.6%          94.7%          84.9%          90.07%    78.6%
Night        Original*       100%           100%           100%           100%      100%
Twilight     Night           100%           100%           100%           100%      100%
Chairs       Original*       100%           100%           100%           100%      100%
Chairs       Chairs*         95.4%          96.9%          83.3%          91.87%    78.2%
Night        Chairs          91.8%          95.7%          89.8%          92.43%    77.4%
Fig. 3.41. A sample run with each controller. Upper left: controller 1. Upper right: controller 2. Lower left: controller 3. Lower right: RunDown. From [179] (Fig. 5).
the surface properties that could be used for GA optimisation, which does not require capturing a complete image grid. The robot captured a home image, and then captured a line of RMS differences by moving outwards in one direction and capturing images at equal distances. A surface was then created by rotating this line of values through 360 degrees. By capturing the RMS difference line for each of the four directions and evolving a controller that could home on all four corresponding (artificial) surfaces, the controller was optimised for the properties (steepness, value range) of the given area.
Fig. 3.42. The robot used to test homing. From [179] (Fig. 4).
RunDown was used as for the simulation tests, and the Image Warping algorithm was implemented as follows. Given rotation ψ, home vector angle α and the ratio ρ of the agent’s distance d from the home position to the home position’s distance R from the landmarks, i.e. ρ = d/R, a pixel located at angle θ in the current horizon image will be displaced by:

δ = arctan( ρ sin(θ − α) / (1 − ρ cos(θ − α)) ) − ψ   (3.13)

Searching over all α, ψ and ρ, the best match to the home image is found, and the corresponding home vector used to move the robot for a fixed distance. The termination of homing is signalled when the home vector changes by more than 170◦, which indicates the robot has just passed over the home position. The only free parameter is the step size, which was set to 0.25 metres. One problem encountered on the robot was that we could no longer assume that all images had been captured from the same orientation. The RMS for a particular image was therefore calculated as the minimum RMS over all possible rotations (horizontal pixel shifts of the image), which should occur when the orientations are aligned. For the difference surfaces observed by the robot, as before, the global minimum was at the home position, and there was a gradual increase in the RMS difference as the robot moved away from home. However, the RMS surface was not as steep as in the
Vardy dataset. The main reason appeared to be noise introduced by using pixel shifting to internally rotate the image, which assumes that the camera is perfectly orthogonal to the panoramic mirror and that the position of the mirror centre in the panoramic image (i.e. before unfolding) is known with accuracy. Our camera rig was less accurate in this respect than those used by Zeil et al. and Vardy. The performance of the three algorithms was compared, using the same measures as in the simulation. The first tests used a well-lit, unchanging environment, and consisted of two sets of ten runs for each algorithm: ten starting from different positions 1 metre away from the home position, and another ten runs starting from 2 metres away. Three different home positions were used across these trials.

At one metre:
Algorithm           Taxis    RunDown   Warping
Success             90%      80%       100%
# of steps          20.7     13.8      6.8
Distance traveled   4.2m     2.5m      1.8m
Angle turned        1316◦    360◦      316◦
Homing precision    0.43     0.18      0.29
At two metres:
Algorithm           Taxis    RunDown   Warping
Success             40%      90%       60%
# of steps          22.3     23.6      16.3
Distance traveled   6.0m     4.5m      3.80m
Angle turned        1559◦    580◦      999◦
Homing precision    0.42     0.25      0.27

Unlike the simulation, the Taxis algorithm performs worse than the RunDown algorithm, particularly for the longer distance where there is a low success rate (40%), but also at the shorter distance where the number of steps, distance travelled, angle turned and homing precision are all worse for the Taxis algorithm. We believe this is probably due to greater sensitivity of this algorithm to the noise introduced by image rotation. The Image Warping algorithm performed better than both the Taxis algorithm and RunDown over shorter distances. However, note that this result is for a static environment. Our next test examined homing by each algorithm when gradual changes were made to the environment with respect to the stored home image.

With changing environment:
Algorithm           Taxis    RunDown   Warping
Success             80%      90%       0%
# of steps          22.3     13.3      -
Distance traveled   5.1m     1.6m      -
Angle turned        1670◦    209◦      -
Homing precision    0.22     0.19      -

Obviously, sufficient change in the environment will render the home image position unrecognisable, and homing will fail for any algorithm. On the other hand, all
Fig. 3.43. The moved landmarks experiment A: The unfolded home image. B: An unfolded image taken from the home position after changes in the scene. From [179] (Fig. 15).
Fig. 3.44. The moved chairs scene. H: The home position. M: The local minimum. T: The perceived home position for image warping. From [179] (Fig. 16).
algorithms should demonstrate a level of tolerance to minor changes in the environment. In the example of the changes shown in Fig. 3.43, it was observed that the Image Warping algorithm was unable to home. In fact, its perceived home position had been relocated from position H in Fig. 3.44, where the robot lies, to the ‘trap’ position marked with T. Thus, the Image Warping algorithm was rendered completely useless. For the RMS-based algorithms (Taxis and RunDown) the most significant problem was the appearance of a local minimum at the position marked with M in Fig. 3.44. But the local minimum was not as deep as the global minimum, which remained at the home position. An explanation for this is that the RMS surface is calculated using visual information not only from the horizon line, but from the entire scene, including floor and ceiling patterns. This means that, even when the landmarks happen to move in such a way as to misguide Image Warping, the RMS surface needs much more radical changes in the scene to lose the global minimum. Consequently, in these tests, both these algorithms were still able to home. 3.3.3.1 Building More Robust Difference Surfaces Zeil et al. [180] identified a significant problem with difference surface homing: when I_S and I_C are captured in different illumination conditions, the chances of homing successfully decrease, often dramatically. We see an example of this in Fig. 3.45. To produce difference surfaces which are more robust to changes in illumination there are two obvious remedies:
• Transform reference and current image intensities to minimise the effects of dynamic illumination (using e.g. histogram equalisation).
• Use image similarity measures other than the root-mean-square.
While these approaches are not mutually exclusive, we chose in this work to follow the second approach only. Along with dynamic illumination, we are also interested in the effect of the movement of imaged objects between captures of I_S and I_C. We first established the success/failure rate of the standard RMS approach, using the database of images described previously [155]. Landmark layout and/or illumination conditions differ from set to set as described in Table 3.4. We created a number of difference surface sets, defined by the source of snapshot and current images. For example, one set of difference surfaces used the “Original” images for both I_S and I_C; another set took I_S from “Winlit” and all I_C from “Original”, thus simulating dynamic illumination. The complete list of data set pairings is given in Table 3.5. We used the pairings along the diagonal of the table to create difference surfaces reflecting static environments. The off-diagonal pairings involving the “Winlit” and “Doorlit” sets simulate a laboratory environment in which illumination is non-constant. The off-diagonal pairings involving the “Arboreal” set simulate an environment in which there is a change in the location of a relatively unobtrusive landmark. The pairings (Chairs1, Chairs2) and (Chairs2, Chairs1) yield difference surfaces reflecting the movement of more prominent objects in the environment. Every difference surface set consists of nineteen surfaces corresponding to nineteen different home snapshot locations uniformly distributed around the experimental area. To rate the success of homing we use the criterion defined in [155]: the return ratio RR, i.e. the ratio of successful homing runs to the total number of homing
Fig. 3.45. (a) A difference surface formed using the RMS image similarity measure defined by Equation 3.14. The reference image was captured in a different illumination condition than all current images IC . Notice that a local difference surface minimum coincides with the reference location but other local optima have appeared. (b) Here we illustrate a number of homing runs using the difference surface in (a). Each homing run starts at a grid point on the laboratory floor. The simulated agent moves so as to optimise the difference surface using a gradient descent algorithm. Successful homing runs (i.e. those ending within 30cm of the reference location) are shown in blue and homing failures are shown in red. There are a significant number of homing failures.
runs. Homing runs are initiated from every non-snapshot location in the experimental area. The average return ratio (the mean of RR) is taken over all nineteen difference surfaces in a difference surface set. We define the RMS image similarity between grayscale images I_S and I_C as
RMS(I_S, I_C) = sqrt( (1/N) Σ_{i=1}^{N} (I_C(i) − I_S(i))² )   (3.14)

The gradient of the difference surface at each position (x, y) is here calculated using Matlab’s gradient command, which uses the following two-sided differencing equation:

∇f(x, y) ≈ ( (f(x+h, y) − f(x−h, y)) / 2h , (f(x, y+h) − f(x, y−h)) / 2h )   (3.15)
where h is the separation between grid points, in our case 30cm. Gradients at non-grid points in the experimental area are estimated by linear interpolation. The agent then homes by moving 30cm opposite the direction of the gradient. The agent continues a homing run until one of the following stopping criteria is satisfied: the number of gradient calculations exceeds 400; or the agent detects that its last twenty homing steps
Table 3.4. Description of each of the image data sets used in this work

Original   All overhead lights are on. No obstructive objects have been placed on the floor of the experimental area.
Winlit     Only the bank of lights near the curtained window (upper half of the image) are switched on. Those near the door are off.
Doorlit    Only the bank of lights near the closed door are switched on. Those above the curtained window are off.
Arboreal   All overhead lights are on. A plant has been placed in the centre of the experimental area.
Chairs1    All overhead lights are on. Three office chairs have been placed along the walls of the laboratory, out of the experimental area.
Chairs2    All overhead lights are on. The three chairs in “Chairs1” have been moved to the experimental area.
Table 3.5. Source of snapshot and current images for the various difference surfaces used in this work. A ✓ indicates that the pairing was used to create a set of difference surfaces.

                    Current image data set
Snapshot set   Original   Winlit   Doorlit   Arboreal   Chairs 1   Chairs 2
Original       ✓          ✓        ✓         ✓          -          -
Winlit         ✓          ✓        ✓         -          -          -
Doorlit        ✓          ✓        ✓         -          -          -
Arboreal       ✓          -        -         ✓          -          -
Chairs 1       -          -        -         -          ✓          ✓
Chairs 2       -          -        -         -          ✓          ✓
Table 3.6. Average return ratios for homing experiments carried out on RMS difference surfaces in static and dynamic environments. The standard deviation of the average return ratio for each data set pairing is given in brackets.

                    Current image data set
Snapshot set   Original        Winlit          Doorlit         Arboreal        Chairs 1        Chairs 2
Original       0.977 (0.066)   0.584 (0.398)   0.654 (0.412)   0.933 (0.083)   -               -
Winlit         0.427 (0.164)   0.949 (0.058)   0.020 (0.059)   -               -               -
Doorlit        0.536 (0.210)   0.037 (0.110)   0.967 (0.051)   -               -               -
Arboreal       0.956 (0.070)   -               -               0.924 (0.075)   -               -
Chairs 1       -               -               -               -               0.975 (0.048)   0.598 (0.117)
Chairs 2       -               -               -               -               0.953 (0.065)   0.578 (0.106)
cluster around a particular location. A homing run is deemed successful if it ends within 30cm of the snapshot location.
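The homing procedure just described can be sketched as follows. Here surface is assumed to be a dict from (x, y) grid coordinates in cm to RMS values, and for brevity the sketch snaps positions back to grid points rather than interpolating as in the original; boundary handling is also omitted.

    import math

    def grad(surface, x, y, h=30):
        # Two-sided differencing (Eq. 3.15) on the 30cm grid.
        gx = (surface[(x + h, y)] - surface[(x - h, y)]) / (2 * h)
        gy = (surface[(x, y + h)] - surface[(x, y - h)]) / (2 * h)
        return gx, gy

    def homing_step(surface, x, y, h=30, step=30):
        gx, gy = grad(surface, x, y)
        norm = math.hypot(gx, gy) or 1.0
        # Move 30cm against the gradient, then snap to the nearest grid point.
        nx, ny = x - step * gx / norm, y - step * gy / norm
        return round(nx / h) * h, round(ny / h) * h

    # A run would iterate homing_step until the position stops changing (the
    # clustering criterion above) or the budget of 400 gradient calls is spent.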
between the capture of IS and IC. In fact there seems to be a general diminution of the average return ratio whenever IC is drawn from "Chairs2", even when the snapshot image is also drawn from "Chairs2". We speculate that it is the presence of the large objects in the experimental area, rather than movement of imaged objects between capture of snapshot and current images, that causes difference surface homing to perform less well. We shall try to justify this speculation below.

Analysis of RMS

We would like to know why homing on RMS difference surfaces is affected by visual dynamism as described above. In the form given in Equation 3.14, the RMS is difficult to analyse. We therefore break the equation into several terms as follows:

$$\mathrm{MSD}(I_S, I_C) = \frac{1}{N}\sum_{i=1}^{N}\bigl(I_S(i) - I_C(i)\bigr)^2 \qquad (3.16)$$

$$= \frac{1}{N}\sum_{i=1}^{N}[I_S(i)]^2 + \frac{1}{N}\sum_{i=1}^{N}[I_C(i)]^2 - \frac{2}{N}\sum_{i=1}^{N}I_S(i)\,I_C(i) \qquad (3.17)$$
Note first that the square root in Equation 3.14 has been removed in Equation 3.16, transforming the RMS into an expression of mean squared differences (MSD); we have done this because the square root plays no significant role in the behaviour of the RMS and slightly muddies our mathematical analysis. Equation 3.17 contains three terms, which can be transformed into forms more amenable to analysis. Several standard textbooks on statistics (see e.g. [150]) tell us that

$$\frac{1}{N}\sum_{i=1}^{N}[I_S(i)]^2 \approx \mathrm{Var}(I_S) + \bar{I}_S^2 \qquad (3.18)$$

$$\frac{1}{N}\sum_{i=1}^{N}[I_C(i)]^2 \approx \mathrm{Var}(I_C) + \bar{I}_C^2 \qquad (3.19)$$

$$\frac{1}{N}\sum_{i=1}^{N}I_S(i)\,I_C(i) \approx \mathrm{Cov}(I_S, I_C) + \bar{I}_S\,\bar{I}_C \qquad (3.20)$$
where Var(IS) is the variance of the intensities in IS and ĪS is the mean intensity in IS; Var(IC) and ĪC are defined similarly. Cov(IS, IC) is the pixelwise covariance between IS and IC. Since the number of pixels in IS and IC is large, the difference between the left and right hand sides of each of the three equations above is exceedingly small in practice. Substituting the right hand sides of Equations 3.18, 3.19 and 3.20 into Equation 3.17 and performing some algebraic manipulation transforms the MSD into

$$\mathrm{MSD}(I_S, I_C) \approx \mathrm{Var}(I_S) + \mathrm{Var}(I_C) - 2\,\mathrm{Cov}(I_S, I_C) + (\bar{I}_S - \bar{I}_C)^2 \qquad (3.21)$$
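The decomposition in Equation 3.21 is easy to verify numerically. A minimal sketch (ours): note that with the population (divide-by-N) definitions of variance and covariance, which NumPy uses by default, the identity is in fact exact, and the ≈ in Equations 3.18-3.21 reflects only the divide-by-(N−1) sample definitions found in textbooks.

```python
import numpy as np

rng = np.random.default_rng(0)
i_s = rng.integers(0, 256, size=10_000).astype(float)   # stand-in "images"
i_c = rng.integers(0, 256, size=10_000).astype(float)

msd = np.mean((i_s - i_c) ** 2)
cov = np.mean((i_s - i_s.mean()) * (i_c - i_c.mean()))  # pixelwise covariance
rhs = np.var(i_s) + np.var(i_c) - 2 * cov + (i_s.mean() - i_c.mean()) ** 2

print(np.allclose(msd, rhs))                            # True
```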
Fig. 3.46. Homing paths starting from all non-goal positions for a difference surface with snapshot location at x=90cm, y=30cm. The snapshot image was drawn from the “Original” set and all current images were drawn from the “Winlit” set. No homing runs reached the goal location.
From Equation 3.21 it becomes clear that in moving so as to minimise the RMS between IC and IS, the homing agent is simultaneously:

• seeking high covariance between IS and IC (i.e. minimising −2Cov(IS, IC));
• seeking low-variance current images (i.e. minimising Var(IC)); and
• seeking equality of the mean intensities of IS and IC (i.e. minimising (ĪS − ĪC)²).

The second and third items can cause homing errors. We see the equalisation of mean intensities playing a deleterious role when, for example, the snapshot image is taken from the "Original" data set and all current images are drawn from the "Winlit" data set. Fig. 3.46 shows the homing paths (starting from all non-goal positions) for this data set pairing when the goal location was set at x = 90cm, y = 30cm. None of the runs manages to reach the goal position, even those which begin quite close to it. Instead the agent is directed towards the window, which is the part of the arena in the "Winlit" set whose average light intensity is most similar to that of the snapshot, taken from the "Original" set.

Equation 3.21 also implies that the homing agent will be attracted to areas whose corresponding images have relatively low variance compared with images of nearby areas. Large nearby objects (such as chairs) in the image tend to decrease the variance, which could account for the observed problems in homing with the "Chairs2" data set. It therefore seems that assessing the similarity between IS and IC with covariance rather than RMS would be a more sensible approach.

Exploring the Covariance Image Similarity Measure

We carried out the same set of homing experiments as described above for the RMS, except that here we used covariance to measure the similarity between IS and IC, where covariance (COV) is defined as:
$$\mathrm{COV}(I_S, I_C) = \frac{1}{N}\sum_{i=1}^{N} I_S(i)\,I_C(i) - \bar{I}_S\,\bar{I}_C \qquad (3.22)$$
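In code the covariance measure is a one-liner. This sketch is ours, written with the 1/N normalisation of Equation 3.20; any fixed rescaling only changes the scale of the difference surface, not the homing directions.

```python
import numpy as np

def cov(i_s, i_c):
    """Pixelwise covariance of Eq. 3.22; larger values mean greater similarity."""
    s, c = i_s.astype(float).ravel(), i_c.astype(float).ravel()
    return np.mean(s * c) - s.mean() * c.mean()
```

Homing then proceeds exactly as before, except that the agent ascends rather than descends the resulting difference surface, since similarity is now to be maximised.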
IS, IC, ĪS and ĪC are defined as above. The results are given in Table 3.7; the statistical significance of the differences between these data and the RMS results of Table 3.6, assessed using McNemar's test [138], is given in Table 3.8.

Taken together, Tables 3.6, 3.7 and 3.8 tell us that COV is sometimes a better image similarity measure than RMS in static conditions, in particular under non-uniform overhead lighting. COV always outperforms RMS when illumination conditions change between the captures of IS and IC. We note, though, that the COV results are quite poor in the face of relatively extreme illumination change, when IS is drawn from "Doorlit" and IC from "Winlit" or vice versa. COV sometimes outperforms RMS when objects move between the capture of IS and IC, namely for the ("Arboreal", "Original") and ("Chairs2", "Chairs1") data set pairings. The average return ratios for the two similarity measures are statistically indistinguishable when large objects are placed within the experimental area during capture of the current images (i.e. when current images are drawn from the "Chairs2" or "Arboreal" sets). This last point is somewhat surprising: given our above analysis of RMS, we expected difference surface homing with the COV measure to be more successful than homing with RMS in environments with large objects in the experimental arena. We are not as yet certain why this improvement fails to occur.

Table 3.7. Average return ratios for homing experiments carried out on COV difference surfaces in static and dynamic environments. The standard deviation of the average return ratio for each data set pairing is given in brackets.

Snapshot set   Current image data set
               Original       Winlit         Doorlit        Arboreal       Chairs 1       Chairs 2
Original       0.974 (0.061)  0.729 (0.227)  0.805 (0.321)  0.939 (0.084)  -              -
Winlit         0.915 (0.211)  0.998 (0.007)  0.065 (0.201)  -              -              -
Doorlit        0.668 (0.283)  0.076 (0.218)  0.994 (0.015)  -              -              -
Arboreal       0.961 (0.069)  -              -              0.930 (0.083)  -              -
Chairs 1       -              -              -              -              0.978 (0.043)  0.600 (0.097)
Chairs 2       -              -              -              -              0.965 (0.069)  0.582 (0.124)
Table 3.8. This table indicates whether the average return ratio given in Table 3.7 (COV results) is significantly different from the average return ratio given in Table 3.6 (RMS results) for a given data set pairing. A 'Y' indicates a statistically significant difference; an 'N' indicates that there is not enough experimental evidence to reject the hypothesis that the average return ratios are equal. McNemar's test was used at the 5% level of significance. See text for details.

Snapshot set   Current image data set
               Original  Winlit  Doorlit  Arboreal  Chairs 1  Chairs 2
Original          N        Y       Y        N         -         -
Winlit            Y        Y       Y        -         -         -
Doorlit           Y        Y       Y        -         -         -
Arboreal          Y        -       -        N         -         -
Chairs 1          -        -       -        -         N         N
Chairs 2          -        -       -        -         Y         N
Fig. 3.47. Scatterplot of the intensity of each pixel in IC against the intensity of the corresponding pixel in IS. Both images were captured at the same location (x=60cm, y=270cm), but IS was taken from the "Winlit" data set and IC from the "Original" set.
Exploring the Mutual Information Image Similarity Measure

Covariance is only a trustworthy measure of the similarity between IS and IC when there is a linear relationship between the pixel intensities in IS and IC. Such a linear relationship does exist in static conditions [151], but it breaks down when the two images are drawn from different data sets (i.e. in dynamic conditions). However, judging from the relationship shown in Fig. 3.47, IS remains quite predictable given IC: if we are told that a pixel in IC has value 200, then we can predict with high probability that the corresponding pixel in IS has an intensity close to either 60 or 175. This predictability can be measured with mutual image information.

Visual homing is quite similar to the problem of image registration. An image registration algorithm attempts to find the function which best transforms one image of an object or scene into a second image of the same scene or object. Hill et al. [55] give a comprehensive review of image registration algorithms used in medical imaging applications, and demonstrate that image registration solutions are quite similar to many visual homing algorithms. Early registration work tried to find landmarks in the images to be aligned and used the change in pose of these landmarks to infer the overall image transformation. More recent work has attempted to align entire images, eschewing landmark selection and correspondence issues. Similar to our work, these image registration algorithms search for the image transformation which maximises the similarity between one image and a second, transformed image of the same scene. A focus of this work has been the similarity measure used to compare images. The use of mutual information in image registration was first reported by Viola and Wells [157] and apparently independently discovered by Maes et al. [98].
Mutual image information (MI) can be defined as

$$\mathrm{MI}(I_S, I_C) = H(I_S) - H(I_S \mid I_C) \qquad (3.23)$$

where H(IS) is the entropy of IS and H(IS|IC) is the conditional entropy of IS given IC. This definition of mutual information is adapted from the one given in [55]. Entropy and conditional entropy are themselves defined as follows:

$$H(I_S) = -\sum_{a=0}^{G-1} p_S(a)\,\log_2 p_S(a) \qquad (3.24)$$

$$H(I_S \mid I_C) = -\sum_{a=0}^{G-1}\sum_{b=0}^{G-1} p_{SC}(a,b)\,\log_2 p_{S|C}(a \mid b) \qquad (3.25)$$
In Equation 3.24, pS(a) is the probability that a pixel will have intensity a (0 ≤ a < G) in image IS. In this work, pS(a) is calculated from the normalised intensity histogram of IS. Image entropy is highest when all possible pixel values are equally likely (i.e. the pixel intensity histogram is uniform) and lowest (zero) when one pixel value is certain and the others never occur. The joint probability pSC(a, b) in Equation 3.25 is the probability that a given pixel in IS has intensity a and the same pixel in IC has value b; pSC(a, b) is calculated from the normalised joint intensity histogram of IS and IC. Finally, the conditional probability pS|C(a|b) is the probability that a pixel will have intensity a in IS given that the corresponding pixel in IC has intensity b.

It is clear from Equation 3.23 that maximising the mutual information between IS and IC involves minimising the conditional entropy H(IS|IC), since H(IS) is constant while homing. Analysis of Equation 3.25 tells us that H(IS|IC) is minimal (zero) if knowing that a pixel in IC has intensity b allows us to predict with probability 1 the intensity a of the corresponding pixel in IS, for all a and b. Conditional entropy will be much higher if intensity values in IC are poor predictors of the corresponding pixel intensities in IS. Hence, mutual image information is a measure of how predictable IS is given IC.

We use the following equation to compute MI in our experiments. This form is equivalent to Equation 3.23 ([55]) but is slightly less computationally intensive:

$$\mathrm{MI}(I_S, I_C) = \sum_{a=0}^{G-1}\sum_{b=0}^{G-1} p_{SC}(a,b)\,\log_2\frac{p_{SC}(a,b)}{p_S(a)\,p_C(b)} \qquad (3.26)$$
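Equation 3.26 can be computed directly from the normalised joint intensity histogram. The following sketch is our illustration (zero-probability cells are skipped, taking 0·log 0 = 0):

```python
import numpy as np

def mutual_information(i_s, i_c, G=256):
    """MI of Eq. 3.26 from the normalised joint intensity histogram."""
    joint, _, _ = np.histogram2d(i_s.ravel(), i_c.ravel(),
                                 bins=G, range=[[0, G], [0, G]])
    p_sc = joint / joint.sum()                 # joint probability p_SC(a, b)
    p_s = p_sc.sum(axis=1, keepdims=True)      # marginal p_S(a)
    p_c = p_sc.sum(axis=0, keepdims=True)      # marginal p_C(b)
    nz = p_sc > 0                              # 0 log 0 taken as 0
    return float(np.sum(p_sc[nz] * np.log2(p_sc[nz] / (p_s * p_c)[nz])))
```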
We repeated the experiments using MI as the image similarity measure rather than RMS or COV. The results are given in Table 3.9. Table 3.10 makes clear that the average return ratios of COV and MI are statistically different for all but one data set pairing: that in which IS is taken from "Winlit" and all current images are taken from "Original". The comparison between covariance and mutual information is somewhat ambiguous: mutual information does dramatically better than covariance in most cases where illumination changes between IC and IS, but slightly less well in static environments and in environments in which the agent passes near large, monochromatic objects.

We note that RMS (or something very similar) is quite often used to measure the difference between images in other image-based navigation schemes (e.g. the image warping algorithm of Franz et al. [30] and image-based Monte Carlo localisation [101]).
Table 3.9. Average return ratios for homing experiments carried out on MI difference surfaces in static and dynamic environments. The standard deviation of the average return ratio for each data set pairing is given in brackets.

Snapshot set   Current image data set
               Original       Winlit         Doorlit        Arboreal       Chairs 1       Chairs 2
Original       0.905 (0.097)  0.602 (0.238)  0.892 (0.208)  0.865 (0.098)  -              -
Winlit         0.914 (0.115)  0.979 (0.072)  0.307 (0.305)  -              -              -
Doorlit        0.790 (0.119)  0.274 (0.297)  0.987 (0.023)  -              -              -
Arboreal       0.892 (0.097)  -              -              0.811 (0.133)  -              -
Chairs 1       -              -              -              -              0.832 (0.135)  0.557 (0.126)
Chairs 2       -              -              -              -              0.807 (0.148)  0.498 (0.152)
Table 3.10. This table indicates whether the average return ratio given in Table 3.9 (MI results) is significantly different from the average return ratio given in Table 3.7 (COV results) for a given data set pairing. A 'Y' indicates a statistically significant difference; an 'N' indicates that there is not enough experimental evidence to reject the hypothesis that the average return ratios are equal. McNemar's test was used at the 5% level of significance. See text for details.

Snapshot set   Current image data set
               Original  Winlit  Doorlit  Arboreal  Chairs 1  Chairs 2
Original          Y        Y       Y        Y         -         -
Winlit            N        Y       Y        -         -         -
Doorlit           Y        Y       Y        -         -         -
Arboreal          Y        -       -        Y         -         -
Chairs 1          -        -       -        -         Y         Y
Chairs 2          -        -       -        -         Y         Y
As in our work, these algorithms compare a current image with one or more images captured previously, and lighting and landmark locations might well have changed in the interim. We have demonstrated that mutual information is robust to this dynamism, and it could therefore provide a useful image similarity measure for image-based robot navigation in general.
3.4 Learning

In chapter 1 we discussed evidence of the learning capabilities of insects, and the important role of the mushroom body (MB), a prominent region of multimodal integration in the insect brain. In this final section we describe the development of a computational model of learning in the MB, and test the model's performance on non-elemental and crossmodal associative learning tasks. We employ a realistic spiking neuron model with spike-timing-dependent plasticity, and learning performance is investigated in closed-loop conditions. We show that the distinctive neuroarchitecture (divergence onto MB neurons and convergence from MB neurons, with otherwise non-specific connectivity) is sufficient for solving non-elemental and crossmodal learning tasks and thus for modulating underlying reflexes in a context-dependent, heterarchical manner.
The neural architecture for the agent is based on the insect brain [169], in particular on evidence that the MB is involved in modulating more basic, reflexive behaviours ([114], [102]) and thus acts as a neural substrate for the associations underlying context-specific and non-elemental forms of learning. However, the goal of the implemented model design was not to imitate the physiological mechanisms involved in MB-mediated learning as closely as possible, but rather to find an abstract description of the underlying principles, able to reproduce associative learning in closed-loop conditions. Nevertheless, we aim to use realistic models of the biological components, as more realistic models can be quantitatively and qualitatively different from more abstract connectionist approaches. A detailed discussion of insect brain architecture is provided in chapter 1. The main idea is that of parallel pathways, with sensory inputs forming direct reflex loops but also feeding into secondary routes that are used to place information from various sensory modalities, or from other domain-specific sensorimotor loops, into context. The system can thus improve on reflexive behaviours by learning to adapt to and anticipate reflex-causing stimuli. This adaptation process is assumed to occur in the MB, which forms such a parallel pathway for sensory inputs in the insect brain.

3.4.1 Neural Model and STDP
We chose the neuron model proposed by Izhikevich [67] since it exhibits biologically plausible dynamics, similar to Hodgkin-Huxley-type neurons, but is computationally less expensive and thus suitable for large-scale simulation:

$$C\frac{dv}{dt} = k(v - v_r)(v - v_t) - u + I + \xi, \quad \xi \sim N(0, \sigma) \qquad (3.27)$$

$$\frac{du}{dt} = a\bigl(b(v - v_r) - u\bigr) \qquad (3.28)$$

where v is the membrane potential and u is the recovery current. a = 0.3, b = −0.2, c = −65, d = 8, and k = 2 are model parameters; C = 100 pF is the capacitance, v_r = −60mV is the resting potential, and v_t = −40mV is the instantaneous threshold potential. ξ is a Gaussian noise term with standard deviation σ. The variables v and u are reset if v ≥ +35mV:

$$v \leftarrow c, \qquad u \leftarrow u + d \qquad (3.29)$$

Synaptic inputs are modelled by

$$I(t + \Delta t) = g\,S(t)\,\bigl(v_{rev} - v(t)\bigr) \qquad (3.30)$$

where v_rev is the reversal potential of the synapse (v_rev = 0mV for excitatory and v_rev = −90mV for inhibitory synapses) and g is the maximal synaptic conductance. S(t) is the amount of neurotransmitter active at the synapse at time t and is updated as follows:

$$S(t + \Delta t) = \begin{cases} S(t)\,e^{-\Delta t/\tau_{syn}} + \delta, & \text{if a presynaptic spike occurred} \\ S(t)\,e^{-\Delta t/\tau_{syn}}, & \text{otherwise} \end{cases} \qquad (3.31)$$
where δ = 0.5 is the amount of neurotransmitter released when a presynaptic spike occurs and τ_syn is the synaptic timescale. The simulation timestep Δt is set to 0.25ms.

Synapses are modified using Spike-Timing-Dependent Plasticity (STDP), which has been observed in biological neural systems (e.g. [19]). In STDP, synaptic change depends on the relative timing of pre- and post-synaptic action potentials. Synaptic conductances are adapted as follows:

$$\Delta g = \begin{cases} A_+\,e^{(t_{pre}-t_{post})/\tau_+} - \dfrac{g_{max}}{r}, & \text{if } t_{pre} - t_{post} < 0 \\[4pt] A_-\,e^{-(t_{pre}-t_{post})/\tau_-}, & \text{if } t_{pre} - t_{post} \geq 0 \end{cases} \qquad (3.32)$$

where t_pre and t_post are the spiking times of the pre- and postsynaptic neuron respectively, and A_+, A_−, τ_+ and τ_− are parameters. We modified the STDP rule proposed by [137] by adding the term −g_max/r when t_pre − t_post < 0, where r = 10³ is a parameter: if postsynaptic spikes are not matched with presynaptic ones, the synaptic conductance between them is decreased by this term. If this modification rule pushes a conductance g out of the allowed range 0 ≤ g ≤ g_max, g is set to the appropriate limiting value. A 'forgetting' factor is introduced in the form of a slow decay of g:

$$g(t + \Delta t) = g(t)\,e^{-\Delta t/\tau_{decay}} \qquad (3.33)$$

where τ_decay = 10⁵.
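The full update cycle of Equations 3.27-3.33 can be integrated with a forward Euler scheme. The sketch below is our own rendering, not the project's implementation; in particular, the noise standard deviation σ, for which the text gives no value, is set arbitrarily here.

```python
import numpy as np

# Model constants from the text
C, k, vr, vt = 100.0, 2.0, -60.0, -40.0        # pF, -, mV, mV
a, b, c, d = 0.3, -0.2, -65.0, 8.0
DT = 0.25                                      # simulation timestep (ms)

def step_neuron(v, u, I, sigma=1.0):
    """One Euler step of Eqs. 3.27-3.28, with the reset of Eq. 3.29."""
    dv = (k * (v - vr) * (v - vt) - u + I + np.random.normal(0.0, sigma)) / C
    du = a * (b * (v - vr) - u)
    v, u = v + DT * dv, u + DT * du
    if v >= 35.0:                              # spike: reset v and bump u
        return c, u + d, True
    return v, u, False

def step_transmitter(S, spiked, tau_syn, delta=0.5):
    """Decay and release of Eq. 3.31."""
    S = S * np.exp(-DT / tau_syn)
    return S + delta if spiked else S

def synaptic_current(g, S, v, v_rev=0.0):
    """Synaptic input of Eq. 3.30 (v_rev = 0 mV excitatory, -90 mV inhibitory)."""
    return g * S * (v_rev - v)

def stdp(t_pre, t_post, A_plus, A_minus, tau_plus, tau_minus, g_max, r=1e3):
    """Conductance change of Eq. 3.32 (sign conventions as in the text)."""
    if t_pre - t_post < 0:
        return A_plus * np.exp((t_pre - t_post) / tau_plus) - g_max / r
    return A_minus * np.exp(-(t_pre - t_post) / tau_minus)

def decay(g, tau_decay=1e5):
    """Slow 'forgetting' decay of Eq. 3.33; clip g to [0, g_max] after updates."""
    return g * np.exp(-DT / tau_decay)
```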
3.4.2 Non-elemental Associations
Thus far, computational models of MB function have been restricted to the classification of sensory inputs in open-loop conditions ([113], [112]). Here we develop a MB model that modulates reflexive sensorimotor loops through non-elemental associative learning [37], that is, forms of learning that go beyond simple associations between two stimuli (classical conditioning) or between a stimulus and a response (instrumental conditioning). In non-elemental learning tasks the stimuli are ambiguously associated with reward or punishment: each stimulus is followed as often by appetitive (+) as by aversive (–) reinforcement, so that learning requires the context of the stimulus to be taken into account. In negative patterning, the agent has to learn to approach (appetitive action) the single stimuli A and B but retreat (aversive action) from the compound AB. In biconditional discrimination, the agent has to learn to respond appetitively to the compounds AB and CD but aversively to the compounds AC and BD. In feature neutral discrimination, the agent has to learn to respond appetitively to B and AC but aversively to C and the compound AB. These stimuli-reward combinations are summarised in Table 3.11. In our simulation experiments, we take 'reinforcement' and 'punishment' to be sensory cues causing different reflex responses (appetitive or aversive); in successful learning, these responses become associated with the appropriate conditioned stimuli.

We propose a minimalist architecture able to modulate reflex behaviours in closed-loop conditions (where the system's output influences the system's inputs) for non-elemental learning tasks. In this section, we show that the general neuroarchitecture of the MB (fan-out and fan-in) is sufficient for explaining the above forms of non-elemental learning.
Table 3.11. Stimuli-reward combinations in the non-elemental learning tasks

Negative patterning:             A+  B+  AB–
Biconditional discrimination:    AB+  CD+  AC–  BD–
Feature neutral discrimination:  B+  AC+  C–  AB–
Fig. 3.48. The wallpapers used for the non-elemental learning tasks: (a) negative patterning, (b) biconditional discrimination, and (c) feature neutral discrimination. The agent’s field of view is represented by the 4-by-4 grid. As the field of view is gradually moved to the left, the visual patterns predict what the agent will experience (+ or –) when it reaches the left edge. Refer to text for further explanations.
Our experimental set-up was inspired by conditioning paradigms for visual pattern avoidance in flies, in which the animal, held in a flight simulator, learns an appropriate yaw response to a particular visual pattern that is associated with an unpleasant heat beam. In our simulation, the agent has a limited field of view (45 degrees) on a 'wallpaper' (of 360 degrees total width) that displays different patterns. In the absence of any action by the agent, the field of view is moved gradually to the left, at 1.5 degrees per millisecond. If it reaches the left edge the agent is "punished": this generates a reflex action, which moves the field of view back 180 degrees to the right. Before it reaches the edge it will encounter a visual pattern, which can thus be used to predict that the edge will be encountered. The aim is to learn to associate the reflex action with the visual pattern and execute it before encountering the edge, thus avoiding punishment. This anticipatory or conditioned reflex, if executed, moves the field of view 21 degrees to the right.

In the non-elemental learning tasks there are two reflexes, X0-V0 and X1-V1 (these could be called 'appetitive' and 'aversive', but in fact they have the same effective result of turning the agent back to 180 degrees). There are two corresponding modes for the simulator: when the field of view reaches the left edge, the agent experiences either X0 or X1, and will execute the corresponding reflex, V0 or V1. Which experience will occur is predicted by the visual pattern, according to the schemes illustrated in Fig. 3.48; for example, in negative patterning, the patterns A and B predict X0 (+), and the pattern AB predicts X1 (–). The agent must learn to execute the correct reflex (V0 or V1) when it sees a particular visual pattern, which will move it away from the edge. If it executes the wrong reflex, it is instead moved further towards the edge (i.e. 21 degrees to the left).
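To make the closed loop concrete, here is a toy version of the simulator; it is entirely our own sketch, where `policy` stands for the learned MB response and `correct` for the reflex the current wallpaper predicts.

```python
DT = 0.25   # ms

def present(policy, correct, duration_ms=500.0):
    """One wallpaper presentation; returns how often the edge reflex fired."""
    angle, edge_hits, t = 180.0, 0, 0.0
    while t < duration_ms:
        t += DT
        angle -= 1.5 * DT                 # view drifts left at 1.5 deg/ms
        if angle <= 0.0:                  # edge reached: punishment and
            edge_hits += 1                # unconditioned 180-degree reflex
            angle += 180.0
            continue
        action = policy(angle)            # conditioned response (V0, V1, None)
        if action is not None:
            angle += 21.0 if action == correct else -21.0
    return edge_hits
```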
Fig. 3.49. The implemented MB network receives sensory cues from the visual field via projection neurons (PN), which make direct excitatory connections, and indirect inhibitory connections (via the lateral horn interneurons (LHI)) to the Kenyon cells (KC). The MB output converges on a small number of extrinsic neurons (EN), which are also excited by the underlying direct reflex pathways, and can activate these pathways. Learning occurs between the KC and EN, allowing anticipation of the reflex responses due to associations with particular visual patterns.
The field of view contains 45-by-45 pixels, which are mapped onto a 4-by-4 set of sensory neurons (see the network description below). White areas on the wallpaper excite these neurons; thus stimulus A will excite the first row of neurons, B the second row, and so on. A typical simulation run lasts 50 seconds, during which the wallpapers are switched every 0.5 seconds. The simulation timestep is 0.25ms.

The mushroom bodies in insects have a characteristic neuroarchitecture: a tightly-packed, parallel organisation of thousands of neurons, the Kenyon cells. The mushroom bodies are further subdivided into several distinct regions: the calyces (input), the pedunculus, and the lobes (output). The dendrites (inputs) of the Kenyon cells branch extensively in the calyces, and the axons (outputs) of the Kenyon cells run through the pedunculus before extending to form the lobes. Synaptic interconnections between Kenyon cell axons have been reported [47]. Note that there is considerable divergence (1:50) from a small number of sensory projection neurons (PN) onto the large number of Kenyon cells (KC), and considerable convergence (100:1) from the Kenyon cells onto extrinsic output neurons (EN); these ratios are estimates based on data from [94]. KC receive direct excitatory input from the PN, but also indirect inhibitory input from the same neurons via lateral horn interneurons (LHI), arriving shortly after the excitation. These connections are illustrated in Fig. 3.49. It is hypothesised that the MB help disentangle spatio-temporal input patterns by operating as coincidence detectors selective to particular correlations in the input spike trains [115]. The highly divergent mapping of sensory neurons onto MB neurons can serve the recognition of unique relationships in primary sensory channels.
In our model, the nonlinear transformation separating the activity patterns in the PN layer into sparse activity patterns in the KC layer is implemented by a randomly determined connectivity matrix between these layers. The EN linearly classify the KC activity patterns; plasticity of the KC-EN synapses is achieved with a spike-timing-dependent plasticity rule. The EN output mediates conditioned responses by activating the appropriate reflex responses. The inhibition from the LHI, quickly following the excitation from the PN, limits the integration time of the KC to short time windows, making them highly sensitive to precise temporal correlations.

3.4.2.1 Network Geometry

The network geometry shown in Fig. 3.49 retains dimensions proportional to the MB system in insects but is smaller in size. A strategy based entirely on random connectivity and self-organisation through local learning and competition is explored. Each neuron pair X-Y is connected with probability p_X,Y. The system implements non-specific connectivity, with the exception of full inhibitory connectivity between the EN (c.f. [37]). We describe the various network layers, their parameters, and their roles below. Learning occurs only through modulation of the KC-EN connections. We report in Sect. 3.4.2.2 the effects on learning performance of changing the connectivity between the LHI and KC layers and of varying the size of the KC layer.

PN layer. This layer receives the sensory input. It consists of 16 neurons (the agent's 45-by-45-pixel FoV is divided into a 4-by-4 grid). The input to a single PN neuron is calculated as

$$I_{PN} = \frac{1}{255}\cdot\frac{\text{sum of pixel values}}{\text{number of pixels}} \qquad (3.34)$$

and the neurotransmitter released at each timestep as

$$S(t + \Delta t) = S(t) + I_{PN}\,\delta \qquad (3.35)$$

Black areas have a pixel value of 0 whereas white areas have a pixel value of 255; thus only white areas in the FoV excite the network.
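Equations 3.34-3.35 amount to block-averaging the field of view. A sketch (ours; since 45 is not divisible by 4, the uneven block split is our assumption):

```python
import numpy as np

def pn_input(fov):
    """Normalised mean pixel value of each 4-by-4 block of the FoV (Eq. 3.34)."""
    bands = np.array_split(fov.astype(float), 4, axis=0)
    return np.array([[block.mean() / 255.0
                      for block in np.array_split(band, 4, axis=1)]
                     for band in bands])

def release(S, fov, delta=0.5):
    """Per-PN transmitter release of Eq. 3.35 (S is a 4-by-4 array)."""
    return S + pn_input(fov) * delta
```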
KC layer. The KC layer consists of $\binom{16}{2} = 120$ neurons. Each KC acts as a coincidence detector and receives input from a small number of PNs (p_PN,KC = 0.1). The synaptic timescale τ_PN,KC is set to 2ms; this parameter needs to be small in order to make the KC neurons sensitive to the relative timing of incoming input from the PN layer, which is what allows them to act as coincidence detectors. The synaptic strengths of the PN-KC synapses needed to be carefully adjusted (g_PN,KC are initialised uniformly at random in [20,30]); in addition, a uniformly distributed jitter is added to the synaptic strengths. We implemented excitatory and inhibitory KC-KC connections (p_KC,KC = 0.1, τ_KC,KC = 5ms) with equal probability (g_KC,KC are initialised uniformly at random in [5,10]).

LHI layer. Feed-forward inhibition by lateral horn interneurons (LHI) dampens KC activity in the MB, limiting the integration time of the KC neurons to short time windows and making them highly sensitive to precise temporal correlations. This was implemented through 16 LHIs receiving their inputs from the PN layer and inhibiting activity in the KC layer (p_PN,LHI = 0.2, τ_PN,LHI = 5ms, g_PN,LHI initialised uniformly at random in [20,30]; p_LHI,KC = 0.1, τ_LHI,KC = 5ms, g_LHI,KC initialised uniformly at random in [20,30]).

EN layer. Every KC-EN pair is connected (p_KC,EN = 1, τ_KC,EN = 5ms). However, the synaptic conductance g_KC,EN of every synapse is initialised to 0, and is subsequently modified by STDP with the parameters listed below. The ENs also receive excitatory input from the underlying reflex pathways; learning thus reflects the coincidence of activity in these pathways with particular patterns of KC activity.

STDP parameter   Value
A−               1
A+               2
τ−               5ms
τ+               50ms
g_max            30
r                10³
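One way to realise the random connectivity just described (our sketch; a zero entry stands for an absent synapse):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_synapses(n_pre, n_post, p, g_lo, g_hi):
    """Connect each pair with probability p; draw g uniformly from [g_lo, g_hi]."""
    mask = rng.random((n_pre, n_post)) < p
    return np.where(mask, rng.uniform(g_lo, g_hi, (n_pre, n_post)), 0.0)

g_pn_kc  = random_synapses(16, 120, p=0.1, g_lo=20, g_hi=30)   # PN -> KC
g_pn_lhi = random_synapses(16, 16,  p=0.2, g_lo=20, g_hi=30)   # PN -> LHI
g_lhi_kc = random_synapses(16, 120, p=0.1, g_lo=20, g_hi=30)   # LHI -> KC
g_kc_en  = np.zeros((120, 2))                                  # learned by STDP
```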
3.4.2.2 Non-elemental Discrimination Performance

Fig. 3.50. (a) Agent behaviour in response to the first (0ms), second (2000ms) and final (18000ms) presentations of the same wallpaper during a 20s simulation run. At the start the agent reaches the edge position (lower dotted line) and performs a reflex turn. In the next presentation it responds to the visual stimulus (between the upper dotted lines) but the response is sometimes incorrect. By the final presentation it reliably responds to the visual stimulus and thus successfully avoids the edge. (b) Boxplots of performance for the different learning tasks: (1) negative patterning, (2) biconditional discrimination, and (3) feature neutral discrimination. The agent encounters a median of 5 punishments before successfully using the visual patterns to anticipate and avoid them. The simulation runs lasted 50s.

The system was able to learn each of the non-elemental associations shown in Fig. 3.48. As the system learns to respond to the visual patterns, the reflex responses to encountering the edge are executed less often and the MB drives the agent's behaviour (as shown in Fig. 3.50(a)). The performance index used in this paper is simply the number of times the reflexes are executed.
Fig. 3.51. Boxplots of (a) performance with varying probability of connectivity pLHI,KC between the LHI and KC layers, and (b) performance with varying KC layer size (10, 40, 70, 100, 130, and 160 neurons) for a probability of connectivity pLHI,KC = 0.0.
As shown in Fig. 3.50(a), the naive system will respond with one reflex action during one presentation of one wallpaper. In the biconditional discrimination set-up, for example, there are Nw = 4 wallpapers which are interchanged every tc = 0.5s during tT = 50s; thus tT/(Nw × tc) = 25 activations would mean that a run was unsuccessful. Fig. 3.50(b) shows boxplots of the number of times the reflex pathways were active over 30 simulation runs (each lasting 50 seconds) for each of the three conditioning paradigms. All simulation runs for negative patterning were successful, and only one simulation run each for biconditional and feature neutral discrimination was unsuccessful. The agent learnt after a median of 5 activations of the reflex pathways which reflex to use, in response to which visual patterns, to successfully avoid the edge.

Fig. 3.51(a) shows the learning performance with varying probability of connectivity pLHI,KC. As the connectivity between the LHI and KC neurons increases (and with it the inhibition from this layer), the learning performance becomes maximal at pLHI,KC = 0.1; as the inhibition increases further, the performance drops off. With increasing inhibition by the LHI neurons, the activity in the KC layer becomes sparser. Fig. 3.51(b) shows the network performance with varying KC layer size for a probability of connectivity pLHI,KC = 0.0. The performance tends to improve with increasing KC layer size.

3.4.3 Associating Auditory and Visual Cues
The MB model is next tested as part of an insect-brain-inspired architecture in a closed-loop behavioural task, replicating in simulation an experiment carried out on bushcrickets. We show that the system can successfully associate visual with auditory cues, so as to maintain a steady heading towards an intermittent sound source.

Male bushcrickets are able to maintain a straight course to a female by coupling visual cues to an acoustically detected direction [48]. Stabilising effects of visual information on course maintenance are found in other insects (c.f. [5]), but in this case it was also shown that optical cues could stand in for (temporarily absent) auditory signals. In particular, the animal could quickly learn to walk at an arbitrary angle to a visual landmark corresponding to the sound direction.
Fig. 3.52. The implemented MB network receives sensory cues from the visual field via projection neurons (PN), which make direct excitatory connections to the Kenyon cells (KC). The MB output converges on a small number of extrinsic neurons (EN), which are also excited by the underlying direct reflex pathways, and can activate these pathways. Learning occurs between the KC and EN.
In the absence of sound it would follow the displacement of the landmark with an appropriate change in walking direction. In a comparable task, the MB of the cockroach has been shown to play a role in place memory relating distant visual cues to an invisible target [108]. Here we show how the neural architecture of the MB can account for such capabilities, using a biologically plausible neural representation and learning rule.

3.4.3.1 Neural Architecture

In the current experiment, the reflex pathway represents a response to sound (see Fig. 3.52). Each spike of the output (left or right) neurons of the phonotaxis (sound localising) circuit (based on [161]) causes the agent to turn by 1 degree in the direction of the sound source and also excites the extrinsic neurons (EN). The visual position of the landmark is mapped onto projection neurons (PN) that activate the Kenyon cells (KC) forming the mushroom body; these converge on the extrinsic neurons. During conditioning, the sound is on and the agent moves towards it. After conditioning, the agent should have associated the required movements with a particular landmark direction, and thus be able to control its course using only visual cues.

PN layer. This layer consists of 72 neurons, giving a visual resolution of 360/72 = 5 degrees. We assume preprocessing of the visual information and non-overlapping receptive fields. Each PN encodes a particular relative angle towards the landmark. The neurotransmitter release (at the PN in whose receptive field the landmark currently lies) is calculated as

$$S(t + \Delta t) = S(t) + \delta$$
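The angular mapping of this PN layer reduces to integer division. A minimal sketch (the function names are ours; δ = 0.5 as in Sect. 3.4.1):

```python
import numpy as np

def active_pn(landmark_angle):
    """Index (0-71) of the PN whose 5-degree field contains the landmark."""
    return int(landmark_angle % 360) // 5

def update_release(S, landmark_angle, delta=0.5):
    """Transmitter release at the active PN (S is a length-72 array)."""
    S = S.copy()
    S[active_pn(landmark_angle)] += delta
    return S
```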
Fig. 3.53. Schematic drawing of neural activity. As the agent zig-zags towards the sound source (a), turning is controlled by the output (left, right) of the phonotaxis network (b) activating the extrinsic neurons (EN). During these movements, the visual stimulus activates several different projection neurons (PN) which activate their respective Kenyon cells (KC). The system learns to associate KC and EN firing, so that after conditioning (c) the visual stimulus alone is able to control turning.
KC layer. The KC layer consists of 72 neurons. The topographical organisation of the PN layer is maintained, i.e. each KC receives input from exactly one PN. The synaptic strengths of the PN-KC synapses (g_PN,KC) were set at random in the interval [20,30].

EN layer. The EN layer contains 2 neurons, and each EN is connected to all KC, i.e. every KC-EN pair is connected (τ_KC,EN = 5ms). Learning occurs only through modulation of the KC-EN connections: the synaptic conductance g_KC,EN of every synapse is initialised to 0 and is subsequently modified by STDP with the parameters below. The EN neurons also receive excitatory input from the underlying reflex pathways; learning thus reflects the coincidence of activity in these pathways with the activity in the KC layer.

STDP parameter   Value
A−               −20
A+               20
τ−               5ms
τ+               10ms
g_max            50
r                10³

3.4.3.2 Auditory-Visual Association Performance

The model was tested in two scenarios. In one, the agent walks on a treadmill so that it can only change its orientation; this is directly comparable to the original behavioural experiments on the bushcricket. In the other, the agent moves in an arena towards the sound source, which is more like the natural interaction of the insect with the environmental cues. In each case the conditioning trials (with constant sound) lasted a total of 120s, with the agent starting at a random heading. The conditioning trials were repeated and reset as follows: in the 'treadmill' scenario the heading was randomly reset every 10s, whereas in the 'arena' scenario, if the agent arrived within a small radius of the sound source it was returned to its starting position (with a random heading) and repeated its approach.
Fig. 3.54. Mean heading angle plotted versus landmark displacement. (a) The agent can turn but does not move forward, simulating an insect on a treadmill (conditioning angles θ = 45, 90 and 135 degrees). (b) The agent can move through the environment (conditioning angle θ = 45 degrees). In each case the agent maintains the appropriate relative heading as the landmark is displaced.
In the 'arena' scenario, coupling visual cues to phonotactic behaviour has a stabilising effect on course maintenance. As a measure we used the vector length (VL), defined as the quotient of the distance from the start position to the position of the source and the actual path length. The mean VL was 0.59 (s.d. = 0.1) before conditioning and 0.818 (s.d. = 0.182) after conditioning. More importantly, the landmark could now be used to stand in for the sound. This was tested by looking at the mean angle at which the agent moved, with the sound off, when the landmark was displaced. As shown in Fig. 3.54, the agent's direction is appropriate to the conditioning angle and follows the landmark displacement consistently, both for the 'treadmill' scenario (Fig. 3.54(a), closely comparable to the biological data seen in Fig. 9 of [48]) and for the more naturalistic 'arena' scenario (Fig. 3.54(b)). The system has associated each possible visual position of the landmark with the movement required to re-orient so that the landmark falls into the visual position it occupied during walking towards the sound.
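The VL measure is simple to compute from a recorded path; a sketch (ours):

```python
import numpy as np

def vector_length(path, source):
    """Start-to-source distance divided by the actual path length travelled."""
    path = np.asarray(path, dtype=float)
    direct = np.linalg.norm(np.asarray(source, dtype=float) - path[0])
    travelled = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    return direct / travelled
```

A perfectly straight, successful approach gives VL = 1; meandering paths give values closer to 0.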
3.5 Conclusion

Animal behaviour is a continuous closed loop, with sensory events transformed by the agent into motor actions, and these actions transformed by the environment into new sensory events. Neural learning mechanisms should thus be evaluated in a closed-loop context if they are to be considered biologically relevant. A fundamental role of learning for behaving animals is associating the reflex response to one cue with another cue that can refine, predict or substitute for the original cue. We have replicated this capability in several different scenarios, using a plausible model of insect brain circuitry. We are now working towards evaluating this model by incorporating the more realistic models of the sensory inputs (e.g. audition) and motor outputs (e.g. six-legged walking) that were described earlier in this chapter. We aim to demonstrate the same learning mechanisms
operating on real robots, and to assess their suitability for efficient hardware implementation, e.g. using FPGAs. The same neural architecture will also be developed for application to place memory (associating landmarks with a desired location), a behaviour known to be dependent on the mushroom body in insects [108].
References 1. Amrein, H.: Pheromone perception and behavior in Drosophila. Current Opinion in Neurobiology 14, 435–442 (2004) 2. Arbas, E., Willis, M., Kanzaki, R.: Organization of goal-oriented locomotion: pheromonemodulated flight behavior of moths. In: Beer, R., Ritzmann, R., McKenna, I. (eds.) Biological neural networks in invertebrate neuroethology and robotics. Academic Press, Cambridge (1993) 3. Benhamou, S.: Path integration by swimming rats. Animal Behavior 54, 321–327 (1997) 4. Bisch-Knaden, S., Wehner, R.: Local vectors in desert ants: context-dependent landmark learning during outbound and homebound runs. Journal of Comparative Physiology 189, 181–187 (2003) 5. B¨ohm, H., Schildberger, K., Huber, F.: Visual and acoustic course control in the cricket Gryllus-bimaculatus. Journal of Experimental Biology 159, 235–248 (1991) 6. Borst, A., Haag, J.: Neural networks in the cockpit of the fly. Journal of Comparative Physiology A 188, 419–437 (2002) 7. Burdohan, J., Comer, C.: Cellular organisation of an antennal mechanosensory pathway in the cockroach, Periplaneta americana. Journal of Neuroscience 16, 5830–5843 (1996) 8. Burgess, N., Donnett, J., O’Keefe, J.: Using a mobile robot to test a model of the rat hippocampus. Connection Science 10, 291–300 (1998) 9. Bush, S., Schul, J.: Pulse-rate recognition in an insect: evidence of a role for oscillatory neurons. Journal of Comparative Physiology 192, 113–121 (2006) 10. Camhi, J., Johnson, E.: High-frequency steering maneuvres mediated by tactile cues: antennal wall-following in the cockroach. Journal of Experimental Biology 202, 631–643 (1999) 11. Cartwright, B., Collett, T.: Landmark learning in bees. Journal of Comparative Physiology A 151, 521–543 (1983) 12. Chapman, T.: Morphological and neural modelling of the orthopteran escape response. Ph.D. thesis, University of Stirling (2001) 13. Chapman, T., Webb, B.: A model of antennal wall-following and escape in the cockroach. Journal of Comparative Physiology A 192, 949–969 (2006) 14. Collett, M., Collett, T., Srinivasan, M.: Insect navigation: Measuring travel distance across ground and through air. Current Biology 16, R887–R890 (2006) 15. Collett, T., Collett, M.: Path integration in insects. Current Opinion in Neurobiology 10, 757–762 (2000) 16. Comer, C., Robertson, R.: Identified nerve cells and insect behavior. Progress in Neurobiology 63, 409–439 (2001) 17. Cruse, H., Kindermann, T., Schumm, M., Dean, J., Schmitz, J.: Walknet-a biologically inspired network to control six-legged walking. Neural networks 11, 1435–1447 (1998) 18. Cruse, H., Schmitz, J., Braun, U., Schweins, A.: Control of body height in a stick insect walking on a treadwheel. The Journal of Experimental Biology 181, 141–155 (1993) 19. Dan, Y., Poo, M.: Spike timing dependent plasticity of neural circuits. Neuron 44, 23–30 (2004) 20. Duerr, V., Krause, A., Schmitz, J., Cruse, H.: Neuroethological concepts and their transfer to walking machines. International Journal of Robotics Research 22, 151–167 (2003)
21. D¨urr, V., Ebeling, W.: The behavioural transition from straight to curve walking: kinetics of leg movement parameters and the initiation of turning. The Journal of Experimental Biology 208, 2237–2252 (2005) 22. Egelhaaf, M., Borst, A.: A look into the cockpit of the fly: visual orientation, algorithms, and identified neurons. Journal of Neuroscience 13(11), 4563–4574 (1993) 23. Egorov, A., Hamam, B., Fransen, E., Hasselmo, M., Alonso, A.: Graded persistent activity in entorhinal cortex neurons. Nature 14, 133–134 (2002) 24. Elsner, N.: The search for neural centers of cricket and grasshopper song. In: Neural Basis of Behavioural Adaptations, Fortschritte der Zoologie, vol. 39, pp. 167–193. Gustav Fischer Verlag, Stuttgart (1994) 25. Erber, J., Kierzek, S., Sander, E., Grandy, K.: Tactile learning in the honeybee. Journal of Comparative Physiology A 183, 737–744 (1998) 26. Esch, H., Zhang, S., Srinivasan, M., Tautz, J.: Honeybee dances communicate distances measured by optic flow. Nature 411, 581–583 (2001) 27. Ferree, T., Lockery, S.: Computational rules for chemotaxis in the nematode c. elegans. Journal of Computational Neuroscience 6(3), 263–277 (1999) 28. Franceschini, N., Riehle, A., Nestour, A.L.: Directionally selective motion detection by insect neurons. In: Stavenga, D., Hardie, R. (eds.) Facets of Vision, pp. 360–390. Springer, Berlin (1989) 29. Franz, M., Mallot, H.: Biomimetic robot navigation. Robotics and Autonomous Systems 30, 133–153 (2000) 30. Franz, M., Schoelkopf, B., Mallot, H., Buelthoff, H.: Where did i take that snapshot? scenebased homing by image matching. Biological Cybernetics 79, 191–202 (1998) 31. Frye, M., Dickinson, M.: Closing the loop between neurobiology and flight behavior in Drosophila. Current Opinion in Neurobiology 14, 729–736 (2004) 32. Frye, M., Dickinson, M.: Motor output reflects the linear superposition of visual and olfactory inputs in Drosophila. Journal of Experimental Biology 207, 123–131 (2004) 33. Frye, M., Tarsitano, M., Dickinson, M.: Odor localization requires visual feedback during free flight in Drosophila melanogaster. Journal of Experimental Biology 206, 843–855 (2003) 34. Fullmer, B., Miikkulainen, R.: Using marker-based genetic encoding of neural networks to evolve finite-state behaviour. In: Toward a Practice of Autonomous Systems. Proceedings of the First European Conference on Artificial Life. MIT Press, Cambridge (1992) 35. Galizia, C.G., Menzel, R.: The role of glomeruli in the neural representation of odours: results from optical recording studies. Journal of Insect Physiology 47, 115–130 (2001) 36. Gao, Q., Yuan, B., Chess, A.: Convergent projections of Drosophila olfactory neurons to specific glomeruli in the antennal lobe. Nature Neuroscience 3(8), 780–785 (2000) 37. Giurfa, M.: Cognitive neuroethology: dissecting non-elemental learning in a honeybee brain. Current Opinion in Neuroethology 13, 726–735 (2003) 38. Giurfa, M., Menzel, R.: Insect visual perception: complex abilities of simple nervous systems. Current Opinion in Neurobiology 7, 505–513 (1997) 39. Goepfert, M., Robert, D.: The mechanical basis of Drosophila audition. Journal of Experimental Biology 205, 1199–1208 (2002) 40. Gollisch, T., Schutze, H., Benda, J., Herz, A.: Energy integration describes sound-intensity coding in an insect auditory system. Journal of Neuroscience 22(23), 10434–10448 (2002) 41. Haferlach, T., Wessnitzer, J., Mangan, M., Webb, B.: Evolving a neural model of insect path integration. Adaptive Behavior 15(3), 273–287 (2007) 42. 
Hartmann, G., Wehner, R.: The ant’s path integration system: a neural architecture. Biological Cybernetics 73, 483–497 (1995)
43. Hassenstein, B., Reichardt, W.: Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Ruesselskaefers Chlorophanus. Zeitschrift der Naturforschung 11b, 513–524 (1956) 44. Hayes, A., Martinoli, A., Goodman, R.: Distributed odor source localization. IEEE Sensors Journal 2(3), 260–271 (2002) 45. Hedwig, B., Poulet, J.F.A.: Mechanisms underlying phonotactic steering in the cricket Gryllus bima culatus revealed with a fast trackball system. J. Exp. Biol. 208(5), 915–927 (2005) 46. Heinze, S., Homberg, U.: Maplike representation of celestial e-vector orientations in the brain of an insect. Science 315, 995–997 (2007) 47. Heisenberg, M.: What do the mushroom bodies do for the insect brain? an introduction. Learning and Memory 5, 1–10 (1998) 48. von Helversen, D., Wendler, G.: Coupling of visual to auditory cues during phonotactic approach in the phaneropterine bushcricket Poecilimon affinis. Journal of Comparative Physiology A 186, 729–736 (2000) 49. Hengstenberg, R.: Gaze control in the bowfly Calliphora: a multisensory, two-stage integration process. The Neurosciences 3, 19–29 (1991) 50. Hennig, R., Franz, A., Stumpner, A.: Processing of auditory information in insects. Microscopy Research and Technique 63(6), 351–374 (2004) 51. Higgins, C.: Nondirectional motion underlie insect behavioral dependence on image speed. Biological Cybernetics 91, 326–332 (2004) 52. Higgins, C., Douglass, J., Strausfeld, N.: The computational basis of an identified neuronal circuit for elementary motion detection in dipterous insects. Visual Neuroscience 21, 567– 586 (2004) 53. Higgins, C., Pant, V.: An elaborated model of fly small-target tracking. Biological Cybernetics 91, 417–428 (2004) 54. Hildebrand, J., Shepherd, G.: Mechanisms of olfactory discrimination: converging evidence for common principles across phyla. Annual Review of Neuroscience 20, 595–631 (1997) 55. Hill, D.L.G., Batchelor, P.G., Holden, M., Hawkes, D.J.: Medical image registration. Physics in Medicine and Biology 46, 1–45 (2001) 56. Homberg, U.: In the search of the sky compass in the insect brain. Naturwissenschaften 91, 199–208 (2004) 57. Homberg, U., Christensen, T., Hildebrand, J.: Structure and function of the deutocerebrum in insects. Annual Review of Entomology 34, 477–501 (1989) 58. Honegger, H.W.: A preliminary note on a new optomotor response in crickets: antennal tracking of moving targets. Journal of Comparative Physiology A 142(3), 419–421 (1981) 59. Horstmann, W., Egelhaaf, M., Warzecha, A.K.: Synaptic interactions increase optic flow specificity. European Journal of Neuroscience 12, 2157–2165 (2000) 60. Hoy, R., Nolen, T., Brodfuehrer, P.: The neuroethology of acoustic startle and escape in flying insects. Journal of Experimental Biology 146, 287–306 (1989) 61. Hoy, R., Robert, D.: Tympanal hearing in insects. Annual Review of Entomology 41, 433– 450 (1996) 62. Huerta, R., Nowotny, T., Garcia-Sanchez, M., Abarbanel, H., Rabinovich, M.: Learning classification in the olfactory system of insects. Neural Computation 16, 1601–1640 (2004) 63. Imaizumi, K., Pollack, G.: Central projections of auditory receptor neurons of crickets. Journal of Comparative Neurology 493, 439–447 (2005) 64. Ishida, H., Nakamoto, T., Moriizumi, T., Kikas, T., Janata, J.: Plume-tracking robots: a new application of chemical sensors. Biological Bulletin 200, 222–226 (2001) 65. 
Iwama, A., Shibuya, T.: Physiology and morphology of olfactory neurons associating with the protocerebral lobe of the honeybee brain. Journal of Insect Physiology 44, 1191–1204 (1998)
66. Izhikevich, E.: Resonate-and-fire neurons. Neural Networks 14, 883–894 (2001) 67. Izhikevich, E.: Dynamical systems in neuroscience: the geometry of excitability and bursting. The MIT Press, Cambridge (in press, 2007) 68. Jacobs, G., Miller, J., Murphey, R.: Integrative mechanisms controlling directional sensitivity of an identified sensory interneuron. Journal of Neuroscience 6(8), 2298–2311 (1986) 69. Jacobs, G., Theunissen, F.: Extraction of sensory parameters from a neural map by primary sensory interneurons. Journal of Neuroscience 20(8), 2934–2943 (2000) 70. Jakobi, N.: Evolutionary robotics and the radical envelope-of-noise hypothesis. Adaptive Behavior 6(2), 325–368 (1997), http://dx.doi.org 71. Jander, R., Volk-Heinrichs, I.: Das strauch-spezifische visuelle perceptor-system der Stabheuschrecke (Carausius Morosus). Zeitschrift vor physiology 70, 425–447 (1970) 72. Kanou, M., Teshima, N., Nagami, T.: Rearing conditions required for behavioral compensation after unilateral cercal ablation in the cricket Gryllus bimaculatus. Zoological Science 19(4), 403–409 (2002) 73. Kazadi, S., Goodman, R., Tsikata, D., Green, D., Lin, H.: An autonomous water vapor plume tracking robot using passive resistive polymer sensors. Autonomous Robots 9, 175– 188 (2000) 74. Kern, R., Lutterklas, M., Egelhaaf, M.: Neuronal representation of optic flow experienced by unilaterally blinded flies on their mean walking trajectories. Journal of Comparative Physiology A 186, 467–479 (2000) 75. Kern, R., Lutterklas, M., Petereit, C., Lindemann, J., Egelhaaf, M.: Neuronal processing of behaviourally generated optic flow: experiments and model simulations. Network: computation in neural systems 12, 351–369 (2001) 76. Kern, R., Petereit, C., Egelhaaf, M.: Neural processing of naturalistic optic flow. Journal of Neuroscience 21, 139–144 (2001) 77. Kimchi, T., Etienne, A., Terkel, J.: A subterranean mammal uses the magnetic compass for path integration. Proceedings of the National Academy of Sciences USA 101, 1105–1109 (2004) 78. Kimmerle, B., Egelhaaf, M.: Performance of fly visual interneurons during object fixation. Journal of Neuroscience 20(16), 6256–6266 (2000) 79. Kindermann, T.: Behavior and adaptability of a six-legged walking system with highly distributed control. Adaptive Behavior 9, 16–41 (2001) 80. Koch, C.: Biophysics of Computation. Oxford University Press, Oxford (1999) 81. Kohstall-Schnell, D., Gras, H.: Activity of giant interneurones and other wind-sensitive elements of the terminal ganglion in the walking cricket. Journal of Experimental Biology 193, 157–181 (1994) 82. Korsching, S.: Odor maps in the brain: spatial aspects of odor representation in sensory surface and olfactory bulb. Cellular and Molecular Life Sciences 58, 520–530 (2001) 83. Korsching, S.: Olfactory maps and odor images. Current Opinion in Neurobiology 12, 387– 392 (2002) 84. Krapp, H., Hengstenberg, B., Hengstenberg, R.: Dendritic structure and receptive-field organization of optic flow processing interneurons in the fly. Journal of Neurophysiology 79(4), 1902–1917 (1998) 85. Krapp, H., Hengstenberg, R., Egelhaaf, M.: Binocular contributions to optic flow processing in the fly visual system. Journal of Neurophysiology 85(2), 724–734 (2001) 86. Labhart, T., Meyer, E.: Detectors for polarized skylight in insects: a survey of ommatidial specialisations in the dorsal rim area of the compound eye. Microscopy Research and Technique 47, 368–379 (1999) 87. 
Labhart, T., Meyer, E.: Neural mechanisms in insect navigation: polarization compass and odometer. Current Opinion in Neurobiology 12, 707–714 (2002)
88. Lambrinos, D., Moeller, R., Labhart, T., Pfeifer, R., Wehner, R.: A mobile robot employing insect strategies for navigation. Robotics and Autonomous Systems 30, 39–64 (2000) 89. Land, M.: Visual acuity in insects. Annual Review of Entomology 42, 147–177 (1997) 90. Land, M.: Motion and vision: why animals move their eyes. Journal of Comparative Physiology A 185, 341–352 (1999) 91. Larsson, M., Svensson, G.: Methods in insect sensory ecology. In: Methods in Insect Sensory Neuroscience. CRC Press, Boca Raton (2005) 92. Latimer, W.: Acoustic competition in bush crickets. Ecological Entomology 6, 35–45 (1981) 93. Laurent, G.: Dendritic processing in invertebrates: a link to function. In: Dendrites, pp. 290–309. Oxford University Press, Oxford (1999) 94. Laurent, G.: Olfactory network dynamics and the coding of multidimensional signals. Nature Reviews Neuroscience 3(11), 884–895 (2002) 95. Laurent, G., MacLeod, K., Stopfer, M., Wehr, M.: Spatiotemporal structure of olfactory inputs to the mushroom bodies. Learning and Memory 5, 124–132 (1998) 96. Laurent, G., Stopfer, M., Friedrich, R., Rabinovich, M., Volkovskii, A., Abarbanel, H.: Odor encoding as an active, dynamical process: experiments, computation, and theory. Annual Review of Neuroscience 24, 263–297 (2001) 97. Laurent, G., Wehr, M., Davidowitz, H.: Temporal representations of odors in an olfactory network. Journal of Neuroscience 16(12), 3837–3847 (1996) 98. Maes, F., Collignon, A., Vandermeulen, D., Marchal, G., Suetens, P.: Multimodality image registration by maximization of mutual information. IEEE Transactions on Medical Imaging 16(2), 187–198 (1997) 99. Mason, A., Faure, P.: The physiology of insect auditory afferents. Microscopy Research and Technique 63(6), 338–350 (2004) 100. Matthies, L.: Mars microrover navigation: Performance evaluation and enhancement. Autonomous Robots 2(4), 291–311 (1995) 101. Menegatti, E., Zoccarato, M., Pagello, E., Ishiguro, H.: Image-based monte-carlo localisation with omnidirectional images. Robotics and Autonomous Systems 48(1), 17–30 (2004) 102. Menzel, R., Giurfa, M.: Cognitive architecture of a mini-brain: the honeybee. Trends in Cognitive Sciences 5(2), 62–71 (2001) 103. Michelsen, A.: Directional heading in crickets and other small animals. In: Neural Basis of Behavioural Adaptations, Fortschritte der Zoologie, vol. 39, pp. 195–207. Gustav Fischer Verlag, Stuttgart (1994) 104. Miller, J., Jacobs, G., Theunissen, F.: Representation of sensory information in the cricket cercal sensory system. Response properties of the primary interneurons. Journal of Neurophysiology 66(5), 1680–1689 (1991) 105. Mittelstaedt, H., Mittelstaedt, M.L.: Mechanismen der Orientierung ohne richtende Außenreize. Fortschr. Zool. 21, 46–58 (1973) 106. Mittelstaedt, M., Mittelstaedt, H.: Idiothetic navigation in humans: estimation of path length. Experimental Brain Research 139, 318–332 (2001) 107. Mizunami, M.: Functional diversity of neural organisation in insect ocellar systems. Vision Research 35, 443–452 (1995) 108. Mizunami, M., Weibrecht, J., Strausfeld, N.: Mushroom bodies of the cockroach: their participation in place memory. Journal of Comparative Neurology 402, 520–537 (1998) 109. Moeller, R., Lambrinos, D., Roggendorf, T., Pfeifer, R., Wehner, R.: Insect strategies of visual homing in mobile robots. In: Biorobotics - methods and applications, pp. 37–66. AAAI Press / The MIT Press (2001) 110. Moller, P., Goerner, P.: Homing by path integration in the spider Agalena labyrinthica Clerck. 
Journal of Comparative Physiology A 174, 221–229 (1994)
172
B. Webb et al.
111. Mueller, M., Homberg, U., Kuehn, A.: Neuroarchitecture of the lower division of the central body in the brain of the locust (Schistocerca gregaria). Cell Tissue Research 288, 159–176 (1997) 112. Nowotny, T., Huerta, R., Abarbanel, H., Rabinovich, M.: Self-organization in the olfactory system: one shot odor recognition in insects. Biological Cybernetics 93, 436–446 (2005) 113. Nowotny, T., Rabinovich, M., Huerta, R., Abarbanel, H.: Decoding temporal information through slow lateral excitation in the olfactory system of insects. Journal of Computational Neuroscience 15, 271–281 (2003) 114. Okada, R., Sakura, M., Mizunami, M.: Distribution of dendrites of descending neurons and its implications for the basic organisation of the cockroach brain. Journal of Comparative Neurology 458, 158–174 (2003) 115. Perez-Orive, J., Bazhenov, M., Laurent, G.: Intrinsic and circuit properties favor coincidence detection for decoding oscillatory input. Journal of Neuroscience 24, 6037–6047 (2004) 116. Perez-Orive, J., Mazor, O., Turner, G., Cassenaer, S., Wilson, R., Laurent, G.: Oscillations and sparsening of odor representations in the mushroom bodies. Science 297, 359–365 (2002) 117. Plewka, R.: Zur erkennung zeitlicher gesangsstrukturen bei laubheuschrecken: Eine vergleichende untersuchung der arten tettigonia cantans und leptophyes laticauda. Ph.D. thesis, University of Frankfurt (1993) 118. Pollack, G.: Neural processing of acoustic signals. In: Hoy, R., Popper, A., Fay, R. (eds.) Comparative Hearing: Insects, pp. 139–196. Springer, Berlin (1998) 119. Poulet, J., Hedwig, B.: Auditory orientation in crickets: pattern recognition controls reactive steering. PNAS 102, 15665–15669 (2005) 120. Quenet, B., Dreyfus, G., Masson, C.: From complex signal to adapted behavior: a theoretical approach of the honeybee olfactory brain. In: Burdet, G., Combe, P., Parodi, O. (eds.) Series in Mathematical Biology and Medicine, vol. 7, pp. 104–126. World Scientific, Singapore (1999) 121. Quenet, B., Horn, D., Dreyfus, G., Dubois, R.: Temporal coding in an olfactory oscillatory model. Neurocomputing 38(40), 831–836 (2001) 122. Rabinovich, M., Volkovskii, A., Lecanda, P., Huerta, R., Abarbanel, H., Laurent, G.: Dynamical encoding by networks of competing neuron groups: winnerless competition. Physical Review Letters 87, 68,102 (2001) 123. Reeve, R., Webb, B.: New neural circuits for robot phonotaxis. Philosophical Transactions of the Royal Society London A 361, 2245–2266 (2003) 124. Robert, D., Goepfert, M.: Novel schemes for hearing and orientation in insects. Current Opinion in Neurobiology 12, 715–720 (2002) 125. Ronacher, B., Wehner, R.: Desert ants, Cataglyphis fortis, use self-induced optic flow to measure distances travelled. Journal of Comparative Physiology A 177, 21–27 (1995) 126. Rosano, H.: Decentralised compliant control for hexapod robots: A stick insect based walking model. Ph.D. thesis, School of Informatics, University of Edinburgh (2007) 127. Rosano, H., Webb, B.: A dynamic model of thoracic differentiation for the control of turning in the stick insect. Biological Cybernetics 97(3), 229–246 (2007) 128. Russo, P.: Sistemi neurali biologici e controllo predittivo per l’integrazione acustico-Visiva nel grillo. Master’s thesis, Faculty of Computer Science and Engineering, University of Catania (2005) 129. Russo, P., Webb, B., Reeve, R., Arena, P., Patane, L.A.: Cricket-inspired neural network for feedforward compensation and multisensory integration. 
In: IEEE Conference on Decision and Control and European Control Conference (2005) 130. Schaefer, P., Ritzmann, R.: Descending influences on escape behavior and motor pattern in the cockroach. Journal of Neurobiology 49, 9–28 (2001)
Low Level Approaches to Cognitive Control
173
131. Schildberger, K.: Multimodal interneurons in the cricket brain: properties of identified extrinsic mushroom body cells. Journal of Comparative Physiology A 154, 71–79 (1984) 132. Schildberger, K., Milde, J., Horner, M.: The function of auditory neurons in cricket phonotaxis. II. Modulation of auditory responses during locomotion. Journal of Comparative Physiology A 163, 633–640 (1988) 133. Schmitz, B., Scharstein, H., Wendler, G.: Phonotaxis in Gryllus campestris l. I Mechanism of acoustic orientation in intact female cricket. Journal of Comparative Physiology A 148, 431–444 (1982) 134. Schmitz, J., Dean, J., Kindermann, T., Schumm, M., Cruse, H.: A biologically inspired controller for hexapod walking: Simple solutions by exploiting physical properties. The biological bulletin 200, 195–200 (2001) 135. Schul, J.: Song recognition by temporal cues in a group of closely related bushcricket species (genus Tettigonia). Journal of Comparative Physiology A 183, 401–410 (1998) 136. Seguinot, V., Cattet, J., Benhamou, S.: Path integration in dogs. Animal Behaviour 55, 787– 797 (1998) 137. Song, S., Miller, K., Abbott, L.: Competitive Hebbian learning through spike-timingdependent synaptic plasticity. Nature Neuroscience 3(9), 919–926 (2000) 138. Sprent, P., Smeeton, N.: Applied Nonparametric Statistical Methods, pp. 133–135. Chapman and Hall, Boca Raton (2007) 139. Srinivasan, M., Poteser, M., Kral, K.: Motion detection in insect orientation and navigation. Vision Research 39, 2749–2766 (1999) 140. Srinivasan, M., Zhang, S.: Visual motor computations in insects. Annual Review of Neuroscience 27, 679–696 (2004) 141. Stabel, J., Wendler, G., Scharstein, H.: Cricket phonotaxis: localization depends on recognition of the calling song pattern. Journal of Comparative Physiology A 165, 165–177 (1989) 142. Stange, G., Stowe, S., Chahl, J., Massaro, A.: Anisotropic imaging in the dragonfly median ocellus: a matched filter for horizon detection. Journal of Comparative Physiology A 188, 455–467 (2002) 143. Staudacher, E.: Sensory responses of descending brain neurons in the walking cricket, Gryllus bimaculatus. Journal of comparative physiology A 187, 1–17 (2001) 144. Staudacher, E., Schildberger, K.: Gating of sensory responses of descending brain neurones during walking in crickets. Journal of Experimental Biology 201, 559–572 (1998) 145. Stopfer, M., Laurent, G.: Short-term memory in olfactory network dynamics. Nature 402, 664–668 (1999) 146. Stout, J., McGhee, R.: Attractiveness of the male acheta-domestica calling song to females 2. the relative importance of syllable period, intensity, and chirp rate. Journal of comparative physiology A 164(2), 277–287 (1988) 147. Strausfeld, N., Hildebrand, J.: Olfactory systems: common design, uncommon origins? Current Opinion in Neurobiology 9, 634–639 (1999) 148. Stumpner, A.: Picrotoxin eliminates frequency selectivity of an auditory interneuron in a bushcricket. Journal of Neurophysiology 79, 2408–2415 (1998) 149. Stumpner, A., van Helversen, D.: Evolution and function of auditory systems in insects. Naturwissenschaften 88(4), 159–170 (2001) 150. Svenshnikov, A.: Problems in Probability Theory, Mathematical Statistics and Theory of Random Functions, p. 85. W.B. Saunders Company, Philadelphia (1968) 151. Szenher, M.: Visual homing in dynamic indoor environments. Ph.D. thesis, School of Informatics, University of Edinburgh (2008) 152. 
Tammero, L., Dickinson, M.: The influence of visual landscape on the free flight behavior of the fruitfly Drosophila melanogaster. The Journal of Experimental Biology 205, 327–343 (2002)
174
B. Webb et al.
153. Theunissen, F.: From synchrony to sparseness. Trends in Neurosciences 26, 61–64 (2003) 154. Tinbergen, N., Kruyt, W.: On the orientation of the digger wasp, philanthus triangulum fabr, III. Selective learning of landmarks. In: Tinbergen, N. (ed.) The Animal and Its World. Harvard University Press (1938) 155. Vardy, A., M¨oller, R.: Biologically plausible visual homing methods based on optical flow techniques. Connection Science 17(1-2), 47–89 (2005) 156. Vickerstaff, R., Paolo, E.D.: Evolving neural models of path integration. Journal of Experimental Biology 208, 3349–3366 (2005) 157. Viola, P., Wells, W.M.: Alignment by maximization of mutual information. International Journal of Computer Vision 24(2), 137–154 (1995) 158. Webb, B.: Neural mechanisms for prediction: do insects have forward models? Trends in neuroscience 27(5), 278–282 (2004) 159. Webb, B., Harrison, R.: Integrating sensorimotor systems in a robot model of cricket behavior. In: Sensor Fusion and Decentralised Control in Robotic Systems III. SPIE, Boston, November 6-8 (2000) 160. Webb, B., Reeve, R.: Reafferent or redundant: How should a robot cricket use an optomotor reflex? Adaptive Behaviour 11(3), 137–158 (2003) 161. Webb, B., Scutt, T.: A simple latency dependent spiking neuron model of cricket phonotaxis. Biological Cybernetics 82(3), 247–269 (2000) 162. Webb, B., Wessnitzer, J., Bush, S., Schul, J., Buchli, J., Ijspeert, A.: Resonant neurons and bushcricket behaviour. Journal of Comparative Physiology A 193, 285–288 (2007) 163. Weber, K., Venkatesh, S., Srinivasan, M.: An insect-based approach to robotic homing. In: Jain, A., Venkatash, S., Lovell, B. (eds.) Fourteenth International Conference on Pattern Recognition, pp. 297–299. IEEE, Los Alimitos (1998) 164. Weber, T., Thorson, J.: Auditory behaviour in the cricket. II. Interaction of direction of tracking with perceive temporal pattern in split-song paradigms. Journal of Comparative Physiology A 163, 13–22 (1988) 165. Wehner, R.: The ants celestial compass system: spectral and polarization channels. In: Lehrer, M. (ed.) Orientation and Communication in Arthropods, pp. 145–285. Birkhauser, Basel (1998) 166. Wehner, R.: Desert ant navigation: how miniature brains solve complex tasks. Journal of Comparative Physiology A 189, 579–588 (2003) 167. Weisstein, E.: Method of steepest descent. From Mathworld - A Wolfram Web Resource (2006), http://mathworld.wolfram.com/MethodofSteepestDescent.html 168. Wessnitzer, J., Mangan, M., Webb, B.: Place memory in crickets. Proceedings of the Royal Society of London B (2008) 169. Wessnitzer, J., Webb, B.: Multimodal sensory integration in insects - towards insect brain control architectures. Bioinspiration and Biomimetics 1, 63–75 (2006) 170. Wittlinger, M., Wehner, R., Wolf, H.: The ant odometer: stepping on stilts and stumps. Science 312, 1965–1967 (2006) 171. Wittmann, T., Schwegler, H.: Path integration - a network model. Biological Cybernetics 73, 569–575 (1995) 172. Wohlers, D., Huber, F.: Processing of sound signals by six types of neurons in the prothoracic ganglion of the cricket Gryllus campestris l. Journal of Comparative Physiology 146, 161–173 (1981) 173. Wohlgemuth, S., Ronacher, B., Wehner, R.: Ant odometry in the third dimension. Nature 411, 795–798 (2001) 174. Wolf, R., Heisenberg, M.: Basic organisation of operant behaviour as revealed in Drosophila flight orientation. Journal of Comparative Physiology A 169, 699–705 (1991)
Low Level Approaches to Cognitive Control
175
175. Wolf, R., Voss, A., Hein, S., Heisenberg, M.: Can a fly ride a bicycle? Philosophical Transactions of the Royal Society B 337, 261–269 (1992) 176. Yack, J.: The structure and function of auditory chordotonal organs in insects. Microscopy Research and Technique 63(6), 315–337 (2004) 177. Yager, D.: Structure, development, and evolution of insect auditory systems. Microscopy Research and Technique 47(6), 380–400 (1999) 178. Ye, S., Leung, V., Khan, A., Baba, Y., Comer, C.: The antennal system and cockroach evasive behaviour I Roles for visual and mechanosensory cues in the response. Journal of Comparative Physiology A 189, 89–96 (2003) 179. Zampoglou, M., Szenher, M., Webb, B.: Adaptation of controllers for image-based homing. Adaptive Behavior 14(4), 381–399 (2006), http://dx.doi.org/ 180. Zeil, J., Hofmann, M., Chahl, J.: Catchment areas of panoramic snapshots in outdoor scenes. Journal of the Optical Society of America A 20, 450–469 (2003) 181. Zeiner, R., Tichy, H.: Combined effects of olfactory and mechanical inputs in antennal lobe neurons of the cockroach. Journal of Comparative Physiology A 182, 467–473 (1998)
Part II
Cognitive Models
4 A Bottom-Up Approach for Cognitive Control

H. Cruse¹, V. Dürr², M. Schilling¹, and J. Schmitz¹

¹ University of Bielefeld, Department of Biological Cybernetics and Theoretical Biology, P.O. Box 100131, D-33501 Bielefeld, Germany
{holk.cruse,malte.schilling,josef.schmitz}@uni-bielefeld.de
² University of Cologne, Institute of Zoology, Weyertal 119, D-50931 Köln, Germany
[email protected]
Abstract. Cognitive skills, including the ability to plan ahead, require internal representations of the subject itself and its environment. The perspective on internal representations, and assumptions about their importance, have changed over the last years: while traditional Artificial Intelligence tried to build intelligent systems relying solely on internal representations and working on a knowledge level, behaviourist approaches tended to reject internal representations. Both approaches produced considerable results in different but complementary fields (examples are higher-level tasks such as mathematical proofs, and simple tasks for robots, respectively). We believe that the most promising approach for more general tasks should connect both domains: for higher-level and cognitive tasks, internal representations are necessary or helpful. But these representations are not disconnected from the body and the lower levels; the higher level is grounded in the lower levels, and the robot is situated in an environment. While many projects today deal with the requirements of embodiment and situatedness, we focus on an approach like that of Verschure [55, 57, 58, 56]: the control system is built from the bottom up, growing towards higher levels and more complex tasks. The primitives of the different levels are constituted through the lower levels and, as a main aspect, rely on neural networks. While Verschure only addressed the most relevant lower-level models in his work, we are constructing an architecture for cognitive control. The main idea of this cognitive control is mental simulation: mental simulation treats planning as "Probehandeln". This notion of trying a movement by mentally enacting it, without performing the action physically, relies strongly on the notion of an internal model. As a first step, an internal model of the own body is constructed and used, which later on may be expanded to models of the environment. This body model is fully functional: it is constrained in the same way as the body itself, and it can move and be used in the same way as the body. Therefore, hypothetical movements can be tested for their consequences. For this purpose it must be possible to decouple the body itself from the action-controlling modules, in order to use the original controllers for control of the internal body model. Our approach shall address in particular (1) the structure and construction of the mental models, (2) the process of using learnt behaviours to control the body, or of modulating these behaviours in controlling the body or the internal model, (3) the decoupling of the body from the control structures to invoke the simulation, and (4) the invention of new situation models and the decision when to construct new ones. An essential aspect of the approach proposed here is, apart from the idea that memory is composed of many individual situation models consisting of RNNs, that random selection of the connections is used, in this way introducing some kind of Darwinian aspect (see Edelman's Neural Darwinism).
4.1 Introduction

It has often, and convincingly, been stated that traditional AI is not generally suited to solve problems of cognition [5, 4, 17, 34]. These critiques include the idea that embodiment is an important concept and should therefore be taken into account. Further, they suggest that basic sensorimotor systems should be studied first, in the hope that this bottom-up approach may lead to an understanding of higher cognitive functions. In the preceding chapters a number of relevant examples based on investigations of insect behaviour and insect neurophysiology have been given. Specific attention was given to the properties of the insect brain, in particular with respect to the possible function of the mushroom bodies and related structures. This system has been considered responsible for different functions, but the main point of interest here is that the mushroom bodies have been shown to be an important locus for learning. In this chapter we propose a memory structure that uses simple analogue neurons and concentrate on the question of how many different situations can be stored. This structure may be compared with that of the mushroom bodies, but is thought to represent a more general concept of how memories, in particular procedural memories, could be organized. After describing in general the behaviour-based approach put forward during the last two decades as a reaction to the more traditional AI approach, we sketch how the behaviour-based concept may be combined with higher, cognitive procedures. Our concept is strongly influenced by Verschure [55, 57, 58, 56], who proposed a layered structure. As the behaviour-based approach relies strongly on embodiment, we use as an example a behaviour that cannot be performed by a body with simple geometry, such as a two-wheeled robot. Rather, we use a hexapod walker with 18 degrees of freedom, i.e., a system with many extra degrees of freedom. This is challenging because such a body allows quite complex behaviours to be performed. We have studied insect locomotion behaviour when negotiating curves. In addition, the specific question of how to cope with the control of walking after losing a leg has been studied [43]. Furthermore, it has recently been shown [3] that a reactive controller can even solve complex tasks, for example climbing over very large gaps. However, the question of how continued study of reactive systems could lead to an understanding of cognitive systems is still open. Therefore, classical approaches are still relevant, at least insofar as they have specified the problems and have investigated to what extent earlier proposals have reached their limitations. Inspired by these approaches, we will investigate how a reactive controller for hexapod walking could be improved by an internal model of the own body. This model might improve reactive behaviour, but could also be used for cognitive control in the sense of planning ahead. Studying insect behaviour and neurophysiology has provided some important insights, but most current models of cognitive systems are based on psychological and neurophysiological investigations in primates (including imaging techniques in humans). Generally, the interpretations remain on a qualitative, conceptual level.
Quantitative models that attempt to find a functional interpretation either suffer from the problem of dealing with a large number of neuronal units with a corresponding huge number of unspecified degrees of freedom [1], or use models not based on a neuronal structure, in this way simplifying the task [48]. If a strict neuronal architecture is used,
the studies concentrate on small isolated tasks, and the resulting models are therefore not suited to control an autonomous system. Nevertheless, as a crucial constraint for our architecture, we apply neuronally based structures. Therefore, after the general concept of the proposed memory architecture has been explained, we will introduce several types of small RNNs that can be applied as elements of procedural memories and that show the ability to learn using simple learning rules. As a long-term goal, this should lead to a complete system that has the ability to store new information, to find some order in this memory and, of course, to retrieve specific information. In the following section we will briefly recall the arguments which led to the introduction of the behaviour-based approach. Following this approach, we propose an architecture in sections 4.4 and 4.5 that could allow a six-legged walker to learn tasks of classical and operant conditioning, to search for solutions to new problems and, as the main goal, to be able to plan ahead. To this end we will propose RNN-based "engrams" embedded in this memory structure. Finally, we will discuss initial ideas for how abstract rules provided by an external teacher can be learnt and later used to improve the performance of the corresponding behaviour.
4.2 Behavior-Based Approaches

The approach of traditional Artificial Intelligence (AI) to explaining intelligent behaviour relies mainly on the notion of a knowledge level [33]: intelligence is considered a high-level process which relies on using knowledge encoded in a symbolic representation, and the process can be described as a manipulation of symbols. The origins of this perspective on intelligence can be traced back to traditional philosophy of mind and the view of dualism: mind and body are distinct from each other (as stated by Descartes). The development of the computer had another great impact on the young research field of AI: the metaphor of the mind as a computer, and of cognition and intelligence as a form of computation or calculation, guided research for years. This view gained support as it proved successful in various tasks which were supposed to require intelligence. Focussing only on an abstract symbolic level and on rules operating on this level appeared suitable for high-level tasks (such as chess and mathematics), but it failed for "simpler" tasks: the recognition of speech and of objects in pictures, adaptive control of movements, etc. Tasks involving interaction with an incompletely structured environment became hard and intractable for a purely symbolic approach. It became clear that the process of interacting with the environment cannot be simplified to a sequence of successive stages, as assumed in traditional AI [40]:

• Perceive the world through sensors.
• Reason on an abstract level.
• Act on the world through actuators.

These three stages were considered as separate modules, each executing some form of transformation. This transformation is directed towards the next stage; there are no interconnections or recurrences between the modules.
Research on perception (and today on human action in relation to thinking) revealed that in humans perceiving and acting on the world is not a serial process but a highly intertwined one, involving many recurrent connections and parallel processing within the different stages and also between them. In addition, the representation is not only involved in reasoning, but strongly influences the way of perceiving and acting. Therefore, the approach of intelligence as symbol manipulation was questioned. Different problems were encountered and, although some of them could be neglected, they helped progress research towards today's concepts of embodiment and situatedness. Searle's Chinese Room argument has stimulated discussion of these issues — although today many reject the argument or, more specifically, reject its premises. The problem of symbol grounding [21], and the question of how a higher-level representation and its constructs can derive meaning, has become an important issue (and from our point of view the most important one) in building a representation. Other problems, such as the frame problem [29], the frame-of-reference problem [6] and the problem of situatedness [53], were identified. Verschure [56] analysed these problems and showed that they originate from the same cause: he integrated the different problems and identified as their main source the problem of a prioris ([56], page 2):

In short this problem is created by the critical dependence of a model of a cognitive process on the a priori specification of a world model. As a result such [in the sense of symbolic Artificial Intelligence] an approach runs the risk of specifying a system which is not grounded in the world in which it finds itself, is prone to mix up the domain ontologies involved, relies on a representational granularity which induces a search, and ignores some pertinent elements of system-environment interaction.

The problem of a prioris describes the inability of purely symbolic representations to carry meaning, owing to the disconnection between the model and the modelled objects. The model as a surrogate for the object (event, concept, . . . ) is described a priori — it is determined in advance which properties shall be modelled, and in which way they are included in the system.

Early work in behaviour-based robotics therefore rejected the notion of an internal representation and built systems which were controlled by simple reactive mechanisms. Nonetheless, these were able to solve tasks which were supposed to require intelligence. The concept of intelligence itself was questioned by these experiments, and the understanding of the term changed. It was now assumed that intelligent behaviour can arise out of simple processes interacting with each other. This view has been stressed by the introduction of the notion of "prerational intelligence". Intelligence can in this way be thought of as an emergent property of the system, which can only be ascribed from the observer's point of view when concentrating on the complex overall behaviour. Brooks' [5] systems shaped the field itself and later research in cognitive science in general. Brooks showed that in many cases internal representations are not needed and the world can be used as its own best model [5, 4]. In fact, by using the world as its own model one circumvents the problem of a prioris, namely deciding what to represent and how to update the representation.
Nonetheless, we think it is not helpful to reject internal representations in general, but rather to incorporate them: not as a starting point for an autonomous system, but as an extension in the development of a system. Steels has argued in this way ([49], page 2389):

These findings suggest that representation-making might be a crucial bootstrapping device for higher mental function. At some point external representation-making becomes internalised to form the basis of thinking, in the sense of inner dialogs or mental imaging. So internal representations do not come before external representations but follow or co-evolve with them.

For systems which also use an internal representation, it is crucial how the latter is related to lower-level functions as well as to acting and sensing: Glenberg stated that internal representations have developed in the service of action and perception [20] and are therefore highly intertwined with them. Verschure introduced the research program of synthetic epistemology [56]: before dealing with representations and intelligent operations on them, we should focus on how knowledge is acquired and explored. This approach of building up knowledge representations without restricting their structure in advance was pursued by Verschure in a series of experiments developing a bottom-up control structure for robots and simulations [55, 57, 58, 56]. The internal model is not disconnected from the environment, the body, actions and sensors: it must be embodied, and structures in the model must be grounded in the world. As a starting point, Cruse considered a model of the own body ([8], p. 138):

The most important part of the world, [. . . ] is the own body. This leads to the speculation that a basis of such a world model might be formed by a model of the own body.

The grounding of a model in the outside world is realised by integrating, within an action, the different neuronal traces activated in sensing and acting (Glenberg calls this process meshing [20]). This builds up an internal representation of actions by relating sensations and movements (on different levels of abstraction which, only on higher levels, are accessible through language or for reasoning). Objects are represented by the actions they offer to the observer; they are interpreted from the perspective of possible actions of our body. Gibson called this affordance [18]. Concerning the grounding problem, Steels stated ([50], p. 172):

Grounded robots that engage in communication using external representations not only need a physical body and low-level behaviours but also a conceptual world model which must be anchored firmly and dynamically by the robot in the environment through its sensori-motor apparatus. We argued that there is not a simple sweeping theoretical principle to turn a system that uses conceptual world models into a grounded system. Instead many processes must be carefully integrated.

Or, as Gallese [16] has formulated it in describing the grounding of a model of the own body: [embodied simulation] applies not only to actions or emotions, where the motor or viscero-motor components may predominate, but also to sensations like
vision and touch. It is mental because it has content. It is embodied not only because it is neurally realized, but also because it uses a pre-existing body model in the brain realized by the sensorimotor system, and therefore involves a non-propositional form of self-representation.

We think that building an intelligent and cognitive system needs some form of internal representation for planning ahead (in the sense of an internal mental simulation). But the perspective on internal representations has changed: internal representations cannot be viewed as disconnected from the external world. Instead, the basic elements of the internal representation must be grounded in the environment and must have meaning. The higher-level representation must be connected in some form to the external environment it represents, which provides the meaning for the representation. Current results from neurobiological research — for example using imaging techniques to observe brain activity — suggest that higher-level problem solving, thinking and planning ahead are related to problems of motor control: when planning a sequence of movements, the same neuronal clusters were found active as when actually performing the movements, clusters which were therefore thought of as being responsible for motor control. This led to the idea of planning and understanding as mental simulation (or Probehandeln, as termed by Freud [13, 14]): we understand situations by constructing corresponding internal representations. These internal representations are dynamic representations which allow us to predict the consequences of actions and to predict how a situation will develop. They are not disconnected from the environment — they are grounded in the units on the lower neural level which are used in perceiving them or acting on them. For planning, these representations can be used to test the effects of actions in a mental simulation. Based on the observed effects of the simulated action, one can decide to enact the action, or to discard it and search for an action that is better suited with respect to the current goal.

Mental simulation relies on grounded internal models. The first internal model to start with must describe the body of the system itself, including its properties and functionalities. This model must be grounded and embodied: it evolves in a bottom-up manner from lower-level functions, involving both the sensory and the motor system. The system itself should be situated; therefore models of the environment have to be included and have to be grounded in the environment. In our approach, extending the existing reactive controller Walknet, we will start with a model of the own body, which will be used for planning ahead through mental simulations. In a next step, models of the environment shall be integrated — this leads to many open questions regarding these models:

• learning of a model,
• updating existing models,
• retrieval of a model,
• creating a new model, . . .
Our approach to understanding and rebuilding cognitive behaviour is guided by the goal of building an architecture up to a cognitive level for planning ahead by using internal models, or even up to a language level.
Thus, the results gained provide the basis for a much wider research activity, which takes its first steps by defining the lower levels from which these higher levels can develop.
4.3 A Bottom-Up Approach for Cognitive Control

While some researchers refused to use internal representations, others tried to combine the two approaches and thereby to combine the advantages of both and learn from both fields. Verschure [57, 56], as mentioned, examined problems arising in the traditional approach to AI and summarized them as the problem of a prioris: the symbol systems are disconnected from the environment, and the form of representation of the environment (what is represented, and how) is determined in advance. Verschure's work has influenced our own approach both on a theoretical level and on the level of implementation, as he uses a behaviour-based approach realised in simulations and on a robot. Therefore, we want to briefly summarise his work — graphically depicted by the two lower black boxes in Fig. 4.1 (lower level, medium level) — and discuss connections to our own work, as well as differences.

Verschure has proposed the program of synthetic epistemology: the focus is shifted away from the question of how to use knowledge; the questions of how a system acquires knowledge and how it retains this knowledge have to be addressed first. Verschure assumed that behaviour requires variables which mediate between perception and action, and that an explanation of behaviour presupposes an understanding of these variables. He focussed on examining these variables — how they develop, how they are structured and how they are learnt. Verschure initiated a sequence of experiments in simulation and in real implementation on a robot — he argues that both forms of investigation are needed. Simulations allow analysis of the development of the neural control structure and the evolution of this structure; on the other hand, real-world tests on a robot are needed to show the abilities of the robot. His aim was to combine the advantage of neural approaches — their ability to learn — with the advantages of a knowledge-based system, because the latter is easily understandable for the observer and allows goals and states to be assigned to different variables.

The work of Verschure [55, 57, 58, 56] has inspired our approach to the architecture of the different levels, and gives some additional insights into the influence of learning at different stages in the control architecture (one of the key points of Verschure's work is that even learning only at a perceptual "stage" leads to adaptation of behaviour). In our own approach, the lower level and the connections to the medium level (see below) are at first not learnt but considered as given, in the form of innate structures. The details of these structures, as implemented in our model, have been derived from biological experiments. Verschure's research complements our own approach with respect to learning on the lowest level; as the structure of the two approaches is comparable, conclusions can be transferred. Therefore we briefly present Verschure's approach and emphasise the main characteristics of the DAC robot series by adopting Sloman's perspective on robotic systems.
Sloman [47] has developed a framework for examining architectures for robots: the CogAff framework. This framework is inspired by work from Artificial Intelligence but is also intended to be used for considering connectionist approaches and hybrid models. While work in AI has mostly concentrated on specialized forms of representation serving one specific purpose, Sloman encourages a broader approach to representations, and in particular an approach which makes it possible to incorporate the different forms of representation. He therefore developed the CogAff framework for describing agent architectures and their underlying representations. This provides a unified perspective on different approaches: it permits connecting them and defines a space of possible architectures. In this framework Sloman talks about three stages, perceive, reason and act, which have their origins in traditional AI and therefore make it easy to describe AI systems in terms of the CogAff framework. But while the traditional approach to AI used these stages as separated and ordered stages, for Sloman they are just an abstraction: there can be recurrent connections and concurrent processing in the different stages; the stages should not be regarded as confined modules but as intertwined processes. In addition to this loose division into stages he introduces three layers of abstraction. The lowest layer is a purely reactive layer. The medium layer introduces deliberation: planning ahead, representations, generating hypotheses and selecting one of the possible hypotheses (he calls these "what if" mechanisms). The highest layer includes reflective processes, i.e., meta-management of the information processing.

In Fig. 4.1 the architecture of Verschure's approach is depicted from the perspective of the layers of Sloman's CogAff framework. The CogAff framework is illustrated by a 3 × 3 grid, on top of which Verschure's as well as our own concepts are explained. The DAC series of robots built by Verschure comprises complete learning systems, based on predefined reflex structures on a lower level of minimal behavioural competence (avoidance and attraction). On an adaptive level, sensory cues from different layers can be connected to the responses and the internal state, constructing sensory representations which are grounded in the lowest level. On a contextual control level these are used to build up complex representations, which represent sequences of motor events over time in association with sensory inputs. Verschure's model is implemented in neural networks at the lowest level only; on the higher levels other mechanisms are used in addition. His main ideas concentrate on the organisation and self-formation of the different layers and the learning of behaviours. Our demand for a biologically plausible architecture makes it necessary to consider a different neural substrate for the implementation. On the lower levels our approach is very similar to Verschure's: we start with a decentralized, reactive controller for hexapod walking. On the lowest level it consists of motor primitives and simple reactive behaviours. Each leg is controlled locally by one of these controllers (see Fig. 4.1, bottom). These motor primitives are activated by elements of a medium level where it is decided which action should take place.
Fig. 4.1. Illustration of the relation between different approaches, in particular the concepts of Sloman, of Verschure and of our own. The 3 × 3 grid (in gray) depicts Sloman's conceptual approach: the columns Perception, Central Processing and Action are crossed with three layers, namely meta-management (reflective processes), deliberative reasoning ("what if" mechanisms) and reactive mechanisms. The black boxes (lower and medium level) indicate the overlay of the approaches of Sloman and of Verschure: the medium level performs behaviour selection (local peripheral PG and coordination influences, sensors) and controls the low-level motor primitives (Swing-Net and Stance-Net). Our approach builds on top of Verschure's approach by introducing a high-level structure consisting of internal models, in particular a model of the own body, which is used to simulate and modulate the sequencing of motor primitives, up to the invention of new plans.

So, basically, the low-level motor primitives represent situation models, including the representation of the state in which a behaviour becomes active (stimuli) and the representation of the motor commands (response). On the medium level these motor primitives can now be activated and modulated. The lower-level activations represent a state of the system, whereas on the medium level
these activations of areas involved in perception and action are connected to establish a behaviour: they form prototypes (the medium level in Fig. 4.1). These selector modules are in their simplest form reactive patterns, but can be more complex sequences or situations. While the basic structure of the lower levels is comparable to Verschure's approach, there are some essential differences which seem to be important for providing meaning to a system. Verschure represents a set of arbitrary sensor values and motor commands in his system; in our system, sensors and actuators are not merely represented as variables on this lower level. The main difference of our system compared to Verschure's is that we introduce an internal representation. This internal representation is grounded in the lower-level primitives and variables; there is a direct connection between these layers. The meaning of a behaviour on the higher level emerges from the meaning of the lower-level primitives which form and ground the behaviour. The cognitive level constitutes this higher layer. Cognition, in the sense of being able to plan ahead [30], relies on an internal representation [49] which is grounded in embodied experiences [17]. So this higher level is not detached from the lowest level, nor is it a distinct, separate symbolic layer. Instead, its primitives are embodied in the lower levels, and the primitives used to construct a plan are exactly the same as those used in executing the plan. The mental model of the own body is depicted in the upper part of the figure. In fact, the internal model is not part of the control architecture, but is used by the control architecture to simulate possible sequences of actions. The simulation corresponds to
Sloman's [47] "what if" mechanism in the process of building plans, while the invention of new sequences comprises some kind of meta-management and a reflective process. The main idea of this higher level is the idea of mental simulation (Gallese and Lakoff [17], page 458):

Imagination is mental simulation, carried out by the same functional clusters used in acting and perceiving.

Mental simulation treats planning as "Probehandeln": this notion of testing a movement by mentally enacting it, without performing the action in reality, relies strongly on the notion of an internal model. As a first step, an internal model of the own body is constructed and used, which later on may be expanded to models of the environment. This body model is fully functional: it has to be constrained in the same way as the body itself, and it can move and be used in the same way as the body. The body model can be thought of as a small internal puppet you can play around with to test hypothetical movements for their consequences. For this purpose it must be possible to decouple the body itself from the action-controlling modules, so that the original controllers can be used to control the internal representations [11]:

Actions are not carried out directly, but instead trigger simulations of what they would do in the imagined situation. This ability to simulate or imagine situations is a core component of human intelligence . . .
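The decoupling idea can be made concrete with a small sketch. The following Python fragment is only an illustrative reading of the scheme, not the actual implementation: the class, the simplified joint-limit handling and the acceptability predicate are hypothetical stand-ins. The point is that the very same motor command can be sent either to the body or to the internal body model, so that a hypothetical movement is first tried on the "puppet".

class BodyModel:
    # Internal body model: constrained by the same joint limits as the body
    # itself (hypothetical, strongly simplified representation).
    def __init__(self, joint_angles, joint_limits):
        self.joint_angles = list(joint_angles)
        self.joint_limits = joint_limits

    def apply(self, command):
        # Move the model exactly as the body would move, respecting the
        # same joint limits.
        for i, delta in enumerate(command):
            lo, hi = self.joint_limits[i]
            self.joint_angles[i] = min(hi, max(lo, self.joint_angles[i] + delta))

def try_mentally(joint_angles, joint_limits, command, is_acceptable):
    # "Probehandeln": enact the command on the internal model only, leaving
    # the body decoupled, and report the predicted consequence.
    model = BodyModel(joint_angles, joint_limits)
    model.apply(command)
    return is_acceptable(model.joint_angles)

Only if try_mentally() reports an acceptable outcome would the same command be routed to the physical body; otherwise the controllers keep driving the model.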
4.4 Representation by Situation Models In this section we are going to propose our bottom-up approach for a cognitive system in more detail. It is thought as an extension of the reactive controller Walknet (see chapter 2) on the basis of which the notion of thinking as mental simulation shall be elaborated. In addition, this section will raise issues on the process of learning. The overall model will pose a set of requirements on the neuronal structure of the memory. The aim for the control architecture is to grow up from locomotion to cognitive capabilities. The system should be able to solve simple reactive tasks as well as complex tasks, including tasks which require cognitive capabilities as for example setting up plans. The involved perception processes are goal-oriented and the architecture should be able to recognise and learn situation models by self-observation. In general, an architecture for an autonomous robot (in the sense of an agent which acts following its own rules and its own desires) has to fulfil different requirements. For an architecture which even shows cognitive abilities these different aspects grow up from simple behaviours to higher mental functions. From our point of view cognition is the ability to plan ahead. This mental faculty is not detached from low level skills but it is directly linked and grounded in lower level behaviours and cannot be separated from the body (as the subject in perceiving and acting on the environment) and the current situation (including the previous actions and developments of the situation). These two topics —embodiment and situatedness [5]— have changed the research on Artificial Intelligence over the last years and are now widely accepted and a main focus in current research.
It is our aim to build up an architecture from the bottom up to such embodied and situated cognitive abilities. The architecture is inspired by biological insights and should therefore be implemented as a biologically plausible system in the form of neural networks. The different levels from which higher cognitive capabilities should emerge are (see Fig. 4.1):

• Low level = motor primitives and reactive behaviours. There are many different behaviours on this level. These can be triggered directly through sensory cues, or the primitive modules can be modulated through the higher levels.
• Medium level = combining and selecting the motor primitives. On the medium level the motor primitives are modulated to form actions (series of goal-directed movements). The sequences could be learnt or could be generated following some general rules (in our case, the selection of swing or stance mode for a single leg). The modulation is governed by the sensory inputs, and additional higher sensory cues could be learnt (as for vision).
• Higher level (→ cognitive level) = planning ahead. The cognitive level relies on internal representations and uses them to plan ahead. In our view the starting point for cognitive abilities, and the first representation acquired, is a model of the own body. So the agent should at first be equipped with a model of its own body, which should then be expanded by a model of its surroundings. These models can then be used to construct a plan when an unsuitable or unknown situation is encountered: in this situation the motor primitives from the lower level are modulated and combined through the medium level, but they control the internal model of the body, detached from the body itself. The consequences of the actions, and of mutated actions, can be observed in the model and then be used for controlling the body. Using a sufficiently detailed body model, including some relevant aspects of the actual environmental (and internal) situation, different solutions to a given problem can be searched for by simulated trial-and-error ("2nd order embodiment") [42].

To test and demonstrate the diverse abilities of the robot, a set of tasks with increasing difficulty was proposed which should address and test the capabilities of the robot on each of the levels:

1. Control of the swing and stance phases of the robot and the coordination of the legs in walking.
2. Stable walking in disturbed situations, steering control and curve walking.
3. Climbing a step.
4. An exploration task in which the robot has to learn to integrate its different sensory pathways.
5. Learning of conditioned reflexes and instrumental learning, e.g., for walking in a cluttered environment or for adapting to the loss of a leg.
6. A cognitive task, the body configuration problem: the robot is in an unsuitable configuration (with respect to its surroundings and its joint limits) and cannot move forward without tumbling down. The robot is therefore forced to plan its movements ahead and to mentally try/simulate different actions (e.g., one foot stepping backwards) before applying a successful one.

The first three tasks can be solved by the Walknet architecture described in the second chapter of this book; a simple reactive system is therefore sufficient to explain these behaviours. Learning the integration of sensory pathways in an exploration task and the forming of representations out of the sensory inputs will be addressed in the following chapters. Learning of situation models as such has been addressed above. In the following we are not going to elaborate on the application of this learning procedure to the complete system or on the other tasks mentioned before. Instead, we concentrate on the last task in the list. In this case the solution requires knowledge about the own body, some knowledge about the interaction with the environment, and knowledge about possible movements. This task is a cognitive task, meaning that it requires planning ahead; a schematic sketch of the required search loop is given below.
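The logic of this last task can be rendered as a small search loop over the internal body model. This is a minimal sketch under stated assumptions: simulate() stands for running the unchanged motor primitives on the decoupled body model (as in the fragment at the end of section 4.3), and acceptable() for a test such as "the robot does not tumble and can move forward"; both are hypothetical stand-ins, not the authors' actual code.

import random

def plan_by_simulation(body_state, candidate_actions, simulate, acceptable,
                       max_trials=100):
    # Simulated trial-and-error ("2nd order embodiment"): test randomly
    # selected candidate actions on the internal body model and return the
    # first action whose predicted consequence is acceptable.
    for _ in range(max_trials):
        action = random.choice(candidate_actions)
        predicted = simulate(body_state, action)   # model only, body decoupled
        if acceptable(predicted):
            return action        # this action may now be enacted physically
    return None                  # no solution found within the trial budget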
4.4.1 Basic Principles of Brain Function
After having briefly sketched earlier concepts and related them to our ideas, in the remainder we will describe our approach in more detail; as a main characteristic, it is strictly based on a neuronal architecture. Generally, attempts to simulate cognitive systems (which provides a way of understanding such systems) are based on structures of human-like brains, with the often implicit assumption that cognition is only possible for a brain with highly complex structures similar to those of mammalian or avian brains. However, invertebrate animals, e.g., insects or, even more obviously, cephalopods, show astonishingly complex behaviours that may be attributed to cognitive capabilities (see chapter 4.3). Among these tasks are "delayed match to sample" and "delayed non-match to sample" tasks. Honey bees, for example, can solve "delayed matching to sample" tasks, which allowed Giurfa et al. [19] to show that honey bees are able to learn the concept of "symmetry". These authors could further show that honey bees are able to learn the concept of "difference" or "oddity", which implies that these insects can cope with "delayed non-matching to sample" tasks. Drosophila is assumed to be able to construct a dynamic representation of an optical pattern that has disappeared behind an occluder, expecting it to reappear at the other side of that occluder [52]. Learning from observing conspecifics has been shown in octopus [23], context switching in ant navigation [59], and weighting the saliency of parameters in Drosophila [54]. This means that much smaller brains, and presumably quite different architectures, are also capable of "higher", cognitive functions. Studying and understanding such minimal solutions is presumably simpler than studying complex mammalian brains; it can help to understand basic principles and may therefore finally also help to understand the functioning of complex brains. Following this approach we study, in the words of R. Beer [2], minimal cognition, i.e., small networks showing properties that might be called cognitive. This means that we will concentrate on networks with one unit representing one item. This might be considered too "localistic", because destruction of one such unit would destroy the memory for this item, which is in
contrast to biological, at least vertebrate, brains [36]. However, we decided to live with this drawback in order to meet another goal, namely to develop a system that will hopefully be able to cope with higher cognitive functions. Quiroga et al. discuss to what extent such a localist view might also be justified for vertebrate brains [36]. Another aspect of the organisation of memory is that the functional elements of brains are usually considered to consist of, for example, innate processes on the one hand and of a "memory" on the other. This separation is supported by the computer metaphor, but could be misleading when the goal is to understand the corresponding biological systems. Because it is often difficult to draw a clear line between both aspects, Fuster [15] proposed the terms "species memory" for the innate structures and "individual memory" for the acquired ones; thus, both aspects are termed "memory". As such a distinction is hardly possible, in particular with respect to procedural memory, we too will regard the complete brain as a memory system in the following.

Basically, we assume that the memory system consists of modules which we call situation models. Situation models are small neural networks, which usually contain both sensory elements and motor elements. Applying this alternative and, as we believe, fruitful view means that already the structure of Walknet, as described in detail in the second chapter, can be considered a (procedural) memory that consists of a number of situation models. Modules such as swing-net, stance-net or target-net, for example, can be considered innate situation models. Furthermore, Walknet, although described as a reactive controller, already contains motivational units, in this case for swing and for stance, which are represented by the output units of the selector net. Another property already represented in Walknet concerns the fact that in a typical memory system the individual memory elements, i.e., situation models, are not stored completely independently of each other; there are higher-level connections which represent contextual relations. In this way, also in Walknet, the modules are (sparsely) connected, for example by the connections representing the coordination rules, or by local competitive winner-take-all connections such as exist between the swing-net and the stance-net within each leg controller (a minimal sketch of such a coupling is given below).

In order to expand the structure given by Walknet to include further and more explicit cognitive elements, we have to expand the memory. According to our approach, this means that the system has to be able to (autonomously) construct additional situation models. Such a system has to be able to store new situations, which in turn requires detecting whether the actually given situation is new or not, and to recall already stored situations if appropriate. To this end we developed a general structure, as will be described in section 4.4.3. The following parts are organized as follows: after a motivation of why we use RNNs as basic elements for situation models, we will very briefly sketch the basic structure of the complete framework proposed. Then we will show some specific types of RNNs that can easily be constructed by means of explicit computation of the weight matrices, and which show interesting properties with respect to perception and to the control of behaviour.
In the following section we will then introduce a local learning algorithm that is able to train such individual RNNs from scratch.
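As announced above, the local winner-take-all coupling between the swing-net and the stance-net of one leg can be illustrated with two mutually inhibiting motivation units. The weights below are arbitrary illustrative choices, not Walknet's actual parameters:

import numpy as np

def winner_take_all(bias, w_self=1.2, w_inhib=-1.5, steps=50):
    # Two static units with self-excitation and mutual inhibition; the unit
    # receiving the larger bias (e.g., ground contact favouring stance)
    # suppresses the other one.
    W = np.array([[w_self, w_inhib],
                  [w_inhib, w_self]])
    a = np.zeros(2)
    for _ in range(steps):
        a = np.clip(W @ a + bias, 0.0, 1.0)   # piecewise-linear activation
    return a

print(winner_take_all(np.array([0.3, 0.5])))  # -> [0. 1.]: stance wins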
4.4.2 Recurrent Neural Networks
Different methodological approaches are used to simulate cognitive properties. Here we concentrate on architectures based on artificial neurons, because these are subject to similar general constraints as the biological solutions and are therefore, at least in principle, transferable to them. Further, we concentrate on recurrent neural networks (RNNs), because these show a much richer behaviour than feedforward networks, and because recurrent networks comprise the general case that includes feedforward solutions. It should be mentioned here that feedforward networks, a subset of RNNs, may also show interesting behaviour when embodied and controlling a situated system, because a recurrent system is then created by closing the loop through the world. In fact, some of the situation models that are part of Walknet comprise feedforward nets. The interesting aspect of recurrent systems is their ability to show dynamic (in the sense of time-dependent) behaviour. Indeed, any dynamic behaviour can be represented by an RNN [24], RNNs being Turing-machine equivalent (see chapter 5). In many approaches continuous-time recurrent networks (CTRNNs) are considered. These are RNNs where already the individual units possess dynamic properties, usually those of a first-order low-pass filter. Such properties can also be simulated by appropriately connecting two static units instead of using one dynamic unit (see the sketch at the end of this section). Therefore, we simplify our approach by sticking with static units, which does not restrict the generality of the solutions. As mentioned, RNNs can show any complex behaviour (in fact, brains are considered RNNs), but it is completely open what the interesting architectures may look like. As in a general RNN each unit could in principle be connected with every other unit (including itself), this open question may also be formulated as the question of which arrangements of the weights in the connection matrix are interesting. For a net with n units the search space is n²-dimensional, not considering nonlinear expansions. How is it possible to find interesting solutions in this large search space? One way is to invent rules that allow networks with selected properties to be constructed. The other is to apply learning algorithms that lead to solutions for a given task. Whereas the latter may provide a specific solution, but not a general principle, the former has the drawback that, when applied to biological systems, it does not provide a way to stabilize the solution, once found, against arbitrary disturbances. This is, however, important, because a critical issue of RNNs is their inherent tendency to become unstable. Of course, another drawback of the construction approach is that the results are limited by the imagination of the researcher. Nevertheless, we will use this approach as a starting procedure and will later consider learning algorithms that are suited to stabilize the solutions found.
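The replacement of a dynamic unit by static units with a recurrent connection can be stated explicitly. A first-order low-pass filter τ·dy/dt = u − y, discretized with time step Δt, becomes a static unit whose feedback connection (closed directly or via a second static unit) has weight 1 − Δt/τ and whose input weight is Δt/τ. A minimal numerical sketch with illustrative parameter values:

import numpy as np

def lowpass_via_static_unit(u, tau=5.0, dt=1.0, y0=0.0):
    # Discretized first-order low-pass filter tau*dy/dt = u - y, realised as
    # a static unit with recurrent self-weight (1 - dt/tau) and input
    # weight dt/tau.
    w_self, w_in = 1.0 - dt / tau, dt / tau
    y, trace = y0, []
    for u_t in u:
        y = w_self * y + w_in * u_t     # one recurrent update per time step
        trace.append(y)
    return np.array(trace)

# A unit step input is smoothed exponentially, as expected for a low-pass:
print(lowpass_via_static_unit(np.ones(10))[:3])   # -> [0.2, 0.36, 0.488]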
4.4.3 Memory Systems
Proposing a possible architecture for a memory system, we follow the idea that a memory consists of a collection of small networks ("engrams"), each representing specific information concerning a situation in the world. This approach may be disputed as suffering from the problem of combinatorial explosion. However, there are experimental results supporting this assumption [12] (for further discussion see also [36]). Furthermore, the examples given below will show that even simple systems can show quite rich behaviour.
In any case, the approach is in line with our goal of searching for minimal cognitive systems. Such an architecture poses several general questions.

1. How can these small RNNs be constructed, and how can they be stabilized against internal disturbances (e.g., thermal noise) and against external disturbances given by continuous changes in the sensory world? This is often called the stability-plasticity dilemma.
2. How can many of these RNNs be arranged within a memory system such that retrieval is possible, i.e., that an externally given situation can be recognized? In other words, how can this situation activate (only) the appropriate net?
3. How can new situations be detected as new?
4. How can information representing this new situation then be stored, i.e., how can a new network be constructed that represents this (new) situation?

Problems that arise after these questions have been solved concern the formation of associations between stored information packages. This includes the formation of chunks, i.e., the connection of different information packages to form one unit, and the ability of framing, which refers to the fact that different information packages may be connected depending on a specific context and that this connection may depend dynamically on the actual context.

We want to briefly provide a rough idea of the general architecture. Fig. 4.2 shows its main parts. The basic architecture consists of three parts: 1. a preprocessor, 2. a simple feedforward network, called the distributor net, and 3. a fully recurrently connectable net for representing the situation models to be stored; a skeleton of this arrangement is sketched in the code below. These three parts will be treated in the following one by one. Any neural network used as a memory system requires, of course, some sensory input. For the investigation of complex tasks, we require a preprocessing system that contains "object recognizers", which, for example, are able to detect a tree, a car, or a person, and "parameter recognizers", which are able to detect real-valued parameters such as the position of an object (relative to some egocentric or allocentric coordinate system), the velocity of an object, or the distance between objects. To cope with simple motor tasks, the preprocessor mainly contains sensors for the angular positions of leg joints, for ground contact or for load, for example. Such preprocessing systems (Fig. 4.2, lower section) will be considered as given and not discussed here in more detail. On the right-hand side, there is a potentially fully connectable RNN that is ready to store the information characterising the situation models. Fig. 4.3 illustrates a simple case where three such situation models have been realised, each consisting of some recurrently connected units. These units may or may not contain connections to motor output. On the left-hand side, there is a "distributor net". It receives as input the stimuli recorded via the sensors and the preprocessor network. The connections between the distributor net and the situation models (termed v-synapses) have to be learnt to provide the appropriate sensory input to each situation model. It might be mentioned here that the architecture shown in Fig. 4.2 has some, maybe not only superficial, similarities to the connectivity of insect mushroom bodies [22].
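The following skeleton (ours; all class and variable names are our inventions for illustration) shows one way the three parts could be wired together in code: preprocessed input enters a distributor with weights $v_{kl}$, which feeds a large, mostly zero recurrent net whose non-zero blocks are the individual situation models.

```python
import numpy as np

# Skeleton (ours) of the three-part architecture of Figs. 4.2 and 4.3.
class MemoryArchitecture:
    def __init__(self, n_inputs, n_units):
        self.V = np.zeros((n_units, n_inputs))   # v-synapses (to be learnt)
        self.W = np.zeros((n_units, n_units))    # recurrent weights w_ij
        self.x = np.zeros(n_units)               # unit activations

    def add_situation_model(self, unit_ids, weights):
        """Install one small RNN (an 'engram') on a subset of the units."""
        self.W[np.ix_(unit_ids, unit_ids)] = weights

    def step(self, stimulus):
        a = self.V @ stimulus                    # distributor output
        self.x = self.W @ self.x + a             # recurrent update plus input
        return self.x

# e.g., a two-unit situation model occupying units 0 and 1:
net = MemoryArchitecture(n_inputs=4, n_units=6)
net.add_situation_model([0, 1], np.array([[0.0, 1.0], [1.0, 0.0]]))
```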
Fig. 4.2. The general architecture of the network consisting of three elements: preprocessor, distributor and situation models. A, B, C, D represent the input to the system. The distributor net is characterized by the weights $v_{kl}$; the recurrent network with the weights $w_{ij}$ contains the situation models. Most of the latter weights have a value of zero (see Fig. 4.3).
Fig. 4.3. The general architecture of the network consisting of three elements: preprocessor, distributor and situation models. Three situation models are explicitly shown.
Concerning the situation models, some of them might be extremely simple, representing innate reflexes. Others, still innate, might be more complex (Walknet, for example, might be considered such an innate situation model). There are situation models for static situations, for algebraic rules, for representing the kinematics of one's own body including nearby objects and the kinematics of other bodies, as well as for dynamic situations like falling bodies or a damped pendulum, and even for sequences of pictures. Other models may represent more distant spatial situations (e.g., landmarks). One and the same situation model might be used for perception (some only for this), for action, or for internal manipulation ("Probehandeln"), as indicated by many observations in neurobiology and psychology [25, 8, 17]. Figs. 4.13 and 4.15 show in more detail how the architecture will be expanded by the planning system. In the following, three types of RNNs are described and explained that could be used to represent specific situation models: the so-called MSBE nets, DE nets and MMC nets. All of them come in linear and in non-linear versions.
Further, some concrete examples will be given concerning possible applications of such RNNs in section 4.4.5. For some of these networks, simple learning algorithms are available; these will be explained in section 4.4.6. In the final sections we will show, using Walknet as an example, how these elements could be combined to form a complete memory structure allowing for reactive as well as cognitive abilities.
4.4.4 Recurrent Neural Networks
4.4.4.1 MSBE Nets
A very simple way to construct an RNN is to start with a basic equation

$$\sum_{i=1}^{n} c_i x_i = 0.$$

This basic equation can be solved for every variable $x_i$, leading to the system of equations

$$x_i = \frac{-1}{c_i} \sum_{j \neq i} c_j x_j, \quad i = 1 \text{ to } n.$$

These equations can be read as describing an RNN with $n$ units, the activation (at time $t+1$) of which is given by $x_i$, which in turn is determined by the activations of all units $x_j$ at time $t$. The weights are

$$w_{ij} = \frac{-c_j}{c_i} \quad \text{for } i \neq j$$

and $w_{ij} = 0$ for $i = j$, i.e., for the diagonal weights. Fig. 4.4 shows an example of a net with $n = 3$ units, including the corresponding equations. The $a_i$ stand for the external inputs that set the activation of the units. For example, any input vector $a$ might be given to the net for a limited time, for example for one iteration at time $t = 0$, and then set back to zero.
Fig. 4.4. A linear recurrent network consisting of 3 units; $a_i$: input vector, $x_i$: output vector, $w_{ij}$: weights.
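To make the construction concrete, the following minimal sketch (ours) builds the weight matrix for an assumed coefficient vector $c$ and lets the net relax after a one-shot activation. It already uses the positive diagonal (damping) weights introduced in the next paragraph, since without them the component of the activation perpendicular to the solution subspace need not decay for $n = 3$.

```python
import numpy as np

# Minimal sketch (ours): an MSBE net from the basic equation sum_i c_i x_i = 0.
c = np.array([1.0, 2.0, 3.0])        # assumed coefficients, all non-zero
n = len(c)
d = 1.0                              # damping factor (see also chapter 5)

W = np.array([[-cj / ci for cj in c] for ci in c])
np.fill_diagonal(W, 0.0)             # w_ii = 0 in the undamped construction
W = (W + d * np.eye(n)) / (d + 1.0)  # w_ii = d/(d+1), others divided by d+1

x = np.array([1.0, -0.5, 2.0])       # arbitrary activation a at t = 0
for t in range(60):
    x = W @ x                        # external input is zero afterwards

print("final activations:", np.round(x, 4))
print("residual of basic equation:", round(float(c @ x), 6))   # ~ 0
```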
For any given set of $c_i$ ($c_i \neq 0$) and any positive factors $d_i$ (see below), the RNN shows stable behaviour over time. This means that after any arbitrary activation $a_i$ of the units at time $t = 0$, the net will asymptotically approach a solution (activation vector $x$) that fulfils the basic equation. Because this type of RNN is characterized by calculating Multiple Solutions of a Basic Equation, we call it an MSBE net. We will explain below what information can be stored in such a network. A simple extension of such nets is to introduce non-zero, positive diagonal weights ($w_{ii} > 0$). This can be done by introducing $w_{ii} = d_i/(d_i + 1)$ with $d_i \geq 0$ and dividing all other weights of this unit by $(d_i + 1)$.
This extension only influences the dynamic behaviour of the net by slowing it down and thus protecting it from possible oscillations [28] (such oscillations may, for example, arise as artefacts resulting from the discrete nature of the simulation). Non-zero diagonal weights introduce low-pass filter properties, or damping, into the system, with the time constant $\tau$ of the low-pass filter corresponding to $\tau = d - 1$. Note that apart from the special case $c_i = 1$, the weight distribution is non-symmetric ($w_{ij} \neq w_{ji}$, because $w_{ij} = 1/w_{ji}$), in contrast to that of Hopfield nets. Asymmetry can also be produced by applying different factors $d_i$. Detailed mathematical investigations concerning the stability and, in particular, the selection of appropriate values for the parameter $d$ are given in chapter 5. An RNN can correspondingly be constructed using non-homogeneous basic equations. In terms of ANNs, this leads to a neural network with a bias unit (see, e.g., Fig. 4.8). MSBE nets can be extended by non-linear activation functions. The activation function (e.g., a threshold, a rectifier, a Fermi function), as used in the theory of neural networks, describes how the output of a unit is transformed before it is passed to other units. Such networks reach stable states if the slope of the non-linear function decreases with increasing (absolute) input values (e.g., a Fermi function, or $\mathrm{sign}(x)\cdot\sqrt{|x|}$). If this slope increases (e.g., $x^3$), the net is unstable (for $|a_i| < 1$).

4.4.4.2 DE Nets

Another way of constructing an RNN is to start with a linear differential equation and transform it into a corresponding weight matrix. This is possible for any linear differential equation of arbitrary order [32] and for nonlinear differential equations if the higher derivatives remain linear. The order determines the number of units required. In the following, the differential equation of a damped oscillator will be used as an example. The dynamics of a mass-spring pendulum, i.e., its position and its velocity changing over time, are given by the differential equation

$$\ddot{x} = -\omega^2 \cdot x - r \cdot \dot{x}$$

with $\omega$ representing the angular frequency and $r$ being a measure of the friction. Following the Störmer-Verlet method [27], we substitute $\dot{x} = v$ and obtain the two equations

$$\dot{x} = v \quad \text{and} \quad \dot{v} = -\omega^2 \cdot x - r \cdot v.$$

Formulation of the discrete derivative gives $\Delta x_t = v_t$ and $\Delta v_t = -\omega^2 \cdot x_t - r \cdot v_t$, leading to the equations (with $dt = 1$ corresponding to a small, appropriately chosen time step)

$$x_{t+1} = x_t + v_t$$
$$v_{t+1} = -\omega^2 \cdot x_t + (1 - r - \omega^2) \cdot v_t.$$

These equations can be described by the matrix

$$\begin{pmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ -\omega^2 & 1 - r - \omega^2 \end{pmatrix} \tag{4.1}$$
being applied to the network depicted in Fig. 4.4 for $n = 2$. Because these RNNs are derived from differential equations, they are called DE nets here. Simple dynamic behaviour such as that of a low-pass or a high-pass filter can also be modelled correspondingly, because such filters can be described by differential equations, too.
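As a concrete illustration, the following minimal sketch (ours) implements matrix (4.1) and shows the decaying oscillation of the two units representing position and velocity; the parameter values are assumptions for the demo.

```python
import numpy as np

# Minimal sketch (ours) of the DE net (4.1): two units holding position x
# and velocity v of a damped mass-spring pendulum; dt = 1 per iteration.
omega, r = 0.2, 0.1                  # angular frequency and friction
W = np.array([[1.0,       1.0],
              [-omega**2, 1.0 - r - omega**2]])

state = np.array([1.0, 0.0])         # released at x = 1 with v = 0
xs = []
for t in range(200):
    state = W @ state                # one network iteration
    xs.append(state[0])

print("first positions:", np.round(xs[:5], 3))
print("amplitude decays:", abs(xs[1]) > abs(xs[-1]))   # damped oscillation
```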
The above-mentioned introduction of non-zero diagonals is an implicit way of introducing low-pass filter properties. Therefore, there is no strict separation between MSBE nets and DE nets. The dynamics of an MSBE net are those of a simple low-pass filter and are introduced to avoid oscillations. In DE nets, the dynamics may be more complex, and the nets are constructed explicitly to show this behaviour.

4.4.4.3 MMC Nets

The third group of constructed RNNs described here are the so-called MMC nets. They can be understood as arising from a combination of several MSBE nets that share common variables. The idea underlying the construction of an MMC net is explained by applying it to a vector system that describes the geometrical arrangement of a human arm. This arm contains three segments (upper arm $L_1$, lower arm $L_2$ and hand $L_3$) and is for simplicity constrained to a 2D plane (Fig. 4.5). The vectors are defined in a Cartesian coordinate system $x/y$ with its origin situated at the shoulder. A fourth vector, $R$, points from the origin (shoulder) to the tip of the hand. This arrangement would permit the construction of an MSBE net using the basic equation $L_1 + L_2 + L_3 - R = 0$.
Fig. 4.5. A 2D arm consisting of three segments $L_1$, $L_2$ and $L_3$. Vector $R$ determines the position of the end effector. Diagonals $D_1$ and $D_2$ are used to define the MMC net.
However, we introduce two further vectors, the diagonals $D_1$ and $D_2$. As there are two quadrangles and three triangles, this geometrical situation allows the definition of further basic equations, leading to a system of basic equations with common variables. In the following, as an example, we use four (instead of five) basic equations that share the variable $L_3$. Solving each of them for $L_3$ gives:

$$L_3 = -L_2 - L_1 + R$$
$$L_3 = L_1 + D_2 - D_1$$
$$L_3 = -L_2 + D_2$$
$$L_3 = R - D_1$$

The idea behind the construction of an MMC net is to compute several equations for one variable (in our example the four equations above), then compute the mean value, and use this mean value of the variable for the next iteration. This is done for all variables (six in this example). Therefore, these nets are termed Mean of Multiple Computation (MMC) nets.
Of course, both computations can be lumped together in one equation, leading to an equation system for the complete arm:

$$L_3 = -0.5L_2 + 0.5R + 0.5D_2 - 0.5D_1$$
$$L_2 = -0.5L_3 - 0.5L_1 + 0.5D_2 + 0.5D_1$$
$$L_1 = -0.5L_2 + 0.5R - 0.5D_2 + 0.5D_1$$
$$R = 0.5L_3 + 0.5L_1 + 0.5D_2 + 0.5D_1$$
$$D_2 = 0.5L_3 + 0.5L_2 - 0.5L_1 + 0.5R$$
$$D_1 = -0.5L_3 + 0.5L_2 + 0.5L_1 + 0.5R \tag{4.2}$$
As an artificial neuron can only represent scalar values but not vectors, the system (4.2) has to be implemented twice, once for each of the two vector components (x, y). Each of these nets has the same weights as given in (4.2). Damping parameters can be introduced arbitrarily, as explained above. The corresponding pair of (identical) networks is illustrated in Fig. 4.6 a). One net is shown by full lines, the second by dashed lines. Full circles represent weight values of 0.5, open circles weight values of -0.5, and squares represent diagonal weights (which in equation (4.2) are zero).
Fig. 4.6. MMC net describing the 2D arm as depicted in Fig. 4.5. Left: linear version; right: MMC net with non-linear extensions.
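The following minimal sketch (ours) implements the linear net (4.2), one weight matrix serving both component nets, and uses it as an inverse model by clamping $R$ to a target position; the damping value is our assumption.

```python
import numpy as np

# Minimal sketch (ours) of the linear MMC net (4.2). Variable order:
# L1, L2, L3, R, D1, D2; each row of `state` holds one 2D vector, so the
# same weight matrix serves the x- and the y-component net. Clamping R
# stands in for an external input suppressing its recurrent channel.
W = 0.5 * np.array([
    [ 0, -1,  0, 1,  1, -1],   # L1
    [-1,  0, -1, 0,  1,  1],   # L2
    [ 0, -1,  0, 1, -1,  1],   # L3
    [ 1,  0,  1, 0,  1,  1],   # R
    [ 1,  1, -1, 1,  0,  0],   # D1
    [-1,  1,  1, 1,  0,  0],   # D2
])
d = 1.0
Wd = (W + d * np.eye(6)) / (d + 1.0)

# stretched arm as start; then a new end-effector target is clamped into R
state = np.array([[1, 0], [1, 0], [1, 0], [3, 0], [2, 0], [2, 0]], float)
target = np.array([2.0, 1.0])
for t in range(300):
    state = Wd @ state
    state[3] = target                       # external input for R

L1, L2, L3 = state[0], state[1], state[2]
print("L1+L2+L3 =", np.round(L1 + L2 + L3, 3), "(should equal R =", target, ")")
```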
As mentioned, MMC nets can be considered as consisting of several MSBE nets whose common variables are combined by calculating their mean value before feeding them back for the next iteration. Similar to MSBE nets, such an MMC net, after a disturbance, always relaxes to a solution that is geometrically possible. Extensions to arbitrarily complicated geometries are easily possible [51]. This is also true for extensions to more than two dimensions: each additional dimension requires a further net with, however, identical weights given by the vector equations (4.2).
Positive diagonal weights can be introduced for damping the dynamics as described above, but they are less important here because the calculation of the mean value already attenuates the oscillations. When applying this net to represent the kinematics of a body (e.g., an arm), there is the drawback that, in the linear version, the lengths of the segments may vary during relaxation. This is not a problem for the vectors $R$, $D_1$ and $D_2$. However, the vectors $L_1$, $L_2$ and $L_3$ should maintain their lengths in the application. This is possible by adding non-linear characteristics to the feedback channels, which at the same time allows for explicitly defining variables that correspond to joint angles. Therefore, the input (and output) vector of this non-linear version contains the components of vector $R$, i.e., the position the arm should point to, and the three joint angles $\alpha$, $\beta$ and $\gamma$ (Figs. 4.5, 4.6 b). When such an external input is given, the signals provided by the recurrent internal channels have to be suppressed (indicated by open triangles in Fig. 4.6), which can be done in different ways, two of which are shown in section 4.4.6 (a convergence proof exists only for the linear version [51]).
4.4.5 Applications
4.4.5.1 MMC Nets

A network as shown in Fig. 4.6 b) can be used to control the actual movement of an arm by applying the $\alpha$, $\beta$ and $\gamma$ output values as motor commands (or as reference inputs to lower-level feedback systems). In addition, this net can also be used to internally simulate movements of the body (arm) when the connection to the motor system (i.e., the output values of the net) is switched off and only the internal feedback channels remain active. (Note that the recurrent connections of the net as shown in Fig. 4.6 should not be interpreted as corresponding to the external (sensory) feedback loops; the latter may be provided via the external input channels.) Thus, the system may be considered as a basis for planning movements. There are also hints that such a system may be used for perception [46]. This network can be used as a forward model of the arm (input: $\alpha$, $\beta$, $\gamma$; output: $R_x$, $R_y$) as well as an inverse model (input: $R_x$, $R_y$; output: $\alpha$, $\beta$, $\gamma$), depending on the input vector applied. Furthermore, it can be used as any mixed model (e.g., input: $R_x$, $\alpha$; output: $\beta$, $\gamma$). As the external input $R_x$, $R_y$ can be considered as being provided via visual input, and as the external inputs for $\alpha$, $\beta$ and $\gamma$ may arise from sensors measuring the actual joint angles, this network can also be described as a solution to the sensor fusion problem. Indeed, erroneous angle values would be corrected by the network such that the net relaxes to a solution that is geometrically correct. A further interesting aspect is that the neuronal units of this recurrent network cannot be classified as typical motor units or typical sensory units. Rather, in this "holistic" network such a separation is not possible, which very much resembles the properties of the canonical neurons or the mirror neurons described in several parts of the primate brain [39, 38, 12]. A simpler, namely linear, version of an MMC net will be briefly mentioned that can be used for navigation relative to known landmarks (Fig. 4.7). Again, the situation can be described by vectors connecting (in the example chosen) three visually recognizable landmarks and the home (or nest) that cannot be seen (i.e., there are six vectors in total). The three vectors connecting the landmarks (respectively their x and y components)
are stored in the memory (in the form of weight values), forming an MMC net in such a way that the output corresponds to those vectors that point from each landmark to a common position in space. This position could be any point: one being the actual position of the agent, another could be the position of the home. When the agent moves in its world, it has to determine the three vectors pointing from the agent's position to the landmarks. These vectors are then given as input to the MMC net for several iterations until, following this "disturbance", the net has relaxed to this solution. Then, for one iteration, the vectors $S_i$, which are also stored in the memory, are given as input instead of the vectors $P_i$. Following this new disturbance, the net relaxes to a state such that the output values ($PO_i$) point to a position in between the actual position of the agent and the home position. Then the difference between the output vector $PO$ and the sensory input $S$ can be used to drive the motor output. This difference is zero when the agent is at its home position. As has been shown by Cruse [9], this network is able to describe several results found in behavioural experiments. Two simulation examples are given in Fig. 4.7, lower panel. As shown in top view, the animal (small dots) approaches the invisible home position (or a food source, for example) by looking at the landmarks (large dots). The home, in this case positioned exactly between the three landmarks, is approached in a direct way. On the right-hand side, an experiment is shown where the position of the three landmarks has been changed after learning. As has been shown by several experiments with ants [60] and rodents [7], in this case the animals still approach the home position, but show a considerably broader distribution of searching positions, as is replicated by our simulation. A simple version of a network solving this task is given below (section 4.4.6).

4.4.5.2 DE Nets

As for MMC nets, RNNs constructed from differential equations can be used to drive motor output, for example network (4.1) to rhythmically move a limb. However, they may also be used to recognize rhythmic movements of objects seen in the world. If, for example, a pendulum is observed, i.e., the preprocessor provides position and velocity values, the corresponding situation model may be simulated and thereby the memory representing the properties of a pendulum be activated. Instead of using sensory input measuring the position or velocity of an object, there might also be acoustic sensors that register the intensity and the change of intensity of a tone in order to register an acoustic rhythm or, as another example, the frequency of a tone and its variation in time. This might be relevant for the recognition of cricket sounds, for example, and used for acoustic orientation.

4.4.5.3 MSBE Nets

What is a possible application of the simple MSBE nets? The simplest interpretation of, for example, a three-unit net is to consider it as a representation of the one-dimensional geometrical arrangement of three objects, say John, Mary, and a ball. The activations of the units describe the positions of these objects. Therefore the net contains the information of their relative distances (e.g., whether the ball is nearer to John or to Mary). More generally, such a net might not represent a geometrical distance, but instead may represent how tightly, in an abstract way, two objects (e.g., Mary and the ball) are related.
Fig. 4.7. Top left: Schematic arrangement of three landmarks $M_1$, $M_2$, and $M_3$, a source location $S$, and the actual position of the animat $P$. Vectors ($M_{nm} = -M_{mn}$) connect the landmarks, vectors $S_n$ connect the source with the landmarks, and vectors $P_n$ connect the actual position of the animat with the landmarks. Top right: The network representing the situation depicted on the left. Only the net for one component (x or y) is shown. Input values are $P_n$ or $S_n$; output values are $PO_n$. Lower panel: Two simulations of animals that find the home position, situated in the middle between the three landmarks (large dots), top view. On the left, the animal, starting in the upper left corner, approaches the home position in a direct way, as has been observed in the basic biological experiment. On the right, an experiment is simulated where, after learning, the landmarks have been moved from the training position (open circles) to the position marked by large closed circles. As in the biological experiments, the position of the animal shows a broader distribution. The lower part shows the temporal development of the harmony value of the network (the harmony value represents the relaxation level of the network; a net that is fully relaxed has a harmony value of one), which can be interpreted as influencing the activation of a place cell as found in the hippocampus of rodents.
In connection with a DE net, the movement of objects (e.g., the ball moving from John to Mary) can be represented. Furthermore, such a net might also be used to represent simple algebraic relations. For example, if a net represents the basic equation $x_1 + x_2 - x_3 = 0$, and the net receives, via the preprocessor and the distributor, any numbers for $x_1$ and $x_2$ as input, the output will relax to the correct answer, i.e., will provide the appropriate value for $x_3$. The corresponding weight matrix is:
$$\begin{pmatrix} 0 & -1 & 1 \\ -1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} \tag{4.3}$$
As for the MMC net, any combination of input values is possible. If matrix (4.3) is applied to the network of Fig. 4.4, providing $x_1$ and $x_2$ as input calculates the sum $x_1 + x_2$, whereas providing $x_2$ and $x_3$ as input and reading $x_1$ as output calculates the difference $x_3 - x_2$, for example. MSBE nets can also be used to store abstract rules, for example the representation of patterns such as A, B, B. In other words, this pattern can be described by the rule that $x_1$ can take any value, but $x_2 = x_3$. An appropriate matrix for this case is, for example,

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{pmatrix} \tag{4.4}$$

Another, non-geometric interpretation of such networks is that they may form the basis of Pavlovian learning paradigms. After learning, a conditioned stimulus (CS) can predict an unconditioned stimulus (US) and elicit a response without the US being necessary. This can be represented by the simplest version of such an RNN, consisting of just two units (Fig. 4.8), one standing for the US ($a_1$) and the other for the CS ($a_2$). Although it appears at first sight to be overdone to use an RNN for this simple task, it can be shown that, using this approach, a large number of complex Pavlovian paradigms can be represented in neuronal terms (Cruse, H. and Sievers, K., in prep.).
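As a concrete illustration of the algebraic use of matrix (4.3), the following minimal sketch (ours) clamps $x_1$ and $x_2$ and lets $x_3$ relax to their sum; the damping value is an assumption.

```python
import numpy as np

# Minimal sketch (ours): the MSBE net with matrix (4.3), representing
# x1 + x2 - x3 = 0, used as an addition machine by clamping x1 and x2
# (the clamping stands in for input via preprocessor and distributor).
W = np.array([[ 0., -1., 1.],
              [-1.,  0., 1.],
              [ 1.,  1., 0.]])
d = 1.0
Wd = (W + d * np.eye(3)) / (d + 1.0)   # damped version, w_ii = d/(d+1)

x = np.zeros(3)
a1, a2 = 2.0, 3.0                      # external inputs for x1 and x2
for t in range(80):
    x = Wd @ x
    x[0], x[1] = a1, a2                # inputs suppress the recurrent channels

print("x3 relaxed to:", round(float(x[2]), 4))   # -> 5.0, i.e. x1 + x2
```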
Fig. 4.8. A simple, two unit recurrent network with bias
It should be mentioned that for the RNNs discussed here, different kinds of error signals can be defined, as will be explained in detail below (section 4.4.6). This error is zero (or the harmony value is 1, see below) if the net is fully relaxed, meaning that the external input signal corresponds to the eigenbehaviour of the net. Thus, the error value may be used as a measure for the recognition of a situation.

4.4.5.4 Rules of Logic

RNNs can also be used to represent rules of logic, for example the logical AND or the exclusive or (XOR). As in binary logic the units can only take values of zero and one, non-linear activation functions (in this case the Heaviside function) have to be applied. According to Pinkas [35], every (binary) logic expression can be described by an energy function, which can be transformed into an RNN. Examples for AND and for XOR are given in Fig. 4.9.
Fig. 4.9. On the left: AND net; on the right: XOR net. Input units are $a_1$ and $a_2$; output is $x_3$.
Inputs are $a_1$ and $a_2$; the output is $x_3$. Unit $x_4$ in the right figure therefore represents a hidden unit. As can be seen, the matrices are symmetric. Therefore, these networks, containing units with bounded activation functions, are of the Hopfield type.
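The weights of the nets in Fig. 4.9 are not given in the text, so the following sketch (ours, with an assumed quadratic form) only illustrates the Pinkas construction by analogy: the zero-energy minima of the energy function over $\{0,1\}^3$ are exactly the four valid rows of the AND truth table.

```python
from itertools import product

# Sketch (ours; the quadratic form is an assumed example, not the weights
# of Fig. 4.9): E is zero exactly when x3 = x1 AND x2, positive otherwise.
def energy(x1, x2, x3):
    return x1 * x2 - 2 * x1 * x3 - 2 * x2 * x3 + 3 * x3

for x1, x2, x3 in product((0, 1), repeat=3):
    valid = (x3 == (x1 and x2))
    print(f"x1={x1} x2={x2} x3={x3}  E={energy(x1, x2, x3)}  valid={valid}")
# E == 0 exactly for the states where x3 = x1 AND x2; all others have E > 0
```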
4.4.6 Learning
The networks introduced above show interesting properties, but all suffer from the fact that the weights are explicitly computed. Furthermore, all of these nets form neutrally stable systems. This means that a minor deviation of any weight from its ideal value would lead to a deterioration of the net (either unlimited growth or decay to zero of all unit activations). Therefore, an algorithm would be helpful that allows the appropriate weights to be learnt from scratch and that can also stabilize these weights. Traditional approaches are based on error backpropagation procedures such as Backpropagation Through Time (BPTT) or Real-Time Recurrent Learning (RTRL). A fundamental problem when training recurrent nets is that the dynamic behaviour of an incompletely trained net may be very complex, because two dynamics are superimposed: that of the RNN as such and that of the learning procedure, which changes the weight values. Therefore the traditional approaches are based on off-line training. On-line training is possible by using teacher-forcing techniques. In this case the neuron receives one external input, the desired activation, and compares this value with the sum of all internal (= recurrent) input values. The difference (= error) can be used for training the weights, for example using the Delta rule. This algorithm can be applied locally, i.e., within the neuron, and on-line. To this end, the dendritic tree of these neurons has to be partitioned into two regions: one region with a fixed synapse, whose presynaptic neuron belongs to the "sensory" neurons transmitting the external input $a_i$ (Fig. 4.10), and a second dendritic region characterised by the synapses $w_{ij}$, whose presynaptic neurons are components of the recurrent network (Fig. 4.10, left) and are recurrently connected to neuron $i$. Therefore, the activation of a single neuron is determined by an external component $a_i$ and an internal component, the weighted sum of the internal recurrent inputs $s_i$. The weighted sum of the internal recurrent inputs of neuron $x_i$ is given by

$$s_i(t+1) = \sum_{j=1}^{n} w_{ij}(t) \cdot x_j(t)$$

or, for the complete network, $s(t+1) = W(t) \cdot x(t)$. During training, the weights $w_{ij}$ can be changed such that the error $\delta_i(t) = a_i(t) - s_i(t)$ becomes zero. This can be obtained by application of
the learning algorithm $w_{ij}(t+1) = w_{ij}(t) + \Delta w_{ij}$ with $\Delta w_{ij} = \varepsilon \cdot x_j(t) \cdot \delta_i(t)$, where $\varepsilon > 0$ is the learning rate. Application of this rule, which corresponds to the Delta rule, leads to a weight change until $\delta_i(t) = 0$, i.e., until the sum $s_i$ of the weighted recurrent inputs equals the external input $a_i$. To learn dynamic instead of static patterns, a specific learning rule has to be applied, as given in Makarov et al. ([28], equation 10).
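The following minimal sketch (ours; names and parameter values are our assumptions) implements this local Delta rule with teacher forcing for a static pattern: during training every unit's output is clamped to its external input $a_i$, and the recurrent weights change until the internal sum $s$ matches $a$.

```python
import numpy as np

# Minimal sketch (ours) of the local Delta rule with teacher forcing.
rng = np.random.default_rng(0)
n = 4
a = rng.normal(size=n)               # static pattern to be stored
W = np.zeros((n, n))
eps = 0.1                            # learning rate

for t in range(500):
    x = a                            # teacher forcing: output = external input
    s = W @ x                        # summed internal (recurrent) input s_i
    delta = a - s                    # local error delta_i = a_i - s_i
    W += eps * np.outer(delta, x)    # dw_ij = eps * x_j * delta_i
    if np.sum(delta ** 2) < 1e-12:   # stop learning below an error threshold
        break

# recall: with the external input switched off, the net keeps the pattern
x = a.copy()
for t in range(20):
    x = W @ x
print("recall error:", float(np.linalg.norm(x - a)))   # ~ 0
```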
Fig. 4.10. A recurrent network consisting of three units. The circuit characterising the individual unit is shown to the right.
To be able to use these networks as a memory, it is necessary to prevent the network from automatically adapting to each new input situation. Thus, once the synaptic connections have learnt the specific input situation, further learning is stopped. A simple solution is to finish learning after the error $\delta_i$ has fallen below a given threshold, because then the external situation is represented within the network. As an alternative, further learning can be stopped if the summed squared error $E(t) = \sum_{i=1}^{n} \delta_i(t)^2$ of the entire network has fallen below a given threshold. Please note that units as shown in Fig. 4.10 are appropriate for explaining the learning rule. The corresponding net is, however, less interesting, as in the form shown here it cannot be activated by any external input. An extension will be shown below that allows it to receive external activation. This kind of teacher forcing is easily possible when the output of the units is not fed back into the network, because then any inappropriate output values resulting from incomplete training do not influence the behaviour of the net. If, however, the output is to be fed back, an alternative solution has been proposed, which is based on so-called Su units, as explained in the following (Fig. 4.11). As in the case of Fig. 4.10, the dendritic tree of these neurons is partitioned into two regions. Furthermore, each individual neuron $x_i$ ($i = 1$ to $n$) is equipped with a special internal structure (Fig. 4.11, right) described in the following. As indicated above, a major problem when training RNNs is that the dynamics of the network are superimposed by the dynamics of the learning procedure [24]. Both dynamics could, however, be separated if, during training, the overall output $x_i$ always equalled the external input (i.e., $x_i = a_i$), independent of the actual learning state, i.e., independent of the actual values of the weights $w_{ij}$. This can be achieved if Su units are used. Fig. 4.11 shows such an Su unit, in which the output corresponds to the external input ($a_i$) if this is different from zero, or equals the recurrent input ($s_i$) if the external input is switched off (i.e., $a_i = 0$). In other words, the external input overwrites, or suppresses, the recurrent signal whenever a nonzero external input exists.
Fig. 4.11. Left: Network with Su units. On the right, the Su unit is shown.
This unit maintains its activation if, after complete learning, the external input is switched off. Therefore, RNNs with Su units can be used as memory elements. Application of the learning procedure shows that it is possible to train one vector with n components or more (
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0.5 & 0.5 \\ 0 & 0.5 & 0.5 \end{pmatrix}$$
This matrix allows a simple geometrical interpretation: in 3D space, the solution of the task is given by a 2D plane with $x_2 = x_3$ that is perpendicular to the $x_2$-$x_3$ plane, taking any value of $x_1$.

Non-linear Versions

Networks have been investigated whose Su units are expanded by different non-linear activation functions $g(x)$ (e.g., $x^3$, $\mathrm{sign}(x)\cdot\sqrt{|x|}$, $\tanh(x)$, the Heaviside function). This non-linear function was applied either to the summed input value $s_i$ (case a) or to the output value $x_i$ before the output value was fed back to recurrently activate the units via the weights $w_{ij}$ (case b). Training the nets, i.e., reaching a stable weight matrix, was possible for all cases investigated. For case b, learning converges if $g(x) \cdot x > 0$; for case a, this is the case if $g(x)$ is a monotonic function and $g(g(x)) \neq 0$ [28]. Equipped with the necessary non-linear characteristics, these networks could also learn the behaviour described by non-linear differential equations, for example that of a van der Pol oscillator [27].

Learning the Landmark Net as an Example for Learning Linear MMC Nets

As described in section 4.4.4, a simple, linear MMC net was derived that can be used for landmark navigation (Fig. 4.7). The underlying equations can be rewritten as

$$P_1 = (d \cdot P_1 + P_2 + P_3 - (M_1 + M_3))/(d + 2)$$
$$P_2 = (P_1 + d \cdot P_2 + P_3 + (M_1 + M_2))/(d + 2)$$
$$P_3 = (P_1 + P_2 + d \cdot P_3 + (M_3 - M_2))/(d + 2)$$

This net can be regarded as an MMC net with bias. Note that the vectors are denoted differently from those of Fig. 4.7. This net cannot be derived from a single basic equation. Instead, the system can be described by two MSBE nets which share one variable, thus forming an MMC net. If the number of to-be-perceived units (objects) does not exceed the number of available neuronal units which can be applied simultaneously, direct learning is possible. If this number is limited, hidden units (better: hidden, unknown relations between units) have to be detected (invented) spontaneously, and later tested for saliency. There are two cases: either a new unit is invented that corresponds to an unknown relation between two already existing units (e.g., in the case illustrated in Fig. 4.12 this might be the "invention" $D_1 = L_1 + L_2$), or the mean value of two units is computed ($P_1^* = (P_{11} + P_{12})/2$), which is a special case of the former. We will continue with the latter case, which is a helpful invention for learning the landmark net. Assume that we have a moving agent equipped with a perception system ("preprocessor") that can recognize no more than three units at a time. (Recall that the units are described here as vectors but, when applied to the neural network, contain two components, the x-component and the y-component, which apply to different, but identical, linear neural networks.)
Fig. 4.12. Left: Vectors describing the position of an agent relative to three landmarks. The vectors are denoted differently from those of Fig. 4.7. On the right: Three units with bias which might represent the landmark situation.
This means that the system has to deal with triangles (e.g., $P_1$, $M_1$, $P_2$) that can be described by a basic equation. Therefore, the corresponding MSBE net can be learnt as described above. This means that the agent, when moving relative to the landmarks as shown in Fig. 4.12, can monitor four triangles and therefore learn four MSBE nets. (Actually, two different, non-degenerate views of the landmarks would be sufficient.) However, after this learning procedure, these four nets are separate and do not yet form one common MMC net. In order to connect these nets, the agent has to detect that some of these units belong together, i.e., represent the same object (= vector) in the world. To find an appropriate pair, the agent spontaneously has to develop new hypotheses, which then have to be tested for being sensible or not. To this end, any two units are connected with a specific two-unit net (termed a hypothesis net, of which many are assumed to preexist). The outputs of this hypothesis net are then fed back to the overall input and are, via Hebbian correlation, connected to the corresponding input channels. In this way, two learnt triangle nets can be connected. If the random selection has by chance found two units which actually describe the same object in the world, there will be no error in the units of the hypothesis net when the complete net is tested with different views of the landmark situation. This means that the hypothesis is correct, and the hypothesis net will be stabilized, thus forming hidden units. If an error is detected for at least one view (there might be specific positions of the agent relative to the landmarks that do not produce an error in spite of the hypothesis being wrong), the hypothesis net will be discarded. In this way, an MMC net consisting of interconnected MSBE nets can be learnt from scratch. Apart from the local version of the Delta rule and the traditional Hebb rule, spontaneous activation of randomly selected weights is applied. The latter concerns the weights of the distributor nets (for more details see [10]).
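To close this section, a minimal sketch (ours) of the landmark net with bias given above: the landmark coordinates are invented for the demo, the $M$ vectors (the bias terms) are derived from them, and the signs follow this geometry. Noisy percepts $P_i$ relax to a mutually consistent set, illustrating the sensor-fusion property of the net.

```python
import numpy as np

# Minimal sketch (ours) of the landmark net with bias (damped MMC net).
A = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])       # landmark positions
M1, M2, M3 = A[1] - A[0], A[1] - A[2], A[2] - A[0]       # stored connections

X = np.array([1.0, 1.0])                                 # true agent position
P = (A - X) + np.random.default_rng(1).normal(0, 0.3, (3, 2))  # noisy P_i

d = 2.0                                                  # damping
for t in range(100):
    P1, P2, P3 = P
    P = np.array([
        (d * P1 + (P2 - M1) + (P3 - M3)) / (d + 2),
        (d * P2 + (P1 + M1) + (P3 + M2)) / (d + 2),
        (d * P3 + (P1 + M3) + (P2 - M2)) / (d + 2),
    ])

print("implied agent position:", np.round((A - P).mean(axis=0), 3))  # ~ X
```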
4.5 Towards Cognition, an Extension of Walknet

After having described the general structure of our memory architecture, consisting of the three parts preprocessor, distributor and situation models, as well as details concerning how to structure and learn specific situation models, in the following we will explain how these concepts may be applied to a concrete case. To avoid investigating a possibly too simple paradigm, we will apply our approach to Walknet, a neural
network that is suited to control a number of complex behaviours such as walking on uneven ground, negotiating curves, dealing with the amputation of a leg and climbing over large gaps. Recall that the performance of these behaviours requires the control of 18 DoFs, which are connected partly in series and partly in parallel. Thus, the architecture allows for a large number of diverse behaviours. In the following parts, the different layers involved will be presented, and for the reactive and the medium layer it will be briefly shown how they address the reactive tasks. For the cognitive layer and the last task, we want to outline our approach and its connection to the lower layers, which is current and future work. Therefore, this part will raise, and not answer, some interesting questions that we will approach in our research project. We are going to emphasize these questions to show how they can contribute to research on cognition in general.
4.5.1 The Reactive and Adaptive Layer
Compared to the architecture proposed by Verschure (see Fig. 4.1), in our architecture the lower level corresponds to the motor primitives of Walknet (stance-net, swing-net), representing simple motor behaviours which can be modulated from a higher level (see chapter 2). The motor primitives can be modulated by changing the velocity (such a modulation can be found in insects to control gaits as well as turning, the latter by adjusting the walking velocity differently on the legs of the two sides of the body) or by changing the target angles of the swing-network for directed movements. Each leg has its own local control module, which decides which behaviour (stance or swing) shall be active, depending on current sensory signals and coordination signals from the other leg controllers; a simplified sketch of this decision is given after Fig. 4.13 below. The controller is called an analog selector [43]. Depending on the position of the leg, on its current state and on the load acting on it, the selector modulates the behaviours, leading to a swing movement or a stance movement. The coordination between the legs is established by some simple local rules which were obtained from experiments on stick insects. These rules can shift the posterior extreme position (at this position the leg ends its stance movement and swings to the front) forward or backward. The result is a shorter or longer stance movement. Walknet is composed of these local control modules: the motor primitives, the selector and the coordination influences connecting the leg controllers. Walknet has been implemented on the robots TARRY IIB and Gregor [44]. The robots produce adaptive and stable walking behaviour (depending on the velocity, using a tripod or tetrapod gait) when controlled by Walknet. Walknet is able to adapt to a variety of tested disturbances. For example, the effect of the most drastic disturbance, the amputation of one leg, has been analysed, and Walknet was able to compensate for it [43]. To be able to compare the architecture of Walknet with our general memory architecture (Fig. 4.2), the structure of Walknet can be redrawn as shown in Fig. 4.13. The lower left part shows the preprocessor, which in this case contains simple sensors such as position sensors monitoring joint angles, or load sensors. The distributor guides this information to the different situation models, such as the stance-nets or swing-nets.
Swing-nets and stance-nets are illustrated by boxes, and some sensory input is indicated (α for the leg position, i.e., the joint angles of a leg; load for the forces developed by the leg, which during stance can be measured via the ground reaction forces). The output affects the muscles, which, via the body kinematics (plus dynamics) and the mechanical properties of the environment, produce new sensory input.
Fig. 4.13. Walknet redrawn to match the scheme provided in Fig. 4.2
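As a rough illustration of the selector logic described above, the following sketch (ours, heavily simplified: the real analog selector is a neural network with graded motivational outputs, not an if-statement, and every threshold here is an invented placeholder) decides between swing and stance for one leg.

```python
# Rough sketch (ours) of a per-leg swing/stance decision.
def select_behaviour(position, ground_contact, load, pep, coordination_shift):
    """Return 'swing' or 'stance' for one leg controller.

    position:           leg position along the body axis (decreases in stance)
    pep:                posterior extreme position, where stance normally ends
    coordination_shift: summed influence of the neighbouring leg controllers,
                        shifting the effective PEP forward or backward
    """
    effective_pep = pep + coordination_shift
    # lift off only when the (shifted) stance limit is reached, the foot has
    # ground contact, and the leg is sufficiently unloaded
    if ground_contact and position <= effective_pep and load < 0.2:
        return "swing"
    return "stance"

# a leg at its PEP but still heavily loaded stays in stance
print(select_behaviour(position=0.0, ground_contact=True, load=0.5,
                       pep=0.0, coordination_shift=0.0))      # -> stance
```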
4.5.2 Cognitive Level
In the following we will describe how this system, which mostly comprises a reactive system in the strict sense but to some extent also depends on inner states (stance, swing), and which is assumed to consist of "innate" structures, can be expanded by learning procedures in different ways. These procedures concern the capability of learning based on "internal trial-and-error", which requires a manipulable model of one's own body and of relevant aspects of the environment. The former may also be described as a prerequisite for the ability to plan ahead. Successful solutions will be stored, which might also be called learning by insight. As a possible later extension, application of the body model may also permit learning using the paradigms of classical (Pavlovian) and operant (instrumental) conditioning, as well as learning by imitation. It may be recalled here that a biological memory is not functionally comparable with the memory as we know it from classical computers. A biological memory is not a passive storage, but is responsible for performing the computations as well. Therefore,
the neural modules termed swing-net and stance-net, for example, can already be considered part of the memory system. Because we regard them as innate, these modules are part of what is called species memory, as opposed to individual memory. The latter contains information acquired by the individual subject. Note that these terms may suggest a morphological and functional separation between both systems. However, this is not the case, as in biological memories both aspects are tightly coupled with respect to morphology and function. As described for the reactive parts, the sensory input values are directly used by the reactive structures swing-net and stance-net. These inputs are, however, in principle also available to a higher-level memory system, depicted in the upper part of Fig. 4.13 (situation models at the top) and of Fig. 4.15 (situation models in the part termed cognitive). There are, of course, also other sensory inputs possible, for example visual or acoustic information. In Figs. 4.13 and 4.15 this is indicated by the massively parallel ascending channels on the left side that are able to transmit sensory information to the upper level (these channels distribute sensory information or actuator signals; therefore, this network is called the distributor, see also Fig. 4.2). Apart from this direct sensory input, there is also information available that the lower-level system computes for its own use. As an example, Fig. 4.13 shows the "AEP" channel which, in this specific case, is computed by the target-net (chapter 2); this net receives as direct input the joint angles of the left middle leg (L2) and computes the leg angles that correspond to the hind leg (L3) position when the latter has reached its AEP. This information, representing a reference value or a goal state for the hind-leg swing controller, may also reach the upper level. There may even be more abstract information available. Examples are the output channels of the selector net, which could be termed swing motivation and stance motivation, respectively. Even more abstract notions, including temporal aspects, could be derived (not shown). All this information is provided to the upper level via the ascending channels plotted at the left side of Fig. 4.13. No specific output channels are required for the upper-level models. This again shows that a strict morphological separation between both systems, as plotted for convenience in the illustration of the extended architecture in Fig. 4.15, is not required. This architecture allows different types of conditioned reflexes and of instrumental learning to be learnt. As an example, imagine that the visual system, evaluated by the preprocessor, records "cluttered environment" (not explicitly depicted in Fig. 4.13). As such, this can be considered a conditioned stimulus (CS) that does not affect the walking behaviour. If, however, the leg often hits an obstacle during swing, and the unconditioned avoidance reflex is therefore elicited, a situation model representing a conditioned reflex will develop, and the visual input alone will already elicit a higher lifting of the legs. In the case of instrumental learning, this may occur as sketched in the following. When, in the cluttered environment, the leg hits an obstacle, this can be considered as a negative reward.
To be able to exploit this information, this reward signal, computed from one or several sensors, has to be available internally, and it may be used to influence the synapses of the distributor termed v-synapses, as described earlier. If we assume that the innate connections, such as, in our example,
the connection to the controller of the β angle in the swing-net, may also be subject to such changes, then a spontaneous change of this v-synapse would lead to a stronger or weaker lifting of the leg, depending on the sign of the spontaneous change. Lowering the leg would not lead to a decrease of the negative reward. If, however, as a consequence of a random search, the v-synapse is changed such that the leg is lifted high enough, the negative reward will decrease and learning will be finished. The change of the v-synapse will also influence the learning of the w-synapses, i.e., the connections within the swing-net (if this is permitted), but the detailed effects of the dynamics in the instrumental learning paradigm still have to be studied, including the effects of extinction and reacquisition. In combination, the latter would lead to an ongoing adaptation to the respective ruggedness of the substrate. The main problem in the case of instrumental learning is how to find in the first place the synapse whose change may lead to success. This is a problem because the search space can be quite large. The search space can, however, be restricted (a) by temporal contiguity (only currently active synapses are subject to changes, as mentioned above) and (b) by starting the search in the "morphological neighbourhood". This means that, in our example, the load sensors responsible for detecting a leg hitting an obstacle during swing have their dendrites arranged in the neighbourhood of the corresponding motoneurones (plus interneurones). Thus, the negative reward potentially eliciting synaptic changes most strongly influences the functionally related connections. Such topological arrangements are generally found in the neural systems of both vertebrates and invertebrates, but are not represented in Fig. 4.13. Such a topological structure could either be provided genetically or be learnt via Kohonen-type learning or, most probably in biological cases, by a combination of both.

In the following we consider a problem that cannot be solved by just changing one or two synapses. We can think of situations where the Walknet controller runs into an impasse. One such scenario might be the following. Assume that one hind leg, say the right hind leg, and the opposite, left middle leg are in stance and near their anterior extreme positions, whilst the left hind leg is in stance, too, but near its posterior extreme position (see Fig. 4.14). To move ahead, the left hind leg now has to be lifted off the ground. However, in such a situation the walker might lose mechanical stability. Therefore, no movement is possible when the walker has to rely on the reactive controller only (for a real insect, which has feet with adhesive structures, or a robot with suckers, such problems occur much more rarely). The following extension could provide a solution. First of all, we need an internal body model that represents the complete kinematics of the walker's body. As shown in Fig. 4.15, this model is implemented above the preprocessor. In our simulation the model is based on an MMC net [41]. During normal walking, this model is connected to the sensory input and thereby walks "passively". It may, however, also have an active role; for instance, it may be involved in sensor fusion, i.e., it might be used to correct erroneous data. How can the body model be applied if a problem as described in the above example happens to occur? In this case, the problem as such first has to be detected.
For this purpose, we require appropriate "emergency" sensors. In our example this might be a sensor which detects sudden changes of body position in space or, on the level of the individual hind leg, an "inner" sensor that detects that the swing movement does not take place although the conditions for changing from stance to swing are fulfilled.
Fig. 4.14. The animal walks into an impasse. The time course while the animal deals with the impasse is shown in 5 time steps from top to bottom. On the left, an animal is depicted that walks under reactive Walknet control. The grey-shaded pictures (third and fourth time steps, see numbers at the right-hand side) show the animal falling over when there is only a reactive control layer. On the right, the intervention of the higher level is depicted, which recognizes the impasse in the second time step, searches for solutions (time step 4) and finally applies a suitable one (time step 5).
Such an emergency signal causes walking to be stopped and the system to search for solutions to this problem. Seen by an outside observer, a possible solution would be for the left middle leg to perform a rearward-directed swing movement in order to take over the load of the left hind leg, because then the left hind leg could be lifted off the ground. Such a behaviour is, however, not possible for the Walknet controller. Given this situation, one possibility is that, for the actual situation defined by the sensory pattern including the "emergency signal", there is already a situation model stored in the memory that provides a solution. This situation model would require input units for the positions of the three legs (and possibly a measure of the load the hind leg is carrying) and, as an output, a connection to the unit that controls the AEP of the left middle leg. A corresponding RNN can easily be designed if the units are equipped with non-linear activation functions (such as rectifiers or Heaviside functions) and a bias unit. This solution is indicated in Fig. 4.15 as a situation model (case A). As a consequence of switching on the emergency signal (not shown in Fig. 4.15), the channels ascending to the upper level will be opened, and thereby the situation model which matches the actual input vector will be activated.
Fig. 4.15. The control network including the body model and the higher level situation models
If no appropriate solution is found in the memory, a new situation model has to be constructed. In this case, as no appropriate behaviour is known yet, a search for a new solution has to be initiated first. To this end, we need the body model, which has to include a few aspects of the environment that are critical for the actual situation. The body model is now used for "imagined walking". To this end, the motor output to the standing body, indicated by the switch in Fig. 4.15, has to be switched off and instead be directly connected to the body model. This means that the Walknet controller now drives the body model instead of the real body. This internal system is used to find solutions to the problem: via trial-and-error, different behaviours are tested using the internal body model (plus some limited information concerning the shape of the substrate). To narrow the search space, already available microbehaviours, for example those provided by the swing-net, will be
used, and only parameters will be changed (e.g., the AEP of a selected swing-net, or the exploitation of a situation model normally used for rearward walking). Further, as already discussed above, it appears sensible to begin the search with the controllers of legs that are neighbours of the leg causing the problem. This is easier if the representations of the different information channels are ordered in a topological way. The detailed system which is responsible for randomly searching for possible elements and for starting new trials is not shown in Fig. 4.15. In this way, different behaviours will be tested internally using the kinematics simulation ("2nd-order embodiment" [42, 31]) until a solution is found. To this end, the internal shortcut is switched off again. The solution, if successful, will then be stored as a new situation model in the memory and executed. To decide whether a solution found by this internal simulation is sensible or not, the emergency sensors can also be exploited during imagined walking. As an alternative, a success sensor could be introduced that monitors whether the distance to the overall goal of the actual behaviour is decreasing. It might be added here that the existence of the body model could also be exploited for imitation learning. For humans it has recently been shown that the imitation of an observed behaviour is the better, the more the observed body resembles one's own body [45].

Finally, in this section we will address the case where an abstract rule is given to the system by an external trainer, and the task of the system is to exploit this rule in appropriate situations. It should be mentioned at the outset that these ideas are preliminary and contain even more open questions than the earlier examples described. Such abstract rules may be the transitivity rule or other, more complex rules of logic, for example. For the application to our walker, we use a simpler example. In the earlier Walknet, there is no explicit rule that legs are not allowed to be lifted off the ground when they are sufficiently loaded (insects are assumed to be equipped with such a mechanism, and we have recently expanded Walknet by such a mechanism, but for the purpose of our example we need to use the earlier version). Let us assume that a teacher provides this rule to the walker and that the rule is stored as a situation model (upper part in Fig. 4.15). This is possible if the preprocessor of the system contains some protolanguage, i.e., the preprocessor has to be able to register rules given as symbols and to transform these external symbols into neuronal units. In our example, this model may, in the simplest case, consist of two units: one input unit for detecting high load, and one output unit for the activation of the depressor muscle of a leg. However, the rule is abstract in the sense that it does not specify the leg to which it is applied. In Fig. 4.15 this abstract rule is represented by another situation model (case B). Let us now consider the situation that during walking a stance leg happens to be highly loaded. Three cases can be distinguished. 1. Nothing specific happens, because there is no connection given that triggers a reaction to the load signal. 2. When, however, in this specific situation the external trainer reminds the system to respect the rule, the stored situation model representing the general rule will be activated by this stimulus. This means that a unit "load" is activated.
This unit is connected to the "load" units of the individual legs, as was indicated for the case of the avoidance reflex, and the corresponding downward connections have to be assumed for the "depressor" units. As now both the situation model B and
the controller units of the actually behaving leg are activated simultaneously, the situation model B representing the general rule can be used like an external situation to train a specific situation model for the actually activated leg controller. To this end the signals from situation model B run downwards and elicit the construction of another situation model for those units that receive a subthreshold preactivation, i.e., a situation model which is now specifically connected to the actual leg (Fig. 4.15), forming a “procedural” memory. In this scenario the stored general rule might not be necessary, because we could also have used the signals given by the external teacher directly. 3. However, for the third case, where no teacher activates the general rule at the appropriate moment, we need the general rule to be stored. Assume that a number of such general-rule models have been learnt and stored as general, or “declarative”, situation models. If a problem now happens to occur, as an alternative to searching for a solution by activating the body model, the system may also search its memory of “declarative” situation models and randomly activate one of them. As described above, this general model would then try to train a specific model that connects to the actual situation. If this procedure leads to a solution of the problem, the specific situation model may be stored. As above, the main problem concerns the question of how the search space could be limited. Whereas in the case of using the body model the morphological neighbourhood may be exploited, in the case of abstract rules the introduction of some kind of “semantic” neighbourhood would be helpful. This problem addresses the open question of how declarative memory might be structured. It has been shown [37], for example, that, at least for not too large memories, relatively simple systems are able to introduce a memory structure that represents such a semantic neighbourhood. Therefore, although a large number of questions remain to be solved, a solution appears to be possible.
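The trial-and-error procedure sketched in this section can be summarized in code. The following Python sketch is purely illustrative and not part of the Walknet implementation: every name in it (candidates, body_model, trace, emergency_free, goal_distance_decreased) is a hypothetical placeholder. It merely restates the loop described above: candidate situation models are tested by imagined walking on the kinematic body model, and the first candidate that passes the emergency and success criteria is stored and executed.

```python
import random

def search_solution(candidates, body_model, goal, max_trials=100):
    """Illustrative trial-and-error search via internal simulation
    ("2nd order embodiment"): candidate situation models are tested
    on the kinematic body model before one is stored and executed.
    All attributes used here are hypothetical placeholders."""
    for _ in range(max_trials):
        model = random.choice(candidates)   # e.g. prefer models of neighbouring legs
        trace = body_model.simulate(model)  # imagined walking, motor output switched off
        # a candidate counts as a solution if no emergency sensor fires and
        # the distance to the overall goal decreases ("success sensor")
        if trace.emergency_free and trace.goal_distance_decreased(goal):
            return model                    # store as new situation model, then execute
    return None                             # no solution found within the trial budget
```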
4.6 Conclusions

This chapter has provided a proposal for a general memory architecture that consists of three parts: the preprocessor, the distributor and the situation models. The latter consist of RNNs and are suited for perception tasks as well as for motor control tasks, i.e., they represent different forms of procedural memory. These networks might be learnt using a local version of the delta rule, or might be given as innate structures, thus belonging to individual memory or to species memory, respectively [15]. Open questions concern how the weights of the distributor net are learnt, the details of the preprocessor, as well as how learning is influenced by top-down mechanisms and bottom-up attention. Another important question addresses the problem of how related situation models, i.e., models belonging to the same context, are interconnected. However, a concept has been presented that allows for the construction of a network that is able to plan ahead and thereby to invent new, sensible behaviours, thus forming a cognitive system. As an essential element, this network contains a manipulable body model giving rise to second order embodiment.
References

1. Arbib, M.A., Bonaiuto, J., Rosta, E.: The mirror system hypothesis: From a macaque-like mirror system to imitation. In: Proceedings of the 6th International Conference on the Evolution of Language, pp. 3–10 (2006)
2. Beer, R.D.: The dynamics of active categorical perception in an evolved model agent. Adaptive Behavior 11(4), 209–243 (2003)
3. Bläsing, B.: Crossing large gaps: A simulation study of stick insect behavior. Adaptive Behavior 14(3), 265–285 (2006)
4. Brooks, R.A.: Intelligence without reason. In: Myopoulos, J., Reiter, R. (eds.) Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI 1991), pp. 569–595. Morgan Kaufmann, San Mateo (1991)
5. Brooks, R.A.: Intelligence without representation. Artificial Intelligence 47, 139–159 (1991)
6. Clancey, W.J.: The frame of reference problem in cognitive modeling. In: Proceedings of the Annual Conference of the Cognitive Science Society, pp. 107–114. Lawrence Erlbaum Associates, Mahwah (1989)
7. Collett, T., Cartwright, B., Smith, B.: Landmark learning and visuo-spatial memories in gerbils. J. Comp. Physiol. 158(6), 835–851 (1986)
8. Cruse, H.: The evolution of cognition: A hypothesis. Cognitive Science 27, 135–155 (2003)
9. Cruse, H.: A recurrent network for landmark-based navigation. Biological Cybernetics 88(6), 425–437 (2003)
10. Cruse, H., Hübner, D.: Selforganizing memory: active learning of landmarks used for navigation. Biological Cybernetics (submitted, 2007)
11. Feldman, J., Narayanan, S.: Embodied meaning in a neural theory of language. Brain and Language 89(2), 385–392 (2004)
12. Fogassi, L., Ferrari, P.F., Gesierich, B., Rozzi, S., Chersi, F., Rizzolatti, G.: Parietal lobe: from action organization to intention understanding. Science 308(5722), 662–667 (2005)
13. Freud, S.: Formulierungen über die zwei Prinzipien des psychischen Geschehens. In: Gesammelte Werke, Bd. VIII, pp. 229–238 (1911)
14. Freud, S.: Die Verneinung. In: Gesammelte Werke, Bd. XIV, pp. 9–15 (1925)
15. Fuster, J.M.: Memory in the Cerebral Cortex. MIT Press, Cambridge (1995)
16. Gallese, V.: Intentional attunement. The mirror neuron system and its role in interpersonal relations. Interdisciplines, http://www.interdisciplines.org/mirror/papers/1
17. Gallese, V., Lakoff, G.: The brain's concepts: the role of the sensory-motor system in conceptual knowledge. Cognitive Neuropsychology 22(3-4), 455–479 (2005)
18. Gibson, J.J.: The theory of affordances. In: Shaw, R., Bransford, J. (eds.) Perceiving, Acting, and Knowing, pp. 67–80. Lawrence Erlbaum Associates, Mahwah (1977)
19. Giurfa, M., Zhang, S., Jenett, A., Menzel, R., Srinivasan, M.: The concepts of ‘sameness’ and ‘difference’ in an insect. Nature 410(6831), 930–933 (2001)
20. Glenberg, A.M.: What memory is for. Behavioral and Brain Sciences 20(1) (1997)
21. Harnad, S.: The symbol grounding problem. Physica D 42, 335–346 (1990)
22. Heisenberg, M.: Mushroom body memoir: from maps to models. Nat. Rev. Neurosci. 4, 266–275 (2003)
23. Hochner, B., Shomrat, T., Fiorito, G.: The octopus: a model for a comparative analysis of the evolution of learning and memory mechanisms. Biol. Bull. 210(3), 308–317 (2006)
24. Jaeger, H., Haas, H.: Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 304(5667), 78–80 (2004)
25. Jeannerod, M.: To act or not to act: Perspectives on the representation of actions. Quarterly Journal of Experimental Psychology 52A, 1–29 (1999)
26. Kühn, S., Beyn, W., Cruse, H.: Modelling Memory Functions with Recurrent Neural Networks consisting of Input Compensation Units: I. Static Situations. Biological Cybernetics 96(5), 455–470 (2007)
27. Kühn, S., Cruse, H.: Modelling Memory Functions with Recurrent Neural Networks consisting of Input Compensation Units: II. Dynamic Situations. Biological Cybernetics 96(5), 471–486 (2007)
28. Makarov, V., Song, Y., Velarde, M., Hübner, D., Cruse, H.: Elements for a general memory structure: properties of recurrent neural networks used to form situation models. Biological Cybernetics (accepted, 2008)
29. McCarthy, J., Hayes, P.J.: Some philosophical problems from the standpoint of artificial intelligence. Machine Intelligence, 26–45 (1987)
30. McFarland, D., Bösser, T.: Intelligent Behavior in Animals and Robots. MIT Press, Cambridge (1993)
31. Metzinger, T.: Different conceptions of embodiment. Psyche 12(4) (2006)
32. Nauck, D., Klawonn, F., Kruse, R.: Neuronale Netze und Fuzzy-Systeme. Vieweg-Verlag, Wiesbaden (2003)
33. Newell, A.: The knowledge level. Artificial Intelligence 18(1), 87–127 (1982)
34. Pfeifer, R., Scheier, C.: Understanding Intelligence. MIT Press, Cambridge (2001)
35. Pinkas, G.: Symmetric neural networks and propositional logic satisfiability. Neural Comput. 3(2), 282–291 (1991)
36. Quiroga, Q.R., Kreiman, G., Koch, C., Fried, I.: Sparse but not ‘grandmother-cell’ coding in the medial temporal lobe. Trends in Cognitive Sciences 12(3) (2008), doi:10.1016/j.tics.2007.12.003
37. Ritter, H., Kohonen, T.: Self-organizing semantic maps. Biol. Cybern. 61, 241–254 (1989)
38. Rizzolatti, G.: The mirror neuron system and its function in humans. Anat. Embryol. 210(5-6), 419–421 (2005)
39. Rizzolatti, G., Fadiga, L., Gallese, V., Fogassi, L.: Premotor cortex and the recognition of motor actions. Cognitive Brain Research 3(2), 131–141 (1996)
40. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach, 2nd edn. Prentice-Hall, Englewood Cliffs (2003)
41. Schilling, M., Cruse, H.: Hierarchical MMC networks as a manipulable body model. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL (to appear, 2007)
42. Schilling, M., Cruse, H.: The evolution of cognition – from first order to second order embodiment. In: Wachsmuth, I., Knoblich, G. (eds.) Modeling Communication with Robots and Virtual Humans. Springer, Heidelberg (2008)
43. Schilling, M., Cruse, H., Arena, P.: Hexapod walking: an expansion to Walknet dealing with leg amputations and force oscillations. Biological Cybernetics 96(3), 323–340 (2007)
44. Schilling, M., Patanè, L., Arena, P., Schmitz, J., Schneider, A.: Different, biomimetic inspired walking machines controlled by a decentralised control approach relying on artificial neural networks. In: Proceedings of SAB 2006 Workshop on Bio-inspired Cooperative and Adaptive Behaviours in Robots, Rome, Italy (2006)
45. Shiffrar, M.: Movement and event perception. In: Goldstein, B. (ed.) The Blackwell Handbook of Perception, pp. 237–272. Blackwell Publishers, Oxford (2001)
46. Shiffrar, M., Pinto, J.: The visual analysis of bodily motion. In: Prinz, W., Hommel, B. (eds.) Common Mechanisms in Perception and Action: Attention and Performance, pp. 381–399. Oxford University Press, Oxford (2002)
47. Sloman, A., Chrisley, R.: More things than are dreamt of in your biology: Information processing in biologically-inspired robots. Cognitive Systems Research 6(2), 145–174 (2005)
48. Steels, L.: Intelligence—dynamics and representations. In: The Biology and Technology of Intelligent Autonomous Agents. Springer, Berlin (1995)
49. Steels, L.: Intelligence with representation. Philosophical Transactions: Mathematical, Physical and Engineering Sciences 361(1811), 2381–2395 (2003)
50. Steels, L., Baillie, J.C.: Shared grounding of event descriptions by autonomous robots. Robotics and Autonomous Systems 43(2-3), 163–173 (2003)
51. Steinkühler, U., Cruse, H.: A holistic model for an internal representation to control the movement of a manipulator with redundant degrees of freedom. Biol. Cybernetics 79 (1998)
52. Strauss, R., Pichler, J.: Persistence of orientation toward a temporarily invisible landmark in Drosophila melanogaster. Journal of Comparative Physiology A 182, 411–423 (1998)
53. Suchman, L.A.: Plans and Situated Actions: The Problem of Human-Machine Communication (Learning in Doing: Social, Cognitive & Computational Perspectives). Cambridge University Press, Cambridge (1987)
54. Tang, S., Wolf, R., Xu, S., Heisenberg, M.: Visual pattern recognition in Drosophila is invariant for retinal position. Science 305(5686), 1020–1022 (2004)
55. Verschure, P., Voegtlin, T., Douglas, R.: Environmentally mediated synergy between perception and behaviour in mobile robots. Nature 425, 620–624 (2003)
56. Verschure, P.F., Althaus, P.: The study of learning and problem solving using artificial devices: Synthetic epistemology. Bildung und Erziehung 52(3), 317–333 (1999)
57. Verschure, P.F.M.J., Althaus, P.: A real-world rational agent: unifying old and new AI. Cognitive Science 27(4), 561–590 (2003)
58. Verschure, P.F.M.J., Voegtlin, T.: A bottom-up approach towards the acquisition and expression of sequential representations applied to a behaving real-world device: Distributed Adaptive Control III. Neural Netw. 11(7-8), 1531–1549 (1998)
59. Wehner, R.: Desert ant navigation: how miniature brains solve complex tasks. Journal of Comparative Physiology A 189, 579–588 (2003)
60. Wehner, R., Michel, B., Antonsen, P.: Visual navigation in insects: coupling of egocentric and geocentric information. The Journal of Experimental Biology 199, 129–140 (1996)
5 Mathematical Approach to Sensory Motor Control and Memory

M.G. Velarde¹, V.A. Makarov¹, N.P. Castellanos¹, Y.L. Song¹, and D. Lombardo²

¹ Instituto Pluridisciplinar, Universidad Complutense de Madrid, Paseo Juan XXIII 1, 28040 Madrid, Spain, [email protected]
² Dip. di Ingegneria Elettrica, Elettronica e dei Sistemi, Università degli Studi di Catania, Viale A. Doria 6, 95125 Catania, Italy, [email protected]
Abstract. In this chapter we provide mathematical models for a general memory structure and for sensory-motor control via perception, elaborating on some of the Recurrent Neural Networks (RNNs) introduced in Chapter 4. In the first section we study how individual memory items are stored, assuming that situations given in the environment can be represented in the form of synaptic-like couplings in RNNs. We provide a theoretical basis concerning the convergence of the learning process and the network response to novel stimuli. We show that an nD network can learn static and dynamic patterns and can also replicate a sequence of up to n different vectors or frames. Such networks can also perform arithmetic calculations by means of pattern completion. In the second section we introduce a robot platform including the simplest probabilistic sensory and motor layers. We then use the platform as a test-bed for evaluating the capabilities of robot navigation with different neural networks. We show that the basic robot element, the short-time memory, is the key element in obstacle avoidance. However, in the simplest condition of no obstacles, the straightforward memoryless robot is usually superior in performance. Accordingly, we suggest that small organisms (or agents) with a short life-time do not require complex brains and can even benefit from simple brain-like (reflex) structures. In the third section we propose a memotaxis strategy for target searching, which requires minimal computational resources and can easily be implemented in hardware. The strategy makes use of a dynamical system modeling short-time memory which “collects” information on successful steps and corrects decisions made by a gradient strategy. Thus a memotactic robot can take steps against the chemotactic-like sensory gradient. We show (theoretically and experimentally) that the memotaxis strategy effectively suppresses the stochasticity observed in the behavior of chemotactic robots in the region of low SNR and provides a 50–200% performance gain.
5.1 Theory of Recurrent Neural Networks Used to Form Situation Models

5.1.1 RNNs as a Part of a General Memory Structure
How biological memories are organized is still a fairly open question, although a huge number of experimental studies have been reported dealing with features at different levels and using methods from different fields such as psychology or neurophysiology, including brain imaging techniques. This situation has eventually been dubbed the crisis of the experimentalists. In order to understand brain functions, simulation studies
appear to be a useful pragmatic approach. Such simulations on the one hand can originate new principles of information storage and engineering, and on the other hand may suggest experimental procedures to test novel hypotheses. In the search for appropriate simulation models, recurrent neural networks (RNNs) have been intensively studied. This began with Hopfield's seminal papers [25, 26] and has led to a vast literature. Significant architectures include Elman–Jordan networks [19] and echo state networks [28]. Apart from many studies concentrating on monolithic architectures, sparsely coded networks [37] and expert networks (see e.g. [43]) have been investigated. The latter in particular have the advantage of forming separable modules, which can be treated more easily than monolithic structures, both with respect to studying their mathematical properties [10, 38] and with respect to how these modules might be embedded into a large memory structure (see Chap. 4 for details). The latter, for example, implies the problem of how memory contents can be organized to reflect hierarchical or contextual relationships. Another question concerns the structure of the basic RNN forming the individual modules, or “neural assemblies”. Two simple types of RNNs have recently been investigated and proposed as a possible basis for such elementary memory structures, one being the so-called multiple solutions of basic equations (MSBE) networks [32, 33] and the other the so-called mean of multiple computations (MMC) networks [30, 41]. In [17] a general memory architecture has been proposed in which these relatively small RNNs can be embedded. This general architecture, inspired by the insect mushroom body system [47], can be used for learning and controlling more complex behaviors such as landmark-based navigation. The same architecture is currently being studied to form a general theory explaining many of the Pavlovian paradigms [18].

5.1.2 Input Compensation (IC) Units and RNNs
The networks considered here consist of n recurrently connected “simple” nonlinear units called, respectively, “suppression unit” (Su) and “max unit” (Mu) (Figs. 5.1A and 5.1B). Both units operate in discrete time t ∈ Z and have an external input, denoted as the signal ξ_i(t), that we also call activation, an internal (recurrent) input s_i(t), and an output evaluated by the unit at the next time step, x_i(t+1). The recurrent input is given by a weighted sum of the outputs of all units in the network:

s_i(t) = \sum_{k=1}^{n} w_{ik} x_k(t)    (5.1)
where the matrix W = {w_{ij}} plays the role of the inter-unit coupling (Fig. 5.1C). The non-linear properties of the units arise from the treatment of the recurrent signal according to the signal at the external input. In the Su the recurrent signal is simply suppressed and replaced by the external input if the latter is different from zero, and otherwise sent unchanged to the output:

x^{Su}(t+1) = \begin{cases} \xi(t), & \text{if } \xi(t) \neq 0 \\ s(t), & \text{otherwise} \end{cases}    (5.2)
The output of the Mu is given by

x^{Mu}(t+1) = \begin{cases} \max(\xi(t), s(t)), & \text{if } s(t) \geq 0 \\ \min(\xi(t), s(t)), & \text{otherwise} \end{cases}    (5.3)
Another way of representing the transduction properties of the Mu is

x^{Mu}(t+1) = s(t) + \begin{cases} [\xi(t) - s(t)]_+, & \text{if } s(t) > 0 \\ -[s(t) - \xi(t)]_+, & \text{otherwise} \end{cases}    (5.4)

where the subscript “+” denotes the rectifier operator

x_+ = \begin{cases} x, & \text{if } x \geq 0 \\ 0, & \text{otherwise} \end{cases}    (5.5)
Although being equivalent, representation (5.4) is better suited for hardware implementation, whereas (5.3) serves better for mathematical analysis. Combining n IC units (either Mu or Su) into a recurrent network (Fig. 5.1C) yields a system whose dynamics is described by the following nD map:

x_i(t+1) = F\left( \xi_i(t), \sum_{k=1}^{n} w_{ik} x_k(t) \right), \quad i = 1, \ldots, n    (5.6)

where F is a nonlinear function determined by the type of units used in the network, given either by (5.2) for the Su or by (5.3) for the Mu.
Fig. 5.1. Circuit implementation of a recurrent neural network (RNN) composed of input compensation (nonlinear) units: A) Suppression-IC unit (or Su), and B) Max-IC unit (or Mu). Both units have two inputs (ξ_i(t) and s_i(t)) and one output (x_i(t+1)). The latter is given by a nonlinear function of the inputs (see main text). Blocks marked by Σ and Π perform input summation and multiplication, respectively. The other blocks are nonlinear elements with sketched transduction characteristics. C) Neural network (case n = 3) composed of either suppression or max units recurrently coupled by the matrix W = {w_{ij}}.
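The unit definitions (5.2), (5.3) and the map (5.6) are compact enough to be stated directly in code. The following NumPy sketch is our own illustration (function names are ours, not from the original work) of one network iteration:

```python
import numpy as np

def su_step(xi, s):
    """Suppression unit (5.2): the recurrent input s is suppressed and
    replaced by the external activation xi wherever xi is nonzero."""
    return np.where(xi != 0, xi, s)

def mu_step(xi, s):
    """Max unit (5.3): max(xi, s) for s >= 0 and min(xi, s) otherwise."""
    return np.where(s >= 0, np.maximum(xi, s), np.minimum(xi, s))

def rnn_step(x, xi, W, unit=su_step):
    """One iteration of the nD map (5.6) with recurrent input (5.1)."""
    s = W @ x           # s_i(t) = sum_k w_ik x_k(t)
    return unit(xi, s)  # x_i(t+1) = F(xi_i(t), s_i(t))
```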
Before going further let us first introduce suitable notation. For any given vectors x, y ∈ R^n, let ⟨x, y⟩ ≡ x^T y be a real number (T denotes transpose). Then ‖x‖ = √⟨x, x⟩ is the length or modulus of the vector x. We shall also use the vector inequality

x < y ⇔ (x_i ≤ y_i, ∀i, and x ≠ y)

5.1.2.1 Task to Be Performed by IC-Network

The recurrent IC-network described by (5.6) can be trained to perform different tasks, for example to “remember” and reproduce given static or dynamic stimuli, or to perform simple algebraic associations. To illustrate the latter we introduce the so-called basic equation ⟨B, x⟩ = 0, or

\sum_{i=1}^{n} B_i x_i = 0    (5.7)
where B ∈ R^n is a constant vector. Equation (5.7) can be considered as an algebraic constraint on the network state. It defines a hyperplane passing through the origin in the nD phase space of the network. We note that the coefficients of the basic equation B_{1,...,n} are defined up to a constant, i.e. if ⟨B, x⟩ = 0, then ⟨kB, x⟩ = ⟨B, kx⟩ = 0, where k is a constant. To rule out this uncertainty we can fix one of the coefficients, say B_1 ≡ 1. As we shall show below, the network can be trained in such a way that the basic equation (5.7) becomes an attractor, i.e. any arbitrary activation of the network will relax to the above mentioned hyperplane. Once the network has been trained, i.e. the coefficients B_{1,...,n} have been learnt or somehow stored in the coupling matrix W, the network can be used to perform simple arithmetics. Indeed, given a new incomplete activation (stimulus) ξ_i(t) = e_i (i = 1, ..., n) with e_k = 0, where e is an arbitrary nD vector (⟨B, e⟩ ≠ 0), the network adopts a solution such that x_i = e_i (i = 1, ..., n, i ≠ k), x_k = −⟨B, e⟩/B_k, i.e. satisfying the basic equation (5.7). This task can also be viewed as pattern completion. We notice that even if more than one activation input is absent (equal to zero), the network will relax to a proper underdetermined solution. The network can be either in the learning phase or in the operational phase. In the operational phase the coupling matrix is fixed and the network reproduces the previously learnt stimulus or responds to a novel stimulus applied to the external input. During learning the network is exposed to the training, in general dynamic, stimulus. We shall distinguish two types of situations to be learnt. One is the so-called static situation, when external stimuli presented to the network are assumed to be (temporally) independent pieces of a “global picture”. For example, first vector a is presented and then, after resetting the activations of all units, another vector b is given. The learning then requires two updating cycles at each vector presentation. Moreover, as we show below, the sequence of vectors can be arbitrary in this case.
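For concreteness, the pattern-completion task just described can be spelled out numerically. The following sketch is our own illustration, using the basic equation that appears later in (5.22) and arbitrarily chosen numbers:

```python
import numpy as np

B = np.array([1.0, 1.0, -2.0])    # basic equation x + y - 2z = 0, with B_1 = 1
e = np.array([0.0, 3.0, 2.0])     # incomplete activation, component k = 0 missing
k = 0
x = e.copy()
x[k] = -np.dot(B, e) / B[k]       # completed component: x_k = -<B, e>/B_k
assert abs(np.dot(B, x)) < 1e-12  # the completed state satisfies (5.7)
print(x)                          # -> [1. 3. 2.]
```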
The other, dynamic situation is characterized by essentially time dependent stimuli, i.e. stimuli composed of different vectors whose sequence now indeed matters. Then we can write the situation on the external network input as a time function ξ(t) = ξ(t + p), where p is the time-domain period of the training stimulus. For example, in the case p = 1 we train the network by a constant input vector ξ(t) = a; for p = 2 the network is trained by two alternating vectors, ξ(t) = abababab..., and so on.

5.1.2.2 Learning Rules for Static and Dynamic Situations

Training consists in an appropriate adjustment of the coupling matrix W by using a learning rule (the same for Su and Mu) that can be described as teacher forcing based on the classical delta rule [32]. The updating of the matrix elements is done according to

w_{ij}(t+1) = w_{ij}(t) + \varepsilon \delta_i(t) x_j(t)    (5.8)
where ε > 0 is the learning rate, and δ_i(t) = ξ_i(t) − s_i(t) is the error between the internal and external inputs (Figs. 5.1A and 5.1B). We assume that during the training of the weight matrix W the network has no internal dynamics, i.e. (5.2) and (5.3) are reduced to x(t+1) = ξ(t). This is indeed true for the Su with non-zero external activation. For the Mu this can also be accomplished by starting the training process from W = 0 (and s < ξ). For the static case the dynamics of the weight matrix is given by

W(t+1) = W(t)\left[ I - \varepsilon \xi(t)\xi^T(t) \right] + \varepsilon \xi(t)\xi^T(t)    (5.9)

where ξ(t) is the training (in general time dependent) external activation applied to the network. We note that at each step the coupling matrix is updated by a single vector, independently of the other elements of the training sequence. For the dynamic situation the evolution of the weight matrix in the learning phase is given by

W(t+1) = W(t)\left[ I - \varepsilon \xi(t-1)\xi^T(t-1) \right] + \varepsilon \xi(t)\xi^T(t-1)    (5.10)

The learning rule (5.10) can be considered as an (n × n) map driven by an external force with delay. Now, in contrast to (5.9), each training step uses two sequential vectors, hence their ordering becomes important. The training is deemed finished when the total squared error ∑ δ_i² falls below a threshold. To quantify the learning performance we shall also use the normalized inter-matrix distance

d(t) = \frac{\|W(t) - W_\infty\|}{\|W_\infty\|}    (5.11)

where W_∞ = lim_{t→∞} W(t) is the limit (learnt) matrix. Note that W_∞ is not used for learning but only for an a posteriori description of the learning dynamics. In general (5.9) or (5.10) may not converge to a fixed point, which means that the network is unable to learn (in terms of representation and replication) the corresponding external situation.
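For illustration, the delta rule (5.8) with teacher forcing fits in a few lines of code. The sketch below is our own reading of the rules: with dynamic=False each training vector is paired with itself, which reproduces the static update (5.9); with dynamic=True consecutive vectors are paired, which reproduces the dynamic update (5.10). It is reused in the numerical checks further below.

```python
import numpy as np

def train(stimulus, eps=0.1, dynamic=False, steps=5000):
    """Teacher-forced delta rule (5.8) for a list of training vectors
    presented cyclically; returns the learnt coupling matrix W."""
    seq = [np.asarray(v, dtype=float) for v in stimulus]
    n = seq[0].size
    W = np.zeros((n, n))
    for t in range(steps):
        x = seq[t % len(seq)]                           # state x(t) = xi(t-1), teacher forcing
        xi = seq[(t + 1) % len(seq)] if dynamic else x  # target activation xi(t)
        delta = xi - W @ x                              # delta_i = xi_i(t) - s_i(t)
        W += eps * np.outer(delta, x)                   # w_ij += eps * delta_i * x_j
    return W
```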
5.1.3 Learning Static Situations
Let us start with the case when a RNN is used to learn a static situation. Then as mentioned above the learning follows the rule (5.9).
5.1.3.1 Convergence of the Network Training Procedure

Let us assume that the static situation to be learnt is described by p nonzero vectors {a_1, a_2, ..., a_p}, and at each learning step the network is exposed to one of them. We also require that all training vectors appear sufficiently frequently in the learning sequence. The latter means that the occurrence frequency of the i-th vector, f_i = t_i/t, does not tend to zero for t → ∞ (t_i is the number of the vector's occurrences up to time t). Examples of training sequences satisfying this condition are periodic (e.g. ξ(t) = a_1, ..., a_p, a_1, ..., a_p, ...) and probabilistic (vectors appear at random with nonzero probabilities).

Training by orthogonal vectors

First we consider the case when a_1, ..., a_p are nonzero orthogonal vectors in R^n. Then the following theorem on the convergence of the network learning process holds:

Theorem 5.1. Assume that an n-units RNN is trained by an arbitrary sequence of p nonzero orthogonal vectors a_1, ..., a_p, each of which appears in the sequence with nonzero frequency (i.e., an infinite number of times for an infinite sequence). If the learning rate satisfies

0 < \varepsilon < \min\left\{ \frac{2}{\|a_1\|^2}, \frac{2}{\|a_2\|^2}, \ldots, \frac{2}{\|a_p\|^2} \right\}    (5.12)

then for any initial condition W_0 = W(0) the learning process given by (5.9) converges to the coupling matrix

W_\infty \equiv \lim_{t \to \infty} W(t) = W_0 (I - M_p) + M_p    (5.13)

where

M_p = \sum_{i=1}^{p} \frac{a_i a_i^T}{\|a_i\|^2}    (5.14)

In particular, for i) p = n, or ii) p < n but with W_0 = 0,

W_\infty = M_p    (5.15)
The proof of this theorem appears in [36]. Note that training by one vector, i.e., ξ(t) = a, reduces (5.14) to M_1 = aa^T/‖a‖², which corresponds to the case considered in [32]. According to Theorem 5.1, learning can always be achieved by using a small enough learning rate. Training starting with zero initial conditions (W_0 = 0) leads to a symmetric coupling matrix, w_{ij} = w_{ji}. The learning result (in terms of W_∞) does not depend on the sequence of presentation of the vectors a_1, ..., a_p to the network. The latter, for instance, means that training by a periodic sequence of two vectors (e.g. ξ(t) = a, b, a, b, a, ...) gives the same matrix W_∞ as training by a random sequence of these vectors (e.g. ξ(t) = a, a, b, a, a, a, b, b, a, ...), even if the probability of finding vector a differs from the probability of finding vector b (provided that both are nonzero).
Finally, for practical implementation, we note that the learning time scales as

t \propto \frac{1}{\min_i \left\{ f_i \ln \frac{1}{1 - \varepsilon \|a_i\|^2} \right\}}

Thus an excessively small (respectively, big) learning rate and/or a small occurrence frequency of one of the training vectors damages the learning performance.

Training by arbitrary vectors

The condition of vector orthogonality used above is hardly satisfied by real-world stimuli. To extend our results we now relax this requirement. Let us assume that the network is trained by an arbitrary sequence of p training vectors {a_1, ..., a_p}. We denote by {γ_1, ..., γ_r} a maximal linearly independent subset of the complete training set. Then by the Gram–Schmidt orthogonalization procedure [42] we define an orthogonal set {c_1, ..., c_r}, where

c_1 = \gamma_1, \quad c_k = \gamma_k - \sum_{j=1}^{k-1} \frac{\langle \gamma_k, c_j \rangle}{\|c_j\|^2} c_j \quad \text{for } 2 \le k \le r    (5.16)
The following theorem holds:

Theorem 5.2. Assume that an n-units RNN is trained by an arbitrary sequence of p arbitrary nonzero vectors a_1, ..., a_p, each of which appears in the training sequence with nonzero frequency. If the learning process (5.9) starting from the initial condition W_0 = W(0) converges to the coupling matrix W_∞, then

W_\infty = W_0 (I - M_r) + M_r    (5.17)

where

M_r = \sum_{k=1}^{r} \frac{c_k c_k^T}{\|c_k\|^2}    (5.18)

with r = rank{a_1, ..., a_p}. In particular, for i) r = n, or ii) r < n but with W_0 = 0,

W_\infty = M_r    (5.19)
The proof of this theorem appears in [36].

Remark 1: Although the chosen basis {c_1, ..., c_r} is not unique, the matrix M_r does not depend on the choice, since an orthogonal projection (given by M_r) onto the space spanned by {a_1, ..., a_p} is unique [42]. The independence of the learning result (matrix W_∞) of the particular sequence of the training vectors discussed above also applies here. Another important point is that the learning operates on a linearly independent subset of the training vectors only. The latter, for instance, means that training by an arbitrary sequence of three vectors a, b, and c, one of which is a linear combination of the other two (i.e., c = k_1 a + k_2 b), is equivalent to training by any pair of these vectors. In other words, training by e.g. ξ(t) = a, b, c, a, b, c, ... gives the same coupling matrix W_∞ as training by e.g. ξ(t) = a, c, a, c, a, ...
5.1.3.2 Example of Training by Two Vectors

Let us now give an example of the network training by two linearly independent vectors a and b satisfying a basic equation. Using Theorem 5.2 with zero initial conditions (W(0) = 0), we obtain

W_\infty = \alpha\alpha^T + \beta\beta^T    (5.20)

where

\alpha = \frac{a}{\|a\|}, \quad \beta = \frac{\|a\|^2 b - \langle b, a \rangle a}{\|a\| \sqrt{\|a\|^2 \|b\|^2 - \langle b, a \rangle^2}}    (5.21)
We note that ‖α‖ = ‖β‖ = 1 and ⟨α, β⟩ = 0, i.e. α and β are orthonormal vectors. As an example let us train a 3D network to learn the basic equation

x + y - 2z = 0    (5.22)

We chose as the training vectors a = (1, 3, 2)^T and b = (1, 1, 1)^T fulfilling (5.22). From (5.21) we obtain the equivalent vectors α = (1, 3, 2)^T/√14 and β = (4, −2, 1)^T/√21. Finally, substituting them into (5.20), we get the limit trained matrix

W_{2v} = \frac{1}{6} \begin{pmatrix} 5 & -1 & 2 \\ -1 & 5 & 2 \\ 2 & 2 & 2 \end{pmatrix}    (5.23)
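As a quick numerical cross-check of (5.23), the train sketch from Sec. 5.1.2.2 can be run in static mode (our illustration; the step count is arbitrary but generous):

```python
import numpy as np

a, b = [1.0, 3.0, 2.0], [1.0, 1.0, 1.0]  # training vectors fulfilling (5.22)
W = train([a, b], eps=0.1, steps=2000)   # static rule (5.9), alternating a and b
print(np.round(6 * W, 6))                # -> (5.23): [[5 -1 2], [-1 5 2], [2 2 2]]
```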
5.1.3.3 Response of a 3D Su RNN Trained by Two Vectors to a Novel External Stimulus

As mentioned in the introduction, the RNNs considered here are deemed to be used as building blocks in a rather large memory framework, where different parts (individual networks) will interact with each other. Therefore, not only the network learning capabilities are of interest, but likewise its ability to “recognize” different stimuli and to complete incorrect stimulus patterns. Let us consider a 3D Su network trained by two linearly independent vectors a and b. This type of network has been numerically studied in [32, 17]. We apply to the network a novel stimulus e that does not satisfy the basic equation. Due to the Su network constraint, to obtain a dynamical response some of the elements of the novel stimulus should be equal to zero, e.g. e = (0, 0, e_3). Then the network response is given by the following theorem:

Theorem 5.3. Assume that a 3D Su-network has been previously trained by two linearly independent vectors a and b satisfying the basic equation. Then a novel incomplete external activation e is given to the network.

i) If e = (0, e_2, e_3)^T and a_3 b_2 ≠ a_2 b_3, then independently of the initial conditions the network state relaxes to

x_1 = -B_2 e_2 - B_3 e_3, \quad x_2 = e_2, \quad x_3 = e_3    (5.24)

where the constants B_{2,3} are given by the basic equation (B_1 ≡ 1) associated with the vectors a and b.
For a_3 b_2 = a_2 b_3 the network shows no dynamics: x(t) = (x_{01}, e_2, e_3)^T.

ii) If e = (0, 0, e_3)^T, then the network state relaxes to

x_1 = -C_2 B_2 - e_3 B_3, \quad x_2 = C_2, \quad x_3 = e_3, \quad C_2 = \frac{x_{02} - B_2 (e_3 B_3 + x_{01})}{1 + B_2^2}    (5.25)

where x_{01,2} are the network initial conditions.

The proof is given in [36]. From Theorem 5.3 we can derive the following statement concerning the memory content of the Su network. If the novel stimulus e has one or two vanishing components, then the output of the corresponding units evolves in time to a steady state such that the final network state always fulfills the basic equation. Indeed, using (5.24) for one zero or (5.25) for two zeros in the novel activation e, we can check ⟨B, x_∞⟩ = 0. Thus the 3D network trained by two linearly independent vectors can perform algebraic calculations, or it can be used for pattern completion.

5.1.3.4 Damping Factors (n = 3)

Let us consider a 3D Su network trained by two vectors a and b. The weights on the diagonal of the learnt coupling matrix (5.20) can be interpreted as damping elements. Using this matrix we can formally construct a new damped matrix as follows:

W_{damped} = \begin{pmatrix} \frac{d_1}{1+d_1} & \frac{\alpha_1\alpha_2 + \beta_1\beta_2}{(1+d_1)h_1} & \frac{\alpha_1\alpha_3 + \beta_1\beta_3}{(1+d_1)h_1} \\ \frac{\alpha_2\alpha_1 + \beta_2\beta_1}{(1+d_2)h_2} & \frac{d_2}{1+d_2} & \frac{\alpha_2\alpha_3 + \beta_2\beta_3}{(1+d_2)h_2} \\ \frac{\alpha_3\alpha_1 + \beta_3\beta_1}{(1+d_3)h_3} & \frac{\alpha_3\alpha_2 + \beta_3\beta_2}{(1+d_3)h_3} & \frac{d_3}{1+d_3} \end{pmatrix}    (5.26)
where h_i = 1 − α_i² − β_i² and the d_i are the so-called damping factors. Thus for d_i = (α_i² + β_i²)/h_i the damped matrix (5.26) reduces to W_∞ as defined by (5.20); in other words, the training procedure (5.9) produces the damping factors (α_i² + β_i²)/h_i. We note that (5.26) can easily be extended to the case of an nD network. Clearly, for a set of arbitrarily chosen damping factors (d_i ≠ −1), for any vector x located in the attractor plane we have W_{damped} x = x, which means that the damping factors do not affect the existence of the solution (interpreted as memory contents). However, their adjustment leads to a change in the relaxation provoked by a novel stimulus, as the following theorem states:

Theorem 5.4. Assume that a novel incomplete stimulus e is applied to a 3D Su-network with the modified coupling matrix (5.26) and d_{1,2} > 0.

i) If e = (0, e_2, e_3)^T and a_3 b_2 ≠ a_2 b_3, then independently of the initial conditions the network state relaxes to (5.24). Moreover, the relaxation rate is controlled by d_1/(1 + d_1).

ii) If e = (0, 0, e_3)^T, then starting from zero initial conditions the network state relaxes to

x_1 = \frac{-B_3 e_3 (1 + d_2)}{2 + d_1 + d_2}, \quad x_2 = \frac{-B_3 e_3 (1 + d_1)}{B_2 (2 + d_1 + d_2)}, \quad x_3 = e_3    (5.27)
Fig. 5.2. Response of a 3D Su RNN trained by two vectors a = (1, 3, 2)^T and b = (1, 1, 1)^T to a novel stimulus e = (0, 0, 1) for different values of the damping factors d_{1,2}. All trajectories start from the point (0, 0, 1) (marked by x(0)) and lie in the plane x_3 = 1 (marked by x_3 = e_3) given by the internal network constraint. According to (5.27) and (5.28) they follow straight lines tending to the plane (marked by BE) given by the basic equation (5.22). The thick line corresponds to the original coupling matrix with the damping factors d*_{1,2} = (α²_{1,2} + β²_{1,2})/h_{1,2}. It is orthogonal to the line obtained by the intersection of the basic plane and the plane x_3 = e_3.
with the relaxation rate given by λ_3 = (d_1 d_2 − 1)/((1 + d_1)(1 + d_2)). Moreover, the relaxation trajectory on the plane x_3 = e_3 follows the straight line

x_1(t) = \frac{1 + d_2}{1 + d_1} B_2 x_2(t)    (5.28)
The proof is given in [36]. Theorem 5.4 shows that for a novel activation of the form e = (0, e_2, e_3), the damping factors affect only the relaxation rate, while the final network output is still given by (5.24), i.e. the network finds a solution satisfying the basic equation. The maximal relaxation velocity is achieved for d_1 → 0. For a novel activation of the form e = (0, 0, e_3), the damping factors affect both the relaxation rate and the final output of the network. For arbitrary (positive) values of the damping factors the network finds a solution satisfying the basic equation. Indeed, from (5.27) one can immediately see that ⟨B, x_∞⟩ = 0. However, the new solution in general differs from the original solution (5.25). The maximal velocity of the relaxation process is achieved for d_1 d_2 = 1; then λ_3 = 0 and the network “jumps” in a single step to the final state (5.27). For d_{1,2} = (α²_{1,2} + β²_{1,2})/h_{1,2}, using (5.28) one can show that the relaxation trajectory is orthogonal to the intersection of the basic plane and the plane given by x_3 = e_3. To illustrate these results we have trained a 3D Su network by two vectors a = (1, 3, 2)^T and b = (1, 1, 1)^T. In accordance with Theorem 5.2 the training process leads to the
coupling matrix given by (5.23). Then, as defined by (5.26), we construct the damped matrix with different damping factors. Once a new damped matrix is obtained, we apply to the network a novel stimulus e = (0, 0, 1), which does not satisfy the basic equation (5.22). According to Theorem 5.4 the network output will relax to a new state satisfying the basic equation. Figure 5.2 shows trajectories in the network phase space for several values of d_{1,2}. The thick solid line corresponds to the original coupling matrix (d_{1,2} = (α²_{1,2} + β²_{1,2})/h_{1,2}). The other six trajectories correspond to (d_1, d_2) = {(10,1), (7,1), (4,1), (1,4), (1,7), (1,10)}. As predicted, the trajectory obtained for the original coupling matrix (5.23) is orthogonal to the intersection of the basic plane and the plane given by x_3 = e_3. Thus the learning process given by (5.9) converges to a coupling matrix such that a novel stimulus will be processed by the network in a way that the network state relaxes to the basic equation along the shortest trajectory. However, we note that setting the damping factors such that (1 + d_1) = B_2²(1 + d_2) we also obtain a trajectory going straight (orthogonally) from the initial perturbation to the basic plane. Moreover, by also minimizing |λ_3| (Theorem 5.4) we obtain the optimal coupling matrix. For the basic equation (5.22) this can be achieved by setting d_1 = d_2 = 1. Then the network will relax to the basic equation (find a solution to the pattern completion problem) in a single iteration step.
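The single-step relaxation for d_1 = d_2 = 1 is easy to reproduce. The sketch below (ours; variable names are arbitrary) builds the damped matrix (5.26) for the example above and iterates the Su update with the stimulus e = (0, 0, 1):

```python
import numpy as np

a, b = np.array([1.0, 3, 2]), np.array([1.0, 1, 1])
alpha = a / np.linalg.norm(a)                 # alpha from (5.21)
g = np.dot(a, a) * b - np.dot(b, a) * a       # Gram-Schmidt direction
beta = g / np.linalg.norm(g)                  # beta from (5.21)
h = 1 - alpha**2 - beta**2                    # h_i = 1 - alpha_i^2 - beta_i^2
d = np.array([1.0, 1.0, (alpha[2]**2 + beta[2]**2) / h[2]])  # d1 = d2 = 1

Wd = (np.outer(alpha, alpha) + np.outer(beta, beta)) / ((1 + d) * h)[:, None]
np.fill_diagonal(Wd, d / (1 + d))             # damped matrix (5.26)

x, e = np.zeros(3), np.array([0.0, 0, 1])
for _ in range(3):                            # Su update with x3 clamped to e3
    x = np.where(e != 0, e, Wd @ x)
print(np.round(x, 6))  # -> [1. 1. 1.], on the basic plane after a single step
```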
5.1.4 Dynamic Situations: Convergence of the Network Training Procedure
In the previous section we considered learning of static situations. Let us now study the convergence properties of the network learning essentially dynamic situations. Then the training process is described by the rule (5.10). As we show below this leads to a more complex network behavior. We assume that the training is done by exposing the RNN to an external input stimulus composed of periodically repeated sequences of vectors ai ∈ Rn
ξ (t) = a1 , a2 , . . . , a p , a1 , a2 , . . . , a p , a1 , a2 , . . . , a p . . .
(5.29)
where p is the period of the training stimulus. For example, a simple static situation corresponds to p = 1, and (5.29) is reduced to ξ(t) = a_1. In general p is defined by the stimulus complexity (“external world”) and can be rather arbitrary. Thus the network may receive as training input a stimulus with a period shorter than or equal to the number of units in the network (p ≤ n), or during learning the network may be exposed to a sequence of vectors whose number exceeds the network order (p > n). By analogy with linear algebra we shall call the former situation underdetermined and the latter overdetermined. The training in these two cases may lead to essentially different results.

5.1.4.1 Network Training: “Underdetermined” Case (p ≤ n)

First we consider the case when p ≤ n and a_1, ..., a_p are nonzero orthogonal vectors in R^n. Then the following theorem on the convergence of the network learning process holds:
Theorem 5.5. Assume that an n-units RNN is trained by a periodic stimulus (5.29) composed of nonzero orthogonal vectors a_1, ..., a_p (p ≤ n), with the learning rate satisfying

0 < \varepsilon < \min\left\{ \frac{2}{\|a_1\|^2}, \frac{2}{\|a_2\|^2}, \ldots, \frac{2}{\|a_p\|^2} \right\}    (5.30)

Then for any initial conditions W(0) the learning process given by (5.10) converges to the coupling matrix

W_\infty \equiv \lim_{t \to \infty} W(t) = \widetilde{W} + M_p    (5.31)

where \widetilde{W} is a constant matrix defined by the initial conditions and given in [36], and

M_p = \sum_{i=1}^{p} \frac{a_{i+1} a_i^T}{\|a_i\|^2}, \quad \text{with } a_{p+1} \equiv a_1    (5.32)

In particular, for i) p = n, or ii) p < n but with W(0) = 0,

W_\infty = M_p    (5.33)

The proof is given in [36]. Theorem 5.5 states not only the convergence but also provides the upper limit of the learning rate for which convergence still exists. For any set of nonzero vectors {a_i} we can select the learning rate in such a way that the training process will always converge. From the implementation point of view, an adjustment of the learning rate may improve the learning performance (speed up convergence). As noted earlier, the requirement of orthogonality of the training stimulus in Theorem 5.5 is hardly satisfied by real-world stimuli. Relaxing it leads to the following theorem.

Theorem 5.6. Assume that an n-units RNN is trained by a periodic stimulus (5.29) composed of linearly independent vectors a_1, a_2, ..., a_p (p ≤ n) and the learning process (5.10) starting from zero initial condition W(0) = 0 converges to the coupling matrix W_∞. Then

W_\infty = (a_2, a_3, \ldots, a_p, a_1, \underbrace{0, \ldots, 0}_{n-p}) \, (a_1, \ldots, a_p, \alpha_1, \ldots, \alpha_{n-p})^{-1}    (5.34)

where 0 ∈ R^n is the zero vector and α_1, ..., α_{n−p} are auxiliary nonzero linearly independent vectors such that the space spanned by {α_1, ..., α_{n−p}} is orthogonal to the space spanned by {a_1, ..., a_p}.

The proof is given in [36].

Remark 2: For p = n the resulting coupling matrix (5.34) is reduced to

W_\infty = (a_2, a_3, \ldots, a_n, a_1) \, (a_1, a_2, \ldots, a_n)^{-1}    (5.35)

5.1.4.2 Network Training: “Overdetermined” Case (p > n)

Let us now assume that the network of n units is to learn an external stimulus whose period is longer than the number of units in the network, i.e. p > n. Then we have:

Theorem 5.7. Assume that an n-units RNN is trained by a periodic stimulus (5.29) composed of vectors a_1, a_2, ..., a_p (p > n) and the learning process (5.10) starting from zero initial condition W(0) = 0 converges to the coupling matrix W_∞. Let {a_1, a_2, ..., a_n} be the linearly independent subset of the training stimulus. Then

W_\infty = (a_2, a_3, \ldots, a_n, a_{n+1}) \, (a_1, a_2, \ldots, a_n)^{-1}    (5.36)

with

(W_\infty)^p = I \quad \text{and} \quad a_{n+1} = W_\infty a_n, \; \ldots, \; a_p = W_\infty a_{p-1}    (5.37)

satisfied.

The proof is given in [36]. Note that, in contrast to the underdetermined case considered above, learning convergence is not always possible here.

5.1.4.3 Examples of Stimulus Learning: Underdetermined Case (p ≤ n)

Let us now give a few examples illustrating the above stated theorems. In numerical simulations we use a RNN composed of three suppression units (Figs. 5.1A and 5.1C), though the results can be extended to the Mu network. In the underdetermined case such a network can be trained either by a static stimulus (one vector, p = 1) or by dynamic stimuli with period two (two vectors, p = 2) or period three (three vectors, p = 3).

Network training by one stimulus vector

We begin by training the network with a single vector. Strictly speaking, this case can be ascribed to a static situation. Indeed, we supply to the network a constant input vector ξ(t) = a and allow the coupling matrix to evolve. However, for completeness we also show this case here. Starting from zero initial conditions W(0) = 0, the resulting trained coupling matrix is given by [32]

W_\infty = \frac{a a^T}{\|a\|^2}    (5.38)

The same result is obtained from Theorem 5.5. Indeed, setting p = 1 in (5.32) and using (5.33) we end up at (5.38). The same matrix is obtained from Theorem 5.2 using c_1 = a and r ≡ 1. To illustrate the training process let us set a = (1, 3, 2)^T and assume zero initial conditions W(0) = 0. Then using (5.38) the resulting trained matrix is

W_{1v} = \frac{1}{14} \begin{pmatrix} 1 & 3 & 2 \\ 3 & 9 & 6 \\ 2 & 6 & 4 \end{pmatrix}    (5.39)
Fig. 5.3. Examples of stimulus learning and replication by a RNN composed of 3 suppression units in the underdetermined case. A) Network training by the constant stimulus vector a = (1, 3, 2)^T. B) Network training by an alternating (period-two) sequence of vectors a = (1, 3, 2)^T and b = (1, 1, 1)^T. C) Network training by a period-three stimulus composed of vectors a = (1, 3, 2)^T, b = (1, 1, 1)^T, and c = (−1, 2, 0)^T. The learning rate has been fixed to ε = 0.1 for all three cases.
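The three training runs of Fig. 5.3 can be reproduced with the train sketch from Sec. 5.1.2.2 (our illustration; the iteration counts are chosen to exceed the convergence times quoted below):

```python
import numpy as np

a, b, c = [1.0, 3, 2], [1.0, 1, 1], [-1.0, 2, 0]
W1 = train([a], eps=0.1, steps=100)                        # panel A, cf. (5.39)
W2 = train([a, b], eps=0.1, dynamic=True, steps=20000)     # panel B, cf. (5.41)
W3 = train([a, b, c], eps=0.1, dynamic=True, steps=60000)  # panel C, cf. (5.43)
print(np.round(14 * W1), np.round(6 * W2), np.round(W3), sep="\n")
```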
Figure 5.3A (left and center panels) shows the training stimulus and the time evolution of the matrix distance (5.11) of the coupling matrix W(t) to the theoretically predicted trained matrix (5.39). During the training the coupling matrix exponentially converges to (5.39), and in a few steps (at the 7th iteration) the error decreases below 1%. Then the external input can be withdrawn (ξ = 0), and the RNN will replicate the previously learnt stimulus (Fig. 5.3A, right panel).

Network training by two alternating input vectors

Let us now consider the case when, during training, several different vectors are presented to the three-unit network. As mentioned above, this kind of training can be considered dynamic, in contrast to the static situation discussed in Sec. 5.1.4.3, since now the network activation varies in time. In the simplest case we train the network by two vectors, say a and b (assuming they are linearly independent). Then using Theorem 5.6, from (5.34) we have

W_\infty = (b, a, 0)(a, b, \alpha)^{-1}    (5.40)

where α can be chosen as

\alpha = \left( \begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix}, \; \begin{vmatrix} a_3 & a_1 \\ b_3 & b_1 \end{vmatrix}, \; \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix} \right)^T

We also note that if the training vectors a and b satisfy the basic equation, i.e. ⟨B, a⟩ = 0 and ⟨B, b⟩ = 0, then α = B. Using a = (1, 3, 2)^T and b = (1, 1, 1)^T we obtain α = (1, 1, −2)^T and finally

W_{2v} = \frac{1}{6} \begin{pmatrix} 5 & -1 & 2 \\ 21 & -9 & 6 \\ 13 & -5 & 4 \end{pmatrix}    (5.41)

Figure 5.3B shows the learning dynamics. As in the case of training by a constant input vector, the coupling matrix also converges exponentially to the predicted matrix. However, the convergence rate is lower. To obtain an error below 1% of the initial inter-matrix distance, 196 iteration steps are required.

Learning a period-three stimulus

Let us now train the network of 3 units by a stimulus of period 3, i.e. p = n and ξ = abcabcabc... Using Remark 2 to Theorem 5.6, from (5.35) we have

W_\infty = (b, c, a)(a, b, c)^{-1}    (5.42)

Setting a = (1, 3, 2)^T, b = (1, 1, 1)^T, and c = (−1, 2, 0)^T we obtain

W_{3v} = \begin{pmatrix} -5 & -2 & 6 \\ 9 & 6 & -13 \\ 0 & 1 & -1 \end{pmatrix}    (5.43)

Figure 5.3C shows the learning process dynamics. To obtain an error below 1% of the initial matrix distance, 6450 iteration steps are required. Thus, the longer the stimulus period, the slower the learning process.

5.1.4.4 Examples of Stimulus Learning: Overdetermined Case (p > n)

To illustrate stimulus learning in the overdetermined case we use a network of two coupled Su units (n = 2). As a physical model to learn we select the linear damped oscillator ẍ + rẋ + ω²x = 0, where r is the damping constant and ω is the oscillation frequency. Direct discretization of the harmonic oscillator equations yields

X(t+1) = W_{LO} X(t), \quad W_{LO} = \begin{pmatrix} 1 & 1 \\ -\omega^2 & 1 - r - \omega^2 \end{pmatrix}    (5.44)

Note that in general the solution of (5.44) may differ from the solution of its continuous counterpart.
Fig. 5.4. Examples of learning and replication of dynamical stimuli by a RNN consisting of two suppression units in the overdetermined case (i.e. the stimulus period p exceeds the number of units in the network n). A) Learning of period-8 oscillations generated by the discrete counterpart of the harmonic oscillator (5.44). Left panel: first 30 vectors (only the ξ_1 projection is shown) used for the network training. Middle panel: learning performance, i.e. dynamics of the distance of the coupling matrix W(t) to the theoretically predicted matrix (5.47). Right panel: stimulus replication. Once the learning has been finished with the corresponding criterion (5% or 0.5% error) we use the network to replicate the stimulus. For 5% error the network output strongly deviates from the presented stimulus, whereas for 0.5% the RNN reproduces the stimulus fairly well. The learning rate was fixed at ε = 0.05. B) The same as in (A), but now the network is to learn decaying oscillations. We use a stimulus 60 vectors long and present it to the network several times. The learning gradually improves each time we present the stimulus. After five stimulus presentations the learning reaches high quality and the RNN then replicates the stimulus well. The learning rate was fixed at ε = 0.5.
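The period-8 example of Fig. 5.4A can likewise be reproduced: the stimulus is generated by iterating the discretized oscillator (5.44) with r = 0 and ω² = 2 + √2, and the trained matrix is compared with the prediction (5.48). This sketch is our own illustration, reusing the train function from Sec. 5.1.2.2; the learning rate follows the figure caption.

```python
import numpy as np

w2 = 2 + np.sqrt(2)                              # omega^2 for a period-8 orbit
W_LO = np.array([[1.0, 1.0], [-w2, 1.0 - w2]])   # (5.44) with r = 0
x, stimulus = np.array([1.0, 0.0]), []
for _ in range(8):                               # one period of 8 training vectors
    stimulus.append(x.copy())
    x = W_LO @ x
W = train(stimulus, eps=0.05, dynamic=True, steps=4000)
print(np.round(W - W_LO, 4))                     # learnt matrix approaches (5.48)
```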
Periodic oscillations

For r = 0, (5.44) with appropriate ω produces a periodic oscillation (in terms of a periodic sequence of 2D vectors X) whose period depends on ω. Using (5.37) from Theorem 5.7, for a solution of period p we have the following condition on ω:

\det\left( W_{LO}^p(\omega) - I \right) = 0    (5.45)

For instance, for p = 8, from (5.44) and (5.45) we have

\left( (\omega^2 - 4)\omega^2 + 2 \right)^2 (\omega^2 - 2)^2 (\omega^2 - 4)\, \omega^2 = 0    (5.46)
where ω² = 2 ± √2 correspond to two different period-8 trajectories. Using either of them we can generate dynamical period-8 stimulus vectors a_1, ..., a_8. Presenting such a stimulus to the RNN, according to Theorem 5.7 the learning process should converge to the coupling matrix

W_{osc} = (a_2, a_3)(a_1, a_2)^{-1}    (5.47)

Using a_1 = (1, 0)^T, a_2 = (1, -2-\sqrt{2})^T, and a_3 = (-1-\sqrt{2}, 2+2\sqrt{2})^T we indeed arrive at

W_{8v} = \begin{pmatrix} 1 & 1 \\ -2-\sqrt{2} & -1-\sqrt{2} \end{pmatrix}    (5.48)

Figure 5.4A (left panel) shows the external period-8 stimulus which has been applied to the network for training. The stimulus learning process, as we observed in the underdetermined case, exponentially converges to the theoretically predicted coupling matrix (5.48) (Fig. 5.4A, center panel). The inter-matrix distance decreases below 5% in 127 iterations, and in 227 iterations the matrix error becomes less than 0.5%. Once the external stimulus has been learnt (stored), we can use the network to replicate the stimulus. Figure 5.4A (right panel) illustrates the network output for the two error thresholds used to truncate the learning process. With 5% error the output significantly diverges from the original stimulus already in the second period (i.e. 8 < t ≤ 16). However, with the more accurate (longer) learning (error 0.5%) the network reproduces the stimulus with high precision.

Damped oscillations

Let us now consider the damped oscillator (5.44) (i.e. r > 0). Now, strictly speaking, the stimulus is not periodic, since the oscillation amplitude decays in time. Nevertheless, the network can learn the stimulus, and Theorem 5.7 can be applied. To illustrate this case we use ω² = (3 − √5)/2 (which corresponds to period-10 undamped oscillations) and r = 0.1 to generate the training stimulus. Then we use the first three vectors a_1, a_2, and a_3 to evaluate the theoretical prediction for the learning convergence given by (5.47), which yields

W_{damped\ oscill} = \begin{pmatrix} 1 & 1 \\ -0.382 & 0.518 \end{pmatrix}    (5.49)

Numerical simulation (Fig. 5.4B) indeed shows that the learning process converges to the coupling matrix (5.49). However, depending on the learning rate ε and the damping constant r, the learning may not converge within a single stimulus presentation. This happens when the learning time scale is slower than the scale of the oscillation decay; hence the network “has no time” to learn the stimulus, which disappears too fast. In this case, a solution to train the network is to present the same stimulus several times. In Fig. 5.4B we truncated the stimulus to 60 vectors; the left panel shows two stimulus presentations (epochs). At each stimulus presentation the network gradually improves the coupling matrix, approaching the theoretically predicted one (5.49). After three stimulus presentations the error falls below 6%, and five presentations result in 1% error. The latter precision is enough to get a high-quality stimulus replication (Fig. 5.4B, right panel). Finally, we note that an nD network can learn the dynamical situations given by nD linear maps (i.e., loosely speaking, by any linear differential equation).
5.1.5 Dynamic Situations: Response of Trained IC-Unit Networks to a Novel External Stimulus
Let us consider a RNN of IC units when the learning process has been finished, i.e., the network now works in the operational phase with W = const. Then a novel, in general arbitrary, stimulus is given to the network, which provokes a response of the network state x(t), whose dynamics we shall describe now. For convenience we shall ascribe t < 0 (“past”) to the previously performed learning phase and t > 0 (“future”) to the operational phase. Then the external network activation can be represented by

f(t) = \begin{cases} \xi(t), & t < 0 \\ e, & t \geq 0 \end{cases}    (5.50)

where ξ(t) is the learnt stimulus (e.g. a or ab), which fulfills the basic equation, and e is an arbitrary (including e = 0) postlearning network activation. Then the network response x(t > 0) is expected to evolve in time to a new constant value (or to diverge in time) according to (5.6). To complete the problem we also have to specify the initial condition for the network state, i.e. x(t = 0) = x_0. Two possibilities to define x_0 can be considered: i) the so-called “continuous” case, when the initial network state in the operational phase is the same (continues) as the state at the end of the learning phase, i.e. x_0 = ξ(0); or ii) the network state being reset, i.e. x(0) = 0. If after learning we take off the external activation from all units (i.e. e = 0), in the continuous case the network will replicate the previously learnt stimulus, e.g. x(t > 0) = a for the training with a single vector a, or x(t > 0) = ababab... for the training with two alternating vectors a and b (Fig. 5.3, right panels). For e = 0 and x(0) = 0 (resetting) the network will stay at rest, x(t > 0) = 0. However, for a nonzero novel stimulus (e ≠ 0), the network will relax to a new state, in general different for Su and Mu networks.

5.1.5.1 Relaxation Dynamics of Su-Networks (n = 3)

In the case of a RNN composed of Su units we have the constraints x_i(t > 0) = e_i for all e_i ≠ 0. Thus only the units with zero external activations are allowed to evolve in time, while the others instantaneously (in one time step) adopt the new activation values. If non-zero activations are applied to all units, then the network shows no dynamics at all. Thus we assume that the novel stimulus is “incomplete”, i.e. it has zero (empty) components. In this section, for the sake of simplicity, we limit our analysis to the case of 3D networks. Moreover, without loss of generality we assume that the zero components are the first elements of the novel stimulus vector e.

Stimulus completion

First, for convenience, let us introduce auxiliary parameters that can be evaluated using the training vectors a and b:
Γ = ‖a‖²‖b‖² − ⟨a, b⟩²
Δij = ai bj ‖a‖² + aj bi ‖b‖² − (ai aj + bi bj)⟨a, b⟩        (5.51)
We note that for linearly independent a and b, Γ > 0. The response of a trained RNN to a novel incomplete stimulus follows from the following theorem:

Theorem 5.8. Assume that a 3D Su-network has been previously trained by a stimulus ξ(t) satisfying the basic equation ⟨B, ξ(t)⟩ = 0. Then a novel constant stimulus vector e with at least one zero component is applied.

1a) For the network trained by one vector (ξ = a) and e = (0, e2, e3), the network state always relaxes to

x1 = (e2 a2 + e3 a3) a1 / (‖a‖² − a1²),   x2 = e2,   x3 = e3        (5.52)

1b) For the network trained by two linearly independent vectors (ξ = abab . . .) and e = (0, e2, e3), the network dynamics depends on the following condition

|Δ11 / Γ| < 1        (5.53)

If (5.53) is satisfied then the network state relaxes to

x1 = (Δ12 e2 + Δ13 e3) / (Γ − Δ11),   x2 = e2,   x3 = e3        (5.54)

and diverges otherwise.

2a) For the network trained by one vector (ξ = a) and e = (0, 0, e3), the network state always relaxes to

x = (e3/a3) a        (5.55)

2b) For the network trained by two linearly independent vectors (ξ = abab . . .) and e = (0, 0, e3), the network dynamics depends on the following condition

| 2(a2 b1 − a1 b2)² / (Δ33 ± √(Δ33² + 4(a2 b1 − a1 b2)² Γ)) | < 1        (5.56)

If (5.56) is satisfied then the network state relaxes to

x = e3 (a + b) / (a3 + b3)        (5.57)

and diverges otherwise.

The proof is given in [36].

Remark 3: The final network state (given for the corresponding case by (5.52), (5.54), (5.55), and (5.57)) does not depend on the initial network state x0. The latter means that the network behaviour is robust to internal perturbations.
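The auxiliary parameters (5.51) and the stability condition (5.53) are straightforward to evaluate numerically. The sketch below (the helper name gamma_delta is ours) reproduces the unsafe training example Δ11/Γ = 25/14 quoted below, as well as the stable training pair used later in Fig. 5.5A.

```python
import numpy as np

def gamma_delta(a, b):
    """Auxiliary parameters (5.51) for training vectors a, b."""
    ab = a @ b
    G = (a @ a) * (b @ b) - ab**2
    D = (np.outer(a, b) * (a @ a) + np.outer(b, a) * (b @ b)
         - (np.outer(a, a) + np.outer(b, b)) * ab)   # D[i, j] = Delta_ij
    return G, D

# Unsafe training pair: condition (5.53) is violated
a = np.array([1.5, 3.0, 2.0]); b = np.array([1.0, 1.0, 1.0])
G, D = gamma_delta(a, b)
print(abs(D[0, 0] / G))          # 25/14 > 1: the network state diverges

# Safe training pair; predicted fixed point (5.54) for e = (0, e2, e3)
a = np.array([1.0, 0.5, 0.5]); b = np.array([1.0, 1.5, -0.5])
G, D = gamma_delta(a, b)
e2, e3 = 2.0, 3.0
x1 = (D[0, 1] * e2 + D[0, 2] * e3) / (G - D[0, 0])
print(abs(D[0, 0] / G) < 1, x1)  # stable; x1 = e2 + e3 (basic plane x1 = x2 + x3)
```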
Response of RNN trained by one vector

Let us first consider the case of a RNN trained by one vector a. According to Theorem 5.8 any perturbation of the network state exponentially decays to a fixed point. However, this fixed point may not belong to the plane defined by the basic equation. If only one of the three components of the novel external stimulus e is zero, then the network will relax to a state that fulfills the basic equation only when e2/a2 = e3/a3, i.e., when the new activation has the same ratio between its second and third nonzero components as that of the training vector. Indeed, in this case x1(t) → a1 e3/a3 and ⟨B, x∞⟩ = (e3/a3)⟨B, a⟩ = 0. Thus the network recovers the “lost” stimulus part. For the case of two vanishing components in the novel stimulus the network always finds a new solution satisfying the basic equation. Indeed, from (5.55) we have ⟨B, x∞⟩ = (e3/a3)⟨B, a⟩ = 0. Moreover, the new network state represents a scaled version of the previously learnt stimulus a. The scaling factor can be adjusted by tuning the nonzero component e3 of the novel stimulus. Thus, starting from arbitrary initial conditions, the network completes the previously learnt pattern and provides it on the output with the scale factor controlled by the novel stimulus.

Response of RNN trained by two vectors

Let us now consider the case when the network has been trained by two (linearly independent) vectors a and b. Note that now the stimulus vectors uniquely define the plane corresponding to the basic equation. At variance with the training by one vector, now the novel stimulus e does not necessarily lead to a relaxation of the network state to a fixed point. If (5.53) or (5.56) is not satisfied by the training stimulus (e.g. for a = (1.5, 3, 2)T and b = (1, 1, 1)T, Δ11/Γ = 25/14 > 1), the network state diverges. This means that for network stability a suitable (safe) network training is required. If the novel (arbitrary) stimulus has one zero component, then the necessary and sufficient condition ([36]) for the network to always relax to the basic plane (i.e. ⟨B, x∞⟩ = 0) is that a1 = b1 (which is always possible). Indeed, (5.53) is reduced to (a3 b2 − a2 b3)² > 0 and (5.54) to

x1 = −(B2 e2 + B3 e3)/B1,   x2 = e2,   x3 = e3        (5.58)

which satisfies the basic equation ⟨B, x⟩ = 0. Therefore, the network can be used to perform simple algebraic tasks, e.g.

x1 = x2 + x3        (5.59)
To illustrate the algebraic capabilities of the network, first, from the given algebraic equation (5.59) we have B = (1, −1, −1)T. Second, we choose two vectors satisfying the basic equation with a1 = b1. Using as an example a = (1, 0.5, 0.5)T and
Fig. 5.5. Response of a trained 3D Su RNN to a novel stimulus. A) Example of an algebraic task performed by the network (the projection of the phase space onto the plane (x1, x2) is shown). The network has first learnt the basic equation x1 = x2 + x3 by using as a stimulus the periodic sequence of two vectors a = (1, 0.5, 0.5)T and b = (1, 1.5, −0.5)T. Then as a novel activation we use (0, e2, e2). The initial value of the output of the first unit is either x1(0) = 0 or x1(0) = 10. Independently of the initial network state the network relaxes to the learnt basic equation (e2 + e3, e2, e3) = (2e2, e2, e2), thus dynamically “evaluating” the missing stimulus part. Arrows mark the direction of trajectories. B) Example of stimulus summation and scaling. As in (A) the network has been trained by two vectors, here a = (2, 1.5, 0.5)T and b = (−1, 1, −2)T. Then a novel activation of the form (0, 0, e3) is applied to the network. We use four values e3 = {−1, 0, 1, 2}. Independently of the initial condition x(0) the network “sums” and scales the learnt vectors, x = e3(a + b)/(a3 + b3). All trajectories end up at the line x1 = 2x2/5. Arrows mark the direction of trajectories.
b = (1, 1.5, −0.5)T, we train a 3D Su network similarly to what was done in Fig. 5.3B. Once the coupling matrix (given in this case by (5.40)) has been learnt, we can apply novel incomplete stimuli to the network. For illustration we use e = (0, e2, e2) (i.e. with equal second and third components). Then, according to Theorem 5.8 and (5.58), independently of the initial condition x(0), the network output relaxes to x = (2e2, e2, e2), which means that the network finds the missing stimulus value x1 = 2e2 = e2 + e2. Figure 5.5A shows the network trajectories starting from different initial conditions for different values of the stimulus component e2. All trajectories end up at the straight line x1 = x2 + x3, fulfilling the task (5.59). If the novel stimulus has two zero components and the training stimulus satisfies (5.56), then the network perturbation always relaxes to the basic plane. Moreover, the newly adopted network state represents a weighted mean of the two training vectors.
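The relaxation itself can be simulated directly. In the sketch below we take the learnt coupling matrix to be the orthogonal projector onto span{a, b}; this is an assumption consistent with the relaxed states (5.54) and (5.58), not a statement of the exact form of (5.40).

```python
import numpy as np

# Relaxation of a 3D Su network to an incomplete stimulus, as in Fig. 5.5A.
# Assumption: the learnt coupling matrix is the orthogonal projector onto
# span{a, b} (the plane of the basic equation x1 = x2 + x3).
a = np.array([1.0, 0.5, 0.5])
b = np.array([1.0, 1.5, -0.5])
V = np.column_stack((a, b))
W = V @ np.linalg.inv(V.T @ V) @ V.T      # projector onto span{a, b}

e = np.array([0.0, 2.0, 2.0])             # novel incomplete stimulus (0, e2, e2)
x = np.array([10.0, e[1], e[2]])          # perturbed initial state
for _ in range(40):
    s = W @ x                             # recurrent input
    x = np.where(e != 0, e, s)            # Su rule: clamped units keep e_i
print(x)                                  # -> (2*e2, e2, e2), i.e. x1 = x2 + x3
```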
Again, as in the case of training by one vector, the scale factor is controlled by the nonzero stimulus component. To illustrate this case we also use (5.59) as the basic equation and train the network by two vectors a = (2, 1.5, 0.5)T and b = (−1, 1, −2)T. Then according to (5.57) the network state relaxes to x = e3(2/3, 5/3, −1), which yields x1 = 2x2/5. Figure 5.5B shows four network trajectories for four different values of the novel stimulus e3, starting from different initial conditions x(0) and converging to x = e3(2/3, 5/3, −1). Note that the trajectories exhibit damped oscillations (jumps from one side to the other) around the attractor line x1 = 2x2/5.

Damping with constant input

Let us consider a network trained by a single vector a. Similarly to Sec. 5.1.3.4, the weights on the diagonal of the learnt coupling matrix, wii = ai²/‖a‖², can be interpreted as damping elements, and the damped matrix is given by

           ( d1/(1+d1)      a2h1/(1+d1)   · · ·   anh1/(1+d1) )
Wdamped =  ( a1h2/(1+d2)    d2/(1+d2)     · · ·   anh2/(1+d2) )        (5.60)
           (      ...            ...       . . .       ...     )
           ( a1hn/(1+dn)    a2hn/(1+dn)   · · ·   dn/(1+dn)   )

where hi = ai/(‖a‖² − ai²). Then for di = ai hi the damped matrix (5.60) reduces to W∞ as defined by (5.38); in other words, the training procedure (5.10) produces the damping factors ai hi. Clearly, for a set of arbitrarily chosen damping factors (di ≠ −1), Wdamped a = a, which means that the damping factors do not change the existence of the solution (interpreted as memory contents). However, as the following theorem states, their adjustment leads to a change in the relaxation rate provoked by a perturbation of the network state.

Theorem 5.9. Assume that a novel incomplete stimulus e is applied to a 3D Su-network with the modified coupling matrix (5.60) and ai ≠ 0, di ≥ 0 (i = 1, 2, 3).

i) If e = (0, e2, e3)T then the network state relaxes to

x1 = (e2 a2 + e3 a3) h1,   x2 = e2,   x3 = e3        (5.61)

Moreover, the relaxation rate is given by d1/(1 + d1), i.e. the larger d1, the slower the network approaches the stable solution.

ii) If e = (0, 0, e3)T then the network state relaxes to

x = (e3/a3) a        (5.62)

Damping factors affect the relaxation rate. In particular, for d1 = d2 = d, the larger d, the slower the network approaches the stable solution.

The proof is given in [36]. According to the theorem the damping factors do not alter the result of the network response to a novel stimulus (compare (5.52) and (5.55) with (5.61) and (5.62), respectively). However, the convergence rate can be adjusted by the damping factors.
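A short simulation confirms both claims of Theorem 5.9: the limit state does not depend on the damping, while the convergence factor is d1/(1 + d1). The helper w_damped (name ours) simply builds (5.60).

```python
import numpy as np

# Sketch of the damped coupling matrix (5.60) and the relaxation behaviour of
# Theorem 5.9 for a novel stimulus e = (0, e2, e3).
def w_damped(a, d):
    n = len(a)
    h = a / (a @ a - a**2)                 # h_i = a_i / (|a|^2 - a_i^2)
    W = np.outer(h, a)                     # entry (i, j) = a_j * h_i
    W[np.diag_indices(n)] = d              # diagonal entries d_i
    return W / (1.0 + d)[:, None]          # row i divided by (1 + d_i)

a = np.array([1.0, 3.0, 2.0])
e = np.array([0.0, 1.0, 2.0])
for d1 in (0.0, 2.0, 9.0):                 # larger d1 -> slower relaxation
    W = w_damped(a, np.array([d1, 0.0, 0.0]))
    x = np.array([5.0, e[1], e[2]])        # perturbed initial state
    for _ in range(200):
        x = np.where(e != 0, e, W @ x)     # Su update with clamped units
    x1_theory = (e[1] * a[1] + e[2] * a[2]) * a[0] / (a @ a - a[0]**2)
    print(d1, x[0], x1_theory)             # all converge to (5.61)
```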
Fig. 5.6. Modified Su with a first-order low-pass filter (LPF) incorporated in the circuit
Low pass filter and damping elements

In several studies concerning the behaviour of RNNs, instead of using simple summation units as done here, units containing blocks with dynamical properties are used. An often applied extension is the use of a low-pass filter at the output (Fig. 5.6). The low-pass filter damps fast oscillations, allowing the network to converge rapidly to a fixed point. The dynamics of a first order low-pass filter is given by
τ ẏ = −y + I(t)        (5.63)
where I is the signal on the filter input, y is the filter output, and τ defines the filter time constant, i.e. the decay velocity of the filter output in response to a delta-function input. Since our RNNs operate in discrete time, we also discretize (5.63) and obtain the equation describing the dynamics of the LPF block in Fig. 5.6

yi(t + 1) = ((τ − 1)/τ) yi(t) + (1/τ) Ii(t)        (5.64)
The behaviour of a Su-RNN based on the damped coupling matrix with damping factors di^dmp is identical to that of a network using the undamped matrix (i.e. with elements di = 0) but with each unit being equipped with a low-pass filter with time constant τi = di^dmp − 1.

5.1.5.2 Relaxation Dynamics of Mu-Networks (n = 3)

Let us now discuss the evolution of a Mu-network under a general step-like perturbation (5.50). Once the training process described in Sec. 5.1.4.3 has been finished, the dynamics of the Mu-network obeys (5.3). Note that the new activation may not satisfy the basic equation (5.7), i.e. ⟨B, e⟩ ≠ 0, and the coupling matrix W is defined by (5.38). When dealing with Su-networks we could easily predict the evolution of the units with non-zero activation: xi(t > 0) = ei for ei ≠ 0. For Mu-networks the answer to the same question is not so trivial. It has been shown that, as for the Su-network, one of the network output variables always appears to be fixed (unchanged) during the relaxation, i.e. xi(t > 0) = ei. However, there is no a priori indication as to which of the units stays fixed. First let us assume that the new stimulus vector e is given by rescaling the training stimulus, e = ρa (ρ a nonzero constant), and hence satisfies the basic equation. Then
the dynamics of (5.3) is simply reduced to x(t) = e, i.e. there is no time evolution of the network state. Consequently, in the following we consider the case e ≠ ρa, i.e. we apply to the network a novel, significantly different stimulus. Then we have the following result:

Theorem 5.10. For a Mu-network assume that the training vector a and the posttraining stimulus vector e are either both positive (a, e > 0) or both negative (a, e < 0), and e ≠ ρa. Let i ∈ {1, 2, ..., n} be the only unit satisfying

(⟨a, e⟩/‖a‖²) |ai| ≤ |ei|,   ai ≠ 0        (5.65)

Then xi(t > 0) = ei, i.e. the output variable corresponding to this unit is fixed, while the others evolve in time according to

xj(t) = λ(t) aj,   λ(t) = ei/ai + (⟨a, e⟩/‖a‖² − ei/ai)(1 − ai²/‖a‖²)^(t−1)        (5.66)

for j ≠ i. The final network state is given by

lim(t→∞) x(t) = (ei/ai) a        (5.67)
The proof is given in [36]. Theorem 5.10 identifies the unit whose output will be fixed in time. Note that it usually, but not always, corresponds to the highest element of the posttraining stimulus vector. Besides, the final network state (5.67) reproduces a rescaled version of the training vector a. Let us illustrate the theorem in the 3D case considering positive training and posttraining vectors (a, e > 0), such that the condition (5.65) is satisfied for the i = 3 unit, i.e.
κ a1 > e1,   κ a2 > e2,   κ a3 ≤ e3        (5.68)
where κ = ⟨a, e⟩/‖a‖². As done earlier, we use a = (1, 3, 2)T, and for the sake of simplicity we set e1 = 1. Then Fig. 5.7A shows a geometrical solution for the inequalities (5.68). We can have either e3 > e2 or e3 < e2 (above or below the bisectrix). In both cases, according to Theorem 5.10, the third unit of the network will have no dynamics, i.e. x3(t) = e3. Thus, as mentioned above, the unit with constant output usually, but not always, corresponds to the highest element of the post-training vector. Indeed, Figs. 5.7B and 5.7C confirm the theorem predictions: i) x3(t > 0) = e3, and ii) x1,2(t) → e3 a1,2/a3, which for the parameter values used in the figure gives x1,2 → (2, 6) for Fig. 5.7B, and x1,2 → (1.1, 3.3) for Fig. 5.7C. Thus the final network state is x = 2a in the first case and x = 1.1a in the second.
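Theorem 5.10 is simple to evaluate numerically. The sketch below (helper name ours) locates the fixed unit via (5.65) and returns the limit state (5.67); the stimulus e is chosen to reproduce the case x = 2a of Fig. 5.7B.

```python
import numpy as np

# Sketch of Theorem 5.10: find the unit that stays fixed and the final state
# of a Mu-network for a post-training stimulus e (a and e both positive).
def mu_relaxation(a, e):
    kappa = (a @ e) / (a @ a)
    # condition (5.65): the unique unit i with kappa*|a_i| <= |e_i|
    i = int(np.flatnonzero(kappa * np.abs(a) <= np.abs(e))[0])
    x_final = (e[i] / a[i]) * a            # limit state (5.67)
    return i, x_final

a = np.array([1.0, 3.0, 2.0])
e = np.array([1.0, 2.5, 4.0])              # satisfies (5.68) with i = 3 (index 2)
print(mu_relaxation(a, e))                 # -> (2, array([2., 6., 4.])), i.e. x = 2a
```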
5.1.6 IC-Networks with Nonlinear Recurrent Coupling
In the previous sub-sections we have illustrated the dynamical behavior of “simple” nonlinear networks, where each IC unit receives on its internal input a linear weighted sum
of the outputs of all units (5.1). In this context we may refer to such recurrent input as linear coupling. The use of nonlinear coupling can greatly enhance the flexibility of a RNN to represent and model more complex external stimuli or situations, e.g. nonlinear algebraic relationships or pattern completion. Specific cases of nonlinear RNNs could successfully be trained [32], but there was no general statement concerning the conditions under which such training is possible. Let us therefore investigate to what extent a RNN can be trained when “strong” nonlinear properties are introduced in the recurrent coupling. To generalize the network architecture shown in Fig. 5.1C we include nonlinear blocks into the recurrent pathway, i.e., elements whose output is a nonlinear function of the input. Figure 5.8 shows two possible network architectures with nonlinear blocks inserted between the unit outputs and their internal inputs. The difference between these architectures is in the order of operation, i.e., first nonlinearity and then coupling (Fig. 5.8A), or vice versa (Fig. 5.8B). Denoting the nonlinearity used in the network by g(x), in the first case we have the generalized version of (5.1) in the form
Fig. 5.7. Relaxation dynamics of a 3D Mu-network. A) Graphical solution for the constraint (5.68) with a = (1, 3, 2)T and e1 = 1. The gray area corresponds to the possible values of e2 and e3. The dashed line is the bisectrix. Two filled circles marked by letters B and C correspond to the stimulus values used in the simulations of the relaxation dynamics of the network shown in panels B and C, respectively. B,C) Time evolution of x1,2,3 (marked by circles, squares and triangles, respectively). In both cases the solution converges to x = a e3/a3, satisfying the basic equation.
Fig. 5.8. IC-networks with nonlinear blocks (shown as boxes) in the recurrent pathway (case n = 3). A) The nonlinear blocks are placed before the coupling. B) The nonlinear blocks appear after the coupling.

si(t) = ∑_{k=1}^{n} wik g(xk(t))        (5.69)
whereas in the second case

si(t) = g( ∑_{k=1}^{n} wik xk(t) )        (5.70)
5.1.6.1 Training the Network with Nonlinear Blocks Preceding the Coupling

Let us start with the network where the nonlinear blocks are placed before the coupling (Fig. 5.8A). We shall consider the learning of a static stimulus, i.e., when during the training the network receives a constant external input ξ(t) = a. In this case, generalizing the above described learning algorithm (5.9) or (5.10) and using the constant external input, we obtain

W(t + 1) = W(t)(I − ε g(a)aT) + ε aaT        (5.71)

where g(a) = (g(a1), . . . , g(an))T. For the learning process described by (5.71) the following theorem holds.

Theorem 5.11. Assume that network A (Fig. 5.8A) is trained by a constant stimulus a such that gT(a)a ≠ 0 and the learning rate satisfies

0 < ε < 2/(gT(a)a)        (5.72)
Then the learning process given by (5.71) converges. Moreover, for zero initial conditions W(0) = 0 we have

W∞ ≡ lim(t→∞) W(t) = aaT/(gT(a)a)        (5.73)

The proof is given in [36]. We also note that for g(x) = x, (5.73) reduces to (5.38). Using Theorem 5.11 we see that the learning convergence can be achieved by an appropriate choice of the learning rate ε for any nonlinear function g satisfying g(x)x > 0. The latter condition is true, for instance, for odd functions like tanh(x), x³, sign(x), etc.
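A minimal sketch of iterating the learning rule (5.71) with g(x) = tanh(x); the learning rate is chosen safely inside the bound (5.72), and the iteration is checked against the limit (5.73).

```python
import numpy as np

# Iterating the learning rule (5.71) for network A with g(x) = tanh(x);
# the limit matrix should approach a a^T / (g^T(a) a) as in (5.73).
a = np.array([1.0, 3.0, 2.0])
ga = np.tanh(a)
eps = 0.5 / (ga @ a)                       # well inside the bound (5.72)

W = np.zeros((3, 3))
for _ in range(200):
    W = W @ (np.eye(3) - eps * np.outer(ga, a)) + eps * np.outer(a, a)

W_inf = np.outer(a, a) / (ga @ a)
print(np.max(np.abs(W - W_inf)))           # ~0: converged to (5.73)
```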
5.1.6.2 Training the Network with Nonlinear Blocks Following the Coupling

Let us now consider the case when the nonlinear blocks are included after the coupling (Fig. 5.8B), hence we have the following learning rule

W(t + 1) = W(t) − ε g(W(t)a)aT + ε aaT        (5.74)

where g(W(t)a) = (g(⟨W1, a⟩), . . . , g(⟨Wn, a⟩))T, with Wi being the i-th row of W. The convergence of the learning algorithm (5.74) is given by the following theorem.

Theorem 5.12. Assume that network B (Fig. 5.8B) is trained by a constant stimulus a and g(x) ∈ C¹(ℝ) is a monotonic function on ℝ with g′(g⁻¹(ai)) ≠ 0. If the learning rate satisfies

0 < ε < min(1≤i≤n) 2/(g′(g⁻¹(ai)) ‖a‖²)        (5.75)

then the learning process given by (5.74) converges. Moreover, for zero initial conditions W(0) = 0, we have

W∞ ≡ lim(t→∞) W(t) = g⁻¹(a)aT/‖a‖²        (5.76)
The proof is given in [36]. Again, (5.76) reduces to (5.38) for g(x) = x. We also note that if g⁻¹(a) does not exist (e.g. tanh⁻¹(3)) then the learning process diverges.

5.1.6.3 Simulation Results

To illustrate the above stated theorems we use g(x) = x³ as the nonlinearity and a = (1, 3, 2)T as the training stimulus. Let us first consider the case of the nonlinearity preceding the coupling (Fig. 5.8A). We obtain g(a) = (1, 27, 8)T. Then from (5.72) we get εmax = 1/49 ≈ 0.020 and from (5.73)

WA = (1/98) ( 1  3  2 )
            ( 3  9  6 )        (5.77)
            ( 2  6  4 )
Fig. 5.9. Performance of learning a constant stimulus by IC-networks with nonlinear blocks in the recurrent pathway shown in Fig. 5.8 (g(x) = x³, a = (1, 3, 2)T, and ε = 0.02)
For network B, with the nonlinearity following the coupling (Fig. 5.8B), using the same training vector a = (1, 3, 2)T we find g⁻¹(a) = a^(1/3) = (1, 3^(1/3), 2^(1/3))T and g′(g⁻¹(a)) = 3(a^(1/3))². Then using Theorem 5.12 we get εmax ≈ 0.023 and

WB ≈ ( 0.071  0.214  0.142 )
     ( 0.103  0.309  0.206 )        (5.78)
     ( 0.090  0.270  0.180 )

Figure 5.9 shows the learning performance for the two network configurations shown in Fig. 5.8. In the simulations we used the same nonlinearity and the same learning rate. For network A (coupling follows nonlinearity, Fig. 5.8A) the matrix distance goes below 1% in 114 iterations, whereas for network B the same precision is achieved in 8 steps. Thus for the given nonlinearity the use of network architecture B is more beneficial.
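The comparison of Fig. 5.9 can be reproduced by iterating (5.71) and (5.74) side by side. In this sketch the matrix distance is taken as a relative Frobenius distance, which is our assumption; the qualitative outcome, that network B converges in far fewer steps, does not depend on that choice.

```python
import numpy as np

# Learning rules (5.71) (network A) and (5.74) (network B) with
# g(x) = x^3, a = (1, 3, 2)^T, eps = 0.02, as in Fig. 5.9.
a = np.array([1.0, 3.0, 2.0])
eps, g = 0.02, lambda x: x**3

WA_inf = np.outer(a, a) / (g(a) @ a)             # limit (5.73)
WB_inf = np.outer(np.cbrt(a), a) / (a @ a)       # limit (5.76)

WA = np.zeros((3, 3)); WB = np.zeros((3, 3))
for t in range(1, 121):
    WA = WA @ (np.eye(3) - eps * np.outer(g(a), a)) + eps * np.outer(a, a)
    WB = WB - eps * np.outer(g(WB @ a), a) + eps * np.outer(a, a)
    dA = np.linalg.norm(WA - WA_inf) / np.linalg.norm(WA_inf)
    dB = np.linalg.norm(WB - WB_inf) / np.linalg.norm(WB_inf)
    if t in (8, 114):
        print(t, dA, dB)   # network B reaches the 1% distance far sooner than A
```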
5.1.7 Discussion
If, following Fuster [21], one interprets the term “memory” to comprise not only declarative and/or procedural memory, but also does not separate individual memory from species memory, then the task of understanding memory means nothing less than understanding the brain. Therefore, investigation of memory organization and function is a challenging task. The goal of understanding memory function primarily requires the solution of two basic problems. One concerns the question of how individual memory items are stored in the form of neural networks. The second question is how such memory elements may be connected to form large temporal or contextual structures. Here we have dealt with the first problem, assuming that situations given in the environment are represented by RNNs of yet unspecified structure. Among the different situations to learn we have static ones, represented by one or more stimulus vectors that are considered temporally independent, and dynamic situations, represented by a sequence of temporally ordered stimulus vectors. For instance, situations described by linear or nonlinear differential equations belong to the latter case. Following the hypothesis that stimuli are stored in the inter-unit couplings, it has been shown that specific RNN architectures based on Su or Mu units can be used to learn and represent static and dynamic situations. Moreover, we have shown how the learning rule should be adjusted (selecting (5.9) or (5.10)) according to the situation. The obtained “situation models” might then be used to represent the learned stimuli or to control behavioral output. Examples of the tasks to be solved include pattern completion, solution of simple algebraic problems, learning and finding the position of a home relative to visible landmarks, or representing a model of the own body. Most of these investigations had previously been performed based on numerical studies only; therefore no proof concerning the generality or limits of these proposals had been given. This gap has now been closed in such a way that many of the qualitative statements could be proven or the quantitative limits be defined. Using the teacher forcing method and the traditional delta rule applied locally within a neuron, training RNNs is possible for situations described by linear basic equations, by linear differential equations, and also by nonlinear versions of both types. Also linear MMC networks can be learned this way. Concerning the training, there is no
difference between the use of Su or Mu units in the network, but the difference appears in the network responses to a novel stimulus. During the training of a RNN with linear couplings, the network learns the weight matrix W, whose damping factors can be the same or different for the various units. Both the learning of static situations and the learning of dynamic situations have been studied. We have shown that the learning of static situations works on a linearly independent subset of the training vectors, and the limit coupling matrix does not depend on their particular sequence. This is not the case for dynamic situations, where the time order is indeed important. Theorems 1 and 2 for the static case and Theorems 5, 6, and 7 for the dynamic case specify the coupling matrices that will be formed during learning and the appropriate limits for the learning rate in general, i.e. for the case of an arbitrary number of units in the network and arbitrary stimulus complexity. As an example for the static case, matrix (5.20) shows the weights (elements of W) of a three-unit network trained statically with two vectors. Matrix (5.26) shows how the weights can be interpreted to include damping factors that define the relaxation characteristics of the network after a disturbance. As an example for the dynamic case, the weights are given by (5.38), (5.40), or (5.42) for learning a periodic sequence of one, two or three stimulus vectors, respectively. If, after the learning of the periodic sequence, the external input is switched off, the network reproduces this input, either a constant vector or a temporal pattern (Fig. 5.3). We have shown that an nD network can learn a dynamic situation consisting of up to n different vectors. Moreover, the network will reproduce the learned vectors sequentially, in the order they have been shown to the network. Such a network ability can be used, for instance, to store a movie, where each frame can be considered as a training vector. Theorems 11 and 12 extend the results on learning static situations to the case of nonlinear coupling. They provide general conditions on the type of nonlinearities, the weight matrices developed during the training, and the limits for the learning rate for different arrangements of the nonlinear blocks in the inter-unit coupling. For instance, the nonlinearity preceding the linear part of the coupling (as shown in Fig. 5.8A) should satisfy the condition xg(x) > 0, which is fulfilled by odd functions like g(x) = tanh(x). The nonlinearity following the coupling (Fig. 5.8B) should satisfy the condition g′(g⁻¹(x)) > 0. We have shown that placing the nonlinearities at the output of the units requires more learning steps to converge compared to placing the same nonlinear functions at the internal input of the units. However, the latter network may cause problems if the inverse of the nonlinear function does not exist for the whole range of possible values of the stimulus vector, e.g. tanh⁻¹(x) exists only for |x| < 1. Generally, artificial neural networks are investigated in two versions, using either i) simple summation units, or ii) units equipped with dynamic properties, usually a low-pass filter. The latter cases are called continuous time recurrent neural networks [10]. Steinkühler and Cruse [41] have indicated that using summation units with appropriate positive weights on the diagonal of the coupling matrix may endow the network with low-pass filter like properties. Using the units with low-pass filter blocks (Fig.
5.6) here we have shown that indeed the damping factors di in Su units correspond to the decay time constants τi = di − 1 of the first order low-pass filter introduced at each unit. Recently, the inclusion of such low-pass filter units instead of setting the diagonal weights
to zero was shown to be advantageous in the case of a network used for landmark navigation [17]. Apart from learning situation models, an important question concerns how such networks behave after the learning process has finished. If a novel stimulus is then applied, the network will relax to a new state. As discussed by Kühn et al. [32], these networks can be used to represent short- and long-term memories. They can be used to perform simple algebraic tasks or more abstract pattern separations such as ABB or ABA. Applying the property of pattern completion, these representations could be used for the reconstruction of missing inputs, or for pattern recognition. For example, the missing value of x1 can be recovered given x2 and x3. These properties have now been investigated in quantitative detail for the case of three-unit networks. Regarding the static case, for a network consisting of three Su units trained by two linearly independent vectors, Theorem 3 shows that after a disturbance the network relaxes to the attractor plane defined by the basic equation. Therefore, the network can be used to compute simple algebraic equations. Theorem 4 provides information concerning the relaxation dynamics. The damping factors learnt lead to a relaxation trajectory orthogonal to the plane defined by the basic equation, i.e. the relaxation follows the shortest possible path. We have also shown how one can tune the damping factors to speed up the relaxation (decreasing the number of required time steps down to a single iteration). Similarly, with respect to dynamic situations, Theorem 8 considers the response of a network with Su units which has been trained by either one or two input vectors. For the network trained by one vector, providing a novel incomplete input whose two nonzero components correspond to the training vector, while the other stimulus component is set to zero, the network will restore this missing stimulus part. If two of the three inputs are set to zero, in general, the new network output will represent a scaled version of the learned vector. Moreover, the scaling factor can be controlled by the nonzero input. For the network trained by two vectors, application of an incomplete novel stimulus with two zero components leads to a new state on the network output satisfying the basic equation. Furthermore, we have shown that the new state represents a scaled sum of the training vectors. If specific conditions are fulfilled by the training vectors and one component of the novel stimulus is set to zero, the three-neuron network can be used to perform simple algebraic calculations, e.g. x1 = x2 + x3 (Fig. 5.5), provided some constraints are fulfilled which are not necessary when applying the learning procedure for the static case. Using Mu units, in the same situation the network after a disturbance will always relax to a scaled version of the training vector. One unit of the network will maintain its value during the relaxation, and the conditions determining which unit is selected are given for specific cases in Theorem 10. Having investigated the basic properties of the recurrent networks constructed from Su or Mu units, we now have a solid basis to approach the second goal in searching for a memory structure, namely how to connect different situation models in a sensible way in order to represent temporal sequences of such situation models, and to search for possibilities of how such situation models could be arranged within a dynamical hierarchy.
5.2 Probabilistic Target Searching

5.2.1 Introduction
Cognition is one of the core concepts of artificial intelligence, involving processes such as perception, memory, and reasoning, usually related to humans. Recent advances in behavioral studies and robot design have led to formulating the concept of “minimal cognition” (see e.g. [8, 9]): a model agent must be simple enough to be computationally and analytically tractable; otherwise there is no chance to deeply understand its behaviour. Let us address the navigation problem in mobile robotics, exemplifying the principle of minimal cognition by establishing the relationship between the agent's sensory-motor complexity, its life-time, and the complexity of its brain. In the 1980s and the first half of the 1990s the deterministic approach to guiding a robot towards a goal was highly prominent (see e.g. [45, 1, 13]). This approach implicitly assumes that the robot has unlimited computational capacity and complete information on the external world, and that measurements, e.g. of distance or position, have no error. Although this procedure may work in some ideal conditions, frequently such robots fail to behave properly. As expected, numerous studies show that living organisms (especially the simplest) have no huge computational capacity, and they neither rely on precise data nor build an exact, sophisticated description of the environment; yet they perform very successfully in a complex, time-evolving world. So a different rationale should be behind this success. Then a new methodology, opposite to the deterministic approach, gained interest. Robots are inherently uncertain about their own state and the state of the environment. Accordingly, this new approach is based on probabilistic principles that account better for the complexity of real-world applications [5, 44]. The core of the probabilistic approach is built up on two items: probabilistic perception and probabilistic control. For example, when guessing a quantity from sensor data, the probabilistic approach computes the whole probability distribution, instead of generating a single best guess only. Moreover, a probabilistic robot knows about its own ignorance, a key prerequisite of truly autonomous robots. As a result, such a probabilistic robot can gracefully recover from errors, as done in the kidnapped robot problem [20]. A recent example of the probabilistic approach is the MEDUSA algorithm [29]. While offering many advantages over the deterministic approach, probabilistic robots usually require a very high computational capacity, hence the need for approximations. In the following subsections we make use of the concept of minimally cognitive artifacts to answer the question: What does a simple agent with a limited life-time really require for constructing a useful representation of the environment? A widely accepted idea is that the system should exploit statistical dependences contained in the sensory signals and reduce redundancy [2, 6]. However, the limited life-time implies that sometimes an agent has no time or capability to generate an objective and action-independent response. The system should make use of a personalized representation of the world that depends on its own physical properties, which in certain circumstances can lead to the (not so) surprising conclusion that a complex brain is useless for a simple organism.
5.2.2 The Robot Probabilistic Sensory-Motor Layers
In this section we adopt the concept of probabilistic perception and motor control, and propose a robot platform, i.e. the sensory and motor layers. To reduce the problem dimension (thus staying within the minimal cognition principle) we consider very simple sensor and motor layers. This will later allow us to study how the navigational capabilities of the robot change when its brain evolves. Figure 5.10A provides a sketch of the general robot architecture including sensory, neural network (the “brain”), and motor layers. We shall deal with a robot having a limited life time, able to move in a limited space (a room), with the goal of reaching a target. The robot moves one step at a time (Fig. 5.10B) in either of four directions (left, right, up, down). The limited life-time implicitly forces the robot to go to the target in the minimal number of steps.

5.2.2.1 Sensory Layer

The robot sensory system perceives a certain stimulus emitted by the target (e.g. sound or smell), whose intensity decreases with distance. For illustration we assume that the stimulus intensity decays as:

I(r) = h I0 / (r + h)        (5.79)
where r is the distance from the current robot position to the target, I0 is the intensity at the target, and h is the cut-off constant. According to the limited resources concept, the exact world model (5.79) is not available to the robot. Instead, the robot can compare, but not measure, the stimulus intensity between two consecutive steps, obtaining the differential characteristic:

ΔIi = Ii − Ii−1 + δi        (5.80)
Fig. 5.10. A. General scheme of a robot with three main blocks: sensory system, neural network (brain), and motor control. B. The robot moves one step at a time in either of four directions (left, right, up, down) with probabilities depending on the input received by the motor layer.
where δi is the sensor noise describing the measurement uncertainty, uniformly distributed in [−δ, δ]. When the stimulus difference is much higher than the uncertainty, |Ii − Ii−1| ≫ δ, the sensory system provides a reliable output. The radius at which the robot “correctly hears” the target is:

r ≈ √(h I0 / δ)        (5.81)

The robot performs one step at a time, and consequently the sensory output occurs at integer multiples of the step time interval Δ. Without loss of generality we set Δ = 1. When the robot makes a step towards the target its sensory system produces a spike. Then the output can be presented as a sequence of δ-functions or spikes:

S(t) = ∑_k δ(t − mk)        (5.82)

where {m} is the set of steps with positive ΔI. Note that the stimulus measurement, i.e. inferring the absolute value of I(r), is a much stronger, unnecessary requirement on the robot's skill. The ability to test the gradient of the stimulus intensity means that the robot has a simple one-step memory capacity in the sensory layer. This allows us to draw the first important conclusion: minimal (proto) intelligence requires memory capacity in the sensory layer. We also add that a hardware implementation of such a sensor can be achieved with a few electric capacitors and switches.

5.2.2.2 Motor Layer

The robot motor layer is defined by two parameters: α and γ (Fig. 5.10A). These parameters, either fixed in time or changing from step to step, determine the robot's navigational behaviour.

Directionality parameter α

Let us for simplicity fix γ = 1, and assume that the sensory output is fed directly to the motor layer (i.e. there is no intermediate neural network between the sensory and motor layers). Then the robot's next step is defined by the success of the previous action, i.e. by the presence or absence of a sensory spike. From Fig. 5.10 (left inset) it follows that the probabilities (Fig. 5.10B) are given by:

pleft = pright = α,   (pahead, pback) = { (α, 1 − 3α),     a spike received
                                        { (1 − 3α, α),     otherwise        (5.83)
An increase of α diminishes the probability of going back and increases the probability of following the successful direction. Thus the constant α controls the robot directionality. In one limit, α = 1/4, we get the “Brownian” robot (stochastic behavior with equal probabilities in all directions); in the other, α = 1/3, we have a “purposeful” robot that always makes a step towards the actual target location. However, even in the latter case the robot makes the next step equiprobably in either of three directions, hence remaining probabilistic.
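A sketch of one sensory-motor cycle under the rule (5.83). Directions are treated as relative to the previous step, and the function names are ours.

```python
import numpy as np

# One sensory-motor cycle: differential sensor (5.80) and motor rule (5.83)
# with gamma = 1. Directions are relative to the last step taken.
rng = np.random.default_rng(0)

def sensor_spike(I_now, I_prev, delta=0.05):
    return (I_now - I_prev + rng.uniform(-delta, delta)) > 0   # (5.80)

def next_direction(spike, alpha=0.3):
    # probabilities in the order: ahead, back, left, right
    if spike:
        p = [alpha, 1 - 3 * alpha, alpha, alpha]               # (5.83)
    else:
        p = [1 - 3 * alpha, alpha, alpha, alpha]
    return rng.choice(["ahead", "back", "left", "right"], p=p)

print(next_direction(spike=True))   # p_back = 1 - 3*alpha is small near alpha = 1/3
```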
Modulation parameter γ (stochasticity level)

A successful previous step defines the target location in the right half-space (Fig. 5.11, left inset). Consequently, the robot's next move should be either to go ahead or to turn to the left or to the right. As mentioned above, the directionality parameter α is responsible for that. The parameter γ scales the probability of going ahead, thus generalizing the motor layer. The next robot step further divides the half-space into two unequal parts. Since we assumed no a priori knowledge of the target position, the probability of finding the target in the corresponding part is proportional to its area:

Psuccessful / Punsuccessful = S2 / S1   ⇒   Psuccessful = S2 / (S1 + S2)        (5.84)

where S1 and S2 are the areas behind and in front of the robot, respectively (Fig. 5.11, center and right insets). According to (5.84) the best strategy for the next step depends on the area ratio, i.e. on the current robot position in the room. Since the robot has no information on its position in the room, a good approximation is an open space, i.e. room boundaries far enough from the robot position. In this case the probabilities that a step ahead or a turn is a good one are:

Pahead = 1 − 1/L,   Pturn = 1/2 − 2/L        (5.85)

where L is the room size. For L ≫ 1, Pahead = 2Pturn; thus we suggest the optimal parameter value to be at least γ = 2. The motor layer parameters satisfy the condition β + (2 + γ)α = 1. From this condition γ is limited by

γ ≤ γmax = (1 − 2α)/α

We can equalize the probabilities of going ahead and back by setting the modulation parameter γeq = 1/(2α) − 1. Decreasing γ below γeq corresponds to a robot that tends to escape (go away) from the target. A robot with γeq < γ ≤ γmax will move towards the target.
Fig. 5.11. Sketch illustrating the relation between the areas of possible target locations. The ratio of the different areas gives the probability of the next step being successful or not.
Let us make an important remark here. The case γ → ∞, αγ → 1 corresponds to a deterministic robot. Indeed, such a robot always (with probability 1) follows the previous step in the direction of increasing stimulus intensity. Hence, in one limit, our concept of the probabilistic motor layer also includes the deterministic case. Thus the modulation parameter γ biases the robot behaviour from stochastic to deterministic.
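For the generalized layer the probability vector can be written down directly. The treatment of the unsuccessful case below (swapping the ahead and back probabilities) is our extrapolation of the γ = 1 rule (5.83); the text fixes only the successful case.

```python
# Generalized motor layer: p_left = p_right = alpha, p_ahead = gamma * alpha,
# p_back = beta = 1 - (2 + gamma) * alpha after a successful step.
def step_probs(alpha, gamma, spike):
    beta = 1.0 - (2.0 + gamma) * alpha
    assert beta >= 0, "gamma exceeds gamma_max = (1 - 2*alpha)/alpha"
    ahead, back = (gamma * alpha, beta) if spike else (beta, gamma * alpha)
    return {"ahead": ahead, "back": back, "left": alpha, "right": alpha}

print(step_probs(0.083, 10, spike=True))   # near-deterministic: p_ahead ~ 0.83
print(step_probs(0.25, 1, spike=True))     # Brownian robot: all ~ 0.25
```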
5.2.3 Obstacles, Path Complexity and the Robot IQ Test
We assume that the obstacles do not change the sensory information available to the robot, but the robot can never cross an obstacle. We recall that the robot discussed above has no information on the presence and positions of the obstacles. In general, obstacles on the pathway make the robot's task of reaching a target harder. Path complexity quantifies how complex the way from the start to the end is. We define it as the mean number of steps needed by a Brownian particle (i.e. a Brownian robot without step limit) to reach the target:

Pc = ⟨NBrownian⟩        (5.86)

Note that the definition (5.86) is universal, since it is expressed in the natural robot measure of the number of steps, and it is a function of the obstacle geometry and of the distance to the target, but not of the capabilities of a particular robot. Pc is bounded from below by the minimal initial distance to the target and can diverge when the target is unreachable, i.e. when no path connecting the starting robot position and the target exists. We quantified the path complexity for three room configurations (see table): empty, with a small obstacle, and with a complex obstacle. As expected, the empty room has the minimal Pc and the room with the complex obstacle exhibits the highest path complexity.
Room configuration        Empty    Small obstacle    Complex obstacle
Path complexity (×10⁵)    6.44     6.59              6.81
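The path complexity (5.86) can be estimated by a Monte Carlo run of an unlimited Brownian walker. The room size, the positions and the obstacle in this sketch (helper name ours) are illustrative only, not the configurations used for the table.

```python
import numpy as np

# Monte Carlo sketch of (5.86): mean number of steps of a Brownian walker
# (no step limit) from start to target on an L x L grid.
def path_complexity(L=30, start=(2, 2), target=(27, 27), obstacle=None, trials=200):
    rng = np.random.default_rng(1)
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    steps = []
    for _ in range(trials):
        pos, n = np.array(start), 0
        while tuple(pos) != target:
            new = pos + moves[rng.integers(4)]
            inside = (0 <= new).all() and (new < L).all()
            blocked = obstacle is not None and obstacle(*new)
            if inside and not blocked:
                pos = new                  # the walker never crosses an obstacle
            n += 1
        steps.append(n)
    return np.mean(steps)

print(path_complexity())                                     # empty room
print(path_complexity(obstacle=lambda x, y: 10 <= x <= 20 and y == 15))
```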
As a measure of the robot quality we introduce its “intelligence” coefficient:

IQ = k log( Pc / ⟨Nsteps⟩ )        (5.87)

where ⟨Nsteps⟩ is the mean number of steps required by the robot to reach the target, and k = Nsc/Ntr ≤ 1 is the robot's successfulness, i.e. the ratio of the number of successful target reachings to the number of statistical experiments (trials). Thus a robot that frequently fails to reach the target is penalized.
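Correspondingly, a hedged one-line sketch of (5.87); how failed trials enter ⟨Nsteps⟩ is not specified in the text, so here the mean is taken over successful trials only (our assumption).

```python
import numpy as np

# The IQ measure (5.87); n_steps collects step counts of successful trials only
# (an assumption about how the mean is taken).
def robot_iq(pc, n_steps, n_success, n_trials):
    k = n_success / n_trials
    return k * np.log(pc / np.mean(n_steps))

print(robot_iq(6.44e5, [900.0, 1100.0], n_success=2, n_trials=2))
# ~6.5 for k = 1, consistent with the IQ scale of Fig. 5.12
```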
Numerous statistical experiments have been carried out using the three different room configurations as a test bed. In the empty room the robot IQ (Fig. 5.12A) is a growing function of both parameters α and γ. The maximum “intelligence” occurs at γ = 10 and α = 0.083. However, when even a small obstacle appears on the path, this robot's performance strongly degrades (Fig. 5.12B). The better strategy would be to reduce the modulation parameter γ to 2 while still keeping α maximal. Thus for simple obstacles we need to decrease the robot determinism but still not change the strategy, i.e. α = αmax. In the presence of a complex obstacle (Fig. 5.12C) all curves for different modulation parameters γ have a maximum at intermediate values of α. Surprisingly, all maxima have similar robot IQ. This means that in the presence of complex obstacles the robot determinism is less important, but the strategy should be changed by decreasing the directionality coefficient α to an intermediate value. Indeed, to avoid a complex obstacle the robot should make a random walk backwards from the target and then to one of the sides. By just decreasing γ we cannot achieve this behavior, since the robot will most likely return to the trap.

5.2.4 First Neuron: Memory Skill
In the previous subsection we considered robot models having no “brain”, but only a direct sensory-motor pathway. Experiments with such a robot showed that different environmental conditions require different choices of the motor layer parameters. The next step introduces a simple brain into the simple agent. By brain we shall understand a network of artificial deterministic neurons. This network can have an internal dynamics and is activated by the robot sensory system. Let us now introduce the simplest brain consisting of a single neuron. The sensory spike train (5.82) innervates the neuron according to:

du/dt = −u/λ + J + wS(t)        (5.88)

where u is the “membrane” potential, J is the constant membrane current, w accounts for the synaptic strength, and λ is the membrane time constant. As we shall show below, this neuron adds a short-time memory skill to the robot, with λ defining the “forgetting” time scale.
Fig. 5.12. Robot IQ for different values of the motor layer constants and different configurations of the environment (room): A) in an empty room, B) in a room with a small obstacle, and C) in a room with a complex obstacle
To complete the robot design we couple the internal brain state to the motor parameters:

α = α(u, j),   γ = γ(u, j)        (5.89)

5.2.4.1 Memory Updating Rule

The time evolution of the membrane potential u at the j-th step is given by:

u(j + ε) = u(j − 1 + ε) e^(−(1−ε)/λ) + (1 − e^(−(1−ε)/λ)) λJ + w δj,m        (5.90)
where ε is an infinitesimal constant and δj,m is the Kronecker symbol defining whether or not a sensory spike occurs at the j-th step. Without loss of generality, rescaling and shifting the membrane voltage, u → wu + λJ, we get from Eq. (5.90) the following 1D map:

uj = B uj−1 + δj,m        (5.91)
where B = e^(−1/λ) defines how strongly the next brain state keeps track of the previous one. The map (5.91) describes short-time memory: the bigger λ is, the more slowly the system forgets its past. Our previous robot design corresponds to λ = 0, i.e. to a robot with no memory. In this case B = 0 and the evolution of the memory state is trivial. If the sensory system generated a spike on the previous step, i.e. the robot made a step towards the target, the memory variable is set to 1; otherwise u = 0. The map (5.91) can now be used as an updating rule for the memory state. We note that such a memory realization does not consume memory (computer resources), since the internal variable u is dynamically updated at each step using only the constant B, the previous value of u, and the last sensory output (presence or absence of a spike). Nevertheless the robot's past affects its current state, so the robot remembers its behavior. In the general case the map (5.91) admits complex solutions. Figure 5.13 illustrates some important particular cases.
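The map (5.91) is one line of code; the sketch below reproduces the particular solutions discussed in Fig. 5.13.

```python
import numpy as np

# The one-neuron memory map (5.91): u_j = B * u_{j-1} + spike_j.
lam = 2.0
B = np.exp(-1.0 / lam)

def run_map(spikes, u0=0.0):
    u = u0
    for s in spikes:
        u = B * u + s                               # updating rule (5.91)
    return u

print(run_map([1] * 100), 1 / (1 - B))              # all spikes: u -> 1/(1-B)
print(run_map([1, 0] * 100), B / (1 - B**2))        # period-2: u oscillates
print(run_map([1, 0] * 100 + [1]), 1 / (1 - B**2))  # between the two values
```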
α j = α0 ,
γj =
γ0 u j, 1−B
(5.92)
where α0 and γ0 are the coefficients of the motor layer of the robot with no memory. The robot motion has been considered in different room configurations, evaluating the robot IQ using (5.87). Figure 5.14 summarizes the results. Not surprisingly, in an empty room the memory did not give any gain relative to the memory-less robot. The robot IQ even decreases for longer memory scales (upper insets in Fig. 5.14). This is explained by the memory “inertness”. Due to the unavoidable presence of randomness in the sensory output, for big λ the memory state never reaches the optimal value. Instead it oscillates according to (5.91) around some suboptimal value, similar to Fig. 5.13D. This is equivalent to an effective decrease of the modulation parameter γ due to (5.92), which biases the robot from deterministic to stochastic behavior. As earlier observed (Fig. 5.12A), such a decrease leads to a reduction of the
robot IQ, leading to the conclusion that “thinking too much” is not good in a simple situation. However, the picture changes significantly in the presence of obstacles. Even a small obstacle was a big problem for the memory-less robot in the case γ = 10 (Fig. 5.12B). The simple memory unit with λ ≈ 1 greatly improves the robot performance (center insets in Fig. 5.14). In the presence of a complex obstacle the robot with the simplest brain did not leave any chance to its memory-less counterpart. For λ ≈ 2 it won both for γ = 2 and γ = 10 (Fig. 5.14, lower insets). Notably, the robot IQ rose to values relatively close to the IQ of the simple memory-less robot in the empty room (IQ = 4 vs 6), i.e. such a robot copes very successfully with the complex obstacle and performs practically equally well in the empty room. Note that the maximal IQ of the robot with memory is practically the same (about 4) for drastically different values of the modulation parameter γ, i.e. the robot behavior is more robust.
Fig. 5.13. Examples of periodic sensory outputs and the dynamics of the sensory neuron driven by those stimuli. A) Periodic spikes from the sensory unit lead to a successive increase of the state variable, uj = 1 + B + ··· + B^(j−1) = (1 − B^j)/(1 − B), by progressively decreasing steps towards the fixed point of the map (5.91) at u = (1 − B)⁻¹, i.e. the robot “learns” the good strategy; B) No spikes from the sensory unit, i.e. the robot goes away from the target, lead to a successive decrease of the state variable to u = 0; C) Period-two spikes, i.e. the robot makes steps towards and backwards. The internal state oscillates between two different states, u₊c = 1/(1 − B²) and u₋c = B/(1 − B²); D) A more complicated sensory signal (two steps forward, one backward) provokes a stable period-three orbit.
Fig. 5.14. Comparative performance (IQ test) of robots with (blue curves) and without (red dashed lines) memory skill in different environments. λ defines the memory time scale with λ = 0 corresponding to the memory-less robot.
5.2.5 Second Neuron: Action Planning
To make the robot versatile, capable of changing its strategy “on the fly”, let us now further improve the robot brain model by introducing one more neuron, which we shall refer to as the motor neuron. First, note that a change of strategy is not possible without having memory, i.e. planning is a “superior” brain function. Second, it essentially demands nonlinear dynamics, i.e. any linear extension can be viewed as a memory modification that may lead to quantitative but not qualitative improvements. The membrane evolution of the motor neuron obeys:

v̇ = f(v) + u        (5.93)
where f(v) is a nonlinear function that, for reasons of a potential hardware implementation, we choose in piece-wise linear form:
f(v) = { −v/k                                          if v < bk
       { ((b − a)v − b(1 − k)) / (1 − (1 + b − a)k)    if bk ≤ v ≤ 1 − (1 − a)k        (5.94)
       { −1 + (1 − v)/k                                if v > 1 − (1 − a)k

with a, b, and k being constants.
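A direct transcription of (5.94); the values of a, b, k below are illustrative (the text does not fix them), and the printed checks verify continuity at the two break points.

```python
# The piece-wise linear nonlinearity (5.94); a, b, k are illustrative constants.
def f(v, a=0.1, b=0.3, k=0.2):
    if v < b * k:
        return -v / k
    if v <= 1.0 - (1.0 - a) * k:
        return ((b - a) * v - b * (1.0 - k)) / (1.0 - (1.0 + b - a) * k)
    return -1.0 + (1.0 - v) / k

# Continuity check at the two break points v = bk and v = 1 - (1 - a)k
for v in (0.3 * 0.2, 1.0 - 0.9 * 0.2):
    print(v, f(v - 1e-9), f(v + 1e-9))   # left and right limits coincide
```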
Equations (5.93) and (5.94) define a piece-wise linear map, which is then used as an updating rule for the new brain variable, vj = g(vj−1, uj−1), similar to (5.91). The motor neuron allows better tuning of the motor layer parameters according to the task being performed by the robot at a given time instant. To test the robot performance we built a room model with different obstacles of various shapes (Fig. 5.15A). Three different robots are used, called according to their brain structures: “brain-less”, “sensory neuron”, and “sensory + motor neurons”; their task is to search for objects appearing at random positions in the room. The robots have a limited operational time interval (lifetime) to perform each task. If in the given time interval the robot finds an object, we assume that the task has been accomplished. Figure 5.15A shows examples of robot trajectories. All three robots find the object (green circle). However, they spend considerably different numbers of steps: 3927, 1542
Fig. 5.15. Relative performance level in target searching for three different robot designs. The robots are instructed to search for targets that appear at random positions in an untidy room of size (353 × 448). A) Examples of the robot trajectories from the initial position, marked by a blue square, to the target, marked by a green circle. Obstacles are shown in black. Blue dashed circles highlight the points where robots got stuck and spent many steps before finding a way out. B) Mean success rate of finding an object by the different robots for a given number of steps.
and 967. In general, the trajectories of the brain-less robot are quite straightforward, so it usually wins when the target is nearby. However, it also frequently gets stuck even in simple obstacles (blue circles in Fig. 5.15A). The internal neural dynamics makes the robot trajectory less direct but also helps it get out of obstacles. Figure 5.15B shows the mean success rate. For a given acceptable percentage of success the robots require different time intervals. For small rates, less than 30% (i.e. only about 1 of 3 objects is found by the robot), the brain-less robot wins the competition. This occurs due to the frequent appearance of objects just in front of the robot's initial position. However, such a robot cannot reach even 45% success for any time interval, i.e. this robot finds less than 45% of the targets whatever time it has. The presence of the memory neuron provides a gain for numbers of steps larger than 2000. Then the robot possessing the short-time memory significantly improves the success rate, finding those targets that are unreachable for the “brain-less” robot. This result is in accordance with Fig. 5.14, where the memory function led to a gain only in environments with obstacles on the robot's path. Finally, the second, “action planning” neuron improves the robot's skill even more. As expected, for a small number of steps it provides much better performance than the robot with the single memory neuron, matching the brain-less robot's performance at 1000 steps. For higher step numbers, the robot with action planning is superior. Thus the motor neuron profitably changes the robot strategy according to the task complexity.

5.2.6 Conclusions
We have considered a probabilistic model of a robot platform including sensory and motor layers. We have implemented a limited life-time, which may be given e.g. by the battery charge or by a limited operational time interval, and ascribed to the robot the goal of searching for a target. The robot's sensory skill consists of the simplest differential sensor, which explicitly measures neither the stimulus intensity nor the absolute position. The motor layer is described by two parameters controlling strategy and stochasticity. Clearly, the robot performance could easily be enhanced by improving the robot's sensory or motor layer. However, we claim that protocognitive behavior is not a consequence of highly sophisticated sensory-motor organs but emerges through an increase of the internal complexity and reutilization of the minimal sensory information. Using the platform as a test-bed, it appears that in the presence of obstacles the robot strategy and the level of determinism in the motor layer should be flexible. Simple obstacles can be overcome by reducing the robot determinism while keeping the strategy, whereas avoiding complex obstacles requires a strategy change. Starting from the simplest robot, we have introduced a “brain” based on a simple neural network using dynamical systems with intrinsic variables. This helped to solve the problem of extensive computations and also provided robustness against perturbations. We have shown that the most fundamental robot element, the short-time memory, is essential in obstacle avoidance. However, in the simplest conditions with no obstacles the straightforward memory-less robot is usually superior. Thus memory is only good in complex environments, and a higher-level brain function is necessary to improve the robot performance. We have shown that low-level action planning essentially involves nonlinear dynamics and provides a considerable gain to the robot performance,
dynamically changing the robot strategy. Yet for a very short life-time the brain-less robot was superior. Accordingly, we suggest that small organisms (or agents) with short life-times do not require complex brains and can even benefit from simple brain-like (reflex) structures. To some extent this might mean that the controlling blocks of modern robots are too complicated in view of their life-times and mechanical abilities.
5.3 Memotaxis Versus Chemotaxis

5.3.1 Introduction
The efficiency of the interaction of biological systems with the environment relies on an internal dynamical representation of the external world built from the incoming sensory information. How does the brain form a useful representation of the environment and deal with the uncertainty and incompleteness of the sensory information? Our answer to this question is based on considering robots as dynamical systems embedded in an essentially uncertain environment and guided by biophysically motivated neural networks. Then, instead of fighting the uncertainty, such a robot may benefit from it, showing behaviors similar to those widely observed in living beings, e.g. when searching for a target. The latter is one of the most demanding problems in modern robotics (see e.g. [23]). In the simplest case, the behaviour of bacteria can be described as a set of behavioral rules dynamically coupling sensory information obtained from the external world to motor actions taken by the organism. Bacteria rely on local concentration gradients to search for the source of a nutrient [8], a strategy called chemotaxis. An agent mimicking this behaviour can be designed according to the rule: make a step and measure the change in the sensory signal; if it is positive, make a step in the same direction, otherwise change the direction. Such a searching strategy requires the concentration to be high enough to ensure that its difference measured at two nearby locations is larger than the typical fluctuations [12, 11]. However, sensory signals usually decay rapidly with the distance to the source (e.g. exponentially for odors or for sound). Then a weak signal-to-noise ratio (SNR) strongly limits the distance at which the agent performance is still acceptable. Accordingly, existing chemotactic methods (see e.g. [22, 24, 27, 34, 39]) are only applicable to high SNR situations, and exhibit problems in essentially uncertain environments. On the other hand, an efficient signal processing system takes advantage of the statistical structure of its input signals, both to reduce the influence of noise and to generate compact representations of seemingly complex data. Since 1959, when Barlow proposed the efficient coding hypothesis [7], a widely accepted idea is that nervous systems (and robots) should exploit the statistical dependences contained in sensory signals. Following this idea, a different searching algorithm has recently been proposed [46]. In the framework of this so-called infotaxis algorithm, any search process can be thought of as the acquisition of information on the source location. Thus information plays a role similar to that of concentration in chemotaxis, and the infotaxis strategy locally maximizes the expected rate of information gain. Earlier works [16, 35] have shown that sensory information processing based on a simple neural network implementing short-time memory can improve the efficiency of target searching in a simplistic theoretical environment. This newly proposed strategy
has been called memotaxis. Here we develop this idea further, providing numerical and experimental results, and demonstrate how a memotactic pathway, working in parallel with chemotaxis, can extend the safe searching distance beyond the area defined by the SNR for the chemotaxis strategy.

5.3.2 Robot Model
5.3.2.1 Searching for a Target in an Environment with Uncertainty

We start with the description of a simplified robot platform, assuming that the robot occupies one cell and moves one step at a time t ∈ Z in a limited two-dimensional discrete space (a square arena) described by Cartesian coordinates 1 < x, y < L, x, y ∈ Z, where L is the size of the arena (Fig. 5.16A). The arena has an object (a target), placed at an arbitrary cell, which emits a sensory signal (e.g. sound waves) to be perceived by the robot. We assign to the robot the goal of searching for the target and then we quantify
Fig. 5.16. A) Sketch of a robot searching for a target in a discrete 2D space. The target (marked by a circle) emits a stimulus (e.g. sound or smell) available to the robot sensory system. The robot (marked by a square) makes one step at a time in any of four directions denoted for convenience by: 1 right, 2 up, 3 left, and 4 down. B) Zigzag principle of choosing the next step direction based on the delayed sensory output and the motor action. If s(t − 1) = 1 (as in the figure), then the next step is taken in the same direction (up in the figure) as the step at (t − 1), i.e. m(t + 1) = m(t − 1). C) Robot design including the sensory system, the motor layer and two sensory-motor pathways connecting them. Chemotactic behavior is obtained by activating the chemotactic pathway only, which delays the sensory output by two steps (block τ = 2) and then sends the sensory output unchanged to the motor layer (through the logical OR block). In memotaxis both pathways operate in parallel. The memotactic pathway consists of three feed-forward coupled blocks: delay (marked by τ), short-time memory (marked by λ), and thresholding. The binary outputs of the two pathways are combined by the logical OR block, whose output is used to generate the next motor action m(t + 1).
its searching performance in terms of the number of steps spent to find the target. To be physically plausible we only require the stimulus intensity to decrease with the distance to the target. In numerical simulations we have used the earlier introduced world model, valid for the description of sound intensity:

I(d) = h I0 / (d + h)      (5.95)

where d is the distance from the current robot position to the target, I0 is the intensity at the target, and h is the cut-off constant. The output of the robot sensory system can be described by the binary code

s(t) = 1, if ΔI(t) > 0;  s(t) = 0, otherwise      (5.96)

where ΔI(t) = I(t) − I(t − 1) + ζ(t) is the increment of the signal intensity at the two successive steps (t − 1) and t, and ζ(t) is the uncertainty coming from the environment and from the imperfection of the sensor, which we assume to be a Gaussian random process with zero mean and standard deviation σ. Further on we assume that the robot has a genetically programmed knowledge: the stimulus intensity is higher when the target is closer. If we admit uncertainty in the robot-environment system, the probability of the sensory system giving the correct output at a given position is [16]

Pcr(d, σ) ≅ (1/2) erfc( −h I0 / (√2 χ) )      (5.97)

where χ = d²σ is the uncertainty level. According to (5.97) the sensory system performance diminishes with the squared distance and with the noise level. Furthermore, there is a critical radius around the target position within which the sensory system practically makes no errors:

d_safe ≈ √( h I0 / (2σ) )      (5.98)

The corresponding safe uncertainty level, χ_safe = h I0 / 2, diminishes linearly with a decrease of the magnitude of the sensory intensity, which is a general limitation of chemotaxis searching strategies. Figure 5.17 shows examples of robot trajectories and statistical distributions of the number of steps spent to reach the target for three conditions: i. no uncertainty, ii. intermediate uncertainty, and iii. high uncertainty. Indeed, in the first case the robot goes directly to the target and reaches it in 130 steps (Fig. 5.17A, left panel), which practically corresponds to the initial distance of the robot to the target, d0 = 110 steps. The small mismatch is explained by the zigzag nature of the robot movements. However, when the uncertainty increases, the robot trajectory becomes more and more stochastic, and the number of steps increases exponentially.

5.3.2.2 Memotaxis Strategy: Short-Time Memory

Let us denote by m ∈ {1, 2, 3, 4} the directions of the steps available to the robot (Fig. 5.16A). Then the chemotactic robot makes the next step according to the rule [16]
m(t + 1) = m(t − 1), if s(t) = 1;  m(t + 1) = (m(t − 1) + 2) mod 4, otherwise      (5.99)
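The sensor model (5.95)-(5.96) and rule (5.99) can be condensed into a short sketch. The following is illustrative Python under our own assumptions: the numeric values of I0, h and σ are not fixed by the text at this point (h·I0 = 10 is chosen only to be consistent with the safe radii of Fig. 5.17 below).

```python
import numpy as np

# Illustrative values only; h*I0 = 10 matches the radii of Fig. 5.17.
I0, h, sigma = 0.1, 100.0, 0.005

def intensity(d):
    """World model (5.95): intensity decays with the distance d to the target."""
    return h * I0 / (d + h)

def sensor(d_now, d_prev, rng):
    """Binary differential sensor (5.96) with Gaussian uncertainty zeta(t)."""
    dI = intensity(d_now) - intensity(d_prev) + rng.normal(0.0, sigma)
    return 1 if dI > 0 else 0

def chemotaxis_next(m_prev, s):
    """Zigzag rule (5.99): keep the previous direction on a positive sensory
    output, otherwise turn to the opposite quadrant. Directions are coded
    1 right, 2 up, 3 left, 4 down; (m + 2) mod 4 is kept inside {1,...,4}."""
    return m_prev if s == 1 else (m_prev + 1) % 4 + 1
```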
The operational scheme (5.99) yields a turn at each step towards the quadrant where, according to the sensory output, the robot expects to find the target. Figure 5.16B illustrates an example of the robot movements. Suppose that the robot made two steps, up and left, marked by narrow arrows, which correspond to m(t − 1) = 2 and m(t) = 3, respectively. In the case of negligible noise intensity (σ = 0) the robot sensory system would give s(t − 1) = 1 and s(t) = 0. Let us now introduce a new sensory-motor pathway, memotaxis, that incorporates short-time memory and consists of three feed-forward coupled blocks (Fig. 5.16C). The first block delays the sensory signal by τ steps, i.e. its output is given by s(t − τ). The second block is a one-dimensional discrete dynamical system realizing information storage. The state variable u(t) of this block is described by the following 1D map [35]:

u(t) = e^(−1/λ) u(t − 1) + (1 − e^(−1/λ)) s(t − τ)      (5.100)

where λ is a constant accounting for the forgetting time scale of the memory. The higher its value, the slower the system forgets its past.
Fig. 5.17. Dynamics of the deterministic chemotaxis-based robot in different conditions: i. no uncertainty (noise level σ = 0.001), ii. low uncertainty (noise level σ = 0.005), and iii. high uncertainty (noise level σ = 0.01). A) Examples of robot trajectories in a square empty room of 150×150 steps. The filled circle and square indicate the target and the robot initial positions, respectively. The robot reaches the target in 130, 370, and 1118 steps for the no, low, and high uncertainty cases, respectively. Shaded circular areas surrounding the target position correspond to the safe radius within which the robot sensory system practically makes no error. The radii are 71, 32 and 22 steps for the corresponding panels. B) Probability densities of the length of normalized trajectories (ratio of the number of steps spent by the robot, Nst, to the initial distance to the target, d0 = 110 steps) evaluated over 10000 statistical trials. The median numbers of steps for the three cases are 130, 338, and 673, which indicates a fast increase of the robot stochasticity.
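The radii quoted in the caption can be checked directly against (5.98). Assuming h I0 = 10 (for instance I0 = 0.1 and h = 100; these values are our illustrative guess, not stated in the text), one obtains d_safe ≈ √(10/0.002) ≈ 71, √(10/0.01) ≈ 32 and √(10/0.02) ≈ 22 steps for σ = 0.001, 0.005 and 0.01, respectively, in agreement with the shaded areas of Fig. 5.17.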
Note that such a realization of memory (a behavioral ability) does not consume memory in the hardware sense, since the internal variable is dynamically updated at each step. In the case λ = 0 the evolution of the memory state is trivial, u(t) = s(t − τ), i.e. the memory state repeats (with delay) the output of the sensory system. For λ > 0 the past nontrivially affects the current memory state. Since u(t) is a cumulative potential, its maximum value, 1, is approached when a long continuous sequence of sensory spikes is received. An occasional zero sensory output (missing spike) can then be caused by a failure of the sensory system due to nonzero uncertainty. To detect this situation we use the threshold block in the memotactic pathway (Fig. 5.16C), whose output is given by

b(t) = 1, if u(t) > u_th;  b(t) = 0, otherwise      (5.101)

where 0 < u_th < 1 is the decision tolerance level (threshold). Finally, the outputs of the chemotactic and memotactic pathways are combined in the OR block. Thus when the memory state u(t) is above the threshold value, the robot moves as if a sensory spike had been received, regardless of the current output of the sensory system. First we tuned the memotactic pathway. We performed statistical tests changing the parameter values and evaluating the mean performance gain averaged over 10000 trials:

G(λ, τ, u_th) = M_chem / M_mem      (5.102)
where M_mem and M_chem are the median lengths of trajectories for the memotactic and chemotactic robots, respectively. We have found that (τ = 3, λ = 1.5, u_th = 0.75) is the optimal parameter set, providing the maximal performance of the memotaxis strategy. Finally, we tested how the performance gain changes with an increase of the sensory uncertainty level. Figure 5.18 shows that the benefit from the use of the memotactic pathway increases with increasing uncertainty in the robot environment.
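For concreteness, the three memotactic blocks and the final OR combination of Fig. 5.16C can be sketched as follows (hypothetical Python; the class structure is ours, while the delay, forgetting factor and threshold follow (5.100)-(5.101) and the tuned values reported above):

```python
import math
from collections import deque

class MemotaxisPathway:
    """Delay -> leaky memory map (5.100) -> threshold (5.101)."""
    def __init__(self, tau=3, lam=1.5, u_th=0.75):   # tuned values from the text
        self.buffer = deque([0] * tau, maxlen=tau)   # delay block: holds s(t-tau)..s(t-1)
        self.alpha = math.exp(-1.0 / lam)            # forgetting factor e^(-1/lambda)
        self.u = 0.0                                 # cumulative memory potential in [0, 1]
        self.u_th = u_th                             # decision tolerance level

    def __call__(self, s):
        s_delayed = self.buffer[0]                   # s(t - tau)
        self.buffer.append(s)
        self.u = self.alpha * self.u + (1.0 - self.alpha) * s_delayed   # map (5.100)
        return 1 if self.u > self.u_th else 0        # threshold block (5.101)

# The two pathway outputs are combined by the OR block; the chemotactic
# branch carries the raw sensory output delayed by two steps (Fig. 5.16C).
memo = MemotaxisPathway()
chem_delay = deque([0, 0], maxlen=2)

def next_motor_input(s):
    b = memo(s)                                      # memotactic output b(t)
    s_chem = chem_delay[0]                           # chemotactic output s(t - 2)
    chem_delay.append(s)
    return 1 if (s_chem or b) else 0                 # feeds rule (5.99)
```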
Fig. 5.18. Performance gain provided by the memotaxis strategy relative to chemotaxis as a function of the normalized sensory uncertainty level. The dashed line marks the no-gain level. Memotaxis is useful at strong uncertainties.
5.3.2.3 Experimental Results

A standard four-wheel-drive rover, the Lynxmotion 4WD2, controlled through a differential drive system, has been used to perform the experiments (Fig. 5.19). Technical details about the robot are reported in Chapter 11. To validate the simulation results, the robot was equipped with two microphones and an analog board capable of recognizing a cricket chirp and of providing two output signals proportional to the intensity of the sound at each microphone. A digital compass has been embedded in the system to monitor the angle between the current robot orientation and the desired orientation during a robot turn, in order to achieve the characteristic zigzag movements used in the theoretical development (Fig. 5.16B). Table 5.1 collects statistics for each robot design over 20 experimental trials. The chemotactic robot reached the target in 12 runs, whereas the memotactic robot succeeded in 18 runs, i.e. the memotactic robot was 50 percent more successful. We also evaluated the mean number of robot steps for each strategy over the successful runs; on average the memotactic robot was 10 percent faster. Thus the experimental results confirm the advantage of using the memotaxis strategy.

Table 5.1. Comparative statistics (over 20 trials) of the robot with chemotaxis and memotaxis searching strategies. The mean number of steps spent by the robot has been evaluated over successful trials.

Robot design         | Success rate (%) | Mean number of steps
Chemotaxis (λ = 0)   | 60               | 67.5
Memotaxis (λ = 2)    | 90               | 61.3
Fig. 5.19. Experimental setup. Lynxmotion 4WD2 robot equipped with a digital compass, two microphones, an analog board for recognizing a cricket chirp, and a microcontroller for handling the whole system and communication with the main computer, where all searching strategies have been implemented.
5.3.3 Conclusions
The searching behavior of simple living beings, such as chemotactic bacteria, can be approximated by a gradient-based rule. Then an agent mimicking this behavior
can successfully search for a target if there is a sufficiently high signal-to-noise ratio (SNR) in the robot-environment system. Two different robot platforms have been used to show that the performance of the chemotactic strategy is indeed strongly diminished when the uncertainty level rises above a “safe” level. We have provided an estimate of the safe uncertainty level and have shown that it linearly diminishes with a decrease of the intensity of the sensory stimulus. Experiments show that outside of the safe area the robot behavior becomes stochastic and it frequently (in 40% of cases for the zigzag robot) fails to find the target. To overcome this problem several approaches have recently been developed. One of them, designed for the detection of odor sources, is called infotaxis [46]. This strategy maximizes the information gain at each step. The memotaxis strategy proposed here [16, 35] requires minimal computational resources and can easily be implemented in hardware. The approach is based on the use of a dynamical system modeling short-time memory, an attribute used for searching by e.g. moths. When interacting with the environment this dynamical system collects information on successful steps, and its output can then be used to correct the decision on the next step taken by the gradient strategy. Thus, similar to infotaxis, the memotactic robot can take steps against the sensory gradient. We have shown that this ability significantly improves the robot performance in low SNR environments. For the two different robot implementations (zigzag and rover) the sensory-motor systems have been designed such that the same sensory-motor pathway realizing memotaxis can be used. Consequently, the proposed memotaxis implementation is of universal value, i.e. independent of the robot platform. Using two robot models we have shown that the memotaxis strategy effectively suppresses the stochasticity observed in the behavior of chemotactic robots in the region of low SNR. We have shown by numerical simulation that the memotaxis strategy provides from 50% to 200% performance gain relative to the chemotactic robot. This result has been confirmed experimentally using zigzag and roving robots moving in a real environment.
References

1. Arras, K., Tomatis, N., Jensen, B., Siegwart, R.: Multisensor on-the-fly localization: Precision and reliability for applications. Robotics and Autonomous Systems 34, 131–143 (2001)
2. Atick, J., Redlich, N.: Towards a theory of early visual processing. Neural Comput. 2, 308–320 (1990)
3. Atick, J.: Could information theory provide an ecological theory of sensory processing? Network 3, 213–251 (1992)
4. Atick, J., Bialek, W.: Princeton Lectures on Biophysics. World Scientific, Singapore (1992)
5. Atrash, A., Koenig, S.: Probabilistic Planning for Behavior-Based Robots. In: Proc. FLAIRS Conference, pp. 531–535 (2001)
6. Attneave, F.: Some informational aspects of visual perception. Psychol. Rev. 61, 183–193 (1954)
7. Barlow, H.: Sensory communication. MIT Press, Cambridge (1961)
8. Beer, R.D.: Toward the evolution of dynamical neural networks for minimally cognitive behavior. In: Maes, P., Mataric, M., Meyer, J., Pollack, J., Wilson, S. (eds.) From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior, pp. 421–429. MIT Press, Cambridge (1996)
9. Beer, R.D.: The dynamics of active categorical perception in an evolved model agent. Adaptive Behavior 11, 209–243 (2003)
10. Beer, R.D.: Parameter space structure of continuous-time recurrent neural networks. Neural Computation 18, 3009–3051 (2006)
11. Berg, H.C.: Random Walks in Biology. Princeton Univ. Press, Princeton (1993)
12. Berg, H.C., Purcell, E.M.: Physics of chemoreception. Biophys. J. 20, 193–219 (1977)
13. Borenstein, J., Koren, Y.: The vector field histogram - fast obstacle avoidance for mobile robots. IEEE Journal of Robotics and Automation 7(3), 278–288 (1991)
14. Brooks, R.: A robust layered control system for a mobile robot. IEEE J. Rob. Autom. 2, 14–23 (1986)
15. Brooks, A.: Hardware retargetable distributed layered architecture for mobile robot control. In: Proceedings IEEE Robotics and Automation, pp. 106–110 (1987)
16. Castellanos, N.P., Makarov, V.A., Patanè, L., Velarde, M.G.: Sensory-motor neural loop discovering statistical dependences among imperfect sensory perception and motor response. In: Proc. of SPIE, vol. 6592 (2007), doi:10.1117/12.724327
17. Cruse, H., Hübner, D.: Selforganizing memory: active learning of landmarks used for navigation (in preparation)
18. Cruse, H., Sievers, K.: A general network structure for learning Pavlovian paradigms (in preparation)
19. Elman, J.L.: Finding structure in time. Cognitive Science 14, 179–211 (1990)
20. Engelson, S., McDermott, D.: Error correction in mobile robot map learning. In: Proc. of the 1992 IEEE Int. Conf. on Robotics and Automation, pp. 2555–2560 (1992)
21. Fuster, J.M.: Memory in the Cerebral Cortex: an Empirical Approach to Neural Networks in the Human and Nonhuman Primate. MIT Press, Cambridge (1995)
22. Grasso, F.W., Consi, T.R., Mountain, D.C., Atema, J.: Biomimetic robot lobster performs chemo-orientation in turbulence using a pair of spatially separated sensors: Progress and challenges. Robotics and Autonomous Systems 30, 115–131 (2000)
23. Hamza, M.H.: Robotics and Applications. In: RA 2006, vol. 210. ACTA Press (2006)
24. Herrero, M.A.: The mathematics of chemotaxis. Handbook of Differential Equations 3, 137–193 (2007)
25. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. 79, 2554–2558 (1982)
26. Hopfield, J.J.: Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. 81, 3088–3092 (1984)
27. Ishida, H., Kagawa, Y., Nakamoto, T., Moriizumi, T.: Odor-source localization in the clean room by an autonomous mobile sensing system. Sens. Actuators B 33, 115–121 (1996)
28. Jaeger, H., Haas, H.: Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 304, 78–80 (2004)
29. Jaulmes, R., Pineau, J., Precup, D.: Probabilistic robot planning under model uncertainty: an active learning approach. In: NIPS Workshop on Machine Learning Based Robotics in Unstructured Environments (2005)
30. Kindermann, T., Cruse, H.: MMC - a new numerical approach to the kinematics of complex manipulators. Mechanism and Machine Theory 37, 375–394 (2002)
31. Kortenkamp, D., Weymouth, T.: Topological mapping for mobile robots using a combination of sonar and vision sensing. In: Proceedings of the AAAI, pp. 979–984 (1994)
32. Kühn, S., Beyn, W.J., Cruse, H.: Modelling memory functions with recurrent neural networks consisting of input compensation units: I. Static situations. Biological Cybernetics 96, 455–470 (2007)
33. Kühn, S., Cruse, H.: Modelling memory functions with recurrent neural networks consisting of input compensation units: II. Dynamic situations. Biological Cybernetics 96, 471–486 (2007)
34. Kuwana, Y., Nagasawa, S., Shimoyama, I., Kanzaki, R.: Synthesis of the pheromone-oriented behaviour of silkworm moths by a mobile robot with moth antennae as pheromone sensors. Biosens. Bioelectron. 14, 195–202 (1999)
35. Makarov, V.A., Castellanos, N.P., Velarde, M.G.: Simple agents benefit only from simple brains. Trans. Engn., Computing and Tech. 15, 25–30 (2006)
36. Makarov, V.A., Song, Y., Velarde, M.G., Huber, D., Cruse, H.: Elements for a general memory structure: Properties of recurrent neural networks used to form situation models. Biological Cybernetics (2008)
37. Palm, G., Sommer, F.T.: Associative data storage and retrieval in neural networks. In: Domany, E., van Hemmen, J.L., Schulten, K. (eds.) Models of Neural Networks III. Association, Generalization, and Representation, pp. 79–118. Springer, New York (1996)
38. Pasemann, F.: Complex dynamics and the structure of small neural networks. Network: Computation in Neural Systems 13, 195–216 (2002)
39. Russell, R.A., Bab-Hadiashar, A., Shepherd, R., Wallace, G.G.: A comparison of reactive robot chemotaxis algorithms. Rob. Auton. Syst. 45, 83–97 (2003)
40. Schilling, M., Cruse, H.: The evolution of cognition, from first order to second order embodiment. In: Wachsmuth, I. (ed.) (2008)
41. Steinkühler, U., Cruse, H.: A holistic model for an internal representation to control the movement of a manipulator with redundant degrees of freedom. Biol. Cybernetics 79, 457–466 (1998)
42. Strang, G.: Introduction to Linear Algebra. Wellesley-Cambridge Press (2003)
43. Tani, J.: Learning to generate articulated behavior through the bottom-up and the top-down interaction processes. Neural Networks 16, 11–23 (2003)
44. Thrun, S.: Probabilistic algorithms in robotics. AI Magazine 21, 93–109 (2000)
45. Ulrich, I., Borenstein, J.: Reliable obstacle avoidance for fast mobile robots. In: IEEE Int. Conf. on Robotics and Automation, pp. 1572–1577 (1998)
46. Vergassola, M., Villermaux, E., Shraiman, B.I.: Infotaxis as a strategy for searching without gradients. Nature 445, 406–409 (2007)
47. Wessnitzer, J., Webb, B.: Multimodal sensory integration in insects - towards insect brain control architectures. Bioinspiration and Biomimetics 1, 63–75 (2006)
6 From Low to High Level Approach to Cognitive Control

P. Arena, S. De Fiore, M. Frasca, D. Lombardo, and L. Patanè

Department of Electrical, Electronic and System Engineering, University of Catania, I-95125 Catania, Italy
{parena,lpatane}@diees.unict.it
Abstract. In this Chapter the application of dynamical systems to model reactive and precognitive behaviours is discussed. We present an approach to navigation based on the control of a chaotic system that is enslaved, on the basis of sensory stimuli, into low order dynamics that are used as percepts of the environmental situations. Another aspect taken into consideration is the introduction of correlation mechanisms, which are important for the emergence of anticipation. In this case a spiking network is used to control a simulated robot that learns to anticipate sensory events. Finally, the proposed approach has been applied to solve a landmark navigation problem.
6.1 Introduction

In living beings, cognitive capabilities are based on inherited behaviours that are important for survival. These reactive behaviours, triggered by external stimuli, are the basic blocks of a cognitive architecture. In this Chapter we propose a new technique, called the weak chaos control technique, that has been used to implement the reactive layer of a sensing-perception-action scheme. This control mechanism is functionally inspired by the research activity of Prof. W. Freeman and coauthors on the formation of percepts in the olfactory bulb of rabbits. The proposed model has been designed to be embedded in hardware for the real-time control of roving robots. The definition of a strategy for the generation of reactive behaviour represents a first step toward the realization of cognitive control. Growing in complexity, the importance of a correlation layer becomes evident. To discuss these aspects, we introduce a network of spiking neurons devoted to navigation control. Three different examples, dealing with stimuli of increasing complexity, are investigated. First, a simulated robot is controlled to avoid obstacles through a network of spiking neurons. Then, a second layer is designed to provide the robot with a target approach system, which makes it able to move towards visual targets. Finally, a network of spiking neurons for navigation based on visual cues is introduced. In all cases it has been assumed that the robot knows some a priori responses to low level sensors (i.e. to contact sensors in the case of obstacles, to proximity target sensors in the case of visual targets, or to the visual target for navigation with visual cues) and has to learn the response to high level stimuli (i.e. range finder sensors or visual input). The biologically plausible paradigm
of Spike-Timing-Dependent-Plasticity (STDP), already introduced in Chapter 3, is included in the network to make the system able to learn the high level responses that guide navigation through a simple unstructured environment. The learning procedure is based on classical conditioning. To conclude the Chapter, a new methodology for landmark navigation, based on correlation mechanisms, is introduced. For animals as for artificial agents, the whole problem of landmark navigation can be divided into two parts: first, the agent has to recognize, within the dynamic environment, space-invariant objects which can be considered suitable “landmarks” for driving the motion towards a goal position; second, it has to use the information on the landmarks to effectively navigate within the environment. Here, the problem of determining landmarks has been addressed by processing the external information through a spiking network with dynamic synapses plastically tuned by an STDP algorithm. The learning processes establish correlations between the incoming stimuli, allowing the system to extract from the scenario important features which can play the role of landmarks. Once the landmarks are established, the agent acquires geometric relationships between them and the goal position. This process defines the parameters of a recurrent neural network (RNN). This network drives the agent navigation, filtering the information about landmarks given within an absolute reference system (e.g. the North). When the absolute reference is not available, a safety mechanism acts to control the motion, maintaining a correct heading. Simulation results showed the potential of the proposed architecture, which is able to drive an agent towards the desired position in the presence of noisy stimuli and also in the case of partially obscured landmarks.
6.2 Weak Chaos Control for the Generation of Reflexive Behaviours

All forms of adaptive behavior require the processing of multiple sources of sensory information and their transformation into series of goal-directed actions. In the most primitive animal species the entire process is regulated by external (environmental) and internal feedback through the animal body [51, 26]. The cortex processes information coming from objects identified in the environment, conveyed as spike trains from receptors, by enrolling dedicated neural assemblies. These are nonlinear coupled dynamical systems whose collective dynamics constitutes the mental representation of the stimuli. Freeman and co-workers, in their experimental studies on the dynamics of sensory processing in animals [24, 25], conceived a “dynamical theory of perception”. The hypothesis is that cerebral activity can be represented by chaotic dynamics. They attained this result through different experiments in which rabbits inhaled several smells in a pre-programmed way. Through the electroencephalogram (EEG), Freeman evaluated the action potentials in the olfactory bulb and noticed that the potential waves showed a complex behavior. He thus came to the conclusion that an internal mental representation (cerebral pattern) of a stimulus is the result of complex dynamics in the sensory cortex, in cooperation with the limbic system that implements the supporting processes of intention and attention [22]. In more detail, according to Freeman [25], the dynamics of the olfactory bulb is characterized by a
high-dimensional chaotic attractor with multiple wings. The wings can be considered as potential memory traces formed by learning through the animal's life history. In the absence of sensory stimuli, the system is in a high-dimensional itinerant search mode, visiting the various “wings”. In response to a given stimulus, the dynamics of the system is constrained to oscillations in one of the wings, which is identified with the stimulus. Once the input is removed, the system switches back to the high-dimensional, itinerant basal mode. Analyzing the acquired experimental data, Freeman proposed a dynamical model of the olfactory system, called K-sets [21, 47]. The model is able to show all the chaotic and oscillatory behaviours identified through the experiments. Freeman and Skarda [47] discussed the important role of chaos in the formation of perceptual meanings. According to their work, neural system activity persists in a chaotic state until sensors perturb this behaviour. The result of this process is that a new attractor emerges, representing the meaning of the incoming stimuli. The role of chaos is fundamental to provide the flexibility and the robustness needed by the system during the migration through different perceptual states. A discrete implementation of Freeman's K model (i.e. KA sets) was developed and applied to the navigation control of autonomous agents [29]. The controller parameters have been learned through an evolutionary approach [28] and also by using unsupervised learning strategies [29]. Our main objective here is to propose a reactive control architecture for autonomous robots that captures the functional properties discovered by Freeman in the olfactory bulb. The idea is to use a simple but chaotic dynamical system with suitable characteristics that can functionally simulate the creation of perceptual patterns. The patterns can be used to guide the robot actions, and the control system can easily be extended to include a wide number of sensors. Furthermore, the control architecture can be implemented at the hardware level on an FPGA-based board to be embedded on an autonomous roving robot [4]. The perception stage is represented by a suitable chaotic attractor, controlled by the incoming signals from sensors. In particular, a multidimensional state feedback control strategy has been implemented. A peculiarity of the approach is that, whereas most chaos control techniques focus on steering the chaotic trajectories towards equilibrium points or native limit cycles, in this case the controlled system is able to converge also to orbits never shown by the uncontrolled system. This is due both to the particular system chosen and to the fact that a multireference control is required. For this reason, we have called this technique “weak chaos control” (WCC). The crucial advantages of this approach are the compact representation of the perception system, the real-time implementation in view of its application to robot navigation control, and the possibility to take into consideration the physical structure of the robot within the environment where it moves. This characteristic is realised since the robot geometry is introduced within the phase space, where the chaotic wanderings are represented. Moreover, obstacles and target positions are mapped in the phase space directly reflecting their actual position with respect to the robot, as in the real environment. We therefore have, in the phase space, a kind of mirrored real environment, as measured by the sensors.
The emerging controlled orbit will influence the behaviour of the robot by means of suitable actions, gained through the implementation of a simple unsupervised learning phase that has already been presented in [3].
6.2.1 The Chaotic Multiscroll System
In this section the chaotic circuit used as the perceptual system is introduced. Since this system should be able to deal with a great number of sensorial stimuli and represent them, a chaotic system able to generate multiscrolls [38] has been adopted. This can be viewed as a generalization of Chua's double scroll attractor, represented through saturated piecewise linear functions, and of other circuits able to generate a chaotic attractor consisting of multiple scrolls distributed in the phase space (i.e. n-scroll attractors) [39]. It is able to generate one-dimensional (1-D) n-scroll, two-dimensional (2-D) n × m-grid scroll or three-dimensional (3-D) n × m × l-grid scroll chaotic attractors by using saturated function series. In this work a 2-D multiscroll system has been chosen. It is described by the following differential equations [38]:

ẋ = y − (d2/b) f1(y; k2; h2; p2, q2)
ẏ = z
ż = −ax − by − cz + d1 f1(x; k1; h1; p1, q1) + d2 f1(y; k2; h2; p2, q2)      (6.1)

where the following so-called saturated function series (PWL) has been used:

f1(x; kj; hj; pj, qj) = Σ_{i = −pj}^{qj} g_i(x; kj; hj)      (6.2)

where kj > 0 is the slope of the saturated function, hj > 2 is called the saturated delay time, pj and qj are positive integers, and

g_i(x; kj; hj) = 2kj, if x > i hj + 1;  kj(x − i hj) + kj, if |x − i hj| ≤ 1;  0, if x < i hj − 1      (6.3)

g_{−i}(x; kj; hj) = 0, if x > −i hj + 1;  kj(x + i hj) − kj, if |x + i hj| ≤ 1;  −2kj, if x < −i hj − 1      (6.4)
System (6.1) can generate a grid of (p1 + q1 + 2) × (p2 + q2 + 2) scroll attractors. The parameters p1 (p2) and q1 (q2) control the number of scroll attractors in the positive and negative directions of the variable x (y), respectively. The parameters used in the following (a = b = c = d1 = d2 = 0.7, k1 = k2 = 50, h1 = h2 = 100, p1 = p2 = 1, q1 = q2 = 2) have been chosen according to the guidelines introduced in [38] to generate a 2-D 5 × 5 grid of scroll attractors. An example of the chaotic dynamics of system (6.1) is given in Fig. 6.1.
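Reading the first term of (6.1) as d2/b, consistent with the saturated-function multiscroll family of [38], a minimal simulation sketch is the following (Python; initial conditions and integration settings are our own choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter set from the text: a 2-D 5x5 grid of scrolls.
a = b = c = d1 = d2 = 0.7
k, h, p, q = 50.0, 100.0, 1, 2          # k1=k2, h1=h2, p1=p2, q1=q2

def f1(x):
    """Saturated function series (6.2), summing g_i over i = -p..q."""
    out = 0.0
    for i in range(-p, q + 1):
        center = i * h
        if i >= 0:                       # g_i, eq. (6.3)
            if x > center + 1:
                out += 2 * k
            elif abs(x - center) <= 1:
                out += k * (x - center) + k
        else:                            # g_{-i}, eq. (6.4)
            if x < center - 1:
                out += -2 * k
            elif abs(x - center) <= 1:
                out += k * (x - center) - k
    return out

def multiscroll(t, s):
    """Uncontrolled 2-D multiscroll system (6.1)."""
    x, y, z = s
    return [y - (d2 / b) * f1(y),
            z,
            -a * x - b * y - c * z + d1 * f1(x) + d2 * f1(y)]

sol = solve_ivp(multiscroll, (0.0, 200.0), [1.0, 0.5, 0.0], max_step=0.01)
# Plotting sol.y[0] against sol.y[1] shows the chaotic wandering over
# the grid of scrolls, as in Fig. 6.1.
```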
6.2.2 Control of the Multiscroll System
In our approach the perceptual system is represented by the multiscroll attractor of equations (6.1), whereas sensorial stimuli can interact with the system through (constant or periodic) inputs that can modify the internal chaotic behavior. Since one of the
Fig. 6.1. Projection of the 5×5 grid of scroll attractors onto the x-y plane
main characteristics of perceptive systems is that sensorial stimuli strongly influence the spatial-temporal dynamics of the internal state, a suitable scheme to control the chaotic behavior of the multiscroll system on the basis of sensorial stimuli should be adopted. Briefly, chaos control refers to a process wherein a tiny perturbation is applied to a chaotic system in order to realize a desirable behavior (e.g. chaotic, periodic, or other). Several techniques have been developed for the control of chaos [11]. In view of our application, a continuous-time technique like Pyragas's method is a suitable choice [42]. In this method [42, 43] the following model is taken into account:

dy/dt = P(y, x) + F(t),    dx/dt = Q(y, x)      (6.5)
where y is the output of the system (i.e. a subset of the state variables) and the vector x describes the remaining state variables of the system. F(t) is the additive feedback perturbation which forces the chaotic system to follow the desired dynamics. Pyragas [42, 43] introduced two different methods of permanent control in the form of feedback. In the first method, used here, F(t) assumes the following form:

F(t) = K[ỹ(t) − y(t)]      (6.6)

where ỹ represents the external input (i.e. the desired dynamics), and K represents a vector of experimentally adjustable weights (adaptive control). The method can be employed to stabilize the unstable orbits embedded in the chaotic attractor, reducing the high order dynamics of the chaotic system.
6.2.2.1 Control Scheme

In our case a strategy based on equation (6.6) has been applied. The desired dynamics is provided by a constant or periodic signal associated with the sensorial stimuli. Since more than one stimulus can be presented at the same time, the Pyragas method has been generalized to account for more than one external forcing. Hence, the equations of the controlled multiscroll system can be written as follows:

ẋ = y − (d2/b) f1(y; k2; h2; p2, q2) + Σ_i k_xi (x_ri − x)
ẏ = z + Σ_i k_yi (y_ri − y)
ż = −ax − by − cz + d1 f1(x; k1; h1; p1, q1) + d2 f1(y; k2; h2; p2, q2)      (6.7)

where i indexes the external references acting on the system; x_ri, y_ri are the state variables of the reference circuits that will be described in detail below, and k_xi, k_yi represent the control gains. It can be noticed that the control acts only on the state variables x and y, and this action is sufficient for the proposed navigation control strategy. The complete control scheme is shown in Fig. 6.2. Each reference signal (x_ri, y_ri) can be a constant input or a periodic trajectory representing a native cycle. The latter can be generated using the multiscroll system (6.1) with particular parameters (a = b = c = 1). In this case the number of multiscroll systems needed to generate the reference cycles should be the same as the number of reference trajectories required. More simply, these reference signals can be built using sinusoidal oscillators:

x_r(t) = A_xr sin(ω_xr t − φ_xr) + x_off
Fig. 6.2. Block diagram of the control scheme when three distinct reference signals (i.e. sensorial stimuli) are perceived by the multiscroll system
Fig. 6.3. Limit cycle obtained when the multiscroll system is controlled by two sensorial stimuli. (a) Control gains: kx1 = kx2 = ky1 = ky2 = 0.8; (b) Control gains: kx1 = ky1 = 2, kx2 = ky2 = 0.6.
y_r(t) = A_yr sin(ω_yr t − φ_yr) + y_off      (6.8)
where (x_off, y_off) is the center of the reference cycle, ω is the frequency (in this work ω_xr = ω_yr = 1), φ_xr and φ_yr are the phases, and A_xr and A_yr define the amplitude of the reference signal.
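Building on the previous sketch (reusing f1, numpy and the system parameters defined there), the generalized control (6.7) with sinusoidal references (6.8) might be coded as follows; the gains, offsets and the quarter-period phase shift that turns each reference into a circular cycle are illustrative choices of ours:

```python
def reference(t, x_off, y_off, A=30.0, w=1.0):
    """Sinusoidal reference cycle (6.8) centred at (x_off, y_off); the
    pi/2 phase shift between the two components is an assumption."""
    return (A * np.sin(w * t) + x_off,
            A * np.sin(w * t - np.pi / 2) + y_off)

def controlled_multiscroll(t, s, refs, gains):
    """Controlled system (6.7): each active stimulus adds a feedback term
    pulling (x, y) towards its reference trajectory."""
    x, y, z = s
    dx = y - (d2 / b) * f1(y)
    dy = z
    dz = -a * x - b * y - c * z + d1 * f1(x) + d2 * f1(y)
    for (x_off, y_off), (kx, ky) in zip(refs, gains):
        xr, yr = reference(t, x_off, y_off)
        dx += kx * (xr - x)
        dy += ky * (yr - y)
    return [dx, dy, dz]

# Two concurrent stimuli with unequal gains: the emerged cycle settles
# closer to the reference with the larger gain, cf. Fig. 6.3(b).
refs = [(-100.0, 0.0), (100.0, 100.0)]
gains = [(2.0, 2.0), (0.6, 0.6)]
sol = solve_ivp(controlled_multiscroll, (0.0, 200.0), [1.0, 0.5, 0.0],
                args=(refs, gains), max_step=0.01)
```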
6.2.3 Multiscroll Control for Robot Navigation Control
In the following sections the proposed reactive control scheme is applied to robot navigation control. Taking inspiration from Freeman's works, we adopt a chaos control approach to enslave the chaotic trajectories of our perceptual system towards different pseudo-periodic orbits. We chose to use periodic inputs as reference signals, even if the method can also be used with constant references. To this aim, taking into account a single reference periodic signal, the control gain range is defined in the following way:

k_x, k_y ≥ k_min(x_r, y_r)      (6.9)

If the reference signal is a native orbit of the chaotic system, it can be shown that k_min = 0.534. Below k_min the system presents a chaotic behavior, whereas if the control gains are above k_min a limit cycle behavior occurs. In particular, as concerns control using a single reference dynamics, for low values of k_x and k_y the control of the multiscroll attractor has a residual error; however, for the purpose of navigation control this weak condition is still acceptable. For higher values of k_x and k_y, the steady error approaches zero. One of the most interesting aspects of this technique, applied to the multiscroll system under consideration and useful for robot control purposes, becomes evident when there is more than one external reference. For example, let us consider the case in which two inputs are concurrently active, so that there are two reference signals in the multiscroll phase plane. If the control gains of the two reference systems are not equal, the resulting controlled limit cycle (emerged cycle) will be placed, in the phase plane, near the reference cycle associated with the higher control gain. If the two reference dynamics have the same control gains, the resulting cycle will be placed exactly halfway between them. These results are shown in Fig. 6.3.
Fig. 6.4. An example of the evolution of the multiscroll system when controlled by reference dynamics associated with sensors. (a) When a single stimulus is perceived, the system converges to the reference cycle; (b) when the stimulus ends, the system behaves chaotically.
When stimuli are perceived, the system converges to a limit cycle that constitutes a representation of the concurrent activation of the sensorial stimuli. When the stimuli are no longer active, the multiscroll returns to its default chaotic dynamics. An example of this process is shown in Fig. 6.4. In the next sections the reactive system is applied to the navigation control of a roving robot.

6.2.4 Robot Navigation
To explore an area while avoiding obstacles, the robot, sensing the environment, can create an internal representation of the stimuli in relation to its body. The loop is closed by an action chosen to accomplish a given target behavior (e.g. exploration with obstacle avoidance). Every sensor equipped on the robot provides a reference cycle: the perception of an obstacle, for instance, is associated with a stimulus and, in turn, with a representation (pattern). Therefore the controlled multiscroll system is the perceptual system, and the emerged orbit stands for the internal representation of the external environment. Finally, according to the characteristics of the emerged cycle (amplitude, frequency, center position), an action, in terms of speed and rotation angle, is associated. At this stage, action is linked to perception (the emerged cycle) using a deterministic algorithm. However, this association could also be obtained through a bio-inspired adaptive structure, and the classical Motor Map paradigm would be a good candidate [45]. Fig. 6.5 shows a simulated robot equipped with four distance sensors and a target sensor. The corresponding reference cycles reported in the phase plane are related to the sensor positions. Distance sensors are directional and each is associated with a single reference cycle, whereas the target sensor is characterized by an omnidirectional field of view and for that reason is associated with more than one (i.e. four) reference cycles [6]. The offsets assigned to each input cycle are defined to match the positions of the scroll centers. Moreover, in this work we have chosen to link the value of the control gains to the intensity of the perceived sensorial stimuli. The technique, based on placing reference cycles in the phase plane in accordance with the distribution of sensors on the robot, is important to strictly connect the internal
representation of the environment to the robot geometry. In our tests only distance and target sensors have been used, although other sensors could be included. Distance sensors have a visibility range that represents the area where the robot is able to detect static and dynamic obstacles, whereas the target sensor returns the target angular displacement with respect to the front axis of the robot when the robot is inside the detection region of the target. As concerns the action (in terms of absolute value of speed and heading) performed by the robot, it depends on the multiscroll behavior. In particular, when no stimulus is perceived (i.e. there are no active sensors) the system evolves chaotically and the robot continues to explore the environment, moving with constant speed and without modifying its orientation. Moreover, another possible exploring strategy can be taken into consideration: the robot can exploit the chaotic wandering of its internal state variables to generate and follow a chaotic trajectory that can help the system during environment exploration. When external stimuli are perceived, the controlled system converges to a cycle (i.e. a periodic pattern) that depends on the contribution of the active sensors through the control gains k_xi and k_yi. The action to be executed is chosen according to the characteristics of the cycle, in particular its position in the phase plane. A vector pointing to the center of the limit cycle of the controlled multiscroll attractor is defined; predefined actions are chosen on the basis of the modulus and orientation of this vector, as sketched below. When the stimuli stop, the multiscroll returns to its chaotic evolution. A different strategy has been adopted for the target. When a target is in the detection range of the robot, it is considered as an obstacle located in a position symmetric with respect to the motion direction. This is associated with a reference cycle which controls the multiscroll attractor with a low gain, so that avoiding obstacles has priority over reaching targets. In this way the generated reference cycle has the task of weakly suggesting a rotation towards the target, since preserving the robot's safety is considered more important than retrieving a target.
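One plausible reading of this deterministic perception-to-action map is sketched below (Python; the function name, the clipping of the turn and the speed scaling are our illustrative choices, not specified in the text):

```python
import numpy as np

def action_from_cycle(xs, ys, max_turn=np.pi / 4, v_max=1.0):
    """Map the centroid of the emerged limit cycle to a motor command.

    xs, ys: samples of the multiscroll state over one simulation step.
    Returns a heading correction and a speed value.
    """
    cx, cy = np.mean(xs), np.mean(ys)
    angle = np.arctan2(cy, cx)             # direction of the cycle centre in the phase plane
    radius = np.hypot(cx, cy)              # modulus of the vector pointing to the centre
    # The phase plane mirrors the robot surroundings (Fig. 6.5), so the
    # centroid direction selects the turn, while its modulus scales the speed.
    turn = float(np.clip(angle, -max_turn, max_turn))
    speed = v_max / (1.0 + 0.01 * radius)  # slow down for strong/near stimuli (assumption)
    return turn, speed
```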
Fig. 6.5. Scheme of a simulated roving robot equipped with four distance sensors and a target sensor. In the phase plane x-y, the reference cycles associated with each sensor are reported. The target sensor, due to its omnidirectional field of view, is associated with four reference cycles. Cycles can be generated by system (6.1) with parameters a = b = c = d1 = d2 = 1, k1 = k2 = 50, h1 = h2 = 100, p1 = p2 = 1, q1 = q2 = 2 and changing the offset. Equivalently, equations (6.8) can be used.
6.2.5 Simulation Results
To test the performance and the potential impact of the proposed architecture, we developed a software tool for mobile robot simulations and a hardware implementation on an embedded platform. The first evaluation stage was carried out via a 2D/3D simulation environment [6]. A robot model involved in a food retrieval task was simulated. To evaluate the performance of the proposed control scheme, a comparison with a traditional navigation control method is reported. The navigation strategy chosen as a benchmark is the Potential Field (PF) [35]. A basic version using a quadratic potential has been implemented in the simulator for the same simulated robot used to test the weak chaos control approach [9]. Also in this case the robot can use only local information, acquired from its sensory system, to react to the environment conditions (i.e. local PF). The parameters of the PF algorithm (e.g. robot speed, constraints on the movements) have been chosen so as to allow a comparison with the WCC technique. The sensory system of the simulated roving robot consists of four distance sensors and a target sensor, as depicted in Fig. 6.5. Several environmental configurations were considered; here we report the results for the two scenarios shown in Fig. 6.6. Four targets were introduced in both environments: the circle around each target represents the range in which the target can be sensed. Obstacles are represented by walls and by the black rectangles. The robot had to navigate in this environment reaching the targets while avoiding obstacles. When a target is found, it is disabled, so that the robot navigates towards the other targets. For each environment we performed a set of five simulations for each of the control methods taken into consideration. In particular, we compared the navigation capabilities of the robot controlled through the local potential field method and through two versions of the WCC architecture. The difference between the two versions is limited to the behaviour of the robot during the exploration phase (i.e. when no stimuli are perceived). The former implements a very simple behaviour that consists of a forward movement with the speed set to its maximum value (WCC_f), whereas the latter considers the chaotic evolution of the multiscroll system to determine the action of the robot exploring the environment (WCC_c). In the latter case, when no stimuli are perceived, the perceptual core
Fig. 6.6. Environments used to evaluate the performance of the proposed architecture in comparison with the Potential Field. The dimensions of both arenas are 50×50 robot units, the randomly placed objects correspond to walls and obstacles, and the circles indicate the visibility range of the targets.
Fig. 6.7. Trajectories followed by the robot controlled through: (a)(d) the local potential field; (b)(e) WCC with forward exploration behaviour; (c)(f) WCC with chaotic exploration behaviour
of the control system behaves chaotically and the robot action depends on the position of the centroid of the chaotic wandering shown by the system during the simulation step. Each simulation step corresponds to a single robot action and is determined by simulating the dynamical system for 2000 steps with an integration step equal to 0.1 [6]. An example of the trajectories followed by the robot in the three cases is shown in Fig. 6.7. For each simulation the robot is randomly placed in the environment and the three control methods are applied, monitoring the robot behaviour for 10000 actions (i.e. epochs). To compare the performances of the algorithms we consider the cumulative number of targets found and the area explored by the robot [29]. In Fig. 6.8 the cumulative number of targets found in the two environments, calculated in time windows of 1000 epochs, is shown. The performances of the three control methods are comparable in both environments. Another performance index taken into consideration is the area covered by the robot during each simulation. The results shown in Fig. 6.9 demonstrate that WCC_c guarantees higher exploration capabilities than the other control methods. Movies of these simulations are available on the web [5].
6.3 Learning Anticipation in Spiking Networks

Basic problems in navigation control can be solved using many different approaches [8]; our idea is to investigate a suitable navigation control scheme which relies on learning strategies in spiking neurons [27] (i.e. the fundamental processing units of the nervous system).
Fig. 6.8. Cumulative number of targets found for the three control algorithms in (a) the four-rooms environment (Fig. 6.6(a)) and (b) the environment filled with obstacles (Fig. 6.6(b)). The simulation time is 10000 epochs and the mean number of targets, averaged over 5 simulations and calculated in time windows of 1000 epochs, is indicated. The bars show the minimum and maximum number of targets found.
Fig. 6.9. Area explored using the three control algorithms in (a) the four-rooms environment (Fig. 6.6(a)) and (b) the environment filled with obstacles (Fig. 6.6(b)). The arena, whose dimensions are 50×50 robot units, has been divided into locations of 2×2 robot units. The simulation time is 10000 epochs and the mean value of the area explored, averaged over 5 simulations and calculated in time windows of 1000 epochs, is indicated. The bars show the minimum and maximum value.
The literature on navigation control is vast, as are the papers focusing on biologically inspired approaches to navigation. A review of bio-inspired navigation control schemes is provided by Trullier et al. [50] and by Franz and Mallot [20]. A key role in navigation control in animals is played by hippocampal cells, which encode spatial locations and act as an associative memory [14, 13, 12, 36, 34]. The aim of our work is not to reproduce a biological model, but to use the paradigm of spiking computation to build an artificial structure suitable for the control of a roving robot. The formulation of the control system in terms of spiking neurons requires adopting neural mechanisms in order to implement the desired features. For instance, the learning strategy should be based on a biologically plausible mechanism. The so-called Spike-Timing-Dependent Plasticity (STDP) has been introduced to explain biological plasticity [49, 33]. The same paradigm has been introduced in Chapter 3 but is summarized here for the sake of clarity.
According to this biologically plausible theory, synaptic weights among living neurons are changed so that a synaptic connection between two neurons is reinforced if there is a causal correlation between the spiking times of the two neurons. The weight of the synapse is increased if the pre-synaptic spike occurs before the post-synaptic spike, and decreased otherwise. Taking into account the STDP rule, we implemented a system able to learn the response to “high-level” (conditioned) stimuli, starting from a priori known responses to “low-level” (unconditioned) stimuli. To this aim, successive repeated occurrences of the stimulus reinforce the conditioned response, according to the paradigms of classical and operant conditioning [41, 46], briefly introduced in the following. Classical conditioning relies on the assumption that, given a set of unconditioned responses (UR) triggered by unconditioned stimuli (US), the animal learns to associate a conditioned response (CR) (similar to the UR) to a conditioned stimulus (CS) which can be paired with the US [46]. In operant conditioning the animal is asked to learn a task or solve a problem (such as running a maze) and gets a reward if it succeeds [46]. These two biological mechanisms, used by living creatures to learn from the interaction with the environment, provided the source of inspiration for the implementation of our algorithm. In our approach, learning capabilities are included to implement the main navigation control tasks (obstacle avoidance, target approaching and navigation based on visual cues). These have been implemented through a bottom-up approach: from the simplest task, i.e. obstacle avoidance, to the most complex one, i.e. navigation with visual cues. In the case of obstacle avoidance, contact sensors play the role of unconditioned stimuli (US), while range finder sensors represent conditioned stimuli (CS). In the case of target approach, sensors which are activated in the proximity of a target (called in the following proximity target sensors) play the role of US, while the visual input coming from a camera equipped on the robot represents the CS. In fact, the robot has a priori known reactions to contact and proximity target sensors, but has to learn the response to the conditioned stimuli through classical conditioning. In the case of navigation with visual cues, the development of a layer for visual target approach is fundamental to learn the appropriate response to objects which can become landmarks driving the robot behavior. Several local navigation tasks can thus be incrementally acquired on the basis of the already learned behaviors. The bio-inspired learning approach is potentially relevant to the goal of progressively including more and more details gained from high-level sensors, like visual cues, into the navigation control loop. New visual features could be used, as learning proceeds, to select more appropriate actions relevant to the current task. The network of spiking neurons with STDP learning is applied to control a roving robot. The loop through the world, which defines the input of the network, is exploited to produce classical conditioning in several continuous behavioral scenarios.
The Spiking Network Model
In this Section the mathematical model of the spiking network applied to all the local navigation skills mentioned in the introduction is discussed. This approach derives from the Mushroom Bodies modeling proposed in Chapters 1 and 3 and is here briefly
Fig. 6.10. Class I excitable neurons encode the robot distance from the obstacle into their firing frequency
summarized and further extended to include more complex skills. The network consists of interacting spiking neurons, which may play the role of sensory neurons, inter-neurons or motor-neurons. Sensory neurons are connected to sensors, motor-neurons drive the robot motors, and inter-neurons are all the other neurons involved in the processing for navigation control. Each neuron is modeled by the following equations proposed by Izhikevich [30]:

\[
\dot{v} = 0.04v^2 + 5v + 140 - u + I, \qquad \dot{u} = a(bv - u) \tag{6.10}
\]

with the spike-resetting condition

\[
\text{if } v \geq 30, \text{ then } v \leftarrow c, \; u \leftarrow u + d \tag{6.11}
\]

where v, u and I are dimensionless variables representing the neuron membrane potential, the recovery variable and the input current, respectively, while a, b, c and d are system parameters. The time unit is ms. With a suitable choice of the parameters [31], this model can reproduce the main firing patterns and dynamical properties of biological neurons, such as spiking behaviors (tonic, phasic and chaotic spiking) and bursting. Among the possible behaviors, class I excitability is selected: in class I excitable neurons the spiking rate is proportional to the amplitude of the stimulus [31]. This property is important because it allows any measured quantity to be encoded by the firing frequency, and it represents a suitable way to fuse sensory data at the network input level.

For the task of obstacle avoidance, the robot is equipped with range finder sensors connected to sensory neurons. These are class I excitable neurons able to encode the distance from the obstacle through their firing rate, as schematically shown in Fig. 6.10. For this reason, neuron parameters are chosen as a = 0.02, b = −0.1, c = −55, d = 6 (class I excitable neurons [31]), while the input I accounts for both external stimuli (e.g. sensory stimuli) and synaptic inputs. The same model is adopted for all the other neurons of the network.

Concerning the model of the synapse, let us consider a neuron j which has synaptic connections with n neurons, and let us indicate with t_i the instant in which a generic
neuron i, connected to neuron j, emits a spike. The synaptic input to neuron j is given by:

\[
I_j(t) = \sum_i w_{ij}\, \varepsilon(t - t_i) \tag{6.12}
\]

where w_{ij} represents the weight of the synapse from neuron i to neuron j, and the function ε(t) is expressed by:

\[
\varepsilon(t) = \begin{cases} \dfrac{t}{\tau}\, e^{1 - t/\tau} & \text{if } t \geq 0 \\ 0 & \text{if } t < 0 \end{cases} \tag{6.13}
\]

Equation (6.13) describes the contribution of a spike emitted by a presynaptic neuron at t = 0 [18]. In our simulations τ has been fixed to τ = 5 ms.

To include adaptive capabilities in our model, Hebbian learning was considered. Recent results [49] indicate STDP as a model of experimentally observed biological synaptic plasticity. The synaptic weights of our network are thus modifiable according to the STDP rule discussed in [49] and briefly reported here. Let us indicate with w the synaptic weight. A pair of presynaptic and postsynaptic spikes modifies the synaptic weight as w → w + Δw, where, according to the STDP rule, Δw depends on the timing of the pre-synaptic and post-synaptic spikes [49]:

\[
\Delta w = \begin{cases} A_+\, e^{\Delta t / \tau_+} & \text{if } \Delta t < 0 \\ A_-\, e^{-\Delta t / \tau_-} & \text{if } \Delta t \geq 0 \end{cases} \tag{6.14}
\]

where Δt = t_pre − t_post is the difference between the spiking time of the pre-synaptic neuron (t_pre) and that of the post-synaptic one (t_post). If Δt < 0, the post-synaptic spike occurs after the pre-synaptic spike, thus the synapse should be reinforced; otherwise, if Δt ≥ 0 (the post-synaptic spike occurs before the pre-synaptic spike), the synaptic weight is decreased by the quantity Δw. The choice of the other parameters (A_+, A_-, τ_+ and τ_-) of the learning algorithm is discussed below. Equation (6.14) is a rather standard way to model STDP: the term A_+ (A_-) represents the maximum |Δw|, obtained for almost equal pre- and post-synaptic spiking times, in the case of potentiation (depression).

The use of the learning rule (6.14) may lead to an unrealistic growth of the synaptic weights. For this reason, upper limits for the weight values are often fixed [48, 32]. Furthermore, some authors (see for instance [52, 53]) introduce a decay rate in the weight update rule. This solution prevents the network weights from increasing steadily with training and allows continuous learning to be implemented. In our simulations, the decay rate has been fixed to 5% of the weight value and is applied every 3000 simulation steps. Thus, the weight values of plastic synapses are updated according to:

\[
w(t+1) = \begin{cases} 0.95\, w(t) + \Delta w & \text{if } t \bmod 3000 = 0 \\ w(t) + \Delta w & \text{otherwise} \end{cases} \tag{6.15}
\]

where t mod 3000 indicates the remainder of t divided by 3000 and Δw is given by eq. (6.14). In the following, the upper limit of the synaptic weight is fixed to 8 (excitatory synapses) or −8 (inhibitory synapses). The initial values of synapses with STDP are
Fig. 6.11. (a) The simulated robot is equipped with range finder sensors centered on the front side of the robot and with a range of [0◦ , +45◦ ] (right sensor) or [−45◦ , 0◦ ] (left sensor), schematically shown with gray dashed lines (outer arcs). The inner arcs refer to the range of contact sensors. The range of the on-board camera view is shown with black solid lines. The circle surrounding the target (the red object) represents the range in which the target is sensed by the proximity target sensors of the robot. (b) On-board camera view.
either 0.05 (excitatory synapses) or −0.05 (inhibitory synapses). All the other synaptic weights (i.e., weights not subject to learning) are fixed to their maximum value.
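As a concrete illustration, the following Python sketch (ours, not part of the original work; function names and the Euler step size are our assumptions) simulates a single class I Izhikevich neuron with the parameters above and applies the STDP update of eqs. (6.14) and (6.15). Note that, with eq. (6.10) exactly as printed, the rheobase of this parameter set sits near I ≈ 23, so larger test currents than those quoted in the text are used here to show the class I rate code; the class 1 variant in [31] uses slightly different coefficients.

```python
import numpy as np

# Class I parameters used in the chapter (eqs. 6.10-6.11).
a, b, c, d = 0.02, -0.1, -55.0, 6.0

def simulate_neuron(I, T_ms=300.0, dt=0.25):
    """Euler integration of eq. (6.10); returns spike times in ms.
    The 0.25 ms step size is our assumption, not stated in the chapter."""
    v, u = -70.0, b * (-70.0)
    spikes, t = [], 0.0
    while t < T_ms:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                       # spike-resetting (eq. 6.11)
            spikes.append(t)
            v, u = c, u + d
        t += dt
    return spikes

def stdp_dw(t_pre, t_post, A_plus=0.02, A_minus=-0.02,
            tau_plus=20.0, tau_minus=10.0):
    """Weight change of eq. (6.14) for one pre/post spike pair (times in ms)."""
    dt_spk = t_pre - t_post
    if dt_spk < 0:                           # pre before post: potentiation
        return A_plus * np.exp(dt_spk / tau_plus)
    return A_minus * np.exp(-dt_spk / tau_minus)   # depression (A_minus < 0)

def update_weight(w, dw, t, w_max=8.0):
    """Eq. (6.15): 5% decay every 3000 simulation steps, then clipping.
    A symmetric clip to [-8, 8] stands in for the per-sign limits in the text."""
    w = 0.95 * w + dw if t % 3000 == 0 else w + dw
    return float(np.clip(w, -w_max, w_max))

# The class I property: firing rate grows smoothly with the input amplitude.
for I in (25.0, 30.0, 40.0):
    print(f"I = {I}: {len(simulate_neuron(I))} spikes in 300 ms")
```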
6.3.2 Robot Simulation and Controller Structure
In this Section the simulated robot, the simulation environment and the interface between the spiking network, the robot and the environment are briefly described. The simulated robot has a cubic shape: in the following, the edge of the robot is used as the unit of measure, indicated as robot unit (r.u.). The simulated robot is a dual-drive wheeled robot: the two motors are labeled as left (LM) and right motor (RM). The two actuated wheels are driven by the output of the spiking network structure, as discussed below. The robot is equipped with two collision sensors, two range finder sensors, two proximity sensors for target detection and a simulated visual camera. Contact and range finder sensors are centered on the front side of the robot and have a range of [0°, +45°] (right sensor) or [0°, −45°] (left sensor). The proximity sensors for target detection are range finder sensors which sense the presence of a target. The simulated robot is shown in Fig. 6.11: a schematic representation of the sensor ranges is given in Fig. 6.11(a), while Fig. 6.11(b) shows the on-board camera view.

The general structure of the spiking network consists of three layers of neurons, outlined below. The first layer is formed by sensory neurons, modeled as class I excitable neurons so that the firing rate codes the stimulus intensity. The second layer is formed by inter-neurons, whereas the third layer is formed by motor-neurons, whose output is used to generate the signals driving the two wheels of the robot. We now briefly clarify how each layer works.

The response of the sensory neurons depends on the stimulus intensity through the input I in equation (6.10). Let us firstly consider sensory neurons associated with
collision sensors, and let us indicate with d0 the distance between the robot and the closest obstacle, computed in robot units (r.u.). Collision sensors are activated when d0 ≤ 0.6 r.u.: in this case the input of the neuron associated with the US is fixed to a constant value I = 9. This value is such that the sensory neuron emits regular spikes which drive the robot to avoid the obstacle by turning in one direction. In the case of range finder sensors for obstacle detection, the input is a function of d0: I = 9e^(−0.6 d0) + 2.2. This function has been chosen so that the range finder sensors approximately begin to fire when the distance between obstacle and robot is less than 11 r.u. In the case of range finder sensors for target detection, the same input function is used, but the target can be detected only if its distance from the robot is less than 5 r.u.

As far as the visual sensor is concerned, the input is acquired by a camera mounted on the simulated robot. The raw image (388×252 pixels) is pre-processed in order to identify objects of different colors. In particular, to develop the network for target approaching and navigation with visual cues, two color filters have been considered: red objects represent targets, whereas yellow objects are used as visual cues. In both cases the filtered image is tiled into 9 sectors, each associated with a class I spiking neuron. A visual sensory neuron is activated if the barycenter of the identified object falls within its corresponding sector. The input is a function of the perimeter of the object: indicating with p the perimeter of the biggest object in the visual field, the input I of the active sensory neuron is given by I = 0.012p + 3.8 if 0 < p ≤ 400 pixels, or I = 8.6 if p > 400 pixels.

Inter-neurons constitute an intermediate layer between sensory neurons and motor-neurons. We found that this layer is not strictly necessary in the avoidance task, where the overall network is quite simple, but it is needed for more complex tasks. Finally, the last layer of the network is formed by four motor-neurons; for uniformity, these neurons (and the inter-neurons as well) are also class I excitable neurons.

Motor control is strictly connected to the robot structure under examination. To clarify how the output of the motor-neurons is used to drive the robot wheels, we need to briefly mention some aspects of the simulation. The robot moves in a simulated environment filled with randomly placed obstacles. At each simulation step, a time window of 300 ms of model behavior is simulated. The robot moves according to the number of spikes generated by the motor-neurons during this time window: the spikes emitted by the two left (right) motor-neurons are cumulated to compute the robot action, after which a new sensory acquisition and network processing step is executed. In our model we assume that the speed of a motor is proportional to the spiking rate of its driving signal. The driving signal for each motor depends on the number of spikes in the variables v (eq. (6.10)) of the associated motor-neurons. Referring to the spiking network shown in the box of Fig. 6.12, the "go-on" motor-neurons generate the spike train needed to let the robot advance in the forward direction (even in the absence of stimuli). The variables v of the other motor-neurons (since this part of the network is needed to make the robot able to turn, we refer to these neurons as "turn" neurons)
are then added to those of the "go-on" neurons, so that the driving signal of each motor is the sum of the two spike trains. In the presence of collisions, the network structure is such that the "go-on" motor-neurons are inhibited and the forward movement is suppressed.

When the left and right motor-neurons emit an equal number of spikes, the robot moves forward with a speed proportional to the number of spikes. In the absence of conditioned stimuli, the amplitude of the forward movement is about 0.3 r.u. per step. When the information gathered by the sensory system produces a difference in the number of spikes emitted by the left and right motor-neurons, the robot rotates. Let nR (nL) be the number of spikes in the signal driving the right (left) motor, and let Δns = nR − nL; then the angle of rotation (in the counterclockwise direction) is θ = 0.14 Δns rad. This means that if, for instance, nR = 5 and nL = 4, the difference between the number of spikes emitted by the right and left motor-neurons is one spike and the robot rotates by 0.14 rad (about 8°). In this case, after the rotation the robot proceeds forward: we count the spikes emitted by both the left and the right neuron (i.e., the minimum of nR and nL; in the example, 4) and let the robot advance with a speed proportional to this number, so that the spike rate codes the robot speed.

Without learning, the robot is driven by unconditioned behavior only: it is driven by contact sensors and is able to avoid obstacles only after colliding with them. Furthermore, thanks to the proximity target sensors, the robot can approach a target only if it is within the sensor range. In the case of a front obstacle, the turning direction is random.
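The sensor-to-current mappings and the motor decoding just described can be summarized in a short sketch (ours, with hypothetical function names; the mappings themselves are those given in the text):

```python
import math

def contact_input(d0):
    """US current: constant I = 9 when the obstacle is within 0.6 r.u."""
    return 9.0 if d0 <= 0.6 else 0.0

def range_finder_input(d0):
    """CS current: I = 9*exp(-0.6*d0) + 2.2, with d0 in robot units."""
    return 9.0 * math.exp(-0.6 * d0) + 2.2

def visual_input(p):
    """Current for the active visual sector neuron, p = object perimeter (px)."""
    if p <= 0:
        return 0.0
    return 0.012 * p + 3.8 if p <= 400 else 8.6

def visual_sector(bx, by, width=388, height=252):
    """Index (0..8) of the 3x3 receptive-field sector containing the object
    barycenter; one class I neuron is associated with each sector."""
    col = min(2, int(3 * bx / width))
    row = min(2, int(3 * by / height))
    return 3 * row + col

def motor_command(n_right, n_left):
    """Decode spike counts over a 300 ms window: rotate by 0.14*(nR - nL) rad
    (counterclockwise), then advance proportionally to the shared spikes."""
    theta = 0.14 * (n_right - n_left)
    forward_spikes = min(n_right, n_left)
    return theta, forward_spikes

# Example from the text: nR = 5, nL = 4 gives a 0.14 rad (about 8 deg) turn.
print(motor_command(5, 4))
```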
6.3.3 Spiking Network for Obstacle Avoidance
The spiking network described in this Section deals with the navigation task of moving through an environment where obstacles are randomly placed. Navigation takes place by using fixed reactions to contact sensors (i.e. to US) and by learning the appropriate actions in response to conditioned stimuli (i.e. stimuli from range finder sensors). The network for obstacle avoidance is shown in Fig. 6.12. It is functionally divided into two parts.

The first group of neurons (included in the grey box) deals with the unconditioned stimuli USR and USL (i.e. contact sensors). The synaptic weights of this part are fixed and represent the a priori known response to unconditioned stimuli, i.e. the basic avoidance behavior of the robot. The structure of the connections between neurons is modeled with direct inhibition and cross-excitation among sensory and motor neurons. This model is biologically relevant: it has in fact been used as a model of cricket phonotaxis in [55]; details on cricket phonotaxis and spiking networks are reported in Chapter 3. Concerning sensory neurons, we assume that obstacles are detected by contact sensors at a distance that allows the robot to avoid the obstacle by turning in one direction.

The second part of the network, whose connections are outlined in gray in Fig. 6.12, deals with conditioned stimuli. Its synaptic weights evolve according to the STDP rule discussed above: the weights of the synapses connected to CSL and CSR are not known a priori and have to be learned during the exploration. The following parameters have been used for the STDP rule: A+ = 0.02, A− = −0.02, τ+ = 20 ms, τ− = 10 ms.
Fig. 6.12. Network for obstacle avoidance. RM and LM are the right and left motors, respectively. USR and USL represent the right and left contact sensors, while CSR and CSL represent conditioned stimuli (i.e. range finder sensors). Each circle indicates a class I excitable neuron modeled by equations (6.10). Arrows and dots indicate excitatory and inhibitory synapses, respectively. The 'go on' box indicates a constant input which makes the motor neurons fire in the absence of sensory input, so that the robot moves in the forward direction by default.
Without stimuli, the motor-neurons controlling the right and left motors emit the same number of spikes and the robot proceeds in the forward direction. When a stimulus occurs, due either to a contact or to a distance sensor, the numbers of spikes emitted by the motor-neurons are no longer equal and a steering movement occurs.

To illustrate a first example of obstacle avoidance, we take into account two parameters. A steering movement is the response of the robot to an external stimulus; it can be either an unconditioned response (UR) or a conditioned response (CR). To test the behavior of the network controller, we take into account how many steering movements are due to a US or to a CS. We define NUS as the number of avoidance movements (i.e. steering movements directed to avoid an obstacle) which occur in a given time window and are due to the triggering of a UR, and NCS as the number of avoidance movements in a given time window due to the triggering of a CR. If the spiking network controller performs well, NCS grows while NUS decreases.

Trajectories generated in a typical experiment are shown in Fig. 6.13. During the first phase the robot avoids obstacles by using UR, triggered by contact sensors (Fig. 6.13(a)). During this phase the robot learns the correct CR. Proceeding further with the simulation (Fig. 6.13(b)), obstacles are finally avoided only by using range finder sensors, i.e. CS (Fig. 6.13(c)). The weights converge towards an equilibrium value given by the balance between new experience, learned when collision sensors are activated, and the weight decay rate. The introduction of the weight decay rate implies never-ending learning. This is not a drawback: the learning algorithm is simple, it can run in real time, and continuous learning contributes to improving experience and to coping with dynamically changing environments.

In Fig. 6.14 the number of avoidance movements due to CS (namely NCS) is compared with the number of avoidance movements due to US (namely NUS). The time
Fig. 6.13. Three meaningful parts of a simulation devoted to learning CRs. (a) Robot behavior is driven by UR. (b) The robot has already learned the CR to left obstacles. After two collisions with right obstacles, it avoids a right obstacle with a CR. (c) The robot avoids obstacles using CR. The final position, for each simulation phase, is indicated with a filled circle.
Fig. 6.14. NCS compared with NUS in the case of obstacle avoidance. The time window in which NUS and NCS are calculated is 1000 simulation steps.
window in which NUS and NCS are calculated is 1000 simulation steps. As can be noticed, as new experience is gained the US are seldom triggered, and the robot uses CR to avoid obstacles.

We observed several emergent properties in this simple network controller. First of all, it is worth noticing that, although the UR network and the structure of the CR network are symmetrical, the learned weight values of the CR network are not symmetrical. This provides the robot with a decision strategy in the case of front obstacles. In this sense
Fig. 6.15. The robot controlled by the spiking network for obstacle avoidance is able to reorganize its turning strategy. The final point of the trajectory followed by the robot is indicated with a filled circle.
the behavior of the network depends on the configuration of obstacles used to train the network and on the robot starting point: the asymmetry depends on how many left or right obstacles are encountered during the first phase of learning. Other simulation tests, dealing with new obstacles placed in the environment after the learning phase, reveal that new obstacles in the trajectory of the robot are avoided without any problem. Thus, although the learned weight values depend on the obstacle configuration, the global behavior of the network is independent of the obstacle positions.

Thanks to the introduction of a decay rate in the synaptic weights, the learning phase never stops. This provides the robot with the capability of redefining its turning strategies over time. Let us focus on the example shown in Fig. 6.15. When the robot approaches the arena wall at the area marked with B, it turns to the right, while an optimal strategy would require turning in the opposite direction. In B the robot performs a wrong turn, but in D the turn is correct. So, during the time needed to go from B to D, the robot weights were adjusted. This occurs during the four right turns between B and D and, in particular, during the left turn marked as C, where collision sensors (i.e., evoking a UR) are used. In fact, with the strategy learned in B the robot collides at point C (because the solution is not optimal) and so it learns a new solution, which it applies at point D. The emergent result is that the robot is able to correct the suboptimal strategy and to adopt a better one: when it approaches the area marked with D, it now turns to the left.

These emergent properties derive from the dynamic behavior due to the presence of learning (i.e. to the plasticity of the synapses of dynamical spiking neurons). To evaluate the performance of the spiking network controller in detail, a systematic analysis was carried out; an extensive set of simulations is reported in [7].
6.3.4 Spiking Network for Target Approaching
6.3.4.1 Spiking Model

The first basic mechanism needed for navigation control is the ability to direct towards a target. We suppose that the robot knows how to direct itself towards the target once
the target is within a circle of a given radius. However, this a priori known mechanism provides only a poor target approaching capability, useful only at very short distances: the robot should learn how to steer towards the target on the basis of the visual input, which in this case constitutes the conditioned stimulus. In our case, targets are small red cubic objects.

In order to focus on the problem of learning the conditioned response to the visual stimulus, we simplify the network of neurons devoted to processing the visual input. We assume that the visual input provides high-level information: red objects are identified, and information on their perimeter and barycenter is provided to the network. The network of neurons is a topographic map of 9 neurons which activate when the object barycenter is within the spatial region associated with them. The input image is thus divided into a regular grid of 9 compartments which define the receptive field of each neuron.

The network of spiking neurons, including visual target approaching, is shown in Fig. 6.16. Unconditioned stimuli for target approaching are indicated as TL and TR (left and right): they are range finder sensors which detect the distance from the target if the robot is within the proximity range outlined in Fig. 6.11(a). The whole network is now divided into three layers: sensory neurons, inter-neurons and motor-neurons. In addition to the plastic synapses of the obstacle avoidance layer, the weights of the synapses between visual sensory neurons and inter-neurons are also modulated by the STDP rule; the other weights are fixed. It has been assumed that the visual input is the result of a segmentation algorithm, provided through the graphical interface of the simulator, so that segmentation is already solved at the level of the sensory neurons. This can be assumed to be feasible since, with current technology, image segmentation can be performed within the inter-frame rate [2].

6.3.4.2 Target Approaching: Simulation Results

In this Section, simulation results focusing on target approaching are shown. We describe several simulations carried out to test the behavior of the robot in a generic environment with randomly placed obstacles, as well as in specific cases, for instance when there is a target that is difficult to find. In any case, when there are multiple targets in the environment, each target is deactivated once reached, to avoid that the robot stops at it; the target is then reactivated when a new target is reached.

The first experiment refers to the environment shown in Fig. 6.17. The trajectory followed by the robot in this environment in the first phase of the simulation (i.e. when the robot has not yet learned to use the visual input) is shown in Fig. 6.17(a). After learning, the robot follows the trajectory shown in Fig. 6.17(b): instead of proceeding in the forward direction as in Fig. 6.17(a), at point A the robot rotates, since it sees the target that it will reach at point D. This is shown in Fig. 6.18, which reports the robot camera view at point A and the corresponding visual input: the target is visible on the right, hence the robot follows a different trajectory with respect to the case of Fig. 6.17(a). Figure 6.19 compares the number of targets found when the network is trained and when it is not trained: clearly, visual target approaching allows a greater number of targets to be found.
Fig. 6.16. Network for target approaching based on visual input. DL, DR, CL and CR indicate distance/contact left/right sensors. They have been renamed with respect to Fig. 6.12, in which they were indicated as CSL, CSR, USL and USR, because now the unconditioned stimuli are represented by the proximity target sensors (TL and TR) and the conditioned stimulus is the visual input.
Fig. 6.17. Trajectory followed by the robot with visual target approaching control: (a) without learning; (b) after learning. Targets are indicated with small red rectangles inside a circle representing their visibility range.
A long-run simulation is shown in Fig. 6.20, where there are six targets and five obstacles. The whole trajectory of the robot is shown. It can be noticed that, after learning the target approaching task, the robot follows a quite stereotyped trajectory which allows it to visit all the targets while avoiding the obstacles. It is worth noticing that, in this case, in the long run the robot has learnt an optimal trajectory to visit all the targets. To further test the behavior of the network controller for target approaching, a simulation environment (shown in Fig. 6.21(a)) with only two targets is considered. Since a
Fig. 6.18. (a) Robot camera view at point A of Fig. 6.17(b). (b) Visual input: the small rectangle indicates that the red object has been identified as a target.
Fig. 6.19. Number of targets found with or without visual input (environment of Fig. 6.17)
Fig. 6.20. Long-run simulation of target approaching, showing how at the end of the simulation the robot follows a stereotyped trajectory which allows it to visit all the targets while avoiding the obstacles
Fig. 6.21. (a) Environment used in the second set of simulations for target approaching. (b) Number of targets found with or without visual input.
target is deactivated once reached, at each time step only one target is active and the robot is forced to look for it. Since one of the two targets is hidden by three walls, this simulation allows us to investigate the behavior of the robot when there is a target that is difficult to find without visual targeting. The comparison between the targets found with and without learning, shown in Fig. 6.21(b), demonstrates the efficiency of the learned visual feedback: after learning, the robot is able to find the targets more easily. In fact, the robot trained without visual input, relying on range finder sensors only, has to wander for a long time before approaching the target: as can be noticed in Fig. 6.21(b), before learning (dashed curve) the robot does not find the target hidden by the walls for a long time interval (500 < t < 10000 simulation steps).
6.3.5 Navigation with Visual Cues
In this Section the use of a network of spiking neurons for navigation based on visual cues is investigated. The objective is to learn the appropriate response to a given visual object, which can then become a landmark driving the robot behavior. We refer to the animal behavior observed in experiments in which rats have to find a food reward in a T-maze. In such experiments [50], depending upon the training conditions and on "what is invariant throughout the trials" [44], different cognitive responses can be observed: if the food is always in the right arm of the maze, the animal learns an egocentric motor response (i.e. a right-turn response which does not depend on the environmental cues); if the two arms of the maze are physically distinguishable and the cue position is always correlated with the food position, the animal learns to rely on the cue to choose the turning direction; finally, the animal can also learn to rely on extra-maze cues, with a response which is triggered by place recognition.

In particular, we take into account a T-maze in which the target (again a red object) is in one of the arms, and a yellow object is positioned on the front wall at slightly different positions. Depending on the position of the yellow object and of the target, the yellow object can be considered a meaningful or a useless visual cue.
Fig. 6.22. Scheme of the spiking network for navigation with visual cue
The scheme of the spiking network for this navigation task is shown in Fig. 6.22. There are now two layers of class I excitable neurons which process the visual input. In particular, the first layer (bottom, left-hand side in Fig. 6.22) acts exactly as in the network of Fig. 6.16: neuron spikes code the presence of a red object in the receptive field of the neuron. We assume that it is a priori known that red objects are targets, i.e. we assume that the visual targeting association task has already been solved. The second layer of neurons acts in parallel with the first one: its neuron spikes code the presence of a yellow object in the receptive field of the neuron. The neurons belonging to this layer are the focus of training, and the synaptic connections between the neurons of this layer and the inter-neurons are updated according to the STDP rule. By updating these synaptic weights, the robot now has to learn the appropriate turning direction in response to a given yellow object: a synaptic weight is increased when, after a turn, the robot sees the target.

We focus on different training environments, based on T-mazes, in order to learn either a visual cue-based response or an egocentric motor response. At the T bifurcation, yellow marks are placed on the front wall. These objects are landmark candidates, since they can become landmarks if, after successive presentations of the environment configuration, the robot discovers a relationship between the yellow mark and the target position. As in animals, a given cue/target relationship can be learned only if the animal (robot) faces the situation several times. For this reason, the learning protocol consists of several training cycles. Each trial stops either when the robot reaches the target or when it reaches the end of the incorrect arm: the first case occurs when the robot has made the right choice at the T bifurcation, while the second occurs when the robot does not find the target.

At the beginning of the training, the robot finds the target in 50% of the trials. During the first 10 training cycles, the robot's ability to find targets does not change significantly. After 13 training cycles, the robot has learned the association and finds the target in all the following trials; however, in this case the robot turns in the proximity of the T bifurcation. If learning proceeds (in particular
Fig. 6.23. Trajectories followed by the robot in two different cases. The robot learns to turn to the opposite side with respect to the landmark.
Fig. 6.24. Trajectory followed by the robot in a maze which can be solved by relying on visual cues. The visual cues are placed on the opposite side with respect to the correct choice, as in the training environment of Fig. 6.23.
after 16 training cycles), the robot learns to turn before reaching the T bifurcation (as shown in Fig. 6.23). The navigation abilities of the robot after learning were further evaluated on a maze in which the target can be found by following visual cues placed on the opposite side with respect to the correct arm. We let the simulated robot navigate in this maze; the robot had been trained in the environment of Fig. 6.23, which embodies the same rule needed to solve the maze. As shown in Fig. 6.24, the robot is able to successfully navigate the maze and find the target.
6.4 Application to Landmark Navigation

As already discussed, for both animals and artificial agents the ability to navigate is crucial for moving autonomously in the surrounding environment. Many animals
have proved to be very good at navigating through significant places in their environment using a combination of different strategies, despite their simple brain structures [15, 40, 16]. In this section, we apply the reactive model for navigation and the STDP learning rule, used to create correlations among stimuli, to formalize a homing strategy. Homing is a term that robotics has borrowed from biology [19, 56]: it usually describes the ability of various living beings to come back to their nest once other tasks, such as foraging, have been accomplished. Experimental results have shown that animals, in particular insects and rodents, apply geocentric coordinate systems in navigating [57, 10, 1]: ants and bees, for example, use a sun compass system exploiting the polarization pattern of the sky. In this way, they build a coordinate system adopting an absolute direction. For landmark navigation, rodents seem to use knowledge of distance and direction relative to the landmarks, whereas insects are supposed to receive this information in an indirect way, namely by image matching.

From a biological point of view, the study of insect navigation is subdivided according to the different strategies and modalities used by insects for the homing task. Two of the primary modalities are path integration (i.e. dead reckoning) and the use of visual landmarks. Path integration is the application of orientation and odometry to determine the distance and direction relative to the target position, e.g. the nest. To correct the loss of precision due to this open loop mechanism, especially in the proximity of the nest, insects rely on the visual field, matching the current visual pattern with the one previously stored in memory [58]. In general, two paradigms have emerged to describe visual homing in insects, which Möller defines as the template and parameter hypotheses. The template hypothesis [15] assumes that an image taken from the goal is stored as the representation for that position; this image is fixed at the retinal positions at which it was originally stored. The parameter hypothesis [1] assumes that some set of parameters is extracted and stored as the representation for the goal position. According to the terminology of Franz and Mallot, this type of navigation is called guidance, which consists in finding a non-visually marked location using knowledge concerning its spatial relation to visible cues.

The problem of determining which of the visual cues present in the environment can be considered reliable landmarks has been addressed by processing the external information through a simple network of spiking neurons with dynamic synapses plastically tuned by an STDP algorithm, which reinforces a synaptic connection between two neurons if there is a causal correlation between their spiking times. In particular, the learning process establishes correlations between the incoming stimuli, representing features extracted from the scenario, and the nest. This kind of approach has already been used to model the paradigm of classical conditioning, with a system able to associate the correct response to high-level (conditioned) stimuli, starting from a priori known responses to low-level (unconditioned) stimuli [7]; successive repetitive occurrences of the stimulus reinforce the conditioned response, according to the paradigms of classical and operant conditioning [41, 30].
Once the landmarks have been established, the agent acquires the geometric relationships holding between them and the goal position. This process defines the parameters of a recurrent neural network (RNN) which drives the robot navigation, filtering the
Fig. 6.25. Network for landmark identification in the case of Ni = 1: the arrows at the bottom of the figure represent the stimuli (Ii) from the sensor detecting the presence of the target (i = 1) and of the visual cues (i = 2, ..., NS). Each circle indicates a class I excitable neuron. The black connection between neurons indicates that the corresponding weight is fixed, while the gray connections indicate that the corresponding weights are plastic, according to the STDP learning rule.
information about landmarks, given within an absolute reference system (e.g. the North), in the presence of noise. The task of the network investigated in this work is to find a location S starting from any position within the (two-dimensional) workspace. This location is not visually marked, but its position relative to other, visible landmarks is known. When the absolute reference is not available, a safety mechanism acts to control the motion, maintaining a correct heading.
6.4.1 The Spiking Network for Landmark Identification
The first phase of the proposed algorithm is devoted to landmark identification; this Section describes the mathematical model of the network of spiking neurons responsible for it. At each time step of the agent, we assume that information about the presence of specific visual cues within the navigation environment is available, thanks to a pre-processing mechanism of the visual stimuli. Each different visual cue extracted during the time evolution is considered a landmark candidate and, if repeated detections in the proximity of the target occur, the agent recognizes that the cue is stable in time and space and can thus be considered a landmark. The network model consists of two layers of spiking neurons. The first layer is made of NS neurons: the first neuron is activated by the receptor for target detection, while the others play the role of sensory receptors for the different visual cues, each being activated by the detection of a specific visual cue. The second layer is made of Ni inter-neurons (in our model Ni = 1) used for setting up the correlation between the visual cues and the target, i.e. the nest (Fig. 6.25). Each neuron is modeled by the Izhikevich neuron model [30] of equations (6.10) and (6.11).
Fig. 6.26. (a) Spatial map of three landmarks M1, M2, and M3, a source location S, and the actual position of the agent P. Vectors Mnm connect the landmarks, vectors Sn connect the source with the landmarks, and vectors Pn connect the actual position of the agent with the landmarks. (b) The recurrent neural network for navigation. Only the net for one component (x or y) is shown. Input values are Pn(k) or Sn; output values are Pn(k+1). The harmony values H, used to determine the network performance, are determined by a separate system (dashed lines). Stars symbolize summation of squared values. See text for further explanation. (The figure is reproduced from Chapter 4.)
Class I excitable neurons are selected, since their firing rate can encode any measured quantity, which allows sensory data to be fused at the network input level (neuron parameters are chosen as a = 0.02, b = −0.1, c = −55, d = 6). In our network, the weight related to the neuron activated by the target is fixed, in particular ws11 = 8, while the other synaptic weights ws1j (j = 2, ..., NS) are plastic according to the STDP rule previously introduced. In our simulations, the parameters are fixed at A− = −0.02, A+ = 0.02, τ+ = 20 ms and τ− = 10 ms.
6.4.2 The Recurrent Neural Network for Landmark Navigation
In order to solve the guidance task, we implemented the recurrent neural network proposed by Prof. H. Cruse and introduced in Chapter 4. To describe how this network can be applied to a navigation task, as illustrated in a previous work [17], we assume that the agent knows the relative positions of the landmarks Mn (n = 1, ..., Nl; in the example Nl = 3) and of an additional, not visually marked, location S, the location of a food source (see Fig. 6.26(a)). This knowledge has been acquired through earlier learning and is stored in the form of vectors Mnm, pointing from landmark n to landmark m, and vectors Sn, pointing from landmark n to the source S. Furthermore, we assume that the agent is able to determine Nl vectors Pn(k) that point from each landmark Mn to the actual position P of the agent. The complete network consists of two individual, independent sub-networks, one for the x Cartesian components and the other for the y Cartesian components of
Fig. 6.27. (a) Schematic configuration of the navigation environment. The triangle symbolizes the target; stable visual cues are represented by red, green, blue, orange and purple rectangular objects, while objects in black, yellow and cyan represent visual cues randomly placed at each iteration. (b) Route of the agent during the environment exploration phase. The visual cues not recognized as stable landmarks are discarded in the successive map creation phase.
Fig. 6.28. Evolution of the synaptic weights during the simulation steps. Visual cues that are stable in time and space and near the target reach weight values above the fixed threshold of 1.5, while the other visual cues show no significant synaptic values.
the vectors (Fig. 6.26(b)), built using the North direction as the y-axis and one of the landmarks as the origin. Each sub-network contains three units or neurons Pn(k) (n = 1, 2, 3). The scalar values Mnm represent the components stored in memory, while the initial conditions Pn(0) are set by the current measurement of the agent-landmark distances. The following equations show the mathematical structure of the model; they represent a linear model in which the weights wnm are arbitrary but fixed:
\[
P_1(k+1) = \frac{w_{11} P_1(k) + w_{21}\,(P_2(k) - M_{21}) + w_{31}\,(P_3(k) - M_{31})}{w_{11} + w_{21} + w_{31}} \tag{6.16}
\]
\[
P_2(k+1) = \frac{w_{12}\,(P_1(k) - M_{12}) + w_{22} P_2(k) + w_{32}\,(P_3(k) - M_{32})}{w_{12} + w_{22} + w_{32}} \tag{6.17}
\]
\[
P_3(k+1) = \frac{w_{13}\,(P_1(k) - M_{13}) + w_{23}\,(P_2(k) - M_{23}) + w_{33} P_3(k)}{w_{13} + w_{23} + w_{33}} \tag{6.18}
\]
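A minimal Python sketch (ours, not from the chapter) of how equations (6.16)–(6.18) can be iterated for one Cartesian component; the relaxation property discussed in the next paragraph can be verified by feeding it mutually inconsistent measurements:

```python
import numpy as np

def relax(P0, M, W, k_steps=20):
    """Iterate eqs. (6.16)-(6.18) for one coordinate. P0: measured components
    Pn(0); M[n, m]: stored landmark-to-landmark components Mnm (M[n, n] = 0);
    W: weight matrix, here diagonal 10 and off-diagonal 1 as in Cruse [17]."""
    P = np.asarray(P0, dtype=float)
    M = np.asarray(M, dtype=float)
    n = len(P)
    for _ in range(k_steps):
        P = np.array([
            sum(W[i, m] * (P[i] - M[i, m]) for i in range(n)) / W[:, m].sum()
            for m in range(n)
        ])
    return P
```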
The network has recurrent connections: at each simulation cycle, the output values Pn(k+1) are fed back to the input (Pn(k) ← Pn(k+1)). Furthermore, the network receives, through sensors, the measures of the vectors Pn(k), which in general do not point to the same position in the plane. This reveals a fundamental property of the network: after an arbitrary input is switched off, the network relaxes, after some iterations, towards a stable state. The most important property is that, after relaxation (say, when k = k̄), all output vectors Pn(k̄) point to the same position in space, even if the input values pointed to different locations. The speed of the relaxation depends on the diagonal weights (the higher the weights wnn, the slower the relaxation). In our calculations we use, as suggested by Cruse [17], wnm = 1 for n ≠ m and wnn = 10.

When the absolute reference is not available, a safety mechanism acts to control the motion, maintaining a correct heading; this is briefly shown in the following. The model uses local and sensory information that correlates the current position of the agent, the target and the perceived environment. Many bio-inspired models have been developed to emulate insect behavior in the guidance task. The model proposed in this section refers to the parameter hypothesis which, as said above, assumes that some set of parameters is extracted and stored as the representation of the goal position. In particular:

• it uses simple sensory information (distances and angles);
• it needs at least 3 landmarks in the environment to build an egocentric coordinate system;
• compared with other models, it needs few computational resources: it is economical and attractive for robotic applications.

The structure is divided into two parts: the first concerns the memorization of the target position array with respect to the landmarks, in terms of relative distances and angles; the second relates to reaching the target position. In contrast to the RNN model described in Section 6.4.2, here we operate with vectors, i.e. distances and angles, and not in terms of x and y coordinates. Once the model has estimated the distances between landmarks and target (Fig. 6.26(a)), it stores this information in two separate arrays, named x̂ and x̂_target:

\[
\hat{x} = \begin{pmatrix} P_1 & P_2 & P_3 \\ \alpha & \beta & \gamma \end{pmatrix}, \qquad
\hat{x}_{target} = \begin{pmatrix} S_1 & S_2 & S_3 \\ \alpha_t & \beta_t & \gamma_t \end{pmatrix}
\]

During navigation, the model updates the values of the array x̂, trying to reduce the discrepancy with the x̂_target array. The navigation control is thus actuated through the
comparison of the two arrays; in particular, at iteration h, for landmark j, the model evaluates the difference between the two arrays in order to decide how to move:

\[
step_{hj} = \hat{x}_h(j) - \hat{x}_{target}(j) \tag{6.19}
\]

where x̂_h(j) indicates the j-th column of the array x̂ evaluated at step h. At each iteration, the movement of the agent is determined by the algebraic sum of the step_{hj} over all visual cues:

\[
step_h = \sum_j step_{hj} \tag{6.20}
\]
After moving, the model updates the array x̂, measuring the current distances and angles. The procedure stops when the agent reaches the nest, i.e. x̂ = x̂_target. The peculiarity of this algorithm is that it does not need an absolute reference system and uses only visual information. However, as said above, it needs at least three landmarks, even if they are not simultaneously visible by the agent. For a geometrical overview of the model, refer to Fig. 6.26(a).
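One plausible reading of eqs. (6.19)–(6.20) in code (our interpretation, not given in the chapter): each (distance, angle) column is converted into a Cartesian offset, and the agent moves against the summed discrepancy, which vanishes at the nest.

```python
import math

def homing_step(x_hat, x_target):
    """One movement step from eqs. (6.19)-(6.20). Both arguments are lists of
    (distance, angle) pairs, one per landmark, as in the x-hat arrays above."""
    dx = dy = 0.0
    for (dist, ang), (dist_t, ang_t) in zip(x_hat, x_target):
        dx += dist * math.cos(ang) - dist_t * math.cos(ang_t)
        dy += dist * math.sin(ang) - dist_t * math.sin(ang_t)
    # Move so as to reduce the discrepancy; the unit step length is our choice.
    norm = math.hypot(dx, dy)
    return (0.0, 0.0) if norm == 0 else (-dx / norm, -dy / norm)
```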
6.4.3 Simulation Results
The simulations were made in an arena of 200 × 200 pixels, filled with colored objects. These objects are considered visual cues: some are fixed, while others can move within the environment. In particular, for the simulations reported here, the number of visual cues distributed in the environment is fixed at eight: three visual cues are located near the target and are stable in time and space, two more are stable but quite far from the target position, while the remaining three are randomly placed in the arena and are considered moving objects. The agent is able to detect the presence of the target within a radius of 10 pixels (i.e. a low-level target detection sensor), while for landmark detection a visibility cone has been defined, centered on the front side of the robot with a range of [−45°, +45°]. During the exploration phase, the simulated robot is controlled by a reflexive navigation strategy, the weak chaos control, and its movement, in the absence of stimuli, is guided by the evolution of the chaotic system. This behaviour permits a complete exploration of the area. In the following subsection we report the simulation results related to landmark identification, while in subsection 6.4.3.2 we show the results related to the two navigation approaches, using the landmarks previously identified as the most reliable.

6.4.3.1 Landmark Identification

At each simulation step, if the target is within the detection range of the agent, the input to the target detector neuron is I = 30. In the case of visual cues, if the visibility conditions are satisfied, the input to the visual cue detectors is a function of the distance: I_j = 50 e^(−γ d_j), where d_j is the distance in pixels between the robot and the j-th visual cue. We chose γ = 0.02, so that the visual cue detectors approximately begin to fire at about 30 pixels from the corresponding cue. Once the neuron inputs have been calculated, a
Fig. 6.29. Egocentric system with absolute reference: (a) the initial configuration with the three recognized landmarks and the start position of the agent; (b) trajectory followed by the agent: at each time step, one (randomly chosen) landmark, say Mi, is visible, and the coordinates of the distance vector Pi set the initial conditions Pi(0) for the x-coordinate RNN and for the y-coordinate RNN (see text for details). The harmony value is shown for 600 iteration cycles.
Fig. 6.30. Egocentric system with absolute reference: examples of trajectories in the presence of different levels of noise in the estimation of Mij and Si and in the measurement of Pi at each step (i, j = 1, 2, 3)
new simulation step is computed. For each simulation, the model behavior is simulated with a time window of 300 ms, and the number of simulation cycles has been fixed at 5000. The simulation environment is represented in Fig. 6.27.
Fig. 6.31. Egocentric system without absolute reference: example of a trajectory using three landmarks. At each time step, all three landmarks are visible; the agent estimates distances and angles with respect to the landmarks and moves in order to reach the target (see text for details). The dynamic error value is shown for 20 iteration cycles.
During the phase of landmark identification, the robot explores the environment while the STDP rule updates the weight values. The simulation stops after 5000 steps of the agent if at least 3 visual cues have exceeded the threshold of 1.5. In Fig. 6.28 the evolution of the synaptic connection values during the simulation is illustrated: the model is able to correctly distinguish which of the visual cues are reliable landmarks and which are not.
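The input currents and the acceptance test used in this phase can be summarized as follows (a sketch with hypothetical names; the 1.5 threshold is the one shown in Fig. 6.28):

```python
import math

def target_input(dist_px):
    """Target detector current: I = 30 inside the 10-pixel detection radius."""
    return 30.0 if dist_px <= 10 else 0.0

def cue_input(d_j, gamma=0.02):
    """Visual-cue detector current, I_j = 50*exp(-gamma*d_j), d_j in pixels;
    the cue must also fall inside the [-45, +45] degree visibility cone."""
    return 50.0 * math.exp(-gamma * d_j)

def reliable_landmarks(weights, threshold=1.5):
    """Indices of the cues whose learned synaptic weight exceeds 1.5."""
    return [j for j, w in enumerate(weights) if w > threshold]
```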
6.4.3.2 Landmark Navigation
Egocentric system with absolute reference

Once three visual cues have been selected as reliable landmarks, the network of Fig. 6.26(b) can be used to solve the guidance task. When, after a foraging task, the simulated robot needs to return to its home (e.g. to recharge its battery), it looks for a landmark. When one of the reliable landmarks is identified, its position with respect to the absolute reference system, created with a compass sensor, is passed to the RNN. This information is used to substitute one of the inputs of the recurrent network. The stimulation is applied for only one step, and the net has time to relax for the subsequent k̄ − 1 steps (i.e. no input is given during that time). In our simulations, we let the network evolve for k̄ = 20 relaxation steps. After the relaxation, the new output Pn(k̄) points to a position that is nearer to S than the actual position P. The redundant structure of the system also gives rise to another interesting property of the network: if the selected landmark becomes invisible during the simulation, due for example to occlusion or noise, another one can be identified and selected. The result of the navigation control, driven by the two RNNs (one for each Cartesian coordinate of the absolute reference system built on the knowledge of the North), is shown in Fig. 6.29, where the trajectory of the robot during the homing phase is reported.
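As a toy usage example (assuming the relax() sketch given after eqs. (6.16)–(6.18); the landmark layout and the noise value are ours), a single noisy landmark measurement can replace one network input, after which the net relaxes with no further input:

```python
import numpy as np

W = np.where(np.eye(3, dtype=bool), 10.0, 1.0)
# Hypothetical x-components of Mnm for landmarks at x = 0, 4, 2, with the
# agent at x = 5, so the mutually consistent inputs would be Pn = (5, 1, 3).
Mx = np.array([[ 0.0,  4.0,  2.0],
               [-4.0,  0.0, -2.0],
               [-2.0,  2.0,  0.0]])
P_x = np.array([5.0, 1.0, 3.0])
P_x[1] = 1.4                           # noisy reading from the visible landmark
print(relax(P_x, Mx, W, k_steps=19))   # outputs relax to a common estimate
```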
Fig. 6.32. Egocentric system without absolute reference: examples of trajectories in the presence of different levels of noise in the estimation of Mij and Si and in the measurement of Pi at each step (i, j = 1, 2, 3)
As said above, the model can work well even if only a single landmark is visible at a time. The advantage of using more than one landmark, i.e. redundant information, is the possibility of filtering out some types of noise that can affect the sensory system of the agent. In order to test the robustness of the model against noise, two types of noise have been introduced: noise on the distance estimations Mij and Si stored in memory, and noise on the measurement of Pi acquired at each iteration. In Fig. 6.30 the model results are shown when the sensory system is affected by both types of noise. The model has been tested with increasing noise levels, with variations of the measures reaching a maximum of 50% of the correct values. The model seems able to navigate very well even with significantly noisy signals acquired by the sensory system.

Egocentric system without absolute reference

When the absolute reference is not available (e.g. either the compass sensor is not mounted on the robot or the environmental conditions do not permit such a measurement), the agent can use an alternative navigation model for navigating in the environment, in accordance with the parameter hypothesis, that makes use of local and sensory information to build an egocentric coordinate system. Fig. 6.31 shows an example of the trajectory of the agent in the case of three-landmark navigation control based on the egocentric
model without the absolute reference. In this case, the harmony index has been defined as an indicator of the relaxation towards the target, i.e. the sum of squared errors, which measures the discrepancy between the two arrays x̂ and x̂_target. As in the previous case, the model has been tested with different levels of noise on the sensory system (see Fig. 6.32), demonstrating its navigation ability in these extremely difficult environmental conditions.
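A direct transcription of this indicator (a sketch; the exact normalization is not given in the text):

```python
import numpy as np

def harmony(x_hat, x_target):
    """Sum of squared discrepancies between the current and stored arrays,
    used as an indicator of the relaxation towards the target."""
    return float(np.sum((np.asarray(x_hat) - np.asarray(x_target)) ** 2))
```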
6.5 Conclusions

In this Chapter we discussed the basic and correlation layers of the cognitive architecture that will be formalized in Chapter 7. The discussion is structured into three main blocks: the first concerns the application of dynamical systems to model reactive and precognitive behaviours; the second focuses on the correlation mechanisms used to create temporal dependencies among different stimuli and, consequently, different motor responses; finally, an application to landmark navigation is proposed to fuse together reactive behaviours and anticipation mechanisms.

As presented in the first part of the Chapter, the problem of multi-sensory integration has been treated using a new technique called weak chaos control. This approach takes inspiration from Freeman's theory of brain pattern formation, although it makes use of a more abstract model, and has been applied to a navigation control problem. The phenomenon of encoding information by stabilizing the unstable orbits embedded in a chaotic attractor has been investigated. The multiscroll chaotic system was chosen for its simple model and for the possibility of extending the emerging multiscroll attractor to one, two and three dimensions, also varying the number of scrolls. The feedback from the environment has been introduced by using a continuous multi-reference chaos control technique based on the Pyragas method, extending the discussion from the double scroll to a general n×m scroll configuration. The analytical study has been exploited to develop a reactive navigation layer for a roving robot, whose behavior during a food retrieval task has been evaluated in a 3D simulation environment.

In the second part of the Chapter, a network of spiking neurons for robot navigation control is introduced. In particular, by using stimuli of increasing complexity, a system based on spiking neurons and able to implement three local navigation skills (obstacle avoidance, target approaching and navigation with visual cues) has been considered. For all the tasks, the a priori response to low-level sensors (i.e. contact sensors in the case of obstacles, proximity target sensors in the case of target approaching, or visual targets in the case of navigation with visual cues) is known, and the robot has to learn the response to high-level stimuli (i.e. range finder sensors or visual input). The learning mechanism of these networks is provided by STDP, which allows unsupervised learning to be realized in a simple way. Ours is a bottom-up approach: we searched for the minimal structure able to implement the desired behaviors. From our analysis we can conclude that a system of spiking neurons with plastic synapses can be used to control a robot, and that STDP with additional mechanisms can be used to include plasticity in the system, learning temporal correlations among stimuli.
We believe that the introduced approach can provide efficient navigation control strategies with very simple unsupervised learning mechanisms and, at the same time, can constitute a constructivist approach of interest for studies on navigation strategies. The approach is constructivist in the sense that the same network structures used for the low level tasks are duplicated to deal with the high level tasks, which build on the acquired (developed) capabilities. Furthermore, this strategy encourages the use of parallel structures, which can be included for instance to take other visual cues into account.

Finally, we applied the reactive and correlation layers previously described to formalize a new methodology for adaptive bio-inspired navigation. According to experiments performed with insects, these are capable of using different kinds of information to localize the nest position, in order to reach it from different distances. The introduced methodology makes evident how to integrate different simple behaviors and learning algorithms in order to obtain more complex pre-cognitive capabilities. The proposed simulation shows how a simulated robot can distinguish reliable landmarks in an adaptive way for homing purposes. This means that moving or distant objects are discarded, while fixed ones are kept. This initial setting of the landmark configuration is then applied to a recurrent linear network in order to guide the robot to the nest. The network, although very simple, shows interesting capabilities to filter out noise, even at a relevant level. The approach is also robust in that reliable operation is guaranteed even when not all the landmarks are always visible to the agent. Experiments with roving robots have been carried out to verify the simulation results; details on the hardware experiments are given in Chapter 11.
References

1. Anderson, A.M.: A model for landmark learning in the honey-bee. Journal of Comparative Physiology A 114(335) (1977)
2. Arena, P., Basile, A., Bucolo, M., Fortuna, L.: An object oriented segmentation on analog CNN chip. IEEE Trans. CAS I 50(7), 837–846 (2003)
3. Arena, P., Crucitti, P., Fortuna, L., Frasca, M., Lombardo, D., Patanè, L.: Turing patterns in RD-CNNs for the emergence of perceptual states in roving robots. International Journal of Bifurcation and Chaos 18(1), 107–127 (2007)
4. Arena, P., De Fiore, S., Fortuna, L., Frasca, M., Patanè, L., Vagliasindi, G.: Weak Chaos Control for Action-Oriented Perception: Real Time Implementation via FPGA. In: Proc. International Conference on Biomedical Robotics and Biomechatronics (Biorob), Pisa, Italy, February 20-22 (2006)
5. Arena, P., De Fiore, S., Frasca, M., Patanè, L. (2006), http://www.scg.dees.unict.it/activities/biorobotics/perception.htm
6. Arena, P., Fortuna, L., Frasca, M., Lo Turco, G., Patanè, L., Russo, R.: A new simulation tool for action oriented perception systems. In: Proc. 10th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Catania, Italy, September 19-22 (2005)
7. Arena, P., Fortuna, L., Frasca, M., Patanè, L., Barbagallo, D., Alessandro, C.: Learning high-level sensors from reflexes via spiking networks in roving robots. In: Proceedings of 8th International IFAC Symposium on Robot Control (SYROCO), Bologna, Italy (2006)
8. Arkin, R.C.: Behaviour Based Robotics. MIT Press, Cambridge (1998)
9. Beard, R., McLain, T.: Motion Planning Using Potential Fields, BYU (2003)
10. Bingman, V.P., Gagliardo, A., Hough, G.E., Ioalè, P., Kahn, M.C., Siegel, J.J.: The Avian Hippocampus, Homing in Pigeons and the Memory Representation of Large-Scale Space. Integr. Comp. Biol. 45, 555–564 (2005)
11. Boccaletti, S., Grebogi, C., Lai, Y.C., Mancini, H., Maza, D.: The Control of Chaos: Theory and Applications. Physics Reports 329, 103–197 (2000)
12. Burgess, N., Becker, S., King, J.A., O'Keefe, J.: Memory for events and their spatial context: models and experiments. Phil. Trans. R. Soc. Lond. B 356, 1–11 (2001)
13. Burgess, N., O'Keefe, J.: Neuronal computations underlying the firing of place cells and their role in navigation. Hippocampus 6, 749–762 (1996)
14. Burgess, N., Recce, M., O'Keefe, J.: A model of hippocampal function. Neural Networks 7, 1065–1081 (1994)
15. Cartwright, B.A., Collett, T.S.: Landmark learning in bees. The Journal of Comparative Physiology A 151(85) (1983)
16. Collett, T.S.: Insect navigation en route to the goal: Multiple strategies for the use of landmarks. The Journal of Experimental Biology 202, 1831–1838 (1999)
17. Cruse, H.: A recurrent network for landmark-based navigation. Biological Cybernetics 88, 425–437 (2003)
18. Floreano, D., Mattiussi, C.: Evolution of Spiking Neural Controllers for Autonomous Vision-based Robots. Evolutionary Robotics IV. Springer, Berlin (2001)
19. Franceschini, N., Blanes, C.: From insect vision to robot vision. Philosophical Transactions of the Royal Society of London B 337, 283–294 (1992)
20. Franz, M.O., Mallot, H.A.: Biomimetic robot navigation. Robotics and Autonomous Systems 30, 133–153 (2000)
21. Freeman, W.J.: Simulation of chaotic EEG patterns with a dynamic model of the olfactory system. Biol. Cybern. 56, 139–150 (1987)
22. Freeman, W.J.: The physiology of perception. Sci. Am. 264, 78–85 (1991)
23. Freeman, W.J.: Characteristics of the Synchronization of Brain Activity Imposed by Finite Conduction Velocities of Axons. International Journal of Bifurcation and Chaos 10(10) (1999)
24. Freeman, W.J.: A Neurobiological Theory of Meaning in Perception. Part I: Information and Meaning in Nonconvergent and Nonlocal Brain Dynamics. International Journal of Bifurcation and Chaos 13(9) (2003)
25. Freeman, W.J.: How and Why Brains Create Meaning from Sensory Information. International Journal of Bifurcation and Chaos 14(2) (2004)
26. Fuster, J.M.: Cortex and Mind: Unifying Cognition. Oxford University Press, Oxford (2003)
27. Grossberg, S., Maass, W., Markram, H.: Introduction: Spiking Neurons in Neuroscience and Technology. Neural Networks, special issue on Spiking Neurons 14(6-7), 587 (2001)
28. Harter, D.: Evolving neurodynamics controllers for autonomous robots. In: International Joint Conference on Neural Networks, pp. 137–142 (2005)
29. Harter, D., Kozma, R.: Chaotic Neurodynamics for autonomous agents. IEEE Trans. on Neural Networks 16(3), 565–579 (2005)
30. Izhikevich, E.M.: Simple Model of Spiking Neurons. IEEE Transactions on Neural Networks 14(6), 1569–1572 (2003)
31. Izhikevich, E.M.: Which Model to Use for Cortical Spiking Neurons? IEEE Transactions on Neural Networks 15(5), 1063–1070 (2004)
32. Izhikevich, E.M.: Solving the distal reward problem through linkage of STDP and dopamine signaling. Cerebral Cortex Advance (2007)
33. Izhikevich, E.M., Gally, J.A., Edelman, G.M.: Spike-Timing Dynamics of Neuronal Groups. Cerebral Cortex 14, 933–944 (2004)
34. Jensen, O., Lisman, J.E.: Hippocampal sequence-encoding driven by a cortical multi-item working memory buffer. TRENDS in Neurosciences 28(2) (2005)
35. Khatib, O.: Real-time Obstacle Avoidance for Manipulators and Mobile Robots. International Journal of Robotics Research 5(1), 90–98 (1986)
36. Koene, R.A., Gorchetchnikov, A., Cannon, R.C., Hasselmo, M.E.: Modeling goal-directed spatial navigation in the rat based on physiological data from the hippocampal formation. Neural Networks 16, 577–584 (2003)
37. Kozma, R., Freeman, W.J.: Chaotic Resonance - Methods and Applications for Robust Classification of Noisy and Variable Patterns. International Journal of Bifurcation and Chaos 11(6) (2000)
38. Lü, J., Chen, G., Yu, X., Leung, H.: Design and Analysis of Multiscroll Chaotic Attractors from Saturated Function Series. IEEE Trans. Circuits Syst. I: Regular Papers 51 (2004)
39. Manganaro, G., Arena, P., Fortuna, L.: Cellular Neural Networks: Chaos, Complexity, and VLSI Processing. Springer, Berlin (1999)
40. Nicholson, D.J., Judd, S.P.D., Cartwright, A., Collett, T.S.: View-based navigation in insects: how wood ants (Formica rufa L.). The Journal of Experimental Biology 202, 1831–1838 (1999)
41. Pavlov, I.P.: Conditioned Reflexes. Oxford University Press, London (1927)
42. Pyragas, K.: Continuous Control of Chaos by Self-Controlling Feedback. Physics Letters A 170, 421–428 (1992)
43. Pyragas, K.: Predictable Chaos in Slightly Perturbed Unpredictable Chaotic Systems. Physics Letters A 181, 203–210 (1993)
44. Restle, F.: Discrimination of cues in mazes: A resolution of the 'place-vs-response' question. Psychological Review 64(4), 217–228 (1957)
45. Ritter, H., Martinetz, T., Schulten, K.: Neural Computation and Self-Organizing Maps - An Introduction. Addison-Wesley, New York (1992)
46. Shepherd, G.M.: Neurobiology. Oxford University Press, Oxford (1994)
47. Skarda, C.A., Freeman, W.J.: How brains make chaos in order to make sense of the world. Behav. Brain Sci. 10, 161–195 (1987)
48. Song, S., Abbott, L.F.: Cortical development and remapping through Spike Timing-Dependent Plasticity. Neuron 32, 339–350 (2001)
49. Song, S., Miller, K.D., Abbott, L.F.: Competitive Hebbian learning through spike-timing-dependent plasticity. Nature Neurosci. 3, 919–926 (2000)
50. Trullier, O., Wiener, S.I., Berthoz, A., Meyer, J.A.: Biologically-based Artificial Navigation Systems: Review and prospects. Progress in Neurobiology 51, 483–544 (1997)
51. Uexküll, J. von: Theoretical Biology. Harcourt, Brace (1926)
52. Verschure, P.F.M.J., Kröse, B.J.A., Pfeifer, R.: Distributed adaptive control: The self-organization of structured behavior. Robotics and Autonomous Systems 9, 181–196 (1992)
53. Verschure, P.F.M.J., Pfeifer, R.: Categorization, Representations, and the Dynamics of System-Environment Interaction: a case study in autonomous systems. In: Meyer, J.A., Roitblat, H., Wilson, S. (eds.) From Animals to Animats: Proceedings of the Second International Conference on Simulation of Adaptive Behavior, pp. 210–217. MIT Press, Cambridge (1992)
54. Webb, B., Consi, T.R.: Biorobotics. MIT Press, Cambridge (2001)
55. Webb, B., Scutt, T.: A simple latency dependent spiking neuron model of cricket phonotaxis. Biological Cybernetics 82(3), 247–269 (2000)
56. Weber, A.K., Venkatesh, S., Srinivasan, M.: Insect-inspired robotic homing. Adaptive Behavior 7, 65–97 (1999)
57. Wehner, R., Michel, B., Antonsen, P.: Visual navigation in insects: Coupling of egocentric and geocentric information. The Journal of Experimental Biology 199, 129–140 (1996)
58. Zampoglou, M., Szenher, M., Webb, B.: Adaptation of Controllers for Image-Based Homing. Adaptive Behavior 14(4), 381–399 (2006)
7 Complex Systems and Perception

P. Arena, D. Lombardo, and L. Patanè

Department of Electrical, Electronic and System Engineering, University of Catania, I-95125 Catania, Italy
{parena,lpatane}@diees.unict.it
Abstract. This Chapter concludes Part II of the present Volume. Here the hypothesis arises that an internal model is needed to generate internal representations which enable the robot to reach a suitable behavior, so as to optimize ideally arbitrary motivational needs. Strongly based on the idea, common to behavior-based robotics, that perception is a holistic process tightly connected to the behavioral needs of the robot, we present here a bio-inspired framework for sensing-perception-action, based on complex self-organizing dynamics. These are able to generate internal models of the environment, depending both on the environment itself and on the robot motivation. The strategy is applied, as a simple starting task, to a roving robot in a random foraging task. Perception is here considered as a complex and emergent phenomenon in which a huge amount of information coming from the sensors is used to form an abstract and concise representation of the environment, useful to take a suitable action or sequence of actions. In this chapter a model for perceptual representation is formalized by means of Reaction-Diffusion Cellular Nonlinear Networks (RD-CNNs) used to generate self-organising Turing patterns. These are conceived as attractive states for particular sets of environmental conditions, to which a proper action is associated via reinforcement learning. Learning is also introduced at the afferent stage to shape the environmental information according to the particular emerging pattern. The basins of attraction of the Turing patterns are thus dynamically tuned by unsupervised learning, in order to form an internal, abstract and plastic representation of the environment as recorded by the sensors. In the second part of the Chapter, the representation layer, together with the other blocks already introduced in the previous Chapters (i.e. basic behaviours, correlation layer, memory blocks, and others), is structured into a unique framework, the SPARK cognitive model. The role assigned to the representation layer inside this complete architecture consists in modulating the influence of each basic behaviour on the final behaviour performed by the robot to fulfill the assigned mission.
7.1 Introduction

Morphological patterns and schemes are ubiquitous in Nature: from seashells and animal coats to gastrulation, patterns play a fundamental role in life. For example, animals can easily and promptly recognize prey or predators from a smell, or from visual and acoustic information. All these sensations are nothing else than spatial temporal patterns: these elicit either instinctive actions to save the animal's life, or learned behaviors for skill improvement [21]. Our methodology is inspired by the idea that the sensing-perception-action cycle is mediated through spatial-temporal patterns: actions
are planned and executed following a pattern flow, which is continuously created and modified via a learning process mediated through the environment.

Nature offers many examples of learning used by animals to adapt their behavior to the environment in which they live. As already mentioned in the other Chapters, essentially two learning mechanisms allowing animals to associate their behavior with particular environmental states have been formalized: classical and operant conditioning. Classical conditioning is based on Pavlov's experiments [29]: an initially neutral stimulus, called conditioned stimulus (CS), is presented for a number of trials together with a motivational or unconditioned stimulus (US), able to trigger a genetically pre-wired reflex, the unconditioned response (UR). After a sufficient number of trials, the association between CS and US, which plays the role of initial reinforcer for the learning system, causes the CS presented alone to be sufficient to trigger a response similar to the UR, called conditioned response (CR). For example, Pavlov showed how some dogs, after a period of staying in a laboratory, stood up not only in the presence of food, but also in the presence of the laboratory technician who brought it. In classical conditioning the animal is passive to the learning process. Instead, if the animal has to perform a specific task in its environment, its actions are guided by a reward or by a punishment. This is the core of operant conditioning, based on the experiments by Thorndike [37] and Skinner [34]. Thorndike studied the behavior of cats placed into a cage: they had to solve the problem of leaving the cage by means of an appropriate action on a bolt. Initially the cats acted randomly but, repeating the experiment several times, they were able to learn how to avoid useless actions and to speed up the process. Similarly, Skinner observed mice and highlighted that, if a mouse accidentally leaned on a lever, causing a food marble to enter the cage, then, in the future, the mouse would continue to press the lever, because it was reinforced by the positive consequences of its action.

For a robot facing the real world, the ability to interpret information coming from the environment is crucial, both for its survival and for attaining its behavioral goals. The real world differs from structured environments because it contains moving objects and dynamical environmental states, so that it is impossible to programme the robot behavior only on the basis of a priori knowledge. To meet these needs, traditional machine perception research directed its efforts to constructing, on the basis of sensorial data, a consistent and complete symbolic or geometric representation of the real world (for example a 3D environment model derived from video cameras). However, this approach tends to disregard that perceptual needs are a consequence of the motivational and behavioral needs of a robot, and that perception is connected to the specific task to be performed. Moreover, constructing a complete model of the environment, without verifying whether it is really necessary, can be a useless waste of resources. Machine perception research has therefore developed a new paradigm which considers perception no longer as a stand-alone process, but as a holistic and synergetic one, tightly connected to the motor and cognitive system [9].
Perception is now considered as a process indivisible from action: behavioral needs provide the context for the perceptual process, which, in turn, works out the information required for motion control. In this view, internal representations are compact and abstract models built on the basis of what is really needed for
the agent to achieve its behavioral tasks [11], and this process is mediated through a behavioral-dependent internal state [28]. Following this new perspective, we refer here to Representation as the internal state resulting from the dynamical processing of the sensory events, which relaxes to a solution representing the arousal of a given pattern. This provides an abstract picture which, through learning, should more and more mirror the characteristics of the environment in which the robot is situated, aimed at solving the robot mission. For example, aiming to solve the problem of autonomous robot navigation without a priori knowledge, the successful interaction between a robot and its surroundings can be built through skill-based [35, 36] learning mechanisms. These allow the robot to achieve its tasks by building both an adequate association between sensory events and internal representation, and a suitable state-action map. This approach to perception, introduced in the first part of this Chapter, is complementary with respect to those presented in the previous parts of the book, i.e. modeling basic behaviors and the bottom-up approach. Our approach is based on complex nonlinear dynamics, exploiting the possibility of having new solutions emerge in a complex system, which are then associated with robot behaviors.
7.2 Reaction-Diffusion Cellular Nonlinear Networks and Perceptual States

Here we present an implementation of the Representation layer, based on Reaction-Diffusion Cellular Nonlinear Networks (RD-CNNs) generating Turing patterns, leaving details on the general RD-CNN architecture and on Turing patterns to Appendix I. The strategy is applied to the control of a robot which moves in an environment, trying to avoid randomly placed obstacles and to reach targets. The Representation layer can be divided into functional blocks. The starting point is the preprocessing block, which receives sensorial stimuli from the environment, dynamically clusters them and uses them as initial conditions for a two-layer RD-CNN, which is the core of perception. The CNN parameters are chosen appropriately to generate Turing patterns, which form an internal state representation. Each pattern is associated with an action by means of a simple reinforcement learning. To perform its task, the robot is provided with no a priori knowledge and learns by trial and error, according to the experiments in [37, 34]. The learning is implemented by two mechanisms: an unsupervised learning acts at the preprocessing block, allowing the system to modulate the basins of attraction of the Turing patterns, while a simple reward-based reinforcement learning is devoted to building up the association between Turing patterns and actions. The latter is based on a simplified version of the traditional Motor Map (MM) [30, 32]. We have also introduced a higher level control as an implementation of a memory mechanism. The idea of this layer has been drawn from [40, 39], where the authors develop a perceptual scheme (Distributed Adaptive Control, DAC5) as an artificial neural model of classical and operant conditioning. In DAC5 three tightly connected control layers are introduced: the reactive layer, the adaptive layer and the contextual layer. The reactive control layer implements a set of basic reflexes, where low-level sensorial,
unconditioned inputs (USs) trigger simple unconditioned actions (URs) via an internal state (IS) representation. The adaptive control layer allows the system to associate more complex stimuli, CSs, with the basic ones, USs. In this way the purely reactive activation of the IS populations, due to USs, is progressively replaced by acquired representations of CSs and by the generation of CRs. The contextual layer constructs a high-level representation of CS events and the actions associated with CRs, expressing time correlation by means of a short term memory (STM) and a long term memory (LTM). The main difference of our implementation from [40, 39] is the introduction of complex dynamics in the system implementing the sensing-perception-action loop. Dynamical systems have already been successfully used in bio-inspired locomotion control of walking robots [16, 8]. Nonlinear dynamical systems are used in place of a static neural network for reasons of biological plausibility, versatility and much improved plasticity. This latter characteristic is obtained by imposing that the set of actions to be performed by the robot is not established a priori, as in [40, 39], but is the result of a simple and effective learning mechanism built upon the surprisingly huge amount of different solutions that RD-CNNs are able to show as a function of boundary and initial conditions. This complex layer leaves a simple task to be solved by an efferent associative learning: the best matching between the emergent solutions and the behaviour optimizing a given reward. This strategy largely improves the plasticity of the methodology.
7.3 The Representation Layer

The Representation layer is made up of five main blocks (Fig. 7.1):

1. the preprocessing block, which receives and processes environmental stimuli;
2. the perception block, which creates an internal representation from sensor inputs;
3. the action selection network, which triggers an action to the effectors;
4. the Difference of Reward Function (DRF) block, which evaluates the suitability of actions and contributes to the learning process;
5. the memory block, which stores and manages past successful experiences of the robot.

The blocks of preprocessing, perception, action selection, and DRF constitute the low level control layer, whereas the memory block provides a higher level control [5].
Fig. 7.1. Functional block diagram of the implemented framework
Fig. 7.2. a. Position and function of the sensors on the robot. b. Initialization of the CNN first layer cells. The corner cells are set by obstacle stimuli (Front, Left, Right and Back obstacle distance sensors), while the central cells are set by target stimuli (O,T represent orientation sensor and target-distance detector, respectively).
In the following we will refer to an iteration to indicate the set of operations leading to a single robot action; we will refer to a cycle to indicate the set of iterations between two successive target findings.
7.3.1 The Preprocessing Block
A roving robot is provided with four distance sensors (front, left, right and back) for the detection of obstacles (Fig. 7.2a), one distance sensor for detecting the target, which in practice could be a light or a sound source, and one orientation sensor which determines the angle between the robot orientation and the robot-target direction. All the sensors have a limited range. It is desirable that the robot performs a random search when no targets are within the sensor range. As for the sensor outputs, we assume that they are all scaled in the range [−1, 1]. Each sensorial stimulus is the input for a sensing neuron (SN) with an activation function made up of variable amplitude steps, learned without any supervision, as will be outlined in the following. It should be noticed that the target sensors set the initial condition for two cells each, to balance the number of cells set by the obstacle sensors.
7.3.2 The Perception Block
Characteristics of the whole perceptual process are: • ability to represent different environment situations as internal states;
• ability to connect a specific action to each internal state;
• ability to plastically modify these associations thanks to experience.

Internal states are here used to implement the perceptual classes of action-oriented perception. They are the core of the perceptual process since they link sensing to action. On the one hand, they are the result of the dynamic processing of incoming input stimuli; on the other hand, they represent different ways to interact with the environment. To meet these tasks, it is recommendable to use a dynamical system to generate the internal states. In this chapter we use a RD-CNN [19, 25] as dynamical system and consider Turing patterns [38, 26] as internal states. In particular we use a two-layer 4 × 4 RD-CNN (with zero-flux boundary conditions) with appropriate parameters to generate Turing patterns (see Appendix I for details). Each cell (i, j) of the two-layer RD-CNN is represented by two state variables (x_{1;i,j} for the first layer and x_{2;i,j} for the second layer, with i, j = 1, ..., 4). The output of each SN sets the initial condition of the first layer state variable for one or a few cells of the whole CNN (Fig. 7.2b). The single cell (i, j) of the RD-CNN used here is described by the following model:

$$
\begin{aligned}
\dot{x}_{1;i,j} &= -x_{1;i,j} + (1+\mu+\varepsilon)\,y_{1;i,j} - s\,y_{2;i,j} + D_1 \nabla^2 x_{1;i,j} \\
\dot{x}_{2;i,j} &= -x_{2;i,j} + s\,y_{1;i,j} + (1+\mu-\varepsilon)\,y_{2;i,j} + D_2 \nabla^2 x_{2;i,j} \\
y_{h;i,j} &= \tfrac{1}{2}\left(|x_{h;i,j}+1| - |x_{h;i,j}-1|\right), \quad h = 1,2
\end{aligned}
\tag{7.1}
$$

where y_{h;i,j} (h = 1, 2) is the output of layer h of cell (i, j). Now let us consider the system in the linear region containing the origin (i.e. both state variables must have modulus smaller than one). Here y = x holds, and:

$$
\begin{aligned}
\dot{x}_{1;i,j} &= (\mu+\varepsilon)\,x_{1;i,j} - s\,x_{2;i,j} + D_1 \nabla^2 x_{1;i,j} \\
\dot{x}_{2;i,j} &= s\,x_{1;i,j} + (\mu-\varepsilon)\,x_{2;i,j} + D_2 \nabla^2 x_{2;i,j}
\end{aligned}
\tag{7.2}
$$

Dividing by $D_1$ and defining $t^* = t D_1$, $\gamma = \frac{1}{D_1}$ and $d = \frac{D_2}{D_1}$:

$$
\begin{aligned}
\frac{\partial x_{1;i,j}(t^*)}{\partial t^*} &= \gamma\left[(\mu+\varepsilon)\,x_{1;i,j}(t^*) - s\,x_{2;i,j}(t^*)\right] + \nabla^2 x_{1;i,j}(t^*) \\
\frac{\partial x_{2;i,j}(t^*)}{\partial t^*} &= \gamma\left[s\,x_{1;i,j}(t^*) + (\mu-\varepsilon)\,x_{2;i,j}(t^*)\right] + d\,\nabla^2 x_{2;i,j}(t^*)
\end{aligned}
\tag{7.3}
$$
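To make the dynamics concrete, the following is a minimal numerical sketch of model (7.1) on the 4 × 4 grid with zero-flux boundary conditions, using simple Euler integration. It is our own illustrative reconstruction, not the authors' implementation; the absolute diffusion scale D1 is an assumption of ours (only the ratio d = D2/D1 = 300 and γ = 5 are given in the text), and the step count may need tuning.

```python
import numpy as np

MU, EPS, S = -0.7, 1.1, 0.9
D1, D2 = 0.2, 60.0   # assumed scale: gamma = 1/D1 = 5, d = D2/D1 = 300

def output(x):
    """Piecewise-linear CNN output y = 0.5*(|x+1| - |x-1|)."""
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def laplacian(x):
    """Discrete Laplacian with zero-flux (replicated-edge) boundaries,
    matching the virtual-cell convention described in the text."""
    xp = np.pad(x, 1, mode="edge")
    return xp[:-2, 1:-1] + xp[2:, 1:-1] + xp[1:-1, :-2] + xp[1:-1, 2:] - 4.0 * x

def simulate(x1, x2, dt=1e-3, steps=20000):
    """Euler integration of the two-layer RD-CNN (7.1) toward steady state."""
    for _ in range(steps):
        y1, y2 = output(x1), output(x2)
        dx1 = -x1 + (1 + MU + EPS) * y1 - S * y2 + D1 * laplacian(x1)
        dx2 = -x2 + S * y1 + (1 + MU - EPS) * y2 + D2 * laplacian(x2)
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1, x2

rng = np.random.default_rng(1)
x1 = rng.choice([-3.0, 0.0, 3.0], size=(4, 4))   # first-layer initial conditions
x2 = rng.uniform(-0.005, 0.005, size=(4, 4))     # second layer close to zero
x1_ss, _ = simulate(x1, x2)
print(output(x1_ss))                              # steady-state Turing pattern
```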
In relation to this RD-CNN architecture, the conditions to obtain Turing patterns (see Appendix I for details) are written as:

$$
\begin{cases}
\mu < 0 \\[2pt]
\mu^2 + s^2 > \varepsilon^2 \\[2pt]
\varepsilon > -\mu \dfrac{d+1}{d-1} \\[2pt]
\dfrac{\left[d(\mu+\varepsilon) + (\mu-\varepsilon)\right]^2}{4d} > \mu^2 - \varepsilon^2 + s^2
\end{cases}
\tag{7.4}
$$

To obtain the emergence of Turing patterns, it is necessary that some of the modes related to the chosen geometry fall within the "band of unstable modes" (Bu). Spatial eigenvalues (and their correlated spatial modes) depend only on the topology of the CNN and on the boundary conditions.
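As a quick sanity check, the sketch below verifies conditions (7.4) numerically for the parameter set used in the remainder of the section (µ = −0.7, ε = 1.1, s = 0.9, d = 300); it is an illustration we added, not part of the original text.

```python
# Check the Turing conditions (7.4) for the parameters chosen in the text.
mu, eps, s, d = -0.7, 1.1, 0.9, 300.0

conditions = {
    "mu < 0": mu < 0,
    "mu^2 + s^2 > eps^2": mu**2 + s**2 > eps**2,
    "eps > -mu*(d+1)/(d-1)": eps > -mu * (d + 1) / (d - 1),
    "[d(mu+eps)+(mu-eps)]^2/(4d) > mu^2-eps^2+s^2":
        (d * (mu + eps) + (mu - eps)) ** 2 / (4 * d) > mu**2 - eps**2 + s**2,
}
for name, ok in conditions.items():
    print(f"{name}: {ok}")   # all four inequalities hold for this parameter set
```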
Now let us analyze how initial conditions influence the emergence of patterns. We have chosen the parameters µ = −0.7, ε = 1.1, s = 0.9, and d = 300 to satisfy the Turing conditions (7.4) and to shape appropriately the dispersion curve, and therefore Bu (Fig. 7.3). For the chosen geometry and parameters, the spatial eigenvalues k²_{i,j} are:

$$
k^2_{i,j} =
\begin{pmatrix}
0 & 0.62 & 2.47 & 5.55 \\
0.62 & 1.23 & 3.08 & 6.17 \\
2.47 & 3.08 & 4.93 & 8.02 \\
5.55 & 6.17 & 8.02 & 11.10
\end{pmatrix}
\tag{7.5}
$$

while the related eigenfunctions are shown in Fig. 7.4. To complete the linear analysis of the RD-CNN generating Turing patterns, the last parameter to choose is γ. Setting γ = 5, the dispersion curve shows three modes within the Bu: two associated with the spatial eigenvalues k²_{1,2} = k²_{2,1} = 0.62 and one related to the spatial eigenvalue k² = 1.23. The first two modes have a real part much bigger than the other one, so the prevalent modes will be those associated with the eigenvalue 0.62. Of course, the decomposition in eigenvalues and eigenfunctions provides results which hold as long as all the state variables remain within their linear subspace around the origin. As soon as one of them leaves the region, nonlinear competition arises, and new modes can suppress the ones predicted by linear analysis. Therefore, to verify whether the considerations drawn from linear theory can be extended to the nonlinear system (7.3), we performed a numerical simulation setting the initial conditions as follows:

• each first-layer cell (x1 variable) is initialized with x1 = −3, 0 or 3; these values correspond, respectively, to negative saturation, the linear region and positive saturation in (7.3);
Fig. 7.3. Dispersion curve for µ = −0.7, ε = 1.1, s = 0.9, d = 300 and for different values of γ (See Appendix I for details)
Fig. 7.4. Eigenfunctions related to the spatial eigenvalues. The arrows indicate the ones that will compete for Turing pattern generation for the chosen parameters and γ = 5.
• each second-layer cell (x2 variable) is set to a random value close to zero (within the range [−0.005, 0.005]).

Since we simulated a 4 × 4 RD-CNN and set three possible values for every first-layer cell, the complete simulation requires 3^16 = 43046721 trials. To reduce the computational effort, we considered only a random set of 5000 out of all the possible combinations of the initial conditions of the variables x1. In our implementation we considered the steady-state patterns represented by the first layer output. In order to further simplify the analysis of the simulation results, we associated a simple integer code with each emerged Turing pattern:

1. the first-layer cells are enumerated from 0 to 15 starting from the top-left corner, according to the formula c(i, j) = 4 ∗ (i − 1) + (j − 1);
2. a symbolic value y_{symb,c} is associated with each cell c as follows:
   • if the cell output is y_{1,c} = −1 then y_{symb,c} = 0
   • if the cell output is y_{1,c} = 1 then y_{symb,c} = 1
3. an integer code is associated with the steady-state pattern:

$$
code = \sum_{c} y_{symb,c}\, 2^c
\tag{7.6}
$$

During the simulation, the code of the steady-state pattern was stored after each trial in order to analyze the frequency distribution of the emerged Turing patterns.
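A possible implementation of this coding scheme is sketched below; it is our own minimal rendering of steps 1–3 and equation (7.6), assuming the steady-state first-layer output is a 4 × 4 NumPy array saturated at ±1.

```python
import numpy as np

def pattern_code(y1):
    """Integer code (7.6) of a steady-state 4x4 first-layer output.

    Cells are enumerated c = 4*(i-1) + (j-1), i.e. row-major order from the
    top-left corner; cells saturated at +1 contribute 2^c, cells at -1
    contribute nothing.
    """
    bits = (np.asarray(y1).ravel() > 0).astype(int)   # y = +1 -> 1, y = -1 -> 0
    return int(sum(b << c for c, b in enumerate(bits)))

# Two of the four dominant patterns reported in Table 7.1:
top_half = np.vstack([np.ones((2, 4)), -np.ones((2, 4))])
assert pattern_code(top_half) == 255          # first eight cells active
assert pattern_code(-top_half) == 65280       # complementary pattern
```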
Table 7.1. Frequency distribution of the stabilized patterns during the simulation

Pattern code   Frequency Distribution
255            25.5%
13107          25.1%
52428          23.4%
65280          26.0%
In spite of the large number of trials and the remarkable differences in the initial conditions, only four Turing patterns emerged, with more or less the same frequency (about 25%), as shown in Tab. 7.1. As Fig. 7.4 shows, the patterns are symmetric in pairs, and it should be noticed that all of them are associated with the two eigenfunctions related to the spatial eigenvalue 0.62 (Fig. 7.5). We hypothesized that initializing the corner cells (i.e. cells (1, 1), (1, 4), (4, 1), (4, 4)) has a higher influence on the pattern emergence than setting the initial conditions of the other cells. In fact, unlike the other cells, which have four neighbor cells, the corner cells have only two, while the two missing neighbors are replaced by two virtual cells whose state variables are set by the boundary conditions. Such virtual cells, according to the zero-flux boundary conditions, have the same state variables as the related corner cells (to annihilate the flux). Consequently, the corner cells are able to strongly influence their neighbor cells and so to control the pattern generation. To verify this hypothesis a further simulation was performed. We repeated the simulation by analyzing only four subsets of all the possible permutations of initial conditions:
• Subset 1: x1;1,1 = 3, x1;1,4 = −3, x1;4,1 = 3, x1;4,4 = −3, others = −3, 0, 3;
• Subset 2: x1;1,1 = −3, x1;1,4 = 3, x1;4,1 = −3, x1;4,4 = 3, others = −3, 0, 3;
• Subset 3: x1;1,1 = −3, x1;1,4 = −3, x1;4,1 = 3, x1;4,4 = 3, others = −3, 0, 3;
• Subset 4: x1;1,1 = 3, x1;1,4 = 3, x1;4,1 = −3, x1;4,4 = −3, others = −3, 0, 3;
The frequency distributions of these simulations show that, in most cases, the corner cell values fixed in Subset 1 determine the emergence of a unique pattern, namely 13107, independently of the initial conditions chosen for the other cells. The same happens in Subset 2 with pattern 52428, in Subset 3 with pattern 65280 and in Subset 4 with pattern 255 (Fig. 7.6). These results are due to the symmetry of the RD-CNN used. Therefore, with the chosen parameters, the outcomes of linear theory can be extended to the nonlinear case of the 4 × 4 RD-CNN. Furthermore, controlling pattern formation via the initial conditions of the corner cells has proven very effective. Nevertheless, one of the aims of the perceptual architecture is to associate a perceptual state with each pattern. Thus, in some cases, it could be useful to have a large number of Turing patterns, although the validity of linear theory could no longer be guaranteed in the nonlinear case, and pattern control is less selective. Therefore, a compromise between the number of different patterns and the ease of controlling them through the dispersion curve has to be found. The number of Turing patterns can be modified by tuning the parameter γ because, by increasing this parameter (Fig. 7.3), the band of unstable modes widens,
Fig. 7.5. Most frequently emerged patterns with µ = −0.7, ε = 1.1, s = 0.9, d = 300 and γ = 5 during the simulation
and consequently a higher number of possible modes can be selected. In this way, a stronger competition between the allowed modes is triggered to generate patterns. For example, using a 4 × 4 RD-CNN for the perceptual core and setting γ = 20, the dispersion curve selects 12 unstable modes (Fig. 7.7). To better understand how initial conditions influence the pattern emergence, we set to zero the initial conditions of all the first layer cells except the top-right corner cell (C(1, 4)) and the bottom-left corner cell (C(4, 1)), whose initial conditions were varied in the range [−1, 1]. The second layer cells are set to random values in the range [−0.005, 0.005]. Fig. 7.8 shows the geometries of the basins of attraction of the 39 emerged patterns (represented by different colors), obtained by varying the initial conditions of the two cells mentioned above. It should be noticed that, in small regions at the boundary between two adjacent basins of attraction, the strong competition between the two patterns can lead to a different pattern (Fig. 7.8). Moreover, patterns tend to distribute so that the basins of attraction of complementary configurations are symmetric with respect to the origin (0, 0). These results led us to associate the initial condition of each corner cell in the first layer with an obstacle distance sensor, to give higher priority to the obstacle avoidance task. Each of the other two sensors, i.e. those relative to targets, initializes two central cells (Fig. 7.2b). The cells of the first layer not connected to a sensor are initially set to 0, while all the cells of the second layer are set to random values in the range [−0.005, 0.005]. Having completed the design of the CNN for perceptual pattern generation, the details of the algorithm are reported in the following.
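The count of unstable modes can be checked directly from the linearization of (7.3): substituting a spatial mode with eigenvalue k², the temporal growth rates are the eigenvalues of a 2 × 2 matrix. The sketch below is our own verification, using the spatial eigenvalues of (7.5) with their multiplicities; it is not part of the original analysis.

```python
import numpy as np

MU, EPS, S, D = -0.7, 1.1, 0.9, 300.0
# Spatial eigenvalues of the 4x4 grid from (7.5), listed with multiplicity.
K2 = [0, 0.62, 0.62, 1.23, 2.47, 2.47, 3.08, 3.08,
      4.93, 5.55, 5.55, 6.17, 6.17, 8.02, 8.02, 11.10]

def growth_rate(k2, gamma):
    """Largest Re(lambda) of the linearized mode dynamics for eigenvalue k2."""
    A = np.array([[gamma * (MU + EPS) - k2, -gamma * S],
                  [gamma * S, gamma * (MU - EPS) - D * k2]])
    return np.linalg.eigvals(A).real.max()

for gamma in (5, 20):
    unstable = [k2 for k2 in K2 if k2 > 0 and growth_rate(k2, gamma) > 0]
    print(gamma, len(unstable))   # 3 unstable modes for gamma=5, 12 for gamma=20
```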
Fig. 7.6. Pattern frequency distribution for the sets of initial conditions named Subset 1, Subset 2, Subset 3 and Subset 4
Fig. 7.7. Dispersion curve with γ = 20: spatial eigenvalues of a square domain (4 × 4 cells) are indicated with crosses on x-axis when the associated temporal eigenvalue is positive, i.e. generates one or more unstable modes. The filled circle on the x-axis and the double indication of the value point out that the associated eigenvalue has multiplicity 2.
At each iteration we reset the CNN, initialize its cells according to the current sensor outputs, and let the CNN evolve towards a steady-state Turing pattern. Each pattern has its own code, which is stored in the pattern vector (if not already present) when the pattern emerges for the first time. Each element of the pattern vector contains the pattern code and the number of iterations since its last occurrence (defined as the occurrence lag). If the pattern vector
Fig. 7.8. Basins of attraction for the 39 patterns emerged varying initial conditions for top-right corner cell (x-axis) and bottom-left corner cell (y-axis) in the range [−1, 1]
is full, the new element overwrites the one containing the code of the least recently used (LRU) pattern, i.e. the one with the highest occurrence lag. The use of the steady states of a dynamical system implies a form of sensor fusion, i.e. a large amount of sensorial information is synthesized into a single attractor. It should be noticed that, although we used only distance and orientation sensors, other kinds of sensors could be chosen. At each iteration, the information coming from the sensors is fused to form a unique coherent internal state implemented by a Turing pattern.
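The bookkeeping just described can be sketched as follows; this is a hypothetical minimal data structure of ours (the original implementation is not given), with a fixed capacity and the occurrence-lag-based LRU replacement.

```python
class PatternVector:
    """Fixed-capacity store of pattern codes with LRU replacement.

    Each entry keeps a pattern code together with its occurrence lag,
    i.e. the number of iterations since the pattern last emerged.
    """

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.lags = {}          # pattern code -> occurrence lag

    def observe(self, code):
        """Register the pattern that emerged at the current iteration."""
        for c in self.lags:     # every stored pattern ages by one iteration
            self.lags[c] += 1
        if code not in self.lags and len(self.lags) >= self.capacity:
            lru = max(self.lags, key=self.lags.get)   # highest lag = LRU
            del self.lags[lru]
        self.lags[code] = 0     # current pattern: lag reset to zero
```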
7.3.3 The Action Selection Network and the DRF Block
The action selection network (Fig. 7.9) establishes the association between each element q of the pattern vector and an action A_q. An action consists of two elements, the module and the phase of the simulated robot movement:

$$
action = (module, phase)
\tag{7.7}
$$
The module and phase of an action determine, respectively, the translational step and the rotation to be performed by the robot. Each element q of the pattern vector is therefore connected to two weights, wq,m and wq,p , representing, respectively, module and phase of the action Aq associated with q. Such an association is plastic thanks to a reinforcement learning implemented by a MM-like [32] algorithm based on a Reward
Function (RF) for the evaluation of the action fitness. On the basis of the associative learning and of the traditional MM algorithm, we determine the quality of an action by means of a reward function (RF) chosen in a task-dependent way. In a random foraging task, a suitable choice for the RF is:

$$
RF = -\sum_{i} \frac{k_i}{D_i^2} - h_D \cdot D_T - h_A \cdot |\Phi_T|
\tag{7.8}
$$
where D_i is the distance between the robot and the obstacle detected by sensor i, D_T is the target-robot distance, Φ_T is the angle between the direction of the longitudinal axis of the robot and the direction connecting robot and target, and k_i, h_D and h_A are appropriate positive constants determined in a design phase. The RF summarizes in a single value the information about obstacle/target distances and the robot orientation towards the target. The learning algorithm is designed to maximize the RF: small absolute values in (7.8) indicate good situations for the robot. To evaluate an action performed at time step t, the simulated robot exploits the variation of the RF: DRF(t) = RF(t) − RF(t − 1). A positive (negative) value of DRF indicates a successful (unsuccessful) action. Successful actions are followed by reinforcements, as in the experiments of [37, 34] (see Appendix II for details).
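As an illustration, a direct transcription of (7.8) and of the DRF evaluation might look as follows; the constants and sensor readings are placeholder values of ours, not taken from the original design phase.

```python
def reward_function(obstacle_dists, target_dist, target_angle,
                    k=1.0, h_d=0.5, h_a=0.3):
    """RF of eq. (7.8): penalizes nearby obstacles, distance to the
    target and misalignment with the target direction."""
    obstacle_term = sum(k / d**2 for d in obstacle_dists if d > 0)
    return -obstacle_term - h_d * target_dist - h_a * abs(target_angle)

def drf(rf_now, rf_prev):
    """DRF(t) = RF(t) - RF(t-1): positive means the action was successful."""
    return rf_now - rf_prev

rf_prev = reward_function([1.5, 2.0, 3.0, 2.5], target_dist=4.0, target_angle=0.8)
rf_now  = reward_function([1.8, 2.2, 3.0, 2.5], target_dist=3.5, target_angle=0.4)
print(drf(rf_now, rf_prev) > 0)   # True: the robot moved toward the target
```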
7.3.4 Unsupervised Learning in the Preprocessing Block
Particular emphasis has been given to the concept of action-oriented perception. A central role in this sense is played by the effect of the RF on the SNs. In fact, the SNs have to plastically transform the environmental stimuli into initial conditions for the "best matching" Turing pattern, which will emerge and drive the following action. Therefore, after a learning process, the SNs should be able to associate with each particular environmental stimulus the Turing pattern whose associated action best matches the desired robot behavior. A possible choice for the SN activation function is an increasing step function, although, of course, other shapes could be used. Such a function is made up of ten variable amplitude steps, θ_i (1 ≤ i ≤ 10), covering the input range [−1, 1]. The amplitude of each step is determined through the RF. Initially all the steps have zero amplitude (Fig. 7.10). At each time step, if the performed action has positive effects (DRF > 0), the step amplitudes do not change. Otherwise, when the action has negative effects, the purpose is to change the pattern by changing the configuration of the basins of attraction: the step amplitudes are modified in a random way, because the direction in which to move is not known a priori. Nevertheless, the random search for the optimal step amplitudes is very effective, in the sense that the learning process continuously modifies the step amplitudes to modulate the basins of attraction of the Turing patterns. The result is a suitable clustering of the sensorial stimuli, by means of the basins of attraction, able to associate different sensor configurations with patterns that correspond to positive actions in the operative conditions of the robot. More in detail, if the action associated with the currently emerged pattern is unsuccessful (i.e. DRF < 0), then the learning algorithm for each SN acts as follows:

• determine the step amplitude θ_i related to the current SN input value;
• extract a number rnd from a zero-mean, uniformly distributed random variable r;
Fig. 7.9. Representation layer divided into its functional blocks: Preprocessing, Perception, Action Selection Network (Action in the figure) and Memory, a block (DRF) with the purpose of evaluating the goodness of performed actions, and a block (S) which manages the memory, reinforcing valid sequences and weakening misleading ones (see [5] for details)
• if rnd is positive, the ten step amplitudes θ_j are modified as:

$$
\begin{aligned}
\theta_j(new) &= \theta_j(old) && \text{if } j < i \\
\theta_j(new) &= \theta_j(old) + rnd && \text{if } j \ge i
\end{aligned}
\tag{7.9}
$$

• instead, if rnd is negative:

$$
\begin{aligned}
\theta_j(new) &= \theta_j(old) - |rnd| && \text{if } j \le i \\
\theta_j(new) &= \theta_j(old) && \text{if } j > i
\end{aligned}
\tag{7.10}
$$
Complex Systems and Perception
323
Fig. 7.10. Initial conditions of SNs activation functions
• if rnd is negative, the activation function of the considered neuron is modified as reported in Fig. 7.11b, where the amplitude of the added negative step is equal to the absolute value of rnd; To guarantee the convergence of step amplitude, the variable r varies in the range [-h,h] where h, initially set to 0.5, decreases with an aging coefficient: h(new) = 0.999h(old)
(7.11)
In the developed Representation layer, perception is then a dynamical process: similar sets of sensorial inputs are dynamically translated into the same pattern as long as this pattern does reflect an action which is really an adequate response to the received stimulus (leading to an increase in RF value). So the association between sensorial stimuli and Turing patterns is not static: it is dynamically tuned according to the most recent studies and related hypothesis in neurobiology [17, 18]. Such researches outline that the same stimulus, presented in different times, triggers different actions. In fact, the individual history, i.e. the experience, as well as context, continuously modulate the basins of attraction in the dynamical state space of the brain. 7.3.5
7.3.5 The Memory Block
The low level control layer, made up of the preprocessing block, the perception block and the action selection network, implements the reactive/adaptive layer of the simulated robot, as outlined in the introduction. In addition to the low level control layer, we introduced a higher level layer: the contextual one. We took inspiration from Neisser's theory of the Perceptual Cycle [27], according to which the scheme (modelled here as the sequence of patterns) guides exploration and, in the meantime, creates expectations about the exploration itself. If expectations are not matched, the scheme is modified to take into account the lack of accuracy of previously stored information. The contextual layer is therefore designed to store the environmental cognitive maps created by the robot and those perceptual schemes which, in known situations, have proven successful. It is worth noting that the reactive/adaptive layer is itself sufficient to make the robot able to avoid obstacles and find targets in its environment. The contextual layer is a further aid because it allows the robot to bear past experience in mind and to speed up the
Fig. 7.11. Activation function of a SN which receives a sensorial stimulus equal to 0.3 and contributes to the emergence of a pattern triggering a negative action. The amplitude step related to the input is θ7 since it covers the [0.2, 0.4] input range. The function is modified depending on the sign of a number, rnd, extracted from a zero-mean, uniformly distributed variable. If the number is positive, changes lead to the situation a). Otherwise, if the number is negative, changes lead to the situation b).
achievement of targets in the case of a static environment. In more detail, the contextual layer contains a short term memory (STM) to store the sequence of the last 20 emerged patterns and the relative sensing outputs which led to the emergence of those patterns. Environmental states are stored by means of "objects": the robot creates a cognitive map in which, for each sensor, it stores the distance from the closest sensed obstacle and an attribute distinguishing different obstacles. Such an attribute (for instance the color) allows the robot to recognize single objects and to create a cognitive map associated with the relative positions of objects [24]. Object attributes are very effective to extract in real time, in view of the use of visual microprocessors, like CNNs, able to extract many different object characteristics in real time. When the robot reaches a target, the successful sequence of patterns (reflecting the sequence of environmental situations the robot navigated through) is promoted to the long term memory (LTM). This sequence of patterns is associated with a parameter (valid in Fig. 7.9) representing the degree of reliability the robot assigns to the sequence itself.
Fig. 7.12. a. Learning environment: for each real target also the region in which it is visible for the robot is reported. b. Environment of the third phase of the experimental protocol.
At each iteration, the current positions of the objects are matched with the positions stored in the LTM. When the best matching element of the LTM differs from the current position by less than 50%, the robot tries to follow the sequence of stored patterns. If it reaches the target, the degree of reliability of the sequence is increased. Otherwise, if the sequence does not lead to a target (or, even worse, if it causes an impact with an obstacle), the valid parameter (the reliability of that sequence) is reduced. Below a certain threshold for valid, the sequence is considered useless and its location is cleared for a new sequence. In Fig. 7.9, these operations are managed by the block S.
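The reliability bookkeeping of the contextual layer can be sketched as follows; the reward/penalty amounts and the drop threshold are illustrative values of ours, since the text only specifies the qualitative rule.

```python
class LongTermMemory:
    """LTM of successful pattern sequences with a 'valid' reliability score."""

    def __init__(self, drop_threshold=0.2):
        self.sequences = []            # list of dicts: {'map', 'patterns', 'valid'}
        self.drop_threshold = drop_threshold

    def store(self, cognitive_map, patterns):
        """Promote a successful STM sequence to the LTM."""
        self.sequences.append({'map': cognitive_map,
                               'patterns': patterns,
                               'valid': 1.0})

    def feedback(self, seq, reached_target, hit_obstacle=False):
        """Adjust reliability after replaying a stored sequence."""
        if reached_target:
            seq['valid'] += 0.5
        else:
            seq['valid'] -= 1.0 if hit_obstacle else 0.5
        if seq['valid'] < self.drop_threshold:
            self.sequences.remove(seq)   # useless sequence: clear the slot
```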
7.4 Strategy Implementation and Results

The simulated environment where the robot is placed is made up of obstacles, walls (considered as obstacles too) and targets. When the robot finds a target, the latter is disabled and no longer sensed by the robot, even if it is within the robot detection range. It is enabled again when the robot finds another target, which is in turn disabled. This mechanism allows the robot to visit different targets. The simulated experiment is made up of three phases:

1. Learning. The robot is placed into a training environment (Fig. 7.12a). During the learning phase the robot plastically adapts, through the modulation of the SN activation functions, the geometry of the basins of attraction of the Turing patterns in the CNN. This leads to suitably associating patterns with actions as a function
Fig. 7.13. Mean number of new patterns per cycle during the learning phase. Each reported value represents an interval of 5 cycles.
of the environment. Reinforcement learning is used for this aim: actions, initially randomly chosen, are then associated with particular patterns to maximize the RF. 2. Test in the same environment as in the learning phase. In this phase the robot moves in the same environment where it learned to interact during the learning phase, but it no longer performs random actions. 3. Test in an unknown environment. The robot is placed into a new environment. It should be able to cope with additional obstacles (Fig. 7.12b). The task is much more difficult than in the two previous phases: to reach the targets, the robot has to pass among obstacles very close to each other. First we analyze results obtained using only the low level control layer. Afterwards, we will highlight the added value of the contextual layer. To evaluate the performance in different cases we use the following parameters: • Nsteps : mean number of steps between two successive target achievements (cycles); • Pnew : mean number of new patterns per cycle. The learning phase has to perform three main tasks: 1. determine a suitable set of Turing patterns, each one emerging in response to a different environment state; 2. modulate the basins of attraction for each pattern by tuning the amplitude step of the SNs activation function; 3. associate a suitable action with each Turing pattern. To perform task 3, it is necessary that the pattern occurs several times, since the robot learns by trial and error. Initially each new pattern is associated with a random action, but starting from the following emergence of such a pattern, the action selection network
Fig. 7.14. Activation functions of the SNs connected to all the sensors. The first two rows show activation functions related to obstacle-distance sensors: front (F), right (R), left (L) and back (B). The third row contains the activation functions of target-distance sensor (T) and orientation sensor (O).
tunes the weights in order to optimize the associated action. It is also desirable that new patterns occur only during the first learning cycles. In Fig. 7.13 we report a parameter related to Pnew during the learning phase. To limit the effects of the uncertainty introduced by the random actions, which could lead to large oscillations in the number of new patterns, we have chosen to divide the x-axis into intervals of 5 cycles each and to report the mean number of new patterns per cycle for each interval. To guarantee the convergence of the algorithm, learning cannot be considered complete while new patterns continue to emerge with a certain frequency. Therefore, in the reported simulation, although less than 1 new pattern per 10 cycles appears after 200 cycles, we have chosen to stop the learning phase after 1000 cycles, to let the robot find appropriate actions for each pattern. At the end of the training phase, the robot has learned the step amplitudes of the SNs. Fig. 7.14 shows the activation functions of the SNs connected to the four obstacle-distance sensors and the two target sensors. The different step amplitudes associated with each neuron represent different scaling coefficients for the basins of attraction of the emerging patterns. To show the result of the basin of attraction modulation, Fig. 7.15 represents the 15 new basins of attraction of the emerging patterns, the result of varying only two SN inputs, those of two obstacle-distance sensors. A comparison between Fig. 7.8 and Fig. 7.15 shows that the number of basins has markedly decreased from 39, before learning, to 15, after learning.
Table 7.2. A comparison between performance parameters in the different considered cases

         ph.2 no cont.   ph.2 + cont.   ph.3 no cont.   ph.3 + cont.
Nsteps   97.2            92.7           176.0           135.0
Pnew     0.013           0              0.026           0.015
Fig. 7.15. Basins of attraction for the 15 patterns emerged varying in the range [−1, 1] the SNs input related to right (x-axis) and left (y-axis) distance-obstacle sensors. Initial conditions for upper right-hand corner cell (C(1, 4)) and lower left-hand corner cell (C(4, 1)) are scaled according to the step amplitude of the SNs activation functions.
Such a reduction in the number of patterns and the modulation of their basins of attraction are the result of the unsupervised learning in the preprocessing block, performed in order to:

• cluster the Turing patterns into a meaningful set of non-redundant internal states;
• adapt the internal states (shape of the basins of attraction) to the behavioral needs of the robot.

Once the step amplitudes are fixed, the robot can refine the actions associated with the emerging patterns. The result of this learning is shown in Fig. 7.16, which reports the actions defined after this learning phase. The presence of similar emerged actions is useful to obtain more refined trajectories. We now investigate the possible improvement brought by the use of the contextual layer. In Table 7.2 we report the values of Nsteps and Pnew for the four considered cases: namely phases 2 and 3, both with and without the contextual layer. It is worth noting that there is a substantial difference in the number of steps between the second and the third phase. This is because, during the third phase, the robot has to move in a more complicated environment. Another notable result is that in phase 2 there is no meaningful difference between the Nsteps with and without the contextual layer (about 97 and 93 steps). On the contrary, such a layer produces a large improvement in phase 3 (Nsteps is reduced from 176 to 135). Therefore the contextual layer acquires an
Fig. 7.16. Actions associated with the emerging patterns after the learning at action selection network layer. Blue points represent the movement performed by the robot starting from the origin.
Fig. 7.17. Trajectory of a sample cycle and most frequent emerged patterns. Thanks to the large pattern number, the robot is able to follow a very smooth trajectory to avoid the obstacle and reach the target.
importance that increases with the difficulty of the task. In an environment with few obstacles, the added complexity of the contextual layer may not be compensated by the improvement it generates; in an environment with many obstacles, where it is often necessary to temporarily worsen the RF in order to reach a target, the contextual layer provides "self-confidence" by means of past successful memories. In all four cases of Table 7.2, the very low values of the parameter Pnew, despite the large number of possible Turing patterns, are a result of the convergence of the
learning processes and indicate the correct working of the implemented Representation layer. In Fig. 7.17, for the second phase, the trajectory of the robot and the patterns which most frequently emerged during a cycle are shown.
Remarks
We proposed to consider perception as a dynamical process, as indicated by recent neurobiological research, and not only for the sake of biological plausibility. Indeed, representing a wide and heterogeneous range of information in a compact way, using a spatial-temporal pattern, allows the system to generalize from specific environmental situations. In this way a robot can also successfully deal with a dynamically changing environment, thanks to its intrinsically dynamical perceptual core. Furthermore, unlike traditional computational techniques for sensor fusion, this approach avoids the drawback of a large computation time. The parallel processing capabilities of CNNs are well known and allow some tera-operations per second to be performed. Hence, this kind of sensor fusion is not time-consuming once the learning processes, which can be carried out off-line, are complete.
7.5 SPARK Cognitive Architecture
In the previous part of the chapter, an approach based on nonlinear complex dynamics was presented as a new way of facing perception problems. The model was derived independently of any other pre- or proto-cognitive skills that could have been acquired by the agent. In this way it was demonstrated that this kind of approach is able to create representations of the environment, useful for deriving complex, environment-dependent internal models and for attaining behaviors that incrementally optimise a given reward function. In the following part of the chapter, the approach will be enriched by considering the presence, within the agent, of parallel sensory-motor pathways, like those investigated in depth in the first part of the book with reference to insect neurobiology. The SPARK framework for action-oriented perception can be conceived as a hierarchical structure where competitive and cooperative control layers coexist at the same time (see Fig. 7.18). It can be divided into functional blocks, acting either at the same layer (concurrent basic behaviors) or at different levels. In particular, at the lowest level, there are the pre-cognitive behaviours: direct sensory-motor pathways where the incoming stimuli trigger simple reflexes, without sophisticated processing. The pre-cognitive behaviours are: cricket phonotaxis, i.e. the behaviour shown by the female cricket when following a particular sound chirp emitted by a male cricket (see Chapter 3, Section 2.4); the optomotor behaviour, i.e. the ability to correct the heading by compensating changes in the visual field (see Chapter 3, Section 2.3); and obstacle avoidance capabilities based on contact sensors and antennae or on visual information about the obstacles (see Chapter 2 for biological details). These are predefined basic abilities that do not require any plasticity or adaptation: they are in some way genetically coded in the robot. Among the basic behaviours, it is possible to include some other abilities that, although mostly reflex-based, rely on simple processing. These are the proto-cognitive behaviours, such as landmark navigation and homing: they are not based on pre-wired
Fig. 7.18. SPARK action-oriented perception framework
connections, and their level of sensory processing is more substantial than in the pre-cognitive behaviours. The homing behaviour (see Chapter 3), in particular, is realised by many different insects that adopt either local or global strategies, or a combination of the two. In the robot's case, we will demonstrate here that this ability can be achieved by first learning, by means of an STDP network, which visual cues in the environment are reliable landmarks, and then navigating through them by using an MMC network. The latter stores the distances between landmarks and between landmarks and the home position, and dynamically uses and filters the input from partially obscured landmarks to guide the robot toward its home. Building upon this layer of parallel pre/proto-cognitive behaviors, a higher level, the anticipatory layer, also called the MB-model in Fig. 7.18, can be placed. In insects, the Mushroom Bodies (MB) are thought to be responsible for the ability to create correlations between different sensory events, forming secondary pathways where one reflex (basic behaviour) can be triggered by an anticipatory signal instead of the “usual” stimulus (see Chapter 1 for biological details). These characteristics are modeled here by means of temporal correlations between the sensory events, referring to the theory of Classical Conditioning implemented through a spike-time-dependent plasticity rule. The Representation layer is a higher control level where the incoming sensory events are fused together to form an abstract and concise representation (i.e. a situation) of the environment. This is a kind of multimodal sensory integrator which creates a representation of the surrounding environment based on the whole sensory system. This layer was implemented by exploiting the already discussed capabilities of RD-CNNs showing Turing patterns. Here an afferent (input) stage associates the current sensory events with the initial conditions for the RD-CNN, which forms a Turing pattern as its steady-state condition. The Turing pattern takes the role of the perceptual state of the system and represents the current situation of the environment. A code is
associated with the emerging pattern for the modulation of the basic behaviours. Representations are therefore collected in self-organizing basins of attraction, allowing the emergence of “situation-related patterns”. Two learning processes act at the afferent and at the efferent stage of the Representation layer in order to shape the geometry of the basins of attraction, so as to optimize the association between sensation and Turing patterns and between Turing patterns and action. Learning is guided by the motivation layer, which consists of a Reward Function (RF), designed according to the robot tasks, that defines the degree of satisfaction of the robot. The RF, defined a priori, is a nonlinear, possibly non-convex function to be maximised. Furthermore, a planning ability can be provided by the experience stored in a long-term memory: situations which led the agent to successfully accomplish the task are chained into sequences. These are recalled each time a stored situation presents itself again. However, the memory block just outlined does not contain all the functional memory skills of the architecture. For example, homing, a proto-cognitive behaviour, is modelled with the Mean of Multiple Computations (MMC) model (see Chapter 4 for details), which includes a recurrent neural network: this is clearly a memory system. The same holds for other structures, like, for example, the augmented Walknet. The role assumed by Turing patterns in RD-CNNs is no longer that of a motor action, but of a modulation of the basic behaviors. In such a way, while during the first stages of the learning phase the robot activity is primarily based on the basic behaviors, the Representation layer incrementally assumes the role of properly modulating the basic behaviors so as to lead to the emergence of higher cognitive capabilities. The first experiments already carried out with real robots testify that even a simple linear modulation of the basic behaviors in space and in time gives the robot the capability of escaping from local minima, which are unavoidable when using each single parallel pathway alone. Therefore, in this augmented architecture, the functional role of the pattern is exactly that of helping to create representations upon reflex-based basic sensory-motor loops, implementing, in such a way, what is schematically depicted in the insect brain architecture in Chapter 1. The resulting motor commands are projected to the pre-motor area, where they are transmitted to the actuators. According to the defined strategy, the robot can be driven by different combinations of the control blocks: e.g., by averaging the concurrent basic behaviors only, or by enabling anticipation between the sensors. Furthermore, the representation layer can either command the final action, disabling the basic behaviors, as demonstrated in the previous sections, or modulate them. In fact, the codes associated with each pattern can either define the action to be performed by the agent or modulate the level of activation of each proto/pre-cognitive behaviour. In this way, for instance, behaviours like predicting and compensating reafference, typical of the optomotor reflex when phonotaxis is active, could be autonomously learned at this stage. The pattern-action/modulation association is learned by using a Reward Function that takes into consideration a global degree of satisfaction, decided at the highest layer, which encodes the robot mission.
In the following part of the chapter we will concentrate on behavior modulation and on the new role of the representation layer. The blocks related to memory and planning will not be explored further; we refer back to what was mentioned in the earlier part of the chapter.
7.6 Behaviour Modulation
As in insects, the proposed perceptual architecture is organized in various control levels consisting of functional blocks acting either on the same level, as competitors, or at distinct hierarchical levels. Parallel perceptual processes and a vertical hierarchy coexist, allowing the robot to show basic skills as well as complex emerging behaviors. The representation layer works as a nonlinear feedforward complex loop, whose output is trained to combine the basic behaviors, which are prewired and give a baseline of knowledge to the system. The loop is finally closed through the robot body and the environment. The control process can be divided into functional blocks: at the lowest level, we place the parallel pathways representing the basic behaviors, each one triggered by a specific sensor; at a higher level we introduce a representation layer that processes the sensory information in order to define the final behavior. At each time t, the final action AF(t) performed by the robot consists of:
• a variable turning movement (rotation)
• a fixed-length forward movement

7.6.1 Basic Behaviors
The basic behaviors implemented here are, as previously discussed: the optomotor reflex, cricket phonotaxis and the ability to avoid obstacles, e.g. detected by the antennae (for details see Chapters 1 and 2 for the biological principles and Chapter 3 for biologically plausible implementations of the basic behaviours). At each time step t, the optomotor reflex tries to compensate for the previously executed rotation, as occurs in crickets, which try to compensate for their leg asymmetry in order to maintain the heading. Even though a detailed neural network was developed to carefully model the neural control system for this behavior (see Chapter 3), in this case a very simple rule was adopted:

A_o(t) = −A_F(t − 1)   (7.12)

where A_o(t) is the rotation triggered by the optomotor reflex at time step t and A_F(t − 1) is the turn executed by the robot at the previous time step. The obstacle avoidance behavior guides the robot in avoiding obstacles perceived by the distance sensors. It is implemented by a simplified version of the traditional potential field [10]:

A_a(t) = f_a(d_F(t), d_L(t), d_R(t))   (7.13)

where A_a(t) is the rotation triggered by obstacle avoidance and d_F(t), d_L(t), d_R(t) are the distances provided by the three distance sensors. Finally, phonotaxis proposes a rotation A_p(t) aiming to compensate for the phase between the robot heading and the robot-target direction:

A_p(t) = f_p(p(t))   (7.14)
where p(t) is the phase between the robot and the sound source. The function f_p(·) used in this application is an oversimplified version of the model of the phonotaxis behavior reported in Chapter 3.
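As a minimal illustration of Eqs. (7.12)-(7.14), the three proposals can be sketched as follows; Python is used here for brevity (the original simulator was written in C++), and the gain k_avoid and the saturated form of f_p are illustrative assumptions, not the tuned expressions of the original implementation.

  def optomotor(prev_rotation):
      # Eq. (7.12): undo the rotation executed at the previous time step.
      return -prev_rotation

  def obstacle_avoidance(d_front, d_left, d_right, k_avoid=0.5):
      # Eq. (7.13): simplified potential-field rule; an imbalance between the
      # lateral distances steers the robot away from the closer obstacle,
      # more strongly when the frontal distance shrinks (k_avoid is assumed).
      return k_avoid * (d_left - d_right) / max(d_front, 1e-6)

  def phonotaxis(phase):
      # Eq. (7.14): rotate so as to cancel the phase between the robot
      # heading and the sound source; a saturated linear f_p is assumed.
      return max(-1.0, min(1.0, phase))

Each function returns a proposed rotation; how the three proposals are combined is the subject of the Representation layer discussed next.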
7.6.2 Representation Layer
Building on the basic behaviors, we consider as a complex behavior the ability to interpret “situations” in terms of robot-environment interaction (i.e. perception for action). The robot perceives through its sensory apparatus and processes the information at a cognitive level to optimize its behavior in relation to the assigned mission. The aim of the Representation layer, the highest control level within the whole cognitive process, is to transform different environmental situations into representations, which determine the modulation of the basic behaviors. The architecture adopted for this layer has been discussed previously in the chapter; here its integration with the whole architecture in terms of behaviour modulation is discussed. The Selection Network, which in the previous formulation of this layer was devoted to selecting the final action, is now used to associate each element q of the pattern vector with a set of three parameters (k_o^q, k_a^q, k_p^q) that modulate the basic behaviours (the optomotor reflex, obstacle avoidance and phonotaxis, respectively). At the first occurrence of the pattern q, these parameters are randomly chosen in the range [0, 1] with the constraint that:

k_o^q + k_a^q + k_p^q = 1   (7.15)

Then the parameters are modified under the effect of the learning process acting at the efferent (i.e. output) stage of the Representation layer, as explained in the following. After the learning process is complete, at each time step t, once the Turing pattern q(t) has been generated, the corresponding modulation parameters are selected and the behavior that emerges (i.e. the rotation) is the weighted sum of the actions suggested by the basic behaviors at that time:

A_F(t) = k_o^q · A_o(t) + k_a^q · A_a(t) + k_p^q · A_p(t)   (7.16)
The Reward Function is now considered as a combination of terms, each one representing the degree of satisfaction related to the corresponding basic behavior i, where i = o, a, p:

RF_o(t) = r_o(|A(t − 1)|)
RF_a(t) = Σ_i r_i(D_i(t))
RF_p(t) = r_p(|p(t)|)   (7.17)

Here A(t) is the action performed at time t, D_i(t) is the distance between the robot and the obstacle detected by sensor i (i = Front (F), Right (R), Left (L)) and p(t) is the phase between the robot orientation and the robot-target direction.
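A compact sketch of the modulation mechanism of Eqs. (7.15)-(7.16) follows; the dictionary-based storage of the per-pattern parameters is an implementation assumption, written in Python for brevity.

  import random

  modulation = {}  # pattern id q -> (k_o, k_a, k_p)

  def get_params(q):
      # First occurrence of pattern q: random weights normalised so that
      # k_o + k_a + k_p = 1, as required by Eq. (7.15).
      if q not in modulation:
          r = [random.random() for _ in range(3)]
          s = sum(r)
          modulation[q] = tuple(x / s for x in r)
      return modulation[q]

  def final_rotation(q, A_o, A_a, A_p):
      # Eq. (7.16): weighted sum of the rotations proposed by the
      # optomotor reflex, obstacle avoidance and phonotaxis.
      k_o, k_a, k_p = get_params(q)
      return k_o * A_o + k_a * A_a + k_p * A_p

During learning, the stored triples are then refined by the efferent learning process described above.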
7.7 Behaviour Modulation: Simulation Results

7.7.1 Simulation Setup
The software simulation environment, developed in C++, allows the creation of an arena made up of walls, obstacles and targets. In the arena, a robot equipped with a distributed sensory system can be simulated. The dimensions of the arena are 300 × 300 pixels; the learning and the tests have been performed with different configurations of
Fig. 7.19. (a) Learning arena. (b) Testing arena: the numbers indicate the different positions of the target.
obstacles (Fig. 7.19). The simulated robot is equipped with three distance sensors and one target sensor providing the phase between the robot orientation and the robot-target direction. The front sensor detects obstacles within a limited range of 40 pixels, while the other two obstacle sensors are oriented at −45° and 45° with respect to the robot heading. All the sensors have a visual cone of [−30°, 30°]. Note that, for all the distance sensors, the output saturates at the limit of the detection range: even if no obstacle is detected, the output of the sensor is 40 pixels for the front distance sensor and 20 pixels for the other two distance sensors. The target sensor has an unlimited range and provides the angle between the robot orientation and the robot-target direction. All the sensor outputs are scaled into the range [−1, 1]. The components of the RF in Eq. (7.17) were heuristically defined as:
• r_o(t) = −A_F(t − 1)
• r_F(t) = −e^(−8(D_F(t)+1))
• r_L(t) = −e^(−8(D_L(t)+1))
• r_R(t) = −e^(−8(D_R(t)+1))
• r_p(t) = −|p(t)|
where D_F(t), D_R(t), D_L(t) are the distances detected by the sensors F, R, L, p(t) is the angle between the robot heading and the robot-target direction, and A_F(t − 1) is the rotation made by the robot at time step t − 1. All the sensor values are normalized into the range [−1, 1]. In the following simulations, the Reward Function is the weighted sum of the three main contributions, with gain factors h_o = 1, h_a = 10 and h_p = 10 for the optomotor, avoidance and targeting components, respectively. In this way more importance is given to the contribution of the obstacle information than to the target information, because the former is crucial to preserve the robot's integrity. In particular, the output coming from the front obstacle sensor has the greatest weight in the RF. Through the definition of this reward function, we give the robot knowledge about the task to be fulfilled, but it has no a priori knowledge about the correct way to interact with the environment. The phase of the actions associated with each pattern is therefore randomly initialized within the range [−20°, 20°].
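The reward terms listed above translate directly into code; the following sketch (with sensor readings assumed already normalised to [−1, 1]) combines them with the stated gains h_o = 1, h_a = 10, h_p = 10.

  import math

  def reward(A_prev, D_F, D_L, D_R, p, h_o=1.0, h_a=10.0, h_p=10.0):
      # Eq. (7.17) with the heuristic components defined above.
      rf_o = -abs(A_prev)                        # penalise large turns
      rf_a = sum(-math.exp(-8.0 * (d + 1.0))     # penalise nearby obstacles
                 for d in (D_F, D_L, D_R))
      rf_p = -abs(p)                             # penalise heading error
      return h_o * rf_o + h_a * rf_a + h_p * rf_p

Since D = −1 corresponds to a touching obstacle, each avoidance term reaches its most negative value exactly when a collision is imminent.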
Fig. 7.20. (a) Modulation parameters used in the first 30000 movements. (b) Parameters used in the last 30000 movements, with the indication of the region associated with pattern 9164 (i.e. the most frequently emerged one).
7.7.2 Learning Phase
As far as the simulated robot is concerned, the task assigned to it consists of reaching a target while avoiding obstacles. When the target is found, a new target appears in a random position. The learning phase lasts until one of the following two conditions occurs:
• the value of a_q averaged over the last 1000 patterns drops below 0.0001;
• 5000 targets have been found.
At the beginning of the learning phase, the robot randomly modulates the basic behaviors, due to the random initialization of the modulation parameters k_i^q (i = a, o, p), which determine the robot heading. During the learning process, the Motor Map-like algorithm corrects the parameters associated with each pattern. Fig. 7.20 shows the modulation parameters used in the first and in the last 30000 actions of a typical simulation phase.
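A sketch of the two termination checks follows; the bookkeeping of the last 1000 a_q values is an assumed implementation detail, consistent with the conditions stated above.

  from collections import deque

  recent_aq = deque(maxlen=1000)  # a_q of the last 1000 emerged patterns

  def learning_finished(targets_found):
      # Stop when the averaged search step collapses or after 5000 targets.
      avg_aq = sum(recent_aq) / len(recent_aq) if recent_aq else float('inf')
      return avg_aq < 1e-4 or targets_found >= 5000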
7.7.3 Testing Phase
The testing phase is carried out every 30000 actions and consists of 10 target findings, with the targets placed in different positions within the testing arena (Fig. 7.19). To evaluate the benefit of the learning process, we compare the results of the test with those obtained using constant or randomly chosen modulation parameters. The results are reported in Table 7.3: the learning process leads to a dramatic reduction both in the average number of actions needed to reach a target and in the average number of collisions, demonstrating the effectiveness of the control architecture and its capability to generalize the representations. In particular, this feature has been proven by performing the test in a scenario different from the one used for learning. Fig. 7.21 shows examples of trajectories followed during the testing phase in the cases of fixed, random and learned modulation parameters; further details are reported in [4].
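The comparison reported in Table 7.3 amounts to running the same ten-target protocol under three parameter policies; a sketch of such an evaluation harness is given below, where place_targets and drive_to are hypothetical stand-ins for the simulator's routines.

  def run_test(policy, n_targets=10):
      # policy is one of "fixed", "random", "learned".
      actions, collisions = 0, 0
      for target in place_targets(n_targets):   # hypothetical helper
          stats = drive_to(target, policy)      # hypothetical helper
          actions += stats.actions
          collisions += stats.collisions
      return actions / n_targets, collisions / n_targets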
Fig. 7.21. Trajectories in the testing arena in the case of constant (a), randomly chosen (b) and learned (c) modulation parameters. The first two configurations (a-b) take a long time to reach the target and suffer from many collisions. With the learned modulation parameters, a very direct, yet safe, behavior emerges.

Table 7.3. Simulation results in terms of the average number of actions and the average number of collisions needed to find a target, for fixed, random and learned modulation parameters

                                Fixed   Random   Learned
Average number of actions       166.8   176.3    28
Average number of collisions    95.4    40.7     5
7.8 Conclusions
The framework introduced in this chapter represents a first approach to modelling action-oriented perception by means of nonlinear dynamical spatial-temporal systems, implemented through RD-CNNs showing Turing patterns. These are used as “perceptual states”. As solutions of a nonlinear PDE discretised in space on a lattice, they represent the abstract and concise nonlinear transformation of the environment, hosted in the CNN through its initial conditions. This concise representation of the sensors can be obtained in real time thanks to a feasible VLSI implementation of CNNs. Moreover, plasticity is added to the “front-end”, at the level of the afferent layer (dealing with the sensors) and of the efferent layer (modulating actions). The capability of the perceptual scheme is enriched by the presence of a contextual layer, able to further extract, from the emergence of patterns, a cognitive map. This contributes to
gaining self-confidence through the exploitation of known situations, above all in complex environments. The approach, applied here to navigation control in a roving robot, can easily be migrated to other robotic platforms, by redefining the basic behaviors, and to other applications, by redesigning the reward function. The approach is currently being applied to a more complex structure, a hexapod robot (see Chapter 11), where the control actions are much more complex and the basic behaviors include, for instance, not only avoiding obstacles by turning, but also climbing over steps. In this case patterns can indicate the particular scheme of leg motions to be applied under particular environmental conditions.
References
1. Adamatzky, A., Arena, P., Basile, A., Carmona-Galán, R., De Lacy Costello, B., Fortuna, L., Frasca, M., Rodríguez-Vázquez, A.: Reaction-Diffusion Navigation Robot Control: From Chemical to VLSI Analogic Processors. IEEE Transactions on Circuits and Systems I 51, 926-938 (2004)
2. Arena, P., Baglio, S., Fortuna, L., Manganaro, G.: Self Organization in a two-layer CNN. IEEE Trans. on Circuits and Systems - Part I 45(2), 157-162 (1998)
3. Arena, P., Caponetto, R., Fortuna, L., Manganaro, G.: Cellular neural networks to explore complexity. Soft Computing Research Journal 1(3), 120-136 (1997)
4. Arena, P., Costa, A., Fortuna, L., Lombardo, D., Patanè, L.: Emergence of perceptual states in nonlinear lattices: a new computational model for perception. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France (2008)
5. Arena, P., Crucitti, P., Fortuna, L., Frasca, M., Lombardo, D., Patanè, L.: Turing patterns in RD-CNNs for the emergence of perceptual states in roving robots. International Journal of Bifurcation and Chaos 18(1), 107-127 (2007)
6. Arena, P., Fortuna, L., Frasca, M., Pasqualino, R., Patanè, L.: CNNs and Motor Maps for Bio-inspired Collision Avoidance in Roving Robots. In: Proceedings of the 8th IEEE International Biannual Workshop on Cellular Neural Networks and their Applications (CNNA), Budapest, Hungary (2004)
7. Arena, P., Fortuna, L., Frasca, M., Patanè, L.: A CNN-based chip for robot locomotion control. IEEE Transactions on Circuits and Systems I: Regular Papers 52(9), 1862-1871 (2005)
8. Arena, P., Fortuna, L., Frasca, M., Sicurella, G.: An adaptive, self-organizing dynamical system for hierarchical control of bio-inspired locomotion. IEEE Transactions on Systems, Man and Cybernetics, Part B 34, 1823-1837 (2004)
9. Arkin, R.C.: Behaviour Based Robotics. MIT Press, Cambridge (1997)
10. Borenstein, J., Koren, Y.: The vector field histogram - fast obstacle avoidance for mobile robots. IEEE Journal of Robotics and Automation 7, 278-288 (1991)
11. Brooks, R.A.: Intelligence without reason. In: Mylopoulos, J., Reiter, R. (eds.) Proceedings of the 12th International Joint Conference on Artificial Intelligence, San Mateo, California (1994)
12. Carmona, R., Jiménez-Garrido, F., Domínguez-Castro, R., Espejo, S., Rodríguez-Vázquez, A.: Bio-inspired analog VLSI design realizes programmable complex spatio-temporal dynamics on a single chip. In: Proceedings of the Design, Automation and Test in Europe Conference and Exhibition (DATE), Paris, France (2002)
13. Chua, L.O. (ed.): Special Issue on Nonlinear Waves, Patterns and Spatio-Temporal Chaos. IEEE Trans. on Circuits and Systems - Part I 42(10) (1995)
14. Chua, L.O., Yang, L.: Cellular neural networks: theory. IEEE Trans. on Circuits and Systems 35, 1257-1272 (1988)
15. Chua, L.O., Yang, L., Krieg, K.R.: Signal processing using cellular neural networks. Journal of VLSI Signal Processing 3, 25-51 (1991)
16. Frasca, M., Arena, P., Fortuna, L.: Bio-Inspired Emergent Control of Locomotion Systems. World Scientific Series on Nonlinear Science Series A 48 (2004) ISBN 981-238-919-9
17. Freeman, W.J.: How Brains Make Up Their Minds. Weidenfeld and Nicolson, London (1999)
18. Freeman, W.J.: How and why brains create meaning from sensory information. International Journal of Bifurcation and Chaos 14, 515-530 (2004)
19. Goras, L., Chua, L.: Turing Patterns in CNNs - Part I-II. IEEE Trans. Circuits and Systems I 42, 602-626 (1995)
20. Goras, L., Chua, L.O., Leenaerts, D.M.W.: Turing Patterns in CNNs - Part I: Once Over Lightly. IEEE Trans. on Circuits and Systems - Part I 42, 602-611 (1995)
21. Kelso, J.A.S.: Dynamic patterns: The self-organisation of brain and behavior. MIT Press, Cambridge (1995)
22. Kohonen, T.: Self-organized formation of topologically correct feature maps. Biological Cybernetics 43, 59-69 (1982)
23. Koren, Y., Borenstein, J.: Potential field methods and their inherent limitations for mobile robot navigation. In: Proceedings of the IEEE Conference on Robotics and Automation (ICRA), Sacramento, CA, pp. 1398-1404 (1991)
24. Lynch, K.: The Image of the City. MIT Press, Cambridge (1960)
25. Manganaro, G., Arena, P., Fortuna, L.: Cellular Neural Networks: Chaos, Complexity and VLSI Processing. Springer, Heidelberg (1999)
26. Murray, J.D.: Mathematical Biology I: An Introduction, 3rd edn. Springer, NY (2002)
27. Neisser, U.: Cognition and Reality: Principles and Implications of Cognitive Psychology. W.H. Freeman, San Francisco (1976)
28. Nolfi, S.: Power and Limits of Reactive Agents. Neurocomputing 42(1), 119-145 (2002)
29. Pavlov, I.: Conditioned Reflexes. Translated by G.V. Anrep. Oxford University Press, London (1927)
30. Ritter, H., Schulten, K.: Kohonen's Self-Organizing Maps: Exploring their Computational Capabilities. In: Proceedings of the IEEE International Conference on Neural Networks, San Diego, CA, pp. 109-116 (1988)
31. Roska, T., Chua, L.O.: The CNN universal machine: an Analogic Array Computer. IEEE Trans. on Circuits and Systems - Part II 40, 163-173 (1993)
32. Schulten, K.: Theoretical biophysics of living systems. In: Ritter, H., Martinetz, T., Schulten, K. (eds.) Neural computation and self-organizing maps: An introduction. Addison-Wesley, New York (1992), http://www.ks.uiuc.edu/Services/Class/PHYS498TBP/spring2002/neuro book.html
33. Scott, A.: Nonlinear Science. Oxford Univ. Press, Oxford (1999)
34. Skinner, B.F.: About behaviorism. Alfred Knopf, NY (1974)
35. Tani, J., Fukumura, N.: Embedding task-based behavior into internal sensory-based attractor dynamics in navigation of a mobile robot. In: Proceedings of the IEEE Int. Conf. on Intelligent Robots and Systems, pp. 886-893 (1994)
36. Tani, J., Fukumura, N.: Learning goal-directed sensory-based navigation of a mobile robot. Neural Networks 7(3), 553-563 (1994)
37. Thorndike, E.L.: A constant error in psychological ratings. Journal of Applied Psychology 4, 469-477 (1920)
38. Turing, A.M.: The chemical basis of morphogenesis. Phil. Trans. Roy. Soc. Lond. B 237, 37-72 (1952)
340
P. Arena, D. Lombardo, and L. Patan´e
39. Verschure, P.F.M.J., Althaus, P.: A real-world rational agent: unifying old and new AI. Cognitive Science 27, 561-590 (2003)
40. Verschure, P.F.M.J., Voegtlin, T., Douglas, R.J.: Environmentally mediated synergy between perception and behaviour in mobile robots. Nature 425, 620-624 (2003)
Appendix I
CNNs and Turing Patterns
The classical CNN architecture, in the particular case where each cell is defined as a nonlinear first-order circuit, is shown in Fig. 7.22, in which u_ij, y_ij and x_ij are the input, the output and the state variable of the cell C_ij, respectively; the cell nonlinearity lies in the relation between the state and the output variables, given by the piecewise-linear (PWL) equation (Fig. 7.22(c)):

y_ij = f(x_ij) = 0.5 · (|x_ij + 1| − |x_ij − 1|)

The CNN architecture is classically defined as a two-dimensional array of M×N identical cells arranged in a rectangular grid, as depicted in Fig. 7.22(a). Each cell (Fig. 7.22(b)) mutually interacts with its nearest neighbors by means of the voltage-controlled current sources I_xy(i, j; k, l) = A(i, j; k, l)·y_kl and I_xu(i, j; k, l) = B(i, j; k, l)·u_kl. The coefficients A(i, j; k, l) and B(i, j; k, l) are known as the cloning templates: if they are equal for each cell, they are called space-invariant templates and take on constant values. The CNN is described by the state equations of all cells:

C ẋ_ij = −(1/R_x) x_ij(t) + Σ_{C(r,s)∈N_σ(i,j)} A(i, j; r, s) y_rs + Σ_{C(r,s)∈N_σ(i,j)} B(i, j; r, s) u_rs + I

with 1 ≤ i ≤ M, 1 ≤ j ≤ N, where N_σ(i, j) = {C(r, s) | max(|r − i|, |s − j|) ≤ σ, 1 ≤ r ≤ M, 1 ≤ s ≤ N} is the σ-neighborhood, x_ij(0) = x_ij0 with |x_ij0| ≤ 1, C > 0 and R_x > 0. The classical CNN structure can be easily generalized in many ways, leading to the most complex CNN architecture: the so-called CNN Universal Machine (CNNUM) [31]. Basically, it consists of an electronic architecture in which the analog CNN has been completed by digital logic sections; here the term dual computing has been introduced. In this architecture the templates play the role of the instructions of a CPU, i.e. the templates determine the task that the CNNUM processor must accomplish. So, in the CNNUM, programmability (i.e. the ability to change the templates in order to execute the various steps of a dual algorithm) is a central issue. The easy VLSI implementation [14, 15] is due to some key features of CNNs with respect to traditional
artificial neural systems. One of these, of course, is the local connectivity, while another is the fact that the cells are essentially identical. This advantage has permitted the development of many real CNN implementations [25]. From the previous considerations, the CNN paradigm is well suited to describe locally interconnected, simple dynamical systems showing a lattice-like structure. On the other hand, the emulation of PDE solutions requires the consideration of the evolution time of each variable, its position in the lattice and its interactions deriving from the space-distributed structure of the whole system. Indeed, the numerical solution of PDEs always requires a spatial discretization, leading to the transformation of a PDE into a number of ODEs. The original space-continuous system is therefore mapped onto an array of elementary, discrete interacting systems, making the CNN paradigm a natural tool to emulate in real time spatio-temporal phenomena, such as those described by the solutions of PDEs. This led to the definition of a particular CNN architecture for emulating the reaction-diffusion equations: the so-called Reaction-Diffusion CNN (RD-CNN) [13]. The spatial discretization and the template definition are the two main steps needed to “electronically model” a PDE. Of course, it is also possible to start from the CNN, i.e. to design a CNN able to generate spatio-temporal signals that behaviorally represent solutions typically shown by nonlinear PDEs. In such a way, once the suitable template set has been derived, the analytical solution of some particular, space-discretized PDEs can be approximated by the CNN state equations. In the particular case in which the single-cell dynamics is represented by a neuron model, the CNN architecture can be used as
Fig. 7.22. CNN architecture. (a) Scheme of a lattice of locally coupled systems. (b) Circuit implementation of a single cell. (c) The cell nonlinearity is a simple PWL function.
an efficient tool to study the emergence of complex phenomena in locally connected neuron membranes. This approach was used in the efficient implementation of Central Pattern Generators (CPGs) for biologically inspired walking machines, as reported in [16]. The advantage of this approach was the possibility to explore both a simulation approach and a reliable VLSI hardware implementation [7]. In this framework, the same cell structure used for the generation of the CPG dynamics in an RD-CNN was used, via a rearrangement of the cell parameters, to show plateau membrane potentials. That cell, if connected once again via diffusion coefficients within a neural lattice, gives rise to an RD-CNN able to show, under some analytical conditions discussed in the following, steady-state dynamics corresponding to the so-called Turing patterns. Turing's theory [38] places reaction-diffusion mechanisms at the basis of pattern formation, typical of many natural phenomena. In the perceptual architecture, we consider the reaction-diffusion equation restricted to the case of two morphogens:

∂H/∂t = F(H, K) + D_H ∇²H
∂K/∂t = G(H, K) + D_K ∇²K   (7.18)

where F and G represent the nonlinear reactive terms while D_H and D_K are the diffusion coefficients. According to Turing's theory, in the absence of diffusion (D_H = D_K = 0), H and K tend to a stable uniform state, but for some values of D_H and D_K a non-homogeneous pattern emerges thanks to diffusion (diffusion-driven instability). The reaction-diffusion equation can be rewritten using dimensionless variables as:

u̇ = γ f(u, v) + ∇²u
v̇ = γ g(u, v) + d ∇²v   (7.19)
where d is the ratio D_H/D_K between the diffusion coefficients and γ is the strength of the reactive term. We now work out the conditions for obtaining Turing patterns, considering fixed initial conditions and zero-flux boundary conditions. Let us consider the system in the absence of diffusion:

u̇ = γ f(u, v)
v̇ = γ g(u, v)   (7.20)

Linearizing around the stationary state (u_0, v_0) and posing

w = (u − u_0, v − v_0)ᵀ   (7.21)

for small deviations from (u_0, v_0), the system becomes:

ẇ = γ A w,   A = [f_u  f_v; g_u  g_v]|_(u_0, v_0)   (7.22)

where h_r = ∂h/∂r (h = f, g and r = u, v).
The solution of (7.22) has the form:

w(t) ∝ e^(Λt) w(0)   (7.23)

where Λ = diag(λ_1, λ_2) and λ_1, λ_2 are the eigenvalues of A. The stationary state is linearly stable if Re(λ) < 0, and consequently:

tr(A) = f_u + g_v < 0   (7.24)

|A| = f_u g_v − f_v g_u > 0   (7.25)

Now let us consider the whole linearized reaction-diffusion equation:

ẇ = γ A w + D Δw   (7.26)

where:

D = [1  0; 0  d]   (7.27)
The evolution of each cell can be described by two differential equations so that, if the domain is two-dimensional and made up of M × N cells, the whole system can be described by a system of 2·M·N differential equations coupled by the diffusive terms. To solve the system, we apply the method of partial differential equation decomposition to obtain M × N uncoupled systems, each one made up of two linear first-order differential equations [19]. The solution of such a system is the weighted sum of M × N space-dependent but time-independent functions, the eigenfunctions W(x, y) of the discrete Laplacian, such that:

ΔW + k²W = 0   (7.28)

where k² is a spatial eigenvalue. Assuming zero-flux boundary conditions and considering that, for each cell (x, y), it holds that 0 ≤ x ≤ M and 0 ≤ y ≤ N, the eigenfunctions (or possible spatial modes) are:

W_{m,n} = cos(mπx/M) cos(nπy/N),   x, y ∈ ℕ   (7.29)

The related spatial eigenvalues are:

k² = π² (m²/M² + n²/N²)   (7.30)

So the spatial eigenvalues depend only on the topology of the system (M, N) and on the boundary conditions, not on the time evolution. Let W_k(x, y) be the eigenfunction related to the eigenvalue k². Since (7.26) is linear, it is possible to find its solution in the form:

w(r, t) = Σ_k c_k e^(λt) W_k(x, y)   (7.31)
where the c_k are constants which can be determined from the initial conditions and λ is the temporal eigenvalue. Substituting (7.31) into (7.26), we can determine the temporal eigenvalues from:

|λI − γA + Dk²| = 0   (7.32)
and, substituting A and D with their definitions:

λ² + λ[k²(1 + d) − γ(f_u + g_v)] + h(k²) = 0   (7.33)

with:

h(k²) = d k⁴ − γ(d f_u + g_v) k² + γ² |A|   (7.34)

and consequently the equation which links the temporal eigenvalues to the spatial eigenvalues is:

λ_{1,2} = ½ {−k²(1 + d) + γ(f_u + g_v) ± √([k²(1 + d) − γ(f_u + g_v)]² − 4h(k²))}   (7.35)

To have Turing patterns it is necessary that the homogeneous stationary state is unstable in the presence of diffusion: therefore at least one of the two temporal eigenvalues must have a positive real part. To meet this condition, recalling (7.24) and taking into consideration that d > 0, the constant term of (7.33), h(k²), must be negative for some value of k², and consequently:

d f_u + g_v > 0,   min(h(k²)) < 0   (7.36)

Once conditions (7.24) and (7.36) are satisfied, h(k²) is negative for a range of eigenvalues:

k_1² < k² < k_2²   (7.37)

This relation defines the so-called “band of unstable modes” (Bu). In our framework these conditions are rewritten as reported in (7.4). Since we have considered a limited discrete domain M × N, the number of eigenvalues is limited, so for the emergence of a Turing pattern at least one of the eigenvalues must satisfy equation (7.37). From equation (7.35), the relation specifying the real part of the temporal eigenvalues is called the dispersion curve:

Re(λ) = ½ Re{−k²(1 + d) + γ(f_u + g_v) ± √([k²(1 + d) − γ(f_u + g_v)]² − 4h(k²))}   (7.38)

From equation (7.31) it is evident that, during the time evolution, the prevailing eigenfunctions will be the ones related to the positive eigenvalues (λ(k²) > 0). The final pattern depends strongly on the initial conditions, because these determine the mode excitation strength. Since we have chosen random initial conditions, all the modes are excited on average with the same strength, so the mode related to the eigenvalue with the highest real part should prevail. Nevertheless, this is true only for the linearized system, and the effect of the nonlinearity could lead to Turing patterns resulting from the competition among different modes. To study which modes contribute to the emerging Turing pattern under these conditions, we have performed numerical simulations, shown in Section II.
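The derivation above lends itself to a direct numerical check: enumerate the discrete spatial eigenvalues of Eq. (7.30), evaluate the temporal eigenvalues of Eq. (7.35), and collect the modes falling inside the band of unstable modes of Eq. (7.37). The Python sketch below does exactly this; the Jacobian entries f_u, f_v, g_u, g_v, the ratio d and the strength γ are free inputs to be taken from the chosen reactive kinetics.

  import cmath, math

  def temporal_eigenvalues(k2, fu, fv, gu, gv, d, gamma):
      # Eqs. (7.33)-(7.35): roots of lambda^2 + b*lambda + h(k^2) = 0.
      detA = fu * gv - fv * gu
      h = d * k2**2 - gamma * (d * fu + gv) * k2 + gamma**2 * detA  # Eq. (7.34)
      b = k2 * (1 + d) - gamma * (fu + gv)
      disc = cmath.sqrt(b * b - 4 * h)
      return (-b + disc) / 2, (-b - disc) / 2

  def unstable_modes(M, N, fu, fv, gu, gv, d, gamma):
      # Modes (m, n) of Eq. (7.29) whose dispersion curve, Eq. (7.38), is positive.
      modes = []
      for m in range(M + 1):
          for n in range(N + 1):
              k2 = math.pi**2 * (m**2 / M**2 + n**2 / N**2)  # Eq. (7.30)
              l1, l2 = temporal_eigenvalues(k2, fu, fv, gu, gv, d, gamma)
              growth = max(l1.real, l2.real)
              if k2 > 0 and growth > 0:
                  modes.append((m, n, growth))
      return sorted(modes, key=lambda t: -t[2])  # fastest-growing mode first

For the linearized system, the first entry of the returned list is the mode expected to dominate the emerging pattern; as noted above, the nonlinearity may nevertheless let several modes of the band compete.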
Appendix II
From Motor Maps to the Action Selection Network
The importance of topology-preserving maps in the brain relies on both the representation of sensory input signals and the ability to perform an action in response to a given
stimulus. Neurons in the brain are organized in local assemblies, mainly constituting two-dimensional layers in which, in motor areas, the locations of excitation are mapped into movements. Topology-preserving structures able to classify input signals inspired the paradigm of Kohonen networks [22]. These artificial neural networks formalize the self-organizing process in which a topographic map is created, so that neighboring neurons are excited by similar inputs. An extension of these neural structures is represented by Motor Maps [30, 32]. These are networks able to react to a localized excitation by triggering a movement (like the motor cortex or the superior colliculus in the brain). To this end, motor maps, unlike Kohonen networks, must include the storage of an output specific to each neuron site. This is achieved by considering two layers: one devoted to the storage of the input weights and one devoted to the output weights. The plastic characteristics of the input layer should also be preserved in the assignment of the output values, so the learning phase deals with updating both the input and the output weights. This allows the map to perform tasks such as motor control. These considerations led to the idea of using MMs as adaptive self-organizing controllers. Formally, a MM can be defined as an array of neurons mapping the space V of the input patterns onto the space U of the output actions:

Φ : V → U   (7.39)

The learning algorithm is the key to obtaining a spatial arrangement of both the input and the output weight values of the map. This is achieved by considering an extension of the winner-take-all algorithm. At each learning step, when a pattern is given as input, the winner neuron is identified: this is the neuron which best matches the input pattern. Then a neighborhood of the winner neuron is considered, and an update involving both the input and the output weights of the neurons belonging to this neighborhood is performed. The unsupervised learning algorithm for the MM can be described in the following five steps:
1. The topology of the network is established. The number of neurons is chosen and a Reward Function is defined. The number of neurons needed for a given task is chosen by a trial-and-error strategy: once numerical results indicate that the number of neurons is too low, one must return to this step and modify the dimensions of the map. At this step the weights of the map are randomly initialized.
2. An input pattern is presented and the neuron whose input weight best matches it is established as the winner, according to the winner-take-all algorithm.
3. Let q be the winner neuron: its output weight is used to perform the control action A_q. This weight is not used directly; a random variable is added to guarantee a random search for possible solutions, as follows:

A_q = w_{q,out} + a_q λ   (7.40)

where w_{q,out} is the output weight of the winner neuron q, a_q is a parameter determining the mean value of the search step for the neuron q, and λ is a Gaussian random variable with zero mean. Then the increase ΔRF in the Reward Function is computed and, if this value exceeds the average increase b_q gained at the neuron q, the weight update (step 4) is performed; otherwise that step is skipped. The mean increase in the reward function is updated as follows:
b_q(new) = b_q(old) + ρ (ΔRF − b_q(old))   (7.41)

where ρ is a positive value. Moreover, a_q is decreased as more experience is gained (this holds for the winner neuron and for the neighboring neurons), according to the following rule:

a_i(new) = a_i(old) + η_a ξ_a (ā − a_i(old))   (7.42)

where i indicates the generic neuron to be updated (the winner and its neighbors), ā is a threshold the search step should converge to, η_a is the learning rate, and ξ_a takes into account the fact that the parameters of the neurons to be updated are varied by different amounts, defining the extent and the shape of the neighborhood.
4. If ΔRF > b_q, the weights of the winner neuron and those of its neighbors are updated following the rule:

w_{i,in}(new) = w_{i,in}(old) + η ξ (v − w_{i,in}(old))
w_{i,out}(new) = w_{i,out}(old) + η ξ (A − w_{i,out}(old))   (7.43)

where η is the learning rate, and ξ, v, w_in and w_out are the neighborhood function, the input pattern, the input weights and the output weights, respectively. The subscript takes into account the neighborhood of the winner neuron. In supervised learning A is the target; otherwise it is varied as discussed above (7.40).
5. Steps 2) to 4) are repeated. If one wishes to preserve a residual plasticity for a later re-adaptation, by choosing ā ≠ 0 in step 3) the learning is always active, and so steps 2) to 4) are repeated indefinitely. Otherwise, by setting ā = 0, the learning phase stops when the weights converge.
A MM, though very efficient for learning, can be difficult to implement in hardware because of the high number of afferent and efferent weights. In this chapter, in view of a possible physical realization of the developed perceptual system, we designed and used a simplified version of the original MM. The first fundamental difference from the MM is that the afferent layer has been eliminated: the winner neuron is replaced by the element of the pattern vector which contains the last emerged pattern. Moreover, the efferent layer is now constituted by two weights for each element of the pattern vector. The element q is connected to the weights w_{q,m} and w_{q,p}, which represent respectively the modulus and the phase of the action A_q associated with q. At each step, the robot does not perform the exact action suggested by the weights of q (w_{q,m} and w_{q,p}), but the action:

A_q = (A_q(1), A_q(2)) = (w_{q,m} + a_q λ_1, w_{q,p} + a_q λ_2)   (7.44)

where λ_1 and λ_2 are random Gaussian variables distributed in the ranges [0, 0.2] and [0, 1], respectively. a_q gives flexibility to the output of the network and initially has a high value to allow a wide-range search for the optimal action. Every time the pattern q emerges, a_q is reduced to focus the action search on a smaller range, so as to guarantee the convergence of the efferent weights. Once A_q is performed, the current ΔRF is evaluated. If ΔRF ≤ 0 the action is replaced by a random one. If ΔRF is positive the action is confirmed, and if ΔRF > b_q the weights w_{q,m} and w_{q,p} are updated as follows:
w_{q,m}(new) = (1 − ρ) w_{q,m}(old) + ρ A_q(1)
w_{q,p}(new) = (1 − ρ) w_{q,p}(old) + ρ A_q(2)   (7.45)
b_q stores the mean increase of the RF due to previous selections of the element q as the current emerged pattern. As above, an action is learnt (7.45) only if it led to an increase of the RF greater than b_q. To speed up the learning process, the parameter ρ depends on the difference (PI, Performance Improvement) between the current increase (ΔRF) and the mean increase (b_q) of the RF:

ρ = { ΔRF − b_q   if PI < 1
    { 1           if PI ≥ 1   (7.46)

The linear-saturated function (7.46) prevents ρ from becoming greater than one when PI is very high, which would lose the meaning of a weighted average.
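The efferent update of Eqs. (7.44)-(7.46) condenses into a few lines; the decay rate of a_q, the standard deviations of the Gaussian draws and the update rate of b_q are assumptions (the text specifies only the ranges of λ_1 and λ_2 and the qualitative behaviour), and the replacement of the action by a random one for ΔRF ≤ 0 is omitted for brevity.

  import random

  class Entry:  # efferent weights attached to one element q of the pattern vector
      def __init__(self):
          self.wm, self.wp = random.random(), random.random()  # modulus, phase
          self.a = 1.0   # search step a_q, shrunk each time q emerges
          self.b = 0.0   # mean past RF increase b_q

  def propose_action(e):
      # Eq. (7.44): noisy version of the stored (modulus, phase) pair.
      l1 = min(abs(random.gauss(0.0, 0.07)), 0.2)  # lambda_1 in [0, 0.2]
      l2 = min(abs(random.gauss(0.0, 0.33)), 1.0)  # lambda_2 in [0, 1]
      e.a *= 0.99                                  # focus the search (assumed rate)
      return (e.wm + e.a * l1, e.wp + e.a * l2)

  def learn(e, action, drf):
      # Update only when the RF increase beats the running mean, Eqs. (7.45)-(7.46).
      if drf > e.b:
          rho = min(drf - e.b, 1.0)                  # linear-saturated, Eq. (7.46)
          e.wm = (1 - rho) * e.wm + rho * action[0]  # weighted average, Eq. (7.45)
          e.wp = (1 - rho) * e.wp + rho * action[1]
      e.b += 0.1 * (drf - e.b)                       # running mean, cf. Eq. (7.41)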
Part III
Software/Hardware Cognitive Architecture and Experiments
8 New Visual Sensors and Processors
L. Alba1, R. Domínguez Castro1, F. Jiménez-Garrido1, S. Espejo2, S. Morillas1, J. Listán1, C. Utrera1, A. García1, Ma.D. Pardo1, R. Romay1, C. Mendoza1, A. Jiménez1, and Á. Rodríguez-Vázquez1

1 AnaFocus (Innovaciones Microelectrónicas S.L.), Av. Isaac Newton 4, Pabellón de Italia, Planta 7, PT Isla de la Cartuja, 41092, Sevilla
2 IMSE-CNM/CSIC and Universidad de Sevilla, PT Isla de la Cartuja, 41092, Sevilla
Abstract. The Eye-RIS family is a set of vision systems conceived for single-chip integration using CMOS technologies. The Eye-RIS systems employ a bio-inspired architecture where image acquisition and processing are truly intermingled and the processing itself is realized in two steps. At the first step, processing is fully parallel, owing to the concourse of dedicated circuit structures which are integrated close to the sensors; these circuit structures handle basically analog information. At the second step, processing is realized on digitally-coded information data by means of digital processors. Overall, the processing architecture resembles that of natural vision systems, where parallel processing is carried out at the retina (first layer) and a significant reduction of the information happens as the signal travels from the retina up to the visual cortex. This chapter outlines the concepts behind the Eye-RIS systems used within the SPARK project: the Eye-RIS v1.1 vision system, based on the ACE16K smart image sensor (used within the first half of the project), and the Eye-RIS v1.2 vision system, which supersedes v1.1 and is based on a new generation of smart image sensors named Q-Eye. Their main components and features, as well as experimental data illustrating their practical operation, are also presented in this chapter.
8.1 Introduction
CMOS technologies enable the on-chip embedding of optical sensors with data conversion and processing circuitry, thus making it possible to incorporate intelligence into optical imagers and eventually to build vision1 systems in the form of CMOS chips. In recent years many companies have devised different types of CMOS sensors with different sensory-pixel types and different levels of intelligence. These CMOS devices are targeted either to replace CCDs in applications where smartness is important, or to make optical sensing and eventually vision feasible for applications where compactness, power consumption and cost are important. Most of these smart CMOS optical sensors follow a conventional architecture where sensing is physically separated from processing, and processing is realized by using either PCs or DSPs (see Fig. 8.1). Some of the
1 Vision is defined as the set of tasks that interpret the environment from the information contained in the light reflected by the objects in that environment. It involves signal acquisition, signal conditioning, and information extraction and processing. Sensor intelligence refers to the incorporation of processing capabilities into the sensor itself.
intelligence attributes embedded in these sensors are related to signal conditioning and typically include:
• Electronic shutter and exposure time control;
• Electronic image windowing;
• Black calibration and white balance;
• Fixed Pattern Noise (FPN) cancellation;
• Etc.
Other intelligence attributes are related to information processing itself and are supported by libraries and software that cover typical image processing functions and allow users to implement image processing algorithms. Note that in the conventional architecture of Fig. 8.1 most of the intelligence is far from the sensor. Hence all input data, most of which are useless, must be codified in digital form and processed. On the one hand, this fact stresses the system requirements regarding memory, computing resources, etc.; on the other, it causes a significant bottleneck in the data flow.
Fig. 8.1. Conventional Smart Image Sensor Concept
Such a way of processing is actually quite different from what is observed in natural vision systems, where processing happens already at the sensor (the retina) and the data are largely compressed as they travel from the retina up to the visual cortex. Also, processing in retinas is realized in a topographic manner, i.e. through the concourse of structures which are spatially distributed into arrangements similar to those of the sensors and which operate concurrently with the sensors themselves. These architectural concepts, borrowed from nature, define the basic attributes of the Eye-RIS vision system. In particular:
• realization of the processing tasks by an early processing step followed by a post-processing step;
• incorporation of the early processing structures right at the sensor layer;
• concurrent sensing and processing operations through the usage of either topographic or quasi-topographic early-processing architectures.
Some of these attributes, particularly the splitting of processing into pre-processing and post-processing, are also encountered in the smart camera Inca 311 from Philips. This camera embeds a digital pre-processing stage based on the so-called Xetal processor [4]. As a main difference to this approach, in the Eye-RIS vision system pre-processing is realized by using mixed-signal circuits distributed in a pixel-wise area arrangement and embedded with the optical sensors. Because the pre-processing operations are realized in a truly parallel manner in the analog domain, the power efficiency and processing speed of the Eye-RIS system are both very large. Specifically, the latest-generation sensor/pre-processor used in the Eye-RIS, the so-called Q-Eye, exhibits a computational power of 1,250 GOps2 with a power consumption of 4 mW per GOps.
8.2 The Eye-RIS Vision System Concept
Eye-RIS is a generic name used to denote the bio-inspired vision systems from AnaFocus. These systems are conceived for the on-chip integration of all the structures needed for:
• Capturing (sensing) images;
• Enhancing sensor operation, e.g. to enable high-dynamic-range acquisition;
• Performing spatial-temporal processing;
• Extracting and interpreting the information contained in images;
• Supporting decision-making based on the outcome of that interpretation.
The Eye-RIS are general-purpose, fully-programmable hardware-software vision systems. They are complemented with a software layer and furnished with a library of image processing functions which are the basic instructions for algorithm development. Two generations of these systems have already been devised and used within the SPARK project (Eye-RIS v1.1 and v1.2), in a road-map towards a single-chip implementation (Eye-RIS v2). All these generations follow the architectural concepts depicted in Fig. 8.2. The main difference between the concept in Fig. 8.2 and the conventional one depicted in Fig. 8.1 comes from the “retina-like” structure placed at the front-end in Fig. 8.2. This “retina-like” front-end stage is conceptually depicted as a multi-layer one. In practice it is a multi-functional structure where all the conceptual layers depicted in Fig. 8.2 are actually realized on a common semiconductor substrate. These functions include:
• 2-D image sensing.
• 2-D image processing. Programmable tasks in space (across the spatial pixel distribution) as well as in time are contemplated.
• 2-D memorization of both analog and digital data.
• 2-D data-dependent task scheduling.
• Control and timing.
• Addressing and buffering of the core cells.
• Input/output.
2 GOps = Giga operations per second.
Fig. 8.2. Eye-RIS system Conceptual Architecture with Multi-functional Retina-like Front-end
• Storage of user-selectable instructions (programs) to control the execution of operation sequences. • Storage of user-selectable programming parameter configurations. Note from Fig. 8.2 that the front-end largely reduces the amount of data (from F to f ) which must first be codified into digital representations and then processed. At this early processing stage many useless data are hence discarded through processing and only the relevant ones are kept for ulterior processing. Quite on the contrary, in the conventional architecture of Fig. 1 the whole data amount F must be codified and processed. This reduction of data supports the rationale for advantages of the Eye-RIS vision system architecture. In order to quantify the advantages let us calculate the latency time needed for the system to react in response to an event happening in the image. In the case of Fig. 8.1, conv
t |LAT = t acq + N × R × t A/D + N × t com p + t
proc
(8.1)
Where N is the number of pixels in the image, R is the number of bits employed for coding each pixel value, tact is the time required for the sensors to acquire the input scene, tA / D is the per-bit conversion time, tconv is the time needed to compare the new image with a previous one stored in memory as to detect any change, and t proc is the time needed to understand the nature of the change and hence prompt a reaction. Although the exact value of the latency time above may significantly change from one case to another, let us consider for reference purposes that most conventional systems produce values in the range of 1/30 to 1/50 sec. Similar calculations for Fig. 8.2 yield, Eye−RIS
t | LAT = t acq + M × R × t A/D + t com p + t
proc
(8.2)
Where M denotes the reduced number of data obtained after early processing. Comparing the latency times for figures yield,
Fig. 8.3. Conceptual Block Diagram of the Eye-RIS v1.2

t_LAT^conv − t_LAT^Eye-RIS = (N − M) × R × t_A/D + (N − 1) × t_comp   (8.3)

Since typically N >> 1, the equation above can be simplified as

t_LAT^conv − t_LAT^Eye-RIS ≈ (N − M) × R × t_A/D + N × t_comp   (8.4)

Also, since in most cases the number of changes to be tracked will be small, we can assume that M << N and hence further simplify the equation above into

t_LAT^conv − t_LAT^Eye-RIS ≈ N × (R × t_A/D + t_comp)   (8.5)
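A quick numerical instance of Eqs. (8.3)-(8.5) makes the orders of magnitude concrete; all timing values below are arbitrary placeholders, and the QCIF pixel count is used only for illustration.

  N = 176 * 144        # pixels (a QCIF frame, for illustration)
  M = 10               # data items surviving early processing (assumed)
  R = 8                # bits per pixel
  t_ad = 100e-9        # per-bit A/D conversion time (assumed)
  t_comp = 50e-9       # per-pixel comparison time (assumed)

  saving_exact = (N - M) * R * t_ad + (N - 1) * t_comp  # Eq. (8.3)
  saving_approx = N * (R * t_ad + t_comp)               # Eq. (8.5)
  print(saving_exact, saving_approx)  # both about 21.5 ms here

With these numbers the saving alone is comparable to the 1/50 s latency of a conventional system, which is the essence of the argument above.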
Equation (8.5) shows that the time saving reported by the Eye-RIS system approximately equals the sum of the time needed for conversion and the time needed in the conventional architecture for pixel-wise comparison. This time saving enables the Eye-RIS system to be employed for applications that require on-line operation with rapidly changing scenes3. Besides on-line operation, the data reduction featured by the Eye-RIS system relaxes the computational demands on the post-processing structures, and hence the complexity and power consumption of these structures. Fig. 8.3 shows a conceptual block diagram of one instance of the so-called Eye-RIS system family, namely the Eye-RIS v1.2. Its basic functional features include:
• Early processing: front-end, retina-like sensor-processor. In particular, current Eye-RIS systems employ the so-called Q-Eye front-end sensor-processor, which will be described in the next section.
• Post-processing:
– 32-bit RISC uP at 70 MHz, realized on an FPGA.
– 32 Mb SDRAM for program and image/data storage and 4 Mb Flash for FPGA configuration.
• Interfaces:
– 2 Serial Programming Interfaces (SPI).
– UART: general-purpose RS232 port.
3 In a traffic collision at 80 km/h, the time lag before a standard driver hits the steering wheel is roughly 12.5 ms. In order to properly estimate the distance from the driver to the wheel so as to control the triggering of the air-bag, vision systems must acquire-and-process at rates higher than 500 frames/sec, which is difficult to achieve with conventional systems but which is intrinsic to the operation of the Eye-RIS vision systems.
– 16 general-purpose 3.3V TTL I/Os.
– USB 1.1 interface for JTAG control.
– USB 2.0 interface for high-speed image I/O.
• System tools:
– Eye-RIS ADK (Application Development Kit), an Eclipse-based development environment including all the tools needed for programming Eye-RIS, namely: project manager, code editor, C/C++ compiler, assembler and linker, source-level debugger, etc.
– Image-processing library including basic routines such as: point-to-point operations, spatio-temporal filtering operations, morphological operations, statistical operations, blob analysis, etc.
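To make the latency comparison of Eqs. (8.1)-(8.5) concrete, the following minimal C sketch evaluates both latencies; all timing parameters are illustrative assumptions, not measured figures for any particular device, and the variable names mirror the symbols in the equations.

#include <stdio.h>

/* Illustrative latency comparison following Eqs. (8.1)-(8.2).
 * All timing values below are assumptions for the sake of example. */
int main(void)
{
    const double N      = 176.0 * 144.0; /* pixels in a QCIF-like image  */
    const double M      = 100.0;         /* data left after early
                                            processing (M << N)          */
    const double R      = 8.0;           /* bits per pixel               */
    const double t_acq  = 1.0e-3;        /* acquisition time [s]         */
    const double t_ad   = 50.0e-9;       /* per-bit A/D conversion [s]   */
    const double t_comp = 100.0e-9;      /* per-pixel comparison [s]     */
    const double t_proc = 1.0e-3;        /* decision/reaction time [s]   */

    /* Eq. (8.1): conventional architecture converts and compares N pixels */
    double t_conv = t_acq + N * R * t_ad + N * t_comp + t_proc;

    /* Eq. (8.2): the Eye-RIS converts only M data items; the comparison
     * is done in parallel on the focal plane, so it is counted once */
    double t_eyeris = t_acq + M * R * t_ad + t_comp + t_proc;

    /* Eq. (8.5): approximate saving for N >> 1 and M << N */
    double saving_approx = N * (R * t_ad + t_comp);

    printf("conventional latency: %.3f ms\n", 1e3 * t_conv);
    printf("Eye-RIS latency:      %.3f ms\n", 1e3 * t_eyeris);
    printf("approx. saving:       %.3f ms\n", 1e3 * saving_approx);
    return 0;
}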
8.3 The Retina-Like Front-End: From ACE Chips to Q-Eye
A key component of the Eye-RIS vision system is the retina-like front-end, which combines signal acquisition and processing embedded on the same physical structure. Since the late nineties several versions of this key component, called the ACE and CACE chips, have been devised as stand-alone chips by researchers at the Institute of Microelectronics of Seville, belonging to the Spanish Council of Research. The principles, concepts and circuits of these chips are described in detail in two monographs, by Dr. Ricardo Carmona [1] and Dr. Gustavo Liñán [5] respectively. The most representative ACE chip instances are collected in Fig. 8.4.
Fig. 8.4. The three generations of ACE chips
Three micro-photographs are included there, and data pertaining to each of the corresponding chips are presented. These data show a progressive increase of spatial resolution, cell density and embedded functionality from the so-called ACE400 chip [3] to the so-called ACE16k chip [7]. Fig. 8.5 shows the block diagram of a sensory-processing pixel in the ACE16k chip [7]. It is seen that this pixel embeds many more functions than are found in conventional imagers. The latter contain only the optical module (the structure embedding
the optical sensors) and the I/O module. However, the pixels of retina-like vision front-ends also contain structures to store information locally, to support interactions among pixels and to perform different types of processing tasks. A consequence of the multifunctional pixel is that pixels have a large area (i.e. the number of pixels per mm2 is small; see Fig. 8.4), with only a portion of the total area available for light sensing. Hence:
• On the one hand, the fill factor decreases. In practice the impact of this can be largely attenuated by using microlenses on top of the chip, since most of the light received by a pixel is then focused onto its active sensing area.
• On the other hand, since the total silicon area is constrained due to reliability considerations, the spatial resolution is constrained as well. However, limited spatial resolution may not be a problem if the system is intended for high-level tasks such as segmentation, object detection, etc. This is illustrated in Fig. 8.6. The top pictures in Fig. 8.6 show the face of Lena at decreasing resolutions. Even at the lowest resolution, Lena will most probably be recognized by readers familiar with image processing. The bottom pictures in Fig. 8.6 show a traffic scenario at different spatial resolution values. It is seen that the main attributes of the scenario can be recognized even with low pixel counts. In both cases the information needed to recognize relevant attributes emerges from the geometrical features of the image, most of which are kept when the number of pixels decreases. This qualitative reasoning suggests that there are many applications where spatial resolution can be traded for system efficiency in terms of operation speed, power consumption and system compactness.
As Fig. 8.5 shows, the processing tasks realized by retina-like front-ends, and in particular by the ACE chips, involve interactions among pixels. Aside from the optical module, the most relevant analog design challenges for ACE chips, and in general for
Fig. 8.5. ACE16K Cell Block Diagrams [7]
Fig. 8.6. Illustrating the Impact of Resolution Decrease on Recognition
retina-like chips, are related to storage and to the realization of linear convolution operations with programmable kernels. In a linear convolution operation a kernel of weights is multiplied by each image spatial sample and its neighborhood in a small region, the results are summed, and the outcome is used to change the sample [9]. In the case of the ACE and CACE chips, two types of kernels per pixel are employed, in accordance with the so-called CNN paradigm [2]. One kernel is applied to input data (either input image samples or data stored in the registers at the pixels) while the other is applied to state variables. The convolution operations realized in the ACE and CACE chips are illustrated at a conceptual level in Fig. 8.7, where inputs and state variables are denoted as x and y respectively. Convolution multiplications in the ACE and CACE chips are realized by using tunable transconductors. To save silicon area, each transconductor employs a single transistor operating in the ohmic regime [8]. Calibration and robust control of the operation of these transconductors define the most significant analog design challenges of the ACE and CACE chips [1] [5] and cause the most significant limitations to their practical operation. These limitations are overcome with the so-called Q-Eye chip.
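As a point of reference for the convolution just described, the following C sketch applies a programmable 3x3 kernel to every pixel and its neighborhood. It is a plain software analogue of what the chips compute in parallel in the analog domain; the image size and the border clamping are arbitrary choices for illustration.

#define W 176
#define H 144

/* Software analogue of the per-pixel convolution described above:
 * each output sample is the weighted sum of the input sample and
 * its 3x3 neighborhood, with a programmable kernel. Borders are
 * simply clamped here; the actual chips handle boundary cells in
 * hardware. */
static void convolve3x3(const float in[H][W], float out[H][W],
                        const float k[3][3])
{
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float acc = 0.0f;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0) yy = 0;
                    if (yy >= H) yy = H - 1;
                    if (xx < 0) xx = 0;
                    if (xx >= W) xx = W - 1;
                    acc += k[dy + 1][dx + 1] * in[yy][xx];
                }
            }
            out[y][x] = acc;
        }
    }
}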
8.4 The Q-Eye Chip
A redesign of the ACE16k chip, the so-called ACE16K-v2, was employed to build the first two generations of the Eye-RIS system. These first two generations are referred to as Eye-RIS v1.0 and v1.1 respectively. During the last two years a new chip, called Q-Eye, was devised, which is employed at the front-end of the latest-generation Eye-RIS, namely the so-called Eye-RIS v1.2. The Q-Eye chip significantly differs from the ACE chips. The main drawbacks of the ACE chips are lack of robustness, large power consumption and reduced cell density. The Q-Eye chip overcomes these drawbacks by making significant changes at both
Fig. 8.7. Block Diagram of a Cell belonging to two coupled CNN layers
architectural and circuital design levels. As a result, the cell density is increased by about 6.5 times and the power consumption is largely reduced. Moreover, these improvements are not detrimental to the functionality embedded per pixel; quite the contrary, the Q-Eye includes pixel-level co-processing structures which are not found in the ACE chips. Fig. 8.8 is a block diagram of a pixel of the Q-Eye chip. As in the ACE chips, the basic analog processing operations among pixels are linear convolutions with programmable masks. However, the Q-Eye does not employ transconductance multipliers (as happens in the ACE chips) but a Multiplier-Accumulator Circuit unit (MAC), which processes neighbor pixels in an algorithmic sequence. Despite this sequential operation, computation times are similar to those obtained for the ACE chips since no calibration of transconductors is needed [8]. The area saving afforded by the absence of spatially replicated structures (i.e. the transconductance multipliers employed for the linear convolutions) enables the incorporation at the Q-Eye pixel of functions which are not found in the ACE chips. These include:
• A pattern-matching block to perform fast morphological operations on binary images, which complements the Local Logic Unit already found in the ACE chips.
• An additional bank of analog memories to allow: 1) shifting of grey-scale and binary images through an analog multiplexer and 2) swapping between analog memories.
• Circuitry for analog thresholding.
• Three additional sensors for color RGB sensing.
Besides these functions, the Q-Eye array includes a resistive grid with programmable diffusion time. Fig. 8.9 shows a floor-plan of the Q-Eye chip. The external interface of the Q-Eye is completely digital and synchronous. It is composed of a 32-bit data bus for image I/O and two additional buses, namely a 10-bit data bus and a 12-bit address bus. These latter buses are employed to program a digital control system which contains 256 control
Fig. 8.8. Block Diagram for Q-Eye Pixel
words of 60 bits and individual registers for analog references and miscellaneous configuration. This system controls the array of processing-sensing cells, on the one hand, and the I/O control unit which handles all basic I/O processes, on the other. The I/O interface can operate in three modes:
• loading-downloading of grey-scale images;
• loading-downloading of binary images;
• address-event mode.
Grey-scale values are coded into digital form by on-chip 8-bit flash A/D converters, and decoded by on-chip 8-bit resistor-string D/A converters. For the purpose of improved power management, and hence reduced power consumption, most of the processing blocks in the Q-Eye and the analog reference buffers used
Fig. 8.9. Block Diagram of the Q-Eye chip
for biasing have independent power up/down signals. Also, the operation speed of most blocks is programmable. Thus, the chip can be tuned to process either very high frame rates or low frame rates with optimum power consumption for each configuration. Robustness enhancement is achieved through improved calibration techniques. In the ACE16k chip, offsets were stored in analog memories, which experienced significant degradation especially at high temperatures. Instead, offsets in the Q-Eye chip are stored in static (non-volatile) digital memories. Automatic calibration in the Q-Eye is performed by dedicated state machines which control in-loop A/D converters. Also, a temperature sensor and a temperature-controlled correction loop are embedded in the Q-Eye to preclude the impact of junction temperature increases on the optical sensors and analog memories. The main features of the Q-Eye chip are listed below:
• 176 x 144 cell array with 29.1µm x 29.1µm area per cell (cell density of 1,180 cells/mm2).
• Monochrome or color RGB (1 multi-mode grey-scale pixel plus 3 RGB pixels per cell).
• High-speed non-rolling electronic shutter. Programmable exposure time (control step down to 20 ns).
• Sensitivity above 1.0 V/lux-sec at 550 nm (with microlenses).
• Fill factor above 50% (with microlenses).
• Frame rate above 10,000 fps.
• 4 + 1 (two banks) high-retention analog and 4 binary memories.
• Analog multiplexer for image shifting.
• Analog MAC unit.
• Programmable, 3 x 3 neighborhood pattern matching with 1/0/d.n.c. pattern definition (fast morphological functions).
• Programmable local logic unit (LUT table).
• Resistive grid for controllable image smoothing.
• 37.5 mm2 die area.
• 0.18 µm 1.8V (core), 3.3V (I/O) CMOS technology.
• 50 MHz clock frequency.
• < 100 mW typical power consumption with 300 mW peak during grey-scale image I/Os.
• Binary and analog image I/Os.
• On-chip bank of 4 ADCs and 4 DACs (8-bit @ 50 MHz) for grey-scale image I/Os.
8.5 Eye-RIS v1.1 Description (ACE16K Based)
The Eye-RIS v1.1 Vision System is a compact Vision System on Board. This means that all the hardware needed to create and run vision applications is integrated on a single PCB. Fig. 8.10 shows a photograph of the Eye-RIS evaluation board. A simplified block diagram is shown in Fig. 8.11. The PCB processing module is composed of the following set of elements:
• One sample of the AnaFocus ACE16Kv2 focal-plane processor.
• A Cyclone EP1C20F400C6 FPGA by ALTERA containing a synthesized instance of the Nios II processor plus some control logic.
• FPGA configuration circuitry (EPCS4): this is the FPGA configuration device. It stores the configuration file used to program the FPGA with the Nios II processor at power-up.
• On-board memory:
– 16 Mbytes of flash memory.
– 16 Mbytes of SDRAM for Nios II program and data.
– 16 Mbytes of SDRAM for FPP data (images).
• Three expansion headers with power, ground and FPGA I/O pins. Please contact AnaFocus if a system expansion through these headers is needed.
• Four interrupt input pins.
• JTAG probe connector. A connector for the ALTERA USB-Blaster JTAG probe required to download programs to the FPGA and to debug Nios II programs.
Fig. 8.10. Eye-RIS Vision System PCB Processing Module
Fig. 8.11. Eye-RIS Vision System PCB Processing Module. Simplified view.
• USB 2.0 interface. It provides a communication channel between the Eye-RIS v1.1 Vision System and a PC host.
• On-board power regulation allows stand-alone operation with a single +5V supply.
• On-board oscillators:
– 50 MHz oscillator with zero-skew clock distribution circuitry, and 6 MHz crystal for the USB controller device.
The Eye-RIS v1.1 Vision System evaluation kit has been designed to develop, debug and run vision applications. During the development process of a vision application the
Fig. 8.12. Connections between the PC and the Eye-RIS PCB for debugging Vision applications
Eye-RIS v1.1 evaluation platform needs to be connected to a PC host as depicted in Fig. 8.12. The JTAG connection with the ALTERA USB-Blaster is used to download programs to the Nios II processor and to perform debugging tasks. The connection with the USB cable generally supports FPP data transfer to/from the PC. This channel is mainly used to debug the FPP. Nevertheless, once the application development process is finalized, the Eye-RIS evaluation kit can also be used autonomously. For this purpose, the application code shall be stored in the flash memory of the PCB. Once the system powers up, the boot-loader will automatically start running the application. For programming the system for autonomous operation, please contact AnaFocus.

8.5.1 Interrupts

The system incorporates four interrupt inputs that let the user synchronize external events with the execution of code within the Eye-RIS v1.1 Vision System. Two of these interrupts are preconfigured to be active when rising-edge events are detected on the corresponding pin, and the other two when falling-edge events are detected. These four inputs are mapped in the "C" expansion connector as shown in Table 8.1.
8.6 Eye-RIS v1.2 Description (Q-Eye Based)
The Eye-RIS v1.2 is a programmable, autonomous vision system using AnaFocus' proprietary Smart Image Sensor (SIS) technology. Thanks to the incorporation of a Smart Image Sensor, the Eye-RIS v1.2 delivers image-processing capabilities and speed comparable to high-end conventional vision systems, in a compact size and with low power consumption. Additionally, the Eye-RIS v1.2 includes all the components needed to develop and debug vision applications in real time: an ALTERA NIOS II 32-bit RISC microprocessor performs post-processing and system control tasks, and several I/O and high-speed communication ports allow the system to communicate with and/or control external systems.
Fig. 8.13. C Header Location
Fig. 8.14. Interrupt Pins Location
Its multiple-board architecture gives the Eye-RIS v1.2 high flexibility, permitting its easy adaptation to the requirements of specific applications. It is furnished with software layers, drivers and libraries making the internal system-level programming details transparent to application developers. The Eye-RIS v1.2 Vision System is a perfect choice for incorporating vision in applications where compactness, cost, power consumption and operation speed define the major targets. The Eye-RIS v1.2 Vision System is composed of the following modules, which are described in this manual:
• AnaFocus' Q-Eye Smart Image Sensor (SIS).
• ALTERA NIOS II Processor.
• I/O module.
Table 8.1. Eye-RIS v1.1 Interrupt Pins

Interrupt   Active Edge   "C" Header Pin
RISING1     RISING        5
RISING2     RISING        7
FALLING1    FALLING       9
FALLING2    FALLING       11
Fig. 8.15. Eye-RIS v1.2 Vision System
8.6.1 Digital Input/Output Ports

The Eye-RIS v1.2 Vision System Evaluation Kit provides the following I/O ports:
• One USB 1.1 JTAG Debug Module to load and execute programs on the Nios II Processor, pause them, analyze and trace data (internal registers and memory), etc.
• One USB 2.0 port mainly used for fast exchange of images between the Eye-RIS Vision System and the PC for algorithm debugging and monitoring purposes.
• A set of I/O peripherals connected to an external, female SCSI-2 I/O connector. It complies with the connection standard and is made of two rows of 25 pins at 1.27 mm pitch. The available peripherals are:
– 1 SPI Master
– 1 SPI Slave
– 1 UART
– 2 PWMs
– GPIO[15:0]: 16 software-programmable general-purpose digital I/O lines.
– 1 rising-edge interrupt input pin
– 1 falling-edge interrupt input pin
Fig. 8.16. Eye-RIS v1.2 I/O Connector

Table 8.2. I/O Connector Pin Description. The table assigns each of the 50 connector pins to a peripheral and function: the general-purpose lines GPIO(0)-GPIO(15), several 5V and GND pins, the rising and falling interrupt inputs, PWM1 and PWM2, the SPI Slave signals (MOSI, MISO, SCLK, SSn), the SPI Master signals (MOSI, MISO, SCLK, SSn) and the UART signals (RXD, TXD, RTS, CTS).
All these ports can be utilized by calling the proper functions in the Eye-RIS Basic Library. Fig. 8.16 depicts the SCSI-2 female connector, and Table 8.2 details the function of each pin in the connector.
8.7 NIOS II Processor
As discussed in the previous sections, the Eye-RIS v1.1 and Eye-RIS v1.2 share the same digital processor, i.e., the NIOS II microprocessor from Altera. This section briefly describes its main features.
8.7.1 NIOS II Processor Basics
Nios II is an FPGA-synthesizable digital microprocessor. It is a configurable soft-core processor, as opposed to a fixed, off-the-shelf microprocessor. "Soft-core" means the CPU core is offered in "soft" design form (not fixed in silicon), and can be targeted to any Altera FPGA family or system-on-chip4. AnaFocus took advantage of this configurability to optimally fit it to the AnaFocus ACE16K and Q-Eye, developing the Eye-RIS v1.1 and v1.2 vision systems. The main features of the Altera Nios II processor are:
• General-purpose RISC processor core
• Full 32-bit instruction set, data path, and address space
• 32 general-purpose registers
• 32 external interrupt sources
• Single-instruction 32 x 32 multiply and divide producing a 32-bit result
• Dedicated instructions for computing 64-bit and 128-bit products of multiplication
• Single-instruction barrel shifter
• Access to a variety of on-chip peripherals, and interfaces to off-chip memories and peripherals through the Avalon Bus
• Hardware-assisted JTAG debug module enabling processor start, stop, step and trace
• Instruction set architecture (ISA) compatible across all NIOS II processor systems
8.8 Conclusion
While vision in living beings handles a significant percentage of the information needed to interact with the environment, the use of vision in machines is limited due to a very high cost/performance ratio. The Eye-RIS vision system overcomes this drawback by employing a bio-inspired architecture which can be realized at low cost in single-chip form and which is capable of high-speed on-line operation.
Acknowledgement. The authors would like to acknowledge the many constructive discussions with Dr. Gustavo Liñán and Dr. Ricardo Carmona from IMSE-CNM.
References
1. Carmona Galán, R.: Analysis and Design of Mixed-Signal Chips for Real-Time Image Processing. PhD dissertation, University of Seville (2002)
2. Chua, L.O., Roska, T.: Cellular Neural Networks and Visual Computing. Cambridge University Press, Cambridge (2002)
3. Domínguez-Castro, R., Espejo, S., Rodríguez-Vázquez, Á., Carmona, R., Földesy, P., Zarándy, Á., Szolgay, P., Szirányi, T., Roska, T.: A 0.8µm CMOS 2-D Programmable Mixed-Signal Focal-Plane Array Processor with On-Chip Binary Imaging and Instructions Storage. IEEE J. Solid-State Circuits (1997)

4 AnaFocus has developed a proprietary methodology for synthesizing NIOS II microprocessors on-chip starting from the reference VHDL code generated by the ALTERA NIOS II IDE software.
4. Kleihorst, R.P., Abbo, A.A., Van der Avoird, A., Op de Beeck, M.J.R., Sevat, L., Wielage, P., van Veen, R., van Herten, H.: Xetal: A Low-Power High-Performance Smart Camera Processor. In: Proceedings of the IEEE International Symposium on Circuits and Systems (2001)
5. Liñán Cembrano, G.: Design of Programmable Mixed-Signal Low-Power Consumption Chips for Vision Systems. PhD dissertation, University of Seville (2002)
6. Liñán, G., Espejo, S., Domínguez-Castro, R., Rodríguez-Vázquez, Á.: ACE4k: An Analog I/O 64x64 Visual Microprocessor Chip with 7-bit Analog Accuracy. International Journal of Circuit Theory and Applications (2002)
7. Liñán, G., Rodríguez-Vázquez, Á., Carmona, R., Jiménez-Garrido, F., Espejo, S., Domínguez-Castro, R.: A 1000 FPS at 128 x 128 Vision Processor with 8-Bit Digitized I/O. IEEE Journal of Solid-State Circuits (2004)
8. Rodríguez-Vázquez, Á., Liñán, G., Espejo, S., Domínguez-Castro, R.: Mismatch-Induced Trade-Offs and Scalability of Analog Preprocessing Visual Microprocessors. In: Analog Integrated Circuits and Signal Processing. Kluwer Academic, Dordrecht (2003)
9. Russ, J.C.: The Image Processing Handbook. CRC Press, Boca Raton (1992)
9 Visual Algorithms for Cognition
Á. Zarándy and Cs. Rekeczky
AnaLogic Computers Kft., Vahot u. 6., H-1119 Budapest (Hungary)
Abstract. In this chapter, the implementation of visual routines for topographic cellular processors is described in detail. Specifically, AnaLogic Computers Kft. (henceforth AnaLogic), as part of its contribution to the SPARK project, has developed special visual algorithms for cognition and implemented them on the Eye-RIS v1.2 system of AnaFocus, described in Chapter 8. A significant requirement was to be able to run these routines in real time, enabling the roving robots to react to their environment instantaneously. This was achieved, and a library of image-processing functions was efficiently implemented on the Eye-RIS system, utilizing the capabilities of the Q-Eye chip. This significant speed-up enables real-time image processing with the system even in the case of complex tasks. The new functionalities of the Eye-RIS v1.2 visual device enabled the implementation of several advanced visual routines. These routines run at a speed of 200 to 1,000 frames per second (fps) on the system. The specific routines that were implemented are:
• Global displacement calculation;
• Foreground-background separation based segmentation;
• Active contour;
• Multi-object tracking.
In the following sections, each of the above routines is described, along with example programs. Examples are also provided showing the results of the processing, both in terms of the output they produce and their performance on the Eye-RIS system.
9.1 Global Displacement Calculation
This function calculates the optical flow on an image sequence, i.e. the local displacements of image segments between two consecutive 176x144 grayscale or binary images of an image flow. The displacement calculation is made up of two steps. In the first step, two projections of the image are calculated, one for the X and one for the Y axis. Each of these projections makes possible the calculation of the displacement along one axis. This means that we do not have to perform a full 2-D search to find the matching shift position; rather, we have to search along each axis only. Hence the execution time is proportional to the search window border length, not to its area, i.e. it is not quadratic. The projection calculation time for an image depends on the window size and the number of calculated points. As an example, for 5x5 search windows in 25 search positions, it takes roughly 10 ms in total. In navigation algorithms, it is very typical to calculate it in only a few dozen search positions.
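To illustrate the projection idea, the following C sketch reduces each image to a 1-D profile and finds the shift that best aligns the two profiles along one axis. The image size, search range and the sum-of-absolute-differences criterion are illustrative assumptions, not the exact on-chip implementation.

#include <stdlib.h>

#define W 176
#define H 144
#define MAX_SHIFT 8  /* assumed search range, +/- pixels */

/* Column-wise projection: reduces the image to a 1-D profile
 * along the X axis. A row-wise projection is analogous. */
static void project_x(const unsigned char img[H][W], long proj[W])
{
    for (int x = 0; x < W; ++x) {
        proj[x] = 0;
        for (int y = 0; y < H; ++y)
            proj[x] += img[y][x];
    }
}

/* 1-D search: the best shift minimizes the sum of absolute
 * differences between the two profiles. The cost is linear in
 * the search range, not quadratic as a full 2-D search would be. */
static int best_shift(const long a[W], const long b[W])
{
    long best_cost = -1;
    int best = 0;
    for (int s = -MAX_SHIFT; s <= MAX_SHIFT; ++s) {
        long cost = 0;
        for (int x = MAX_SHIFT; x < W - MAX_SHIFT; ++x)
            cost += labs(a[x] - b[x + s]);
        if (best_cost < 0 || cost < best_cost) {
            best_cost = cost;
            best = s;
        }
    }
    return best;  /* estimated displacement along X */
}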
The optical flow calculation uses the following two functions:

1. int Q_Eye_OptFlowInit(int TempBinImgInxTop, int SearchPosNumX, int SearchPosNumY); in particular:
a) The function is for initialization.
b) TempBinImgInxTop points to an image in the NIOS memory space. The optical flow operation uses n+4 binary images altogether, where n is the larger search window size. The IDs of the used memories run from TempBinImgInxTop-n-4 to TempBinImgInxTop.
c) SearchPosNumX and SearchPosNumY define the size of the search matrix.
The return value is 0 if the function executed correctly, and -1 if the number of search points exceeded 400.

2. int Q_Eye_OpticalFlow(int PrevImageInx, int OrigImageInx, int WinSizeX, int WinSizeY, int NumSearchPoints, int TempGreyImgeInxTop, int TempBinImgInxTop, int Threshold, int Debug); where:
a) PrevImageInx is the input image.
b) OrigImageInx is the reference image.
c) WinSizeX and WinSizeY define the search window size.
d) NumSearchPoints is the number of search points.
e) TempGreyImgeInxTop points to an image in the NIOS memory space. The optical flow operation uses 3 grey images altogether. The IDs of the used memories run from TempGreyImgInxTop-3 to TempGreyImgInxTop.
f) TempBinImgInxTop: see above.
g) Threshold sets the sensitivity of the function. Correct values are typically around 40 and above.
h) Debug: when it is 1, the function displays a few binary images which indicate whether the threshold value is set correctly. The images show the results of the shift, difference, and threshold operation sequences in five different horizontal shift displacement positions. In the case of a stationary input image flow, the first image should not contain any pattern, because that shows the zero-shift image, while the rest should contain thicker and thicker black stripes as the displacement increases.

The return value is 0 if the function executed correctly, and -1 if the number of search points exceeded 400. The result is stored in the FLOW matrix. The maximum number of search points is 400. The search points are allocated automatically by the Q_Eye_OptFlowInit routine. They are distributed equally in the image field. However, the user can overwrite their coordinates after Q_Eye_OptFlowInit is called.
int Flow[i][0]: search position X
int Flow[i][1]: search position Y
int Flow[i][2]: motion X in the particular search position
int Flow[i][3]: motion Y in the particular search position
Sample program:

void GOF_UserProgram()
{
    const int OrigImageInx = 0;
    const int PrevImageInx = 1;
    const int TempBinImgInxTop = 511;
    const int TempGreyImgeInx = 400;
    int WinSizeX = 5;
    int WinSizeY = 5;
    int NumPointsX = 7;
    int NumPointsY = 7;
    int Threshold = 45;
    int Debug = 0;
    int i, j;

    SetNumberImages(1,1);
    Q_Eye_OptFlowInit(TempBinImgInxTop, NumPointsX, NumPointsY);
    WriteFPPRegister(CaptureID, OrigImageInx);
    unsigned char* OrigImgPtr = GetImagePointer(OrigImageInx, GREY);
    unsigned char* PrevImgPtr = GetImagePointer(PrevImageInx, GREY);

    for (j = 0; j < 100; j++) {
        ExecuteSection(FPP_Capture);
        Q_Eye_OpticalFlow(PrevImageInx, OrigImageInx, WinSizeX, WinSizeY,
                          NumPointsX*NumPointsY, TempGreyImgeInx,
                          TempBinImgInxTop, Threshold, Debug);
        PrintMessage("Calc. disp : %d,%d, in location (%d, %d)",
                     Flow[0][2], Flow[0][3], Flow[0][0], Flow[0][1]);
        memcpy(PrevImgPtr, OrigImgPtr, GREY_IMAGE_SIZE_CHAR);
        ClearImageFigure(OrigImageInx, GREY, ALL);
        for (i = 0; i < NumPointsX*NumPointsY; i++)
            SetImageFigure(OrigImageInx, GREY, i, LINE,
                           Flow[i][0], Flow[i][1],
                           Flow[i][0]+Flow[i][2], Flow[i][1]+Flow[i][3],
                           1, 255, 0, 0);
        DisplayImage(OrigImageInx, 0);
    }
}
Examples of the operation can be seen in Fig. 9.1. In our example, there was a diagonal camera motion between the two consecutive frames. The algorithm correctly identified the egomotion of the camera in those search positions where any contoured pattern was found. In those search positions where the pattern was vertical only, only the horizontal component of the displacement was calculated, because the vertical
Fig. 9.1. Screenshots of the displacement algorithm
component cannot theoretically be calculated. In those locations where the images did not contain any pattern, the algorithm obviously could not identify the displacements.
9.2 Foreground-Background Separation Based Segmentation
A possible way to extract the scene background from an image flow is based on the idea that the background changes more slowly than the things we want to extract as foreground. This means that if we consider the past of each pixel, we should find the background value(s) more frequently than the values belonging to moving or quickly changing objects. Based on this assumption, we have implemented two versions of the foreground-background separation algorithm. One is based on temporal features of the image flow, while the second is based on spatial-temporal features. The input of these algorithms is the captured raw images, while the output is a binary marker of the moving objects. Both of them assume a stationary platform (non-moving camera). The first algorithm learns the background in roughly 8 frames, while the second algorithm can also be applied to consecutive frames. In the case of video-speed processing, this corresponds to 40-320 ms. Both algorithms can adapt to slow changes in the background, which might be caused by slow changes of the illumination. This means that the movement of the sun or the clouds does not cause any false classification. Moreover, both algorithms can adapt to quick changes of the illumination (e.g. switching on the light in a room); however, this might cause partial misclassification in a few consecutive frames while the algorithm adapts to the new background. Both algorithms apply some binary morphological post-processing to clean the resulting marker.
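As a plain software illustration of this idea (not the on-chip implementation), the following C sketch maintains a slowly updated background estimate per pixel and marks as foreground every pixel that deviates from it by more than a threshold; the update rule and threshold are assumed values.

#define W 176
#define H 144

/* Running background model: each background pixel drifts slowly
 * toward the current frame, so slow illumination changes are
 * absorbed while fast-moving objects are flagged as foreground. */
static void update_background(unsigned char bg[H][W],
                              const unsigned char frame[H][W])
{
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            if (bg[y][x] < frame[y][x]) bg[y][x]++;      /* drift up   */
            else if (bg[y][x] > frame[y][x]) bg[y][x]--; /* drift down */
        }
}

/* Foreground marker: 1 where the frame differs from the background
 * by more than the threshold (e.g. 20), 0 elsewhere. A morphological
 * post-filter would normally clean this binary mask. */
static void get_foreground(const unsigned char bg[H][W],
                           const unsigned char frame[H][W],
                           unsigned char fg[H][W], int threshold)
{
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            int d = (int)frame[y][x] - (int)bg[y][x];
            if (d < 0) d = -d;
            fg[y][x] = (d > threshold) ? 1 : 0;
        }
}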
Fig. 9.2. Screenshots of the temporal foreground-background algorithm. The left-hand-side images are the input images, the middle ones are the updated background model, and the right ones are the extracted features. The algorithm cannot extract the object in those places where the input image is in saturation.
9.2.1 Temporal Foreground-Background Separation
The temporal foreground-background separation algorithm is a modified version of AnaLogic's foreground-background separation algorithm. The original version required a large amount of memory and numerous complex statistical operations. Because of that, it could not be implemented on the Q-Eye, only on the NIOS, and hence it was very slow (about 1 frame/sec). The modified adaptation builds a background model which is maintained on the chip. All the captured images are compared to it, and those areas where the differences are above a certain threshold are considered to be the moving object. The adaptation speed can be tuned through the modification of the background update frequency. If the adaptation is too slow, the model reacts slowly to the changing background. When it is too fast, it might adapt to slowly moving objects, which might then partially disappear. It is based on two C++ functions, namely:
1. void Q_Eye_UpdateBackground(int BackgroundImgID); in particular:
a) The function is for initialization.
b) BackgroundImgID is a grayscale image ID in Nios' memory space, where the background image is stored.
c) The input image (typically captured by the photosensor array of the Q-Eye chip) should be in LAM 0; it is not corrupted during the process. This routine uses LAM 1, OP1, OP2, MASK, MORPH.
2. void Q_Eye_GetForeground(int BackgroundImgID, int ForeGroundID, int ForegroundThreshold, int Filter, int Range); where:
a) BackgroundImgID is a grayscale image ID in Nios' memory space, where the background image is stored.
b) ForegroundImgID is a binary image ID in Nios' memory space, where the foreground image is stored. The input image (typically captured by the photosensor array of the Q-Eye chip) should be in LAM 0; it is not corrupted during the process.
c) ForegroundThreshold is the threshold value which defines the sensitivity of the foreground separation. Obviously, with increased sensitivity, more noise is introduced. A typical value is 20. Its range is [0-255].
d) Filter enables or disables morphological post-filtering on the foreground image.
e) Range: [0,1].
f) This routine uses LAM 1, LAM 2, LAM 3, OP1, OP2, MASK, MORPH. MASK and MORPH are used only when filtering is enabled.
The running times of these functions are 0.9 and 0.4 milliseconds respectively. An example routine embedding foreground-background separation:

void TFBG_UserProgram()
{
    const int InputImageInx = 0;
    const int ForegroundImageInx = 10;
    const int BackgroundImageInx = 20;

    SetNumberImages(3,3);
    WriteFPPRegister(CaptureID, InputImageInx);
    int CycleNum = 0;
    while (1) {
        // captures an image to LAM 0, and downloads it to CaptureID
        ExecuteSection(FPP_Capture);
        if (0 == CycleNum % 3) {
            Q_Eye_UpdateBackground(BackgroundImageInx);
        }
        // note: the declaration above also takes a Range argument,
        // supplied here as the last parameter
        Q_Eye_GetForeground(BackgroundImageInx, ForegroundImageInx, 20, 1, 1);
        DisplayImage(InputImageInx, 0);
        DisplayImage(BackgroundImageInx, 1);
        DisplayBinaryImage(ForegroundImageInx, 2);
        CycleNum++;
    }
}
9.2.2 Spatial-Temporal Foreground-Background Separation
The spatial-temporal foreground-background separation algorithm is based on an image descriptor calculation. This image descriptor is a dynamic edge calculation, which means that it extracts those locations where the edge map is significantly different in the two images. The descriptor, which is already a binary image, contains the local features of an image. If we calculate the descriptors of the background image and the new image, the difference of the binary descriptor images will contain the moving objects. The special feature of this descriptor calculation is that the shadows of moving objects fool this algorithm significantly less than the first one. This means that in most cases the detection of shadows as moving objects can be eliminated with appropriate morphological post-filtering (Fig. 9.3). The algorithm is so robust that it even provides good detection results without building a background model. In this case, one of the previous input images in the flow can be considered as the background image.
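As an illustrative software analogue of the dynamic-edge descriptor (the exact on-chip operator is not specified here), the following C sketch derives a binary edge map from each image with a simple gradient threshold and marks the locations where the two edge maps disagree; the gradient operator and threshold are assumptions.

#include <string.h>

#define W 176
#define H 144

/* Crude binary edge map: marks pixels whose horizontal or vertical
 * gradient magnitude exceeds a threshold. */
static void edge_map(const unsigned char img[H][W],
                     unsigned char edges[H][W], int thr)
{
    memset(edges, 0, (size_t)H * W);  /* clear borders */
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            int gx = (int)img[y][x + 1] - (int)img[y][x - 1];
            int gy = (int)img[y + 1][x] - (int)img[y - 1][x];
            if (gx < 0) gx = -gx;
            if (gy < 0) gy = -gy;
            edges[y][x] = (gx + gy > thr) ? 1 : 0;
        }
}

/* Foreground marker: locations where the edge maps of the current
 * image and the reference (background) image differ. Shadows change
 * brightness but add few new edges, so they are largely ignored. */
static void edge_difference(const unsigned char e1[H][W],
                            const unsigned char e2[H][W],
                            unsigned char diff[H][W])
{
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            diff[y][x] = (unsigned char)(e1[y][x] ^ e2[y][x]);
}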
It is based on one C function, namely:
1. void Q_Eye_Foreground(int InpImgID, int RefImgID, int DiffImgID, int Threshold, int Filter); where:
a) InpImgID is a grayscale image ID in Nios' memory space, where the current input image is stored.
b) RefImgID is a grayscale image ID in Nios' memory space, where the background (reference) image is stored.
c) DiffImgID is a binary image ID in Nios' memory space, where the foreground (result) will be stored.
d) Threshold sets the sensitivity of the algorithm. The higher the number, the more sensitive the algorithm will be. Typical value: 105.
e) Filter sets the number of dilations executed on the image as a morphological post-filter.

void STRFBG_UserProgram()
{
    int RefImageID = 0, InpImageID = 1, DiffImageID = 2, Threshold = 105;
    int i;

    SetNumberImages(3,3);
    WriteFPPRegister(CaptureID, RefImageID);
    // captures an image to LAM 0, and downloads it to the CaptureID
    // memory of Nios
    ExecuteSection(FPP_Capture);
    DisplayImage(RefImageID, 0);
    WriteFPPRegister(CaptureID, InpImageID);
    for (i = 0; i < 100; i++) {
        // captures an image to LAM 0, and downloads it to the CaptureID
        // memory of Nios
        ExecuteSection(FPP_Capture);
        Q_Eye_Foreground(InpImageID, RefImageID, DiffImageID, Threshold, 1);
        DisplayImage(InpImageID, 1);
        DisplayBinaryImage(DiffImageID, 2);
    }
}

The execution time of the descriptor extraction for two images and the difference calculation is roughly 0.8 milliseconds.
9.3 Active Contour Algorithm
Since they were introduced by Kass et al. [3], active contours have become a popular tool in multiple image-processing tasks such as segmentation, tracking and modeling. An active contour is defined as an elastic curve which deforms, controlled by image features and shape constraints, to adapt itself to the boundaries of the objects of interest.
Fig. 9.3. Screenshot of the spatial-temporal foreground-background algorithm. The left hand side image is the input, while the right one contains the extracted features. The algorithm can separate the real moving object from the background, and the moving shadow does not fool it.
Active contour techniques are usually classified as either energy-based (parametric models, [3]) or level-set based (implicit models, [1], [4]). The first, also called snakes, are physically motivated and represent the contours explicitly as parameterized curves in a Lagrangian formulation. The second are based on the theory of curve evolution and implement the curves implicitly as a level set of a higher-order function which evolves according to an Eulerian formulation. This classification reflects the contour representation, implementation and capabilities from the operational point of view (particularly, the capability to manage changes in the contour topology). Nevertheless, almost all of them have in common high computational requirements, which may limit their use in tasks needing fast time response. This inconvenience is alleviated by the development of new strategies for the numerical simulation of the equations which govern the dynamics of the curve evolution, which usually lead towards a compromise between processing speed and flexibility in the contour evolution. In contrast with the strategies discussed above, cellular active contours were originally intended to resolve the high computational cost inherent to the classical active contour techniques. They are based on a pixel-level discretization of the contours and on a massively parallel computation in every contour cell, which leads to high-speed processing without penalizing the efficiency of the contour location. Up to the present, two different cellular active contour approaches have been thoroughly investigated. In [5] an active wave computing approach is introduced. This consists of a topographic non-iterative region propagation technique where the contours are defined by the boundaries of trigger waves. Therefore, as in the level-set approaches, the contour evolution is implicitly represented as a wavefront propagation. This approach has demonstrated a high flexibility in the contour evolution and, like the implicit models, gives a simple solution to the changes of topology required when two different wavefronts collide. Nevertheless, in this kind of technique sophisticated stop criteria are usually required to conveniently control the
wave-front propagation, which may considerably increase the computational complexity in real applications. In [6] the so-called pixel-level snakes (PLS) are addressed. They represent a topographic iterative active contour technique where the contours are explicitly represented and evolve towards (locally) minimal-distance curves based on a metric defined as a function of the features of interest. In the PLS method contours are guided by local information and by regularizing terms dependent on the contour curvature. A comprehensive survey on cellular active contour methods can be found in [7], along with a new cellular active contour algorithm called the moving patch method (MPM). We have implemented Vilariño's pixel-level snake (PLS) method [6]. The method is based on the calculation of the internal and external potential along the boundaries. Each pixel position then determines which way to move in each iteration step. Naturally, the pixel snake must be kept continuous in each step. The algorithm is implemented in two C++ functions:
1. void Q_Eye_PLSInit(int InImg, int ConImg, int LastConImg, int ExtPotImg, int IntPotImg, int InfPotImg, int SumPotImg, int GFEImg, int DCEImg, int DCTImg, int SignInflPot, int ThrVal); where:
a) This initialization function sets the basic parameters of the PLS function.
b) InpImg is the grayscale input and ConImg is the binary output image ID in the Nios memory space.
c) LastConImg is a temporary binary image location in the Nios memory space.
d) ExtPotImg, IntPotImg, InfPotImg, SumPotImg, GFEImg, DCEImg, DCTImg are temporary grayscale image locations in the Nios memory space.
e) SignInflPot selects inflation- or deflation-type boundary following. It is relevant when there are large boundary jumps between frames (1 - deflating, 2 - inflating).
f) ThrVal is the threshold value applied in the algorithm.
2. void Q_Eye_PLS(const int IterationNum); where:
a) IterationNum indicates the iteration number. In one iteration, the contour can migrate by one pixel only.
Sample program:

#define INSTANTVISION_DIR "C:/InstantVisionISE"
const int InputImageInx = 0;
const int ContourImageInx = 10;
const int LastContourImageInx = 11;
const int ExtPotImageInx = 210;
const int IntPotImageInx = 211;
const int InfPotImageInx = 212;
const int SumPotImageInx = 213;
const int GFEImageInx = 220;
const int DCEImageInx = 221;
const int DCTImageInx = 222;
const unsigned int SignInflationPot = 2; // 1 - deflating, 2 - inflating
const unsigned int InputThresholdVal = 127;
void Q_Eye_PLS(const int);
void Q_Eye_PLSInit(int,int,int,int,int,int,int,int,int,int,int,int);

void PLS_UserProgram()
{
    char file[256];

    SetNumberImages(2,2);
    Q_Eye_PLSInit(InputImageInx, ContourImageInx, LastContourImageInx,
                  ExtPotImageInx, IntPotImageInx, InfPotImageInx,
                  SumPotImageInx, GFEImageInx, DCEImageInx, DCTImageInx,
                  SignInflationPot, InputThresholdVal);
    sprintf(file, "%s/samples/EyeRis/PlsSample/Data/pls_test_img_cont.bmp",
            INSTANTVISION_DIR);
    RequestImageFile(file, ContourImageInx, BINARY);
    for (int CycleNum = 0; CycleNum < 60; CycleNum++) {
        sprintf(file, "%s/samples/EyeRis/PlsSample/Data/pls_test_img_%04d.bmp",
                INSTANTVISION_DIR, CycleNum);
        RequestImageFile(file, InputImageInx, GREY);
        if (0 == CycleNum) {
            Q_Eye_PLS(50);
        } else {
            Q_Eye_PLS(10);
        }
        DisplayImage(InputImageInx, 0);
        DisplayBinaryImage(ContourImageInx, 1);
        PrintMessage("Cycle: %d", CycleNum);
    }
}

The running time of the active contour algorithm is 3 milliseconds per iteration. In each iteration, the active contour can move by one pixel. This typically means that after the contour is found, 2-5 iterations are enough. The execution time does not depend on the number of contours in the image.
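For intuition about the potential-guided update, here is a highly simplified, sequential C sketch of one contour-evolution step: each contour pixel examines a precomputed potential field over its 4-neighborhood and migrates to a lower-potential neighbor. It ignores the continuity bookkeeping and the parallel on-chip realization of the real PLS method, so it is only a conceptual illustration.

#include <string.h>

#define W 176
#define H 144

/* One simplified evolution step: every contour pixel may migrate by
 * one pixel towards the lowest-potential 4-neighbor. The potential
 * field would combine external (image) and internal (curvature)
 * terms; here it is simply taken as given. */
static void pls_step(unsigned char contour[H][W], const float pot[H][W])
{
    static const int dx[4] = { 1, -1, 0, 0 };
    static const int dy[4] = { 0, 0, 1, -1 };
    static unsigned char next[H][W];

    memset(next, 0, sizeof(next));
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            if (!contour[y][x]) continue;
            int bx = x, by = y;
            float best = pot[y][x];
            for (int k = 0; k < 4; ++k) {
                int nx = x + dx[k], ny = y + dy[k];
                if (pot[ny][nx] < best) {
                    best = pot[ny][nx];
                    bx = nx;
                    by = ny;
                }
            }
            next[by][bx] = 1;  /* contour pixel moves (or stays) */
        }
    memcpy(contour, next, sizeof(next));
}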
9.4 Multi-target Tracking
The multi-target tracking algorithm itself does not deal with images. Rather, its input parameters are the coordinates of the candidate objects and its output parameters are the tracks. Hence, it is naturally executed on sequential processors rather than on array processors. The algorithm itself is not slow, because it deals not with large image matrices but only with small parameter matrices. The slow part is typically the calculation of the coordinates, when it is implemented on a sequential microprocessor. To
Fig. 9.4. Screenshot of the active contour algorithm
find the coordinates of the objects, one has to perform a segmentation, morphological filtering, and centroid calculation. In the first implementation of AnaLogic's Instant Vision Library on the Eye-RIS v1.1 vision system, the feature extraction and the coordinate calculation of the candidate objects took an unacceptably long time, since they all ran on the Nios processor. In the Eye-RIS v1.2, we have already implemented two fast segmentation algorithms. Moreover, the morphological processing routines and the centroid calculation are in the basic image-processing library of the Eye-RIS v1.2. Since all of these functions are implemented efficiently on the Q-Eye, the whole multi-target tracking algorithm can run well above video rate. Sample code:

int ENTRY_POINT(void)
{
    const int MaxTrkNum = 6;
    const int H = 128;
    const int W = 128;

    SetLogLevel(IVLOG_INFO);
    TVideoIn VideoIn("SampleTest2.avi");
    if (VideoIn.bad()) {
        LogMsg(IVLOG_ERROR, "Unable to open video file, reason: %s.",
               VideoIn.geterror());
        return 0;
    }

    // creating images to display train and test data
    TByteMatrix Pic(W,H);
    TBitMatrix PicBin(W,H);
    TBitMatrix TrackBin(W,H);
    TByteMatrix TrackTraces[MaxTrkNum];

    // windows to display data
    TStd MeasChannel(1);
    TStd TrkChannel(2);
    TMeas Meas;
    static TMTTracker Tracker;
    Tracker.AssocMethod = JVC;
    Tracker.Tracks.Resize(MaxTrkNum);
    //Tracker.Tracks.SetEstimationMethod(AlphaBetaGamma);
    Tracker.Tracks.SetEstimationMethod(AlphaBeta);
    Tracker.Tracks.SetTimeConst(1.0f);
    Tracker.Tracks.SetManeuveringIndex(10);
    MeasChannel.SetImgMode(true);
    TrkChannel.SetImgMode(true);

    for (unsigned int i = 0; i < MaxTrkNum; ++i) {
        TrackTraces[i].Resize(W,H);
        TrackTraces[i] = 0;
    }

    const int Length = VideoIn.GetLength();
    for (int t = 0; (t < Length) && !GetStopStatus(); ++t) {
        // get measurements
        VideoIn >> Pic;
        PicBin = Pic > Num(3);
        Erode8(PicBin, PicBin, 1);
        unsigned int ObjNum = 0;
        TFeatureMatrix Features(MaxTrkNum, 1);
        CalcFeatures(ObjNum, Features, PicBin, FEAT_CENTROID,
                     InstantVision::CONN4, 256, CALCACC_FLOAT,
                     CALCMODE_SQUARED);
        Meas.Resize(ObjNum, 1);
        for (unsigned int k = 0; k < ObjNum; ++k) {
            Meas(k).x = Features(k).Centroid.x;
            Meas(k).y = Features(k).Centroid.y;
            Meas(k).c3 = 0.0f;
            Meas(k).c4 = 0.0f;
        }
        Tracker.DataAssociation(Meas);
        Tracker.Tracks.UpdateStates();
        LogMsg(IVLOG_INFO, "Track count=%d", Tracker.Tracks.GetCount());
        for (TTrackIndex i = 0; i < Tracker.Tracks.GetCount(); ++i) {
            if (Tracker.Tracks.IsLive(i)) {
                DrawLocation(TrackTraces[i],
                             Tracker.Tracks.PrevState(i).x,
                             Tracker.Tracks.PrevState(i).y);
                TrackBin = TrackTraces[i] > Num(0);
                TrkChannel.SetId(i + 2);
                TrkChannel << TrackBin;
            }
        }
        MeasChannel << Pic;
        Tracker.NextFrame();
    }
    return 0;
}
Fig. 9.5. Screenshot of the multi-target-tracking sample. The image on the left-hand side shows the input with two moving objects, while the two other images show the identified tracks.
9.5 Conclusions
Visual routines have been implemented and optimized to provide a number of different algorithms on topographic cellular processors, able to extract, in real time, different kinds of visual details useful for cognitive purposes.
References
1. Caselles, V., Catté, F., Dibos, F.: A Geometric Model for Active Contours in Image Processing. Numer. Math. 66 (1993)
2. Hillier, D., Binzberger, V., Vilariño, D.L., Rekeczky, C.: Topographic Active Contour Techniques: Theory, Implementations and Comparisons. International Journal on Circuit Theory and Applications (2005)
3. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active Contour Models. International Journal of Computer Vision 1, 321-331 (1988)
4. Malladi, R., Sethian, J.A., Vemuri, B.C.: Shape Modeling with Front Propagation: A Level Set Approach. IEEE Trans. Patt. Anal. Mach. Intell. 17(2), 158-174 (1995)
5. Rekeczky, C., Chua, L.O.: Computing with Front Propagation: Active Contour and Skeleton Models in Continuous-Time CNN. Journal of VLSI Signal Processing Systems 23(2/3), 373-402 (1999)
6. Vilariño, D.L., Brea, V.M., Cabello, D., Pardo, J.M.: Discrete-Time CNN for Image Segmentation by Active Contours. Pattern Recognition Letters 19(8), 721-734 (1998)
7. Vilariño, D.L., Rekeczky, C.: Pixel-Level Snakes on the CNN-UM: Algorithm Design, On-chip Implementation and Applications. International Journal on Circuit Theory and Applications 33, 17-51 (2005)
10 SPARK Hardware
L. Alba(1), P. Arena(2), S. De Fiore(2), and L. Patanè(2)
(1) AnaFocus (Innovaciones Microelectrónicas S.L.), Av. Isaac Newton 4, Pabellón de Italia, Planta 7, PT Isla de la Cartuja, 41092, Sevilla
(2) Department of Electrical, Electronic and System Engineering, University of Catania, I-95125 Catania, Italy
{parena,lpatane}@diees.unict.it
Abstract. In order to apply the proposed algorithms for action-oriented perception in robotic platforms, a complete hardware architecture has been designed and realized. The core of the system is the SPARK board, FPGA-based hardware designed to fulfill the requirements of the SPARK cognitive architecture. Besides the SPARK board, a distributed sensory system has been designed to be embedded on wheeled and legged robots. In this chapter the main aspects that characterize the SPARK hardware are discussed.
10.1 Introduction
Multi-sensory integration can be defined as the synergistic use of the information provided by multiple sensory devices to assist the accomplishment of a task by a system [7]. Practically, multi-sensory fusion refers to any stage in the integration process where there is a combination of different sources of sensory information into one representational format. Multi-sensory fusion is extremely important for an autonomous robot in order to build an environment map using different sensors. One of the most relevant questions concerns the hardware and software architecture required to manage a great number of sensors and, consequently, a large amount of data. In particular, for a roving robot to successfully perform a navigation task, the control loop must be able to process the different stimuli of the environment in a time compatible with real-time applications. For a robot in the real world, the ability to interpret information coming from the environment is crucial, both for its survival and for reaching its target. The real world differs from structured environments because it is dynamic, so it is impossible to fully program the robot's behavior only on the basis of a priori knowledge. As reported in Chapters 6 and 7, different cognitive architectures have been presented in the framework of the SPARK Project. A cognitive system must be able to manage all the sensors needed in walking and roving robots to successfully complete the assigned task. From the hardware implementation point of view, a cognitive architecture should be characterized by a very efficient and fast multi-sensory system to handle the low-level (touch, posture, load, distance) and the high-level (visual, hearing) sensors. Nowadays, autonomous robots are used in a growing number of applications. Therefore, many recent papers have focused on the multi-sensory integration problem,
proposing different solutions for various applications [1, 3, 5]. Other researchers introduce a multi-sensory network based on smart sensors for a next-generation rocket test facility [9]. A different approach was proposed in [4], where Neuro-Fuzzy logic is used in a fault-tolerant sensory fusion system for humanoid robots. Even if these approaches are suitable for the corresponding applications, they are too complex for real-time implementation in walking robots. Another solution, exploiting environment modeling, is introduced in [8]. That solution is based on low-level sensors and a micro-controller system that does not permit the use of a great number of sensors. In this chapter a new FPGA-based architecture, explicitly designed to deal with a distributed sensory system for cognitive experiments, is presented. The characteristics that make FPGA-based hardware an optimal solution for our purposes are the flexibility of reprogrammable hardware and the high computational power obtained with parallel processing [2]. The proposed system can thus independently manage several sensors with different frequencies. Dedicated channels are created for the sensors that require a high bandwidth for data transfer (visual system). In order to optimize the FPGA resources, a serial bus is used for the other sensors. The computational power has also been considered, to guarantee the implementation of the whole cognitive architecture.
10.2 Multi-sensory Architecture
Taking advantage of the FPGA parallelism in data management, the proposed architecture allows processing of a large number of sensors with a high degree of flexibility, speed and accuracy. The architecture permits multi-sensory fusion processes. The different sensory sources are accessible through a unique interface; in fact the system can manage digital and analog sensors connected with parallel or serial buses. Fig. 10.1 shows the main blocks of the multi-sensory architecture that we designed, and the interconnections among the modules and the distributed sensors. The main board is based on an Altera Stratix II FPGA [10] that hosts the soft-core 32-bit microprocessor Nios II. Three I2C Master VHDL blocks have been included in the FPGA to interface the SPARK board with the analog sensory board. All the I2C blocks are able to work simultaneously with different levels of priority. The first and the second I2C buses are dedicated to the devices that need analog-to-digital conversion. To make this conversion, 12-bit 4-channel ADCs have been used, with a maximum of five ADCs for each line. So each bus can handle up to twenty analog devices. For instance, taking into account the legged robot Gregor III (see Chapter 11), the first I2C bus processes 8 infrared sensors and joint position sensors; the second I2C channel is dedicated to acquiring two auditory circuit outputs and up to 18 analog inputs that are available for extra sensors. Finally, the third I2C bus is used to acquire those devices, such as the electronic compass and the accelerometer, that already have an I2C interface (see Fig. 10.1). The choice of the I2C bus has been made for three main reasons: it is a serial bus operating at an acceptable speed for the desired final application; only two I/O pins are needed; and it is a widespread interface in a lot of devices available on the market. On the other hand, a parallel interface has been used only for high-level sensors (e.g. the connection with the visual system) that need to process huge amounts of data. Finally, the digital sensors
(ground contact and long-distance sensors) have been connected directly to the digital inputs of the SPARK board. In the following, the main blocks of Fig. 10.1 will be described in more detail.
Fig. 10.1. Block diagram of the main board and sensory system
10.2.1 SPARK Main Board
During the preliminary design stage of the hardware architecture, a first attempt was made using an Altera DSP Development Board equipped with a Stratix II FPGA (EP2S60) [10]. This solution encountered some problems due to the big size of the board, the number of unused components and, mainly, the lack of room within the FPGA to implement the needed algorithms. For these reasons the design of a new specific board arose. Different approaches were taken into consideration during the design phase:
• one single board including a large FPGA (Stratix II EP2S180);
• one single board comprising two medium-size FPGAs (Stratix II EP2S60);
• two boards, each one with a medium-size EP2S60 FPGA, connected via cables.
After studying all the possibilities the third option was chosen, as it was more robust in terms of development time, costs and flexibility. A simplified diagram of the whole system is depicted in Fig. 10.2. This solution permits the use of only one board when its resources are enough for the application, but also two or even more boards when the algorithm requires more computational power. Fig. 10.3 shows the SPARK 1.0 Main Board features in detail. It provides a hardware platform for developing embedded systems based on the Altera Stratix II EP2S60 FPGA, with the following features:
• Stratix II EP2S60F672C3N FPGA with more than 13,500 adaptive logic modules (ALM) and 1.3 million bits of on-chip memory;
Fig. 10.2. SPARK system scheme: two or even more SPARK boards can be connected to fulfill the algorithm requirements
• 8 Mbytes SPI flash memory;
• 64 Mbytes SDRAM;
• On-board logic for configuring the FPGA from flash memory;
• RS-232 DB9 serial port;
• 1 wireless UART port;
• 8 expansion headers with access to 249 FPGA user I/O pins (3.3V tolerant);
• 1 expansion header with access to 16 FPGA user I/O pins (5V tolerant);
• 4 push-button switches connected to FPGA user I/O pins;
• 8 LEDs connected to FPGA user I/O pins;
• Dual 7-segment LED display;
• Integrated USB-Blaster circuitry to connect to the Altera software tools;
• 50 MHz oscillator and zero-skew clock distribution circuitry;
• 8-bit position DIP switch (SPST);
• USB 1.1 communication port.
A scheme with the components locations of the SPARK board is shown in Fig. 10.4 and a brief description of all board features is also given in Tab. 10.1. 10.2.2
Analog Sensory Board
The analog sensory board is an important part of the whole system presented in Fig. 10.1 with the following functionalities: • Analog to digital conversion; • Signal processing.
SPARK Hardware
389
Fig. 10.3. SPARK board 1.0 block diagram
The board permits the acquisition of all the analog sensory signals used. As far as the potentiometers are concerned, the board has also a conditioning part to amplify and adapt the characteristics of the analog signal for digital conversion. In particular, the conversion process is done through 10 A/D AD7994 converters; these can be managed through the I2C bus fast mode (400Khz) [11]. The first and second on-board I2C lines are able to process and convert up to 40 analog sensors. The third I2C line is dedicated to sensors with an integrated analog to digital conversion system (compass, accelerometer, etc). The I2C bus used to connect the sensory board with the main FPGA-based control unit has been completely developed creating an I2C Master/Slave VHDL block able to write on 8bit devices, to read from 8bit/16bit devices and to manage up to 4 sequentially readings from the same device without re-interrogation. The mentioned core is able to work both in standard mode (100kHz) and in fast mode (400kHz). To manage and memorize all the data from sensors, a VHDL block (I2C LINK) has been designed. This block is the common interface between the Nios microcontroller, the I2C block and the RAM blocks used to memorize the acquired data. During the start-up phase, three RAM blocks are filled with the data needed to initialize the devices physically connected on the bus. These three RAM blocks are respectively for: I2C device write address, I2C device read address, I2C device register address. After that, the I2C LINK block receives a start signal from the microprocessor and begin to query all the devices
390
L. Alba et al.
Fig. 10.4. Scheme of the SPARK board 1.0
defined during the start phase. All the data acquired from sensors are memorized on four RAM blocks, one for each channel of the A/D converter. As soon as the I2C LINK has finished this cycle for all the devices, an interrupt signal of data-ready is sent to the microcontroller. Using this architecture, each device is unequivocally identified by a 3bit RAM address. So the NiosII, when receives the interrupt, is able to obtain the desired data by a RAM address and a RAM selector. The block diagram is shown in Fig. 10.5. The choice to realize an I2C not embedded in the microcontroller was made to optimize the NiosII performance, to avoid charging it with the I2C operations and having idle time. Therefore the only task the NiosII has to carry out is to give the starting signal to the I2C LINK block. After that, the microprocessor is free to elaborate other computations, because of the parallelism in the VHDL design. Analyzing both simulations and real-time tests, a time acquisition of 115us is estimated for an 8bit-output device whereas from a 16bit-output device a time of 130 µ s is needed. Because the analog to digital converter used (AD7994 [11]) has four channel sequentially converted, it needs 270 us to be read. So, to read all the 20 analog channels available on the first or on the second I2C bus, it is necessary a time of 1.4 ms. Due to the parallelism of the FPGA architecture, which simultaneously starts both the first and the second I2C bus, all the 40 analog channels can be read simultaneously in 1.4 ms. According to this analysis, the maximum usable sample rate for the first and second I2C, in the case where all channels are used, is 715 Hz. If only one channel is acquired, the maximum sample frequency rises up to 7.7 kHz. The same sample rate is valid for the third I2C channel. About the accuracy of the measurements, in a range of 0V to 3.3V, the error is estimated in a maximum of 120mV (3%). Finally, the robustness and accurateness of the analog sensory board have been tested through a series of experimental sensors.
SPARK Hardware
391
Table 10.1. SPARK 1.0 Main Board Components and Interfaces Number Name 1 Stratix II FPGA 2 Expansion Connector
3
4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Description EP2S60F672C3N device 8 Expansion headers connecting to 249 pins (3.3 V tolerant) on the FPGA. Supplies 5 V and 3.3 V for use by a daughter card Expansion Connector 1 Expansion headers connecting to 16 pins (5 V tolerant) on the FPGA. Supplies 5 V for use by a daughter card 1.2 V power supply 1.2 V power module providing 6 A 5 V power supply 5 V power module providing 6 A Power Connector 2 mm jack power connector 3.3 V power supply 3.3 V power module providing 6 A Individual LEDs Eights individual LEDs driven by the FPGA Push-button switches Four momentary contact switches for user input to the FPGA SDRAM Memory 32 Mbytes of SDRAM Dual 7-segment Display Dual 7-segment LEDs that display numeric output from the FPGA USB Blaster Integrated USB Blaster to connect to ALTERA Software tools in a PC SDRAM Memory 32 Mbytes of SDRAM Serial Connector RS-232 serial connector with 5 V tolerant buffers. Wireless UART Wireless UART Communication Port USB 1.1 USB 1.1 Communication Port Oscillator 50 MHz clock signal driven to FPGA Flash Memory 8 Mbytes SPI Flash Memory CPU Reset Button Push-button switch to reboot the NIOS II processor configured in the FPGA DIP Switch 8-bit DIP switch
10.3 Sensory System In this section all the sensors available on the robotic test beds will be analyzed in detail. Particular attention will be focused on the characterization and conditioning of each single sensor. Infrared sensors The sensory system includes infrared distance sensors that can be a suitable resource to detect obstacles that should be avoided by the robotic system. We adopted the Sharp GP2D12 model commonly used for robotic applications [12]. The sensor response according to the distance from the object detected shows how under 10 cm there is an ambiguity, so the measure is not reliable. The maximum value of outgoing voltage (2.4V) is reached when the object is 10 cm far from the sensor. The Vout value incrementally decreases as the object moves away from the sensor. Over a distance of 80 cm, the Vout
392
L. Alba et al.
Fig. 10.5. Block diagram of the VHDL architecture designed to manage analog sensors
stabilizes on the value of 0.4V. The color of the reflective object is mostly influent in the detection process. Compass sensor The electronic CMPS03 compass [13] is able to measure its orientation with respect to the terrestrial magnetic field, using two magnetic sensors Philips KMZ51. A PICmicro acquires the magnetic flow intensity measured by each sensor and calculates the horizontal component of the rotation angle, with respect to the terrestrial magnetic field. It is possible to store an initial position, during calibration. The rotation value is provided both as simple PWM signal and in digital form through I2C bus. Servo-motor potentiometer As far as the legged robot is concerned, Hitec digital servomotors have been used [14]. In order to monitoring in real time the position of each single leg joint, the internal potentiometer of the servomotor has been used. To acquire the position signal of the motor, two new wires have been connected in parallel to the cables used for the potentiometer internal control. In Fig. 10.6 the modified motor is shown together with the potentiometer conditioning circuit. Accelerometer sensor The LIS3LV02DQ device [15] is a three axes accelerometer with I2C interface. The main features consist in having an “embedded self test” and a programmable range from ± 2g to ± 6g. Long distance contact sensors A biological inspired antenna has been realized to be used as a long distance contact sensors. The system consists of two plastic tubes linked to the robot with a pan-tilt
SPARK Hardware
(a)
393
(b)
Fig. 10.6. (a) Modified servomotor and (b) conditioning circuit. It is an operational amplifier in differential configuration, the IC used is the TS914; R1 ed R2 are 100K resistors, R5 1K resistor, R3 and R4 200K trimmer, C1 value is 100nF.
realized with two micro-servos. An antenna contains four active elements distributed along the tube. Each element consists of an Ionic Polymer Metal Composite (IPMC) membrane material used as contact sensor to detect objects close to the robot’s head. IPMC are a class of electro-active polymers characterized by an interesting electromechanic bidirectional coupling: if subject to an electrical stimulus (1-10V), wide deformations are shown, otherwise, if subject to a deformation it is possible to measure an electrical signal [16]. These properties, together with lightness, robustness and flexibility, make the use of the IPMC extremely interesting in several applications, with significant advantages with respect to the conventional transducers. Nafion is an Ionic polymer, produced and commercialized in many formats: as different thickness membrane, as resin, as a liquid reacting. In sensor working mode, a mechanical deformation is applied to the membrane, and a voltage can be measured on it. The characterization of IPMC as sensors and as actuators, together with all the conditioning circuit structures were designed and tested during the activities of the ISAMCO EU project [6], recently concluded. The antenna realization has been partially inspired to biology (see Chapter 2). The antennas in a real biological system display a central flexible and strong body, with several receptors. To realize something similar, a pipe for pneumatic realizations (6mm) has been used as the central antenna body. It shows flexibility, low weight and internal cavity for the wiring. The receptors have been realized with segments of IPMC (30mm x 5mm) fixed on the central body. Each antenna, 18cm of length, has been divided into 6 identical segments, to identify a generic contact point within a range of 3cm. Fig. 10.7 shows the structure of the antenna system. Due to the fact that IPMC membranes give analog output signals, a circuit for conditioning has been designed. In this way a digital output of sensors can be connected directly to the digital input of the SPARK board. Details about the conditioning circuit, the tactile sensors based on IPMC and the actuation mechanism are shown in Fig. 10.8. To move each single antenna, because of the low weight, two Hitech microservomotors have been used. Each antenna has two degrees of freedom, therefore a “pan-tilt” like structure has been realized. Biological studies showed that, in some cases, the movements of the antennas can be described by a series of complex trajectories in
394
L. Alba et al.
Fig. 10.7. Antennae equipped on the hexapod robot
the space. To guarantee the complete exploration of the space in front of the robot each antenna is controlled through the state variables of a chaotic system. In the preliminary tests the Rossler system evolving in a single scroll has been considered. The equations are: ⎧ ⎨ x˙ = −y − z y˙ = x + ay (10.1) ⎩ z˙ = b + xz − cz where a = 0.17, b = 0.4, c = 8.5. The two antennas adopt different initial conditions to obtain, for the same parameters, different movements. The antenna motors are controlled with PWM signals generated by the SPARK board. Phonotaxis system Phonotaxis is the ability to approach sound sources as deeply discussed in Chapter 3. Female crickets are able to recognise the species specific pattern of male calling song, produced by opening and closing their wings, and move towards it. For Gryllus bimaculatus the calling song consists of four 20 millisecond syllables of 4.7 kHz, separated by 20 millisecond intervals, which make up a “chirp”, produced several times a second. Females appear to be particularly selective for the repetition rate of syllables within each chirp. Further details about this behaviour and its biological basis are given in Chapter 3. The Ears circuit used on the robots, consists of two microphones and conditioning circuit fine-tuned to the carrier frequency of the cricket song. The output from each ear is an analog signal in the range from 0 to 5 V.
SPARK Hardware
395
Fig. 10.8. Elements constituting the antennae system: conditioning circuit, the tactile sensors based on IPMC and the actuation mechanism
The input to the circuit is given by two microphones placed on the robot in a specific configuration: the distance among the two sensors is equivalent to a quarter of the wavelength (λ /4) of the carrier frequency (4.7 kHz) (i.e. 18mm). The input from each microphone is delayed by a period equivalent to a quarter of the phase of the carrier frequency, i.e. approximately 53 µ s, and this is used to provide the signal output. The output from a particular ear is given by the difference between its own input from the microphone and the input from the other microphone delayed by this period. This is closely based on how biological systems work, and the two resulting analog signals represent the amplitude of vibration of the posterior tympani in the cricket. This also facilitates high directionality, as the amplitude of the resulting waves is direction dependent. Fig. 10.9 shows the realised ears circuit, built by the group of Prof. B. Webb (see Chapter 3), with a microphone attached to each channel. A visual representation of the cricket song, as acquired by the hearing board, is shown in Fig. 10.10.
396
L. Alba et al.
Fig. 10.9. Phontaxis board and microphones equipped on the hexapod robot
Fig. 10.10. Chirp waveform acquired through the two microphones of the ears board
10.4 Conclusion In this Chapter we discuss on the realization of a hardware platform where to evaluate the cognitive models presented in the previous chapters. The SPARK hardware architecture consists of an FPGA-based board (SPARK 1.0), completely designed and developed in order to guarantee the computational resources needed by the cognitive algorithms and to handle the distributed sensory system embedded on the roving and
SPARK Hardware
397
walking robots used as test beds. The basic hardware blocks, here described, have been used to develop the whole architecture that has been used to host the perceptual algorithms. In the next Chapter the final robotic platforms will be presented together with the most relevant experiments carried out to demonstrate the system potentialities.
References 1. Bailey, J.K., Willis, M.A., Quinn, R.D.: A multi-sensory robot for testing biologicallyinspired odor plume tracking strategies. In: Proceedings of Advanced Intelligent Mechatronics International Conference (ASME) (2005) 2. Chappell, S., Macarthur, A., Preston, D., Olmstead, D., Flint, B., Sullivan, C.: Exploiting real-time FPGA based adaptive systems technology for real-time sensor fusion in next generation automotive safety systems. In: Proceedings of Design Automation and Test in Europe (2005) 3. Garcia, J.G., Robertsson, A., Ortega, J.G., Johansson, R.: Sensor fusion of force and acceleration for robot force control. In: Proceedings of International Conference on Intelligent Robots and Systems (2004) 4. Giesen, K., Deutscher, R., Milighetti, G., Frey, C.W., Kuntze, H.B.: Structure variable multisensoric supervisory control of human interactive robots. In: Proceedings of Control Applications International Conference (2004) 5. Hyeong-Soon, M., Baek, K.Y., Beattie, R.J.: Multi sensor data fusion for improving performance and reliability of fully automatic welding system. Proceedings of International Journal of Advanced Manufacturing Technology (2006) 6. Isamco Project, http://www.mediainnovation.it/progetti/isamco 7. Luo, R.C., Kay, M.G.: A tutorial on multisensor integration and fusion. In: Proceedings of 16th Annual Conference of IEEE Industrial Electronics Society (IECON) (1990) 8. Park, S.H., Kim, J.R., Oh, S.R., Lee, B.H.: Development of a practical sensor fusion module for environment modeling. In: Proceedings of Advanced Intelligent Mechatronics International Conference (2003) 9. Schmalzel, J., Figueroa, F., Morris, J., Mandayam, S., Polikar, R.: An architecture for intelligent systems based on smart sensors Instrumentation and Measurement. Proceedings of IEEE Transactions (2005) 10. Hardware - Altera home page, http://www.altera.com 11. Hardware - ADC producer home page, http://www.analog.com 12. Hardware - Datasheet distance sensors, http://www.alldatasheet.com/datasheet-pdf/pdf/SHARP/GP2D12. html 13. Hardware - Datasheet componet, http://www.robot-italy.com/ 14. Hardware - Motor information, http://www.hitecrcd.com/ 15. Hardware - Datasheet component, http://www.st.com/ 16. Hardware - Antenna’s material, http://www.dupont.com/fuelcells/products/nafion.html
11 Robotic Platforms and Experiments P. Arena, S. De Fiore, D. Lombardo, and L. Patan´e Department of Electrical, Electronic and System Engineering, University of Catania, I-95125 Catania, Italy {parena,lpatane}@diees.unict.it
Abstract. To reach the challenging objectives of the SPARK project the research pathway followed different levels of abstraction: mathematical models, insect behavior study and bio-inspired solutions for robotics. We investigated biological principles as a source of inspiration and we finally applied the mechanisms provided by the complex dynamical system theory to realize mathematical models for cognitive systems. All these aspects deeply investigated in previous Chapters, are here applied to solve specific tasks. A series of wheeled and legged robots are described and applied as test beds for the proposed action-oriented perception algorithms. The cognitive architecture has been experimentally tested at multiple levels of complexity in different robots. Details on the robotic platforms are given together with a description of the experimental results that include multimedia materials collected in the project web page.
11.1 Introduction The complete model for action-oriented perception, that has been outlined in the previous chapters, is a modular architecture structured with parallel pathways and hierarchical structures. Basically the inspiration comes from the insect world even if the idea was not to biologically reproduce an insect brain model but to deeply understand insect cognitive behaviours for a possible implementation. The experimental phase was the last step of our activities, aiming at validate the proposed control system. The evaluation procedure was carried out both by using virtual agents working in dynamic simulation environments and with real robots. In particular to test at different levels of complexity the cognitive architecture, both wheeled and legged robots were designed, realized and used. In particular, when the whole complex perceptual architecture had to be tested, a dual drive wheeled structure was used, due to its simple structure and motion control strategy. On the other hands, the design and realization of legged machines was performed primarily to implement low level bio-inspired locomotion strategies, like the Central Pattern Generator and the decentralized locomotion control (Walknet). Of course, once the high level perceptual algorithms will be optimized and tested, these will be transferred from the wheeled to the legged machines to explore the emergence of perceptual algorithms from locomotion primitives. Two roving platforms have been used for the demonstration: Rover I and II. The control architecture of both rovers consists in a low level layer based on microcontrollers P. Arena and L. Patan`e (Eds.): Spatial Temporal Patterns, COSMOS 1, pp. 399–422. c Springer-Verlag Berlin Heidelberg 2009 springerlink.com
400
P. Arena et al.
that handle the motor system and a high level layer that includes the Eye-RIS visual system as a main controller together with the SPARK board. The rovers are also endowed with a suite of different sensors and can be interfaced with a PC through a wireless communication link. As far as the legged structure is concerned, mainly two different robots have been developed. The former is called MiniHex. It is a mini-hexapod robot with 12 DoF controlled with a Central Pattern Generator (CPG), realized with a CNN-based chip. The robot can be telecontrolled from a PC, but can also navigate autonomously showing phobic or attractive behaviours [6, 1] using three infrared distance sensors equipped on its head. The latter hexapod is called Gregor III. It is a cockroach-inspired legged robot and is the final prototype of a series. The core of the system is the SPARK board that handles the complex sensory system distributed on the robot. In Gregor III the locomotion control problem has been solved by using both a CPG and a reflex-based decentralized approach (based on Walknet) that has been exploited to realize climbing and turning strategies. The experiments, discussed in this chapter, aim at demonstrate the applicability of the SPARK cognitive architecture analyzing some specific aspects of the model as for example: the basic behaviours (e.g. obstacle avoidance), proto-cognitive behaviours (e.g. visual homing), correlation mechanisms (e.g. learning anticipation through STDP), representation layer (e.g. Turing pattern approach to perception). Eight different experiments are here presented, showing the objectives, the experimental set-up and the obtained results: 1. 2. 3. 4. 5. 6. 7. 8.
Visual homing and hearing targeting; Reflex-based locomotion control with sensory fusion; Visual perception and target following; Reflex-based navigation based on WCC; Learning anticipation via spiking networks; Landmark navigation; Turing Pattern Approach to perception; Representation layer for Behaviour Modulation.
Multimedia materials on the robotic platforms and videos of the experiments are available on the SPARK Project web page [8].
11.2 Robotic Test Beds: Roving Robots 11.2.1
Rover I
The roving robot Rover I is a classic four wheeled drive rover controlled through a differential drive system. The robot dimensions are 35 cm x 35 cm. Rover I is equipped with four infrared distance sensors GP2D12 from Sharp, with low level target sensors (i.e. able to detect black spots on the ground used as targets) and with an RF04 USB radio telemetry module for remote control operations and sensory acquisition. The detection range of the infrared sensors is set from 14 to 80 cm. A visual processor (i.e. Eye-RIS v1.1) can be equipped on the rover in two different configurations: in the front
Robotic Platforms and Experiments
(a)
401
(b)
Fig. 11.1. Rover I. (a) Configuration with the visual system in frontal position, (b) panoramic view configuration.
Fig. 11.2. Rover I control architecture
position for target aiming tasks (see Fig. 1(a)) and in a panoramic view configuration, obtained with a conic mirror, for homing purposes (see Fig. 1(b)). The control architecture is shown in Fig. 11.2: the core of the system is the visual processor Eye-RIS v1.1 that can be used, in substitution of the more powerful SPARK board, to implement simple pre- proto-cognitive behaviours. A I2C bus interfaces the visual processor with an ADC board that handles all the other sensory systems: distance sensors, hearing chip, low level target sensors and a battery pack control device. The robot is equipped with two 12V 3.5A batteries that guarantee an autonomy of about 1.5 hours; moreover a control system to monitor the battery charge level is implemented together with a
402
P. Arena et al.
Fig. 11.3. Rover II
recharging mechanism that extends the autonomy of the system in case of time consuming learning sessions. 11.2.2
Rover II
Rover II, shown in Fig. 11.3, is an optimized version of Rover I. It is equipped with a bluetooth telemetry module, four infrared short distance sensors Sharp GP2D120 (detection range 3 to 80 cm), four infrared long distance sensors Sharp GP2Y0A02 (maximum detection distance about 150 cm), a digital compass, a low level target detection system, an hearing board for cricket chirp recognition and with the Eye-RIS v1.2 visual system. The complete control architecture, reported in Fig. 11.4, shows how the low level control of the motors and the sensor handling are realized through a microcontroller STR730. This choice optimizes the motor control performances of the robot maintaining in the SPARK board and in the Eye-RIS visual system the high level cognitive algorithms. Moreover Rover II can be easily interfaced with a PC through a bluetooth module: this remote control configuration allows to perform some preliminary tests debugging the results directly on the PC.
11.3 Robotic Test Beds: Legged Robots 11.3.1
MiniHex
The MiniHex robot, shown in Fig. 11.5 is a hexapod robot whose dimension are 15x10x10 cm3 [2]. It is a legged mini-robot with 12 degrees of freedom actuated with the HS-85MG metal gear servo, which have a weight of 22g each and a stall torque of 3 kg·cm. The architecture, endowed on the robot, is completely devoted to suitably control all the robot actuators for locomotion purposes. The generation of the locomotion patterns
Robotic Platforms and Experiments
403
Fig. 11.4. Rover II control architecture
Fig. 11.5. MiniHex
is assolved by a CNN-based VLSI CPG chip realized with the switched-capacitor (SC) technique that permits a stepping frequency regulation by using an external clock. The MiniHex is also autonomous from the power supply point of view. The servomotors are supplied with a pack of four AA Ni-MH batteries at 1.5 V whereas the chip and the electronics are sustained by two 9V batteries stabilized through a voltage regulator. The robot can work in an autonomous configuration using three distance sensors for obstacle avoidance and target following tasks. It can be also tele-controlled thanks to a wireless communication module equipped on board. The robot speed is in a range between 1 cm/s and 10 cm/s. The robot autonomy is estimated in a range between 0.5-1h on a flat terrain. Power consumption ranges between 10 W during walking on even terrain and 15 W during obstacle course.
404
P. Arena et al.
Fig. 11.6. GregorIII
11.3.2
Gregor III
The hexapod Gregor III is the final prototype of the Gregor series. The robot, as shown in Fig. 11.6, is equipped with a distributed sensory system. The robot’s head contains the Eye-RIS v1.2 visual processor, the cricket inspired hearing circuit and a pair of antennae developed using Ionic Polymer-Metal Composite (IPMC) material. A compass sensor and an accelerometer are also embedded in the robot together with four infrared distance sensors used to localize obstacles. A set of distributed tactile sensors
Fig. 11.7. GregorIII control architecture
Robotic Platforms and Experiments
405
Fig. 11.8. Examples of the robot interfaces created to monitoring and storing the status of the roving and legged robots during the experiments
(i.e. contact switches) is placed in each robot leg to monitor the ground contact and to detect when a leg hits with an obstacle. The core of the control architecture is the SPARK board, as shown in Fig. 11.7. The robot sensory system is handled with an ADC board that is addressed by using an I2C bus, whereas the Eye-RIS v1.2 is interfaced with the main board through a dedicated parallel bus. The robot is completely autonomous for the power supply. Two 11.1V , 8A Li Poly battery packs are used: one for the motors and the other for the electronic boards. The robot can be used as a walking lab able to acquire and process in parallel a huge amount of different types of data. For monitoring the system status and for storing purposes, a wireless module is used to transmit the sensible information to a remote PC (see Fig. 11.8).
11.4 Experiments and Results 11.4.1
Visual Homing and Hearing Targeting
Objectives: This demo aims at proving the reliability of the Rover I robot endowed with algorithms implementing both visual and hearing routines and related circuits. The task to be accomplished consists in a phonotaxis behaviour performed until the battery level goes under a warning threshold. In this condition, the system triggers a basic behaviour
406
P. Arena et al.
Fig. 11.9. The Rover I, equipped with the Eye-RIS 1.1 in a panoramic configuration and the hearing board, performs an hearing and visual homing task. The robot is attracted by the cricket calling song but when the battery level is dangerously low, the homing behaviour is triggered to find the recharging station.
“inherited” for survival purposes: homing. In the proposed experiment the robot adopts a visual homing behaviour using the visual system in a panoramic configuration. Experimental set-up: The robot moves in a 3x3 m2 arena, attracted by the sound sources that reproduce the cricket calling song. Two speakers are placed near two opposite walls, whereas a recharging station is located in a corner of the arena. Rover I is equipped with: Eye-RIS v1.1 in a panoramic configuration, hearing circuit, distance sensors, battery level sensor, low level sensors (i.e. for recharging station detection) and a connector that allows the docking in the recharging station. Results The experiment, described with a block diagram in Fig. 11.9 can be divided into three different sub-behaviours: A - Phonotaxis; B - Visual Homing; C - Docking. Phonotaxis: The Rover I navigates in the arena, attracted by the two speakers that alternatively reproduce the cricket calling song. The hearing chip, equipped on the robot, processes the auditory information through a biologically-inspired spiking network (described in Chapter 3) and selects the goal direction comparing the number of spikes emitted by the right and left motor neurons (see Fig. 11.10).
Robotic Platforms and Experiments
407
Fig. 11.10. Results of the hearing targeting and homing experiment
Visual Homing: the homing mechanisms is a “life saving” behaviour that is triggered by the battery level sensor. The idea is to increase the robot operation time through a recharging mechanism. This procedure is important in case of extensive experiments performed during learning phases. In the proposed experiments the “home” is represented by a recharging station located in a corner of the arena. At the beginning of the experiment, the robot acquires information about the home position, saving in its memory a panoramic view of the arena acquired from the home position. When the homing procedure is activated, the home image is compared with the actual image. The direction to be followed is obtained using a gradient-based algorithm that is developed inside the Eye-RIS v1.1 device and is based on the XOR function (for details see [3]). Following the ascending direction of the XOR-based index, as shown in Fig. 11.10, the robot can find the recharging station position. When the low level sensors detect black strips on the ground, the homing algorithm is stopped. Docking: the homing phase is followed by a docking procedure that is needed to connect the recharging plates equipped on the robot with the station. Two black lines on the ground indicate the position of the connectors in the base station. Using two photodiodes the robot can find the position of the lines and perform the docking. A microswitch detects the connection and actives the recharging circuits: at the end of the process the battery level system stimulates the phonotaxis behaviour again. Data acquired during an experiment are reported in Fig. 11.10. The homing mechanism is here activated after two targets retrieving.
408
P. Arena et al.
(a)
(b)
Fig. 11.11. Walknet on Gregor III. (a) Obstacle climbing, (b) autonomous navigation in a cluttered environment.
11.4.2
Reflex-Based Locomotion Control with Sensory Fusion
Objectives: This demo aims at proving the reliability of the reflex-based Walknet controller for Gregor III [7, 4]. Tactile sensors will be used for autonomous navigation and obstacle climbing. Experimental set-up: The arena used for the experiments is the same of the previous demo with the inclusion of several obstacles that can be avoided or climbed. The hexapod platform, Gregor III, is endowed with: distributed tactile sensors, IPMC-based antennal sensor, distance sensors, an accelerometer, the hearing board and the visual processor Eye-RIS v1.2. Results The hexapod robot Gregor III has been designed as a moving laboratory used to test the bio-inspired locomotion principles introduced in Chapter 2. The robot equipped with a Li-Poly battery pack, can walk autonomously for about 30 minutes, exchanging data with a remote workstation through a wireless communication link. When the robot stability is compromised during an experiment, a recovery procedure is performed. Dangerous situations are detected using a posture controller implemented through a three-axis accelerometer equipped on board. Fig. 11.11 shows two different scenarios used to test the robot capabilities for autonomous navigation in cluttered environments and obstacle climbing. Further experiments have been performed on the Gregor III to deeply analyze other sensing capabilities as for instance the auditory system. The cricket-inspired hearing board, equipped on the robot, has been used to test the basic behaviour of phonotaxis. Due to the complexity of the robot structure, an analysis of the disturbances introduced by the 20 servomotors (i.e. 16 for the legs and 4 for the antennae) was carried on. The results, reported in Fig. 11.12, show how the sound recognition system, even if affected by the noise introduced by motors, easily permits to identify the sound source location.
Robotic Platforms and Experiments
409
Fig. 11.12. Output of the hearing circuit, adopting a sample frequency of 500Hz and after a 1 Hz low pass filter. The sound source diffusing the cricket calling song is initially located in front of the robot and then is moved on the left, afterwards on the right and finally another time on the left of the robot. The disturbance introduced by motors is evident but the source direction can be easily distinguished.
11.4.3
Visual Perception and Target Following
Objectives: This demo is focused on the application of visual perceptual algorithms on Rover II. Experimental set-up: Rover II and MiniHex are placed in a 3x3m2 arena filled with different objects. The visual processor Eye-RIS v1.2 equipped on the rover is used to detect the presence of the MiniHex and to follow it while moving in the arena. Results This demo emphasizes the processing capabilities of the Eye-RIS v1.2 system. The visual system, equipped on Rover II is able to process in real-time the images acquired by the robot aiming at recognizing the presence of the MiniHex robot among different other objects visible in the scene (see Fig. 11.13). The designed visual algorithm is based on a sequence of operators that deeply exploit the parallel processing capabilities of the system. In Fig. 11.14, the output of each step executed within the interframe rate (about 30ms) is given. A gaussian filter together with some MAC operations is applied to the acquired image to increase the contrast in order to easily identify the different objects in the scene. A dynamic threshold is then used to eliminate the background from the image and a mask is applied to leave out the edge from the processing avoiding to process objects that are only partially seen in the acquired image. Templates for erosion and dilation are successively applied to filter noise and fill holes creating well defined blobs. The position of each blob is then obtained by using the centroid operator. The dimensions of the blobs can be also found tracing horizontal and vertical lines starting from the centroid, making a logical operator with the output of the erosion and dilation step, and finally counting the number of pixels. The ratio between the horizontal and vertical dimension of each object is then
410
P. Arena et al.
Fig. 11.13. The Rover II recognizes and follows the MiniHex
Fig. 11.14. Image processing on Eye-RIS 1.2
used as a characteristic feature to identify the MiniHex robot in the scene. The addition of further filtering functions can enhance the detection of the detection of the MiniHex structure among different kinds of objects.
Robotic Platforms and Experiments
411
Fig. 11.15. Reference cycles representing the sensors of the robot
11.4.4
Reflex-Based Navigation Based on WCC
Objectives: A chaotic system is applied to navigation control in a roving robot. The reflexive strategy, based on the weak chaos control technique (WCC), is compared with other standard methods like the potential field. Experimental set-up: The experimental platform is constituted by the SPARK board that implements the navigation algorithm, a mobile robot and a wireless communication system. A computer can be used to make a backup of the data for a post process analysis. See reference [5] for details on the implementation aspects. Rover I, equipped with four distance sensors, is used in these experiments. The aim is to underline the obstacle avoidance and exploration capabilities of the roving robot. For these reasons, different types of arenas have been considered. Results The robot sensory system, that represents the input of the control system, is directly linked to the cycle used to enslave the chaotic system as discussed in Chapter 6. Considering the sensor positions and orientations (Fig. 11.15), sensor Si is associated to the reference cycle Re fi . Sensors S3 and S4 have been configured in order to have a limited activation range with respect to the others. In this way the robot could pass through narrow spaces. The NiosII microprocessor is devoted to the execution of the deterministic navigation algorithm and to the supervision of the activity of the VHDL entity implementing the weak chaos control. The WCC process lasts about 2.8 ms and the whole control algorithm running on NiosII about 80 ms. An experiment in which the robot faces with a complex environment is shown in Fig. 11.16 together with the corresponding evolution of the controlled multiscroll system. Fig. 11.16 (a) and Fig. 11.16 (b) show a sensor output; a high value means a low distance from an obstacle. The sample time is about 0.350 s. In Fig. 11.16 (c) the robot senses an obstacle on the right side; both sensors S1 and S4 are activated but in a different way: the control gain associated with S4 is higher than the other gain. Therefore, the resulting cycle is placed between Ref1 and Ref4 but closer to Ref4 (Fig. 11.16 (d)). In this case the robot turns in the opposite direction, and the speed and rotation angle depend on the characteristics of the emerged cycle. Subsequently, the sensors do not see any obstacle
412
P. Arena et al. 70
50
70
(c−d)
Sens1 Sens2 Sens3 Sens4
50
40
40
(e−f)
30
(g−h)
30
20
20
10 0
Sens1 Sens2 Sens3 Sens4
60 sensors value
sensors value
60
10
10
20 sample
30
40
45
50
(a)
55
60 65 sample
70
75
80
(b) 400 300 200
Ref1 Ref2 Ref3 Ref4 System
Y
100 0 −100 −200 −300 −300 −200 −100
0
100
200
300
400
200
300
400
X
(c)
(d) 400 300 200
Y
100 0
Ref1 Ref2 Ref3 Ref4 System
−100 −200 −300 −300 −200 −100
0
100 X
(e)
(f) 400 300 200
Ref1 Ref2 Ref3 Ref4 System
Y
100 0 −100 −200 −300 −300 −200 −100
0
100
200
300
400
X
(g)
(h)
Fig. 11.16. Screen shoots of the robot during the exploration of an unknown environment and the evolution of the controlled multiscroll system. (a)(b) Sensory signals acquired during the experiment with a sample time of about 0.350 s; the acquired infrared sensor output reported on the y-axis goes from 0 to 70 that correspond to a distance from 80 cm to 25 cm. (c)(d) More than one sensor is concurrently active. (e)(f) No obstacle is considered relevant for the robot movement. (g)(h) The right sensor (S3) detects an obstacle.
Robotic Platforms and Experiments
(a)
413
(b)
Fig. 11.17. Environments used to evaluate the performance of the proposed architecture controlling a roving robot. The dimensions of both arenas are 10x10 robot units. An example of the trajectory followed by the robot controlled through the WCC f algorithm is shown.
(Fig. 11.16 (e)), so the multiscroll system shows a chaotic behavior (Fig. 11.16 (f)) and the robot continues to explore the environment, moving with constant speed and without modifying its orientation. In the last picture, Fig. 11.16 (g), only S1 sensor is slightly activated, so the emerged cycle is placed on the reference Ref1 although a small control gain is used (Fig. 11.16 (h)). Another experiment, whose video is reported in [8], shows the robot in an arena. It continuously explores the environment until it founds an exit. As performed for the simulated robot, also with the real one we made a series of tests in two different environments (Fig. 11.17). Five experiments for each arena have been carried out placing the robot in a random initial position. An example of the trajectory followed in each environment by the robot controlled through the WCC f method is shown in Fig. 11.17. In each experiment the robot explores the environment for 7 minutes. The behaviour of the robot has been recorded through a video camera. A movie, has been used to extract the robot trajectory and to evaluate the explored area. The results obtained in the two arenas are shown in Fig. 11.18. In this case the WCC f algorithm was selected to be experimentally shown. From the analysis of Fig. 11.18 the capability of the algorithm to densely explore the environment in a few minutes can be appreciated. 11.4.5
Learning Anticipation via Spiking Networks
Objectives: A roving robot is engaged in learning how to deal with higher level sensors by using, as teaching signals, those coming from low level sensors. The learning paradigm used is the Spike Timing Dependent Plasticity (STDP). Experimental set-up: Rover I is equipped with the Eye-RIS v1.1 in frontal configuration, distance sensors and low level target sensors. The arena contains obstacles and two black circles on the ground that are used as targets.
P. Arena et al.
90
90
80
80
70
70
60
60
explored area (%)
explored area (%)
414
50 40 30
50 40 30
WCC
f
20
20 WCCf
10 0 0
10 2
4
6 8 time (windows of 30s)
(a)
10
12
0 0
2
4
6 8 time (windows of 30s)
10
12
(b)
Fig. 11.18. Area explored for the environments shown in Fig. 11.17 (a) and (b). The arena whose dimension is 10x10 robot units, has been divided into locations of 1x1 robot units. The experiment time is 420s and the mean value of area explored, mediated over 5 experiments, calculated with time windows of 30s, is indicated. The bars show the minimum and maximum value.
Results The problem of correlation among different sensory stimuli and the possibility to anticipate events has been discussed in Chapter 6. The control architecture, based on a spiking network and implemented on the SPARK hardware, is here applied to Rover I. The results that we are here reporting demonstrate how the robot is able: • to learn distance sensors based on information coming from contact sensors (here simulated using short range sensors), that represent unconditioned stimuli; • to learn visual qualities related to an object and correlating them to information related to a target sensor. In our demo the target sensor is a light sensor, positioned in the bottom part of the robot, facing the ground, while the conditioned sensor is a visual one (the Eye-RIS v1.2 was used). In Fig. 11.19, the trajectory followed by the robot at the beginning and after 30 minutes of learning is reported. As can be easily seen, the robot learnt through its basic behaviours (i.e. collision reaction and low level target attraction) how to deal with other more complex stimuli as the visual one, in order to improve its capabilities. The improvement is quantitatively summarized in Fig. 11.20 in which the decrement of the
Fig. 11.19. Trajectories before and after learning with Rover I
Number of bumps in windows of 100 steps
25
20
15
10
5
0 0
5
10 15 Simulation Steps/100
20
(a)
25
Number of target found in windows of 200 steps
Robotic Platforms and Experiments
415
5
4
3
2
1
0
0
2
4
6 Simulation Steps/200
8
10
12
(b)
Fig. 11.20. Learning obstacle avoidance and target retrieving through STDP. (a) Number of bumps occurred in time windows of 100 steps, (b) number of targets found in time windows of 200 steps.
number of bumps and the increment in terms of number of targets found during the learning phase is shown. 11.4.6
Landmark Navigation
Objectives: The focus of this demo is to outline the robot capability to learn to discriminate between relevant and useless pictures in the arena. The relevant pictures can work as landmarks for homing purpose. Once learned the landmarks, homing takes place exploiting the MMC recurrent network, even in front of partially obscured landmarks. Experimental set-up: Rover II is equipped with the Eye-RIS v1.2, distance sensors and low level target sensors. The arena contains a target (i.e. nest) and five different possible landmarks (i.e. black picture frame with objects inside, attached to the walls). Results The landmark navigation algorithm, as discussed in Chapter 6, is characterized by two distinct phases. Phase I: Landmark identification In this phase, the roving robot randomly explores the arena filled with different types of visual cues. At each step the robot acquires information about the presence of different visual cues, provided by the Eye-RIS v1.2. The robot is also able to detect the presence of the nest only within its proximity. The most reliable visual cues (three in our demo) will have, at the end of the learning phase, the highest values for their synaptic weights (through STDP learning), and will be selected as landmarks for the next phase. The arena used for the experiments is shown in Fig. 11.21. Phase II: Landmark navigation In this phase the roving robot, placed on the nest position, acquires, via the Eye-RIS v1.2 and a compass sensor, information about the three most reliable landmarks, discovered in the previous phase, in order to build a map of the geometrical relationships between landmarks and nest. The comparison between the real map and the map generated by the algorithm is shown in Fig. 11.22 and relevant data are summarized in Table 11.1. The error due to measurement noise (in particular on the compass sensor)
416
P. Arena et al.
Fig. 11.21. Environment used for landmark navigation
(a)
(b)
Fig. 11.22. Map reconstruction for landmark navigation. (a) real map, (b) map generated using sensory data acquired through a compass sensor and the Eye-RIS v1.2 visual system.
is not a problem for the navigation algorithm, thanks to the filtering capabilities of the RNN structure. After that, we let the robot forage for some steps and, once it needs to come back to the nest, it turns around looking for a landmark. The information (distance and angle) of the relative position between robot and landmark is acquired via the visual system and enters in the RNN. The RNN output is a vector which is translated in a motor command. After some iterations, the rover will reach the nest position (see Fig. 11.23). 11.4.7
Turing Pattern Approach to Perception
Objectives: The roving robot is able to autonomously learn navigation strategies, through the emergence of Turing Pattern in the SPARK hardware. The perceptual model
Robotic Platforms and Experiments
417
Table 11.1. Comparison between the real map and the map estimated through data acquired by using compass sensor and visual system. Angles are calculated in degrees with respect to the south direction whereas distance is in cm.
Landmark L1 L2 L3
Real Dist 260 170 130
Ph 147 -128 -111
X 251 109 44
Y 67 -130 -122
Estimated Dist 264 164 120
Ph 165 -130 -110
X 222 101 44
Y 142 -129 -112
Fig. 11.23. Trajectory followed by the robot guided by the reliable landmarks
shapes, through learning, the basins of attraction of each pattern to account for the environmental needs. Experimental set-up: Rover I is equipped with distance sensors and a compass sensor. The perceptual core was designed in VHDL and implemented in the SPARK board. Results The cognitive architecture based on the Turing Pattern Approach (TPA) has been already discussed in Chapter 7. One of the key point of this control architecture consists in the hardware implementation of the whole sensing-perception-action loop. Fig. 11.24 shows the hardware scheme. The perceptual core, dedicated to the Turing patterns generation, has been completely developed in VDHL, whereas the action system together with the interface with the Rover has been realized by using the Nios II soft-core microprocessor (i.e. programmed in C++ language). The low level hardware implementation of the perceptual core represents the final result of an optimization process. The first implementation step was devoted to validate the correctness of the algorithm; to do that on the Spark board, the Nios II development environment has been chosen. The results were good in term of hardware resources needed, but not feasible for real-time application due to a computational time of about 90 seconds. To improve this results, custom instructions (i.e. dedicated hardware
418
P. Arena et al.
Fig. 11.24. Control scheme for the Turing Pattern Approach
Fig. 11.25. Performances of different TPA implementations on the SPARK board
modules for floating point operations) were introduced in the code obtaining a decrement of about 50% with respect to the standard Nios code. Aiming at reach a control loop time under one second (i.e. orders of magnitude under the obtained time), a VHDL implementation was taken into consideration. The final hardware design is able to elaborate Turing Patterns in no more than 98 ms without exceeding in resources needed, that are under the 25% of the total. A comparison between the different implementation strategies is reported in Fig. 11.25. In order to show the performance of the roving robot controlled through the TPA implemented on the SPARK board, the rover was placed in a arena whose dimensions are 3x3 m2 , filled with 3 obstacles in addition to the walls. The mission assigned to the robot consists into follow a target direction avoiding obstacles. Fig. 26(a) shows the cumulative number of new patterns emerging during the learning phase. It is evident how the total number of emerged patterns is constant after about 800 learning cycles. In the experiment here reported, the robot can use about 30 different patterns to specialize its behavior depending on the environment conditions. All the possible actions, that the robot can perform at the end of the learning phase, are shown in Fig. 26(b).
Robotic Platforms and Experiments
(a)
419
(b)
Fig. 11.26. Experimental results for TPA. (a) Cumulative number of new patterns that emerge during learning. (b) Distribution of robot actions obtained at the end of the learning phase.
An example of the robot behavior is shown in Fig. 11.27 in which the trajectory followed by the roving robot is reported together with the different Turing patterns
Fig. 11.27. Trajectory followed in heading the target direction, avoiding walls and obstacles. The robot navigation is guided by the emerging patterns.
420
P. Arena et al.
Fig. 11.28. Number of occurrences for each Turing pattern in the previously shown trajectory. The action, in term of turning angle, associated with each pattern are also reported in radians.
that constitute the specific internal representations of the robot surroundings as learned during the learning phase. Finally, Fig. 11.28 shows all the Turing patterns used in the previous trajectory, reporting for each pattern the number of occurrences and the associated action. 11.4.8
Representation Layer for Behaviour Modulation
Objectives: Here Turing Patterns are used to provide no longer a single action at the output stage, but to modulate the basic behavior responses in the pre-motor area. A roving robot will be used as a demonstrator of the learning phase executed on the PC. Experimental set-up: Rover II is equipped with distance sensors, a compass sensor and the hearing board. The emulated lower level basic behaviors are:
(a)
(b)
Fig. 11.29. (a) Experimental set-up (b) Rover II navigates in the arena
Robotic Platforms and Experiments
421
Fig. 11.30. Best and worst trial in the case of learned (a − b) and constant (c − d) modulation
• phonotaxis; • optomotor reflex; • obstacle avoidance reflex. Results The experiments have been carried out within a 3 × 3 m2 arena where two obstacles and a sound source have been placed (Fig. 11.29). In the experiments, we test the architecture with the learned modulation parameters compared with the case of fixed modulation. For each case, we let the robot perform three trials starting from the same position. Fig. 11.30 shows the trajectory followed in the best and in the worst trial for the two cases under analysis, while Fig. 11.31 reports the path length for all the trials. As well as in the simulations reported in Chapter 7, in the presented experiments, the performance increase is evident in terms of path length to reach the target and robustness of the behavior against noise sources introduced in the environment. The behaviour robustness is confirmed by the low variance in the path length along the different trials. Multimedia material, including tests in very noisy and dynamically changing environments, is available in [8].
422
P. Arena et al.
Fig. 11.31. Path length for all the experiments with the roving robot
11.5 Conclusion The emergence of cognitive capabilities in artificial agents is one of the big challenges of our decades. Different directions are currently followed by researchers trying to develop artificial cognitive agents. This chapter gives a contribution to this research activity proposing a series of experiments that underline some of the potential applications of the SPARK architecture. These results do not represent the end of a path but just the beginning of an activity devoted to improve the preliminary results here proposed to further generalize the application capabilities and its fields of application. Acknowledgement. Authors acknowledge the support of STMicroelectronics for supplying technical help, microcontrollers and boards useful for the realization of Rover II.
References 1. Arena, P., Fortuna, L., Frasca, M., Patan´e, L.: Sensory feedback in CNN-based central pattern generators. International Journal of Neural Systems 13(6), 349–362 (2003) 2. Arena, P., Fortuna, L., Frasca, M., Patan´e, L., Pollino, M.: An autonomous mini-hexapod robot controlled through a CNN-based CPG VLSI chip. In: Proc. of CNNA, Istanbul, Turkey, pp. 401–406 (2006) 3. Arena, P., De Fiore, S., Fortuna, L., Nicolosi, L., Patan´e, L., Vagliasindi, G.: Visual Homing: experimental results on an autonomous robot. In: 18th European Conference on Circuit Theory and Design (ECCTD 2007), Seville, Spain (2007) 4. Arena, P., Patan´e, L., Schilling, M., Schmitz, J.: Walking capabilities of Gregor controlled through Walknet. In: Proc. of Microtechnologies for the New Millennium (SPIE 2007), Gran Canaria, Spain (2007) 5. Arena, P., De Fiore, S., Fortuna, L., Frasca, M., Patan´e, L.: Reactive navigation through multiscroll systems: from theory to real-time implementation. Autonomous Robots (2008), ISSN: 0929-5593, doi:10.1007/s10514-007-9068-1 6. Braitenberg, V.: Vehicles: experiments in synthetic psycology. MIT Press, Cambridge (1984) 7. Cruse, H., Durr, V., Schmitz, J.: Insect walking is based on a decentralised architecture revealing a simple and robust controller. Phil. Trans. R. Soc. Lond. A 365(1850), 221–250 (2006) 8. Robot Multimedia material, http://www.spark.diees.unict.it/Robot.html
Index
Action Selection Network 320, 344 Active contour 377 Actuators 83 McKibben muscles, 85 preflex, 84 Shape Memory Alloys (SMA), 85 Adaptive Resonance Theory (ART) 12 Affordances 6 Artificial Intelligence (AI) 181 Behaviour Modulation 333, 420 Body model 179 Braitenberg vehicle 15 Central Complex (CX) 27 Central Nervous System (CNS) 19, 45 Central Pattern Generators (CPGs) 342, 403 Chaos Control, 13 Weak Chaos Control (WCC), 271, 411 Chaotic Multiscroll 272 Pyragas’s control, 273 Chemotaxis 260, 262, 265 Cognition 7, 10 Dynamical system, View of, 10 Proto-cognitive behaviours, 97, 330 SPARK Cognitive Architecture, 330 Distributed Adaptive Control (DAC) 15, 186, 311 Docking 406 Embodiment 180, 189 Eye-RIS vision system 352, 371, 400, 402, 404–406, 408, 409, 413, 415
ACE16K, 362 Q-Eye Chip, 358, 364 Foreground-background separation FPGA 386 Cyclone, 362 Nios II, 368, 386 StratixII, 386, 387 Genetic Algorithms (GA) 142 Homing behaviour
374
126, 129,
129, 137, 296, 405
Insect brain 30 Instrumental learning 209, 211, 310 Internal models 4 Ionic Polymer Metal Composite (IPMC) 393 IQ test 253 robot with memory, 255 Landmark Navigation 189, 206, 209, 293, 303, 415 Lateral horn interneurons (LHI) 160 Leg Controller 52 AEP, 64 PEP, 64 Memory 180, 191, 192, 204, 205, 209, 254, 311, 323 Memotaxis 261, 262, 265 Minimal cognition 249 Motoneurons 81 Motor Map (MM) 311, 336, 344
424
Index
Multi-target tracking 380 Mushroom Bodies (MB) 21 Kenyon cells, 23, 32, 160, 164 Optical flow 371 Optomotor reflex 102, 124, 127 Path integration in insects 13, 129 Neural network for, 129 Pattern self-organising, 10 Pavlovian conditioning 202, 209, 281, 296, 310 Perception Active perception, 8, 9, 98 Perception as transformation, 6 Perception for action, 3 Phonotaxis 117, 125, 127, 394, 406 Pixel Level Snake (PLS) 379 Planning ahead 180, 184, 186, 189 Potential Field (PF) 278 Probehandeln 184, 188 Projection Neurons (PN) 160, 164 Reaction-Diffusion Cellular Nonlinear Networks (RD-CNNs) 311, 340 Recurrent Neural Networks (RNN) 192, 220, 298, 416 DE nets, 196, 200 learning, 203, 223 MMC nets, 197, 199 MSBE nets, 195, 200 nonlinear coupling, 242 Representation layer 312, 331, 420 Resonate-and-fire neuron 120 Reward Function (RF) 321, 332, 334, 345 Difference of RF (DRF), 321 Robots Gregor III, 386, 394, 396, 400, 404, 408 Koala, 126, 133, 143 MiniHex, 400, 402, 409 Rover I, 399, 400, 405, 411, 413, 417 Rover II, 399, 402, 409, 415, 420, 422 Robot sensory system 391 Accelerometer sensor, 392
Antennae, 392 Compass sensor, 392 Infrared sensors, 391 Servo-motor potentiometer, 392 Self-organization 10, 345 in leg controllers, 70 self-organizing dynamics, 309 Sensing Neurons (SNs) 313 Sensors in insects 48 Antennae, 51, 98 Campaniform sensilla, 50, 75, 81 Cerci, 99 Chordotonal organ, 50 Mechanosensors, 49 Olfactory system, 100 Visual system, 102 Sensory-motor connection, 79 contingencies, 8 pathways, 330 reflexes, 12 Sensory Neurons (SNs) 321 Situatedness 182, 189 SPARK board 387, 390, 400, 405 Spike Time-Dependent Plasticity (STDP) 158, 270, 280, 283, 298, 413, 415 Spiking neuron 157, 282, 298 Stance movement 57 Stick insect 45 Carausius morosus, 48, 58, 76, 105 Subsumption architecture 14 Swing movement 52 Turing patterns
311, 314, 331, 342, 416
Visual Homing Mutual Information Image Similarity, 154 RMS approach, 138, 145, 151 RunDown approach, 148 XOR approach, 407 Walknet 52, 57, 66, 73, 111, 184, 191, 212, 408
Author Index
Alba, L. 351, 385 Arena, P. 269, 309, 385, 399 Castellanos, N.P. 219 Cruse, H. 43, 179 De Fiore, S. 269, 385, 399 Dom´ınguez Castro, R. 351 D¨ urr, V. 43, 179 Espejo, S.
351
Frasca, M.
269
Garc´ıa, A.
351
Haferlach, T.
Utrera, C. 351
List´ an, J. 351 Lombardo, D. 219, 269, 309, 399 Makarov, V.A. 219 Mendoza, C. 351 Morillas, S. 351
Rekeczky, Cs. 371 ´ Rodr´ıguez-V´ azquez, A. Romay, R. 351 Rosano, H. 97 Russo, P. 97 Schilling, M. 43, 179 Schmitz, J. 43, 179 Song, Y.L. 219 Szenher, M. 97
97
Jim´enez, A. 351 Jim´enez-Garrido, F.
Pardo, Ma.D. 351 Patan´e, L. 269, 309, 385, 399
351
Velarde, M.G.
219
Webb, B. 3, 97 Wessnitzer, J. 3, 97 Zampoglou, M. 97 ´ Zar´ andy, A. 371
351
Cognitive Systems Monographs Edited by R. Dillmann, Y. Nakamura, S. Schaal and D. Vernon Vol. 1: Arena, P.; Patanè, L. (Eds.) Spatial Temporal Patterns for Action-Oriented Perception in Roving Robots 425 p. 2009 [978-3-540-88463-7]