Ulrich Ramacher, Christoph von der Malsburg, editors
Physical description
Specific material designation and extent
1 online resource (viii, 359 pages) :
Other physical details
illustrations
Notes on bibliography, glossary, and indexes
Note text
Includes bibliographical references and index
Contents notes
Note text
Cover -- Contents -- Prologue -- 0.1 Main Results -- 0.2 Prehistory of Our Project -- 0.3 Acknowledgement -- 1 The Difficulty of Modelling Artificial Brains -- 1.1 McCulloch-Pitts Model -- 1.2 Learning Nets -- 1.3 Spiking Neurons -- 1.4 Architecture of Vision -- 1.5 The Steps of the Construction Process -- 1.6 Summary -- 2 Information Processing in Nets with Constant Synapses -- 2.1 Generic Signal Equations for Pulse Neurons and Synapses -- 2.2 Partitions and Their Time Development -- 2.3 Experiments with Constant Synapses -- 2.4 Entropy and Transfer Function of a Net -- 2.5 Operating Range of a Net -- 2.6 Pulse Rates -- 2.7 Resolution and Net Size -- 2.8 Application Potential -- 2.9 Limited Simulation Time -- 2.10 Summary -- 3 Theory of Nets with Constant or Dynamic Synapses -- 3.1 Derivation of the Signal Energy -- 3.2 Temporal Mean and Spatial Mean -- 3.3 Determination of the Frequency Distribution -- 3.4 Summary -- 4 Macro-Dynamics of Nets with Constant Synapses -- 4.1 Known Synapses -- 4.2 Known Distribution of Synapses -- 4.3 Agreement of Theory with Experiment -- 4.4 Lack of Correlation -- 4.5 Determining the Signal Energy and Entropy by Pulse Rates -- 4.6 Summary -- 5 Information Processing with Dynamic Synapses -- 5.1 The Types of Solutions of Synaptic Equations -- 5.2 Synchronisation of Neurons -- 5.3 Segmentation per Synchronisation -- 5.4 Calculation of Pulse Differences and Sums -- 5.5 Simple Applications -- 5.6 Time Coding and Correlation -- 5.7 Entropy and State Space -- 5.8 Preliminary Considerations on the Statistics of Synchronisation -- 5.9 Summary -- 6 Nets for Feature Detection -- 6.1 Overview of Visual System -- 6.2 Simple Cells -- 6.3 Creation of Detector Profiles for Gabor Wavelets -- 6.4 Experimental Check -- 6.5 Summary -- 7 Nets for Feature Recognition -- 7.1 Principles of Object Recognition -- 7.2 Net Architecture for Robust Feature Recognition -- 7.3 Feature Recogniser -- 7.4 Selectivity -- 7.5 Orthogonality of Rotation -- 7.6 
Invariance of Function as to Brightness -- 7.7 Invariance of Function as to Form and Mimic -- 7.8 Generating Object Components through Binding of Features -- 7.9 Summary -- 8 Nets for Robust Head Detection -- 8.1 Results of Head Detection -- 8.2 Next Steps -- 8.3 Summary -- 9 Extensions of the Vision Architecture -- 9.1 Distance-Invariant Feature Pyramid -- 9.2 The Inner Screen -- 9.3 Summary -- 10 Look-out -- 10.1 Data Format of the Brain -- 10.2 Self-Organisation -- 10.3 Learning -- 10.4 Invariant Object Recognition -- 10.5 Structured Memory Domains -- 10.6 Summary -- 11 Preliminary Considerations on the Microelectronic Implementation -- 11.1 Equivalent Representations -- 11.2 Microelectronic Implementations -- 11.3 Models of Neurons and Synapses -- 12 Elementary Circuits for Neurons, Synapses, and Photosensors -- 12.1 Neuron -- 12.2 Adaptive Synapses -- 12.3 Photosensors -- 12.4 DA-Converters and Analogue Image Storage -- 12.5 Summary -- 13 Simulation of Microelectronic Neural Circuits and Systems -- 13.1 Modelling of Neurons and Synapses -- 13.2 Results of Modelling -- 13.3 Notes on the Simulation Procedure -- 13.4 Summary -- 14 Architecture and Chip Design of the Feature Recognizer -- 14.1 Chip Architecture of the Feature Recognizer
Summary or abstract notes
Note text
This book presents a first generation of artificial brains, using vision as a sample application. An object recognition system is built, using neurons and synapses as exclusive building elements. The system contains a feature pyramid with 8 orientations and 5 resolution levels for 1000 objects, and networks for binding features into objects. This vision system can recognize objects robustly in the presence of changes in illumination, deformation, distance, and pose (as long as object components remain visible). The neuro-synaptic network owes its functional power to the introduction of rapidly modifiable dynamic synapses. These give a network greater pattern recognition capabilities than are achievable with fixed connections. The spatio-temporal correlation structure of patterns is captured by a single synaptic differential equation in a universal way. The correlation can appear as synchronous neural firing, which signals the presence of a feature in a robust way or binds features into objects. Although in this book we can present only a first-generation artificial brain and believe many more generations will have to follow to reach the full power of the human brain, we nevertheless see a new era of computation on the horizon. There were times when computers, with their precision, reliability, and blinding speed, were considered to be as superior to the wet matter of our brain as a jet plane is to a sparrow. These times seem to be over, given the fact that digital systems inspired by formal logic and controlled algorithmically - today's computers - are hitting a complexity crisis. A paradigm change is in the air: from the externally organised to the self-organised computer, of which the results described in this book may give an inkling.
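The abstract's building blocks - pulse (spiking) neurons whose firing rates and synchrony carry the signal - can be illustrated with a generic leaky integrate-and-fire model. This is a minimal sketch of the standard textbook model, not the book's specific signal equations; all parameter names and values here (`tau`, `v_th`, the input current) are illustrative assumptions.

```python
def lif_simulate(i_ext, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0, steps=200):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron.

    Membrane dynamics: dv/dt = (-v + i_ext) / tau.
    When v crosses the threshold v_th, the neuron emits a pulse
    and the potential is reset to v_reset.
    Returns the list of spike times (in time steps).
    """
    v = 0.0
    spikes = []
    for t in range(steps):
        v += dt * (-v + i_ext) / tau  # leaky integration of the input current
        if v >= v_th:                 # threshold crossing: emit a pulse
            spikes.append(t)
            v = v_reset               # reset after the pulse
    return spikes

# Suprathreshold drive produces a regular pulse train;
# subthreshold drive (i_ext < v_th) produces none.
regular = lif_simulate(i_ext=1.5)
silent = lif_simulate(i_ext=0.9)
```

With constant drive the inter-spike interval is constant, which is the simplest form of the rate coding discussed in the contents note; the book's dynamic synapses additionally exploit the relative timing of such pulses across neurons.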