November 15, 1990.
A retrospective history of neurocomputing; why neurocomputing and what is neurocomputing. The biological basis of neurocomputing from molecules through cells to the brain. Mathematics for neurocomputing. Artificial neural networks overview. Technologies for neurocomputing: from VLSI to nanotechnology. A review of neurocomputers, neurochips and neurosoftware. Neurocomputing applications: pattern recognition, speech recognition, systems control and others.
Despite efforts to understand the morphology and complex function of neurons and their mutual interactions, the neuron has remained a puzzle.
Of particular interest are the immediate surroundings of the neuron, which, according to some authors, consist of glial cells whose number is an order of magnitude greater than that of the neurons. Research shows that these cells take an active part in the conduction of nerve impulses, in the formation of reactions, and in some memory functions.
This lecture should provide fundamental information on neurons, the nervous system, and its functions such as learning and memory. A model of the neuron will be developed and a simple electronic simulation will be presented.
Natural and artificial neurons. Natural and artificial neural systems. Artificial neural networks. Model of the neuron. Analog implementation of the model. Object identification. Associative memory. Learning. Circuit realization of the weighting coefficient. Circuit realization of the integrated analog neuron-type circuit. Specific applications imitating human organs. Optoelectronic ANN. Neurocomputers. Dedicated digital ANN VLSI circuits. Digital neuron-type circuit. Gate array implementation of ANN.
The talk will deal with four problems related to Hopfield neural networks:
- Simple optimization networks.
- Modular set of analog neural blocks.
- Machines with weights represented in unary form.
- Third-order networks - extensive calculations and simulations.
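As background for the topics above, the following is a minimal sketch of a discrete Hopfield network; the Hebbian outer-product storage rule, the deterministic update sweeps, and the toy patterns are illustrative textbook choices, not necessarily those used in the talk.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; patterns are vectors of +/-1."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, sweeps=5):
    """Deterministic update sweeps until the state settles (or the sweep limit)."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])  # first pattern with one flipped bit
print(recall(W, noisy))                 # recovers the first stored pattern
```

The stored patterns act as attractors: starting from a corrupted state, the updates descend the network's energy function until a stored pattern is reached, which is what makes such networks usable as associative memories and simple optimization machines.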
Adaptive systems. Features: Goal seeking and learning. Definition of learning. Learning systems. Self-conscious systems.
Types of learning: Supervised, reinforcement and self-organization. Learning in artificial intelligence systems.
Neural learning. Biological background. Learning algorithms. Implementations.
Neural algorithm based schemas. Animal learning theory.
Neurocomputing learning systems. Learning controllers. Learning for pattern classification. Learning to control dynamic systems.
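As a concrete illustration of the supervised case listed above, the sketch below trains a single perceptron on labelled examples; the toy data, learning rate, and epoch count are illustrative assumptions, not material from the lecture.

```python
import numpy as np

def perceptron_train(X, y, epochs=20, rate=0.1):
    """Supervised learning: weights change only when a labelled example is misclassified."""
    w = np.zeros(X.shape[1] + 1)          # last entry is the bias
    Xb = np.column_stack([X, np.ones(len(X))])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:        # wrong (or undecided) prediction
                w += rate * yi * xi       # nudge the boundary toward the correct side
    return w

# Linearly separable toy data: class +1 lies above the line x1 + x2 = 1.
X = np.array([[0.2, 0.1], [0.4, 0.3], [0.9, 0.8], [0.7, 0.9]])
y = np.array([-1, -1, 1, 1])
w = perceptron_train(X, y)
preds = np.sign(np.column_stack([X, np.ones(len(X))]) @ w)
print(preds)  # matches the labels y
```

The defining feature of the supervised setting is the error signal computed from a teacher-provided label; reinforcement learning replaces it with a scalar reward, and self-organization dispenses with external feedback altogether.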
The lecture presents an information processing system applicable to automatic empirical modeling of natural phenomena. It consists of an array of sensors, a self-organizing memory, an estimator and an array of actuators. Its operation corresponds to an optimal representation of the probability distribution of measured data by a set of adaptive prototypes. The adaptation rule of prototypes is derived from the maximum entropy principle and describes an optimal self-organization of formal neurons. When incomplete information is obtained by partial observation of the phenomenon, the prototypes are applicable for the retrieval of missing information by estimation of the conditional average. The operation of a corresponding system is demonstrated by the recognition of acoustic emission sources on the basis of detected signals and by prediction of a chaotic time series.
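The two operations described above can be sketched as follows; the simple competitive update rule, the kernel width, and the noisy sine-wave toy data are illustrative assumptions standing in for the lecture's maximum-entropy derivation.

```python
import numpy as np

rng = np.random.default_rng(1)

def adapt_prototypes(data, n_proto=8, rate=0.05, epochs=20):
    """Competitive adaptation: the nearest prototype moves toward each sample."""
    protos = data[rng.choice(len(data), n_proto, replace=False)].copy()
    for _ in range(epochs):
        for x in data:
            k = np.argmin(np.linalg.norm(protos - x, axis=1))
            protos[k] += rate * (x - protos[k])
    return protos

def conditional_average(protos, x_obs, obs_idx, width=0.2):
    """Estimate unobserved components as a kernel-weighted average over prototypes,
    with weights set by distance in the observed components only."""
    d2 = ((protos[:, obs_idx] - x_obs) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * width ** 2))
    return (w[:, None] * protos).sum(axis=0) / w.sum()

# Toy phenomenon: y = sin(x), sampled with noise.
x = rng.uniform(0, 2 * np.pi, 400)
data = np.column_stack([x, np.sin(x) + 0.05 * rng.normal(size=400)])
protos = adapt_prototypes(data)

# Partial observation: x is measured, y is missing and must be retrieved.
est = conditional_average(protos, np.array([np.pi / 2]), obs_idx=[0])
print(est[1])  # close to sin(pi/2) = 1
```

The prototypes compress the joint distribution of the measurements; given any subset of observed components, the same prototype set yields an estimate of the missing ones, which is the mechanism behind both the acoustic-emission recognition and the time-series prediction demonstrations.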
Recent interest in artificial neural networks (ANNs) centres around their capabilities of self-learning and generalisation, which are achieved in an implicit (distributed, connectionist) fashion rather than explicitly as in a conventional algorithm. While ANNs cannot do anything that could not be done by a "conventional" algorithm, they can perhaps do it faster or with less effort in acquiring the explicit knowledge required for a solution. This lecture will consider the sort of functions that an ANN can perform, according to details of information representation (analogue/digital), learning algorithm (supervised/unsupervised) and net topology (temporal/non-temporal). The importance of these functions to pattern recognition will be emphasised. Examples will be given of practical applications in speech and image recognition, and in machine translation.
Prof. Đuro Koruga - the inspirer of the meeting
Prof. Damper and Prof. Litovski in conversation