Results from simulations of weight perturbation as an on-chip learning scheme for analogue VLSI neural networks are presented. The limitations of analogue hardware are modelled as realistically as possible; in particular, synaptic weight precision is defined by the smallest change in the weight-setting voltage that produces a measurable change at the output of the corresponding neuron. Tests are carried out on a hard classification problem constructed from mobile robot navigation data. The simulations show that the degradation in classification performance on a 500-pattern test set caused by the introduction of realistic hardware constraints is acceptable: with 8-bit weights, updated probabilistically and with a simplified output error criterion, the error rate increases by no more than 7% compared with weight perturbation implemented at full 32-bit precision.
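The abstract names the training scheme but not its mechanics, so a minimal sketch of weight perturbation with quantised weights may help. Everything below is an assumption for illustration, not the paper's implementation: the tanh network, the [-1, 1] weight range, the perturbation size delta, the update probability p_update, and the toy data are all invented; only the ideas of 256 (8-bit) weight levels, probabilistic updates, and learning by perturbing a weight and measuring the change in output error come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8-bit weight quantisation over an assumed voltage-like range [-1, 1].
N_LEVELS = 256
W_MIN, W_MAX = -1.0, 1.0
STEP = (W_MAX - W_MIN) / (N_LEVELS - 1)   # smallest resolvable weight change

def quantize(w):
    """Snap weights to the nearest of the 256 discrete levels."""
    return np.clip(np.round((w - W_MIN) / STEP) * STEP + W_MIN, W_MIN, W_MAX)

def forward(w, x):
    """Single-layer network with tanh neurons (illustrative only)."""
    return np.tanh(x @ w)

def error(w, x, t):
    """Sum-squared output error over a batch of patterns."""
    return float(np.sum((forward(w, x) - t) ** 2))

def weight_perturbation_step(w, x, t, delta=0.05, p_update=0.5):
    """One sweep of weight perturbation: perturb each weight in turn,
    measure the resulting change in error, and move that weight one
    quantisation level in the direction that reduces the error.
    The update is applied only with probability p_update, loosely
    mirroring the probabilistic updates mentioned in the abstract."""
    e0 = error(w, x, t)
    w_new = w.copy()
    for idx in np.ndindex(w.shape):
        w_pert = w.copy()
        w_pert[idx] += delta                  # inject the perturbation
        de = error(w_pert, x, t) - e0         # finite-difference estimate
        if de != 0 and rng.random() < p_update:
            w_new[idx] -= STEP * np.sign(de)  # one level towards lower error
    return quantize(w_new)

# Toy usage (illustrative target, not the robot-navigation data):
x = rng.uniform(-1.0, 1.0, size=(20, 2))
t = np.tanh(x @ np.array([[0.6], [-0.4]]))
w = quantize(rng.uniform(-0.1, 0.1, size=(2, 1)))
for _ in range(200):
    w = weight_perturbation_step(w, x, t)
print("final error:", error(w, x, t))
```

The property the sketch captures is that learning requires only forward evaluations of the network error, which is what makes weight perturbation attractive for on-chip analogue implementation, where exact gradients are unavailable.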

Type: Journal article
Journal: Int J Neural Syst
Publication Date: 12/1993
Volume: 4
Pages: 419-426
Keywords: Algorithms, Artificial Intelligence, Computers, Analog, Databases, Factual, Neural Networks (Computer), Neurons, Robotics, Synapses