Some precision issues in the training of analog VLSI multilayer perceptron (MLP) networks are examined. Three techniques are currently used to train analog VLSI chips: off-chip learning, chip-in-the-loop learning, and on-chip learning. It is shown that successful MLP gradient-descent learning requires weight precisions of between 9 and 13 bits. If the weight precision is less than about 12 bits, learning stops because the weight updates become smaller than the least significant bit of the weight representation and are therefore discarded.
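The stalling mechanism described above can be sketched in a few lines. This is a minimal illustration (not code from the article), assuming a signed fixed-point weight grid over a hypothetical range [-1, 1]: an update smaller than the least significant bit (LSB) rounds away to nothing, while a higher-precision representation retains it.

```python
def quantize(w, bits, w_range=1.0):
    """Round w to the nearest value on a signed fixed-point grid
    with `bits` bits spanning [-w_range, w_range]."""
    lsb = 2.0 * w_range / (2 ** bits)
    return round(w / lsb) * lsb

def apply_update(w, delta, bits):
    """Apply a gradient-descent update, then re-quantize the weight."""
    return quantize(w + delta, bits)

w = 0.5
delta = 1e-4  # a small weight update from gradient descent

# 8-bit weights: LSB = 2/256 ~ 0.0078 > delta, so the update is lost.
low = apply_update(w, delta, bits=8)

# 16-bit weights: LSB = 2/65536 ~ 3e-5 < delta, so the update survives.
high = apply_update(w, delta, bits=16)
```

At 8 bits the weight returns to exactly 0.5, so repeated small updates never accumulate and learning stalls; at 16 bits the weight moves, which is the effect behind the 9-to-13-bit requirement reported in the abstract.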

Type

Journal article

Publication Date

1995-06-01

Volume

15

Pages

54-56

Total pages

2