Learning with analogue VLSI MLPs
Cairns G., Tarassenko L.
Much work has been undertaken to demonstrate the advantages of analogue VLSI for implementing neural architectures. This paper addresses the issues surrounding 'in-situ' learning with analogue VLSI multi-layer perceptron (MLP) networks. In particular, the authors propose that 'chip-in-the-loop' learning is, at the very least, necessary to overcome typical analogue process variations, and argue that MLPs built from analogue circuits with 8-bit precision can be successfully trained provided the weights have digital representations of at least 12 bits. They demonstrate that weight perturbation, with careful choice of the perturbation size, gives better results than backpropagation, at the cost of increased training time, and go on to show why weight perturbation is possibly the only sensible way to implement MLP learning 'on-chip'. The authors have designed a set of analogue VLSI chips specifically to test whether their theoretical results on learning hold in practice. Although these chips are experimental, the intention is to use them to solve 'real-world' problems of relatively low input dimensionality, such as speaker identification.
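The weight-perturbation scheme described above can be sketched in software. The following is an illustrative reconstruction, not the authors' circuit-level implementation: the network, the toy regression task, and all parameter values (hidden-layer size, learning rate, weight range) are assumptions chosen for demonstration. Each weight is perturbed in turn, the resulting change in output error yields a finite-difference gradient estimate, and weights are held at 12-bit digital resolution so that the perturbation is exactly one least-significant bit of the stored weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (illustrative, not from the paper): a smooth
# low-dimensional target, echoing the 'low input dimensionality' setting.
X = rng.uniform(-1.0, 1.0, size=(32, 2))
t = np.sin(X[:, 0]) * np.cos(X[:, 1])

# Small MLP: 2 inputs -> 4 tanh hidden units -> 1 linear output.
shapes = [(2, 4), (4,), (4, 1), (1,)]
sizes = [int(np.prod(s)) for s in shapes]

# 12-bit digital weight storage over an assumed range of +/- 2.0.
W_MAX = 2.0
LSB = 2 * W_MAX / 2**12          # one least-significant bit
DELTA = LSB                       # perturbation size = one LSB of the weights
LR = 0.1

def quantize(w):
    """Snap weights onto the 12-bit grid (models digital weight storage)."""
    return np.clip(np.round(w / LSB) * LSB, -W_MAX, W_MAX)

def unpack(w):
    out, i = [], 0
    for s, n in zip(shapes, sizes):
        out.append(w[i:i + n].reshape(s))
        i += n
    return out

def error(w):
    """Mean-squared error of the MLP on the toy task."""
    W1, b1, W2, b2 = unpack(w)
    y = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
    return np.mean((y - t) ** 2)

w = quantize(rng.normal(0.0, 0.5, size=sum(sizes)))
initial = error(w)

for epoch in range(200):
    e0 = error(w)
    grad = np.empty_like(w)
    for i in range(w.size):
        w[i] += DELTA                        # perturb one weight
        grad[i] = (error(w) - e0) / DELTA    # forward-difference estimate
        w[i] -= DELTA                        # restore it
    w = quantize(w - LR * grad)              # update, then re-quantize

final = error(w)
```

Note the cost structure the abstract alludes to: each epoch needs one forward pass per weight, which is why weight perturbation trades increased training time for freedom from backpropagated analogue error signals. The choice of DELTA matters in hardware: too small and the measured error change disappears into analogue noise, too large and the gradient estimate is biased.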