Article type: Research Article
Authors: Ortega-Zamorano, Francisco [a,b,*] | Jerez, José M. [a] | Gómez, Iván [a] | Franco, Leonardo [a]
Affiliations: [a] Department of Computer Languages and Computer Science, University of Málaga, Málaga, Spain | [b] School of Mathematics and Computer Science, University of Yachay Tech, San Miguel de Urcuquí, Ecuador
Correspondence: [*] Corresponding author: Francisco Ortega-Zamorano, Departamento de Lenguajes y Ciencias de la Computación, Universidad de Málaga, Campus de Teatinos S/N, 29071, Málaga, Spain. E-mail: fortega@lcc.uma.es
Abstract: Training large-scale neural networks, such as those used in modern Deep Learning schemes, requires long computational times or high-performance computing solutions such as clusters or GPU boards. As a possible alternative, in this work the Back-Propagation learning algorithm is implemented on an FPGA board using a layer multiplexing scheme, in which a single layer of neurons is physically implemented in parallel but can be reused any number of times to simulate multi-layer architectures. The algorithm is implemented entirely on-chip, using a training/validation scheme to avoid overfitting effects. The hardware implementation is tested on several configurations, simulating architectures of up to 127 hidden layers with a maximum of 60 neurons per layer. We confirmed the correct implementation of the algorithm and compared computation times against C and Matlab code executed on a multicore supercomputer, observing a clear advantage for the proposed FPGA scheme. The layer multiplexing scheme provides a simpler and more flexible approach than standard implementations of the Back-Propagation algorithm, representing an important step towards the FPGA implementation of deep neural networks, one of the most novel and successful existing models for prediction problems.
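The layer multiplexing idea described in the abstract can be sketched in software: instead of instantiating separate logic for every layer, one block of layer logic is reused, reading each virtual layer's weights from memory in turn and overwriting a single activation buffer. Below is a minimal C sketch of the forward pass under this scheme; the function names, memory layout, and sigmoid activation are illustrative assumptions, not details taken from the paper's hardware design.

#include <stdio.h>
#include <math.h>

#define MAX_LAYERS  127   /* largest architecture reported: 127 hidden layers */
#define MAX_NEURONS 60    /* maximum neurons per layer reported */

/* Weight memory for all virtual layers; only one layer's worth of neuron
 * logic is evaluated at a time, mirroring the multiplexed hardware layer.
 * Layout: weights[layer][neuron][input], with the bias stored at the
 * last input index. */
static double weights[MAX_LAYERS][MAX_NEURONS][MAX_NEURONS + 1];

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

/* Forward pass reusing a single activation buffer: each iteration of the
 * outer loop plays the role of one clocked reuse of the hardware layer. */
void forward(const double *input, int n_inputs,
             const int *layer_sizes, int n_layers, double *act)
{
    double next[MAX_NEURONS];
    int prev = n_inputs;
    for (int i = 0; i < prev; ++i) act[i] = input[i];

    for (int l = 0; l < n_layers; ++l) {
        for (int j = 0; j < layer_sizes[l]; ++j) {
            double sum = weights[l][j][prev];          /* bias term */
            for (int i = 0; i < prev; ++i)
                sum += weights[l][j][i] * act[i];
            next[j] = sigmoid(sum);
        }
        for (int j = 0; j < layer_sizes[l]; ++j) act[j] = next[j];
        prev = layer_sizes[l];
    }
}

int main(void)
{
    const double x[2] = { 0.5, -1.0 };
    const int sizes[3] = { 4, 4, 1 };   /* two hidden layers plus output */
    double out[MAX_NEURONS];
    forward(x, 2, sizes, 3, out);
    printf("output: %f\n", out[0]);
    return 0;
}

The same memory-indexed reuse would apply to the backward pass: the error terms of virtual layer l are computed from the stored activations and the weights of layer l+1, again exercising only one layer's worth of logic per step.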
Keywords: Hardware implementation, FPGA, supervised learning, deep neural networks, layer multiplexing
DOI: 10.3233/ICA-170538
Journal: Integrated Computer-Aided Engineering, vol. 24, no. 2, pp. 171-185, 2017