Neural network training based on FPGA with floating point number format and its performance


Cavuslu M. A., Karakuzu C., Şahin S., Yakut M.

NEURAL COMPUTING & APPLICATIONS, vol.20, no.2, pp.195-202, 2011 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 20 Issue: 2
  • Publication Date: 2011
  • DOI: 10.1007/s00521-010-0423-3
  • Journal Name: NEURAL COMPUTING & APPLICATIONS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.195-202
  • Keywords: FPGA, Artificial neural networks, VHDL, Parallel programming, Floating point arithmetic
  • Kocaeli University Affiliated: Yes

Abstract

In this paper, the training of a two-layer feed-forward artificial neural network (ANN) by back-propagation and its implementation on an FPGA (field programmable gate array) using floating-point number formats of different bit lengths are presented, with the XOR problem serving as the benchmark. In keeping with the inherently parallel data-processing nature of ANNs, particular care is taken to realize the ANN training operations in parallel on the FPGA. The training is carried out on a Virtex2vp30 chip from the Xilinx FPGA family, and the network implemented on the FPGA is coded in VHDL. Compared with results available in the literature, the technique developed here consumes less chip area for training an ANN of the same structure and bit length, and is shown to deliver better performance.
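For illustration only, the following is a minimal software sketch of the algorithm the abstract describes: a two-layer feed-forward ANN trained by back-propagation on the XOR problem. This is not the authors' VHDL design; the hidden-layer size, learning rate, epoch count, and weight initialization are assumptions made for the sketch, not values taken from the paper.

```python
import numpy as np

# XOR training set: four input patterns and their targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float64)
y = np.array([[0], [1], [1], [0]], dtype=np.float64)

rng = np.random.default_rng(0)
n_hidden, lr = 2, 0.5            # assumed hidden size and learning rate
W1 = rng.uniform(-1, 1, (2, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.uniform(-1, 1, (n_hidden, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(20000):
    # Forward pass over all four patterns at once.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Back-propagation of the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [0, 1, 1, 0]
```

Swapping np.float64 for a narrower type such as np.float16 crudely mimics the effect of reduced-bit-length floating-point formats, though it does not reproduce the custom bit lengths evaluated on the FPGA in the paper.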