Parameter quantization effects in Gaussian potential function neural networks

gdc.relation.journal Advances in Neural Networks and Applications en_US
dc.contributor.author Karakuş, Erkan
dc.contributor.author Öğrenci, Arif Selçuk
dc.contributor.author Dündar, Günhan
dc.date.accessioned 2019-06-28T11:11:57Z
dc.date.available 2019-06-28T11:11:57Z
dc.date.issued 2001
dc.description.abstract In hardware implementations of Gaussian Potential Function Neural Networks (GPFNN), deviation from the ideal network parameters is inevitable because of the techniques used for parameter storage and for the electronic implementation of the functions, resulting in a loss of accuracy. This loss of accuracy can be represented by quantization of the network parameters. In order to predict this effect, theoretical approaches are proposed. One-input, one-output GPFNN with one hidden layer have been trained as function approximators using the gradient descent algorithm. After training, the network parameters (the means and standard deviations of the hidden units and the connection weights) are quantized to up to 16 bits in order to observe the percentage error at the network output stemming from parameter quantization. Simulation results are compared with the predictions of the theoretical approach. Consequently, the behaviour of the network output is given for combined and separate parameter quantizations. Moreover, given the allowed percentage error for the network, a method is proposed whereby the minimum number of bits required for quantization of each parameter can be determined based on the theoretical predictions. en_US
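
The quantization experiment described in the abstract can be illustrated with a minimal sketch (Python/NumPy). The 4-unit hidden layer, the parameter ranges, and the uniform per-parameter quantizer below are illustrative assumptions, not the paper's exact setup or its theoretical error model.

import numpy as np

# One-input, one-output GPFNN with one hidden layer:
# y(x) = sum_i w_i * exp(-(x - m_i)^2 / (2 * s_i^2))
def gpfnn_output(x, w, m, s):
    return np.sum(w * np.exp(-(x - m) ** 2 / (2.0 * s ** 2)), axis=-1)

# Uniform quantizer over each parameter vector's own range
# (an assumed quantization scheme for illustration only).
def quantize(p, bits):
    lo, hi = p.min(), p.max()
    if hi == lo:
        return p.copy()
    step = (hi - lo) / (2 ** bits - 1)
    return lo + np.round((p - lo) / step) * step

# Hypothetical "trained" parameters of a 4-unit hidden layer.
rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, 4)   # connection weights
m = rng.uniform(0.0, 1.0, 4)    # Gaussian means
s = rng.uniform(0.1, 0.5, 4)    # Gaussian standard deviations

x = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
y_ideal = gpfnn_output(x, w, m, s)

# Percentage output error when all parameters are quantized together,
# swept up to 16 bits as in the abstract.
for bits in (4, 8, 12, 16):
    y_q = gpfnn_output(x, quantize(w, bits), quantize(m, bits), quantize(s, bits))
    err = 100.0 * np.max(np.abs(y_q - y_ideal)) / np.max(np.abs(y_ideal))
    print(f"{bits:2d} bits: max output error = {err:.4f} %")

Quantizing w, m, and s one at a time instead of together gives the separate-parameter curves; comparing each curve against an allowed percentage error is the idea behind choosing a minimum bit width per parameter.
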
dc.identifier.citationcount 0
dc.identifier.isbn 9608052262
dc.identifier.scopus 2-s2.0-4944220484 en_US
dc.identifier.uri https://hdl.handle.net/20.500.12469/1744
dc.identifier.uri https://www.semanticscholar.org/paper/Parameter-Quantization-Effects-in-Gaussian-Function-KARAKU-Ar/71acc1975e0e662088a34467acf19cd252044b90
dc.language.iso en en_US
dc.publisher World Scientific and Engineering Academy and Society en_US
dc.rights info:eu-repo/semantics/openAccess en_US
dc.subject Gaussian potential function neural networks en_US
dc.subject Training en_US
dc.subject Weight quantization en_US
dc.title Parameter quantization effects in Gaussian potential function neural networks en_US
dc.type Article en_US
dspace.entity.type Publication
gdc.author.institutional Öğrenci, Arif Selçuk en_US
gdc.coar.access open access
gdc.coar.type text::journal::journal article
gdc.description.department Fakülteler, Mühendislik ve Doğa Bilimleri Fakültesi, Elektrik-Elektronik Mühendisliği Bölümü en_US
gdc.description.endpage 252
gdc.description.publicationcategory Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı en_US
gdc.description.startpage 247 en_US
gdc.scopus.citedcount 0
relation.isOrgUnitOfPublication b20623fc-1264-4244-9847-a4729ca7508c
relation.isOrgUnitOfPublication.latestForDiscovery b20623fc-1264-4244-9847-a4729ca7508c

Files

Original bundle

Name: Parameter Quantization Effects in.pdf
Size: 198.08 KB
Format: Adobe Portable Document Format