Parameter quantization effects in Gaussian potential function neural networks
dc.contributor.author | Karakuş, Erkan | |
dc.contributor.author | Öğrenci, Arif Selçuk | |
dc.contributor.author | Dündar, Günhan | |
dc.date.accessioned | 2019-06-28T11:11:57Z | |
dc.date.available | 2019-06-28T11:11:57Z | |
dc.date.issued | 2001 | |
dc.department | Fakülteler, Mühendislik ve Doğa Bilimleri Fakültesi, Elektrik-Elektronik Mühendisliği Bölümü | en_US |
dc.description.abstract | In hardware implementations of Gaussian Potential Function Neural Networks (GPFNN), deviation from the ideal network parameters is inevitable because of the techniques used to store parameters and to implement the functions electronically, which results in a loss of accuracy. This loss of accuracy can be represented as quantization of the network parameters. Theoretical approaches are proposed to predict this effect. One-input, one-output GPFNN with one hidden layer have been trained as function approximators using the gradient descent algorithm. After training, the network parameters (the means and standard deviations of the hidden units and the connection weights) are quantized with word lengths of up to 16 bits in order to observe the percentage error at the network output caused by parameter quantization. Simulation results are compared with the predictions of the theoretical approach, and the behaviour of the network output is reported for both combined and separate parameter quantization. Moreover, given the allowed percentage error for the network, a method is proposed by which the minimum number of bits required for the quantization of each parameter can be determined from the theoretical predictions. | en_US |
dc.identifier.citation | 0 | |
dc.identifier.endpage | 252 | |
dc.identifier.isbn | 9608052262 | |
dc.identifier.scopus | 2-s2.0-4944220484 | en_US |
dc.identifier.scopusquality | N/A | |
dc.identifier.startpage | 247 | en_US |
dc.identifier.uri | https://hdl.handle.net/20.500.12469/1744 | |
dc.identifier.uri | https://www.semanticscholar.org/paper/Parameter-Quantization-Effects-in-Gaussian-Function-KARAKU-Ar/71acc1975e0e662088a34467acf19cd252044b90 | |
dc.identifier.wosquality | N/A | |
dc.institutionauthor | Öğrenci, Arif Selçuk | en_US |
dc.language.iso | en | en_US |
dc.publisher | World Scientific and Engineering Academy and Society | en_US |
dc.relation.journal | Advances in Neural Networks and Applications | en_US |
dc.relation.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Faculty Member | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Gaussian potential function neural networks | en_US |
dc.subject | Training | en_US |
dc.subject | Weight quantization | en_US |
dc.title | Parameter quantization effects in Gaussian potential function neural networks | en_US |
dc.type | Article | en_US |
dspace.entity.type | Publication |
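
The abstract describes quantizing the trained means, standard deviations, and connection weights of a one-input, one-output GPFNN and measuring the resulting percentage error at the network output. The following minimal sketch illustrates that kind of experiment; it is not the authors' code, and the network size, parameter values, quantization ranges, and the uniform quantizer itself are illustrative assumptions.

```python
# Hedged sketch: a one-input, one-output Gaussian Potential Function Network
# and a uniform quantizer, used to estimate the percentage output error that
# parameter quantization to a given bit width would introduce.
import numpy as np

def gpfn_output(x, means, sigmas, weights):
    """Network output: weighted sum of Gaussian potential units."""
    # x: (N,) inputs; means/sigmas/weights: (H,) hidden-unit parameters
    z = (x[:, None] - means[None, :]) / sigmas[None, :]
    return np.exp(-0.5 * z**2) @ weights

def quantize(p, bits):
    """Uniform quantization of parameter vector p over its own range (assumed scheme)."""
    lo, hi = p.min(), p.max()
    levels = 2**bits - 1
    step = (hi - lo) / levels if levels > 0 else 1.0
    return lo + np.round((p - lo) / step) * step

# Hypothetical trained parameters of a 4-unit hidden layer
rng = np.random.default_rng(0)
means = rng.uniform(-1, 1, 4)
sigmas = rng.uniform(0.1, 0.5, 4)
weights = rng.uniform(-1, 1, 4)
x = np.linspace(-1, 1, 200)

y_ref = gpfn_output(x, means, sigmas, weights)
for bits in (4, 8, 12, 16):
    y_q = gpfn_output(x, quantize(means, bits),
                      quantize(sigmas, bits), quantize(weights, bits))
    pct_err = 100 * np.max(np.abs(y_q - y_ref)) / np.max(np.abs(y_ref))
    print(f"{bits:2d} bits: max output error {pct_err:.3f} %")
```

Quantizing the three parameter groups separately (as in the loop above, one group at a time) rather than together would give the per-parameter sensitivity the abstract refers to when determining the minimum bit width for each parameter.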
Files
Original bundle
- Name: Parameter Quantization Effects in.pdf
- Size: 198.08 KB
- Format: Adobe Portable Document Format