Simple but effective GRU variants

dc.authorscopusid 57215312808
dc.authorscopusid 55664402200
dc.contributor.author Yigit, G.
dc.contributor.author Amasyali, M.F.
dc.date.accessioned 2023-10-19T15:05:33Z
dc.date.available 2023-10-19T15:05:33Z
dc.date.issued 2021
dc.department-temp Yigit, G., Kadir Has University, Computer Engineering Department, İstanbul, Turkey; Amasyali, M.F., Yildiz Technical University, Computer Engineering Department, İstanbul, Turkey en_US
dc.description Kocaeli University; Kocaeli University Technopark en_US
dc.description 2021 International Conference on INnovations in Intelligent SysTems and Applications, INISTA 2021 -- 25 August 2021 through 27 August 2021 -- Conference code: 172175 en_US
dc.description.abstract The Recurrent Neural Network (RNN) is a widely used deep learning architecture for sequence learning problems. However, RNNs are known to suffer from exploding and vanishing gradients, which prevent the early layers of the network from receiving useful gradient information. Gated Recurrent Unit (GRU) networks are a particular kind of recurrent network that mitigates these problems. In this study, we propose two variants of the standard GRU with simple but effective modifications. We take an empirical approach, weighting the current (input) and recurrent terms of the gates with different coefficients to assess their contributions. Interestingly, such minor and simple changes to the standard GRU provide notable improvements. We comparatively evaluate the standard GRU and the two proposed variants on four tasks: (1) sentiment classification on the IMDB movie review dataset, (2) language modeling on the Penn TreeBank (PTB) dataset, (3) a sequence-to-sequence addition problem, and (4) question answering on Facebook's bAbI tasks dataset. The evaluation results indicate that the two proposed GRU variants consistently outperform the standard GRU. © 2021 IEEE. en_US
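The abstract describes the variants only at a high level: different coefficients are applied to the current (input) and recurrent terms inside the gates. The paper's exact formulations are not given in this record, so the following is a minimal PyTorch sketch under that assumption; the class name CoefficientGRUCell and the scalar coefficients alpha and beta are hypothetical illustrations, with alpha = beta = 1 recovering the standard GRU.

```python
import torch
import torch.nn as nn

class CoefficientGRUCell(nn.Module):
    """GRU cell whose gates weight the input-driven term by alpha and the
    recurrent term by beta (hypothetical names; the paper's exact
    coefficients are not specified in this record).
    alpha = beta = 1.0 reduces to the standard GRU."""

    def __init__(self, input_size, hidden_size, alpha=1.0, beta=1.0):
        super().__init__()
        self.alpha, self.beta = alpha, beta
        self.W = nn.Linear(input_size, 3 * hidden_size)   # input-to-hidden weights for z, r, n
        self.U = nn.Linear(hidden_size, 3 * hidden_size)  # hidden-to-hidden weights for z, r, n

    def forward(self, x, h):
        wz, wr, wn = self.W(x).chunk(3, dim=-1)
        uz, ur, un = self.U(h).chunk(3, dim=-1)
        z = torch.sigmoid(self.alpha * wz + self.beta * uz)  # update gate with weighted terms
        r = torch.sigmoid(self.alpha * wr + self.beta * ur)  # reset gate with weighted terms
        n = torch.tanh(wn + r * un)                          # candidate hidden state
        return (1.0 - z) * n + z * h                         # standard GRU state interpolation

# Example: one step over a batch of 4 sequences with 8 input features.
cell = CoefficientGRUCell(input_size=8, hidden_size=16, alpha=1.5, beta=0.5)
x = torch.randn(4, 8)
h = torch.zeros(4, 16)
h_next = cell(x, h)  # shape: (4, 16)
```

Only the gate pre-activations are reweighted here; the candidate state keeps its standard form. This is one of several ways the abstract's description could be realized.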
dc.description.sponsorship G. Yigit is supported by the TÜBİTAK BİDEB 2211/A national fellowship program for Ph.D. studies. en_US
dc.identifier.citationcount 6
dc.identifier.doi 10.1109/INISTA52262.2021.9548535 en_US
dc.identifier.isbn 9781665436038
dc.identifier.scopus 2-s2.0-85116609087 en_US
dc.identifier.scopusquality N/A
dc.identifier.uri https://doi.org/10.1109/INISTA52262.2021.9548535
dc.identifier.uri https://hdl.handle.net/20.500.12469/4942
dc.identifier.wosquality N/A
dc.khas 20231019-Scopus en_US
dc.language.iso en en_US
dc.publisher Institute of Electrical and Electronics Engineers Inc. en_US
dc.relation.ispartof 2021 International Conference on INnovations in Intelligent SysTems and Applications, INISTA 2021 - Proceedings en_US
dc.relation.publicationcategory Conference Item - International - Institutional Faculty Member en_US
dc.rights info:eu-repo/semantics/closedAccess en_US
dc.scopus.citedbyCount 11
dc.subject Gated recurrent units en_US
dc.subject Recurrent neural networks en_US
dc.subject Seq2seq en_US
dc.subject Classification (of information) en_US
dc.subject Modeling languages en_US
dc.subject Multilayer neural networks en_US
dc.subject Network layers en_US
dc.subject Gated recurrent unit en_US
dc.subject Gradient informations en_US
dc.subject Learning architectures en_US
dc.subject Learning problem en_US
dc.subject Recurrent networks en_US
dc.subject Sequence learning en_US
dc.subject Short-comings en_US
dc.subject Vanishing gradient en_US
dc.title Simple but effective GRU variants en_US
dc.type Conference Object en_US
dspace.entity.type Publication

Files

Original bundle

Name: 4942.pdf
Size: 1.2 MB
Format: Adobe Portable Document Format
Description: Tam Metin / Full Text