Browsing by Author "Amasyali, Mehmet Fatih"
Now showing 1 - 3 of 3
Article (Citation Count: 3)
Assessing the Impact of Minor Modifications on the Interior Structure of GRU: GRU1 and GRU2 (Wiley, 2022)
Yigit, Gulsum; Amasyali, Mehmet Fatih
In this study, two GRU variants named GRU1 and GRU2 are proposed by making simple changes to the internal structure of the standard GRU, one of the popular RNN variants. Comparative experiments are conducted on four problems: language modeling, question answering, the addition task, and sentiment analysis. Moreover, in the addition task, curriculum learning and anti-curriculum learning strategies, which extend the training data with examples ordered from easy to hard or from hard to easy, are comparatively evaluated. In the experiments, the GRU1 and GRU2 variants outperformed the standard GRU. In addition, the curriculum learning approach, in which the training data is extended from easy to difficult examples, improves performance considerably.

Article (Citation Count: 0)
Enhancing Multiple-Choice Question Answering Through Sequential Fine-Tuning and Curriculum Learning Strategies (Springer London Ltd, 2023)
Yigit, Gulsum; Amasyali, Mehmet Fatih
With transformer-based pre-trained language models, multiple-choice question answering (MCQA) systems can reach a certain level of performance. This study focuses on inheriting the benefits of the contextualized language representations acquired by language models and on transferring and sharing information among MCQA datasets. In this work, a method called multi-stage fine-tuning, based on the curriculum learning strategy, is presented; it sequences not individual training samples but the source datasets themselves in a meaningful rather than random order. An extensive series of experiments over various MCQA datasets shows that the proposed method achieves notable performance improvements over classical fine-tuning with the baselines T5 and RoBERTa.
Moreover, the experiments are conducted on merged source datasets, and the proposed method again achieves improved performance. This study shows that increasing the number of source datasets, even including some small-scale ones, helps build well-generalized models. Higher similarity between the source and target datasets also plays a vital role in performance.

Review (Citation Count: 0)
From Text to Multimodal: A Survey of Adversarial Example Generation in Question Answering Systems (Springer London Ltd, 2024)
Yigit, Gulsum; Amasyali, Mehmet Fatih
Integrating adversarial machine learning with question answering (QA) systems has emerged as a critical area for understanding the vulnerabilities and robustness of these systems. This article reviews adversarial example-generation techniques in the QA field, covering both textual and multimodal contexts. We examine the employed techniques through systematic categorization, providing a structured review. Beginning with an overview of traditional QA models, we traverse adversarial example generation by exploring rule-based perturbations and advanced generative models. We then extend the scope to multimodal QA systems, analyzing them across various methods and examining generative models, seq2seq architectures, and hybrid methodologies. Our review further covers defense strategies, adversarial datasets, and evaluation metrics, and surveys the literature on adversarial QA. Finally, the paper considers the future landscape of adversarial question generation, highlighting potential research directions that can advance textual and multimodal QA systems in the context of adversarial challenges.
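The curriculum learning strategy described in the first two entries, extending the training data from easy to hard examples, can be sketched in a few lines. This is an illustrative toy version on the addition task only; all function names and the length-based difficulty proxy are assumptions, not code from the papers.

```python
# Toy curriculum learning sketch for the addition task: sort examples by a
# difficulty proxy (input length) and extend the training pool stage by stage.
import random

def make_example(n_digits):
    """Generate one addition example as (input_string, answer_string)."""
    a = random.randint(0, 10 ** n_digits - 1)
    b = random.randint(0, 10 ** n_digits - 1)
    return f"{a}+{b}", str(a + b)

def difficulty(example):
    """Use the input-string length as a simple proxy for difficulty."""
    return len(example[0])

def curriculum_pools(examples, n_stages=3):
    """Yield cumulative easy-to-hard training pools: each stage keeps the
    earlier (easier) examples and extends the pool with harder ones."""
    ordered = sorted(examples, key=difficulty)
    stage_size = len(ordered) // n_stages
    for stage in range(1, n_stages + 1):
        yield ordered if stage == n_stages else ordered[: stage * stage_size]

random.seed(0)
data = [make_example(random.randint(1, 6)) for _ in range(300)]
for i, pool in enumerate(curriculum_pools(data), start=1):
    print(f"stage {i}: {len(pool)} examples, "
          f"max input length {max(map(difficulty, pool))}")
```

Reversing the sort order gives the anti-curriculum (hard-to-easy) variant the first paper compares against.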
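The rule-based perturbations mentioned in the survey entry can be illustrated with a minimal character-level attack on a QA question: swapping two adjacent characters inside a content word, which often preserves human readability while shifting the model's input. This is a generic sketch of the technique class, not a method from the survey; the function names are hypothetical.

```python
# Minimal rule-based adversarial perturbation for a textual QA input:
# swap the two middle characters of the first sufficiently long word.
def swap_adjacent(word):
    """Swap the two middle characters of a word (no-op if shorter than 4)."""
    if len(word) < 4:
        return word
    mid = len(word) // 2
    chars = list(word)
    chars[mid - 1], chars[mid] = chars[mid], chars[mid - 1]
    return "".join(chars)

def perturb_question(question, min_len=5):
    """Return the question with its first long alphabetic word perturbed."""
    words = question.split()
    for i, w in enumerate(words):
        if len(w) >= min_len and w.isalpha():
            words[i] = swap_adjacent(w)
            break
    return " ".join(words)

print(perturb_question("Which river flows through Paris?"))
```

Surveyed approaches range from such simple rules to generative and seq2seq attackers, but all share this shape: a transformation applied to the input while the reference answer is held fixed.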