Yiğit, Gülsüm

Name Variants
Y.,Gulsum
Gülsüm YIĞIT
Yiğit, GÜLSÜM
Gülsüm Yiğit
GÜLSÜM YIĞIT
G. Yiğit
Yiğit,G.
Y., Gülsüm
Yigit,Gulsum
YIĞIT, Gülsüm
Yiğit, Gülsüm
Gulsum, Yigit
Yiğit, G.
Yigit,G.
Y., Gulsum
YIĞIT, GÜLSÜM
Yigit, Gulsum
Job Title
Research Assistant (Araş. Gör.)
Email Address
Main Affiliation
Computer Engineering
Status
Current Staff
Website
Scopus Author ID
Turkish CoHE Profile ID
Google Scholar ID
WoS Researcher ID

Sustainable Development Goals

GOOD HEALTH AND WELL-BEING (SDG 3): 1 research product
All other SDGs (1, 2, 4-17): 0 research products
Documents: 12
Citations: 48
h-index: 5
Documents: 0
Citations: 0
Scholarly Output: 11
Articles: 6
Views / Downloads: 69 / 420
Supervised MSc Theses: 0
Supervised PhD Theses: 0
WoS Citation Count: 13
Scopus Citation Count: 44
WoS h-index: 3
Scopus h-index: 4
Patents: 0
Projects: 0
WoS Citations per Publication: 1.18
Scopus Citations per Publication: 4.00
Open Access Source: 2
Supervised Theses: 0


Journal: Count
Knowledge and Information Systems: 2
2019 Innovations in Intelligent Systems and Applications Conference (ASYU): 1
2021 International Conference on INnovations in Intelligent SysTems and Applications, INISTA 2021 - Proceedings: 1
2023 Innovations in Intelligent Systems and Applications Conference, ASYU 2023 (11-13 October 2023, Sivas): 1
Bilişim Teknolojileri Dergisi: 1
Page 1 of 2


Scholarly Output Search Results

Now showing 1 - 10 of 11
  • Article
    Citation - WoS: 1
    Citation - Scopus: 5
    Enhancing Multiple-Choice Question Answering Through Sequential Fine-Tuning and Curriculum Learning Strategies
    (Springer London Ltd, 2023) Yigit, Gulsum; Amasyali, Mehmet Fatih
    With transformer-based pre-trained language models, multiple-choice question answering (MCQA) systems can reach a certain level of performance. This study focuses on inheriting the benefits of the contextualized language representations acquired by such models and on transferring and sharing information among MCQA datasets. A multi-stage fine-tuning method based on a Curriculum Learning strategy is presented: rather than ordering individual training samples, it sequences the source datasets themselves in a meaningful, non-random order. An extensive series of experiments over various MCQA datasets shows that the proposed method yields notable performance gains over classical fine-tuning with the T5 and RoBERTa baselines. Experiments on merged source datasets also show improved performance. The study demonstrates that increasing the number of source datasets, even when some are small-scale, helps build well-generalized models, and that higher similarity between the source datasets and the target also plays a vital role in performance.
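
The dataset-level curriculum described above can be sketched in a few lines. The snippet below is a minimal illustration, assuming a placeholder fine_tune() stage and made-up similarity scores between source datasets and the target; it is not the authors' code, and the real method fine-tunes actual T5/RoBERTa models on real MCQA datasets.

```python
# Sketch: multi-stage fine-tuning with a dataset-level curriculum.
# The similarity scores and fine_tune() are illustrative placeholders.

def fine_tune(model_state, dataset_name):
    """Placeholder for one fine-tuning stage; returns an updated model state."""
    print(f"fine-tuning on {dataset_name}")
    return model_state + [dataset_name]

# Hypothetical similarity of each source MCQA dataset to the target task,
# e.g. estimated from vocabulary overlap or question-style statistics.
source_similarity = {
    "source_A": 0.35,
    "source_B": 0.62,
    "source_C": 0.81,
}

# Curriculum: least similar source first, most similar last, so the final
# source stage is the one closest to the target dataset.
curriculum = sorted(source_similarity, key=source_similarity.get)

model_state = []
for name in curriculum:
    model_state = fine_tune(model_state, name)

# Final stage: fine-tune on the target MCQA dataset itself.
model_state = fine_tune(model_state, "target_dataset")
```
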
  • Review
    Citation - WoS: 5
    Citation - Scopus: 9
    From Text to Multimodal: A Survey of Adversarial Example Generation in Question Answering Systems
    (Springer London Ltd, 2024) Yigit, Gulsum; Amasyali, Mehmet Fatih
    Integrating adversarial machine learning with question answering (QA) systems has emerged as a critical area for understanding the vulnerabilities and robustness of these systems. This article reviews adversarial example-generation techniques in the QA field, covering both textual and multimodal contexts. We examine the techniques through systematic categorization, providing a structured review. Beginning with an overview of traditional QA models, we survey adversarial example generation, from rule-based perturbations to advanced generative models. We then extend the review to multimodal QA systems, analyzing generative models, seq2seq architectures, and hybrid methodologies. The review further covers defense strategies, adversarial datasets, and evaluation metrics, and maps the literature on adversarial QA. Finally, the paper considers the future landscape of adversarial question generation, highlighting potential research directions that can advance textual and multimodal QA systems in the face of adversarial challenges.
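
As a concrete illustration of the rule-based perturbation family the survey covers, here is a toy character-swap perturbation of a question string; the function name and constraint choices are illustrative only and do not correspond to any specific method reviewed in the article.

```python
import random

def swap_adjacent_chars(question: str, seed: int = 0) -> str:
    """Toy rule-based perturbation: swap two adjacent characters inside one word.

    Illustrates only the idea of rule-based adversarial text perturbation;
    published attacks typically add constraints (e.g. preserving answerability).
    """
    rng = random.Random(seed)
    words = question.split()
    # Pick a word long enough to perturb without destroying it entirely.
    candidates = [i for i, w in enumerate(words) if len(w) > 3]
    if not candidates:
        return question
    i = rng.choice(candidates)
    w = list(words[i])
    j = rng.randrange(len(w) - 1)
    w[j], w[j + 1] = w[j + 1], w[j]
    words[i] = "".join(w)
    return " ".join(words)

print(swap_adjacent_chars("What river flows through Paris?"))
```
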
  • Article
    Citation - WoS: 3
    Citation - Scopus: 4
    Assessing the Impact of Minor Modifications on the Interior Structure of GRU: GRU1 and GRU2
    (Wiley, 2022) Yigit, Gulsum; Amasyali, Mehmet Fatih
    In this study, two GRU variants named GRU1 and GRU2 are proposed by making simple changes to the internal structure of the standard GRU, one of the popular RNN variants. Comparative experiments are conducted on four problems: language modeling, question answering, the addition task, and sentiment analysis. For the addition task, curriculum learning and anti-curriculum learning strategies, which expand the training data with examples ordered from easy to hard or from hard to easy, are also compared. The GRU1 and GRU2 variants outperform the standard GRU, and the curriculum learning approach, in which the training data is expanded from easy to difficult, improves performance considerably.
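
To make the easy-to-hard ordering concrete, the sketch below sorts toy addition-task examples by an assumed difficulty measure (expression length) and expands the training set stage by stage; the data, the difficulty function, and the stage sizes are illustrative, not the paper's setup.

```python
import random

# Toy addition-task examples: input expression -> target sum.
rng = random.Random(0)
examples = []
for _ in range(20):
    digits = rng.randint(1, 4)
    a = rng.randrange(10 ** digits)
    b = rng.randrange(10 ** digits)
    examples.append((f"{a}+{b}", str(a + b)))

def difficulty(example):
    text, _ = example
    return len(text)  # longer expressions treated as harder (assumed measure)

# Curriculum learning: expand the training set from easy to hard.
curriculum_order = sorted(examples, key=difficulty)
# Anti-curriculum: the reverse, from hard to easy.
anti_curriculum_order = sorted(examples, key=difficulty, reverse=True)

for stage in range(1, 5):
    subset = curriculum_order[: stage * 5]
    print(f"stage {stage}: training on {len(subset)} examples, "
          f"hardest so far: {subset[-1][0]}")
```
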
  • Conference Object
    Citation - Scopus: 13
    Simple but effective GRU variants
    (Institute of Electrical and Electronics Engineers Inc., 2021) Yigit, G.; Amasyali, M.F.
    Recurrent Neural Network (RNN) is a widely used deep learning architecture applied to sequence learning problems. However, it is recognized that RNNs suffer from exploding and vanishing gradient problems that prevent the early layers of the network from learning from the gradient information. GRU networks are a particular kind of recurrent network that reduces these shortcomings. In this study, we propose two variants of the standard GRU with simple but effective modifications. We applied an empirical approach and examined the effectiveness of the current and recurrent units of the gates by assigning them different coefficients. Interestingly, applying such minor and simple changes to the standard GRU provides notable improvements. We comparatively evaluate the standard GRU and the proposed two variants on four different tasks: (1) sentiment classification on the IMDB movie review dataset, (2) language modeling on the Penn TreeBank (PTB) dataset, (3) the sequence-to-sequence addition problem, and (4) question answering on Facebook's bAbI tasks dataset. The evaluation results indicate that the proposed two variants consistently outperform the standard GRU. © 2021 IEEE.
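
The kind of "minor and simple change" described above can be illustrated with a from-scratch GRU cell in which the recurrent terms of the gates are scaled by extra coefficients. This is a hypothetical PyTorch sketch; the coefficient placement (alpha, beta) and values are assumptions, not the published GRU1/GRU2 definitions.

```python
import torch
import torch.nn as nn

class ModifiedGRUCell(nn.Module):
    """GRU cell with scalar coefficients on the recurrent terms of the gates.

    Illustrative variant in the spirit of "giving different coefficients" to
    current and recurrent units; NOT the exact GRU1/GRU2 from the paper.
    """

    def __init__(self, input_size, hidden_size, alpha=1.0, beta=1.0):
        super().__init__()
        self.x2z = nn.Linear(input_size, hidden_size)
        self.h2z = nn.Linear(hidden_size, hidden_size)
        self.x2r = nn.Linear(input_size, hidden_size)
        self.h2r = nn.Linear(hidden_size, hidden_size)
        self.x2n = nn.Linear(input_size, hidden_size)
        self.h2n = nn.Linear(hidden_size, hidden_size)
        self.alpha = alpha  # weight on the recurrent term of the update gate
        self.beta = beta    # weight on the recurrent term of the reset gate

    def forward(self, x, h):
        z = torch.sigmoid(self.x2z(x) + self.alpha * self.h2z(h))  # update gate
        r = torch.sigmoid(self.x2r(x) + self.beta * self.h2r(h))   # reset gate
        n = torch.tanh(self.x2n(x) + self.h2n(r * h))              # candidate state
        return (1 - z) * h + z * n                                 # new hidden state

# Usage: one step over a batch of 8 inputs of size 16, hidden size 32.
cell = ModifiedGRUCell(16, 32, alpha=0.5, beta=2.0)
h = torch.zeros(8, 32)
h = cell(torch.randn(8, 16), h)
```
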
  • Conference Object
    Citation - Scopus: 2
    A Siamese Network-Based Approach for Autism Spectrum Disorder Detection With Dual Architecture
    (Institute of Electrical and Electronics Engineers Inc., 2023) Yigit,G.; Darici,M.B.
    Autism Spectrum Disorder (ASD) is a complex neurodevelopmental condition affecting many children. Early detection of ASD is crucial for implementing suitable treatments that improve the daily activities of people with ASD. This paper introduces a system for ASD detection from facial images. The proposed model is inspired by Siamese networks: unlike traditional Siamese networks that focus on input pairs, our model leverages architectural pairs for feature combination. During training, we combine features learned from different or identical architectures, which enables information transfer and improves the model's ability to capture comprehensive patterns. Experimental results on a dataset of 2940 facial images demonstrate the effectiveness of the system, which achieves improved accuracy compared to using individual architectures. The highest performance, an accuracy of 78.57%, is obtained with the (ResNet50, VGG16) architecture pair. By leveraging the strengths of multiple architectures, the model provides a comprehensive and robust representation of the input data, leading to improved performance. © 2023 IEEE.
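
A minimal sketch of the dual-architecture idea, assuming torchvision backbones (ResNet50 and VGG16) whose classification heads are replaced so their features can be concatenated for a binary ASD/non-ASD classifier; the layer choices and head size are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class DualBackboneClassifier(nn.Module):
    """Combine features from two CNN backbones for binary ASD classification.

    Illustrative sketch of the "architecture pair" idea (e.g. ResNet50 + VGG16);
    not the paper's exact model.
    """

    def __init__(self, num_classes=2):
        super().__init__()
        self.resnet = models.resnet50(weights=None)
        self.resnet.fc = nn.Identity()            # -> 2048-dim features
        self.vgg = models.vgg16(weights=None)
        self.vgg.classifier[-1] = nn.Identity()   # -> 4096-dim features
        self.head = nn.Linear(2048 + 4096, num_classes)

    def forward(self, x):
        f1 = self.resnet(x)                        # features from backbone 1
        f2 = self.vgg(x)                           # features from backbone 2
        return self.head(torch.cat([f1, f2], dim=1))

# Usage: a batch of 4 RGB face images at 224x224.
model = DualBackboneClassifier()
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```
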
  • Conference Object
    Citation - WoS: 4
    Citation - Scopus: 6
    Ask me: A Question Answering System via Dynamic Memory Networks
    (Institute of Electrical and Electronics Engineers Inc., 2019) Yiğit, Gülsüm; Amasyalı, Mehmet Fatih
    Most natural language processing problems can be reduced to a question answering problem. Dynamic Memory Networks (DMNs) are one solution approach for question answering. Based on an analysis of the question answering system built with DMNs described in [1], this study proposes a model named DMN∗ that contains several improvements to the input and attention modules. The DMN∗ architecture is distinguished by a multi-layer bidirectional LSTM (Long Short-Term Memory) input module and several changes to the computation of the attention score in the attention module. Experiments are conducted on the Facebook bAbI dataset [2]. We also introduce a Turkish bAbI dataset and produce increased-vocabulary versions of the tasks for each dataset. The experiments are performed on the English and Turkish datasets, and the accuracy results are compared with the work described in [1]. Our evaluation shows that the proposed DMN∗ model achieves improved accuracy on various tasks for both Turkish and English.
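
The main architectural change named in the abstract, a multi-layer bidirectional LSTM input module, might look roughly like the PyTorch sketch below; the vocabulary, embedding, and hidden sizes are placeholders, and the attention-module changes are not shown.

```python
import torch
import torch.nn as nn

class BiLSTMInputModule(nn.Module):
    """Multi-layer bidirectional LSTM encoder for sentence/fact representations.

    Sketch of the kind of input-module change described for DMN*; sizes are
    placeholders, not the paper's settings.
    """

    def __init__(self, vocab_size=5000, emb_dim=64, hidden_dim=128, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, num_layers=num_layers,
                               bidirectional=True, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer ids
        emb = self.embed(token_ids)
        outputs, _ = self.encoder(emb)   # (batch, seq_len, 2 * hidden_dim)
        return outputs                   # per-token facts for the attention module

# Usage: a batch of 2 sequences of 10 token ids.
module = BiLSTMInputModule()
facts = module(torch.randint(0, 5000, (2, 10)))
print(facts.shape)  # torch.Size([2, 10, 256])
```
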
  • Article
    A Detailed Study on Question Answering Systems: Datasets, Methods, and Open Research Areas
    (2021) Yiğit, Gülsüm; Amasyalı, Mehmet Fatih
    Question Answering (QA) systems allow users to receive direct answers to questions asked in natural language, rather than a list of documents or links. In this study, the datasets commonly used in QA systems are introduced and compared across various properties. In addition, unlike other work in the QA field, this study focuses on the methods behind the QA systems that have appeared in the literature in recent years. These methods are examined in four groups and cover current studies and technologies. The models are compared according to factors such as the techniques used and whether external knowledge sources or a language model are employed. Attention mechanisms, language models, graph-processing networks, external knowledge sources, collective (ensemble) learning, and deep learning architectures are generally observed to have a positive effect on the success of QA systems. Furthermore, the study identifies the current open research areas of QA systems and possible solution paths, and makes recommendations for future QA systems. Systems for languages without sufficient data, systems that can operate across multiple languages, systems that require the use of many knowledge sources, and conversational systems stand out as future research areas.
  • Conference Object
    Citation - Scopus: 2
    Exploring the Benefits of Data Augmentation in Math Word Problem Solving
    (Institute of Electrical and Electronics Engineers Inc., 2023) Yigit,G.; Amasyali,M.F.
    Math Word Problem (MWP) solving is a challenging Natural Language Processing (NLP) task. Work on existing MWP solvers has shown that current models need to generalize better and reach higher performance. In this study, we aim to enrich existing MWP datasets with high-quality data that may improve solver performance. We propose several data augmentation methods that apply minor modifications to the problem texts and equations of English MWP datasets whose problems contain equations with one unknown. Extensive experiments on two MWP datasets show that the data created by the augmentation methods considerably improves performance. Moreover, further enlarging the training set by combining the samples generated by the proposed augmentation methods provides additional performance improvements. © 2023 IEEE.
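
One of the simplest augmentations of this kind, synonym replacement on the problem text while keeping the equation untouched, can be sketched as follows; the synonym table and the example problem are made up for illustration and are not the paper's augmentation rules.

```python
import random

# Toy synonym table; a real setup might use WordNet or embeddings instead.
SYNONYMS = {
    "buys": ["purchases", "gets"],
    "gives": ["hands", "passes"],
    "apples": ["fruits"],
}

def synonym_augment(problem_text: str, equation: str, seed: int = 0):
    """Illustrative synonym-replacement augmentation for a one-unknown MWP.

    Only the surface text changes; the equation (and thus the answer) is kept,
    which is the property such augmentations rely on.
    """
    rng = random.Random(seed)
    words = problem_text.split()
    new_words = [rng.choice(SYNONYMS[w]) if w in SYNONYMS else w for w in words]
    return " ".join(new_words), equation

text = "Tom buys 3 apples and Ann gives him 5 more. How many apples does Tom have?"
print(synonym_augment(text, "x = 3 + 5"))
```
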
  • Article
    Citation - Scopus: 1
    Data Augmentation With In-Context Learning and Comparative Evaluation in Math Word Problem Solving
    (Springer, 2024) Yigit,G.; Amasyali,M.F.
    Math Word Problem (MWP) solving presents a challenging task in Natural Language Processing (NLP). This study aims to provide MWP solvers with a more diverse training set, ultimately improving their ability to solve various math problems. We propose several data augmentation methods that modify the problem texts and equations, such as synonym replacement, rule-based question replacement, and rule-based question reversal, over two English MWP datasets. The study also introduces a new in-context learning augmentation method that employs the Llama-7b language model, using instruction-based prompting to rephrase the math problem texts. Performance evaluations on 9 baseline models reveal that the augmentation methods outperform the baselines, and that concatenating examples generated by the various augmentation methods further improves performance. © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2024.
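
The in-context learning augmentation boils down to building an instruction-plus-demonstration prompt and sending it to the language model. The sketch below only constructs such a prompt; the instruction wording and the demonstration are assumptions, and the actual call to Llama-7b is omitted.

```python
def build_rephrase_prompt(problem_text: str) -> str:
    """Build an instruction-style prompt asking an LLM to rephrase an MWP.

    Sketch of the in-context-learning augmentation idea; the instruction and
    the demonstration example are illustrative, not the paper's actual prompt.
    """
    demonstration = (
        "Problem: Sam has 4 pens and buys 2 more. How many pens does Sam have?\n"
        "Rephrased: Sam owns 4 pens and then purchases 2 extra pens. "
        "How many pens does Sam own now?\n"
    )
    instruction = (
        "Rephrase the following math word problem without changing the "
        "quantities or the answer.\n"
    )
    return f"{instruction}{demonstration}Problem: {problem_text}\nRephrased:"

print(build_rephrase_prompt(
    "A farmer has 12 cows and sells 5 of them. How many cows are left?"))
```
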
  • Article
    A Survey on Privacy and Security in Deep Learning Models
    (2021) Kale, Ayşe; Yiğit, Gülsüm
    With the recent revolutionary advances in deep learning, expectations for artificial intelligence are growing by the day. Deep learning, a research area that can be applied effectively in many fields such as speech recognition, natural language processing (NLP), and image processing, achieves higher success than classical machine learning. Models developed with deep learning use large amounts of data during training and inference, and this data may include personal data. It is essential that processing this data does not violate personal data protection law (KVKK), so ensuring data privacy and security is a very important issue. This study presents the architectures commonly used when developing deep learning models and summarizes the tools most frequently encountered in the literature for increasing data privacy and security: secure multi-party computation, differential privacy, the garbled circuit protocol, and homomorphic encryption. Recent studies that employ these tools in various system designs are reviewed in two categories, covering the training and inference phases of the deep learning model. Current attacks applicable to various models in the literature, and the methods developed to protect against them, are presented. In addition, current research areas are identified; accordingly, future research may be directed toward reducing the complexity of cryptography-based methods and developing measurement and evaluation methods to determine the trustworthiness of the developed models.
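
As a small illustration of one of the tools listed above, the snippet below shows the clipped-and-noised gradient aggregation at the heart of differential-privacy-style training (DP-SGD); the clipping norm and noise multiplier are arbitrary example values, and this is not taken from any of the surveyed systems.

```python
import numpy as np

def dp_aggregate(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Toy illustration of the gradient step used in DP-SGD-style training.

    Each per-example gradient is clipped to a maximum L2 norm, the clipped
    gradients are summed, and Gaussian noise scaled to the clipping norm is
    added before averaging. The constants here are arbitrary examples.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

grads = [np.random.randn(10) for _ in range(32)]  # fake per-example gradients
print(dp_aggregate(grads)[:3])
```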