Browsing by Author "Arslan, Burcu"

Now showing 1 - 3 of 3
  • Article
    Distinct Temporal Dynamics of Speech and Gesture Processing: Insights From Event-Related Potentials Across L1 and L2
    (American Psychological Association, 2026) Ozer, Demet; Soyman, Efe; Badakul, Ayse Nur; Arslan, Burcu; Yilmaz, Fatma Sena; Goksun, Tilbe
    This study examined the neural and behavioral processing of speech and iconic gestures across L1-Turkish and L2-English when participants attended the speech or gesture channel. We recorded electroencephalogram activity in Experiment 1 and reaction times in Experiment 2 (24 participants in each) during a mismatch task where concurrent speech and gesture expressed either matching or mismatching information in relation to a preceding action. Participants were asked to detect whether the gesture (gesture-focused task) or the speech (speech-focused task) was related to the preceding action. Speech was presented in Turkish or English in separate blocks. In Experiment 1, we focused on N400 and N2 components as indices of late semantic processing and early sequential matching, respectively. In the gesture-focused task, our results demonstrated a gesture mismatch effect, which was evident in more negative N400 amplitudes for mismatching than matching gestures only in the context of simultaneous matching speech. In the speech-focused task, we observed the N2 effect, which was apparent in more negative N2 amplitudes for mismatching than matching speech, regardless of the simultaneous gesture. These dynamics were largely reflected in reaction times in Experiment 2. These results point to potentially distinct neural and temporal dynamics in processing speech versus gestures and suggest that speech processing might be instantiated earlier, whereas gestures recruit later stages of processing. Notably, we observed some differential patterns across L1-Turkish and L2-English, suggesting that speech and gesture processing may operate differently across languages. Our findings highlight a complex interplay between modality, modality focus, language, and neural processing of multimodal information.
  • Article
    Citation - WoS: 1
    Multimodal Communication in Virtual and Face-to-Face Settings: Gesture Production and Speech Disfluency
    (Istanbul Univ, Fac Letters, Dept Psychology, 2024) Arslan, Burcu; Avci, Can; Ozer, Demet
    The COVID-19 pandemic has made online data collection a popular choice. It is important to evaluate how comparable online studies are to face-to-face studies, particularly in multimodal language research where modes of communication significantly impact the results. In this study, we examined individuals' rates and patterns of speech disfluency and gesture use across face-to-face and online videoconferencing settings as they described their daily routines (N = 64). We asked whether and how multimodal language is affected across different communication settings and whether gesture use, particularly iconic gestures, is associated with speech fluency regardless of the context. Our results showed that the participants' overall disfluency rate was higher for speech communicated via videoconferencing than for speech communicated face-to-face. However, the type of disfluencies changed across contexts, such that filled pauses and repairs were more common in online communication, whereas silent pauses were more common in face-to-face communication. These findings signal an interplay between the cognitive functions of different disfluency types and communicative strategies. Results indicate that overall gesture frequency and iconic gesture use were similar in both settings. Furthermore, the use of iconic gestures was found to negatively predict the overall disfluency rate, regardless of the setting. This finding suggests that using iconic gestures might facilitate cognitive processes, paving the way for more fluent speech. This study demonstrates that multimodal language and communication strategies may vary across different communication settings, and that a nuanced understanding of the differences in multimodal language between online and face-to-face communication can be gained by studying different contexts. The findings contribute to understanding the impact of increasingly widespread online communication on multimodal language production processes and provide a foundation for future research.
  • Article
    Citation - WoS: 10
    Citation - Scopus: 11
    Multimodal language in bilingual and monolingual children: Gesture production and speech disfluency
    (Cambridge Univ Press, 2023) Arslan, Burcu; Aktan-Erciyes, Asli; Göksun, Tilbe
    Bilingual and monolingual children might have different styles of using multimodal language. This study investigates speech disfluency and gesture production of 5- and 7-year-old Turkish monolingual (N = 61) and Turkish-English bilingual children (N = 51). We examined monolinguals' Turkish narratives and bilinguals' Turkish and English narratives. Results indicated that bilinguals were more disfluent than monolinguals, particularly for silent and filled (e.g., umm) pauses. Bilinguals used silent pauses and repetitions (e.g., cat cat) more frequently in English than in Turkish. Gesture use was comparable across language and age groups, except for iconic gestures. Monolinguals produced more iconic gestures than bilinguals. Children's overall gesture frequency predicted disfluency rates only in Turkish. Different gesture types might be orchestrated in the multimodal system, contributing to narrative fluency. The use of disfluency and gesture types might provide insight into bilingual and monolingual children's language development and communication strategies.