GCRIS

Browsing by Author "Ozer, Demet"

Now showing 1 - 7 of 7
    Article
    Distinct Temporal Dynamics of Speech and Gesture Processing: Insights From Event-Related Potentials Across L1 and L2
    (American Psychological Association, 2026) Ozer, Demet; Soyman, Efe; Badakul, Ayse Nur; Arslan, Burcu; Yilmaz, Fatma Sena; Goksun, Tilbe
    This study examined the neural and behavioral processing of speech and iconic gestures across L1-Turkish and L2-English when participants attended the speech or gesture channel. We recorded electroencephalogram activity in Experiment 1 and reaction times in Experiment 2 (24 participants in each) during a mismatch task where concurrent speech and gesture expressed either matching or mismatching information in relation to a preceding action. Participants were asked to detect whether the gesture (gesture-focused task) or the speech (speech-focused task) was related to the preceding action. Speech was presented in Turkish or English in separate blocks. In Experiment 1, we focused on N400 and N2 components as indices of late semantic processing and early sequential matching, respectively. In the gesture-focused task, our results demonstrated a gesture mismatch effect, which was evident in more negative N400 amplitudes for mismatching than matching gestures only in the context of simultaneous matching speech. In the speech-focused task, we observed the N2 effect, which was apparent in more negative N2 amplitudes for mismatching than matching speech, regardless of the simultaneous gesture. These dynamics were largely reflected in reaction times in Experiment 2. These results point to potentially distinct neural and temporal dynamics in processing speech versus gestures and suggest that speech processing might be instantiated earlier, whereas gestures recruit later stages of processing. Notably, we observed some differential patterns across L1-Turkish and L2-English, suggesting that speech and gesture processing may operate differently across languages. Our findings highlight a complex interplay between modality, modality focus, language, and neural processing of multimodal information.
    Article
    Citation - WoS: 4
    Citation - Scopus: 4
    Exploring Emotions Through Co-Speech Gestures: The Caveats and New Directions
    (Sage Publications Inc, 2024) Aslan, Zeynep; Ozer, Demet; Goksun, Tilbe
    Co-speech hand gestures offer a rich avenue for studying emotion communication because they serve as both prominent expressive bodily cues and an integral part of language. Despite this strategic relevance, gesture-speech integration and interaction have received less research attention for their emotional function than for their cognitive function. This review aims to shed light on the current state of the field regarding the interplay between co-speech hand gestures and emotions, focusing specifically on the role of gestures in expressing and understanding both others' and one's own emotions. The article concludes by addressing current limitations in the field and proposing future directions for researchers investigating gesture-emotion interaction. Our goal is to provide a roadmap for researchers exploring the role of gestures in emotions, ultimately contributing to a more comprehensive understanding of how gestures and emotions intersect.
    Article
    Citation - WoS: 14
    Citation - Scopus: 13
    Gesture use in L1-Turkish and L2-English: Evidence from emotional narrative retellings
    (Sage Publications Ltd, 2023) Ozder, Levent Emir; Ozer, Demet; Goksun, Tilbe
    Bilinguals tend to produce more co-speech hand gestures to compensate for reduced communicative proficiency when speaking in their L2. We here investigated L1-Turkish and L2-English speakers' gesture use in an emotional context. We specifically asked whether and how (1) speakers gestured differently while retelling L1 versus L2 and positive versus negative narratives and (2) gesture production during retellings was associated with speakers' later subjective emotional intensity ratings of those narratives. We asked 22 participants to read and then retell eight emotion-laden narratives (half positive, half negative; half Turkish, half English). We analysed gesture frequency during the entire retelling and during emotional speech only (i.e., gestures that co-occur with emotional phrases such as happy). Our results showed that participants produced more representational gestures in L2 than in L1; however, they used more representational gestures during emotional content in L1 than in L2. Participants also produced more co-emotional speech gestures when retelling negative than positive narratives, regardless of language, and more beat gestures co-occurring with emotional speech in negative narratives in L1. Furthermore, using more gestures when retelling a narrative was associated with increased emotional intensity ratings for narratives. Overall, these findings suggest that (1) bilinguals might use representational gestures to compensate for reduced linguistic proficiency in their L2, (2) speakers use more gestures to express negative emotional information, particularly during emotional speech, and (3) gesture production may enhance the encoding of emotional information, which subsequently leads to the intensification of emotion perception.
    Article
    Citation - WoS: 10
    Citation - Scopus: 10
    Gestures Cued by Demonstratives in Speech Guide Listeners' Visual Attention During Spatial Language Comprehension
    (American Psychological Association, 2023) Ozer, Demet; Karadoller, Dilay Z.; Ozyurek, Asli; Goksun, Tilbe
    Gestures help speakers and listeners during communication and thinking, particularly for visual-spatial information. Speakers tend to use gestures to complement the accompanying spoken deictic constructions, such as demonstratives, when communicating spatial information (e.g., saying The candle is here and gesturing to the right side to express that the candle is on the speaker's right). Visual information conveyed by gestures enhances listeners' comprehension. Whether and how listeners allocate overt visual attention to gestures in different speech contexts is mostly unknown. We asked if (a) listeners gazed at gestures more when they complement demonstratives in speech (here) compared to when they express redundant information to speech (e.g., right) and (b) gazing at gestures related to listeners' information uptake from those gestures. We demonstrated that listeners fixated gestures more when they expressed complementary than redundant information in the accompanying speech. Moreover, overt visual attention to gestures did not predict listeners' comprehension. These results suggest that the heightened communicative value of gestures as signaled by external cues, such as demonstratives, guides listeners' visual attention to gestures. However, overt visual attention does not seem to be necessary to extract the cued information from the multimodal message.
    Article
    Citation - WoS: 1
    Citation - Scopus: 1
    The Link Between Early Iconic Gesture Comprehension and Receptive Language
    (Wiley, 2024) Dogan, Isil; Ozer, Demet; Aktan-Erciyes, Asli; Furman, Reyhan; Demir-Lira, O. Ece; Ozcaliskan, Seyda; Goksun, Tilbe
    Children comprehend iconic gestures relatively later than deictic gestures. Previous research with English-learning children indicated that they could comprehend iconic gestures at 26 months, a pattern whose extension to other languages is not yet known. The present study examined Turkish-learning children's iconic gesture comprehension and its relation to their receptive vocabulary knowledge. Turkish-learning children between 22 and 30 months of age (N = 92, M = 25.6 months, SD = 1.6; 51 girls) completed a gesture comprehension task in which they were asked to choose the correct picture that matched the experimenter's speech and iconic gestures. They were also administered a standardized receptive vocabulary test. Children's performance in the gesture comprehension task increased with age, which was also related to their receptive vocabulary knowledge. When children were categorized into younger and older age groups based on the median age (i.e., 26 months, the age at which iconic gesture comprehension was present for English-learning children), only the older group performed above chance level in the task. At the same time, receptive vocabulary was positively related to gesture comprehension for younger but not older children. These findings suggest a shift in iconic gesture comprehension at around 26 months and indicate a possible link between receptive vocabulary knowledge and iconic gesture comprehension, particularly for children younger than 26 months.
    Article
    Citation - WoS: 1
    Multimodal Communication in Virtual and Face-to-Face Settings: Gesture Production and Speech Disfluency
    (Istanbul University, Faculty of Letters, Department of Psychology, 2024) Arslan, Burcu; Avci, Can; Ozer, Demet
    The COVID-19 pandemic has made online data collection a popular choice. It is important to evaluate how comparable online studies are to face-to-face studies, particularly in multimodal language research, where modes of communication significantly impact the results. In this study, we examined individuals' rates and patterns of speech disfluency and gesture use across face-to-face and online videoconferencing settings as they described their daily routines (N = 64). We asked whether and how multimodal language is affected across different communication settings and whether gesture use, particularly iconic gestures, is associated with speech fluency regardless of the context. Our results showed that the participants' overall disfluency rate was higher for speech communicated via videoconferencing than for speech communicated face-to-face. However, the type of disfluencies changed across contexts, such that filled pauses and repairs were more common in online communication, whereas silent pauses were more common in face-to-face communication. These findings signal an interplay between the cognitive functions of different disfluency types and communicative strategies. Results indicate that overall gesture frequency and iconic gesture use were similar in both settings. Furthermore, the use of iconic gestures was found to negatively predict the overall disfluency rate, regardless of the setting. This finding suggests that using iconic gestures might facilitate cognitive processes, paving the way for more fluent speech. This study demonstrates that multimodal language and communication strategies may vary across communication settings, and that a nuanced understanding of the differences in multimodal language between online and face-to-face communication can be gained by using different contexts. The findings contribute to understanding the impact of increasingly widespread online communication on multimodal language production processes and provide a foundation for future research.
    Article
    Citation - WoS: 5
    Multimodal Language in Child-Directed Versus Adult-Directed Speech
    (Sage Publications Ltd, 2023) Kandemir, Songul; Ozer, Demet; Aktan-Erciyes, Asli
    Speakers design their multimodal communication according to the needs and knowledge of their interlocutors, a phenomenon known as audience design. We use more sophisticated language (e.g., longer sentences with complex grammatical forms) when communicating with adults than with children. This study investigates how speech and co-speech gestures change in adult-directed speech (ADS) versus child-directed speech (CDS) across three different tasks. Overall, 66 adult participants (M_age = 21.05; 60 female) completed three tasks (story-reading, storytelling, and address description) and were instructed to pretend to communicate with a child (CDS) or an adult (ADS). We hypothesised that participants would use more complex language, more beat gestures, and fewer iconic gestures in ADS than in CDS. Results showed that, for CDS, participants used more iconic gestures in the story-reading and storytelling tasks than for ADS. However, participants used more beat gestures in the storytelling task for ADS than for CDS. In addition, language complexity did not differ across conditions. Our findings indicate how speakers employ different types of gestures (iconic vs beat) according to the addressee's needs and across different tasks. Speakers might prefer to use more iconic gestures with children than with adults. Results are discussed in relation to audience design theory.