
Browsing by Author "Bashar, Mohammad Raihanul"

Now showing 1–2 of 2
  • Conference Object
    Depth3DSketch: Freehand Sketching Out of Arm's Reach in Virtual Reality
    (Association for Computing Machinery, 2025) Bashar, Mohammad Raihanul; Amini, Mohammadreza; Stuerzlinger, Wolfgang; Sarac, Mine; Pfeuffer, Ken; Machuca, Mayra Donaji Barrera; Batmaz, Anil Ufuk
    Due to the increasing availability and popularity of virtual reality (VR) systems, 3D sketching applications have also boomed. Most of these applications focus on peripersonal sketching, i.e., sketching within arm's reach. Yet, sketching in larger scenes requires users to walk around the virtual environment while sketching or to change the sketch scale repeatedly. This paper presents Depth3DSketch, a 3D sketching technique that allows users to sketch objects up to 2.5 m away freehand. Users can select the sketching depth with three interaction methods: the joystick on a single controller, the intersection of the rays from two controllers, or the intersection of a controller ray and the user's gaze. We compared these interaction methods in a user study. Results show that users preferred the joystick for selecting depth, but there was no difference in user accuracy or sketching time among the three methods.
  • Conference Object
    Citation - WoS: 1
    Citation - Scopus: 1
    Subtask-Based Virtual Hand Visualization Method for Enhanced User Accuracy in Virtual Reality Environments
    (IEEE Computer Society, 2024) Voisard, Laurent; Hatira, Amal; Bashar, Mohammad Raihanul; Gemici, Mucahit; Sarac, Mine; Kersten-Oertel, Marta; Batmaz, Anil Ufuk
    In virtual hand interaction techniques, the opacity of the virtual hand avatar can obstruct users' visual feedback, with detrimental effects on accuracy and cognitive load. Given that cognitive load is related to gaze movements, our study analyzes participants' gaze movements across opaque, transparent, and invisible hand visualizations in order to create a new interaction technique. For our experimental setup, we used a Purdue Pegboard Test with reaching, grasping, transporting, and inserting subtasks. We examined how long and where participants concentrated during these subtasks and, based on the findings, introduced a new virtual hand visualization method to increase accuracy. We hope that our results can inform future virtual reality applications where users must interact with virtual objects accurately.