This study employs a range of Human-Computer Interaction methods to introduce a system that empowers individuals with diverse abilities to use computers in novel ways. It combines gesture-based virtual mouse control and painting with eye and voice integration. Eye-tracking technology is incorporated to detect and interpret ocular movements captured by the computer webcam, which are then mapped to specific on-screen actions. In parallel, hand gesture-based mouse control captures and analyzes hand motions and translates them into precise on-screen commands. The hand-tracking painting feature provides users with a virtual canvas: through hand movements, users can write and draw with a variety of tools, expanding the potential for interactive creativity. Switching between the traditional mouse, the virtual mouse, and the gesture-based painting mode is controlled by a voice assistant. This integration provides a comprehensive and adaptable digital environment, enhancing user convenience and mobility. The method seamlessly integrates these technologies to create an inclusive and intuitive user experience.
Keywords: Human-computer interaction, Multi-modal interaction system, Assistive technology.
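To make the gesture-to-cursor mapping described above concrete, the following is a minimal sketch of a webcam-based virtual mouse, assuming a MediaPipe Hands, OpenCV, and pyautogui stack; the landmark choices and pinch-click threshold are illustrative assumptions, not the authors' published implementation.

```python
import cv2
import mediapipe as mp
import pyautogui

mp_hands = mp.solutions.hands
hands = mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7)

screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)                      # mirror so motion feels natural
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = hands.process(rgb)

    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        index_tip = lm[mp_hands.HandLandmark.INDEX_FINGER_TIP]
        thumb_tip = lm[mp_hands.HandLandmark.THUMB_TIP]

        # Map the normalised index-fingertip position to screen coordinates.
        pyautogui.moveTo(index_tip.x * screen_w, index_tip.y * screen_h)

        # Treat a simple pinch (thumb near index tip) as a left click;
        # the 0.03 threshold is an assumed value for illustration only.
        if abs(index_tip.x - thumb_tip.x) < 0.03 and abs(index_tip.y - thumb_tip.y) < 0.03:
            pyautogui.click()

    cv2.imshow("virtual mouse", frame)
    if cv2.waitKey(1) & 0xFF == 27:                 # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

In the same spirit, the painting mode would replace the cursor update with drawing strokes onto an overlay canvas, and a speech-recognition layer would switch between the mouse, painting, and traditional modes; those components are omitted here for brevity.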
Source of Funding:
This study did not receive any grant from funding agencies in the public or not-for-profit sectors.
Competing Interests Statement:
The authors have declared no competing interests.
Consent for Publication:
The authors declare that they consented to the publication of this research work.
Ethical Approval:
Not Applicable.
Authors' Contribution:
All authors contributed equally to data collection and manuscript writing.