3D artificial vision technology for detecting movements in people with muscular disabilities through a computer application

Authors

A. Marín Cano, Álvaro Romero Acero, J. A. Jiménez Builes

DOI:

https://doi.org/10.24054/rcta.v2i42.2714

Keywords:

artificial vision, diverse muscular conditions, educational inclusion, digital transformation, artificial intelligence

Abstract

This article describes a computer application that incorporates 3D artificial vision, a branch of artificial intelligence, to provide a straightforward way for individuals with various muscular conditions to interact with a computer. Although many devices on the market can detect movements and recognize gestures, few innovations are designed to facilitate access to and use of information and communication media by people with motor limitations. The results obtained with the application indicate that it is a valuable aid within social inclusion processes, allowing individuals with a variety of muscular conditions to participate more effectively in work and educational environments.
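To make the kind of interaction concrete: the sketch below shows, in broad strokes, how camera-based tracking of a body landmark can be mapped to cursor control. It is not the authors' implementation (which, per the references, relies on a 3D depth sensor and the OpenNI framework); it is a minimal Python illustration that assumes an ordinary webcam, the MediaPipe Hands tracker, and PyAutoGUI for moving the mouse pointer.

import cv2                  # OpenCV: webcam capture and display
import mediapipe as mp      # MediaPipe Hands: hand landmark tracking
import pyautogui            # PyAutoGUI: programmatic mouse control

pyautogui.FAILSAFE = False  # do not abort when the cursor reaches a screen corner

mp_hands = mp.solutions.hands
screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)   # default webcam

with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.6) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.flip(frame, 1)  # mirror the image so motion feels natural
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # Landmark 8 is the index fingertip; coordinates are normalized to [0, 1].
            tip = results.multi_hand_landmarks[0].landmark[8]
            pyautogui.moveTo(int(tip.x * screen_w), int(tip.y * screen_h))
        cv2.imshow("Gesture-controlled cursor (sketch)", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc key exits
            break

cap.release()
cv2.destroyAllWindows()

In the application the article describes, a depth sensor and its skeleton tracking would take the place of the webcam tracker; the overall pipeline (detect a body landmark, map it to screen coordinates, emit an input event) is the same idea.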


References

Bautista, L., & Archila, J. (2011). Visión artificial aplicada en sistemas de realidad aumentada. In 3er Congreso Internacional de Ingeniería Mecatrónica – UNAB, Vol. 2, No. 1.

Chen, M., Duan, Z., Lan, Z., & Yi, S. (2023). Scene reconstruction algorithm for unstructured weak-texture regions based on stereo vision. Applied Sciences, 13(11), 6407.

Castro, G. Z., Guerra, R. R., & Guimarães, F. G. (2023). Automatic translation of sign language with multi-stream 3D CNN and generation of artificial depth maps. Expert Systems with Applications, 215, 119394.

Figueroa, Y., Arias, L., Mendoza, D., Velasco, N., Rea, S., & Hallo, V. (2018). Autonomous video surveillance application using artificial vision to track people in restricted areas. In Developments and Advances in Defense and Security: Proceedings of the Multidisciplinary International Conference of Research Applied to Defense and Security (MICRADS 2018) (pp. 58-68). Springer International Publishing.

Gómez, J. (2010). Discapacidad en Colombia: reto para la inclusión en capital humano. Colombia Líder. Bogotá: Fundación Saldarriaga Concha.

Huynh-The, T., Pham, Q. V., Pham, X. Q., Nguyen, T. T., Han, Z., & Kim, D. S. (2023). Artificial intelligence for the metaverse: A survey. Engineering Applications of Artificial Intelligence, 117, 105581.

Jones, C. R., Trott, S., & Bergen, B. (2023). EPITOME: Experimental Protocol Inventory for Theory Of Mind Evaluation. In First Workshop on Theory of Mind in Communicating Agents.

Juegos infantiles (2023). Juegos infantiles en línea [Online]. Available at: http://www.juegosinfantiles.com/locos/guerradeglobosdeagua.html. Accessed: September 2023.

Luo, X., Sun, Q., Yang, T., He, K., & Tang, X. (2023). Nondestructive determination of common indicators of beef for freshness assessment using airflow-three dimensional (3D) machine vision technique and machine learning. Journal of Food Engineering, 340, 111305.

Mahajan, H. B., Uke, N., Pise, P., Shahade, M., Dixit, V. G., Bhavsar, S., & Deshpande, S. D. (2023). Automatic robot Manoeuvres detection using computer vision and deep learning techniques: A perspective of internet of robotics things (IoRT). Multimedia Tools and Applications, 82(15), 23251-23276.

Mauri, C. (2004). Interacción persona-ordenador mediante cámaras Webcam. In J. Lorés & R. Navarro (Eds.), Proceedings of the V Congreso Interacción Persona-Ordenador (pp. 366–367). Lleida, Spain: Arts Gràfiques Bobalà SL.

Miao, R., Liu, W., Chen, M., Gong, Z., Xu, W., Hu, C., & Zhou, S. (2023). OccDepth: A depth-aware method for 3D semantic scene completion. arXiv preprint arXiv:2302.13540.

Minijuegos (2023). Minijuegos en línea [Online]. Available at: http://www.minijuegos.com/juego/super. Accessed: September 2023.

Montalvo, M. (2010). Técnicas de visión estereoscópica para determinar la estructura tridimensional de la escena. Doctoral thesis, Universidad Complutense de Madrid, Spain.

OpenNI (2023). OpenNI user guide [Online]. Available at: https://github.com/OpenNI/OpenNI/blob/master/Documentation/OpenNI_UserGuide.pdf. Accessed: September 2023.

Ramos, D. (2013). Estudio cinemático del cuerpo humano mediante Kinect. Undergraduate thesis in Telecommunications, Escuela Técnica de Telecomunicaciones, Universidad Politécnica de Madrid, Spain.

Sánchez, J., Cardona, H., & Jiménez, J. (2023). Potencialidades del uso de las herramientas informáticas para la optimización del acceso a la oferta educativa de personas adultas con trastornos neuromusculares que habitan en el Área Metropolitana del Valle de Aburrá. In XX Congreso Latino-Iberoamericano de Gestión Tecnológica y de la Innovación – ALTEC 2023, Paraná, Argentina.

Song, Y., & Demirdjian, D. (2010). Continuous body and hand gesture recognition for natural human-computer interaction. ACM Transactions on Interactive Intelligent Systems, 1(1), 111–148.

Villaverde, I. (2009). On computational intelligence tools for vision based navigation of mobile robots. Doctoral thesis, Department of Computer Science and Artificial Intelligence, University of the Basque Country, Spain.

Zhang, Z. (2012). Microsoft Kinect sensor and its effect. IEEE MultiMedia, 19(2), 4–10.

Published

2023-12-20 — Updated on 2023-12-21

How to Cite

Marín Cano, A., Romero Acero, Álvaro, & Jiménez Builes, J. A. (2023). 3D artificial vision technology for detecting movements in people with muscular disabilities through a computer application. COLOMBIAN JOURNAL OF ADVANCED TECHNOLOGIES, 2(42), 115–121. https://doi.org/10.24054/rcta.v2i42.2714 (Original work published December 20, 2023)