Comparison of YOLO architectures for urban cyclist detection in an autonomous vehicle environment
DOI: https://doi.org/10.24054/rcta.v1i43.2820
Keywords: YOLO, VRU, deep learning, cyclist detection, autonomous vehicle
Abstract
The WHO reports that more than 55% of road traffic deaths involve vulnerable road users, including 3% who are cyclists. Although autonomous vehicles can detect objects and people on the road, detecting cyclists and predicting their movements remain significant challenges. This article presents the results of comparing the YOLOv7, YOLOv8, and YOLO-NAS architectures for detecting urban cyclists. The methodology ensures that the detectors were trained under the same conditions. They were then evaluated on 111 cyclist images using metrics such as IoU, precision, and recall. The results highlight advantages and disadvantages of each architecture, suggesting that future work should prioritize either inference time or cyclist detection quality.
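As an illustration of the evaluation protocol summarized above, the sketch below shows one minimal way to compute IoU and precision/recall for a single-class (cyclist) detector. It is not taken from the paper: the (x1, y1, x2, y2) box format, the greedy confidence-ordered matching, and the 0.5 IoU threshold are assumptions chosen for the example.

# Minimal sketch (not from the paper): IoU and precision/recall for a
# single-class (cyclist) detector. Box format, matching rule, and the
# 0.5 IoU threshold are illustrative assumptions.

def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(predictions, ground_truths, iou_threshold=0.5):
    """Greedy one-to-one matching of predicted boxes (sorted by confidence,
    highest first) against ground-truth boxes."""
    matched, true_positives = set(), 0
    for pred in predictions:
        best_iou, best_idx = 0.0, None
        for idx, gt in enumerate(ground_truths):
            if idx in matched:
                continue
            overlap = iou(pred, gt)
            if overlap > best_iou:
                best_iou, best_idx = overlap, idx
        if best_idx is not None and best_iou >= iou_threshold:
            matched.add(best_idx)
            true_positives += 1
    precision = true_positives / len(predictions) if predictions else 0.0
    recall = true_positives / len(ground_truths) if ground_truths else 0.0
    return precision, recall

# Dummy example: one correct detection and one false positive.
preds = [(10, 10, 50, 80), (200, 40, 240, 120)]   # detector output
gts = [(12, 8, 52, 82)]                           # annotated cyclist
print(precision_recall(preds, gts))               # -> (0.5, 1.0)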
References
Flohr, F. B. (2018). Vulnerable Road User Detection and Orientation Estimation for Context-Aware Automated Driving.
Thrun, S. (2010). Toward robotic cars. Communications of the ACM, 53(4), 99–106. https://doi.org/10.1145/1721654.1721679
Alhajyaseen, W. K. M., Asano, M., & Nakamura, H. (2012). Estimation of left-turning vehicle maneuvers for the assessment of pedestrian safety at intersections. IATSS Research, 36(1), 66–74. https://doi.org/10.1016/j.iatssr.2012.03.002
Brohm, T., Haupt, K., & Thiel, R. (2019). Pedestrian Intention and Gesture Classification Using Neural Networks. ATZ Worldwide, 121(4), 26–31. https://doi.org/10.1007/s38311-019-0006-6
Chen, Y. Y., Jhong, S. Y., Li, G. Y., & Chen, P. H. (2019). Thermal-Based Pedestrian Detection Using Faster R-CNN and Region Decomposition Branch. Proceedings - 2019 International Symposium on Intelligent Signal Processing and Communication Systems, ISPACS 2019. https://doi.org/10.1109/ISPACS48206.2019.8986298
Heo, D., Nam, J. Y., & Ko, B. C. (2019). Estimation of pedestrian pose orientation using soft target training based on teacher-student framework. Sensors (Switzerland), 19(5). https://doi.org/10.3390/s19051147
Lan, W., Dang, J., Wang, Y., & Wang, S. (2018). Pedestrian detection based on YOLO network model. Proceedings of 2018 IEEE International Conference on Mechatronics and Automation, ICMA 2018, 1547–1551. https://doi.org/10.1109/ICMA.2018.8484698
Murphey, Y. L., Liu, C., Tayyab, M., & Narayan, D. (2018). Accurate pedestrian path prediction using neural networks. 2017 IEEE Symposium Series on Computational Intelligence, SSCI 2017 - Proceedings, 2018-January, 1–7. https://doi.org/10.1109/SSCI.2017.8285398
Fairley, P. (2017). Self-driving cars have a bicycle problem [News]. IEEE Spectrum, 54(3), 12–13.
Mannion, P. (2019). Vulnerable road user detection: state-of-the-art and open challenges. 1–5. http://arxiv.org/abs/1902.03601
Li, X., Flohr, F., Yang, Y., Xiong, H., Braun, M., Pan, S., Li, K., & Gavrila, D. M. (2016). A new benchmark for vision-based cyclist detection. IEEE Intelligent Vehicles Symposium, Proceedings, 2016-August, 1028–1033. https://doi.org/10.1109/IVS.2016.7535515
Kress, V., Jung, J., Zernetsch, S., Doll, K., & Sick, B. (2019). Pose Based Start Intention Detection of Cyclists. 2019 IEEE Intelligent Transportation Systems Conference, ITSC 2019, 2381–2386. https://doi.org/10.1109/ITSC.2019.8917215
Garcia-Venegas, M., Mercado-Ravell, D. A., Pinedo-Sanchez, L. A., & Carballo-Monsivais, C. A. (2021). On the safety of vulnerable road users by cyclist detection and tracking. Machine Vision and Applications, 32(5), 109. https://doi.org/10.1007/s00138-021-01231-4
Casas, E., Ramos, L., Bendek, E., & Rivas-Echeverría, F. (2023). Assessing the Effectiveness of YOLO Architectures for Smoke and Wildfire Detection. IEEE Access. https://doi.org/10.1109/ACCESS.2023.3312217
BITZI, software image scraping. (2017). Instituto Tecnológico Metropolitano, Medellín.
Shijie, J., Ping, W., Peiyi, J., & Siping, H. (2017, October). Research on data augmentation for image classification based on convolution neural networks. In 2017 Chinese Automation Congress (CAC) (pp. 4165-4170). IEEE. https://doi.org/10.1109/CAC.2017.8243510
Wickramanayake, S., Hsu, W., & Lee, M. L. (2021). Explanation-based data augmentation for image classification. Advances in Neural Information Processing Systems, 34, 20929-20940.
Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779-788). https://doi.org/10.1109/CVPR.2016.91
Wang, C. Y., Bochkovskiy, A., & Liao, H. Y. M. (2023). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7464-7475). https://doi.org/10.1109/CVPR52729.2023.00721
Yasir, M., Zhan, L., Liu, S., Wan, J., Hossain, M. S., Isiacik Colak, A. T., ... & Yang, Q. (2023). Instance segmentation ship detection based on improved Yolov7 using complex background SAR images. Frontiers in Marine Science, 10, 1113669. https://doi.org/10.3389/fmars.2023.1113669
Jocher, G., Chaurasia, A., & Qiu, J. (2023). YOLO by Ultralytics. https://github.com/ultralytics/ultralytics
Xia, K., Lv, Z., Zhou, C., Gu, G., Zhao, Z., Liu, K., & Li, Z. (2023). Mixed Receptive Fields Augmented YOLO with Multi-Path Spatial Pyramid Pooling for Steel Surface Defect Detection. Sensors, 23(11), 5114. https://doi.org/10.3390/s23115114
Liu, Y., Sun, Y., Xue, B., Zhang, M., Yen, G. G., & Tan, K. C. (2021). A survey on evolutionary neural architecture search. IEEE transactions on neural networks and learning systems.
Padilla, R., Netto, S. L., & Da Silva, E. A. (2020, July). A survey on performance metrics for object-detection algorithms. In 2020 International Conference on Systems, Signals and Image Processing (IWSSIP) (pp. 237-242). IEEE. https://doi.org/10.1109/IWSSIP48289.2020.9145130
Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28.
Li, X., Li, L., Flohr, F., Wang, J., Xiong, H., Bernhard, M., Pan, S., Gavrila, D. M., & Li, K. (2017). A unified framework for concurrent pedestrian and cyclist detection. IEEE Transactions on Intelligent Transportation Systems, 18(2), 269–281. https://doi.org/10.1109/TITS.2016.2567418
Lin, Y., Wang, P., & Ma, M. (2017). Intelligent Transportation System (ITS): Concept, Challenge and Opportunity. Proceedings - 3rd IEEE International Conference on Big Data Security on Cloud, BigDataSecurity 2017, 167–172. https://doi.org/10.1109/BigDataSecurity.2017.50
World Health Organization. (2018). Global status report on road safety 2018. In Global status report on road safety 2018: Summary (No. WHO/NMH/NVI/18.20).
License
Copyright 2024 REVISTA COLOMBIANA DE TECNOLOGIAS DE AVANZADA (RCTA)
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.