Intelligent Information System for Driver Assistance Using Artificial Intelligence and Computer Vision Technologies

Student's Name: Teliuk Artem Mykhailovych
Qualification Level: Master (ESP)
Speciality: System Analysis
Institute: Institute of Computer Science and Information Technologies
Mode of Study: full-time
Academic Year: 2024–2025
Language of Defence: English
Abstract: This thesis presents a software prototype of an intelligent driver assistance system (ADAS) capable of analyzing video from a front-facing camera in real time and delivering critical information about the road environment. The solution is intended for vehicles that lack built-in ADAS, which make up the majority of Ukraine's vehicle fleet [1]. The system operates autonomously, without interfering with vehicle control, and is designed to increase driver awareness, reduce accident risk, and enhance road safety. It is implemented in Python using a multithreaded modular architecture optimized for low-cost devices without a GPU.

The research problem stems from the limited accessibility of modern ADAS technologies for most drivers. In Ukraine, most cars are over 20 years old and lack integrated driver support tools [1]. Given increasing traffic loads, aging infrastructure, and a high rate of driver-related accidents, there is a clear need for an affordable solution that provides essential information in real time regardless of vehicle type [2, 3]. The proposed system uses computer vision and deep learning to detect lane markings, vehicles, traffic lights, road signs, and surface damage.

The methodology comprised a literature review, system modeling, architectural design, implementation, and testing. System analysis was performed using UML notations (use case, state, sequence, and class diagrams) and BPMN for business process modeling. A multithreaded modular architecture was proposed to allow flexible integration of functional subsystems. Core modules were designed for object detection, traffic sign recognition, lane detection, pothole identification, speed estimation, and perspective transformation. The training phase used datasets such as KITTI, BDD100K, CULane, and Roboflow. YOLOv8 and YOLOv11 models were adapted to run efficiently on CPU [4, 5, 6]. Special attention was given to the system's visual output.
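The multithreaded pipeline described in the abstract can be sketched as a producer/consumer arrangement: one thread reads camera frames while another runs detection, so a slow CPU-bound model never blocks capture. The sketch below is a minimal illustration using only the standard library; `fake_capture` and `fake_detect` are hypothetical stand-ins (not from the thesis) for the OpenCV capture loop and the YOLO forward pass.

```python
import queue
import threading

def fake_capture(n_frames):
    # Stand-in for a cv2.VideoCapture read loop: yields dummy frame ids.
    for i in range(n_frames):
        yield i

def fake_detect(frame):
    # Stand-in for a YOLO forward pass on CPU.
    return f"detections(frame={frame})"

def run_pipeline(n_frames=5, maxsize=2):
    # Bounded queue: backpressure keeps memory flat if detection lags capture.
    frames = queue.Queue(maxsize=maxsize)
    results = []

    def producer():
        for frame in fake_capture(n_frames):
            frames.put(frame)
        frames.put(None)  # sentinel: no more frames

    def consumer():
        while True:
            frame = frames.get()
            if frame is None:
                break
            results.append(fake_detect(frame))

    capture_thread = threading.Thread(target=producer)
    detect_thread = threading.Thread(target=consumer)
    capture_thread.start()
    detect_thread.start()
    capture_thread.join()
    detect_thread.join()
    return results

if __name__ == "__main__":
    print(run_pipeline(3))
```

Each functional subsystem (sign recognition, lane detection, pothole identification) could attach as a further consumer of the same frame queue, which is one way the "flexible integration of functional subsystems" mentioned above might be realized.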
Two operation modes were developed: a full mode for detailed debugging and a fast mode for minimal distraction. The interface includes LED-based alerts, distance estimation, collision warning zones, and icon-based information, and it supports configuration, calibration, and user personalization. Adaptation for platforms such as the Raspberry Pi is planned via model optimization, including NCNN and FP16 implementations.

Prototype testing confirmed the effectiveness of the chosen architectural approach. Evaluations of detection accuracy (mAP, Precision, Recall), stability, and performance demonstrated the system's readiness for real-world use. Tests across various hardware configurations identified performance bottlenecks and optimization opportunities.

Economic analysis validated the feasibility of development. The prototype's development and operating costs were compared with those of existing analogs, confirming economic viability. The system can recoup its investment within several years while delivering full ADAS functionality at a fraction of the cost; scalability offers further efficiency and affordability for mass deployment.

The object of study is the process of supporting driver decision-making through real-time video analysis. The subject of study is the methods and tools for building an intelligent computer vision system in resource-constrained environments.

The scientific novelty lies in the development of a comprehensive ADAS that operates on inexpensive hardware without interfering with vehicle control, together with novel approaches to architectural optimization and information delivery. The research results were presented at an international scientific conference and published in a peer-reviewed academic journal. The system has demonstrated practical value, technical feasibility, and economic justification.

Keywords: ADAS, computer vision, deep learning, YOLO, Python, video analysis, information system, multithreading, real-time.

References:
1. Shyrokun I.
How Many Cars Are There in Ukraine Really? // Auto24. 2023. URL: https://auto.24tv.ua/skilky_naspravdi_mashyn_v_ukraini_bahato_chy_malo_n43694
2. Gasser T. M., Frey A. T., Seeck A., Auerswald R. Comprehensive definitions for automated driving and ADAS // 25th International Technical Conference on the Enhanced Safety of Vehicles (ESV). Detroit, USA, 2017. Paper No. 17-0380.
3. Choi S., Thalmayr F., Wee D., Weig F. Advanced driver-assistance systems: Challenges and opportunities ahead // McKinsey & Company. February 2016. URL: https://www.mckinsey.com
4. Murthy J. S., Siddesh G. M., Lai W.-C., et al. ObjectDetect: A Real-Time Object Detection Framework for Advanced Driver Assistant Systems Using YOLOv5 // Wireless Communications and Mobile Computing. 2022. Article ID 9444360. DOI: 10.1155/2022/9444360.
5. Sapkota R., Ahmed D., Karkee M. Comparing YOLOv8 and Mask R-CNN for Object Segmentation in Complex Orchard Environments // Qeios. 2024. DOI: 10.32388/ZB9SB0.
6. Kim Y.-M., Kim Y.-G., Son S.-Y., et al. Review of Recent Automated Pothole-Detection Methods // Applied Sciences. 2022. Vol. 12, No. 11. Article 5320. DOI: 10.3390/app12115320.