- Open Access
- Authors : P Vikranth Reddy, Joshua D'Souza, Shradhya Rakshit, Sahil Bavariya, Priya Badrinath
- Paper ID : IJERTV11IS110085
- Volume & Issue : Volume 11, Issue 11 (November 2022)
- Published (First Online): 25-11-2022
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Survey on Driver Safety Systems using Internet of Things
P Vikranth Reddy, Sahil Bavariya, Shradhya Rakshit, Joshua D'Souza, Priya Badrinath
Dept. of Computer Science & Engineering, PES University, Bangalore
Abstract- Road accidents are prevalent worldwide, and a prime cause is driver drowsiness or driving under the influence. As more wireless communication, entertainment, and driver assistance systems permeate the automotive market, we anticipate a rise in the number of distraction-related collisions. With the aim of curbing such mishaps and saving lives, this paper presents a survey of methods for driver drowsiness and distraction detection based on behavioural measurements and deep learning techniques. These methods rely on cues such as yawning, object detection, head motion, blinking, and face detection. Creating a reliable and accurate drowsiness detection system remains a difficult endeavour because it requires accurate and robust algorithms. Several methods have been researched in the past to identify driver distraction and drowsiness, and these must be revisited in light of recent developments in deep learning to assess how well they identify drowsiness. A rigorous meta-analysis is performed on 24 studies that use various state-of-the-art models and deep learning techniques to detect driver drowsiness or distraction. From the papers analyzed below, we extract useful algorithms and techniques that can inform future work, improve accuracy, and overcome current difficulties.
Index Terms: Driver Drowsiness, Distraction Detection, Drunk Detection, Eye Aspect Ratio (EAR), Mouth Aspect Ratio (MAR), Deep Learning, Computer Vision, Convolutional Neural Network
I. INTRODUCTION
About 1.3 million people die each year due to road accidents, and an estimated 20 to 50 million people suffer non-fatal injuries. Statistics show that the primary causes are speeding, driving under the influence, fatigue, and distracted driving. The United Nations General Assembly has set the ambitious goal of halving the global road accident rate by 2030. The growing presence of in-vehicle devices such as cellular phones has contributed greatly to the increase in traffic accidents. Human error remains one of the main causes of such accidents, so a system that alerts the driver and discourages them from engaging in such behaviour would reduce accidents by a large margin. Driving under the influence has always been a recurring cause of accidents, and while several alerting systems have been deployed in vehicles over the years, the driver remains the primary actor in preventing such accidents when it comes to control and actions. To gain a better understanding of driver drowsiness and distraction detection methods, we present this detailed review of relevant work from recent years. Each method uses deep learning techniques, either trained from scratch or built on pretrained models, to perform multi-class classification tasks. Computer vision techniques are also employed to detect facial and whole-body features.
II. RELATED WORK AND RESEARCH
There have been several surveys and studies on driver safety systems using IoT. [1] Muhammad Fawwaz Yusri, et al. propose a simple drowsiness detection system based on the eye blink count derived from the Eye Aspect Ratio (EAR). Eyes are detected using HOG features and a linear SVM, and if the blink behaviour crosses the threshold the system issues a warning to prevent the driver from falling asleep. The article comprehensively analyzes the minimum requirements for the suggested strategy: the method performs well when the face is towards the camera, but less well if the head is sufficiently angled, and the authors note that their drowsiness detection technique will be improved on the basis of the assessments' findings. Drowsiness is detected by analyzing the movement of the eyelids, i.e., using landmark points on the eyes to monitor activity. The landmarks are tracked using the dlib library, which relies on an ensemble of regression trees. The EAR is about 0.35 when the eyes are open and drops rapidly below 0.15 as the eyes start to close, and maximum and minimum thresholds are calculated from these values. Average blinking rates and sleepy blinking rates are compared (between 3 participants) to determine the total points. The algorithm is unable to locate the eyes when the head is in certain postures (such as tilted upwards or downwards), and a change in facial expression, such as a smile, affects the EAR measurement. [2] Sulaiman, A. S., et al. propose an intelligent system that detects driver drowsiness and triggers a voice alarm to alert the driver. Current techniques have several limitations due to surrounding conditions: poor lighting affects the camera's ability to detect the face and the eyes, which delays or prevents detection and decreases accuracy by a considerable margin. The proposed real-time detection system uses a camera and image processing libraries to capture and study the eyes and the entire face and determine the state of the person driving. Feature extraction from facial landmarks is the primary approach: sixty-eight individual coordinates are identified on the face and tracked, and the eye region is precisely monitored and measured with respect to the Eye Aspect Ratio (EAR) relation, whose parameters are the facial landmarks detected earlier. A total of 20 trials were conducted to verify and settle the final EAR value, and using the accuracy formula the accuracy of the whole project came to about 80%.
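The EAR computation described above can be sketched as follows. This is a minimal illustration assuming dlib's standard 68-landmark model (with the usual eye-landmark indices), the scipy and OpenCV packages, and the 0.15 eye-closure value quoted in [1]; the frame-count threshold is an assumed value, not one reported by the surveyed papers.

```python
# Minimal sketch of EAR-based blink/drowsiness detection (assumes dlib's
# 68-landmark predictor file is available locally).
from scipy.spatial import distance as dist
import cv2
import dlib

EAR_CLOSED = 0.15          # eyes considered closed below this value (from [1])
CONSEC_FRAMES = 20         # frames of closure before raising an alert (illustrative)

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmark points, ordered as in dlib's 68-point model
    a = dist.euclidean(eye[1], eye[5])
    b = dist.euclidean(eye[2], eye[4])
    c = dist.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture(0)
closed_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        ear = (eye_aspect_ratio(pts[42:48]) + eye_aspect_ratio(pts[36:42])) / 2.0
        closed_frames = closed_frames + 1 if ear < EAR_CLOSED else 0
        if closed_frames >= CONSEC_FRAMES:
            print("Drowsiness alert")  # the surveyed systems trigger a buzzer or voice alarm here
```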
[3] Aryan Khan Pisuth, et al. aim to make sleepy drivers aware of their state while driving. The study tackles the problem with an experiment that calculates sleepiness levels using a Raspberry Pi 3 module and a Raspberry Pi Camera. Head tilt and blink frequency are used to determine whether the driver is drowsy: the face is detected in real time with the Raspberry Pi as the hardware component and image processing libraries that accurately detect the face and its features. The significant parameters are the head position and the eye blink rate. The region of interest (ROI) is continuously tracked and monitored for movement, and a Haar cascade classifier is used for face detection within the ROI in a given frame. [4] V. B. Navya Kiran, et al. employ machine learning techniques to forecast the driver's mental and emotional states. Such systems can learn effectively and improve automatically without much manual design, and the driver's state can be inferred from facial expressions, driving habits, and biometric signs. The article summarizes current research on systems that can identify drowsy drivers and alert them: the driver's state is ascertained using MATLAB image processing and a variety of machine learning techniques, including the PERCLOS algorithm, a Haar-based cascade classifier, and OpenCV. Finally, they list the issues with current systems and suggest appropriate research directions. [5] Belal Al Shaqaqi, et al. identify driver fatigue and drowsiness as the main contributing factors to traffic accidents worldwide, contributing to an increase in deaths and injuries each year. To lower the number of accidents caused by driver fatigue and raise transportation safety, they demonstrate an Advanced Driver Assistance System (ADAS) module that automatically detects driver drowsiness based on visual data from a camera and artificial intelligence. To measure PERCLOS, a scientifically validated measure of tiredness associated with slow eye closure, they propose an algorithm to find, track, and analyze the driver's face and eyes.
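Several of the above systems ([3], [4]) rely on Haar cascade classifiers to locate the face and eyes before any drowsiness logic runs. A minimal sketch of that step is shown below, assuming the cascade XML files bundled with OpenCV rather than the exact classifiers used in the surveyed work.

```python
# Minimal Haar-cascade face/eye localization sketch using OpenCV's bundled cascades.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]              # region of interest: the detected face
        eyes = eye_cascade.detectMultiScale(roi)  # eye boxes within the face ROI
        # downstream logic (blink rate, head tilt, PERCLOS) would consume these boxes
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```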
[6] Elena Magán, et al. present the development of an ADAS (advanced driver assistance system) focused on detecting driver drowsiness, which aims to alert drivers so as to avoid traffic accidents. It is crucial that drowsiness detection is carried out in a non-intrusive manner and that the driver is not startled by alarms when they are not drowsy. The approach uses image sequences up to 60 s long, recorded so that the driver's face is visible. Two alternative solutions for determining whether a driver is showing signs of drowsiness have been developed, focusing mainly on minimizing false positives: the first uses deep learning to extract numerical features from the images, while the second uses recurrent and convolutional neural networks whose outputs were subsequently fed into a fuzzy logic-based system. Accuracy of up to 65% was achieved with both systems; training accuracy was 65% and test accuracy was 60%. The fuzzy logic-based system stood out because it achieved a specificity of 93% without generating false alarms. Although the results are not very satisfactory, they can be considered a solid basis for future work. [7] Mahek Jain, et al. describe a system implemented to increase road safety by lowering accidents caused by driver exhaustion and drowsiness, which have caused the majority of recent accidents. Drowsiness and fatigue are indicated by particular expressions and movements of the eyes and mouth, such as yawning. The Eye Aspect Ratio (EAR) can identify tiredness by evaluating the separation between horizontal and vertical eye landmarks, while the distance between the lower and upper lip is used to compute a yawn value that is compared against a threshold for yawn detection. They use the eSpeak text-to-speech synthesizer to emit an audio alert when the driver starts to become drowsy or yawn. The proposed initiative aims to advance technologies that can lower the chance of accidents and stop fatal traffic collisions. [8] Md. Uzzal Hossain, et al. process images to train a model on different driver positions, including eating, texting, and talking to passengers. Pre-trained CNN-based deep learning models, including ResNet50 and MobileNetV2, are adopted to detect distracted driver actions. The model is then tested on held-out images and the results are examined. [9] Deep Ruparel, et al. note that accidents have been occurring at a high rate worldwide in recent years, and that most of the drivers involved were distracted while driving. The National Road Safety Authority claims that distracted drivers cause approximately one in every five car accidents. They create a model that determines whether a driver is distracted or driving safely, listing the deep learning approaches used for detection, such as the VGG16 model, MobileNet, a sequential model, and pre-trained VGG16 weights, which helped them develop a model capable of providing accurate and precise outcomes. Experiments reveal that their approach has a 93% accuracy rate.
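As a sketch of the transfer-learning setup used in studies such as [8] and [9], the following Keras snippet attaches a new classification head to a pretrained MobileNetV2 backbone for a distracted-driver task. The class count, input size, and training hyper-parameters are illustrative assumptions, not values reported by the surveyed papers.

```python
# Transfer-learning sketch: pretrained MobileNetV2 backbone + new classification head.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10          # e.g. safe driving plus nine distraction classes (assumed)
IMG_SIZE = (224, 224)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False    # freeze ImageNet features; fine-tune later if needed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data.Dataset objects built from a labeled image folder,
# e.g. tf.keras.utils.image_dataset_from_directory("train", image_size=IMG_SIZE)
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```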
[10] Rizwan Ali Naqvi, et al. acquire data using an NIR camera (ELP-USB500W02M) receiving light in the NIR range through an 850 nm band-pass filter; the NIR illuminator contains six NIR light-emitting diodes (LEDs) with an 850 nm wavelength, and the acquired NIR facial images are 640 × 512 pixels with eight bits per pixel. Each acquired image is fed simultaneously to two Dlib facial-feature trackers, and region-of-interest (ROI) images are obtained based on the 68 landmarks located by the trackers: ROI images of the face and of the left and right eyes are extracted from the corresponding landmarks using the two trackers. The ROIs obtained from the NIR input are used to extract feature points, namely the change in horizontal view (CHG) and the change in vertical view (CVG). [11] Tianchi Li, et al. construct and evaluate subject-dependent models using LapSVM and SS-ELM; in addition, SBN-supervised clustering, SVM, Transductive SVM, and ELM are implemented as comparisons to the proposed models. The performance of ELM and SVM remains the same even as more unlabeled data is added, while the performance of LapSVM and SS-ELM increases significantly, and LapSVM statistically outperforms SVM in all cases. With labeled data, LapSVM and SS-ELM outperform SVM and ELM respectively in all cases, though less improvement is observed as the amount of labeled data increases. The highest accuracy of 97.2% is achieved by SS-ELM. [12] Salma Anber, et al. propose an SVM-based deep learning model that detects distraction by focusing only on the driver's face. Frames taken from the NTHU drowsy-driving video dataset are processed and fed into a pre-trained AlexNet CNN, and the feature set is reduced using NMF. The model uses the driver's head position to detect distraction, the most common cue being whether the driver is looking away from the road; this caters only to a very specific category of distraction, which is not adequate for a fully correct prediction. Talking and blinking are considered normal states, and only yawning alerts the model as a sign of fatigue. The head dataset and the mouth dataset both resulted in 95% accuracy.
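The pipeline below illustrates the general pattern used in [12]: features are extracted from face crops with a pretrained CNN, reduced with NMF, and a conventional SVM is trained on top. The backbone (VGG16 rather than AlexNet, which Keras does not ship), the chosen pooling, the component count, and the dummy data are all assumptions made for illustration.

```python
# Sketch: pretrained-CNN feature extraction, NMF reduction, then an SVM classifier.
import numpy as np
import tensorflow as tf
from sklearn.decomposition import NMF
from sklearn.svm import SVC

# Pretrained backbone used purely as a fixed feature extractor.
backbone = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    # images: float array of shape (n, 224, 224, 3) with values in [0, 255]
    x = tf.keras.applications.vgg16.preprocess_input(images)
    return backbone.predict(x, verbose=0)        # (n, 512) feature vectors

# Hypothetical data: face crops and binary labels (0 = attentive, 1 = distracted).
images = np.random.rand(32, 224, 224, 3).astype("float32") * 255.0
labels = np.random.randint(0, 2, size=32)

features = extract_features(images)
features = NMF(n_components=16, init="nndsvda", max_iter=500).fit_transform(
    np.abs(features))                             # dimensionality reduction, as in [12]
clf = SVC(kernel="rbf").fit(features, labels)
print(clf.predict(features[:4]))
```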
[13] Wanghua Deng, et al. observe that the face, a crucial bodily feature, communicates a great deal of information: facial expressions such as blinking and yawning occur more frequently than usual when a driver is fatigued. They present a system called DriCare, which uses video images to detect the level of driver drowsiness from cues such as yawning, blinking, and duration of eye closure, without fitting the driver's body with devices. Owing to the shortcomings of preceding systems, a new face-tracking algorithm was introduced to boost tracking accuracy, and a new detection algorithm for facial regions was created based on 68 key features. The driver's condition is then evaluated from these facial areas: DriCare combines the characteristics of the lips and eyes and can notify the driver through a fatigue warning. According to the trial findings, DriCare had an accuracy of about 92%. [14] Rupali Pawar, et al. define a Convolutional Neural Network (CNN) model that identifies drowsiness based on the driver's eyelids closing, and discuss the potential for a stand-alone, low-cost system mounted inside the vehicle. The main components of this system would be the CNN model, a Raspberry Pi microcontroller, and a webcam for capturing the driver's facial images. A score is computed based on how long the eyes stay closed: when the score exceeds a predefined threshold, a beeping alarm is played to warn the user, and while the eyes are open the score stays at zero. Combined with a Raspberry Pi and powered from the vehicle's battery, the device can be readily mounted inside a car and serve as a continuous monitor for the driver.
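A sketch of the score-and-alarm logic described for [14] is given below. It assumes a pre-trained binary eye-state classifier (the hypothetical `eye_model`, a Keras-style model producing a closed-eye probability), a hypothetical `crop_eyes` helper, and illustrative threshold values.

```python
# Sketch of the running drowsiness score from [14]: the score grows while the eyes
# are closed and resets to zero once they open; an alarm fires past a threshold.
# `eye_model` and `crop_eyes` are hypothetical stand-ins for the paper's components.
import numpy as np

SCORE_THRESHOLD = 15      # frames of closure before alarming (illustrative)

def update_score(score, closed_prob, closed_cutoff=0.5):
    """Increment the score while the eyes look closed, otherwise reset it."""
    return score + 1 if closed_prob > closed_cutoff else 0

def monitor(frames, eye_model, crop_eyes):
    score = 0
    for frame in frames:
        eye_patch = crop_eyes(frame)                      # e.g. a Haar/dlib eye ROI
        closed_prob = float(eye_model.predict(
            np.expand_dims(eye_patch, 0), verbose=0)[0][0])
        score = update_score(score, closed_prob)
        if score >= SCORE_THRESHOLD:
            print("Drowsiness alarm")                     # beep / buzzer in the real system
            score = 0
```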
[15] Zuopeng Zhao, et al. propose a fully automated driver drowsiness identification system that works from driving photos, with an emphasis on drowsy-driving detection research. The region of interest (ROI) is extracted using the multitask cascaded convolutional network (MTCNN) architecture for face detection and feature-point localization, and an EM-CNN is proposed to recognize the states of the mouth and eyes from the ROI images. Two criteria are used for detecting exhaustion: the percentage of eyelid closure over the pupil over time (PERCLOS) and the degree of mouth opening (POM). Experimental results show that the proposed EM-CNN can effectively identify driver fatigue from driving photos and performs better than previous CNN-based techniques, i.e., AlexNet, VGG-16, GoogLeNet, and ResNet50, achieving accuracy and sensitivity of 93.623% and 93.643% respectively. [16] Prof. Ankita, et al. present an Advanced Driver Assistance System (ADAS) module that deals with automatic driver sleepiness detection supported by visual data and artificial intelligence, seeking to improve transportation safety by reducing the frequency of accidents caused by fatigued drivers. They recommend a technique for locating, monitoring, and examining the driver's face and eyes for PERCLOS, a scientifically validated drowsiness signal associated with slow eye closure.
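PERCLOS and POM, the two fatigue criteria used by [15] (PERCLOS is also used by [5] and [16]), are simple ratios computed over a sliding window of frames. The following sketch assumes per-frame eye and mouth state labels coming from an upstream classifier and uses illustrative window and threshold values.

```python
# Sketch of PERCLOS / POM computation over a sliding window of per-frame states.
from collections import deque

WINDOW = 150               # e.g. 5 seconds at 30 fps (illustrative)
PERCLOS_LIMIT = 0.40       # fraction of closed-eye frames that flags fatigue (assumed)
POM_LIMIT = 0.50           # fraction of open-mouth (yawning) frames (assumed)

eye_states = deque(maxlen=WINDOW)    # 1 = eyes closed, 0 = open
mouth_states = deque(maxlen=WINDOW)  # 1 = mouth wide open, 0 = otherwise

def update(eye_closed: bool, mouth_open: bool) -> bool:
    """Push the latest frame's states and return True if fatigue is indicated."""
    eye_states.append(int(eye_closed))
    mouth_states.append(int(mouth_open))
    if len(eye_states) < WINDOW:
        return False                          # not enough history yet
    perclos = sum(eye_states) / len(eye_states)
    pom = sum(mouth_states) / len(mouth_states)
    return perclos > PERCLOS_LIMIT or pom > POM_LIMIT
```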
[17] Jimiama Mafeni Mase, et al. provide an in-depth study of ten state-of-the-art CNN and RNN techniques for distracted-driver and driver-posture detection. The dataset used is the American University in Cairo (AUC) Distracted Driver Dataset, and the assessment metrics are cross-entropy loss, accuracy, and F1-score. The CNN techniques include AlexNet, VGG, ResNet, Inception, and DenseNet. RNNs are neural networks with feedback loops that link the output of one state to the next; they have only short-term memory because of vanishing gradients. By adding forget, input, and output gate layers to the memory cell, LSTMs, an extension of RNNs, can learn both short- and long-term dependencies in the data; stacked LSTMs have multiple hidden layers rather than one, and a BiLSTM model processes the input in both the forward and backward directions and therefore has more information at its disposal. The InceptionV3-BiLSTM outperformed all other models with an average accuracy of 93%. This study was conducted only on still images; live video was not considered. [18] Jimiama Mafeni Mase, et al. introduce a posture-detection method for distraction using CNNs and stacked Bidirectional Long Short Term Memory (BiLSTM) networks to capture spectral-spatial features of the images, again on the AUC Distracted Driver Dataset. The process has two stages: the output of the CNNs (8 × 8 feature maps with 64 features) is fed into the BiLSTM, which extracts spectral features in the forward and backward directions, and the two outputs are then concatenated and passed to a fully connected layer for classification. The accuracy achieved is 92.7% when trained and tested on the chosen dataset. This study does not take into account actual video sequences or real-time image capture, which would have been a better way of testing the model.
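The hybrid architecture of [18] can be sketched roughly as below: a CNN backbone yields an 8 × 8 grid of 64-dimensional features, the grid is read as a 64-step sequence, and a stacked BiLSTM plus a dense layer classifies it. The backbone choice, layer sizes, and class count are assumptions; the original paper's exact configuration is not reproduced here.

```python
# Rough sketch of a CNN + stacked BiLSTM classifier over image feature maps.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10   # AUC Distracted Driver posture classes (assumed here)

inputs = layers.Input(shape=(256, 256, 3))
x = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet")(inputs)
# The backbone output is a small spatial grid of feature vectors, e.g. (8, 8, C).
x = layers.Conv2D(64, 1, activation="relu")(x)           # project channels to 64
x = layers.Reshape((-1, 64))(x)                           # 8*8 = 64 time steps of 64 features
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(64))(x)              # stacked BiLSTM
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```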
[19] Alexey Kashevnik, et al. produce a structured summary of distraction-detection methods covering three main categories: manual, visual, and cognitive distraction. Their framework helps visualize the information gathered from the sensors used, the measured data, calculated data, calculated events, derived behaviors, and derived distraction types. They point out that, despite all technological advances, the human driver will continue to play the biggest role as supervisor of the system, i.e., the one who must take charge when the automated system requests it. They review all relevant literature in accordance with a predetermined standard; however, the survey lacks some recently established methodologies, such as descriptions of the sensors used, data measurement, computed information, computed events, inferred driver behavior, and inferred distraction kinds. [20] Yingji Zhang, et al. present a multi-channel CNN to classify the driver's gaze area and assess gaze activity; the classification covers two categories, gaze distraction and cognitive distraction. The car is divided into nine sections from the driver's perspective, and a camera is placed near the rear-view mirror. To generate a static dataset, twenty-three drivers were asked to stare at nine specific areas in succession in the same vehicle during daytime. The video stream is converted into frames and annotated using OpenCV, and four sub-images are extracted: the left eye, right eye, face, and head. Gaze activity is detected by tracking the driver's gaze over six consecutive frames, and cognitive distraction is assumed to occur when gaze activity is reduced because the brain's attention has shifted. The accuracy achieved is 95%. This method does not take into account full-body posture changes, which can occur even without any variation in gaze activity.
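A minimal sketch of a multi-channel CNN in the spirit of [20] follows: four input branches (left eye, right eye, face, head) are each passed through a small convolutional stack, concatenated, and classified into nine gaze regions. All layer sizes and input resolutions are assumptions made for illustration, not the paper's configuration.

```python
# Sketch of a four-branch ("multi-channel") CNN for gaze-region classification.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_REGIONS = 9   # nine gaze areas inside the cabin, as described above

def branch(name, shape=(64, 64, 3)):
    inp = layers.Input(shape=shape, name=name)
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return inp, x

inputs, features = zip(*(branch(n) for n in ["left_eye", "right_eye", "face", "head"]))
x = layers.Concatenate()(list(features))
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(NUM_REGIONS, activation="softmax")(x)

model = models.Model(list(inputs), outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```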
[21] Adnan Shaou, et al. define a slight variation of the SqueezeNet deep learning architecture, trained on the AUC Distracted Driver Dataset. SqueezeNet is a CNN architecture with far fewer parameters that still maintains a comparable level of accuracy; the version used in this paper is SqueezeNet 1.1, which has 1,235,496 parameters and therefore a lower computation time. After modification the model contains 895,554 parameters, about 1.4 times fewer than SqueezeNet 1.0, because the final layer of SqueezeNet is replaced so that it outputs two classes instead of 1000. The input images were shrunk to 224 × 224 pixels, and L2 regularization of the network's neurons was the final change made to the model. The nine classes of distracted drivers are combined into a single class called distracted driving, and the remaining one is called safe driving. Since distracted driving is now depicted in far more images than safe driving (8958 images of distracted driving versus 2720 images of safe driving), the training data is imbalanced and SqueezeNet may become biased or overfit towards detecting distraction everywhere. The model is deployed on Jetson Nano hardware, and an accuracy of 93% was achieved.
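The two-class modification of SqueezeNet 1.1 described in [21] can be sketched as below. torchvision is used here purely for illustration (the surveyed paper's framework is not specified), and L2 regularization is approximated via optimizer weight decay; treat the values as assumptions.

```python
# Sketch: replace SqueezeNet 1.1's 1000-way classifier with a 2-way one (224x224 inputs).
import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.squeezenet1_1(weights="IMAGENET1K_V1")
# SqueezeNet's classifier head is a 1x1 convolution; swap it for a 2-class version.
model.classifier[1] = nn.Conv2d(512, 2, kernel_size=1)
model.num_classes = 2

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, weight_decay=1e-4)  # L2 via weight decay
criterion = nn.CrossEntropyLoss()

dummy = torch.randn(1, 3, 224, 224)
print(model(dummy).shape)   # torch.Size([1, 2]): distracted vs. safe driving
```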
[22] Zaeem Ahmad Varaich, et al. propose two DCNN architectures (Inception V3 and Xception) and compare them. Inception V3 has 44 layers and 21M learnable parameters, while Xception is a 36-layer, VGG-like architecture with 20M parameters. Xception slightly outperforms Inception V3 on the ImageNet dataset, with 79% accuracy compared to 78.2%, and it converges faster, although more variation can be seen in Xception's validation graphs than in Inception's. Xception also takes more training time than the Inception model, at 28 steps/sec versus 31 steps/sec respectively. This study focuses on only one specific distraction, so its scope is limited.
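Both architectures compared in [22] are available as pretrained Keras applications; the sketch below simply loads them and reports their parameter counts, which is the starting point for the kind of head-to-head fine-tuning comparison the paper performs (the training setup itself is not reproduced here).

```python
# Load Inception V3 and Xception backbones and compare their sizes.
import tensorflow as tf

for name, ctor in [("InceptionV3", tf.keras.applications.InceptionV3),
                   ("Xception", tf.keras.applications.Xception)]:
    model = ctor(weights="imagenet", include_top=False, pooling="avg")
    print(f"{name}: {model.count_params():,} parameters")
# Either backbone can then be extended with a softmax head over the distraction
# classes and fine-tuned, as in the transfer-learning sketch shown earlier.
```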
III. CHALLENGES
There are several challenges in this area, a few of which are listed below.
- Deep learning models with higher accuracy than existing models need to be developed.
- A program is also needed that brightens images taken at night, and the hardware should not lag while running it (a generic sketch of such brightening is given after this list).
- Nowadays, people often do not hold phones in their hands and instead use hands-free Bluetooth earphones; detecting such tiny earphones is a very difficult task.
- A more futuristic idea would be to stop the car when the driver is feeling drowsy.
- Alerting not only the driver but also nearby authorities could be a provision that would further prevent damage.
- Beyond fatigue and the pre-existing distraction categories, emotional analysis of the driver could also be taken into account, since unstable emotions can also lead the driver to cause accidents.
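For the night-image brightening challenge mentioned above, a common lightweight approach is gamma correction combined with CLAHE on the lightness channel. The snippet below is a generic OpenCV sketch under those assumptions, not a method taken from the surveyed papers.

```python
# Sketch: brighten a dark frame with gamma correction plus CLAHE on the L channel.
import cv2
import numpy as np

def brighten(frame, gamma=1.8):
    # Gamma correction via a lookup table (gamma > 1 brightens dark regions).
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype("uint8")
    frame = cv2.LUT(frame, table)
    # CLAHE equalizes local contrast on the lightness channel only.
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
```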
IV. FINDINGS AND FUTURE SCOPE
This paper summarizes the current mass-production technology and the main detection/estimation methods for wakefulness and distraction detection [Table 1]. As for future sensing and estimation technologies, the cost of the systems in which they are installed will depend on the cost of the component sensors. However, if the use of machine learning (deep learning) progresses alongside the installation of on-board computers and GPUs, the accuracy of these systems will improve, and the number of installations in vehicles is expected to increase. Evaluation in a real-time vehicle environment is still required, however.
In the more distant future, at autonomous driving levels where the human is no longer the primary driver, there will be no need to use this technology to detect and assess fatigue for accident prevention. In the era of autonomous driving, fatigue and distraction detection systems will instead help drivers feel more comfortable inside the vehicle: the drowsiness detection/estimation system becomes an application that detects driver drowsiness and then helps the driver sleep comfortably and reach their destination rested. In other words, in the
future, drowsiness and distraction detection systems will function as systems that realize comfortable driving, reducing driver fatigue when using a car and enabling drivers to arrive at their destination without being tired. The drowsiness detection/estimation system for level 4 and 5 automated driving will be applied not only to automobiles but also to other sleep-related fields and industries, detecting drowsiness and keeping people comfortable.
Table 1: Summary of surveyed methods and reported accuracies

| Reference No | Authors | Methods Used | Accuracy |
| --- | --- | --- | --- |
| [6] | Elena Magán, et al. | Fuzzy logic-based system | 65% |
| [9] | Deep Ruparel, et al. | Deep learning algorithms such as the VGG16 model, MobileNet, a sequential model, and pre-trained VGG16 weights | 93% |
| [11] | Tianchi Li, et al. | LapSVM and SS-ELM, with SBN-supervised clustering, SVM, Transductive SVM, and ELM as comparisons | 97.2% |
| [12] | Salma Anber, et al. | SVM-based deep learning model for distraction detection, pre-trained AlexNet CNN | 95% |
| [13] | Wanghua Deng, et al. | DriCare, detection algorithm based on 68 critical facial features | 92% |
| [15] | Zuopeng Zhao, et al. | Multitask Cascaded Convolutional Network (MTCNN), Percentage of Eyelid Closure Over the Pupil Over Time (PERCLOS), EM-CNN compared against CNN-based techniques (AlexNet, VGG-16, GoogLeNet, ResNet50) | 93.6% |
| [18] | Jimiama Mafeni Mase, et al. | Stacked Bidirectional Long Short Term Memory (BiLSTM) capturing spectral-spatial features; 64 CNN features fed into the BiLSTM | 92.7% |
| [20] | Yingji Zhang, et al. | Multi-channel CNN; video stream converted into frames and annotated using OpenCV | 95% |
| [21] | Adnan Shaou, et al. | SqueezeNet CNN architecture deployed on a Jetson Nano | 93% |
V. CONCLUSION
This survey was performed to understand the importance of drowsiness and distraction in driving and to obtain a comprehensive understanding of the models and techniques employed. According to the survey, detecting driver drowsiness and driver distraction is accomplished through various deep learning and computer vision techniques, and numerous datasets relevant to drowsiness and distraction are available on Kaggle. This study only covers work conducted in daylight or under well-lit conditions, where facial and whole-body recognition is most accurate; consequently, most of the surveyed studies are not fully effective in all conditions and rely on certain assumptions. Risks should be identified and addressed, and an analysis was conducted to identify functional and non-functional requirements. Around 24 references were studied and analyzed for the literature review. For drowsiness detection, mainly facial features have been considered, whereas for distraction detection the entire body and its various positions/postures have been considered. Deep learning models such as CNNs and their pretrained variants have proven to be the most effective models for these classification tasks, and the majority of the studies yielded an accuracy of 80% to 93%. Future studies on this topic can take into consideration the lighting conditions and also the emotional state of the driver.
VI. REFERENCES
A. Shinde, Journal of Emerging Technologies and Innovative Research (JETIR), May 2021, Volume 8, Issue 5.
[17] Benchmarking Deep Learning Models for Driver Distraction Detection, Jimiama Mafeni Mase, Peter Chapman, Grazziela P. Figueredo, Mercedes Torres Torres, International Conference on Machine Learning, Optimization, and Data Science, January 2020.
[18] A Hybrid Deep Learning Approach for Driver Distraction Detection, Jimiama Mafeni Mase, Peter Chapman, Grazziela P. Figueredo, Mercedes Torres Torres, Information and Communication Technology Convergence 2020, September 2020.
[19] Driver Distraction Detection Methods: A Literature Review and Framework, Alexey Kashevnik, Roman Shchedrin, Christian Kaiser, Alexander Stocker, IEEE Access, April 2021.
[20] Driving Distraction Detection based on Gaze Activity, Yingji Zhang, Xiaohui Yang, Zhiquan Feng, John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology, Vol. 57, No. 22, October 2021.
[21] An Embedded Deep Learning Computer Vision Method for Driver Distraction Detection, Adnan Shaou, Benjamin Roytburd, Luis Alejandro Sanchez-Perez, 22nd International Arab Conference on Information Technology (ACIT), 2021.
[22] Recognizing Actions of Distracted Drivers using Inception V3 and Xception Convolutional Neural Networks, Zaeem Ahmad Varaich, Sidra Khalid, 2019 2nd International Conference on Advancements in Computational Science (ICACS), 18 February 2019.