AIS: Video Surveillance Using Artificial Intelligence for the Old-Aged

DOI : 10.17577/IJERTCONV11IS04016


  • Open Access
  • Authors : Nimna Joseph, Sona Elizebeth Shaji, Prof. Ashly Thomas, Niveditha P M, Vishnupriya R
  • Paper ID : IJERTCONV11IS04016
  • Volume & Issue : Volume 11, Issue 04
  • Published (First Online): 01-07-2023
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License : This work is licensed under a Creative Commons Attribution 4.0 International License


AIS: Video Surveillance Using Artificial Intelligence for the Old-Aged


Nimna Joseph

Dept. of Computer Science and Engineering, St. Joseph's College of Engineering and Technology, Palai, Kottayam, Kerala

Niveditha P M

Dept. of Computer Science and Engineering, St. Joseph's College of Engineering and Technology, Palai, Kottayam, Kerala

Sona Elizebeth Shaji

Dept. of Computer Science and Engineering, St. Joseph's College of Engineering and Technology, Palai, Kottayam, Kerala

Vishnupriya R

Dept. of Computer Science and Engineering, St. Joseph's College of Engineering and Technology, Palai, Kottayam, Kerala

Prof. Ashly Thomas

Dept. of Computer Science and Engineering, St. Joseph's College of Engineering and Technology, Palai, Kottayam, Kerala

Abstract: The number of senior citizens living alone in the country is growing rapidly, driven by the shift to nuclear families and the lack of job opportunities in hometowns. Elderly people who live alone face increased health risks from neglect, carelessness and loneliness, and this age group is also prone to cognitive impairments such as Alzheimer's disease, along with increased stress, anxiety, feelings of unworthiness and depression. The risk of emergency situations is therefore high, and this group is also a frequent target of fraudsters. Although wearable tracking systems are a common solution, tracing events with them is difficult: identifying an event requires knowing its time of occurrence and involves an enormous amount of data work. We therefore propose a video surveillance system with automatic anomaly detection techniques. Advances in artificial intelligence enable the rapid and automatic identification of nominal and anomalous events. A sequential and incremental learning approach to feature extraction helps build a model that provides much more accurate anomaly classifications and predictions, which can in turn alert those who need to act immediately. Various neural network algorithms are useful for the automatic identification of abnormal situations, and with artificial intelligence we can identify the features and frequency of occurrence of abnormal events. Immediate identification of abnormal events helps reduce the number of victims of a situation, or even prevents it altogether, thereby enhancing safety.

Index Terms: Anomaly Detection, Surveillance, Abnormal

  1. INTRODUCTION

    With the constant development of the economy and society, the aging of the population in our country is becoming an increasingly serious problem. It is estimated that the number of people over 60 will exceed 300 million, accounting for 20.7% of the total population by 2025 [17]. With the continuing increase in the number of elderly people, the number of elderly people living alone is also growing every day, which makes ensuring their day-to-day safety an urgent concern.

    Domestic research shows that falls are the second leading cause of death from accidents and unintentional injuries, and the leading cause of injury-related death among people over 65 [3]. Medical surveys show that if effective treatment is received promptly after a fall, the risk of death can be reduced and the survival rate of the elderly can be increased. Therefore, an effective and practical fall detection system for the elderly needs to be built with advanced science and technology, one that can detect and identify falls in time and send a warning, so as to reduce the injuries caused by falls and ensure the safety of elderly people who live alone. Research on elderly fall detection is thus highly necessary and has important social meaning and practical value [4].

    With a novel multi-channel 1-D convolutional neural network architecture, we are developing a fall detection system. We replace the manually created feature extraction process in HAR with an automated feature learning engine [13]. CNNs have the simplest training procedure when compared to deep designs such as recurrent neural networks and long short-term memory networks. We take advantage of the CNN's convolution operation (which computes a combination of neighbouring sensor values) and its pooling operation (which makes the representation invariant to small translations of the input) to identify subtleties in the data [6]. Compared to previously published architectures, the one we use is new in that it applies batch normalisation to the CNN architecture for HAR and is used for time-series data analysis.
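    As an illustration only, and not a description of the exact network used in this work, a minimal multi-channel 1-D CNN block with batch normalisation for windowed time-series input might be sketched as follows in Keras; the window length of 128 samples and six input channels follow the pre-processing described later, while the filter counts, kernel sizes and two-class output are assumptions.

    # Minimal sketch (not the exact network from this work) of a
    # multi-channel 1-D CNN with batch normalisation for windowed
    # time-series input. Filter counts, kernel sizes and the two-class
    # output are illustrative assumptions.
    from tensorflow.keras import layers, models

    def build_1d_cnn(window_len=128, n_channels=6, n_classes=2):
        model = models.Sequential([
            layers.Input(shape=(window_len, n_channels)),
            # convolution along the time axis, shared across all channels
            layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
            layers.BatchNormalization(),
            layers.MaxPooling1D(pool_size=2),
            layers.Conv1D(128, kernel_size=5, padding="same", activation="relu"),
            layers.BatchNormalization(),
            layers.MaxPooling1D(pool_size=2),
            layers.GlobalAveragePooling1D(),
            layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_1d_cnn()
    model.summary()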

  2. OBJECTIVE AND SCOPE

    The main objectives of our project are to construct a model that detects when a person has fallen in an indoor environment and to allow health-care workers to be notified so that they can take the necessary actions. The proposed solution is a deep learning based automatic fall detection algorithm implemented in a multi-camera video surveillance system. Advances in artificial intelligence enable the quick and automatic identification of nominal and anomalous events, and AI systems are anticipated to improve the ability of video surveillance to provide notifications and alarms. The algorithm uses each camera to fetch images from the regions to be monitored and then applies a fall recognition algorithm to determine whether a fall has occurred; if so, the system sends short messages to the people concerned. We aim to obtain a sensor-free deep learning method that is highly accurate and fast in its decision making.

  3. BACKGROUND

    Several authors have developed a wide range of static and shallow feature-based classical machine learning models. HAR overviews illustrate the strengths and limitations of various statistical machine learning models such as Support Vector Machines (SVMs) [13]. One of their key weaknesses was that developers had to decide, through trial and error, which features were appropriate for the task. To overcome this handcrafting, researchers have switched to deep learning models, in which feature extraction is incorporated into the modelling itself.

    1. Related Deep Learning Works

      CNNs have recently been applied to the problem of activity recognition; convolutional networks are composed of one or more convolutional and pooling layers followed by one or more fully connected layers [6].

      1. Some authors proposed activity recognition classification using a deep CNN and long short-term memory (LSTM) on two open datasets collected with inertial measurement units and wearable sensors. After converting the sensory data into sensor signal graphs, they classified hand gestures such as opening a door, washing dishes and cleaning a table, as well as movements such as standing, walking, sitting, lying and the null class, using a combination of CNN and LSTM [15].

      2. Another attempt designed a CNN with convolutional layers applied along the time axis and over all the sensors simultaneously, using two or three such layers followed by a pooling layer and a softmax classifier [2].

      3. Several concurrent methods implemented CNN-based architectures and showed better performance when compared to shallower or handcrafted methods based on Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), K-Nearest Neighbours (KNN) and SVMs.

      4. CNNs were also used to distinguish activities using data from inertial sensors on the body [20]. This approach performed well and was better suited to low-power devices, but it reintroduced the extraction of handcrafted features and requires multiple sensor devices.

  4. LITERATURE SURVEY

    1. Wearable Detectors

      The user's stride and orientation are measured by dispersed body-worn sensors, which sound or transmit an alarm when a sophisticated algorithm detects motion that resembles a fall [4]. Options include pendants, watches, spectacles, belts worn around the waist, and various combinations of these. In the event that the user becomes incapacitated by a fall, they are all built to automatically inform first responders. Some even actively remind the user to correct their posture in order to avoid falling in the first place. However, just as with any other garment, there is no assurance that the device will actually be worn; elderly users frequently forget to wear their sensors or choose not to do so.

    2. Pressure and Motion Sensors

      In order to alert carers to the movement of senior patients and residents, hospitals and aged-care facilities frequently use a combination of motion and pressure sensors installed on beds, chairs, toilet seats or flooring [4]. Many of these sensors also trigger an audible alarm or a pre-recorded message warning the user to stay put until help arrives. The movement of a patient or resident can be reliably detected by these sensors, but not every movement is a sign of a fall; in fact, even the smallest movement of the user can trigger a false alarm.

    3. Multisensor Technology

    Although no fall detection or prevention solution is 100% effective, most experts agree that combining the available sensor technologies can greatly increase system reliability. Elders are also increasingly accepting of such monitoring, because surveillance is a means of expanding their options for independent living. Integrating thermal video surveillance with other open-platform-based fall detection systems enables more efficient and reliable fall detection and prevention. More importantly, this approach provides peace of mind for elders, their families and their caregivers.

  5. PROPOSED SYSTEM

    We propose an algorithm that detects falls using human activity recognition. The graphical user interface we are planning is an Android application. We aim to collect an image dataset of normal situations, which will be preprocessed, and keypoints will be extracted using open pose estimation. The system is divided into five modules: Registration, Pre-Training, CNN, LSTM and Alerting. Classification is done with a CNN-LSTM model: the network parameters are set up and the model is trained on the keypoint features and labels. After loading the trained model, each live video frame from the webcam is captured, preprocessed, and passed to the model for prediction. Finally, the fall status is sent to the Android application, where the user can view the fall alert; the alert can also be sent to emergency contacts if there is no response from the person within a set time interval.

    Fig. 1. System Architecture

    Figure 1 shows the architecture diagram of the proposed system.
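    The per-frame inference flow just described can be pictured with the following hedged sketch; extract_keypoints (standing in for a pose estimator such as OpenPose), send_fall_alert (the push to the Android application) and the model file name are hypothetical placeholders, not the project's actual code.

    # Hedged sketch of the live inference loop: grab webcam frames,
    # extract pose keypoints, classify a rolling window of frames with the
    # trained model, and push an alert when a fall is predicted.
    # extract_keypoints(), send_fall_alert() and the model file name are
    # hypothetical placeholders.
    import cv2
    import numpy as np
    import tensorflow as tf

    model = tf.keras.models.load_model("fall_cnn_lstm.h5")  # assumed file name

    SEQ_LEN = 30        # assumed number of frames per prediction
    FEAT_DIM = 50       # assumed length of the keypoint feature vector

    def extract_keypoints(frame):
        """Placeholder for pose estimation (e.g. a wrapper around OpenPose);
        should return a fixed-length keypoint feature vector for one frame."""
        return np.zeros(FEAT_DIM, dtype=np.float32)  # replace with real keypoints

    def send_fall_alert(probability):
        """Placeholder for the push notification to the Android application."""
        print(f"FALL ALERT (confidence {probability:.2f})")

    cap = cv2.VideoCapture(0)   # default webcam
    sequence = []               # rolling window of per-frame keypoint features

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        sequence.append(extract_keypoints(frame))
        if len(sequence) == SEQ_LEN:
            batch = np.expand_dims(np.stack(sequence), axis=0)  # (1, SEQ_LEN, FEAT_DIM)
            fall_probability = float(model.predict(batch, verbose=0)[0][1])
            if fall_probability > 0.5:
                send_fall_alert(fall_probability)
            sequence = []
    cap.release()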

    1. Module Description

      1. Registration: This module covers user registration and patient-details registration. A user can register with the system either as a verified or a non-verified user; non-verified users create their account by providing their name and basic contact details. The user then enters the patient's details, such as name, age, existing diseases and location. The patient's location is collected during registration in order to provide efficient help when a request is made. An illustrative sketch of these records follows.
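        Purely as an illustration of the data collected at registration (field names follow the text above; the types and the emergency-contact list are assumptions), the records might be modelled as:

        # Illustrative sketch of the records collected at registration; field
        # names follow the description above, while types and the
        # emergency-contact list are assumptions.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class User:
            name: str
            contact: str            # basic contact details
            verified: bool = False  # verified vs. non-verified user

        @dataclass
        class Patient:
            name: str
            age: int
            diseases: List[str]
            location: str           # used to route help quickly
            emergency_contacts: List[str] = field(default_factory=list)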

      2. Pre-Training: On the training and test datasets we perform the following steps (a combined sketch follows this subsection):

        1. Scaling and Normalization

          Scaling across the channels is performed to prevent any training bias brought on by the direct use of large values from any of the six channels. Using the Python scikit-learn module, we apply a min-max normalization function to transform all values of each channel into the range between 0 and 1.

        2. Segmentation

        After scaling the raw data, the six-channel input time series is divided into 1x128 windows so that any temporal relationship between the data within an activity can be explored by the convolutional filters. The ideal window size was determined empirically and adaptively to create effective segmentation for all the activities considered.
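        A hedged sketch of these two pre-processing steps, assuming the six raw channels are held in a NumPy array of shape (n_samples, 6); the use of scikit-learn's MinMaxScaler and the 50% window overlap are assumptions.

        # Hedged sketch of the pre-processing stage: per-channel min-max
        # scaling to [0, 1] followed by segmentation into 1x128 windows.
        # The 50% window overlap (stride of 64) is an assumption.
        import numpy as np
        from sklearn.preprocessing import MinMaxScaler

        def preprocess(series, window_len=128, stride=64):
            """series: array of shape (n_samples, 6) holding the six raw channels."""
            scaled = MinMaxScaler().fit_transform(series)   # each channel scaled to [0, 1]
            windows = [scaled[start:start + window_len]
                       for start in range(0, len(scaled) - window_len + 1, stride)]
            return np.stack(windows)                        # shape: (n_windows, 128, 6)

        # quick check with synthetic six-channel data
        raw = np.random.randn(1000, 6)
        print(preprocess(raw).shape)                        # (14, 128, 6)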

      3. CNN: In the CNN architecture, the convolutional kernels implicitly learn an internal representation of the input. For extracting features from the raw data, a four-layer stack of convolution and pooling is used. In each convolution layer, several non-linear transformations are applied to the input, each of which produces a different feature map. In a subsampling layer with a given pool size, each output from the previous layer is subsampled; this procedure is usually an averaging or a maximum over each region, producing a single output.

        Every layer receives a 1-D array as input, and all operations are performed on segments and windows of this input. Data points that contain related or similar information are grouped by a specific filter or kernel. Each layer has shared weights that are applied equally to all parts of the input; this defines the behaviour of the CNN, because the 1-D input is convolved with a small weight matrix to produce the output of a layer that acts as a filter. In this context, convolution extracts features, while pooling combines the extracted data in a more meaningful way; together they extract distinctive features from the entire input window.

      4. LSTM: To locate falls in each frame, an LSTM-based scheme is used. Once the CNN feature sequences are obtained for every video, we consider two scenarios. In the first, the video sequences are categorised into fall and non-fall cases by training an LSTM network on these features; non-fall corresponds to all other daily activities that are not falls. In the second, a set of images is created and fed to a pre-trained network to extract distinctive features, and a two-class SVM classifier is used to detect falls. A hedged sketch of the first scenario follows.
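        As a sketch of the first scenario only (the sequence length, feature size and layer widths are assumed, not taken from this work), the LSTM classification over per-clip CNN feature sequences could be wired up as follows:

        # Hedged sketch: an LSTM that classifies sequences of per-frame CNN
        # features into fall / non-fall. Sequence length, feature size and
        # layer widths are illustrative assumptions.
        from tensorflow.keras import layers, models

        SEQ_LEN, FEAT_DIM = 30, 128    # assumed frames per clip and CNN feature size

        model = models.Sequential([
            layers.Input(shape=(SEQ_LEN, FEAT_DIM)),
            layers.LSTM(64),                        # summarises the whole clip
            layers.Dense(32, activation="relu"),
            layers.Dense(2, activation="softmax"),  # fall vs. non-fall
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])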

      5. Alerting: After a fall is detected, an alert message is generated and sent to the people concerned so that a rescue can be initiated. A sketch of the escalation flow follows.
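        The alert-and-escalate behaviour (notify through the application first, then fall back to the registered emergency contacts if no response arrives within a time interval) could be sketched as follows; notify_app, notify_contacts and response_received are hypothetical placeholders for the Android push and SMS plumbing.

        # Hedged sketch of the alerting flow: push a fall alert to the app,
        # wait for an acknowledgement, and escalate to the registered emergency
        # contacts if none arrives within the timeout. notify_app(),
        # notify_contacts() and response_received() are hypothetical placeholders.
        import time

        RESPONSE_TIMEOUT_S = 60     # assumed waiting period before escalation

        def notify_app(message):
            print("push to app:", message)           # placeholder for the Android push

        def notify_contacts(contacts, message):
            print("SMS to", contacts, ":", message)  # placeholder for the SMS gateway

        def response_received():
            return False                             # placeholder: poll for an acknowledgement

        def handle_fall_alert(emergency_contacts):
            notify_app("Fall detected - please confirm you are OK")
            deadline = time.time() + RESPONSE_TIMEOUT_S
            while time.time() < deadline:
                if response_received():
                    return                           # the person responded; no escalation needed
                time.sleep(5)
            notify_contacts(emergency_contacts, "Fall detected and no response received")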

  6. CONCLUSION

We are trying to implement a deep convolutional neural network model for fall detection in the elderly. Our deep learning based fall detection approach is expected to solve most of the key issues seen in existing systems. We aim to demonstrate a system that uses a combination of CNN and LSTM; compared to other deep architectures, CNNs have the most straightforward training process. We also try to incorporate edge computing into the system in order to reduce the bandwidth and latency issues faced.

By developing such a system, we aim to ensure the safety and security of old-aged people. In the future we intend to add more features by connecting the interface to hospitals or related services, through which emergency actions could be undertaken. The system is expected to be highly beneficial to people who live far away from their parents and are not able to take care of them.

ACKNOWLEDGMENT

This project was carried out at St. Joseph's College of Engineering and Technology, Palai, and was supported by Dr. Joby P. P., Professor and Head, Department of Computer Science and Engineering. We also received immense support from our project coordinator, Dr. Praseetha V. M., Associate Professor, Department of Computer Science and Engineering, and our project guide, Ms. Ashly Thomas, Assistant Professor, Department of Computer Science and Engineering, who took a keen interest in our project and guided us throughout.

REFERENCES

[1] Kelathodi Kumaran Santhosh, Debi Prosad Dogra, Partha Pratim Roy, and Adway Mitra, Vehicular Trajectory Classification and Traffic Anomaly Detection in Videos Using a Hybrid CNN-VAE Architecture, IEEE Transactions on Intelligent Transportation Systems, vol. 23, pp. 11891-11902, Aug. 2022.

[2] X. Wang et al., Robust Unsupervised Video Anomaly Detection by Multipath Frame Prediction, in IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 6, pp. 2301-2312, June 2022, doi: 10.1109/TNNLS.2021.3083152.

[3] C. Vishnu, R. Datla, D. Roy, S. Babu and C. K. Mohan, Human Fall Detection in Surveillance Videos Using Fall Motion Vector Modeling, in IEEE Sensors Journal, vol. 21, no. 15, pp. 17162-17170, Aug. 2021, doi: 10.1109/JSEN.2021.3082180.

[4] M. Saleh, M. Abbas, J. Prud'Homm, D. Somme and R. Le Bouquin Jeannès, A Reliable Fall Detection System Based on Analyzing the Physical Activities of Older Adults Living in Long-Term Care Facilities, in IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 29, pp. 2587-2594, 2021, doi: 10.1109/TNSRE.2021.3133616.

[5] Y.-H. Liu, P. C. K. Hung, F. Iqbal and B. C. M. Fung, Automatic Fall Risk Detection Based on Imbalanced Data, in IEEE Access, vol. 9, pp. 163594-163611, 2021, doi: 10.1109/ACCESS.2021.3133297.

[6] W.-J. Chang, C.-H. Hsu and L.-B. Chen, A Pose Estimation-Based Fall Detection Methodology Using Artificial Intelligence Edge Computing, in IEEE Access, vol. 9, pp. 129965-129976, 2021, doi: 10.1109/ACCESS.2021.3113824.

[7] C. Mosquera-Lopez et al., Automated Detection of Real-World Falls: Modeled From People With Multiple Sclerosis, in IEEE Journal of Biomedical and Health Informatics, vol. 25, no. 6, pp. 1975-1984, June 2021, doi: 10.1109/JBHI.2020.3041035.

[8] U. P. Naik, V. Rajesh, R. K. R and Mohana, Implementation of YOLOv4 Algorithm for Multiple Object Detection in Image and Video Dataset using Deep Learning and Artificial Intelligence for Urban Traffic Video Surveillance Application, 2021 Fourth International Conference on Electrical, Computer and Communication Technologies (ICECCT), 2021, pp. 1-6, doi: 10.1109/ICECCT52121.2021.9616625.

[9] Ribeiro, Osvaldo, Luis Gomes, and Zita Vale. 2022. IoT-Based Human Fall Detection System, Electronics 11, no. 4: 592. https://doi.org/10.3390/electronics11040592

[10] J. Chen, K. Li, Q. Deng, K. Li and P. S. Yu, Distributed Deep Learning Model for Intelligent Video Surveillance Systems with Edge Computing, in IEEE Transactions on Industrial Informatics, doi: 10.1109/TII.2019.2909473.

[11] Musci, M., De Martini, D., Blago, N., Facchinetti, T. and Piastra, M., 2018. Online fall detection using recurrent neural networks, arXiv preprint arXiv:1804.04976.

[12] G. Lou and H. Shi, Face image recognition based on convolutional neural network, in China Communications, vol. 17, no. 2, pp. 117-124, Feb. 2020, doi: 10.23919/JCC.2020.02.010.

[13] T. Zebin, P. J. Scully, N. Peek, A. J. Casson and K. B. Ozanyan, Design and Implementation of a Convolutional Neural Network on an Edge Computing Smartphone for Human Activity Recognition, in IEEE Access, vol. 7, pp. 133509-133520, 2019, doi: 10.1109/ACCESS.2019.2941836.

[14] A. Kaya, K. Atas and I. Myderrizi, Implementation of CNN based COVID-19 classification model from CT images, 2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI), 2021, pp. 000201-000206, doi: 10.1109/SAMI50585.2021.9378646.

[15] R. Xin, J. Zhang and Y. Shao, Complex network classification with convolutional neural network, in Tsinghua Science and Technology, vol. 25, no. 4, pp. 447-457, Aug. 2020, doi: 10.26599/TST.2019.9010055.

[16] Krishnan, Sreedevi R., P. Amudha, and S. Sivakumari, Automatic Detection of Anomalies in Video Surveillance using Artificial Intelligence, In IOP Conference Series: Materials Science and Engineering, vol. 1085, no. 1, p. 012020. IOP Publishing, 2021.

[17] Beddiar, Djamila Romaissa, Mourad Oussalah, and Brahim Nini, Fall detection using body geometry and human pose estimation in video sequences, Journal of Visual Communication and Image Representation 82 (2022): 103407.

[18] Casilari, Eduardo, and Carlos A. Silva, An analytical comparison of datasets of Real-World and simulated falls intended for the evaluation of wearable fall alerting systems, Measurement 202 (2022): 111843.

[19] Li, Suyuan, Xin Song, Siyang Xu, Haoyang Qi, and Yanbo Xue, Dilated spatial-temporal convolutional auto-encoders for human fall detection in surveillance videos, ICT Express (2022).

[20] Pan, Daohua, Hongwei Liu, Dongming Qu, and Zhan Zhang, Human falling detection algorithm based on multisensor data fusion with SVM, Mobile Information Systems 2020 (2020).