Forest Fire Alarm System

DOI : 10.17577/NCRTCA-PID-093


A comparison of Convolutional Neural Network and wireless sensor network approaches.

PALLAVI KB

PG scholar, Department of MCA

Dayananda Sagar College of Engineering

Bangalore, India pallavipallu0602@gmail.com

Dr. SAMITHA KHAIYUM

HOD-Dept. of MCA

Dayananda Sagar College of Engineering

Bangalore, India

hod-mcavtu@dayanandasagar.edu

Abstract: We present a unique Convolutional Neural Network (CNN)-based fire detection system in this study. Fire detection can be difficult with present techniques such as smoke sensors installed in structures, which are expensive and slow owing to dated technology and design. In this study, the use of artificial intelligence for video-based identification and alerting from CCTV footage is critically examined. A self-created dataset of video frames containing fire is used for the experiment. The data is preprocessed before a CNN is used to develop a machine learning model. The approach is validated using the dataset's test set as input, and the experiments are documented. The objective of this project is to develop a system that is both inexpensive and highly precise, and that can be used in virtually every fire detection scenario. Furthermore, this study proposes a system and approach for detecting forest fires in their early phases using a wireless monitoring network. A device-adapting regression technique is offered to increase detection accuracy. Because the main power supply is a rechargeable power unit with a supplementary solar power source, the solution can easily be put into practice as an independent system for an extended period. In extreme forest conditions, sensor architecture and node-positioning requirements are of paramount importance to reduce damage and adverse effects caused by wildlife, climatic conditions, and other system elements. Several tests in actual forest locations have shown that the suggested technique beats existing solutions for warning of forest fires with lower latency.

Keywords: Fire alarm, Convolutional neural networks, ML, Security Cameras, Object detection.

  1. INTRODUCTION

Forests help to keep the earth's natural equilibrium. Unfortunately, wildfires are frequently detected only after they have devastated a significant region, making management and suppression difficult, if not impossible. Aside from the catastrophic devastation of ecosystems, forest fires cause tremendous loss and lasting harm to the environment and climate through the immense smoke and carbon dioxide (CO2) they release into the atmosphere (forest fires account for roughly 30% of atmospheric CO2) [1]. Long-term damage from wildfires includes disruption of local climate patterns, global warming, and loss of endemic plant and animal species.

Forest fires are particularly dangerous because they commonly occur in remote, abandoned, or poorly maintained areas that are thickly wooded with trees, dry, parched lumber, leaves, and other fuels [2].

These elements merge to form a highly combustible material that acts as fuel for the fire's spread and creates the optimal conditions for its initial ignition. The fire may begin due to human causes, such as smoking or barbecuing, or due to natural causes, such as the heat of a summer day or a piece of broken glass acting as a lens that focuses the sun's rays on one spot long enough to start a fire. Once a fire has begun, flammable material can easily fuel it, allowing it to spread and grow in size. The initial stage of ignition is commonly described as the "surface fire" stage. From there, the fire may spread to surrounding trees and grow in size, eventually transforming into a "crown fire." In most situations, the fire becomes uncontrollable at this stage, and depending on the topography and prevailing weather, damage to the landscape may become enormous and persist for an extended period [3].

Every year, fires burn across millions of hectares of forest. The large areas destroyed by these fires emit more carbon monoxide than all cars combined.

The reaction time can be greatly shortened by monitoring potential risk areas and detecting a fire as soon as it starts, which also lowers the cost of fighting the fire and the potential damage. Here, the standard rule of thumb applies: 1 minute = 1 cup of water, 2 minutes = 100 gallons, and 10 minutes = 1,000 litres.

The aim is to extinguish the fire as soon as possible, which requires knowing where the fire is and contacting the fire department right away. That is what this work is about: finding a forest fire early and making it easier, or more likely, for the fire to be put out before it grows or causes too much damage. Camera sensors, various types of fire detection sensors, and combinations of them are also becoming more and more common. In the following section, we provide a summary of the world's automated and semi-automatic fire detection and tracking systems, along with real-world experiences and evaluations of their effectiveness, accuracy, adaptability, and other critical factors.

  2. EFFECTS OF WILDFIRES

Wildfires can cause significant damage to rivers, lakes, and streams in both the short and long term. One of the most significant effects of wildfires is stormwater runoff. In the absence of plants, the soil becomes hydrophobic and hinders water absorption. Because the ground can no longer absorb water, trash and silt are transported into larger bodies of water, endangering crucial resources. Floods caused by fires can be dangerous because ash and soil can release heavy metals into rivers. These sources of water can take a long time and considerable cost to clean.

A wildfire may cause severe damage to plants depending on the weather and season. Taller trees may be able to endure wildfires if the flames do not advance into the tree canopy; plants on the forest floor and smaller trees, on the other hand, are often burnt. The flames from these fires devastate many animals' habitats and food supplies, threatening their existence. Fire-tolerant plants and trees become more prone to disease, fungal infection, and insect infestation because of reduced fire resistance after burn injuries.

The ecological benefits of wildfires

Wildfires leave behind a great deal of destruction in their wake, but they also have some positive effects. Many plants need to be burned periodically to distribute their seeds and survive. Additionally, fires can clear the forest floor of excess debris, eradicate diseases and insects that may be harming plants, and give plants more access to nutrients and open sunshine. Low-intensity flames clear underbrush and prevent subsequent fires from causing greater damage [4].

Fresh grasslands are generated as a result of a wildfire, which favours grazing animals. Because the natural order contains more species, the ecosystem can evolve in a way that promotes growth and the never-ending circle of life. Fire disturbance is required for vegetation such as fireweed to bloom and for plants that have died as a result of the fire to be replaced. Fresh life begins to recuperate and emerge when plants and other vegetation die [6].

  3. WHAT CAN BE DONE?

We've all heard the adage "Only we can stop forest fires." We may be able to successfully lower the danger and frequency of wildfire breakouts by implementing preventative measures. The first line of defence against wildfires is to never leave a fire unattended: put out the fire thoroughly before going to bed or leaving the area. Never litter the ground with cigarettes, flammable substances, or smoking materials; take caution when you throw them away.

If you notice an open or unattended fire, get in touch with local fire departments and emergency services as soon as possible. If there is a chance of a wildfire in your neighbourhood, plan your escape route and keep an emergency supply bag on hand. While exiting your home, shut all doors, openings, and outlets to avoid draughts. You should also clear your yard of any combustibles and switch off any fuel oil sources. Wear heat-resistant clothing to protect against sparks and ash, and a mask to keep your lungs clear of harmful gases.

  4. EXISTING SYSTEM

Fires are currently detected using smoke and heat detectors. The major disadvantage of smoke and heat sensor alarms is that a single module cannot keep an eye on every potential fire hazard. The best way to avoid a fire is to be alerted at all times, yet such sensors could never constantly provide an efficient output even if they were deployed in every nook and crevice, and the more smoke sensors that are needed, the more expensive the installation becomes. Within seconds of a crash or fire, the proposed system can give reliable and extremely accurate alerts.

It saves money since the entire monitoring network can be driven by a single piece of software. Data science and ML professionals are actively working in this field. The main concern is lowering fire-detection inaccuracy and giving timely alerts.

  5. LITERATURE REVIEW

[1] YOLO (You Only Look Once) is a deep learning object recognition model. YOLOv2 is an improved version of YOLO that addresses its shortcomings, such as background errors when identifying and labelling regions of interest in images and poor recall compared with other region-oriented approaches, which improves the architecture's efficiency. The authors use the YOLOv2 convolutional neural network to detect whether there is a fire in a home or office from live video footage captured by anti-fire surveillance systems, which is one of the most effective ways to detect fire and smoke.

They start with a 128x128x3 picture and map the input picture attributes using convolutional layers. These attributes are then passed to the object detection subnetwork (YOLOv2). The YOLOv2 transform layer improves the stability of the network for object localization.

[2] This article describes a system that closely resembles how humans detect fires. It uses a region-based Fast R-CNN: once the points of interest are identified, the properties of the bounding boxes are extracted and fed into an LSTM, which quickly determines whether there is a fire.

Fast R-CNN maps the features of an input image using a combination of CNN features and a region proposal network. It gathers the attributes using ROI pooling and categorizes them based on the block scores of the item position.

[3] This study found that vision-based fire detection systems mounted on UAVs, used to frequently map acreage in fire-prone areas, can detect forest fires. Convolutional neural networks (CNNs) are also strongly recommended for detecting the presence of fire and smoke using video frames as images.

The researchers collected data from several websites and scaled the images to a standard 240×320 resolution. The main objective of this research was to locate the fire patches in the picture.

The authors explain how to build the algorithm in two different ways. The first is to build the fire-patch classifiers from the ground up; the second is to train whole-picture classifiers and then fine-tune the patch classifiers on images that contain fire. The researchers report that SVM classifiers achieve 95.6% accuracy and CNN classifiers achieve 97.3% accuracy, with a detection rate of 84.8% for SVM.

  6. FRAMEWORK

The proposed architecture utilizes the benefits of a CNN. The input is preprocessed before region proposals are gathered by the CNN. The CNN's region-based object identification approach then uses convolutional layers to classify those proposals within the region of interest (ROI) as fire or not fire.

    1. Convolutional Neural Networks (CNN)

A CNN is a type of artificial neural network that uses supervised learning and imitates human brain activity for data processing. CNN is an abbreviation for Convolutional Neural Network, a modified multilayer perceptron, or fully connected network. It works through several levels organized into three kinds of layers: an input layer, an output layer, and the hidden layers. Convolutional neural networks are so named because their hidden layers are convolutional, which gives them exceptional capabilities for object detection. These convolutional layers analyze and evaluate data using a variety of mathematical models [14], and each subsequent layer receives the outputs of the preceding layers. The fact that the network is fully connected raises the possibility of overfitting; to prevent this, a CNN takes advantage of the hierarchical pattern in the data and sorts the patterns in its layers by complexity, from simpler to more complicated. The input is supplied as a tensor with the following properties: height, width, and number of input channels. As the image passes through the layers it grows more abstract, eventually resembling a feature map; this happens layer by layer, analogous to the activity of brain cells. Since the whole network is connected, all of the output from the output layer is filtered and combined into one output. The filter count is based on the size of the feature map.

      1. Architecture of CNN

Convolutional layers are used in the architecture of a convolutional neural network. A CNN differs from prior object recognition algorithms in that it may use image transform filters known as convolutional kernels to build an area of interest in the original picture. In addition, connection weights can be used, as well as weighted sums. Finally, the number of kernels generated is equal to the number of model feature maps. In the feature maps, each pixel is coloured to indicate an activation point: white pixels correspond to important activation spots in the original picture, grey pixels denote low-level activation sites, and black pixels denote high-level negative activation sites. Since the fire zone of the original image was a reddish-orange colour, these pixels were rendered white by the convolutional kernel. A convolutional network is a network in which each neuron receives information from a slice of the layer before it; each node outputs a result by acting on previous levels, and the weights of the input values define these functions. Convolutional neural networks are unique in that such functions may be shared across all layers. AlexNet, a basic deep CNN that makes finding objects in pictures simple, is the network's feature extractor. Figure 1 shows the fundamental infrastructure of a CNN.

Fig 1. Architecture of CNN (region proposal, CNN layers, and fully connected output layers).

Fig. 1 depicts the basic architecture of a convolutional neural network. In this example, the input data consists of pictures of fire. The network layers abstract the picture by removing background noise and highlighting the object to be detected. To construct an ML model within the fully connected layers, the layer supplies region proposals, and the decision-making algorithm turns the layer output into the completed model.
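As a concrete illustration of the pipeline in Fig. 1, the sketch below shows a small CNN fire / non-fire frame classifier in Python (Keras). It is not the authors' exact network; the layer sizes, the 128x128 input resolution, and the training settings are assumptions chosen only to illustrate the idea.

```python
# Minimal sketch of a CNN frame classifier (fire vs. non-fire).
# Not the authors' exact network; layer sizes and input resolution are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fire_cnn(input_shape=(128, 128, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, padding="same", activation="relu"),  # low-level edges / colour
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, padding="same", activation="relu"),  # mid-level texture
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),  # high-level fire features
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),                      # fully connected layer
        layers.Dense(1, activation="sigmoid"),                    # fire / non-fire score
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

The sigmoid output plays the role of the decision-making step described above, mapping the fully connected layer's output to a single fire probability per frame.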

    2. Sensor Node Design

This sensor node is designed to be cylindrical so that it will not be affected by strong winds or rain, and it has features to help protect it from the harsh weather conditions found in tropical forests (see Fig. 2). Temperature and humidity are measured from the bottom of the sensor node, and light intensity is measured from the top (see Fig. 3). Inside the sensor node there are three layers: the lithium-ion battery is in the top layer, while the microcontroller, voltage regulator, and connector board are in the middle layer [7].

The node foundation, also referred to as the bottom layer, houses the sensors measuring the environmental variables outlined above. All sensors are oriented downwards to provide protection from external elements, including rain, hard winds, debris, and leaves. A hole on the node's side allows the antenna to be routed outside. Mounting brackets and supports are installed on the back side of the sensor node (Fig. 5).

    Fig. 2. External spherical design.

    Fig. 3. Sensor installation at the bottom.

Fig. 4. Placing the internal components.

Fig. 5. Adding the mounting supports.

The DHT22 sensor measures temperature and relative humidity, the LDR sensor measures light intensity, and the MQ9 sensor measures the CO level. The microcontroller is a small and flexible Arduino Nano board that can be used for many different applications. The sensor nodes, cluster heads, and base node communicate via a transceiver connected to the microcontroller: each node is fitted with an nRF24L01 module. The main power source is a set of rechargeable 18650 lithium-ion cells, which offer long power-unit life and are very cost-effective; each 18650 cell has a capacity of 4,800 mAh and a voltage of 3.7 V. A solar panel serves as the backup power source, generating 5 W of electricity at 12 V.

    Deployment of sensor nodes

Thorough testing showed that a sensor node has a maximum sensing range of 5 meters. Following a site study of typical foliage height, sensor nodes are placed 1 meter above the ground for easy detection of early-stage ground-fire conditions. Following a similar site assessment of average leaf heights, and using a cellular structure (as shown in Fig. 8), sensor nodes are distributed throughout the forest to cover the entire site of interest. With a maximum sensing range of 5 meters, a sensor node can cover a radius of 5 meters.

    The distance between two sensor nodes is computed as shown in Figs. 6 and 7.

x = 5 m × sin(60°) = 4.33 m

2x = 2 × 4.33 m = 8.66 m

    Fig 6. Deployment of the sensor node.

    Fig 7. Distance between two sensor nodes.
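The spacing calculation shown in Figs. 6 and 7 can be reproduced with a short script; the 5 m sensing range is the figure quoted in the text, and the hexagonal (cellular) layout is the stated placement pattern.

```python
# Sketch of the node-spacing calculation for the hexagonal (cellular) layout.
import math

SENSING_RANGE_M = 5.0                                # maximum sensing range per node
x = SENSING_RANGE_M * math.sin(math.radians(60))     # 5 m x sin(60 deg) = 4.33 m
node_spacing = 2 * x                                 # 2 x 4.33 m = 8.66 m

print(f"x = {x:.2f} m, spacing between adjacent nodes = {node_spacing:.2f} m")
```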

In forest-like environments, data transfer between a sensor node and the gateway node is supported by the nRF24L01 module (Fig. 8), which has a 100 m transmission range. Sensor nodes are grouped into clusters to collect data from nodes that are more than 100 m away and to facilitate communication. A cluster head leads each cluster, collecting data from its sensor nodes and sending it to the gateway node. Because of the maximum transceiver module range, cluster heads are also placed according to the cellular design.

A single cluster head covers a circular area approximately 50 meters across. The estimated distance between cluster heads is 86.66 meters, and each cluster head collects data from 100 nodes. When the threshold ratio is exceeded, the corresponding cluster head collects the values reported by its sensor nodes and forwards them to the gateway. Cluster heads that are more than 100 m from the gateway node send data via intermediate cluster heads.
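As an illustration of the clustering just described, the sketch below assigns each sensor node to its nearest cluster head within the 100 m nRF24L01 range; the node and cluster-head coordinates are assumed inputs, and nodes outside direct range would be served through intermediate cluster heads.

```python
# Illustrative sketch: assign sensor nodes to the nearest cluster head within
# the 100 m nRF24L01 range (coordinates in metres are assumed inputs).
import math

def assign_clusters(nodes, cluster_heads, max_range_m=100.0):
    assignment = {}
    for i, (nx, ny) in enumerate(nodes):
        distances = [math.hypot(nx - cx, ny - cy) for cx, cy in cluster_heads]
        j = min(range(len(distances)), key=distances.__getitem__)
        # None means the node must be reached via an intermediate cluster head.
        assignment[i] = j if distances[j] <= max_range_m else None
    return assignment

# Example: a 200 x 200 m area with four cluster heads (positions are illustrative).
heads = [(50, 50), (150, 50), (50, 150), (150, 150)]
nodes = [(10, 20), (120, 80), (190, 190)]
print(assign_clusters(nodes, heads))
```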

    Fig 8. Deployment of Sensor Nodes and Cluster Heads for a 200 x 200 m area

The cluster head hardware comprises lithium-ion batteries, a solar panel, an Arduino Nano, an nRF24L01 module, and a SIM800L module; each cluster head is powered by domestic electricity. The base station consists of a processing machine (PC).

  7. METHODOLOGY

    DISCUSSION OF CNN

The approach proposed in this study is broken into several phases: A. Dataset Acquisition, B. Data Preparation, C. Feature Extraction, D. Model Construction, and E. Validation and Testing.

    1. Dataset Acquisition

The data consists of video frames taken from CCTV footage; for training and testing, specially made videos are used for ease of use. Compiling these fire-related videos was a difficult effort. The frames containing fire and those without fire are stored separately, and the dataset is then split into training and testing sets. Care must be taken, however, as bad data can distort the neural network's output and impede its ability to build an accurate system.
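A sketch of the frame extraction and train/test split described above is shown below; the use of OpenCV, the directory layout, and the sampling interval are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch: extract frames from fire / non-fire videos and split them into
# training and test sets. Paths and sampling rate are illustrative assumptions.
import os
import random
import cv2

def extract_frames(video_path, out_dir, every_n=10):
    """Save every n-th frame of a video to out_dir and return the saved count."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    read_idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if read_idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        read_idx += 1
    cap.release()
    return saved

def train_test_split(frame_paths, test_ratio=0.2, seed=42):
    """Shuffle frame paths and split them into training and test lists."""
    random.Random(seed).shuffle(frame_paths)
    cut = int(len(frame_paths) * (1 - test_ratio))
    return frame_paths[:cut], frame_paths[cut:]
```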

    2. Data Preparation

Data preprocessing is the next stage in building a good ML model. Here, the information is either ready to be used or is cleaned and processed. Preprocessing the data involves clearing the frames of noise and other undesired items. The algorithm must be given pertinent data or it can yield undesirable outcomes.
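A minimal example of this preprocessing step is sketched below; the Gaussian denoising, the 128x128 target size, and the [0, 1] scaling are assumptions chosen to match the classifier sketch above.

```python
# Sketch of frame preprocessing: denoise, resize and normalise a single frame.
import cv2
import numpy as np

def preprocess_frame(frame_bgr, size=(128, 128)):
    denoised = cv2.GaussianBlur(frame_bgr, (3, 3), 0)   # suppress sensor noise
    resized = cv2.resize(denoised, size)                # match the network input size
    return resized.astype(np.float32) / 255.0           # scale pixel values to [0, 1]
```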

3. Feature Extraction

In order to accurately identify the presence of fire, the neural network must be aware of the characteristics of fire and its visual appearance. Human perception of fire is relatively accurate: fire is a reddish colour that varies depending on the fuel being consumed, and it changes shape under different conditions and motions. This research notes that smoke and fire differ in shape, colour, and speed.

Features are extracted from the various video frames in the training set using the CNN's feature extraction network, which is driven by a built-in algorithm. Once the features are extracted, the training videos are segmented into fire and non-fire scenarios.

    4. Constructing the model

The obtained attributes are then used to build a model in the network. This model consists of a set of properties that help the network detect fire more accurately, and it creates a standard for assessing incoming input data based on the recovered properties.

    5. Testing and validation

The machine learning model needs to be validated because accuracy, and whether or not the system works, are important. The validation approach uses a separate set of video frames from the dataset used to create the model. Based on the test results, the system achieves about 93% accuracy on the validation set.
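The training and validation steps can be summarised by the hedged sketch below; it assumes arrays of preprocessed frames and fire / non-fire labels (x_train, y_train, x_val, y_val) and the build_fire_cnn helper from the earlier sketch, and the roughly 93% figure is the accuracy reported in this paper, not a guaranteed outcome of this exact code.

```python
# Sketch of model construction, training and validation.
# Assumes preprocessed frame arrays with labels, plus build_fire_cnn() from the
# earlier architecture sketch; hyperparameters are illustrative.
def train_and_validate(x_train, y_train, x_val, y_val, epochs=10):
    model = build_fire_cnn(input_shape=x_train.shape[1:])
    model.fit(x_train, y_train, epochs=epochs, batch_size=32,
              validation_data=(x_val, y_val), verbose=2)
    loss, acc = model.evaluate(x_val, y_val, verbose=0)
    print(f"Validation accuracy: {acc:.1%}")   # the paper reports about 93%
    return model, acc
```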

    DISCUSSION OF SENSOR NODE

    1. Data Collection

Two analytical methodologies are employed to detect fire conditions: a threshold ratio analysis and analysis with a machine learning algorithm. The data for these tools was generated by modelling different controlled fire scenarios: each scenario was constructed in an area of 1 m2, with the sensor node placed on a platform 1 m above the ground and 1 m from the fire. Data was collected in the morning, afternoon, and evening to capture the natural variation of the environment throughout the year.

    2. Analysis of Threshold Ratios

The system monitors the temperature, humidity, light intensity, and CO level in different climates during the early morning, late afternoon, and late evening. During these large-scale tests, the RTH ratio is continuously computed inside the sensor node. Each parameter is read from the sensors every 30 seconds. If the ratio calculated for a parameter exceeds the threshold three times in a row, the gateway node receives 10 data points for that parameter. The data is collected at different times of the day to determine the ratio.

Figure 9 shows the sensor node's decision flow. A ratio R is computed from the measured temperature, light intensity, humidity, and CO level, and a fire condition is flagged when the computed ratio meets or exceeds the threshold ratio, denoted R(TH).

      Fig 9. The sensor node's decision flow.
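The decision flow in Fig. 9 can be summarised by the following sketch; the threshold value, the sensor-reading and gateway-forwarding callables, and the exact form of the ratio are assumptions for illustration, since the paper defines the ratio only as R compared against R(TH).

```python
# Sketch of the per-parameter decision flow: read every 30 s, and after three
# consecutive threshold exceedances forward the last 10 readings to the gateway.
# read_ratio() and send_to_gateway() are assumed callables; R_TH is illustrative.
import time

R_TH = 1.2            # threshold ratio R(TH); site-specific in practice
CONSECUTIVE = 3       # exceedances in a row required before alerting
INTERVAL_S = 30       # sensor read interval in seconds

def monitor(read_ratio, send_to_gateway):
    recent = []       # rolling buffer of the last 10 readings
    streak = 0
    while True:
        r = read_ratio()
        recent = (recent + [r])[-10:]
        streak = streak + 1 if r >= R_TH else 0
        if streak >= CONSECUTIVE:
            send_to_gateway(recent)   # forward 10 data points for this parameter
            streak = 0
        time.sleep(INTERVAL_S)
```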

    3. Field Testing

A controlled fire was initiated, and sensor nodes were established to detect its presence. The system was first tested in the vicinity of the Kanneliya forest reserve (Sri Lanka). Subsequently, it was tested on land adjacent to the Knuckles Mountain Range Wildlife Reserve (Sri Lanka); both areas are subject to forest fires. For data transmission and subsequent processing, one cluster head and one base node were established. The experiments were carried out at both locations in the morning, afternoon, and night to assess the system's suitability at different times of the day.

8. EXPERIMENTAL RESULTS

The results of the project have been very promising: the system was able to detect fire with 93% accuracy. Compared with other neural networks, these results show considerable potential for using convolutional networks to detect fire. The fully integrated system automatically combines different trained data to compute and minimize false alarms, and the decision-making system relies on this data to decide whether or not there is a fire.

There are a few minor detection issues with some photographs, but the performance and statistics are very remarkable. The main issue is that delivering the final result takes a bit longer because it requires more processing power. By constantly cleaning the data, false alerts can be minimized, keeping the false alarm rate low during implementation.

  9. CONCLUSION

Using video frames to detect fires with machine learning can be both creative and difficult, but it is possible to reduce the amount of damage and loss from accidental fires by setting up monitoring systems in large places such as company premises, homes, or forests. The proposed system could be extended by combining wireless sensors and CCTV for better accuracy and security. The algorithm is versatile and can be used in many different situations.

The Forest Fire Alarm System (FAS) is a wireless sensor network that uses machine learning to detect fires in forests. It has proven to be a workable method for detecting forest fires, with the analysis carried out at both the sensor nodes and the base station to obtain the most accurate results with the least latency. The sensor node introduces a threshold ratio for analysis, so the system can adapt to any weather conditions, climate, or location. Since the transceiver is connected to the existing network, the system can be set up anywhere in the woods, even without internet access. With rechargeable batteries as the main power source and a secondary solar power supply, it is easy to deploy as a stand-alone system for a long period. During several tests in real tropical forests, the system, linked to the communication network, sent alerts to the appropriate people with much less delay than current methods.

REFERENCES

[1] Nelson, R. The Environmental Impact of Forest Fires. UntamedScience.com, April 2019. [Online]. Available: https://untamedscience.com/blog/the-environmental-impact-of-forest-fires/. Accessed 30 December 2020.

[2] Saponara, S., Elhanashi, A. & Gagliardi, A. Real-time video fire/smoke detection based on CNN in antifire surveillance systems. J. Real-Time Image Proc. 18, 889–900 (2021).

[3] Kim, Byoungjun, and Joonwhoan Lee. 2019. "A Video-Based Fire Detection Using Deep Learning Models." Applied Sciences 9, no. 14: 2862.

[4] Qingjie Zhang, Jiaolong Xu, Liang Xu and Haifeng Guo, Deep Convolutional Neural Networks for Forest Fire Detection. IFMEITA 2016.

[5] A Arul, R. S. (2021, May). Fire Detection System Using Machine Learning.

[6] Yuanbin Wang, L. D. (2019). Forest fire image recognition based on convolutional neural network. Journal of Algorithms and Computational Technology, 1.

[7] BoWFire Dataset. Available online: https://bitbucket.org/gbdi/bowfire-dataset/downloads/ (accessed on 1 January 2021).

[8] VisiFire. Available online: http://signal.ee.bilkent.edu.tr/VisiFire/ (accessed on 1 January 2021).

[9] Xu, R.; Lin, H.; Lu, K.; Cao, L.; Liu, Y. A Forest Fire Detection System Based on Ensemble Learning. Forests 2021.

[10] Alkhatib, A. A. (2014). A Review on Forest Fire Detection Techniques. International Journal of Distributed Sensor Networks.

[11] Nelson, R. The Environmental Impact of Forest Fires. UntamedScience.com, April 2019. [Online]. Available: https://untamedscience.com/blog/the-environmental-impact-of-forest-fires/. Accessed 30 December 2020.

[12] Alkhatib, A. A. A review on forest fire detection techniques. Int. J. Distributed Sensor Netw. 10, 597368 (2014).

[13] Matin, M.A., Islam, M.M. Overview of the wireless sensor network. IntechOpen (2012).

[14] Díaz-Ramírez, A., Tafoya, L.A., Atempa, J.A., Mejía-Alvarez, P. Wireless sensor networks and fusion information methods. In The 2012 Iberoamerican Conference on Electronics Engineering and Computer Science, México (2012).