Omnidirectional Camera Sensors Versus Directional Camera Sensors in Wireless Multimedia Sensor Network (WMSN) Considering the Occurring Event Region and Sensing Region Outside Event Region
Sushree Bibhuprada B. Priyadarshini
M.Tech (Computer Science and Data Processing), SOA/ITER, Bhubaneswar, Odisha, India; currently pursuing Ph.D. at VSSUT, Burla
Debapriya Soumyesh Das
Software Engineer; B.Tech, IIIT
Dr. Hemanta Kumar Das
Reader, D.D. Autonomous College, Keonjhar
ABSTRACT: Sensor networks find many applications in today's society. In a Wireless Multimedia Sensor Network (WMSN), camera sensors are present in addition to scalar sensors. Whenever an event takes place in a monitored region, it is first detected by the scalar sensors, which inform their corresponding camera sensors about the occurrence of the event. If we consider that sensing is performed not only by scalar sensors lying inside and on the event boundary but also by those lying some distance outside it [1], then the scalars outside the event boundary that fall within the FOV of cameras also inform their respective camera sensors. Those cameras then take part in distributed camera actuation [2] unnecessarily, and some or all of the cameras lying outside the event boundary are actuated even though their depth of field (DOF) does not cover the event region. Our objective is therefore to eliminate the redundant data while actuating an optimum number of camera sensors in such a manner that no event information is lost.
Camera sensors can be either directional or omni-directional. A directional camera sensor captures images along a particular direction, so some portion of the occurring event may not be covered by its directional field of view; the field of view (FOV) is the angle within which a camera sensor can capture an accurate image of an object. An omni-directional camera sensor, in contrast, can capture images over 360 degrees. Using omni-directional cameras in place of directional ones covers a larger portion of the occurring event, so more accurate event information is captured and event information loss is minimized. Moreover, with directional camera sensors, event information sensed by outer nodes lying outside the cameras' field of view is lost; with omni-directional cameras this loss is minimized, because the circular FOV covers the surroundings uniformly along all directions, leaving very few outer nodes. This paper presents a comparative study of the two camera types.
Keywords: Field of View (FOV), Depth of Field (DOF), Scalar Count (SC), Inner Node, Outer Node, Fringe Node
-
INTRODUCTION
Wireless Multimedia Sensor Network (WMSN) is an extension of the Wireless Sensor Network (WSN) in which camera sensors are present in addition to scalar sensors. Scalar sensors capture only scalar (textual) information, whereas cameras can capture visual information. Among the several problems encountered in a WMSN, data redundancy is a basic one: redundant data increases transmission cost in terms of bandwidth used, CPU processing and so on, and hence increases the overall communication cost, so several methods are used to eliminate it. As per paper [2], the scalar and camera sensors are initially deployed at random throughout an area of interest (e.g., a forest) to monitor the behaviour of the habitat and the living organisms in it. When an event takes place in the monitored region, it is first detected by the scalar sensors, which inform their corresponding camera sensors about the occurring event. The camera sensors then decide which of them are to be actuated. Each camera sensor has two basic parameters, namely field of view (FOV) and depth of field (DOF) [2].
The field of view is the angle within which a camera sensor can capture an accurate image of an object, and the depth of field is the distance up to which it can do so. In paper [2] the FOV is modelled as a pie-shaped area, but in our context we model it as a trapezium for ease of implementation. The camera sensors are assumed to have fixed random positions and orientations and do not move. According to paper [2], when a scalar informs a camera sensor about the occurring event, it sends a DETECTION message containing the id of the concerned scalar sensor and the event information. After receiving DETECTION messages, the camera sensors exchange their scalar count values with each other. The Scalar Count (SC) of a camera is the number of scalar sensors that lie within its field of view and are detecting the event. The SC values are exchanged among cameras through INFORM messages, after which the camera sensor having the maximum SC value is activated first. The camera that undergoes activation broadcasts an UPDATE message [2], which contains the ids of the event-detecting scalars within the FOV of the activated camera. The remaining camera sensors then decide on their own activation by matching their scalar ids against those of the activated camera [2].
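For clarity, the actuation decision described above can be sketched in code. The snippet below is a minimal, centralized simulation of the distributed exchange, written against our own illustrative data structures rather than the exact message formats of [2]: each camera holds the ids of the event-detecting scalars inside its FOV (its SC is the size of this set), the camera with the largest SC is activated first, and a later camera stays off when every one of its detecting scalars already appears in the UPDATE information of previously activated cameras.

```cpp
// Minimal sketch of SC-based camera actuation (illustrative structures,
// not the exact message formats of [2]).
#include <algorithm>
#include <set>
#include <vector>

struct Camera {
    int id;
    std::set<int> detectingScalars;  // ids carried in DETECTION messages from
                                     // event-detecting scalars inside this FOV
    int scalarCount() const { return static_cast<int>(detectingScalars.size()); }
};

// Returns the ids of cameras that would be activated.
std::vector<int> actuateCameras(std::vector<Camera> cameras) {
    // INFORM exchange: consider cameras in decreasing order of scalar count (SC).
    std::sort(cameras.begin(), cameras.end(),
              [](const Camera& a, const Camera& b) {
                  return a.scalarCount() > b.scalarCount();
              });

    std::set<int> covered;           // scalar ids seen in UPDATE messages so far
    std::vector<int> activated;
    for (const Camera& c : cameras) {
        if (c.detectingScalars.empty()) continue;   // no DETECTION received
        // Activate only if this camera covers at least one still-uncovered scalar.
        bool addsCoverage = false;
        for (int s : c.detectingScalars)
            if (!covered.count(s)) { addsCoverage = true; break; }
        if (addsCoverage) {
            activated.push_back(c.id);
            covered.insert(c.detectingScalars.begin(), c.detectingScalars.end());
        }
    }
    return activated;
}
```

Ties in SC can be broken arbitrarily under this rule, which matches the tie handling described later in the paper.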
Whenever an event takes place, sensing does not occur only on or inside the event boundary; it also occurs up to a certain distance outside the event region [1]. Some scalars lying outside the event boundary still sense the occurring event because their sensing range covers part of the event region. After detecting the event, these scalar sensors inform their corresponding camera sensors, and, being informed, those cameras take part in the distributed camera actuation scheme, so some or all of them are actuated unnecessarily even though they do not cover the event region. The overlapping fields of view of these cameras then lead to redundant data transmission. Our aim is therefore to keep such cameras turned off and to activate only the optimum number of camera sensors required for adequate coverage of the event region, in such a manner that no event information is missed.
-
Related Work
Elimination of redundant data is a crucial issue in a WMSN. Since redundancy causes the same data to be transmitted repeatedly, it needs to be eliminated so as to reduce communication cost in terms of wasted energy, bandwidth used, CPU processing and so on. The art gallery problem is a well-known related work: it determines the least number of nodes and their locations needed to provide full coverage of the monitored region [2]. However, it can be solved in polynomial time only in a two-dimensional (2D) environment, and its solution cannot be applied to our problem when sensors are deployed arbitrarily; an art-gallery-style solution would require a prior manual deployment of camera sensors, assuming that the topology of the scalar sensors within the WMSN and the deployment regions are known in advance. Another related work that eliminates data redundancy through sensing-region management is presented in paper [3], where the entire sensing field is divided into a number of sensing regions. While the network is running, the scalars in each sensing region form a cluster, so events occurring in a region can be managed by its scalar cluster head, and by listening to the scalar cluster heads each camera can learn the exact coverage overlaps through exchanging information with its neighbours. Because of the FOV, coverage has in some works been treated as a special case of the circular coverage used in WSNs, and such networks are referred to as directional FOV sensor networks. The work in [14] uses a node placement strategy to provide full coverage and connectivity among nodes in such networks. Several other works similarly aim to minimize data redundancy.
-
Assumptions Taken
The scalar and camera sensors are assumed to be randomly deployed, and both have fixed positions. In the implementation, the field of view of a camera is modelled as a circle for omni-directional camera sensors and as a trapezium for directional camera sensors. Each camera sensor has a certain field of view (FOV) and depth of field (DOF): the FOV is the angle within which a camera sensor can take an accurate image of an object [2], and the DOF is the distance up to which it can do so [2]. The event boundary can be represented by a circle or a polygon; a circular shape is assumed for easy implementation, and the sensing ranges of the scalars are likewise assumed to be circular. Coverage is defined as the portion of the event area that is covered by all the camera sensors. It is assumed that camera sensors broadcast a CIM (camera information message) and scalars broadcast a SIM (scalar information message), each containing the sender's id and location, so all sensors know one another's positions. It is further assumed that all sensors can communicate with one another irrespective of sensor type.
As explained in the introduction, sensing of the event also takes place up to a certain extent outside the event region [1]: scalars outside the event boundary whose sensing range covers the event region also report it, and the cameras they inform may be actuated unnecessarily even though they do not cover the event region. Our aim is to keep such cameras turned off and to activate only the optimum number of cameras needed for adequate coverage, without missing any event information. Taking this sensing model from paper [1] into account, we compare the results for omni-directional and directional camera sensors.
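This sensing model reduces to a simple geometric test: a scalar reports the event whenever its circular sensing range overlaps the circular event region, even if the scalar itself lies outside the event boundary. A minimal sketch of that test, with illustrative names of our own, is given below.

```cpp
#include <cmath>

struct Point { double x, y; };

// A scalar detects the event when its circular sensing range overlaps the
// circular event region, even if the scalar itself lies outside the event
// boundary (the sensing model of [1]).
bool detectsEvent(const Point& scalar, double sensingRange,
                  const Point& eventCenter, double eventRadius) {
    double dx = scalar.x - eventCenter.x, dy = scalar.y - eventCenter.y;
    return std::sqrt(dx * dx + dy * dy) <= sensingRange + eventRadius;
}
```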
-
Problem Definition and Proposed Work
In this work we take a comparative approach to the use of directional versus omni-directional camera sensors. Before entering into the details, let us recall what SC is: the Scalar Count (SC) of a camera is the number of scalar sensors that are detecting the event and are present within the FOV of that camera sensor [2].
Fig. 1. Scalar Count (SC)
Consider Fig. 1. The large pink circle represents the event region, C represents the camera sensor and S represents the scalar sensors. There are six scalar sensors present within the FOV of the camera sensor; the FOV is represented here by a trapezium, since the camera is directional. The dark circles represent scalars that are detecting the event and the white circles represent scalars that are not. As only four of the six scalars are detecting the event, the SC value of camera C is 4. We also distinguish inner, outer and fringe scalar nodes: an inner node lies completely within the field of view of a camera sensor, an outer node lies completely outside it, and a fringe node lies partly within it [3].
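The SC computation itself reduces to a point-in-FOV test. The sketch below, with illustrative helper names of our own, uses the two FOV shapes assumed in this paper: a circle of radius equal to the DOF for an omni-directional camera, and a trapezium, treated as a convex quadrilateral, for a directional camera.

```cpp
#include <vector>

struct Point { double x, y; };

// Omni-directional FOV: a scalar is covered if it lies within DOF of the camera.
bool inOmniFov(const Point& cam, double dof, const Point& s) {
    double dx = s.x - cam.x, dy = s.y - cam.y;
    return dx * dx + dy * dy <= dof * dof;
}

// Directional FOV modelled as a trapezium given by its four corners in
// counter-clockwise order: the scalar must lie on the left of every edge.
bool inTrapeziumFov(const Point quad[4], const Point& s) {
    for (int i = 0; i < 4; ++i) {
        const Point& a = quad[i];
        const Point& b = quad[(i + 1) % 4];
        double cross = (b.x - a.x) * (s.y - a.y) - (b.y - a.y) * (s.x - a.x);
        if (cross < 0) return false;   // outside this edge
    }
    return true;
}

// Scalar Count (SC): event-detecting scalars that fall inside the FOV
// (shown here for the omni-directional, circular FOV).
int scalarCount(const std::vector<Point>& detectingScalars,
                const Point& cam, double dof) {
    int sc = 0;
    for (const Point& s : detectingScalars)
        if (inOmniFov(cam, dof, s)) ++sc;
    return sc;
}
```

The directional SC is obtained in the same way, with inTrapeziumFov in place of inOmniFov; testing a scalar's sensing disc against these shapes in the same manner yields the inner, fringe and outer labels.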
Fig. 2. Case of Directional Camera Sensors
Consider Fig. 2. There are nine camera sensors, namely C1 through C9. The large pink circle represents the event region and the small circles represent the scalar sensors. When a scalar informs a camera sensor about the occurring event, it sends a DETECTION message containing its id and the event information. After receiving DETECTION messages, the camera sensors exchange their scalar count values through INFORM messages, and the camera sensor having the maximum SC value is activated first [2]. If there is a tie in SC value, any one of the tied cameras can be activated. The activated camera broadcasts an UPDATE message containing the ids of the event-detecting scalars within its FOV, and the other camera sensors decide on their own activation by matching their scalar ids against those contained in the UPDATE message [2].
Although distributed camera actuation based on scalar count eliminates much of the redundant data, the redundancy caused by overlapping FOVs cannot be eliminated completely here, because, as shown in Fig. 2, turning off some camera sensors may lead to loss of event information. Our objective is to eliminate redundant data in such a manner that no event information is lost, and in this case all of C1 through C9 must be activated to avoid losing event information. If we instead use a circular FOV in place of the directional FOV, we need not activate all the cameras: a circular FOV covers a larger portion of the event area, so most of the scalars are already covered by activating a smaller number of cameras.
In many cases with directional camera sensors, some of the scalars present outside the event region are not covered by the field of view of any camera, so the event information captured by scalars lying outside the FOV of the directional cameras is lost. With a circular field of view, most of these outer nodes come within the FOV of an omni-directional camera, so the event information loss caused by such outer nodes can be eliminated.
Consider another scenario, shown in Fig. 3, in which omni-directional camera sensors are used. The pink circle represents the event region, the medium circles represent the omni-directional FOVs of the cameras, and the small circles represent the scalar sensors. The camera sensors are labelled C1 through C11. When the event takes place, the scalars first send DETECTION messages to the camera sensors, which then exchange their SC values with one another through INFORM messages. The camera having the maximum SC value is activated first, exactly as in the directional case of paper [2]. The activated camera sends an UPDATE message to its neighbours, and based on the scalar ids contained in that message the other cameras decide which of them are to be actuated.
Fig. 3. Case of Omni-directional Camera Sensors
In Fig. 3, cameras C1, C3, C4, C5, C6 and C9 are activated; the rest are kept in sleep mode. Since the event information captured by C2, C7, C8, C10 and C11 is also captured by the activated cameras, there is no need to turn these cameras on. With omni-directional cameras, most of the event-detecting scalars are covered by the camera sensors, so the information loss caused by event-detecting outer scalar nodes is minimized. Using omni-directional cameras therefore reduces data redundancy with fewer activated cameras than the directional case, and less event information is lost, because a larger portion of the event area is covered by the omni-directional cameras.
-
Implementation and Result Analysis
The implementation was done in C++ on the Ubuntu platform. The scalar and camera sensors are assumed to be deployed randomly with fixed positions. We varied different parameters, namely the depth of field, the event radius, the number of scalars and the number of camera sensors, one at a time, and observed their effect on the number of cameras actuated in both cases. We compared two cases: directional cameras and omni-directional cameras.
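For reference, an experiment of this kind can be organised as a simple parameter sweep. The following self-contained toy example (our own simplification, with parameter values chosen purely for illustration rather than taken from the paper) deploys scalars and omni-directional cameras at random, marks the scalars that fall inside a circular event, and applies the greedy SC-based actuation rule while the number of scalars is varied.

```cpp
// Toy sweep: random deployment, circular event, omni-directional cameras with
// circular FOV of radius `dof`, and the greedy SC-based actuation rule.
// All parameter values are illustrative, not those used in the paper.
#include <algorithm>
#include <cstdio>
#include <random>
#include <set>
#include <vector>

struct P { double x, y; };
static double dist2(P a, P b) { double dx = a.x - b.x, dy = a.y - b.y; return dx * dx + dy * dy; }

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 100.0);   // 100 x 100 field
    const int    numCameras  = 30;
    const double dof         = 20.0;
    const P      eventCenter = {50.0, 50.0};
    const double eventRadius = 25.0;

    for (int numScalars = 50; numScalars <= 400; numScalars += 50) {
        std::vector<P> scalars(numScalars), cams(numCameras);
        for (auto& s : scalars) s = {u(rng), u(rng)};
        for (auto& c : cams)    c = {u(rng), u(rng)};

        // Each camera's set of event-detecting scalars inside its circular FOV.
        std::vector<std::set<int>> detecting(numCameras);
        for (int i = 0; i < numScalars; ++i) {
            if (dist2(scalars[i], eventCenter) > eventRadius * eventRadius) continue;
            for (int c = 0; c < numCameras; ++c)
                if (dist2(scalars[i], cams[c]) <= dof * dof) detecting[c].insert(i);
        }

        // Greedy actuation in decreasing SC order; activate only if the camera
        // still adds at least one uncovered event-detecting scalar.
        std::vector<int> order(numCameras);
        for (int c = 0; c < numCameras; ++c) order[c] = c;
        std::sort(order.begin(), order.end(), [&](int a, int b) {
            return detecting[a].size() > detecting[b].size();
        });
        std::set<int> covered;
        int noca = 0;
        for (int c : order) {
            bool adds = false;
            for (int s : detecting[c]) if (!covered.count(s)) { adds = true; break; }
            if (adds) { ++noca; covered.insert(detecting[c].begin(), detecting[c].end()); }
        }
        std::printf("nos=%d noca=%d\n", numScalars, noca);
    }
    return 0;
}
```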
-
Effect of varying number of scalar sensors on number of cameras activated
We varied the number of scalar sensors, keeping the DOF, the number of cameras, the sensing range of the scalars, the event point, the event radius and the sensing range of the event constant, and observed the effect on the number of cameras activated. The green line represents the case of directional cameras and the red line the case of omni-directional cameras.
In the graph shown in Fig. 4, the horizontal axis represents the number of scalar sensors (nos) and the vertical axis the number of cameras activated (noca).
In both cases, increasing the number of scalar sensors initially increases the number of cameras activated: with more scalar sensors, the number of event-detecting scalars also increases, so the SC of the cameras keeps increasing.
Fig. 4. Number of Scalars (nos) vs. Number of Cameras Activated (noca)
With more scalars deployed, more cameras contain at least one event-detecting scalar, so the number of cameras activated increases gradually. It then remains almost constant, because the optimum number of camera sensors required to cover a particular event region is fixed. Comparing the two cases, the number of cameras activated with omni-directional cameras is found to be less than or equal to that of the directional approach in many cases.
-
Effect of varying number of camera sensors on number of cameras activated
We varied the number of camera sensors, keeping the DOF, the number of scalar sensors, the sensing range of the scalars, the event point, the event radius and the sensing range of the event constant, and observed the effect on the number of cameras activated.
In the graph shown in Fig. 5, the horizontal axis represents the number of camera sensors (noc) and the vertical axis the number of cameras activated (noca). The green line represents the directional case and the red line the omni-directional case. In both cases the number of cameras activated remains almost constant, since the optimum number of cameras required to cover the event region is fixed.
Fig. 5. Number of Cameras (noc) vs. Number of Cameras Activated (noca)
-
Effect of varying event radius on number of cameras activated
We varied the event radius, keeping the other parameters constant. In the graph shown in Fig. 6, the horizontal axis represents the event radius (evtradius) and the vertical axis the number of cameras activated (noca). We observed that as the event radius increases, the number of cameras activated initially increases and then remains almost constant in both cases. In the initial part of the curve the number of cameras activated may also drop suddenly, because the nodes are deployed randomly. After some point the number of cameras activated remains constant in both cases, since the optimum number of cameras required to cover the event region is fixed.
Fig. 6. Event Radius (evtradius) vs. Number of Cameras Activated (noca)
-
Effect of varying depth of field (DOF) on number of cameras activated
We varied the DOF, keeping the number of scalar sensors, the number of camera sensors, the sensing range of the scalars, the event point, the event radius and the sensing range of the event constant, and observed the effect on the number of cameras activated. In the graph shown in Fig. 7, the horizontal axis represents the depth of field of the cameras (dof) and the vertical axis the number of cameras activated (noca). As the DOF increases, the number of cameras activated initially increases in both cases and then starts decreasing. With a larger DOF, more scalars fall within the field of view of the cameras, so the scalar counts grow and more cameras cover at least one scalar; therefore more cameras are activated. With a further increase in DOF, however, the overlap between the cameras' fields of view grows, some scalars become common to two or more cameras, and the number of cameras activated decreases in both cases.
Fig. 7. Depth of Field (dof) vs. Number of Cameras Activated (noca)
-
Conclusion
Distributed camera actuation achieves redundant data elimination by actuating an optimum number of camera sensors for adequate coverage of the event region [2]. When we take into account that the occurring event is also sensed up to a certain distance outside the event region, we are able to activate only the required optimum number of camera sensors inside the event region while keeping turned off all other cameras that lie outside the event region but inside the sensing range of the event [1]. With omni-directional cameras, most of the event-detecting scalars are covered by the camera sensors, so the information loss caused by event-detecting outer scalar nodes is minimized. Moreover, omni-directional cameras reduce data redundancy with fewer activated cameras than directional cameras, and less event information is lost, because a larger portion of the event area is covered by the omni-directional cameras.
References
-
Sushree Bibhuprada B. Priyadarshini, Biswa Mohan Acharya, Debapriya Soumyesh Das, "Redundant Data Elimination & Optimum Camera Actuation in Wireless Multimedia Sensor Network (WMSN)", IJERT, vol. 2, issue 6, pp. 2381-2388, June 2013.
-
Andrew Newell, Kemal Akkaya, "Distributed collaborative camera actuation for redundant data elimination in Wireless Multimedia Sensor Networks", Ad Hoc Networks, vol. 9, pp. 514-527, 2011.
-
Wusheng Luo, Qin Lu, Jingjing Xiao, "Distributed Collaborative Camera Actuation Scheme Based on Sensing Region Management for Wireless Multimedia Sensor Network", 2012.
-
Jennifer Yick, Biswanath Mukherjee, Dipak Ghosal, "Wireless Sensor Network Survey", Computer Networks, vol. 52, pp. 2292-2330, 2008.
-
Ian F. Akyildiz, Tommaso Melodia, Kaushik R. Chowdhury, "A survey on Wireless Multimedia Sensor Networks", Computer Networks, vol. 51, pp. 921-960, 2007.
-
I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, "A survey on sensor networks", IEEE Communications Magazine, 40(8), pp. 104-112, 2002.
-
S. Toumpis, T. Tassiulas, "Optimal deployment of large wireless sensor networks", IEEE Transactions on Information Theory, vol. 52, pp. 2935-2953, 2006.
-
J. Yick, G. Pasternack, B. Mukherjee, D. Ghosal, "Placement of network services in sensor networks", Self-Organization Routing and Information Integration in Wireless Sensor Networks (Special Issue), International Journal of Wireless and Mobile Computing (IJWMC), pp. 101-112, 2006.
-
D. Pompili, T. Melodia, I.F. Akyildiz, "Deployment analysis in underwater acoustic wireless sensor networks", in: WUWNet, Los Angeles, CA, 2006.
-
I. F. Akyildiz, E. P. Stuntebeck, "Wireless underground sensor networks: research challenges", Ad Hoc Networks, vol. 4, pp. 669-686, 2006.
-
Pu Wang, Rui Dai, Ian F. Akyildiz, "Collaborative Data Compression Using Clustered Source Coding for Wireless Multimedia Sensor Networks", Broadband Wireless Networking Laboratory, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, IEEE INFOCOM, 2010.
-
M. Amac Guvensan, A. Gokhan Yavuz, "On coverage issues in directional sensor networks: A survey", vol. 9, pp. 1238-1255, 2011.
-
K. Ren, K. Zeng, W. Lou, "Fault-tolerant event boundary detection in wireless sensor networks", in: IEEE GLOBECOM, vol. 7, pp. 354-363, 2006.
-
X. Han, X. Cao, E. Lloyd, C.-C. Shen, "Deploying directional sensor networks with guaranteed connectivity and coverage", in: 5th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON '08), pp. 153-160, doi: 10.1109/SAHCN, 2008.
-
Amitabha Ghosh, Sajal K. Das, "Coverage and connectivity issues in wireless sensor networks: A survey", vol. 4, pp. 303-334, 2008.
-
H. Gupta, Z. Zhou, S. R. Das, Q. Gu, "Connected sensor cover: Self-organization of sensor networks for efficient query execution", IEEE/ACM Trans. Netw., vol. 14, pp. 55-67, 2006.
-
J. O'Rourke, "Art Gallery Theorems and Algorithms", Oxford University Press, Inc., Oxford, 1987.
-
Tom Pfeifer, "Redundant positioning architecture", vol. 28, pp. 1575-1585, 2005.
-
Andrew Newell and Kemal Akkaya, "Self-actuation of Camera Sensors for Redundant Data Elimination in Wireless Multimedia Sensor Networks", Southern Illinois University Carbondale, Carbondale, IL 62901, USA.
-
Yong Xu, Yin Liu, Yao Liu, "Algorithm for redundancy elimination in Network".
-
Soumyadip Sengupta, Swagatam Das, M. D. Nasir, B. K. Panigrahi, "Multi-objective node deployment in WSNs: In search of an optimal trade-off among coverage, lifetime, energy consumption, and connectivity", 23 May 2012.
-
U. Monaco, F. Cuomo, T. Melodia, F. Ricciato, M. Borghini, "Understanding optimal data gathering in the energy and latency domains of a wireless sensor network", Computer Networks, vol. 50, pp. 3564-3584, 2006.
-
Dimitrios Zorbas, Dimitris Glynos, Panayiotis Kotzanikolaou, Christos Douligeris, "Solving coverage problems in wireless sensor networks using cover sets", Ad Hoc Networks, vol. 8, pp. 400-415, 2010.
-
Nurcan Tezcan, Wenye Wang, "Self-Orienting Wireless Multimedia Sensor Networks for Maximizing Multimedia Coverage", 2008.
-
Xianhua Liu and R. B. Randall,"Redundant Data Elimination In Independent Component Analysis",School of Mechanical and Manufacturing Engineering The University of New South Wales, Sydney,Australia.
-
Alain Girault, "Elimination of redundant messages with a two-pass static analysis algorithm", Parallel Computing, vol. 28, pp. 433-453, 2002.
-
Yanli Cai, Wei Lou, Minglu Li and Xiang-Yang Li, "Target-Oriented Scheduling in Directional Sensor Networks", IEEE INFOCOM, 2007.
-
Jian Wang, Changyong Niu, Ruimin Shen, "Priority-based target coverage in directional sensor networks using a genetic algorithm", Computers and Mathematics with Applications, vol. 57, pp. 1915-1922, 2009.
-
S. Sundhar Ram, D. Manjunath, Srikanth K. Iyer, and D. Yogeshwaran, "On the Path Coverage Properties of Random Sensor Networks", IEEE Transactions on Mobile Computing, vol. 6, no. 5, May 2007.
-
M. Ding, D. Chen, K. Xing, X. Cheng, "Localized fault-tolerant event boundary detection in sensor networks", vol. 2, pp. 902-913, doi: 10.1109/INFCOM, 2005.
Sushree Bibhuprada B. Priyadarshini is currently pursuing her Ph.D. at VSSUT, Burla. She completed her M.Tech in Computer Science and Data Processing at ITER, Bhubaneswar, Odisha, India, after completing her B.Tech in Information Technology. Her areas of interest are sensor networks and database security. She was the topper of her branch for the year 2013 in the M.Tech programme at ITER.
Address for Correspondence: Sushree Bibhuprada B. Priyadarshini C/O: Dr. Hemanta Kumar Das
Plot No: 945
At: Baramunda, Post: Delta, Thana: Khandagiri Dist: Khurdha, Bhubaneswar,Odisha, India.
PIN: 751003