Redundant Data Elimination & Optimum Camera Actuation in Wireless Multimedia Sensor Network (WMSN)

DOI : 10.17577/IJERTV2IS60654


Sushree Bibhuprada B. Priyadarshini

M.Tech in Computer Science and Data Processing at ITER, Bhubaneswar; B.Tech in Information Technology

Biswa Mohan Acharya

Assistant Professor, Dept. of Computer Applications, ITER, Bhubaneswar

Debapriya Soumyesh Das

B.Tech in Electronics and Telecommunication Engineering at IIIT, Bhubaneswar

Abstract

Wireless Multimedia Sensor Network (WMSN) is an extension of Wireless Sensor Network (WSN) in which camera sensors are present in addition to scalar sensors. In a WMSN, multimedia data incurs high communication and processing cost, largely because of data redundancy. This redundancy arises from the overlapping fields of view (FOVs) of the camera sensors and inflates communication cost in terms of bandwidth consumed, CPU processing, and so on. Whenever an event takes place in a monitored region, it is first detected by the scalar sensors, which inform their corresponding camera sensors about the occurrence of the event. Sensing, however, is not confined to the event boundary: scalars lying inside the boundary, on it, and up to some distance outside it all detect the event. Consequently, scalars outside the event boundary that fall within the FOV of some camera also report the event, and the concerned cameras unnecessarily undergo distributed camera actuation; some or all of the cameras lying outside the event boundary are then actuated even though their depth of field (DOF) does not cover the event region. Therefore, our objective is to eliminate this redundant data while actuating an optimum number of camera sensors in such a manner that no event information is lost.


  1. Introduction

    Nowadays sensor networks are used in many spheres of life. The Wireless Multimedia Sensor Network (WMSN) has recently started to receive a lot of attention due to its potential to be deployed flexibly in various applications at low cost [1]. However, WMSNs face several challenges such as resource constraints, congestion, delay, and data redundancy. Among these, data redundancy is our topic of interest: it involves the repeated transmission of the same data during communication in the WMSN. When an event takes place in a monitored region, it is first detected by the scalar sensors, which inform their corresponding camera sensors about the occurring event. The camera sensors then collaboratively exchange their readings to decide which among them should be actuated, using distributed camera actuation based on scalar sensor count (DCA-SC) [1]. Sensing of an event, however, takes place not only on or inside the event boundary but also up to a certain distance outside it.

    Some scalars lying outside the event boundary sense the occurring event because their sensing range still covers the event region. After detecting the event, these scalars also inform their corresponding camera sensors. On being informed, the camera sensors undergo the distributed camera actuation scheme, and some or all of them are actuated unnecessarily even though their DOF does not cover the event region. The overlapping fields of view of these cameras then cause redundant data transmission. Our aim is therefore to keep such cameras turned off and to activate only the optimum number of camera sensors needed for adequate coverage of the event region, in such a manner that no event information is missed.

  2. Related Work

    Elimination of redundant data is a crucial issue in WMSNs. Because redundancy causes the same data to be transmitted repeatedly, it needs to be eliminated to reduce communication cost in terms of unnecessary energy wastage, bandwidth consumption, and CPU processing. The art gallery problem is a well-known related work: it can be used to determine the least number of nodes and their locations needed to provide full coverage of the monitored region [1]. However, the art gallery problem is solvable in polynomial time only in a two-dimensional (2D) environment, and its solution cannot be applied to our problem when sensors are arbitrarily deployed [1]. An art gallery solution requires a prior manual deployment of camera sensors, assuming that the topology of the scalar sensors within the WMSN and the deployment regions are known in advance.

    Another related work that eliminates data redundancy based on sensing region management is presented in [2], where the entire sensing field is divided into a number of sensing regions. While the network runs, a cluster of scalars is formed in each sensing region, and events occurring in each region are managed by a scalar cluster head. By hearing from the scalar cluster heads, each camera can learn the exact coverage overlaps through exchanging information with its neighbours. Because of the FOV, some works treat coverage as a special case of the circular coverage used in WSNs; such networks are referred to as directional FOV sensor networks. The work in [13] proposes a node placement strategy for providing full coverage and connectivity among nodes in such networks. Several other works similarly aim to minimize data redundancy.

  3. Assumptions Taken

    We assume that the scalar and camera sensors are randomly deployed and that both have fixed positions. The depth of field (DOF), the sensing range, and the event boundary are assumed to be circular for ease of implementation; in general the event boundary could be represented by a circle or a polygon, but a circle is assumed in our context. Each camera sensor has a certain field of view (FOV) and depth of field (DOF) [1]. The FOV represents the angle within which a camera sensor can take an accurate image of an object, and the DOF represents the distance up to which it can do so. Coverage is defined as the portion of the event area that is covered by all the camera sensors.
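    As a concrete illustration of the circular-DOF assumption, the following C++ sketch tests whether a scalar lies within a camera's depth of field. The names Point and insideDof are illustrative and not taken from [1] or from our implementation code.

```cpp
#include <cmath>

struct Point { double x, y; };

// Returns true when the scalar at s lies inside the circular depth of
// field (radius dof) of a camera centred at c, per the assumption above.
bool insideDof(const Point& c, const Point& s, double dof) {
    const double dx = s.x - c.x, dy = s.y - c.y;
    return std::sqrt(dx * dx + dy * dy) <= dof;  // Euclidean distance test
}
```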

    In this paper it is considered that camera sensors broadcast a CIM (camera information message) and scalars broadcast a SIM (scalar information message), each containing the sender's id and location information. As a result, all the sensors know each other's positions. It is assumed that all the sensors can communicate with one another regardless of sensor type.
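    A minimal sketch of what such messages might carry, assuming only the id and location fields mentioned above (the field names are illustrative):

```cpp
// Hypothetical layouts for the position-exchange messages: each carries
// just the sender's id and location, so all nodes learn each other's
// positions after one broadcast round.
struct CIM { int cameraId; double x, y; };  // broadcast by every camera
struct SIM { int scalarId; double x, y; };  // broadcast by every scalar
```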

  4. Problem Definition and Proposed Work

    1. Problem definition

      The problem under consideration can be defined as follows. Consider a WMSN with m scalars and n camera sensors, all randomly deployed initially.

      Consider Figure 1, where the tiny dark circles represent the scalar sensors and C1 through C9 represent the nine camera sensors, each with its field of view (FOV) shown as the medium-sized circle around it. The pink circle represents the event region, and the largest circle represents the sensing area of the event. R denotes the event radius and Sr the distance up to which sensing of the event occurs. Four types of tables are maintained. The FOV table contains the ids of the scalars that lie within the FOV of the camera sensor. The EDS (event detecting scalar) table contains the ids of the scalars within the FOV of the camera that are detecting the event. The PCS (priority camera sensor) table contains the ids of the activated camera sensor(s). The COV table contains the ids of neighbouring scalars. When an event takes place in a monitored region, it is first detected by the scalar sensors, which inform the camera sensors about the occurring event by sending a DETECTION message [1].
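      These tables map naturally onto per-camera sets of ids. The following sketch is a hypothetical in-memory representation, not the paper's actual data layout; it also includes the DETECT table that the proposed work (next subsection) adds alongside the tables of [1].

```cpp
#include <set>

// Hypothetical per-camera state mirroring the tables described above.
struct CameraState {
    int id;
    std::set<int> fov;     // ids of scalars inside this camera's FOV
    std::set<int> eds;     // event detecting scalars inside the FOV
    std::set<int> pcs;     // ids of activated camera sensors
    std::set<int> cov;     // ids of neighbouring scalars
    std::set<int> detect;  // proposed work: scalars reporting value 1
};
```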

      Figure 1. Data Redundancy Issue

      The DETECTION message contains the ids of the scalar sensors that detect the event. An event detecting scalar can inform a camera sensor about the occurring event only if it lies within the FOV of that camera [1]. On being informed by the scalar sensors, the camera sensors exchange INFORM messages with each other [1]; the INFORM message carries the scalar count (SC) of each camera sensor.

      SC represents the number of scalar sensors that lie within the FOV of a camera and are detecting the event. After the exchange of INFORM messages, each camera sensor maintains a priority list containing the SC value of every camera sensor, so each camera's SC value becomes available to all the others. The camera having the maximum SC value is activated first. The activated camera sends an UPDATE message to the other cameras; this message contains the ids of the scalar sensors that lie within the FOV of the activated camera. By matching the ids contained in the UPDATE message against the ids in their own FOV tables, the camera sensors decide which of them will be activated next, in descending order of SC value. Accordingly, cameras C1, C3, C5, C6, C8 and C9 are activated. We observe, however, that the activation of C8 and C9 is unnecessary, since their DOFs do not cover the actual event region. Similarly, for a large event region, a number of such cameras exist that unnecessarily undergo the distributed camera actuation scheme, and some or all of them are activated. Our objective is therefore to keep such cameras turned off and to activate an optimum number of camera sensors for adequate coverage of the event region, in such a manner that no event information is missed and redundancy is eliminated.

    2. Proposed work

      When an event takes place in a monitored region, it is first detected by the scalar sensors, which inform their corresponding camera sensors about the occurring event by broadcasting a DETECTION message [1]. In our proposed work we introduce a binary parameter, sent along with the DETECTION message by the concerned scalar sensor: its value is 1 if the scalar lies within the occurring event region and 0 if it lies outside the event region.
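      A small sketch of how the extended DETECTION message might look; DetectionMsg and makeDetection are illustrative names, and the test of whether the scalar lies inside the event region is assumed to be done elsewhere (for example, against the circular event boundary).

```cpp
// Hypothetical DETECTION message extended with the proposed binary
// parameter: 1 when the reporting scalar lies inside the event region,
// 0 when it senses the event from outside the boundary.
struct DetectionMsg {
    int scalarId;
    int insideEvent;  // binary parameter: 1 = inside, 0 = outside
};

// Scalar-side construction of the message.
DetectionMsg makeDetection(int scalarId, bool inEventRegion) {
    return DetectionMsg{scalarId, inEventRegion ? 1 : 0};
}
```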

      After receiving a DETECTION message, a camera sensor adds the id of any scalar whose binary parameter value is 1 to its DETECT table. Each camera sensor then matches the ids in its FOV table against the ids contained in the DETECT table. If any scalar id appears in both tables, that camera undergoes the distributed camera actuation scheme; otherwise, it is kept turned off. Considering Figure 1, all the scalar ids of the event region marked in pink are maintained in the DETECT table. The ids in the FOV tables of C8 and C9 match no id in the DETECT table, so both are kept turned off, and unnecessary camera actuation is avoided. Similarly, for large event regions, many cameras lie outside the event boundary but inside the sensing region and would otherwise be activated unnecessarily, since their DOFs do not cover the event region. By maintaining the DETECT table and matching it against the FOV table, a camera sensor can thus decide from the very beginning whether it should undergo distributed camera actuation. Any camera for which no scalar id matches across the two tables is kept turned off.

      The two cases can be stated as follows.

      Case 1: If, for camera sensor i, FOV ∩ DETECT = ∅, then camera i is not activated and it broadcasts a SLEEP message.

      Case 2: If, for camera sensor i, FOV ∩ DETECT ≠ ∅, then camera i undergoes the distributed camera actuation scheme and takes part in the INFORM message exchange with its neighbours.
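      The two cases reduce to an emptiness test on the intersection of the two tables. A minimal C++ sketch, reusing the set representation assumed earlier:

```cpp
#include <algorithm>
#include <iterator>
#include <set>

// Case 1 / Case 2 test: camera i joins the distributed actuation scheme
// only if its FOV table shares at least one scalar id with the DETECT
// table; otherwise it broadcasts SLEEP and stays turned off.
bool joinsActuation(const std::set<int>& fovTable,
                    const std::set<int>& detectTable) {
    std::set<int> common;
    std::set_intersection(fovTable.begin(), fovTable.end(),
                          detectTable.begin(), detectTable.end(),
                          std::inserter(common, common.begin()));
    return !common.empty();  // non-empty intersection => Case 2
}
```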

      After the exchange of INFORM messages, each camera sensor maintains its own priority list containing the SC values of all other cameras, so every camera's SC value becomes available to the others. The camera sensor having the maximum SC value is activated first; if a tie occurs in SC value, either camera can be actuated first to break the tie. The id of the activated camera is then maintained in the PCS table. The activated camera sends an UPDATE message to the other cameras; this message contains the ids of the event detecting scalars that lie within the FOV of the activated camera. The other cameras collaboratively match their own FOV table ids against the ids contained in the UPDATE message to decide which of them is actuated next. In the algorithm below, i and j stand for sensor nodes in the respective cases. The following is a modified version of the algorithm given in [1], in which the DETECT table is additionally maintained.

    3. Algorithm[1]

  1. Initialize table FOV to all scalar sensors within the field of view.

  2. Initialize tables EDS, DETECT, COV, and PCS to be empty.

  3. Cameras send a CIM (camera information message) and scalars send a SIM (scalar information message) so that all nodes know each other's positions.

  4. When an event takes place, event sensing scalars broadcast a DETECTION message whose binary parameter is 1 for scalars inside the event region and 0 for scalars outside it.

  5. While receiving DETECTION messages do

  6. if a DETECTION message is received from j AND j belongs to FOV then

  7. add j to the EDS table.

  8. If the DETECTION message carries binary parameter value 1, the camera adds the id of that scalar to the DETECT table.

  9. Match the ids in the DETECT table against the ids of the scalars within FOV; if any id matches, the camera takes part in distributed camera actuation, otherwise it is kept turned off.

  10. if SC > 0 then [1]

  11. broadcast an INFORM message.

  12. While receiving INFORM messages do

  13. if an INFORM message is received from j then

  14. if (SCi < SCj) or ((SCi = SCj) and (i < j)) then

  15. activate camera j having the maximum SC value and add j to the PCS table.

  16. If the PCS table size > 0 then

  17. if an UPDATE message is received from j then

  18. add the neighbour scalars of j to the neighbours of i,

  19. delete j from the PCS table,

  20. broadcast an UPDATE message.
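    The priority test of steps 14 and 15 can be written as a single predicate. This is a sketch of the comparison exactly as stated in the algorithm, with illustrative names:

```cpp
// True when camera i defers to camera j under step 14: j has the larger
// scalar count, or the counts tie and the id comparison breaks the tie.
bool defersTo(int sc_i, int sc_j, int id_i, int id_j) {
    return (sc_i < sc_j) || (sc_i == sc_j && id_i < id_j);
}
```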

  5. Implementation and Result Analysis

    The implementation was done in C++. We assumed that the scalar and camera sensors are deployed randomly and have fixed positions. We varied different parameters such as the depth of field, the event radius, the number of scalars, and the number of camera sensors individually, and observed their effect on the number of cameras actuated.

    We compared two cases: the initial approach and the proposed approach. In the initial approach, the cameras lying in the entire sensing range of the event were activated. In the proposed approach, only the cameras covering the concerned event region are activated, while the cameras outside the event region but inside the sensing range are kept turned off.
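    A hypothetical driver for such a comparison is sketched below; countActivatedInitial and countActivatedProposed are placeholder stubs standing in for the two simulators, which are not reproduced here.

```cpp
#include <cstdio>

// Placeholder stubs: in the real experiment these would run the baseline
// scheme and the proposed DETECT-table scheme on the same random
// deployment and return the number of cameras each one activates.
int countActivatedInitial(int /*numScalars*/)  { return 0; }
int countActivatedProposed(int /*numScalars*/) { return 0; }

int main() {
    // Sweep the number of scalars (nos) while holding DOF, number of
    // cameras, event point, event radius and sensing ranges constant.
    for (int nos = 50; nos <= 500; nos += 50) {
        std::printf("nos=%d initial=%d proposed=%d\n", nos,
                    countActivatedInitial(nos),
                    countActivatedProposed(nos));
    }
    return 0;
}
```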

    1. Effect of varying number of scalar sensors on number of cameras activated

      Figure 2. Number Of Scalars (nos) Vs. Number Of Cameras Activated (noca)

      We varied the number of scalar sensors, keeping the DOF, the number of cameras, the sensing range of the scalars, the event point, the event radius, and the sensing range of the event constant, and observed the effect on the number of cameras activated and on the coverage ratio. The green line represents the initial approach and the red line the proposed approach.

      The horizontal axis of the graph in Figure 2 represents the number of scalar sensors (nos), and the vertical axis the number of cameras activated (noca).

      In both approaches, increasing the number of scalar sensors initially increases the number of cameras activated: as the number of scalars grows, the number of event detecting scalars also grows, so the SC values of the cameras keep increasing and more cameras contain at least one event detecting scalar. The number of cameras activated therefore increases gradually and then remains almost constant, because the optimum number of camera sensors required to cover a particular event region is fixed. Comparing the two approaches, the number of cameras activated in the proposed approach is less than or equal to that of the initial approach in many cases.

      Since fewer cameras are activated in the proposed approach, the amount of FOV overlap is smaller, and redundant data transmission is therefore reduced compared with the initial approach. The proposed approach is thus better than the initial approach.

    2. Effect of varying number of camera sensors on number of cameras activated

      We varied the number of camera sensors, keeping the DOF, the number of scalar sensors, the sensing range of the scalars, the event point, the event radius, and the sensing range of the event constant, and observed the effect on the number of cameras activated and on the coverage ratio.

      The horizontal axis of the graph in Figure 3 represents the number of camera sensors (noc), and the vertical axis the number of cameras activated (noca). The green line represents the initial approach and the red line the proposed approach.

      Figure 3. Number Of Cameras (noc) Vs. Number Of Cameras Activated (noca)

      In both approaches the number of cameras activated remains constant, because the optimum number of cameras required to cover an event region is fixed. However, in the proposed approach the number of cameras activated is less than that of the initial approach in all the cases shown in Figure 3. It is therefore concluded that the proposed approach is better than the initial approach.

    3. Effect of varying event radius on number of cameras activated

      The horizontal axis of the graph in Figure 4 represents the event radius (evtradius), and the vertical axis the number of cameras activated (noca). The green line represents the initial approach and the red line the proposed approach.

      We observed that as the event radius increases, the number of cameras activated initially increases and then remains almost constant in both approaches. In the initial approach the number of cameras activated occasionally drops suddenly, which we attribute to the random deployment of nodes. Eventually the number of cameras activated remains constant in both approaches, since the optimum number of cameras required to cover the event region is fixed.

      Figure 4. Event Radius (evtradius) Vs. Number Of Cameras Activated (noca)

    4. Effect of varying depth of field (DOF) on number of cameras activated

      We varied the DOF, keeping the number of scalar sensors, the number of camera sensors, the sensing range of the scalars, the event point, the event radius, and the sensing range of the event constant, and observed the effect on the number of cameras activated and on the coverage ratio. The horizontal axis of the graph in Figure 5 represents the depth of field of the cameras (dof), and the vertical axis the number of cameras activated (noca). The green line represents the initial approach and the red line the proposed one.

      Figure 5. Depth Of Field (dof) Vs. Number Of Cameras Activated (noca)

      As the dof of the cameras increases, the number of cameras activated first increases in both approaches and then starts decreasing. With a larger dof, more scalars fall within the field of view of each camera, so the scalar counts increase and more cameras cover at least one event detecting scalar, which activates more cameras. With a further, excessive increase in dof, the overlap between camera DOFs grows and some scalars become common to two or more cameras, so the number of cameras activated decreases in both approaches. In most cases, however, the number of cameras activated in the proposed approach is less than in the initial approach.

  6. Conclusion

    Distributed camera actuation achieves redundant data elimination by actuating an optimum number of camera sensors for adequate coverage of the event region. When we account for sensing of the occurring event up to a certain distance outside the event region, the proposed approach activates only the required optimum number of camera sensors covering the event region, while keeping all other cameras, those lying outside the event region but inside the sensing range of the event, turned off.

    By studying all four cases, varying the number of scalars, the number of cameras, the event radius, and the dof, and their effect on the number of cameras activated, we observed that in most cases the proposed approach activates fewer cameras than the initial approach. With fewer cameras activated, the amount of overlap among FOVs is smaller, so redundant data transmission in the proposed approach is less than in the initial approach. We therefore conclude that the proposed approach is more optimized than the initial one.

  7. Acknowledgment

    The authors are highly grateful to Mrs. Kaberi Das, Asst. Professor, Dept. of Computer Applications, ITER, Bhubaneswar, for her constant inspiration, constructive ideas, persistent encouragement, and timely cooperation in making this investigation successful. The authors also wish to convey their gratitude to Dr. Debahuti Mishra, H.O.D., Computer Applications, ITER, Bhubaneswar, for her cooperation, valuable suggestions, and support in various ways toward the successful completion of the work.

  8. References

  1. Andrew Newell, Kemal Akkaya, "Distributed collaborative camera actuation for redundant data elimination in Wireless Multimedia Sensor Networks", Ad Hoc Networks, vol. 9, pp. 514-527, 2011.

  2. Wusheng Luo, Qin Lu, Jingjing Xiao, "Distributed Collaborative Camera Actuation Scheme Based on Sensing Region Management for Wireless Multimedia Sensor Network", 2012.

  3. Jennifer Yick, Biswanath Mukherjee, Dipak Ghosal, "Wireless Sensor Network Survey", Computer Networks, vol. 52, pp. 2292-2330, 2008.

  4. Ian F. Akyildiz, Tommaso Melodia, Kaushik R. Chowdhury, "A survey on Wireless Multimedia Sensor Networks", Computer Networks, vol. 51, pp. 921-960, 2007.

  5. I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, "A survey on sensor networks", IEEE Communications Magazine, vol. 40, no. 8, pp. 104-112, 2002.

  6. S. Toumpis, T. Tassiulas, "Optimal deployment of large wireless sensor networks", IEEE Transactions on Information Theory, vol. 52, pp. 2935-2953, 2006.

  7. J. Yick, G. Pasternack, B. Mukherjee, D. Ghosal, "Placement of network services in sensor networks", Self-Organization, Routing and Information Integration in Wireless Sensor Networks (special issue), International Journal of Wireless and Mobile Computing (IJWMC), pp. 101-112, 2006.

  8. D. Pompili, T. Melodia, I.F. Akyildiz, "Deployment analysis in underwater acoustic wireless sensor networks", in: WUWNet, Los Angeles, CA, 2006.

  9. I.F. Akyildiz, E.P. Stuntebeck, "Wireless underground sensor networks: research challenges", Ad Hoc Networks, vol. 4, pp. 669-686, 2006.

  10. Pu Wang, Rui Dai, Ian F. Akyildiz, "Collaborative Data Compression Using Clustered Source Coding for Wireless Multimedia Sensor Networks", Broadband Wireless Networking Laboratory, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, IEEE INFOCOM, 2010.

  11. M. Amac Guvensan, A. Gokhan Yavuz, "On coverage issues in directional sensor networks: A survey", Ad Hoc Networks, vol. 9, pp. 1238-1255, 2011.

  12. K. Ren, K. Zeng, W. Lou, "Fault-tolerant event boundary detection in wireless sensor networks", in: IEEE GLOBECOM, vol. 7, pp. 354-363, 2006.

  13. X. Han, X. Cao, E. Lloyd, C.-C. Shen, "Deploying directional sensor networks with guaranteed connectivity and coverage", in: 5th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON '08), pp. 153-160, 2008.

  14. Amitabha Ghosh, Sajal K. Das, "Coverage and connectivity issues in wireless sensor networks: A survey", Pervasive and Mobile Computing, vol. 4, pp. 303-334, 2008.

  15. H. Gupta, Z. Zhou, S.R. Das, Q. Gu, "Connected sensor cover: Self-organization of sensor networks for efficient query execution", IEEE/ACM Transactions on Networking, vol. 14, pp. 55-67, 2006.

  16. J. O'Rourke, "Art Gallery Theorems and Algorithms", Oxford University Press, Inc., Oxford, 1987.

  17. Tom Pfeifer, "Redundant positioning architecture", vol. 28, pp. 1575-1585, 2005.

  18. Andrew Newell, Kemal Akkaya, "Self-actuation of Camera Sensors for Redundant Data Elimination in Wireless Multimedia Sensor Networks", Southern Illinois University Carbondale, Carbondale, IL 62901, USA.

  19. Yong Xu, Yin Liu, Yao Liu, "Algorithm for redundancy elimination in network".

  20. Soumyadip Sengupta, Swagatam Das, M.D. Nasir, B.K. Panigrahi, "Multi-objective node deployment in WSNs: In search of an optimal trade-off among coverage, lifetime, energy consumption, and connectivity", 23 May 2012.

  21. U. Monaco, F. Cuomo, T. Melodia, F. Ricciato, M. Borghini, "Understanding optimal data gathering in the energy and latency domains of a wireless sensor network", Computer Networks, vol. 50, pp. 3564-3584, 2006.

  22. Dimitrios Zorbas, Dimitris Glynos, Panayiotis Kotzanikolaou, Christos Douligeris, "Solving coverage problems in wireless sensor networks using cover sets", Ad Hoc Networks, vol. 8, pp. 400-415, 2010.

  23. Nurcan Tezcan, Wenye Wang, "Self-Orienting Wireless Multimedia Sensor Networks for Maximizing Multimedia Coverage", 2008.

  24. Xianhua Liu, R.B. Randall, "Redundant Data Elimination in Independent Component Analysis", School of Mechanical and Manufacturing Engineering, The University of New South Wales, Sydney, Australia.

  25. Alain Girault, "Elimination of redundant messages with a two-pass static analysis algorithm", Parallel Computing, vol. 28, pp. 433-453, 2002.

  26. Yanli Cai, Wei Lou, Minglu Li, Xiang-Yang Li, "Target-Oriented Scheduling in Directional Sensor Networks", IEEE INFOCOM, 2007.

  27. Jian Wang, Changyong Niu, Ruimin Shen, "Priority-based target coverage in directional sensor networks using a genetic algorithm", Computers and Mathematics with Applications, vol. 57, pp. 1915-1922, 2009.

  28. S. Sundhar Ram, D. Manjunath, Srikanth K. Iyer, D. Yogeshwaran, "On the Path Coverage Properties of Random Sensor Networks", IEEE Transactions on Mobile Computing, vol. 6, no. 5, May 2007.

  29. Dimitrios Zorbas, Dimitris Glynos, Panayiotis Kotzanikolaou, Christos Douligeris, "Solving coverage problems in wireless sensor networks using cover sets", Ad Hoc Networks, vol. 8, pp. 400-415, 2010.
