Wireless Networking For Live Video Broadcasting: Video Compression And Cost Optimization

DOI: 10.17577/IJERTV1IS8066




Ms. Kalpna Dhurve, Ms. Varsha Pujari

Abstract

Live video broadcasting of popular sports events attracts huge audiences throughout the world. Similar events happen every day at the local level, in schools and colleges, but they reach only small groups of viewers. For such events, the broadcasting equipment and the crew of technicians and producers require an investment that the limited popularity of the event does not justify. Wireless networking faces many challenges in providing live video broadcasting at low cost while maintaining an acceptable quality level.

In this paper we present how video analysis can be used to detect the relative locations of players, zooming factors, focus, and different views and angles. The automated nature of this system minimizes human involvement in the production of video content and hence reduces the production cost. Wireless microcasting of HD video can be deployed with the help of video encoders, so that live video broadcasting can be done wirelessly at a low cost compared with current streaming solutions.

Index Terms: Wireless networking, video compression, HD videos, video encoders, cost optimization.

I. Introduction

Today, a large number of sports events are held, but these events are limited to small groups of viewers and attract relatively small crowds. The cost of deploying the broadcast equipment and a crew of technicians and producers is usually too high because such events are not popular enough to justify the investment. In just the last few years, video application usage has increased significantly and comprises a greater portion of Internet traffic every day. Today's wireless networks must be designed from the start with the capacity to handle bandwidth-hungry video traffic. An even greater challenge is to deliver this video efficiently at an optimum cost.

Nowadays, different techniques such as video encoders, video analysers, and HD video are used to increase the reliability and efficiency of the videos to be broadcast. Different cameras can be used for different views, such as zooming, different angle effects, and locations of players. The automated nature of this system minimizes human involvement in the production of the video content and hence reduces the production cost. All of this contributes to live video broadcasting, but it only partially solves the problem, since Ethernet cables or optical fibre cables are still required to transmit the data to the data center. In addition, complicated regulatory rules for construction work, listed and protected buildings adjacent to the location, or unknown site ownership may increase the investment. Temporary cabling lying on the ground might be undesirable in places where vehicles could potentially destroy the installation during the event. All of this results in the requirement for a wireless microcasting solution at an optimized, reliably low cost.

We can make use of different broadcasting, tracking, and smart cameras to send the data to the data center; the video streams are then sent from the broadcasting cameras and distributed via a content distribution network. Video comes in many different formats, and today's wireless networks must be capable of handling all of them. For example, watching a YouTube video may consume 500 Kbps of bandwidth, while a high-definition video may require ten times this bandwidth for acceptable quality. In order to provide acceptable service for video applications in general, wireless networks must be appropriately designed and configured to handle this full range of video applications. Providing high-quality video over wireless poses challenges above and beyond sheer bandwidth requirements. For starters, video traffic has very low tolerance for packet loss in the transport network from video server to video client. High or variable latency can also cause issues for streaming video applications. Wireless networks must take these factors into consideration during the design phase.

Video over wireless becomes even more challenging in high-density, high-usage scenarios such as classrooms or training rooms, where dozens of users may be simultaneously accessing a single video source. Worst-case scenarios must be considered when designing wireless networks that will be used for such applications. The remainder of this paper is organized as follows: the overall system architecture of wireless microcasting and the wireless requirements are described in Section II. Existing video compression techniques and their impact on video quality and throughput are discussed in Section III. In Section IV, we present several recent developments in video compression, HD video, and computer vision that might be employed in the wireless network. Section V explains how different video encoders can be used in a wireless broadcasting network. Section VI describes how cost optimization for wireless networking, i.e. automated live video broadcasting, can be done. Section VII provides a summary of related work, and Section VIII concludes the paper.

  II. Wireless Microcasting or Networking.

    An automated multi-camera system for high-definition video broadcasting of micro-events is referred to as wireless microcasting. Currently, the wireless microcasting system comprises video cameras connected through optical cables, and future wireless solutions need to meet several requirements to replace these cables. Decisions about the videos are made in real time: the data is sent from the data center to the broadcasting cameras and then through the distribution networks. Note that, to enable smooth handover from one camera to the next, the frame sequences of the broadcasting cameras need to be synchronized in time using a so-called generator lock (genlock) signal [1]. A microcasting testbed has been deployed at a field hockey pitch as shown in Fig. 1. It contains five broadcasting cameras (three at one side and two behind the goals) and eight tracking cameras. The video streams from the broadcasting cameras have a resolution of 1920×1080 pixels at 30 fps and three bytes per pixel (8-bit RGB), which gives a raw bit-rate of 1.5 Gb/s. The cameras are currently connected to the data center via fiber optic cables. The video streams from the tracking cameras have a resolution of 1920×1080 pixels at 30 fps and one byte per pixel, resulting in a bit-rate of 500 Mb/s (a numerical check of these bit-rates is sketched at the end of this section). Higher resolution for the tracking is possible and may be required by the data processing center. Currently, the bit-rate is limited by the 1 Gb/s speed of the Ethernet cables deployed on each pole that carries the tracking cameras. Microcasting also has various requirements, such as:

    1. Sufficient wireless signal

      When designing any wireless network, it is imperative that the wireless signal required by the applications to be run and the devices to be used is provided in all areas where users are expected to operate. A sufficient signal level is necessary to ensure the wireless connection can be maintained at or near its peak rate and function reliably. Low signal levels will result in intermittent network operation, causing packet loss and network delay, which will wreak havoc on video. Active site surveys enable network designers to place equipment exactly where it needs to be to deliver optimal performance.

    2. Sufficient wireless bandwidth

      A key to delivering high-quality video over wireless is sufficient bandwidth capacity of the wireless network and its ability to deliver high throughput in actual operation. Video operates at a constant bit rate, so it becomes a math problem to determine the overall capacity required of the network based on the maximum number of expected users and the bandwidth required by the highest-rate applications. Video is frequently the highest-bandwidth application expected on most networks. More radios mean more wireless bandwidth, which ultimately enables the highest density of video users.
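      Since this is framed as simple arithmetic, a minimal Python sketch of the capacity calculation may help; the user count, per-stream rate, and efficiency factor below are illustrative assumptions, not figures from this paper:

          # Aggregate capacity: users x per-stream rate, padded for MAC overhead.
          def required_capacity_mbps(num_users, per_stream_mbps, efficiency=0.6):
              # 'efficiency' models contention/protocol overhead (assumed value).
              return num_users * per_stream_mbps / efficiency

          # 30 simultaneous viewers of a 5 Mb/s HD stream:
          print(required_capacity_mbps(30, 5.0))  # 250.0 Mb/s of raw capacity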

      The 802.11 wireless standards support operation in the 2.4 GHz and 5 GHz unlicensed frequency bands. However, the 2.4 GHz band is limited in bandwidth, with only three operational channels, while the 5 GHz band supports 21 channels, or seven times the bandwidth. Multi-radio arrays such as the Xirrus Array [2] allow wireless networks to be designed for maximum capacity based on the users and applications.

    3. Quality of service (QoS)

      Beyond pure bandwidth, wireless networks need a means of appropriately prioritizing video traffic over other traffic on the network, as necessary to provide an acceptable quality of experience to users. Priority queuing can be used by the wireless infrastructure to assign higher priority levels to expedite real-time traffic such as video while giving lower priority to other applications.

    4. Multicast optimization

      Multicast optimization matters when many users are watching the same video, for example a live sporting event. With multicast video, a single video stream is sent from the source, and users desiring to watch the stream subscribe to it. This reduces bandwidth consumption on the network, since a separate stream does not need to be established and maintained between the video source and each individual station. This works well for wired networks; in wireless, however, multicast packets are not retransmitted when packet loss is experienced, a common occurrence on wireless links. If a multicast video packet is corrupted, all wireless users subscribed to that video will experience degraded quality. The wireless infrastructure therefore needs to minimize the impact of any corrupted or lost video packets in order to provide the best-quality video service.
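      To make the multicast saving concrete, a toy comparison (the 5 Mb/s stream rate is an assumed figure):

          # Unicast sends one copy per viewer; multicast sends one shared stream.
          stream_mbps, viewers = 5.0, 40
          print("unicast  :", viewers * stream_mbps, "Mb/s")  # 200.0 Mb/s
          print("multicast:", stream_mbps, "Mb/s")            # 5.0 Mb/s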

    5. High throughput and Low latency

      The camera network must provide high throughput to support HD video streaming. Although video sequences can often be compressed to a fraction of their original bit-rate without significant loss in perceptual quality to human vision, the object detection/tracking algorithms of computer vision may be affected by compression artifacts. Therefore, no compression, or only lossless compression, may be required for the video streams used by the automated decision making. The data transmitted from the data center to the broadcasting cameras should arrive with minimal delay. Some video streams are less interactive and can therefore tolerate a certain delay and jitter.
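      As a numerical check of the camera figures quoted in this section, a minimal Python sketch (the function name is ours) reproduces the raw bit-rates of the testbed:

          # Raw bit-rate of an uncompressed stream: width x height x fps x bytes/pixel x 8.
          def raw_bitrate_bps(width, height, fps, bytes_per_pixel):
              return width * height * fps * bytes_per_pixel * 8

          # Broadcasting cameras: 1920x1080, 30 fps, 3 bytes/pixel (8-bit RGB)
          print(raw_bitrate_bps(1920, 1080, 30, 3) / 1e9)  # ~1.49 Gb/s ("1.5 Gb/s")
          # Tracking cameras: same resolution, 1 byte/pixel
          print(raw_bitrate_bps(1920, 1080, 30, 1) / 1e6)  # ~497.7 Mb/s ("500 Mb/s")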

  III. Existing video compression techniques and their impact on video quality.

Video takes up a lot of space. Uncompressed footage from a camcorder takes up about 17 MB per second of video. Because it takes up so much space, video must be compressed before it is put on the web. Compressed just means that the information is packed into a smaller space. There are two kinds of compression: lossy and lossless.

Lossy compression means that the compressed file has less data in it than the original file. In some cases this translates to lower quality files, because information has been lost, hence the name. However, you can lose a relatively large amount of data before you start to notice a difference. Lossy compression makes up for the loss in quality by producing comparatively small files. For example, DVDs are compressed using the MPEG-2 format, which can make files 15 to 30 times smaller, but we still tend to perceive DVDs as having high-quality picture.

Lossless compression is exactly what it sounds like: compression where none of the information is lost. This is not nearly as useful, because files often end up nearly as large as they were before compression. This may seem pointless, as reducing the file size is the primary goal of compression. However, if file size is not an issue, using lossless compression will result in a perfect-quality picture. For example, a video editor transferring files from one computer to another using a hard drive might choose lossless compression to preserve quality while he or she is working.

Ideally, in order to ensure high quality and minimum latency, the video streams from the tracking and broadcasting cameras would be sent uncompressed. We can make use of various video encoders to judge to what extent the throughput requirement can be reduced.

The broadcasting and tracking cameras both transmit video streams, but their quality requirements differ. 1) For the streams from the broadcasting cameras, a moderate loss in quality is acceptable, since the end viewers are humans with imperfect vision. Modern video/image coding standards provide high compression efficiency: the bit-rate requirements of an uncompressed sequence can be significantly reduced at the expense of a minor loss in quality, and HD video can typically be losslessly compressed to a third of the original bit-rate. 2) For the streams from the tracking cameras, lossy compression might not be desirable, since even a small loss in quality may affect the performance of the video analysis algorithms used for action tracking. Compression also introduces latency in the camera steering/handover control signals.

  IV. Recent developments in video compression and computer vision.

  1. Video and Audio Compression.

    Video and audio files are very large. Unless we develop and maintain very high-bandwidth networks (gigabytes per second or more), we have to compress the data.

  2. Video Compression for the Broadcasting Cameras.

    We can make use of different video encoders in order to estimate the efficiency of video compression; two such encoders are H.264/AVC and Motion JPEG 2000. H.264/AVC [3] is the most popular video coding standard for wireless video transmission. It provides improved error resilience and high compression, but it has some drawbacks over lossy wireless links: 1) the encoder employs motion compensation, which leads to the propagation of transmission errors through all predicted frames; 2) there are also challenges for the efficient use of radio resources, since the encoder produces bursty video traffic.

    MPEG is the "Moving Picture Experts Group", working under the joint direction of the International Standards Organization (ISO) and the International Electro-Technical Commission (IEC). This group works on standards for the coding of moving pictures and associated audio.

    The various encoders that can be used for wireless microcasting and networking are discussed in detail in Section V.

  3. Video Compression for the Tracking Cameras

    PSNR (peak signal-to-noise ratio) levels higher than those needed for human vision may be required at the data center by the computer vision algorithms used for sequence analysis, so the throughput provided for the tracking cameras must be high enough to achieve those PSNR levels.

    Alternatively, we can study how the computer vision algorithms can be designed to tolerate the compression artifacts that may affect camera steering and handover. Different radio transmission techniques can also be used to support the compression.

  4. Video Compression for the Smart Cameras

Although there are many definitions of smart cameras offered by the media, camera manufacturers, and developers, no binding definition exists; in a field where terms are often defined by their predominant usage, we follow the term's most common usage. In the book "Smart Cameras", a smart camera is defined as a vision system which, in addition to image capture circuitry, is capable of extracting application-specific information from the captured images, along with generating event descriptions or making decisions that are used in an intelligent and automated system.

A smart camera or "intelligent camera" is a self-contained, standalone vision system with a built-in image sensor in the housing of an industrial video camera. Smart cameras can in general be used for the same kinds of applications where more complex vision systems are used.

Smart cameras are cameras capable of on-board video processing for scene analysis and metadata extraction. Smart tracking cameras could be used to extract relevant information from uncompressed video streams by means of background subtraction, blob forming, and object tracking algorithms. Only the important portions of the video are transmitted to the data center, significantly reducing the throughput requirements for the tracking cameras. Furthermore, video processing and action tracking might be performed on smart broadcasting cameras, thus eliminating the need for dedicated tracking cameras. A prototype camera that computes and transmits only the tracking information can enable real-time tracking of objects and persons.
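As a rough illustration of such an on-camera pipeline, the sketch below uses OpenCV's stock background subtractor; the camera source and the blob-area threshold are assumptions, and this is not the testbed's actual algorithm:

    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2()  # background model
    cap = cv2.VideoCapture(0)                          # assumed camera source

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                 # background subtraction
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Blob forming: keep regions large enough to be player/ball candidates.
        blobs = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
        # Only 'blobs' (metadata), not raw pixels, would leave the camera.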

We have to analyze how current technologies in the 5 GHz spectrum can be used for the transmission of visually lossless video from the broadcasting and tracking cameras, and also evaluate advanced solutions in the 60 GHz band that can be applied to uncompressed or losslessly compressed HD video for the tracking cameras. Creating a Multi-Input Multi-Output (MIMO) channel over a single-radio 40 MHz bandwidth results in each station transmitting and receiving over two independent antennas. From ping tests, we also find that the average round-trip time is 11.2 ms. This throughput may be acceptable for a single HD broadcasting camera, while 802.11e prioritization may be used to guarantee a low-latency (and low-bandwidth) transmission for the steering data from the data center to the broadcasting camera. However, the throughput may not be sufficient for a set of HD broadcasting cameras. In fact, deploying multiple broadcasting cameras implies sharing radio resources and thus raises scalability issues: the limited capacity of the PHY technology and the continuous streaming required by each camera collide with the throughput reduction due to channel contention, so the solution does not efficiently scale to a high number of cameras. We thus expect that in 802.11n wireless microcasting only a subset of HD broadcasting cameras may be active at any time.

To increase the available throughput and to utilize it more efficiently, we will consider link layer optimization. The 802.11 family of standards, like many other wireless technologies, is oblivious to the structure of video streams: it lacks proper mechanisms to give different treatment to data packets belonging to different parts of a video stream. An uncompressed video is represented by a stream of bytes, each corresponding to the value of a given pixel. Clearly, the most significant bit (MSB) of each of these bytes has greater visual importance than the least significant bit (LSB), since an error in the LSB results in only a minor change in the pixel's value. A larger share of the available radio resources should be allocated to the important parts of the stream (e.g. by using more robust channel codes and/or modulation schemes for those parts). Such resource optimization at the link layer may help to achieve better video quality compared to a general-purpose wireless technology, given the same available bit rate. The Wireless Home Digital Interface (WHDI) [3] is an example of a standard for wireless HD video transmission that employs such principles. WHDI operates in the 5 GHz band and supports data rates of up to 3 Gb/s, but only over short distances of up to 30 m.
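A toy illustration of this bit-plane idea (ours, not WHDI's actual scheme): split each pixel byte so that the radio can give the visually important half a more robust channel code or modulation.

    # Split pixel bytes into high and low nibbles for unequal error protection.
    def split_planes(pixel_bytes):
        msb = bytes(b & 0xF0 for b in pixel_bytes)  # high bits: protect strongly
        lsb = bytes(b & 0x0F for b in pixel_bytes)  # low bits: best effort
        return msb, lsb

    msb, lsb = split_planes(b"\x9c\x42\xff")
    # An error in a low nibble shifts a pixel by at most 15 levels,
    # while an error in a high nibble can shift it by up to 240 levels.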

The data center may be a hundred or more meters away from the wireless cameras, so our goal is to investigate whether WiGig and similar 60 GHz technologies can provide the needed throughput and latency for tracking cameras over outdoor line-of-sight links. WiGig is primarily used indoors, so we need to determine how it performs outdoors. This can be done with the help of a preliminary link budget analysis for the 60 GHz spectrum.

60 GHz Link Budget Analysis: A link budget is an essential tool to estimate the system capacity and the tradeoff between throughput and BER, based on the available bandwidth and the signal-to-noise ratio (SNR). It is relevant for estimating the BER at a given distance and output power, or for determining the required output power or maximum distance for a target BER. Four channels with center frequencies defined at 58.32, 60.48, 62.64, and 64.8 GHz are available, which allows data to be sent simultaneously from four different cameras. In the case of WiGig, the channel bandwidth is B = 2.16 GHz per channel. Directional beams allow for better frequency reuse, which is desirable when many cameras stream simultaneously to the data center. There is no strict directionality requirement for the steering control data from the data center: these signals may be wirelessly broadcast to the set of broadcasting cameras, which can then decide what piece of data is directed to them. We are thus interested in understanding what antenna gains are needed to compensate for the losses.

For a target SNR of 10 dB at the receiver, the Shannon capacity is equal to:

C = B log2(1 + SNR) = 7.4 Gb/s. (1)

We can then express the SNR as:

SNR = EIRP + G_RX - PL(d) - P_N - M, (2)

where EIRP is the Equivalent Isotropically Radiated Power, G_RX is the receiver antenna gain, PL(d) is the path loss at distance d, P_N is the noise power at the receiver, and M is the link margin. According to the 802.11ad standard draft, the maximum EIRP is 57 dBm in Europe and 40 dBm in the USA. While the directionality of the wireless cameras is mostly determined by the spatial reuse required by the system, the antenna gain at the data center receiver depends on the radio propagation characteristics of millimeter waves [6]. There should be no obstruction between the transmitter and receiver (and thus no shadowing), and a good link margin can absorb various other effects. For example, radio waves in this band are usually strongly attenuated by the atmosphere and the particles contained in it; in particular, at frequencies around 60 GHz, the radio waves are strongly attenuated by molecular oxygen. Attenuation due to oxygen molecules (causing an extra attenuation of up to 15 dB/km) and rain (which can also be of some importance in the mm-wave band) may therefore be added to the path loss. However, for the specific case of wireless microcasting, where we expect ranges of less than 200 m, oxygen and rain attenuation can mostly be neglected [7]. Some interference may also occur between independent transmitting cameras, which may reduce the SNR at the receiver. To take these effects into account, we consider a link margin of M = 10 dB.

Regarding the noise power P_N, if the main source of noise is thermal noise, it can be calculated as:

P_N = 10 log10(kTB) + NF = -174 dBm/Hz + 10 log10(B) + NF = -74 dBm,

where NF is the noise figure, which we take to be 6 dB, and kT is the noise power spectral density, i.e., the product of the Boltzmann constant and the temperature. Since we expect an unobstructed LOS between the antenna and the receiving unit, under the hypothesis that the transmit and receive antennas have the same physical orientation to match the polarization, the signal follows the Friis equation:

PL_0(d) = (4πd/λ)^2

up to a breakpoint distance d_0, where λ = 5 mm (at 60 GHz) is the wavelength.

The path loss is a function of λ, and as a result the higher frequency of WiGig causes a higher attenuation with respect to 2.4 and 5 GHz Wi-Fi. Assuming a breakpoint distance of d_0 = 10 m [23],

PL_0(d_0 = 10 m) = 20 log10(4π · 10 / λ) = 88 dB.

Above 10 meters, the outdoor channel is close to the free-space loss channel, with a path loss exponent reported to be up to n = 2.5 [22]. Then, for d > d_0 = 10 m, the path loss can be expressed as:

PL(d) = PL_0(d_0) + 10n log10(d) - 10n log10(d_0).

If we consider a target distance of d = 150 m to guarantee enough coverage range in the stadium (that is, the maximum distance between a tracking camera and the data center) at the target SNR, we need to add an attenuation of 25 log10(15) ≈ 30 dB.

Thus:

PL(150 m) = 88 + 30 = 118 dB.

From equation (2), a receiver antenna gain of G_RX = 10 - 40 + 118 - 74 + 10 = 24 dBi is needed in the USA, while in Europe G_RX = 10 - 57 + 118 - 74 + 10 = 7 dBi. Concluding, WiGig can potentially work over outdoor links in wireless microcasting with directional antennas. We aim to further investigate the accuracy of the path loss model and to verify with experimental hardware whether a free-space path loss model (n = 2) can also be applied in our deployment, which would translate into a path loss PL(150 m) = 111 dB. We are also interested in measuring the cross-interference at the data center among multiple WiGig signals with different transmit antenna gains.
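The chain of numbers above can be reproduced end to end. The Python sketch below simply evaluates equations (1) and (2) with the stated assumptions (B = 2.16 GHz, NF = 6 dB, M = 10 dB, n = 2.5); small differences from the rounded figures in the text are expected:

    import math

    B, SNR_dB, NF, M = 2.16e9, 10.0, 6.0, 10.0   # bandwidth, target SNR, noise figure, margin

    # (1) Shannon capacity at the target SNR
    C = B * math.log2(1 + 10 ** (SNR_dB / 10))
    print(C / 1e9)                               # ~7.47 Gb/s

    # Noise power: thermal floor + bandwidth + noise figure
    P_N = -174 + 10 * math.log10(B) + NF         # ~ -74.3 dBm

    # Path loss: Friis up to d0 = 10 m, then exponent n = 2.5 out to d = 150 m
    lam, d0, n, d = 3e8 / 60e9, 10.0, 2.5, 150.0
    PL0 = 20 * math.log10(4 * math.pi * d0 / lam)   # ~88 dB
    PL = PL0 + 10 * n * math.log10(d / d0)          # ~117 dB

    # (2) solved for the receiver antenna gain
    for region, EIRP in (("USA", 40.0), ("Europe", 57.0)):
        print(region, round(SNR_dB - EIRP + PL + P_N + M, 1), "dBi")
    # ~23 dBi and ~6 dBi (24 and 7 dBi with the rounded values used in the text)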

  V. Video encoders in the wireless broadcasting network.

    1. JPEG

      For single-frame image compression, the industry standard with the greatest acceptance is JPEG (Joint Photographic Experts Group). JPEG consists of a minimum implementation (called a baseline system) which all implementations are required to support, and various extensions for specific applications. JPEG has received wide acceptance, largely driven by the proliferation of image manipulation software which often includes the JPEG compression algorithm in software form as part of a graphics illustration or video editing package. JPEG compressor chips and PC boards are also available to greatly speed up the compression/decompression operation.

      JPEG, like all compression algorithms, involves eliminating redundant data. The amount of loss is determined by the compression ratio, typically about 16:1 with no visible degradation. If more compression is needed and noticeable degradation can be tolerated, as in downloading several images over a communications link where the recipient only needs to identify them for selection purposes, compression of up to 100:1 may be employed.

      Because the human eye is less sensitive to high-frequency color information, JPEG calls for coding the chrominance (color) information at a reduced resolution compared to the luminance (brightness) information. In the pixel format, there is usually a large amount of low-spatial-frequency information and relatively little high-frequency information. The image information is then transformed from the pixel (spatial) domain to the frequency domain by a discrete cosine transform (DCT), a DSP algorithm similar to the fast Fourier transform (FFT). This produces two-dimensional spatial-frequency components, many of which will be zero and are discarded; near-zero components are truncated to zero and need not be sent on either. This quantization step is where most of the actual compression takes place. The remaining components are then entropy coded by the Huffman tree method, which assigns short codes to frequent symbols and longer codes to infrequent symbols. This results in additional compression.
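      A minimal sketch of the transform-and-quantize core of this pipeline on a single 8x8 block; scipy's dct supplies the DCT, and the uniform quantization step of 16 is an assumption (real JPEG uses per-frequency quantization tables):

          import numpy as np
          from scipy.fftpack import dct

          rng = np.random.default_rng(0)
          block = rng.integers(0, 256, (8, 8)).astype(float) - 128  # one 8x8 block

          # Pixel (spatial) domain -> frequency domain via a 2-D DCT
          coeffs = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

          # Quantization: divide and round; near-zero high frequencies become 0.
          quantized = np.round(coeffs / 16)
          print(np.count_nonzero(quantized), "of 64 coefficients survive")
          # The surviving coefficients would then be Huffman (entropy) coded.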

      Note: JPEG does not address the question of audio tracks or audio/video synchronization.

    2. MPEG

    As discussed earlier, MPEG (the Moving Picture Experts Group) works under the joint direction of the International Standards Organization (ISO) and the International Electro-Technical Commission (IEC) on standards for the coding of moving pictures and associated audio.

    MPEG involves fully encoding only key frames through the JPEG algorithm (described above) and estimating the motion changes between these key frames. Since minimal information is sent between every four or five frames, a significant reduction in the bits required to describe the image results; consequently, compression ratios above 100:1 are common. The scheme is asymmetric: the MPEG encoder is very complex and carries a heavy computational load for motion estimation, while decoding is much simpler and can be done by today's desktop CPUs or with low-cost decoder chips.

    The MPEG encoder may choose to make a prediction about an image and then transform and encode the difference between the prediction and the image. The prediction accounts for movement within an image by using motion estimation. Because a given image's prediction may be based on future images as well as past ones, the encoder must reorder images to put reference images before the predicted ones. The decoder puts the images back into display sequence.
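    This reordering can be shown with a toy group of pictures (the IBBP pattern below is illustrative, not prescribed by MPEG):

        # Display order: B-frames reference the *next* I/P frame, so the encoder
        # transmits that reference first; the decoder restores display order.
        display_order = ["I0", "B1", "B2", "P3", "B4", "B5", "P6"]
        encode_order  = ["I0", "P3", "B1", "B2", "P6", "B4", "B5"]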

    One of the advantages of Motion JPEG is that each image in a video sequence can have the same guaranteed quality that is determined by the compression level chosen for the network camera or video encoder. The higher the compression level, the lower the file size and image quality. In some situations, such as in low light or when a scene becomes complex, the image file size may become quite large and use more bandwidth and storage space.

    3. H.264

    A JPEG 2000 image can be truncated at any point to obtain an image with a lower signal-to-noise ratio. The most important shortcomings of streaming MJPEG 2000 video are: i) lower compression efficiency compared to H.264/AVC, and ii) the high computational complexity of encoding/decoding.

    We encode the video sequence with H.264/AVC and MJPEG 2000 using different compression ratios. For each resulting bit-rate, we measure the quality of the compressed sequence in terms of the average peak signal-to-noise ratio (PSNR). The PSNRs of the sequence for different compression ratios are shown in Fig. 4. With lossless compression, the PSNR is infinite, since the decompressed sequence is identical to the original sequence. In this case, the bit-rate of the compressed sequence is approximately one-third of the original bit-rate for both encoders, i.e. about 250 Mb/s for the specific sequence under test. With lossy compression, bit-rates can be further decreased at the expense of video quality. For PSNRs above 40 dB, videos are often considered to be visually lossless, i.e. a typical viewer is not able to detect the degradation in quality. According to some Mean Opinion Score (MOS) conversions, PSNRs above 37 dB are considered excellent (impairments are imperceptible) and PSNRs between 31 and 37 dB are good (impairments are perceptible, but not annoying) [13]. As shown in Fig. 4, H.264 and MJPEG 2000 are able to compress the test sequence to close to 1% and 5% of its original bit-rate, respectively, while keeping the PSNR at the visually lossless level, corresponding to 7.5 and 37.5 Mb/s. H.264 provides significantly better compression efficiency: for example, at a PSNR of 50 dB, the sequence is compressed to around 15% of its original bit-rate with H.264, compared to around 25% with MJPEG 2000. However, on lossy wireless links, the superior error resilience features of MJPEG 2000 may be more beneficial than the compression efficiency of H.264.
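    The PSNR used in these comparisons is computed from the mean squared error between the original and decoded frames; a minimal sketch:

        import numpy as np

        def psnr_db(original, decoded, peak=255.0):
            mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
            if mse == 0:
                return float("inf")  # lossless: decoded frame identical to original
            return 10 * np.log10(peak ** 2 / mse)

    By this definition, values above roughly 40 dB land in the visually lossless region described above.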

  VI. Cost optimization for wireless networking: automated live video broadcasting.

    In the past few years, the field of wireless networks has become a key area of research. The field of wireless sensor networks is receiving a lot of attention and is evolving very fast; it is difficult to provide a comprehensive survey of a field which is not fully mature yet. The wireless solution must also be cost-efficient compared to the wired alternatives (e.g. digging trenches for laying optical cables). The cost of the wireless hardware is not necessarily the main deciding factor, but proprietary non-standardized technology that would incur high licensing or maintenance costs should be avoided; standardized solutions, for example, enable a multi-vendor strategy. However, one of the most important limitations of the transceivers used in sensor nodes is their idle-mode energy consumption. Transceivers spend a considerable amount of energy when their radio is in idle mode, i.e., neither transmitting nor receiving, and sometimes this energy is as high as the energy spent on transmission or reception (see Shih et al. (2001)). As a result, when transmissions and receptions are not perfectly synchronized, the nodes continue to spend energy on idle listening. This is especially true for a multi-hop network, where a relay node does not know beforehand when it is going to receive the next packet. The simplistic model of associating a fixed amount of energy with each packet transmission, without accounting for idle-mode energy, is an idealistic scenario.
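    A back-of-the-envelope model makes the idle-listening point concrete; all power and time figures below are assumed, purely for illustration:

        # Energy per duty cycle = P_tx*t_tx + P_rx*t_rx + P_idle*t_idle
        P_TX, P_RX, P_IDLE = 60e-3, 45e-3, 40e-3   # assumed radio powers [W]

        def cycle_energy_joules(t_tx, t_rx, t_idle):
            return P_TX * t_tx + P_RX * t_rx + P_IDLE * t_idle

        # Poorly synchronized relay: 1 ms of traffic vs 99 ms of idle listening
        print(cycle_energy_joules(0.5e-3, 0.5e-3, 99e-3))  # ~4.0e-3 J; idle dominates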

    An overview of some of the recent work on energy and cost optimization in wireless networks can be considered. The nodes are highly energy constrained, and energy efficiency is of prime importance at all levels. Different network design issues surface depending on the kind of application involved; we mainly restrict ourselves to applications of the data-gathering type. We have to keep track of two important aspects of such networks, namely routing and design optimizations. In the context of routing optimizations, we can consider some of the important papers on energy-efficient routing for maximizing the system lifetime. Several tools from network theory were used to tackle these optimization problems, and several optimization tools and techniques can be used in designing and dimensioning wireless networks for live video broadcasting.

    In the microcasting system, camera steering/handover algorithms analyze the position and movement of players and a ball (in the case of ball games) based on the streams from the tracking cameras, in order to derive targets for the broadcasting cameras. However, the target locations could also be derived not from the complex action on the field but from the behavior of the audience around the field: the tracking cameras could observe the audience to estimate its focus of attention using head pose estimation, gaze direction estimation, and similar techniques. For example, in [8] the authors propose an automatic pan control system that tracks the face direction of the audience. The potential benefits of this approach are: i) the number of tracking cameras, and thus of video streams, can be reduced, since the behavior of the audience is often homogeneous, and ii) the algorithm may be more tolerant of compression artifacts, thus allowing higher compression ratios, assuming that the tracking cameras are placed close to the audience. Adjacent cameras have partially overlapping views of the pitch, so some of the video streams are highly correlated, but joint encoding is not practical since it would require wireless communication between the cameras. Fortunately, distributed video coding (DVC) addresses scenarios where multiple correlated video sequences are separately encoded but jointly decoded (at the data center), thus not requiring any communication between the cameras. DVC is based on two major results from information theory, the Slepian-Wolf [9] and Wyner-Ziv [10] theorems, which suggest that the minimum rate to separately encode two correlated sources (X and Y) with an arbitrarily small probability of error is the same as the minimum rate for joint encoding, when joint decoding is performed and the difference is Gaussian distributed. Based on the two theorems, several algorithms for distributed video coding have been proposed; for example, the algorithm proposed in [11] has been adopted for the DVC video codecs developed in the context of the VISNET [12] and DISCOVER [13] projects. However, practical DVC algorithms are still in their infancy. We will explore the feasibility of DVC and propose needed improvements to the existing algorithms for microcasting scenarios. The potential benefits of DVC for microcasting are: i) reduction of the transmission rates by exploiting the correlation between camera views, ii) reduced encoding complexity in the resource-limited cameras at the expense of more complex joint decoding in the data center, and iii) improved resilience to channel errors, since DVC facilitates joint source-channel coding. An overview of wireless technology capable of streaming compressed and uncompressed high-definition video is provided in [14]. The feasibility of HD video transmission over short distances (up to a few meters) using ultrawideband (UWB) technology is analyzed in [15], [16]. A design of a 60 GHz transceiver chipset capable of streaming uncompressed 1080p/60 video at distances of up to ten meters is described in [17]. In [18], the authors present a 60 GHz system that supports uncompressed HD video with data rates of up to 3 Gb/s; the system includes error protection and concealment schemes that exploit the unequal error resilience properties of uncompressed video. A system based on IEEE 802.11ac that operates in the 5 GHz band with 80 MHz bandwidth and provides bit-rates above 1.5 Gb/s is presented in [19]; in this system, video is compressed with MJPEG 2000 and uses its advanced error resilience tools. In [20], the authors propose an error correction scheme for wireless video transmission that uses the large amount of spatial redundancy already present in uncompressed HD video to provide an extra layer of protection in addition to that provided by channel coding.

  VII. Related work.

    The system makes use of various video decoders and analysers in order to detect or trace the relative position of the players, and to decide which camera angle should be used so that the broadcast looks realistic. We should also investigate how different stationary cameras can be used to display the positions of the hockey players. In [21] the author describes the design of an automated, computer-driven sports broadcasting system that provides personalized automated broadcasts. Further research is needed for soccer games, since they require higher-level, sport-specific semantic information, such as a shot or a foul recognized from audio and commentary, in order to determine which camera angle to show. [22] describes a system that uses received-signal-strength data from multiple strategically placed sensor nodes to localize the game assets (e.g. ball, players) and automate the control of the broadcasting cameras. An overview of wireless technology capable of streaming compressed and uncompressed high-definition video is provided in [14]. We have discussed various cost optimization techniques, and still more techniques should be considered. In [20], the authors propose an error correction scheme for wireless video transmission that uses the large amount of spatial redundancy already present in uncompressed HD video to provide an extra layer of protection in addition to that provided by channel coding.

  VIII. Conclusion

In this paper we have discussed how different video encoders can be used to broadcast efficiently, and how wireless microcasting enables an automated wireless system for broadcasting live sports events at a lower cost compared to current streaming solutions. The use of different video encoders, video analysers, and computer vision techniques has also been discussed, along with the different cost optimization methods that were the main factors to be considered. We aim to further explore this interaction with real data from the hockey pitch testbed.

References

  [1] WirelessHD. http://www.wirelesshd.org/.

  [2] An Overview of Genlock. http://www.mivs.com/technical/appnotes/an005.htm.

  [3] WHDI: Wireless Home Digital Interface. http://www.whdi.org/.

  [4] IEEE 802.11 Working Group for Wireless LANs. http://www.ieee802.org/11/.

  [5] P. Smulders and L. Correia, "Characterisation of propagation in 60 GHz radio channels," Electronics and Communication Engineering Journal, pp. 73-80, 1997.

  [6] L. Wu, "Multi-view hockey tracking with trajectory smoothing and camera selection," Master's thesis, The University of British Columbia, 2008.

  [7] P. Smulders and L. Correia, "Characterisation of propagation in 60 GHz radio channels," Electronics and Communication Engineering Journal, pp. 73-80, 1997.

  [8] S. Daigo and S. Ozawa, "Automatic pan control system for broadcasting ball games based on audience's face direction," in Proc. ACM Int. Conf. Multimedia, New York, NY, USA, 2004.

  [9] D. Slepian and J. Wolf, "Noiseless coding of correlated information sources," IEEE Trans. Inform. Theory, vol. 19, no. 4, pp. 471-480, 1973.

  [10] A. Wyner, "Recent results in Shannon theory," IEEE Trans. Inform. Theory, vol. IT-20, no. 2, pp. 2-10, 1974.

  [11] A. Aaron and B. Girod, "Compression with side information using turbo codes," in Proc. IEEE Data Compression Conf. (DCC '02), Snowbird, USA, pp. 252-261, 2002.

  [12] VISNET I/II: Networked Audiovisual Media Technologies. http://www.visnet-noe.org/.

  [13] DISCOVER: Distributed Coding for Video Services. http://www.discoverdvc.org/.

  [14] G. Lawton, "Wireless HD video heats up," IEEE Computer, vol. 41, no. 12, pp. 18-20, 2008.

  [15] R. Ruby, Y. Liu, and J. Pan, "Evaluating video streaming over UWB wireless networks," in Proc. 4th ACM Workshop on Wireless Multimedia Networking and Performance Modeling (WMuNeP '08), Vancouver, Canada, 2008.

  [16] D. Porcino, B. van der Wal, and Y. Zhao, "HDTV over UWB: wireless video streaming trials and quality of service analysis," JPEG2000 technical article, Analog Devices, Inc., 2006.

  [17] J. M. Gilbert, C. H. Doan, S. Emami, and C. B. Shung, "A 4-Gbps uncompressed wireless HD A/V transceiver chipset," IEEE Micro, vol. 28, no. 2, pp. 56-64, 2008.

  [18] H. Singh, J. Oh, C. Kweon, X. Qin, H.-R. Shao, and C. Ngo, "A 60 GHz wireless network for enabling uncompressed video communication," IEEE Communications Magazine, vol. 46, no. 12, pp. 71-78, 2008.

  [19] M. Kurosaki, Y. Hirata, M. Matsuo, W. A. Syafei, B. Sai, Y. Kuroki, A. Miyazaki, and H. Ochi, "A study on wireless transmission system for full-spec HDTV," in Proc. 11th Int. Conf. Advanced Communication Technology (ICACT '09), Phoenix Park, Korea, 2009.

  [20] M. Manohara, R. Mudumbai, J. Gibson, and U. Madhow, "Error correction scheme for uncompressed HD video over wireless," in Proc. IEEE Int. Conf. Multimedia and Expo (ICME), 2009.

  [21] M. Chuang and P. Narasimhan, "Automated viewer-centric personalized sports broadcast," Procedia Engineering, vol. 2, no. 2, pp. 3397-3403, 2010.

  [22] H. Gandhi, M. Collins, M. Chuang, and P. Narasimhan, "Real-time tracking of game assets in American football for automated camera selection and motion capture," Procedia Engineering, vol. 2, no. 2, pp. 2667-2673, 2010.
