- Open Access
- Authors: Pratik Soygaonkar, Shilpa Paygude
- Paper ID: IJERTV3IS070528
- Volume & Issue: Volume 03, Issue 07 (July 2014)
- Published (First Online): 18-07-2014
- ISSN (Online): 2278-0181
- Publisher Name: IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Dynamic Texture Segmentation Using LBP and Optical Flow Technique
Pratik S. Soygaonkar1, Shilpa S. Paygude2
1,2Computer Department, Maharashtra Institute of Technology, Pune, India
Abstract – A texture in motion is known as a dynamic texture. A dynamic texture can be defined as a sequence of images of a moving scene that exhibits certain stationary properties in time. The spatial (appearance) and temporal (motion) features of a dynamic texture may differ from each other. Image segmentation is the process of partitioning a digital image into multiple regions, where each region is a group of connected pixels with similar properties. The goal of segmentation is to simplify and change the representation of an image into something more meaningful and easier to analyze. Segmentation of dynamic texture is more challenging, however, because a dynamic texture can change in shape and direction over time. In this paper, two methods based on appearance and motion are combined to achieve accurate segmentation: the local binary pattern (LBP) and Lucas-Kanade optical flow. LBP is used in both the spatial and temporal domains, while Lucas-Kanade optical flow, which gives motion information about each pixel, is used in the temporal domain. These two features are computed for every section of each frame and the corresponding histograms are obtained. The histograms are concatenated and compared against a suitable threshold to obtain the segmentation of the dynamic texture.
Keywords: Dynamic texture, descriptors, histogram, segmentation
1. INTRODUCTION
A large class of scenes commonly experienced in the real world exhibits characteristic motion with a certain form of regularity. Dynamic texture refers to image sequences of these motion patterns; a flock of flying birds, a water stream, and fluttering leaves are examples. Characterization of the visual processes of dynamic textures is of vital importance in computer vision, electronic entertainment, and content-based video coding, with a number of potential applications in recognition (automated surveillance and industrial monitoring), synthesis (animation and computer games), and segmentation (robot navigation and MPEG).
Segmentation is considered one of the basic problems in computer vision [2], [11], [12]. Compared with static texture, segmentation of dynamic texture is very challenging because of its unknown spatiotemporal extent, the stochastic nature of the motion fields, and the different moving particles. Segmenting a dynamic texture means separating the different groups of particles that exhibit different random motion.
One major limitation of existing dynamic texture segmentation techniques is their inability to characterize visual processes consisting of multiple dynamic textures, for example a flock of birds flying in front of a water fountain, highway traffic or pedestrians moving in opposite directions, or image sequences containing both wind-blown trees and fire. In such cases, existing dynamic texture segmentation techniques are inherently limited.
In this paper, a new method based on both appearance and motion information is introduced for the segmentation of dynamic textures. For the appearance of a dynamic texture, we use a local spatial texture descriptor to describe its spatial mode; for the motion, we use optical flow and a local temporal texture descriptor to represent the movement of objects, and we employ the Lucas-Kanade approach to organize the optical flow of a region. From the optical flow we mainly use information about the direction of motion of the pixels.
We employ both the appearance and motion modes for dynamic texture segmentation because dynamic textures may differ in their spatial feature (appearance) and/or their temporal feature (motion). By combining the spatial and temporal modes of a dynamic texture, we fuse the discriminative features of both appearance and motion for the robust segmentation of various dynamic textures.
2. METHODS AND PROCEDURES
The framework of the implementation followed in this paper is illustrated in Figure 1. A video is given as input; it is first pre-processed, and each frame is then split into a suitable number of equal sections. Two features are computed for every section: the local binary pattern (LBP) and the optical flow. Using these features, we perform the segmentation by agglomerative merging followed by pixel-wise classification.
Figure 1. Flowchart of the implementation.
2.2 FEATURE COMPUTATION
2.2.1 Local Binary Pattern
A local binary texture descriptor is computed to segment the dynamic texture from an input video. This descriptor acts as a spatial-texture descriptor when applied to the XY plane of the video, and as a temporal-texture descriptor when applied to the XT and YT planes. Because it is used in both the spatial and temporal domains, it is called a spatiotemporal descriptor. The XT plane captures the change in pixels row-wise over the temporal domain, and the YT plane captures the change in pixels column-wise over the temporal domain. The LBP is computed in all three planes and is therefore called LBP-TOP (LBP on three orthogonal planes) [7]. The computation of LBP is very similar for the three planes considered (XY, XT, YT) and is as follows:
LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^p \qquad (1)

s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \qquad (2)

where g_c is the gray value of the center pixel and the g_p correspond to the gray values of the P neighboring pixels, sampled on a rectangle of size (2L_x + 1) × (2L_y + 1) (L_x > 0, L_y > 0). The three histograms obtained from the three planes are concatenated to obtain the histogram for LBP-TOP.

Figure 2 is similar to the one given in [6]. Equation (1) is taken from [4]; the number of bins used for the histogram depends on further experiments. This is based on the concept of uniform patterns, as in [4], which can be defined as follows: a local binary pattern is called uniform if its measure of uniformity, the number of bitwise transitions in the circular pattern, is at most 2. For example, the patterns 00000000 (0 transitions), 01110000 (2 transitions) and 11001111 (2 transitions) are uniform, whereas the patterns 11001001 (4 transitions) and 01010011 (6 transitions) are non-uniform. Ojala et al. observed in their experiments with texture images that uniform patterns account for a bit less than 90% of all patterns when using the (8, 1) neighborhood, where 8 is the number of surrounding pixels and 1 is the radius.

Figure 2. Computation of LBP-TOP for a DT. (a) Sequence of a DT. (b) Three orthogonal planes of the given DT. (c) Vertex coordinates of the three orthogonal planes. (d) Computation of LBP of a pixel. (e) Computation of sub-histograms for LBP-TOP.
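As an illustration of equations (1)-(2), the following is a minimal NumPy sketch of the basic LBP_{8,1} computation on a single gray-scale frame; the fixed 8-neighbor layout, the function names, and the uniformity check are our own illustrative choices, not the authors' code:

```python
import numpy as np

def lbp_image(frame):
    """Basic LBP_{8,1}: threshold the 8 neighbors of every interior pixel
    against the center pixel (eq. 1-2) and pack the bits into a code."""
    f = frame.astype(np.int32)
    c = f[1:-1, 1:-1]                                  # center pixels g_c
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),     # 8 neighbors in a
               (1, 1), (1, 0), (1, -1), (0, -1)]       # fixed circular order
    codes = np.zeros_like(c)
    for p, (dy, dx) in enumerate(offsets):
        g_p = f[1 + dy:f.shape[0] - 1 + dy, 1 + dx:f.shape[1] - 1 + dx]
        codes += ((g_p - c) >= 0).astype(np.int32) << p  # s(g_p - g_c) * 2^p
    return codes

def is_uniform(code, bits=8):
    """True if the circular bit pattern has at most 2 transitions."""
    rotated = ((code << 1) | (code >> (bits - 1))) & ((1 << bits) - 1)
    return bin(rotated ^ code).count("1") <= 2
```

For LBP-TOP, the same computation is applied to the XY slices and to the XT and YT slices of the video volume, and the three resulting sub-histograms are concatenated.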
2.2.2 Optical Flow
Optical flow, as described in [14], is the distribution of apparent velocities of movement of brightness patterns in an image. Optical flow can arise from the relative motion of objects as well as from that of the viewer. This implies that optical flow can give important information about the spatial (appearance) arrangement of the objects viewed, and also about the rate of change of this arrangement. Discontinuities in the optical flow can help in segmenting images into regions that correspond to different objects.
To compute the optical flow between two images, the optical flow constraint equation must be solved. Most methods for computing optical flow use this general constraint equation:

I_x u + I_y v + I_t = 0 \qquad (3)

The values represented in the above equation are as follows:
- I_x, I_y and I_t are the spatiotemporal image brightness derivatives,
- u is the horizontal optical flow,
- v is the vertical optical flow.
There are several methods available to solve for u and v from the constraint equation, of which two well-known ones are:
- the Horn-Schunck method,
- the Lucas-Kanade method.
In this paper the Lucas-Kanade method has been used to derive the optical flow of the pixels; even though many other methods are available for solving the constraint equation, the Lucas-Kanade method is still used successfully today.
Lucas-Kanade method: The Lucas-Kanade method assumes that the displacement of the image contents between two nearby instants (frames) is small and approximately constant within a neighborhood of the point under consideration. Thus the optical flow equation can be assumed to hold for all pixels within a window centered at that point.
Namely, the local image flow (velocity) vector (u, v) must satisfy the following equations:

I_x(q_1) u + I_y(q_1) v = -I_t(q_1)
I_x(q_2) u + I_y(q_2) v = -I_t(q_2)
\vdots
I_x(q_n) u + I_y(q_n) v = -I_t(q_n)

where q_1, q_2, ..., q_n are the pixels inside the window, and I_x(q_i), I_y(q_i), I_t(q_i) are the partial derivatives of the image I with respect to position x, y and time t, evaluated at the point q_i and at the current time.
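Since this system is over-determined (n equations, two unknowns), the Lucas-Kanade method solves it in the least-squares sense. A minimal NumPy sketch under that formulation is given below; the finite-difference derivative choices, window size, and function name are our own illustrative assumptions (in practice a library routine such as OpenCV's cv2.calcOpticalFlowPyrLK is typically used):

```python
import numpy as np

def lucas_kanade_at(frame1, frame2, y, x, w=7):
    """Least-squares solution of I_x u + I_y v = -I_t over a
    (2w+1) x (2w+1) window centered at pixel (y, x)."""
    f1 = frame1.astype(np.float64)
    f2 = frame2.astype(np.float64)
    Ix = np.gradient(f1, axis=1)          # horizontal brightness derivative
    Iy = np.gradient(f1, axis=0)          # vertical brightness derivative
    It = f2 - f1                          # temporal brightness derivative
    win = (slice(y - w, y + w + 1), slice(x - w, x + w + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)   # n x 2 system
    b = -It[win].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)             # min ||A[u,v]-b||
    return u, v
```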
2.3 Steps for Segmentation
As shown in Figure 1, each pre-processed input video frame is split into an equal number of sections; here we split the entire frame into 17 × 17 equal sections. The two features, LBP and optical flow, are calculated for each of these sections independently. With the help of these features the dynamic texture segmentation is done. The segmentation procedure consists of two distinct steps: agglomerative merging and pixel-wise classification.
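To make the per-section feature computation concrete, here is a minimal sketch that splits a frame into 17 × 17 blocks and builds a normalized LBP histogram per block; the block-slicing scheme, bin count, and names are our own illustrative choices, and the flow-direction sub-histogram that the method concatenates is only indicated by a comment:

```python
import numpy as np

def section_histograms(frame, n=17, bins=16):
    """Split one frame into n x n sections and return a normalized
    LBP-code histogram for every section (flow histogram omitted)."""
    codes = lbp_image(frame)                    # LBP sketch shown earlier
    H, W = codes.shape
    ys = np.linspace(0, H, n + 1, dtype=int)    # section boundaries
    xs = np.linspace(0, W, n + 1, dtype=int)
    feats = {}
    for i in range(n):
        for j in range(n):
            block = codes[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256), density=True)
            # the temporal LBP (XT, YT) and flow-direction histograms of the
            # same block would be concatenated to `hist` at this point
            feats[(i, j)] = hist
    return feats
```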
2.3.1 Agglomerative Merging
In the splitting stage the frame is split into sections and a cumulative value is calculated for each section from the histograms of its roughly uniform texture patches. Sections whose cumulative values are the same or nearly the same are kept together, while those with a completely different set of values are taken as another set.
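One way to realize this merging is a greedy agglomeration of neighboring sections whose histograms are sufficiently similar; the following sketch uses a chi-square histogram distance and a merge threshold tau, both of which are our own illustrative choices rather than the paper's exact cumulative-value criterion:

```python
import numpy as np

def chi2(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def agglomerative_merge(feats, tau=0.25):
    """Greedily merge 4-connected sections with similar histograms;
    returns one region label per section index (i, j)."""
    labels = {k: idx for idx, k in enumerate(feats)}   # every section starts alone
    changed = True
    while changed:
        changed = False
        for (i, j), h in feats.items():
            for nb in ((i + 1, j), (i, j + 1)):        # right / down neighbors
                if nb in feats and labels[nb] != labels[(i, j)] \
                        and chi2(h, feats[nb]) < tau:
                    old = labels[nb]
                    for k in labels:                   # relabel merged region
                        if labels[k] == old:
                            labels[k] = labels[(i, j)]
                    changed = True
    return labels
```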
2.3.2 Pixel-wise Classification
To better localize the boundaries of the roughly segmented regions of a video, pixel-wise classification is performed on the set of boundary pixels of the dynamic texture. The local binary pattern (LBP) and the optical flow are computed over an 8 × 8 neighborhood for each pixel in the boundary pixel set, and the cumulative value is calculated and compared with the threshold in the same way as before.
The pixels are then classified as dynamic texture or not with the help of the computed texture features. This classification is needed to refine the boundary; therefore the boundary pixels that were classified as dynamic texture are further analyzed to refine the boundary.
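A minimal sketch of this boundary refinement step is shown below; the chi-square comparison against a per-region model histogram reuses the chi2 helper from the merging sketch, and the window handling and names are again our own assumptions:

```python
import numpy as np

def classify_boundary_pixels(codes, boundary, region_hist, thresh, k=4):
    """Keep a boundary pixel in the dynamic-texture region if the LBP
    histogram of its 8 x 8 neighborhood matches the region model."""
    kept = []
    bins = len(region_hist)
    for (y, x) in boundary:
        patch = codes[max(y - k, 0):y + k, max(x - k, 0):x + k]  # 8 x 8 window
        hist, _ = np.histogram(patch, bins=bins, range=(0, 256), density=True)
        if chi2(hist, region_hist) < thresh:                     # see sketch above
            kept.append((y, x))
    return kept
```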
3. RESULTS
The implementation is divided into three phases: pre-processing, feature computation, and segmentation. We perform segmentation mainly on the well-known dynamic texture dataset DynTex [3], limiting the tests to video sequences shot by a stationary camera. The experiments were performed on a large number of sequences, of which the results for some are shown here: segmentation is performed on sequence 648ea10, taken from the DynTex database, and on sequence vidf1_33_000.y from [16].
The quality of the frames is essential before performing any video processing operation. In the pre-processing phase the improvement of frame quality is therefore taken into consideration, and the video is tested with three types of noise: salt-and-pepper noise, Gaussian noise, and periodic noise. These noise types are the most likely to degrade the quality of video frames. Each noise type is eliminated using various filtering techniques, from which the best-suited filters are chosen.
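As a concrete illustration of this pre-processing, the sketch below applies the standard filter pairings with OpenCV; the kernel sizes are our own illustrative choices, and periodic noise, which is usually removed with a frequency-domain notch filter, is left as a comment:

```python
import cv2

def preprocess(frame):
    """Illustrative denoising of one 8-bit gray-scale frame."""
    f = cv2.medianBlur(frame, 3)         # median filter: salt-and-pepper noise
    f = cv2.GaussianBlur(f, (3, 3), 0)   # Gaussian smoothing: Gaussian noise
    # periodic noise would be suppressed with a notch filter applied
    # to the frame's Fourier spectrum (not shown)
    return f
```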
The pre-processed video is then passed to the feature computation phase, and the segmentation is done with the help of these features. Figures 3 and 4 show segmentation results on various video sequences.
Figure 3. Segmentation result of DynTex sequence 648ea10.
Figure 4. Segmentation results of sequence vidf1_33_000.y [16].
4. CONCLUSION
Segmentation of dynamic texture is performed by a new method which uses spatial and temporal descriptors and the optical flow of pixels. For the spatial mode, a texture feature is employed to characterize each region of a frame of the dynamic texture, namely the histogram of LBP features in the XY plane. For the temporal mode, we used the optical flow and the histograms of LBP and WLD features in the XT and YT planes of the dynamic texture to describe its motion field.
In the future, we aim to use this method to measure highway traffic density at various times of day, and also to apply the proposed algorithm in the medical field, for example to sonography.
ACKNOWLEDGEMENT
The authors acknowledge the valuable guidance and support received from Maharashtra Institute of Technology, Pune, and thank the institute for providing the software and all other facilities needed for this work.
REFERENCES
[1] G. Doretto, A. Chiuso, Y. N. Wu, and S. Soatto, "Dynamic textures," Int. J. Comput. Vis., vol. 51, no. 2, pp. 91-109, 2003.
[2] G. Doretto, D. Cremers, P. Favaro, and S. Soatto, "Dynamic texture segmentation," in Proc. IEEE Int. Conf. Comput. Vis., Oct. 2003, pp. 1236-1242.
[3] D. Chetverikov and R. Péteri, "A brief survey of dynamic texture description and recognition," in Proc. 4th Int. Conf. Comput. Recognit. Syst., 2005, pp. 17-26.
[4] G. Zhao and M. Pietikäinen, "Dynamic texture recognition using local binary patterns with an application to facial expressions," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 6, pp. 915-928, Jun. 2007.
[5] J. Chen, S. Shan, C. He, G. Zhao, M. Pietikäinen, X. Chen, and W. Gao, "WLD: A robust local image descriptor," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp. 1705-1720, Sep. 2010.
[6] J. Chen, G. Zhao, M. Salo, E. Rahtu, and M. Pietikäinen, "Automatic dynamic texture segmentation using local descriptors and optical flow," IEEE Trans. Image Process., vol. 22, no. 1, pp. 326-339, 2013.
[7] J. Chen, G. Zhao, and M. Pietikäinen, "An improved local descriptor and threshold learning for unsupervised dynamic texture segmentation," in Proc. 12th IEEE Int. Conf. Comput. Vis. Workshops, Oct. 2009, pp. 460-467.
[8] R. Chaudhry, A. Ravichandran, G. Hager, and R. Vidal, "Histograms of oriented optical flow and Binet-Cauchy kernels on nonlinear dynamical systems for the recognition of human actions," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2009, pp. 1932-1939.
[9] R. Vidal and A. Ravichandran, "Optical flow estimation & segmentation of multiple moving dynamic textures," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2005, pp. 516-521.
[10] A. Rahman and M. Murshed, "Detection of multiple dynamic textures using feature space mapping," IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 5, pp. 766-771, May 2009.
[11] T. Amiaz, S. Fazekas, D. Chetverikov, and N. Kiryati, "Detecting regions of dynamic texture," in Proc. Conf. Scale Space Variat. Methods Comput. Vis., 2007, pp. 848-859.
[12] T. Ojala and M. Pietikäinen, "Unsupervised texture segmentation using feature distributions," Pattern Recognit., vol. 32, no. 3, pp. 477-486, 1999.
[13] R. Polana and R. Nelson, "Temporal texture and activity recognition," in Motion-Based Recognition. Norwell, MA: Kluwer, 1997.
[14] S. S. Mokri, N. Ibrahim, A. Hussain, and M. M. Mustafa, "Motion detection using Horn-Schunck algorithm and implementation," vol. 01, pp. 83-87, 2009.
[15] L. Cooper, J. Liu, and K. Huang, "Spatial segmentation of temporal texture using mixture linear models," in Proc. Int. Conf. Dynamical Vis., 2005, pp. 142-150.
[16] A. B. Chan and N. Vasconcelos, "Modeling, clustering, and segmenting video with mixtures of dynamic textures," IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 5, pp. 909-926, May 2008.
[17] A. B. Chan and N. Vasconcelos, "Variational layered dynamic textures," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2009, pp. 1062-1069.
[18] A. Ghoreyshi and R. Vidal, "Segmenting dynamic textures with Ising descriptors, ARX models and level sets," in Proc. Eur. Conf. Comput. Vis. Dynamical Vis. Workshop, 2006, pp. 127-141.
[19] R. Vidal and D. Singaraju, "A closed form solution to direct motion segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2005, pp. 510-515.
[20] D. Chetverikov, S. Fazekas, and M. Haindl, "Dynamic texture as foreground and background," Mach. Vis. Appl., vol. 22, no. 5, pp. 741-750, 2011.
[21] A. Stein and M. Hebert, "Occlusion boundaries from motion: Low-level detection and mid-level reasoning," Int. J. Comput. Vis., vol. 82, no. 3, pp. 325-357, 2009.