Author(s): Swetha K. H, Minal Moharir
Published in: International Journal of Engineering Research & Technology
License: This work is licensed under a Creative Commons Attribution 4.0 International License.
Volume/Issue: Volume 4, Issue 07, July 2015
In applications such as pattern recognition, processing the whole image directly is inefficient and impractical, so several image segmentation algorithms have been proposed. Image segmentation, a part of image processing, partitions a digital image into multiple segments according to image features such as pixel values, and is useful in pattern recognition and computer vision. Pattern recognition is the process of recognizing patterns and regularities in data and is used, for example, in medical diagnosis. Computer vision is the field of understanding images to produce numerical or symbolic information; its aim is to duplicate the abilities of human vision by electronically perceiving and understanding an image. The Level Set Method (LSM) has been used widely for image segmentation, and the main purpose of this paper is to partition the input image into multiple segments efficiently using LSM. The paper implements an image segmentation algorithm. First, the input color image (up to 2 MB in size) is preprocessed: it is converted to a gray-scale image and then denoised with a Gaussian filter. This is followed by segmentation, implemented using LSM with edge, region, and 2D histogram information. LSM makes it easy to handle complex shapes and topological changes such as merging and splitting, but it is computationally expensive. This computational complexity of LSM-based segmentation is significantly reduced by using a highly parallelizable method, the Lattice Boltzmann Method (LBM), with a body force to solve the Level Set Equation (LSE). An Nvidia GPU (GeForce GTX) is used to take full advantage of the parallel nature of LBM, and the algorithm is implemented on the GPU in CUDA C. Testing was carried out on over 50 images of different sizes (50 KB to 1000 KB).
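The preprocessing stage described above (gray-scale conversion followed by Gaussian denoising) can be sketched as follows. This is a minimal pure-Python illustration, not the paper's CUDA C code; the BT.601 luma weights and the separable 5-tap kernel are common choices assumed here, not taken from the paper.

```python
import math

def gaussian_kernel(size=5, sigma=1.0):
    # 1-D Gaussian kernel, normalised to sum to 1
    c = size // 2
    k = [math.exp(-((i - c) ** 2) / (2 * sigma * sigma)) for i in range(size)]
    s = sum(k)
    return [v / s for v in k]

def to_grayscale(rgb):
    # ITU-R BT.601 luma weights, a common RGB-to-gray conversion
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb]

def gaussian_blur(img, size=5, sigma=1.0):
    # separable filter: horizontal pass, then vertical pass,
    # with edge pixels clamped at the image border
    k, c = gaussian_kernel(size, sigma), size // 2
    h, w = len(img), len(img[0])
    tmp = [[sum(k[j] * img[y][min(max(x + j - c, 0), w - 1)]
                for j in range(size))
            for x in range(w)] for y in range(h)]
    return [[sum(k[j] * tmp[min(max(y + j - c, 0), h - 1)][x]
                 for j in range(size))
             for x in range(w)] for y in range(h)]
```

Because the kernel is normalised, blurring a constant image leaves it unchanged, which is a quick sanity check on the implementation.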
The method is effective when segmenting objects with or without edges, is independent of the position of the initial contour, and is robust against noise. The experimental results show that the GPU improves the performance of level-set-based segmentation when run in parallel: the parallel implementation of the algorithm segments images about 10 times faster than the sequential one.
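The LBM solver mentioned above can be sketched in its simplest form. This is a hedged, pure-Python illustration of one D2Q9 BGK collide-and-stream step that evolves a level-set-like field with an optional body-force term; the simplified force treatment, periodic boundaries, and parameter names are assumptions for illustration, not the paper's CUDA C implementation.

```python
# D2Q9 lattice: standard weights and discrete velocities
W = [4/9] + [1/9] * 4 + [1/36] * 4
E = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]

def lbm_step(phi, tau=1.0, force=None):
    """One BGK collision + streaming step evolving the scalar field
    phi (e.g. a level-set function) on a periodic grid.  `force` is
    an optional per-cell body-force field (simplified form)."""
    h, w = len(phi), len(phi[0])
    # distributions initialised at equilibrium: f_i = w_i * phi
    f = [[[W[i] * phi[y][x] for i in range(9)]
          for x in range(w)] for y in range(h)]
    new = [[[0.0] * 9 for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for i, (ex, ey) in enumerate(E):
                feq = W[i] * phi[y][x]
                # BGK relaxation toward equilibrium
                out = f[y][x][i] + (feq - f[y][x][i]) / tau
                if force is not None:
                    out += W[i] * force[y][x]   # body-force contribution
                new[(y + ey) % h][(x + ex) % w][i] = out   # streaming
    # macroscopic field = zeroth moment (sum) of the distributions
    return [[sum(new[y][x]) for x in range(w)] for y in range(h)]
```

Each cell updates from purely local data, which is why the scheme maps so naturally onto one GPU thread per lattice node in a CUDA kernel.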