- Open Access
- Authors : Arthi S, Stanly Jayaprakash J
- Paper ID : IJERTV4IS030726
- Volume & Issue : Volume 04, Issue 03 (March 2015)
- DOI : http://dx.doi.org/10.17577/IJERTV4IS030726
- Published (First Online): 27-03-2015
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Novel Approach for Fingerprint Sparse Coding Analysis using K-SVD Learning Technique
Arthi S and Stanly Jayaprakash J
Department of Computer Science and Engineering, Mahendra Institute of Technology, Namakkal, India.
Abstract: Fingerprints are among the most important biometric security systems. Biometrics refers to authentication techniques based on physiological characteristics, which can be classified into several types: face, fingerprint, retina, hand geometry, and so on. In this paper, a new fingerprint compression algorithm based on sparse representation is introduced. Using an overcomplete dictionary built from a set of fingerprint patches, any patch can be represented as a sparse linear combination of dictionary atoms. The algorithm first constructs a dictionary of predefined fingerprint image patches; the patches of a newly given fingerprint image are then represented over the dictionary by computing an l0-minimization, and the representation is quantized and encoded. The effect of various factors on the compression results is considered by testing three groups of fingerprint images. The experiments demonstrate that several competing compression techniques (JPEG, JPEG 2000, and WSQ) are not as efficient as our algorithm, especially at high compression ratios. It is also illustrated that the proposed algorithm is robust with respect to minutiae extraction.
INTRODUCTION
Biometric characteristics make it possible to recognize a person because they intrinsically represent the individual's bodily identity and hence cannot be shared. Fingerprints are one of the most popular biometric technologies for personal identification because of their uniqueness, universality, collectability, and invariance. Large volumes of fingerprints are collected and stored every day in a wide range of applications, including forensics and access control. In 1995 the FBI fingerprint card archive already contained some 200 million items, and it was growing at the rate of 30,000 to 50,000 new cards per day. Fingerprint image compression is the key technique for coping with such volumes of data.
There are two kinds of compression technologies: lossless and lossy. Lossless compression reconstructs the exact original data from the compressed data; it is used when the decompressed data must be identical to the original, and this requirement limits its compression efficiency. In image compression, where slight distortion is acceptable, lossy compression is therefore employed, with lossless coding applied to the output coefficients of the lossy stage.
Lossy compression technologies transform an image into another domain and then quantize and encode its coefficients. A great deal of research has been done on transform-based image compression during the last three decades, and several standards have appeared. The two most common transformations are the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The DCT-based encoder compresses a stream of 8×8 blocks of the image; JPEG adopts this transformation, and simplicity, universality, and availability are its advantages. However, the underlying block-based DCT scheme makes JPEG perform badly at low bit rates. For this reason, the JPEG committee began developing a new wavelet-based compression standard for still images in 1995; the result is JPEG 2000. The DWT-based algorithm consists of a DWT computation on the normalized image, quantization of the DWT coefficients, and lossless coding of the quantized coefficients. JPEG 2000 performs better than JPEG and provides many features that support scalable and interactive access to large-sized images.
Another DWT-based algorithm is SPIHT [9]. The approach proposed in this paper differs from these transform-based standards: it relies on a learned dictionary, which also has the ability to be updated. The process is as follows. A base matrix whose columns represent features of fingerprint images is constructed; these columns are referred to as the atoms of the dictionary. The fingerprint is divided into small patches whose number of pixels equals the dimension of the atoms. Sparse representation is used to obtain the coefficients of each patch, the coefficients are quantized, and then the coefficients and the related information are encoded using lossless coding methods. PSNR alone is too restrictive for evaluating the compression performance of the algorithm in many cases: most Automatic Fingerprint Identification Systems (AFIS) match two fingerprint images using ridge endings and bifurcations (minutiae). That is why we also consider the difference in the minutiae between pre- and post-compression images.
LITERATURE REVIEW
Mallat and Zhang [11] introduced the greedy pursuit technique, which is essential to sparse representation and derives from the concept of dictionaries and their core ideas. S. S. Chen, D. Donoho, and M. Saunders [12] used the l1 norm for sparsity and solved the problem as a convex programming task. Since these two seminal works, activity in this field has spread over various disciplines, and research has contributed a great deal to it. Face recognition [13], image denoising [14], object detection [15], and super-resolution image reconstruction [16] are among the many successful applications in various fields.
Wright et al. [13] proposed a general classification algorithm for object recognition based on a sparse representation computed by l1-minimization. Algorithms such as nearest neighbor, nearest subspace, and linear SVM are not as good as the algorithm based on sparse representation. The new framework provided new insights into face recognition: the number of features is more important than the choice of features. This observation applies not only to face recognition but also to other situations.
Elad and Aharon [14] designed an algorithm, based on sparse and redundant representations over an overcomplete dictionary, to remove zero-mean white homogeneous Gaussian additive noise from a given image. The content of the dictionary is very important: the dictionary should reflect the images being processed, and those images should be representable sparsely over it. These two aspects of the dictionary's content are crucial for any sparse representation method.
Sparse representation has also been applied to image compression [17], [18]. The algorithm proposed in [17] showed good performance in a number of experiments; however, its compression efficiency, measured by compression ratio or bit rate, was consistently below that of JPEG 2000. When natural images are tested this can be seen transparently: the state-of-the-art compression technology achieves higher compression efficiency, as the experiments confirm. The author of [18] notes that an image alignment procedure is very important in the preprocessing stage, but such alignment cannot be done easily in practical applications.
There are other algorithms for fingerprint image compression based on a linear model assumption [19]-[21]. In [20], [21], the data-dependent nature of Independent Component Analysis (ICA) is exploited to compress special classes of images. These two papers did not take full advantage of overcomplete dictionaries; an overcomplete dictionary, properly constructed, can compress fingerprint images better. In [19], the author proposed fingerprint compression based on Nonnegative Matrix Factorization (NMF) [22], [23], a successful technique whose only defect here is that nonnegativity is not necessary in some cases.
In the field of general image compression, methods based on sparse representation are not used because of the richness of content of general images: the dictionary required to represent a given image sparsely and effectively would be too large.
THE MODEL AND ALGORITHMS OF SPARSE REPRESENTATION
The Model of Sparse Representation
Let $y = Ax$, where $y \in \mathbb{R}^{M \times 1}$, $A \in \mathbb{R}^{M \times N}$, and $x = [x_1, x_2, \ldots, x_N]^T \in \mathbb{R}^{N \times 1}$.
To compress $y$, we compress $x$ instead: the locations of the non-zero entries and their magnitudes are recorded first, and these records are then quantized and encoded. What remains is a technique for solving the underlying optimization problem.
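Written out explicitly, the compression step above amounts to the standard l0-minimization problem; the error tolerance $\epsilon$ is our notation for the reconstruction fidelity, not a symbol fixed by the paper:

$$
\min_{x} \; \|x\|_{0} \quad \text{subject to} \quad \|y - Ax\|_{2} \leq \epsilon ,
$$

where $\|x\|_{0}$ counts the non-zero entries of $x$; with $\epsilon = 0$ the constraint reduces to $y = Ax$ exactly.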
Sparse Solution by a Greedy Algorithm
Researchers first thought of solving the l0 optimization problem directly; the difficulty is that finding the sparsest solution of such a system is NP-hard [24]. The l0 problem is therefore solved approximately with the Matching Pursuit (MP) algorithm [25], which is simple and efficient.
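To make the greedy approach concrete, here is a minimal Matching Pursuit sketch in Python with numpy; the function name, stopping rule, and the random test dictionary are our assumptions, and this is not the authors' implementation:

```python
import numpy as np

def matching_pursuit(y, A, max_atoms=20, tol=1e-6):
    """Greedy Matching Pursuit: approximate the sparsest x with y ~ Ax.

    A is a dictionary with unit-norm columns (atoms); y is the signal.
    Returns a sparse coefficient vector x.
    (Illustrative sketch, not the paper's MATLAB implementation.)"""
    residual = y.astype(float).copy()
    x = np.zeros(A.shape[1])
    for _ in range(max_atoms):
        # Correlate the residual with every atom and pick the best match.
        correlations = A.T @ residual
        k = np.argmax(np.abs(correlations))
        coef = correlations[k]
        x[k] += coef                    # accumulate the coefficient
        residual -= coef * A[:, k]      # remove the atom's contribution
        if np.linalg.norm(residual) < tol:
            break
    return x

# Tiny usage example with a random unit-norm dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)          # normalize the atoms
y = 2.0 * A[:, 3] - 0.5 * A[:, 17]      # a 2-sparse test signal
x = matching_pursuit(y, A)
print(np.nonzero(x)[0])                 # indices of the selected atoms
```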
FINGERPRINT COMPRESSION BASED ON SPARSE REPRESENTATION
This part shows how sparse representation is used to compress fingerprint images. It covers construction of the dictionary, compression of a given fingerprint, quantization and coding, and analysis of the algorithm. The more information a dictionary contains, the larger it is, so preprocessing is essential to obtain a dictionary of modest size. Transformation, rotation, and noise can influence fingerprints of the same finger and make them look different.
Fingerprint images are normally simpler in structure than general natural images: they are composed of ridges and valleys, and the same structures recur in local regions. To address the above problems, the whole image is divided into square, non-overlapping small patches. The small patches do not suffer from the problems of transformation and rotation, and their small size keeps the dictionary from becoming too large.
Fig. 1: A fingerprint image with its corresponding orientation image computed over a square-meshed grid. Each element denotes the local orientation of the fingerprint ridges.
Construction of the Dictionary
One way to construct the dictionary is as follows: choose whole fingerprint images, cut them into fixed-size square patches, construct a training set from those patches, and obtain the dictionary from it. The patches are given an initial screening, and then a greedy algorithm is employed to construct the training samples:
1) Add the first patch to the (initially empty) dictionary.
2) Check the next patch against all patches already in the dictionary for similarity. If it is sufficiently similar to one of them, test the next patch; if not, add it to the dictionary.
3) Repeat step 2 until all the patches have been tested.
A sketch of this screening loop is given below.
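A minimal sketch of the greedy screening in Python; the normalized-correlation similarity test and its threshold are our assumptions, since the paper does not specify the similarity measure:

```python
import numpy as np

def build_training_set(patches, sim_threshold=0.95):
    """Greedy screening: keep a patch only if it is not too similar
    to any patch already kept. `patches` is (n_patches, patch_dim),
    each row a flattened, mean-subtracted patch.
    The normalized-correlation test is an assumption, not the paper's."""
    kept = [patches[0]]                        # start with the first patch
    for p in patches[1:]:
        sims = [abs(np.dot(p, q)) /
                (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)
                for q in kept]
        if max(sims) < sim_threshold:          # not similar to anything kept
            kept.append(p)
    return np.array(kept)
```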
Before constructing the dictionary, the mean value of each patch is calculated and subtracted from the corresponding patch. Three methods are then used to construct the dictionary:
First method: fingerprint patches are chosen at random and arranged as columns of the dictionary matrix.
Second method: patches from the foreground of a fingerprint have an orientation, while patches from the background do not; this observation helps to construct the dictionary. The range of orientations is divided into equal intervals, the same number of patches is chosen for each interval, and these patches are arranged into the dictionary.
Third method: a training method, in which the dictionary is obtained by iteratively solving an optimization problem (the K-SVD approach used in the experiments below).
Compression of a Given Fingerprint
A newly given fingerprint is sliced into square patches of the same size as the training patches. The patch size has a direct impact on compression efficiency: the larger the size, the more efficient the algorithm, but the computational complexity and the dictionary size also increase rapidly, so a proper size has to be chosen. Before fitting the patches to the dictionary, the mean of each patch is calculated and subtracted from the patch so that the patches fit the dictionary better.
The sparse representation of each patch is computed by solving the l0 problem, and coefficients whose absolute values are less than a given threshold are treated as zero. Four kinds of information are recorded for each patch: the mean value, the number of atoms used, the coefficients, and their locations. Tests show that many image patches need only a few coefficients.
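The per-patch coding step can be sketched as follows, using Orthogonal Matching Pursuit from scikit-learn as a stand-in for the paper's MP solver; the record layout mirrors the four kinds of information listed above, while the names and threshold value are ours:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def code_patch(patch, dictionary, n_nonzero=8, threshold=0.01):
    """Encode one flattened patch against a dictionary (patch_dim, n_atoms).

    Returns the four pieces of information the paper records per patch:
    mean value, number of atoms used, coefficients, and their locations."""
    mean = patch.mean()
    x = orthogonal_mp(dictionary, patch - mean,
                      n_nonzero_coefs=n_nonzero)
    x[np.abs(x) < threshold] = 0.0      # drop coefficients below threshold
    locations = np.nonzero(x)[0]        # indexes of the atoms actually used
    coeffs = x[locations]
    return mean, len(locations), coeffs, locations
```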
Coding and Quantization
A static arithmetic coder carries out the entropy coding of the atom number of each patch, the mean value of each patch, the coefficients, and the indexes. The atom number and the mean value of each patch are coded separately. The coefficients are quantized with the Lloyd algorithm [36], learnt off-line from the coefficients obtained by running the MP algorithm over the dictionary on the training set. The first coefficient of each block is quantized with a larger number of bits, and the remaining coefficients are quantized and entropy-coded using a separate arithmetic coder.
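For illustration, a minimal Lloyd quantizer trainer in numpy; the codebook size, initialization, and test data are our choices, not values from the paper:

```python
import numpy as np

def lloyd_quantizer(samples, n_levels=16, n_iters=50):
    """Train a scalar quantizer with Lloyd's algorithm.

    Alternates between (1) assigning each sample to its nearest codeword
    and (2) moving each codeword to the mean of its cell.
    Returns the learned reproduction levels (the codebook)."""
    # Initialize codewords on evenly spaced quantiles of the data.
    codebook = np.quantile(samples, np.linspace(0.0, 1.0, n_levels))
    for _ in range(n_iters):
        # Assignment step: nearest codeword for every sample.
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        # Update step: each codeword becomes the mean of its cell.
        for k in range(n_levels):
            cell = samples[idx == k]
            if cell.size:
                codebook[k] = cell.mean()
    return np.sort(codebook)

# Usage: train on coefficients gathered off-line, then quantize new ones.
coeffs = np.random.default_rng(1).laplace(size=10_000)
cb = lloyd_quantizer(coeffs)
quantized = cb[np.argmin(np.abs(coeffs[:, None] - cb[None, :]), axis=1)]
```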
Analysis of the Algorithm Complexity
The algorithm consists of two parts: the training process and the compression process. We analyze only the complexity of the compression process, as the training process is carried out off-line.
Algorithm: Fingerprint Compression based on Sparse Representation
1) Slice the fingerprint into small patches.
2) Calculate the mean of each patch and subtract it from the patch.
3) Solve the l0-minimization problem for each patch with the MP method.
4) Treat every coefficient whose absolute value is below a given threshold as zero, then record the remaining coefficients and their locations.
5) Encode the atom number of each patch, the mean value of each patch, and the indexes; quantize and encode the coefficients.
6) Output the compressed stream.
A sketch of steps 1 and 2 is given below.
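A sketch of steps 1 and 2 in numpy; the 12×12 patch size is only an example, and zero-padding of the image borders is our simplification (the paper instead arranges the leftover part into column vectors, as described in the experiments section):

```python
import numpy as np

def slice_into_patches(image, patch_size=12):
    """Steps 1-2: cut the image into non-overlapping square patches
    and subtract each patch's mean. Pads the borders with zeros when
    the image sides are not multiples of the patch size (our choice).
    Returns (patches, means) with patches flattened row-wise."""
    h, w = image.shape
    ph = int(np.ceil(h / patch_size)) * patch_size
    pw = int(np.ceil(w / patch_size)) * patch_size
    padded = np.zeros((ph, pw), dtype=float)
    padded[:h, :w] = image
    patches, means = [], []
    for i in range(0, ph, patch_size):
        for j in range(0, pw, patch_size):
            p = padded[i:i + patch_size, j:j + patch_size].ravel()
            means.append(p.mean())
            patches.append(p - p.mean())
    # Each row of `patches` is now ready for the sparse-coding step
    # (e.g., the code_patch sketch shown earlier).
    return np.array(patches), np.array(means)
```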
EXPERIMENTS
Experiments on fingerprint databases are presented, aiming both to demonstrate the feasibility of the proposed compression algorithm and to validate the claims of the previous sections. Initially, the databases used in the study are described. Then the feasibility of the proposed method is shown, and experimental results on three different dictionaries are given. Next, the method of choosing the proper parameters is studied, and our method is compared with the relevant fingerprint compression algorithms, namely WSQ, JPEG, and JPEG 2000. A study of the design of the training set for sparse representation follows. Finally, an experiment shows that the proposed compression algorithm is robust with respect to minutiae extraction.
Databases
A set of training fingerprints covering the major pattern types is used to construct a dictionary of fingerprint patches. The distribution of the different patterns in this set and the distribution in the test databases need not be similar; they can be dissimilar.
Feasibility of Fingerprint Compression Based on Sparse Representation
In the patches of fingerprints there are a few large coefficients, while the other coefficients are approximately equal to zero; some patches need no coefficients at all. The background of the fingerprint images can be represented effectively by the mean values of the patches alone.
Experimental Results in Different Dictionaries
Here we study the effect of different dictionaries on fingerprint compression. The dictionary is constructed in three different ways: selecting 4096 patches at random and arranging them as columns of the dictionary, selecting patches according to their orientations, and training the dictionary with the K-SVD method.
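To make the third construction concrete, here is a simplified K-SVD sketch in Python following the algorithm of Aharon et al. [34]; the dictionary size, sparsity level, and iteration count are placeholders, and this is not the authors' code:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms=256, n_nonzero=8, n_iters=10, seed=0):
    """Simplified K-SVD: alternate sparse coding (OMP) with an
    SVD-based update of one atom at a time.
    Y: (patch_dim, n_samples) matrix of mean-subtracted patches."""
    rng = np.random.default_rng(seed)
    # Initialize the dictionary with random training patches, unit-norm.
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0) + 1e-12
    for _ in range(n_iters):
        # Sparse coding stage: represent every patch over the dictionary.
        X = orthogonal_mp(D, Y, n_nonzero_coefs=n_nonzero)
        # Dictionary update stage: refit each atom to the patches using it.
        for k in range(n_atoms):
            users = np.nonzero(X[k, :])[0]    # patches that use atom k
            if users.size == 0:
                continue
            # Residual without atom k's contribution, restricted to users.
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            # Rank-1 approximation of E gives the new atom and coefficients.
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]
            X[k, users] = s[0] * Vt[0, :]
    return D, X
```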
Choose the Algorithmic Parameters
The parameters in compression based on sparse representation are the size of the patches and the size of the dictionary. Patch size and compression efficiency are directly related: the larger the patch size, the higher the efficiency. A larger dictionary can represent an arbitrary patch better, but it also incurs more computational complexity. The patch size can therefore be chosen only through experiments.
Comparison with Existing Fingerprint Compression Algorithms
A comparison is made between the proposed method and existing fingerprint compression algorithms. The same database is used to test our algorithm and all the others. Our compression algorithm is implemented in MATLAB. WSQ, JPEG, and JPEG 2000 are the three compression algorithms used in the comparison.
Normally, the dimensions of a fingerprint image are not multiples of the patch size. When they are not, the leftover part of the fingerprint image is arranged into column vectors of the same size as the patches, in a given order; these column vectors can then be represented with the MP algorithm under the same dictionary.
A fingerprint image cannot in general be compressed at exactly the given compression ratio. Among the achievable ratios that are not less than the given compression ratio minus 2.5, the nearest one is chosen as the final compression ratio.
Design the Training Set
Many details have to be considered to target the problem of fingerprint compression: the number of training samples, the components of the training set, and how to modify the dictionary.
The number of training samples:
Since the dictionary is trained with the K-SVD method, the effect of the number of training samples on the PSNR has to be considered.
Components of the training set:
Even though fingerprint images are simpler than general images, they are still not easy to deal with. The main features of fingerprints lie in the orientation, the ridge frequency, and the minutiae; these are important in the construction of the dictionary.
Modify the dictionary:
Since the dictionary has the ability to learn, empirical information should be used to modify it. If the algorithm is applied over a long period, many new patches may no longer be represented well; these new patches, together with the original patches, can then be used to update the dictionary.
Robustness
Gabor filtering does not rely on minutiae, but minutiae are what most Automatic Fingerprint Identification Systems mainly use to match two fingerprint images. Therefore, to illustrate the robustness of our algorithm, we compare the difference in minutiae between pre- and post-compression images. The accuracy rate, the recall rate, and their sum are the standards used to measure robustness.
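As an illustration of how the two rates can be computed from minutiae sets extracted before and after compression, here is a sketch; the greedy tolerance-based matching rule and all names are our assumptions, since the paper does not spell out the matching procedure:

```python
import numpy as np

def minutiae_rates(before, after, tol=8.0):
    """Greedily match minutiae (x, y) points within `tol` pixels.

    accuracy = matched / len(after)   (fraction of post-compression
                                       minutiae that are genuine)
    recall   = matched / len(before)  (fraction of original minutiae
                                       that survived compression)"""
    before = [np.asarray(m, dtype=float) for m in before]
    unused = [np.asarray(m, dtype=float) for m in after]
    matched = 0
    for m in before:
        if not unused:
            break
        dists = [np.linalg.norm(m - n) for n in unused]
        j = int(np.argmin(dists))
        if dists[j] <= tol:
            matched += 1
            unused.pop(j)               # each minutia matches at most once
    accuracy = matched / len(after) if after else 0.0
    recall = matched / len(before) if before else 0.0
    return accuracy, recall, accuracy + recall
```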
CONCLUSION
A new compression algorithm adapted to fingerprint images has been presented. Even though the proposed algorithm is simple, it compares well with the existing, more sophisticated algorithms, especially at high compression ratios. Because of the block-by-block processing mechanism, the algorithm has higher complexity, but its block effect is less serious than that of JPEG. The effect of three different dictionaries on fingerprint compression was considered, and the dictionary obtained by the K-SVD algorithm works best. A larger training set should make the compression results better still.
The main difficulty in developing compression algorithms for fingerprints is the need to preserve the minutiae on which identification relies. Our algorithm can hold most of the minutiae robustly during compression and reconstruction.
Finally, the following points deserve attention and should not be forgotten in future work: 1) more features and better methods for constructing dictionaries should be considered; 2) fingerprints of various qualities should be included in the training samples; 3) optimal algorithms for solving the sparse representation should be investigated; 4) the code should be optimized so that the complexity of the proposed method is reduced. Lastly, applications based on sparse representation for fingerprint images should be adopted only after thorough experiments.
REFERENCES
[1] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition, 2nd ed. London, U.K.: Springer-Verlag, 2009.
[2] N. Ahmed, T. Natarajan, and K. R. Rao, "Discrete cosine transform," IEEE Trans. Comput., vol. C-23, no. 1, pp. 90-93, Jan. 1974.
[3] C. S. Burrus, R. A. Gopinath, and H. Guo, Introduction to Wavelets and Wavelet Transforms: A Primer. Upper Saddle River, NJ, USA: Prentice Hall, 1998.
[4] W. Pennebaker and J. Mitchell, JPEG: Still Image Compression Standard. New York, NY, USA: Van Nostrand Reinhold, 1993.
[5] M. W. Marcellin, M. J. Gormish, A. Bilgin, and M. P. Boliek, "An overview of JPEG-2000," in Proc. IEEE Data Compress. Conf., Mar. 2000, pp. 523-541.
[6] A. Skodras, C. Christopoulos, and T. Ebrahimi, "The JPEG 2000 still image compression standard," IEEE Signal Process. Mag., vol. 18, no. 5, pp. 36-58, Sep. 2001.
[7] T. Hopper, C. Brislawn, and J. Bradley, "WSQ gray-scale fingerprint image compression specification," Federal Bureau of Investigation, Criminal Justice Information Services, Washington, DC, USA, Tech. Rep. IAFIS-IC-0110-V2, Feb. 1993.
[8] C. M. Brislawn, J. N. Bradley, R. J. Onyshczak, and T. Hopper, "FBI compression standard for digitized fingerprint images," Proc. SPIE, vol. 2847, pp. 344-355, Aug. 1996.
[9] A. Said and W. A. Pearlman, "A new, fast, and efficient image codec based on set partitioning in hierarchical trees," IEEE Trans. Circuits Syst. Video Technol., vol. 6, no. 3, pp. 243-250, Jun. 1996.
[10] R. Sudhakar, R. Karthiga, and S. Jayaraman, "Fingerprint compression using contourlet transform with modified SPIHT algorithm," Iranian J. Electr. Comput. Eng., vol. 5, no. 1, pp. 3-10, 2005.
[11] S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397-3415, Dec. 1993.
[12] S. S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM Rev., vol. 43, no. 1, pp. 129-159, 2001.
[13] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 210-227, Feb. 2009.
[14] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Trans. Image Process., vol. 15, no. 12, pp. 3736-3745, Dec. 2006.
[15] S. Agarwal and D. Roth, "Learning a sparse representation for object detection," in Proc. Eur. Conf. Comput. Vis., 2002, pp. 113-127.
[16] J. Yang, J. Wright, T. Huang, and Y. Ma, "Image super-resolution as sparse representation of raw image patches," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2008, pp. 1-8.
[17] K. Skretting and K. Engan, "Image compression using learned dictionaries by RLS-DLA and compared with K-SVD," in Proc. IEEE ICASSP, May 2011, pp. 1517-1520.
[18] O. Bryt and M. Elad, "Compression of facial images using the K-SVD algorithm," J. Vis. Commun. Image Represent., vol. 19, no. 4, pp. 270-282, 2008.
[19] Y. Y. Zhou, T. D. Guo, and M. Wu, "Fingerprint image compression algorithm based on matrix optimization," in Proc. 6th Int. Conf. Digital Content, Multimedia Technol. Appl., 2010, pp. 14-19.
[20] A. J. Ferreira and M. A. T. Figueiredo, "Class-adapted image compression using independent component analysis," in Proc. Int. Conf. Image Process., vol. 1, 2003, pp. 625-628.
[21] A. J. Ferreira and M. A. T. Figueiredo, "On the use of independent component analysis for image compression," Signal Process., Image Commun., vol. 21, no. 5, pp. 378-389, 2006.
[22] P. Paatero and U. Tapper, "Positive matrix factorization: A nonnegative factor model with optimal utilization of error estimates of data values," Environmetrics, vol. 5, no. 1, pp. 111-126, 1994.
[23] D. D. Lee and H. S. Seung, "Learning the parts of objects by nonnegative matrix factorization," Nature, vol. 401, pp. 788-791, Oct. 1999.
[24] E. Amaldi and V. Kann, "On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems," Theoretical Comput. Sci., vol. 209, nos. 1-2, pp. 237-260, 1998.
[25] S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397-3415, Dec. 1993.
[26] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Proc. Conf. Rec. 27th Asilomar Conf. Signals, Syst. Comput., Nov. 1993, pp. 40-44.
[27] E. Candes, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Commun. Pure Appl. Math., vol. 59, no. 8, pp. 1207-1223, 2006.
[28] E. Candes, J. Romberg, and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406-5425, Dec. 2006.
[29] D. Donoho, "For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution," Commun. Pure Appl. Math., vol. 59, no. 6, pp. 797-829, 2006.
[30] E. Candes and Y. Plan, "Near-ideal model selection by l1 minimization," Ann. Statist., vol. 37, no. 5A, pp. 2145-2177, 2009.
[31] I. F. Gorodnitsky and B. D. Rao, "Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm," IEEE Trans. Signal Process., vol. 45, no. 3, pp. 600-616, Mar. 1997.
[32] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Imag. Sci., vol. 2, no. 1, pp. 183-202, 2009.
[33] M. Aharon, M. Elad, and A. M. Bruckstein, "On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them," Linear Algebra Appl., vol. 416, no. 1, pp. 48-67, 2006.
[34] M. Aharon, M. Elad, and A. M. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4311-4322, Nov. 2006.
[35] K. Sayood, Introduction to Data Compression, 3rd ed. San Mateo, CA, USA: Morgan Kaufmann, 2005, pp. 81-115.
[36] S. Lloyd, "Least squares quantization in PCM," IEEE Trans. Inf. Theory, vol. 28, no. 2, pp. 129-137, Mar. 1982.
[37] (2013, Jun.). Fingerprint Images [Online]. Available: http://pan.baidu.com/share/link?shareid=3872475108&uk=2301599420
[38] (2012, Dec.). SetupWSQ [Online]. Available: http://www.download3k.com/Install-WSQ-viewer.html
[39] S. Prabhakar, "Fingerprint classification and matching using a filterbank," Ph.D. dissertation, Dept. Comput. Sci. Eng., Michigan State Univ., East Lansing, MI, USA, 2001.