- Open Access
- Authors : R. Lakshmi Priya, P. Eswaran, S. L. Ponnambli Kamakshi
- Paper ID : IJERTV2IS50370
- Volume & Issue : Volume 02, Issue 05 (May 2013)
- Published (First Online): 28-05-2013
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Blind Steganalysis With Modified Markov Features And RBFNN
R. Lakshmi Priya, P. Eswaran, S. L. Ponnambli Kamakshi
PSN College of Engineering & Technology, Tirunelveli, India
Abstract: In recent years, increasingly sophisticated steganographic tools and techniques have evolved, which pose serious challenges to law enforcement agencies. To the best of our knowledge, this is the first paper that combines modified Markov features for feature extraction with a Radial Basis Function Neural Network as the classifier. Steganography hides information so that it appears that no information is hidden at all. In contrast, steganalysis determines the presence of hidden messages in an image, text, video or audio file. A general steganalysis method that can attack steganography blindly, i.e. detect hidden information irrespective of the embedding algorithm, is more useful for real-time applications. In this paper, a steganalysis system is proposed that can detect hidden information irrespective of the image database and the embedding algorithm, using modified Markov features. A Radial Basis Function Neural Network (RBFNN) is utilized to construct a blind classifier. Experimental results show that our method performs significantly better than the existing steganalysis system.
Keywords: Steganography, Blind steganalysis, Stego image, Cover image, L-GEM, Artificial neural network
-
Introduction
Steganography [1] is a technique to hide data inside a cover medium in such a way that the existence of any communication is itself undetectable, as opposed to cryptography, where the existence of secret communication is known but its content is indecipherable. Steganography includes the concealment of information within computer files; the carrier files used are text, image, audio, video, etc. A steganographic system has a few important requirements, such as security of the hidden information, size of the payload and robustness against malicious and unintentional attacks. Digital images are the most widely used carriers for information hiding.
The two major advantages of using digital images for steganography are that, first, digital images are a widely used medium today and, second, they exploit the limits of our visual perception of colour. The larger the image, the more information can be hidden; however, larger images may require compression to avoid detection. Steganography and cryptography are often confused; the essential difference is that steganography hides information within a carrier file, while cryptography encrypts the information so that it cannot be read.
Steganalysis [1] is the process of inspecting a suspected stego medium, determining from various parameters whether that medium contains a hidden message, and then attempting to recover the hidden data. In steganalysis, the suspected medium may or may not contain hidden data.
Most messages hidden using steganography tools leave few (if any) signs that something has been hidden. Steganalysis, on the other hand, is the art and science of detecting secret messages hidden using steganography. In addition, steganalysis also serves as a tool to measure the performance of a steganographic algorithm. Steganographic techniques find their main applications in computer forensics, cyber warfare and the tracking of criminal activities. With the growing attention given to securing social public networks, steganalysis has become a hot research topic. Many steganalysis techniques have been proposed in the literature, and they can be divided into two classes: targeted steganalysis and universal steganalysis.
Targeted steganalysis is normally used to detect a specific steganographic method and will not work for other steganographic techniques. For example, Fridrich et al. broke the F5 algorithm by estimating an approximation of the cover image from the stego image [18]. Universal or blind steganalysis, in contrast, is a generic scheme capable of detecting the presence of a secret message irrespective of the steganographic algorithm used. Blind steganalysis is more popular for its usefulness in practical applications and is also capable of detecting a new steganographic technique for which no targeted steganalysis is available.
The various steganographic tools available today include JSteg [16], F5 [14], MB1 (Model Based Steganography) [19], MB2 (Model Based Steganography with deblocking) [20], Outguess [15], Steghide [21], etc. Steganography can also be used as a beneficial tool for privacy.
-
Related Works
The first work on blind steganalysis was proposed by Avcibas et al. in [23], in which the authors employed image quality metrics as features. They also used a one-way ANOVA test to select the metrics that respond consistently and strongly. Based on the selected features, a multivariate regression technique was used to classify an image as cover or stego.
Harmsen et al. in [24] presented a detector based on the Histogram Characteristic Function (HCF) for color images; however, its performance is not good for raw images and gray scale images. A number of later works were presented to improve this performance [25,26].
Farid was one of the first to propose the use of higher order statistics to detect hidden messages in a stego medium [10]. He used a wavelet-like decomposition to build a higher order statistical model for natural images, and a Fisher linear discriminant (FLD) pattern classifier is used to train and predict whether a given image is cover or stego. The results show an average detection rate of 90% for Outguess and JSteg.
The idea of using Markov random chains was first proposed in [27] to detect the spread spectrum hiding in gray scale images. The Markov chains were used to model the correlation between pixels in an image. The authors have used a supervised learning machine to classify the cover and stego image.
Pevny and Fridrich in [8] applied calibration to Markov features and merged the Markov features with DCT features to improve the performance of blind steganalysis.
Shi et al. in [7] presented a Markov feature based approach to attack JPEG steganography and obtained promising results in detecting F5, Outguess and MB1. The advantage of this kind of technique is that it can be used against any existing algorithm without any modification and hence can be categorized as a universal steganalyzer.
Chen and Agaian [6] used RBFNN for steganographic detection through feature preprocessing, feature extraction, data standardization and data classification. Their experimental results show that the RBFNN achieves high classification accuracy.
Patrick and Daniel [22] proposed a steganalysis method based on Markov features and a Radial Basis Function Neural Network for data classification. Their method is designed to generalize to test images obtained from different image databases and of different sizes.
This paper is organized as follows. Section 3 describes the feature construction. Section 4 describes the classifier construction, i.e., the training of the RBFNN with L-GEM. Section 5 presents the experimental results. Finally, the conclusion is drawn in Section 6.
-
Feature Extraction
The Markov feature [1] captures statistics based on the difference JPEG arrays by a Markov process. In this paper, we extend the Markov approach further to a modified Markov approach. First we extract the Markov features based on the absolute values of the neighbouring DCT coefficients, then on the correlation of the neighbouring DCT coefficients in the inter-DCT block, and finally on the DWT approximation subband.
-
Modified Markov Feature
Let F denote the matrix of the absolute values of the DCT coefficients of the image. The four difference arrays determined along the four directions (horizontal, vertical, diagonal and minor diagonal) are as follows:

$$F_h(u,v) = F(u,v) - F(u,v+1) \qquad (1)$$
$$F_v(u,v) = F(u,v) - F(u+1,v) \qquad (2)$$
$$F_d(u,v) = F(u,v) - F(u+1,v+1) \qquad (3)$$
$$F_m(u,v) = F(u+1,v) - F(u,v+1) \qquad (4)$$

The transition probability matrix can be used to characterize a Markov random process. In this paper, we make use of the one-step transition probability, which gives a balance between high steganalysis capability and computational complexity, and we construct the transition probability matrices only along the vertical and horizontal directions. The transition probability matrices constructed from $F_h(u,v)$ and $F_v(u,v)$ are as follows:

$$M1_{hh}(i,j) = \frac{\sum_{u=1}^{S_u-2}\sum_{v=1}^{S_v} \delta\big(F_h(u,v)=i,\ F_h(u+1,v)=j\big)}{\sum_{u=1}^{S_u-2}\sum_{v=1}^{S_v} \delta\big(F_h(u,v)=i\big)} \qquad (5)$$

$$M1_{hv}(i,j) = \frac{\sum_{u=1}^{S_u-1}\sum_{v=1}^{S_v-1} \delta\big(F_h(u,v)=i,\ F_h(u,v+1)=j\big)}{\sum_{u=1}^{S_u-1}\sum_{v=1}^{S_v-1} \delta\big(F_h(u,v)=i\big)} \qquad (6)$$

$$M1_{vh}(i,j) = \frac{\sum_{u=1}^{S_u-1}\sum_{v=1}^{S_v-1} \delta\big(F_v(u,v)=i,\ F_v(u+1,v)=j\big)}{\sum_{u=1}^{S_u-1}\sum_{v=1}^{S_v-1} \delta\big(F_v(u,v)=i\big)} \qquad (7)$$

$$M1_{vv}(i,j) = \frac{\sum_{u=1}^{S_u}\sum_{v=1}^{S_v-2} \delta\big(F_v(u,v)=i,\ F_v(u,v+1)=j\big)}{\sum_{u=1}^{S_u}\sum_{v=1}^{S_v-2} \delta\big(F_v(u,v)=i\big)} \qquad (8)$$

Here $S_u$ and $S_v$ are the dimensions of the image, i.e. the size of the coefficient matrix along the horizontal and vertical directions, and $\delta(\cdot)$ equals 1 iff its arguments are satisfied and 0 otherwise. The range of $i$ and $j$ is $[-4,+4]$.
In order to reduce noise, we take the average over the vertical and horizontal directions as

$$F_{avg} = \{M1_{hh} + M1_{hv} + M1_{vh} + M1_{vv}\}/4 \qquad (9)$$
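As a concrete illustration of Eqs. (1)-(9), the following NumPy sketch computes the intra-block difference arrays and the averaged transition probability matrix. The function names, the explicit clipping of differences to [-4, +4] and the `step` argument are our own choices; the paper only fixes the range of i and j.

```python
import numpy as np

T = 4  # threshold from the paper: i and j range over [-4, +4]

def difference_arrays(F):
    """Eqs. (1)-(4): difference arrays of the absolute DCT coefficient matrix F."""
    Fh = F[:, :-1] - F[:, 1:]       # F(u,v) - F(u,v+1)
    Fv = F[:-1, :] - F[1:, :]       # F(u,v) - F(u+1,v)
    Fd = F[:-1, :-1] - F[1:, 1:]    # F(u,v) - F(u+1,v+1)
    Fm = F[1:, :-1] - F[:-1, 1:]    # F(u+1,v) - F(u,v+1)
    return Fh, Fv, Fd, Fm

def transition_matrix(D, axis, step=1):
    """Transition probability matrix between D(u,v) and the element `step`
    positions away along `axis` (Eqs. (5)-(8) use step=1)."""
    D = np.clip(D, -T, T)           # assumption: values outside [-T, T] are saturated
    cur = D[:-step, :] if axis == 0 else D[:, :-step]
    nxt = D[step:, :] if axis == 0 else D[:, step:]
    M = np.zeros((2 * T + 1, 2 * T + 1))
    for i in range(-T, T + 1):
        mask = cur == i
        if mask.any():
            for j in range(-T, T + 1):
                M[i + T, j + T] = (mask & (nxt == j)).sum() / mask.sum()
    return M

def intra_block_markov_features(dct_coeffs):
    """F_avg of Eq. (9), computed from the matrix of absolute DCT coefficients."""
    Fh, Fv, _, _ = difference_arrays(np.abs(dct_coeffs))
    M1hh = transition_matrix(Fh, axis=0)   # F_h(u,v) -> F_h(u+1,v)
    M1hv = transition_matrix(Fh, axis=1)   # F_h(u,v) -> F_h(u,v+1)
    M1vh = transition_matrix(Fv, axis=0)   # F_v(u,v) -> F_v(u+1,v)
    M1vv = transition_matrix(Fv, axis=1)   # F_v(u,v) -> F_v(u,v+1)
    return (M1hh + M1hv + M1vh + M1vv) / 4.0
```

The result is a 9 x 9 matrix, which would typically be flattened into 81 features per image.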
-
Markov Feature Based on Inter-Block DCT Coefficients
Here, the Markov feature is extended to the neighbouring DCT coefficients across the inter-DCT blocks as in [8,17]. The horizontal and vertical difference arrays on the inter-block are determined as follows:

$$D_h(u,v) = F(u,v) - F(u+8,v) \qquad (10)$$
$$D_v(u,v) = F(u,v) - F(u,v+8) \qquad (11)$$

The transition probability matrices for the difference arrays on the inter-block are calculated as follows:

$$M2_{hh}(i,j) = \frac{\sum_{u=1}^{S_u-16}\sum_{v=1}^{S_v} \delta\big(D_h(u,v)=i,\ D_h(u+8,v)=j\big)}{\sum_{u=1}^{S_u-16}\sum_{v=1}^{S_v} \delta\big(D_h(u,v)=i\big)} \qquad (12)$$

$$M2_{hv}(i,j) = \frac{\sum_{u=1}^{S_u-8}\sum_{v=1}^{S_v-8} \delta\big(D_h(u,v)=i,\ D_h(u,v+8)=j\big)}{\sum_{u=1}^{S_u-8}\sum_{v=1}^{S_v-8} \delta\big(D_h(u,v)=i\big)} \qquad (13)$$

$$M2_{vh}(i,j) = \frac{\sum_{u=1}^{S_u-8}\sum_{v=1}^{S_v-8} \delta\big(D_v(u,v)=i,\ D_v(u+8,v)=j\big)}{\sum_{u=1}^{S_u-8}\sum_{v=1}^{S_v-8} \delta\big(D_v(u,v)=i\big)} \qquad (14)$$

$$M2_{vv}(i,j) = \frac{\sum_{u=1}^{S_u}\sum_{v=1}^{S_v-16} \delta\big(D_v(u,v)=i,\ D_v(u,v+8)=j\big)}{\sum_{u=1}^{S_u}\sum_{v=1}^{S_v-16} \delta\big(D_v(u,v)=i\big)} \qquad (15)$$

Then the average is taken as

$$F_{avg} = \{M2_{hh} + M2_{hv} + M2_{vh} + M2_{vv}\}/4 \qquad (16)$$
-
DWT Approximation Subband
Let WA denote the Haar DWT approximation subband, multiplied by 2. We prefer the Haar transform because it is real and orthogonal, unlike many other wavelet transforms. The horizontal and vertical difference arrays along the DWT approximation subband are calculated as follows:

$$WA_h(u,v) = WA(u,v) - WA(u,v-1) \qquad (17)$$
$$WA_v(u,v) = WA(u,v) - WA(u+1,v) \qquad (18)$$

The transition probability matrices are determined as follows:

$$M3_{hh}(i,j) = \frac{\sum_{u=1}^{SW_u-2}\sum_{v=1}^{SW_v} \delta\big(WA_h(u,v)=i,\ WA_h(u+1,v)=j\big)}{\sum_{u=1}^{SW_u-2}\sum_{v=1}^{SW_v} \delta\big(WA_h(u,v)=i\big)} \qquad (19)$$

$$M3_{hv}(i,j) = \frac{\sum_{u=1}^{SW_u-1}\sum_{v=1}^{SW_v-1} \delta\big(WA_h(u,v)=i,\ WA_h(u,v+1)=j\big)}{\sum_{u=1}^{SW_u-1}\sum_{v=1}^{SW_v-1} \delta\big(WA_h(u,v)=i\big)} \qquad (20)$$

$$M3_{vh}(i,j) = \frac{\sum_{u=1}^{SW_u-1}\sum_{v=1}^{SW_v-1} \delta\big(WA_v(u,v)=i,\ WA_v(u+1,v)=j\big)}{\sum_{u=1}^{SW_u-1}\sum_{v=1}^{SW_v-1} \delta\big(WA_v(u,v)=i\big)} \qquad (21)$$

$$M3_{vv}(i,j) = \frac{\sum_{u=1}^{SW_u}\sum_{v=1}^{SW_v-2} \delta\big(WA_v(u,v)=i,\ WA_v(u,v+1)=j\big)}{\sum_{u=1}^{SW_u}\sum_{v=1}^{SW_v-2} \delta\big(WA_v(u,v)=i\big)} \qquad (22)$$

Here $SW_u$ and $SW_v$ are the dimensions of the approximation subband. Then the average is taken as

$$F_{avg} = \{M3_{hh} + M3_{hv} + M3_{vh} + M3_{vv}\}/4 \qquad (23)$$
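A sketch of the inter-block features (Eqs. (10)-(16)) and the DWT approximation-subband features (Eqs. (17)-(23)) follows. It reuses the `transition_matrix` helper from the previous sketch; PyWavelets supplies the Haar transform, and rounding the real-valued subband differences to integers is our assumption, since the transition matrices are indexed by integer values in [-4, +4].

```python
import numpy as np
import pywt  # PyWavelets, for the Haar DWT

def inter_block_markov_features(dct_coeffs):
    """F_avg of Eq. (16): transitions between DCT differences 8 positions apart."""
    F = np.abs(dct_coeffs)
    Dh = F[:-8, :] - F[8:, :]                      # Eq. (10): F(u,v) - F(u+8,v)
    Dv = F[:, :-8] - F[:, 8:]                      # Eq. (11): F(u,v) - F(u,v+8)
    M2 = (transition_matrix(Dh, axis=0, step=8) +  # Eq. (12)
          transition_matrix(Dh, axis=1, step=8) +  # Eq. (13)
          transition_matrix(Dv, axis=0, step=8) +  # Eq. (14)
          transition_matrix(Dv, axis=1, step=8))   # Eq. (15)
    return M2 / 4.0

def dwt_subband_markov_features(pixels):
    """F_avg of Eq. (23): transitions along the Haar approximation subband."""
    # pywt's Haar cA equals (a+b+c+d)/2 per 2x2 block, i.e. the block average
    # scaled by 2, which appears to match the scaling described in the text.
    WA, _ = pywt.dwt2(pixels.astype(float), 'haar')
    WAh = np.round(WA[:, 1:] - WA[:, :-1])         # Eq. (17): WA(u,v) - WA(u,v-1)
    WAv = np.round(WA[:-1, :] - WA[1:, :])         # Eq. (18): WA(u,v) - WA(u+1,v)
    M3 = (transition_matrix(WAh, axis=0) +         # Eq. (19)
          transition_matrix(WAh, axis=1) +         # Eq. (20)
          transition_matrix(WAv, axis=0) +         # Eq. (21)
          transition_matrix(WAv, axis=1))          # Eq. (22)
    return M3 / 4.0
```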
-
Calibrating the Images
We adopt the calibration method of [17] in our paper to increase the detection accuracy of the steganalysis system. The calibrated image features have approximately the same statistical characteristics as those of the original cover image. The calibrated features are obtained as the difference between the features of the test image and those of the calibrated image:

$$F = |F_{test} - F_{cal}| \qquad (24)$$

where $F_{test}$ denotes the features extracted from the test image and $F_{cal}$ denotes the features extracted from the calibrated image.
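A minimal sketch of the calibration step (Eq. (24)), following the crop-and-recompress procedure of [17]. Here `extract_features` stands for any of the feature extractors above, and the 4-pixel crop and the `quality` setting are assumptions, since the paper does not spell these details out.

```python
import io
import numpy as np
from PIL import Image

def calibrated_features(test_jpeg_path, extract_features, quality=75):
    """Eq. (24): F = |F_test - F_cal|.  The calibrated image is obtained by
    cropping the decoded test image by 4 pixels (breaking the 8x8 block grid)
    and re-compressing it, as in [17]."""
    test_img = Image.open(test_jpeg_path).convert('L')
    w, h = test_img.size

    buf = io.BytesIO()
    test_img.crop((4, 4, w, h)).save(buf, format='JPEG', quality=quality)
    buf.seek(0)
    cal_img = Image.open(buf).convert('L')

    f_test = np.asarray(extract_features(test_img))
    f_cal = np.asarray(extract_features(cal_img))
    return np.abs(f_test - f_cal)
```

Ideally `quality` should match the quantisation of the original JPEG as closely as possible, so that differences between the two feature vectors reflect embedding rather than recompression.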
-
Classifier
-
Radial Basis Function Neural Network (RBFNN)
RBFNN is a feed-forward network used for pattern recognition, regression and time series prediction applications. The architecture of RBFNN is a three-layer network: the input layer is just a fan-out layer (no processing), the hidden layer applies a nonlinear transformation from the input space to the hidden space, and the output layer applies a linear transformation from the hidden space to the output space. A major problem with RBFNN is selecting the number of hidden neurons: if too many hidden neurons are chosen, it may lead to over-fitting, while if very few hidden neurons are chosen, the classifier may not learn well.
Fig. 1 A Traditional Radial Basis Function Network
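A minimal sketch of the network just described. Gaussian hidden units are assumed, as is common for RBFNNs, and the centres, widths and weights are taken as given, since the paper does not detail the training procedure.

```python
import numpy as np

class RBFNN:
    """Minimal Gaussian RBF network: a fan-out input layer, one hidden layer of
    M radial units with centres u_j and widths v_j, and a linear output layer."""

    def __init__(self, centers, widths, weights, bias=0.0):
        self.u = np.asarray(centers)    # (M, n) centres
        self.v = np.asarray(widths)     # (M,)   widths
        self.w = np.asarray(weights)    # (M,)   output weights
        self.b = bias

    def hidden(self, X):
        # Nonlinear map: exp(-||x - u_j||^2 / (2 v_j^2)) for every sample/centre pair
        d2 = ((X[:, None, :] - self.u[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.v ** 2))

    def predict(self, X):
        # Linear map from the hidden space to the output space
        return self.hidden(np.atleast_2d(X)) @ self.w + self.b
```

In practice the centres are often chosen by clustering the training features and the output weights by least squares; the number of hidden neurons is then selected with L-GEM, as described next.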
-
Localized Generalization Error Model (L-GEM)
RBFNN performance is primarily determined by its architecture selection, and L-GEM is introduced [9] for this purpose. The two major concepts in L-GEM are the Q-neighbourhood and the stochastic sensitivity measure. The Q-neighbourhood of a training sample $x_b$ [9] is defined as

$$S_Q(x_b) = \{x \mid x = x_b + \Delta x;\ |\Delta x_i| \le Q,\ i = 1,\dots,n\} \qquad (25)$$

where $Q$ is a real number, $\Delta x$ is the input perturbation and $x_b$ is a training sample.
The localized generalization error of the RBFNN [9] is defined as

$$R_{SM}(Q) = \int_{S_Q} \big(f(x) - F(x)\big)^2\, p(x)\, dx \qquad (26)$$

where $x$, $F(x)$, $p(x)$ and $S_Q$ denote the input vector of a sample, the true output, the true unknown probability density function of the input $x$, and the union of all $S_Q(x_b)$, respectively.
Let $\Delta y = f(x) - f(x_b)$ and $err(x_b) = f(x_b) - F(x_b)$. Then

$$R_{emp} = \frac{1}{N}\sum_{b=1}^{N}\big(err(x_b)\big)^2 \qquad (27)$$

$$E_{S_Q}\big((\Delta y)^2\big) = \frac{1}{N}\sum_{b=1}^{N}\int_{S_Q}\frac{(\Delta y)^2}{(2Q)^n}\,dx \qquad (28)$$

With probability $1-\eta$, the L-GEM provides an upper bound on the generalization error of the RBFNN [9]:

$$R_{SM}(Q) \le \Big(\sqrt{R_{emp}} + \sqrt{E_{S_Q}\big((\Delta y)^2\big)} + A\Big)^2 + \varepsilon = R^{*}_{SM}(Q) \qquad (29)$$

where $\varepsilon = B\sqrt{\ln\eta/(-2N)}$, and $E_{S_Q}((\Delta y)^2)$, $A$, $\Delta y$, $N$ and $B$ denote the stochastic sensitivity measure of the RBFNN, the difference between the maximum and minimum values of the target outputs, the output perturbation, the number of training samples and the maximum possible value of the MSE, respectively. Here the MSE is the mean square error, i.e. the measure of the difference between the classifier $f$ and the mapping function $F$.
Other architecture selection methods such as cross validation, sequential learning and ad-hoc methods [9] can also yield good classification accuracy, but L-GEM does so with fewer hidden neurons and less training time; this paves the way for the usage of the Localized Generalization Error Model (L-GEM) in this work.
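To make the selection procedure concrete, here is a small sketch of how the bound of Eq. (29) could be evaluated for a candidate architecture; the formula is our reconstruction of Eq. (29), and `r_emp` and `sensitivity` would come from Eqs. (27) and (30).

```python
import numpy as np

def lgem_bound(r_emp, sensitivity, A, B, N, eta=0.05):
    """R*_SM(Q) of Eq. (29), holding with probability 1 - eta:
    (sqrt(R_emp) + sqrt(E_SQ((dy)^2)) + A)^2 + eps,  eps = B*sqrt(ln(eta)/(-2N))."""
    eps = B * np.sqrt(np.log(eta) / (-2.0 * N))
    return (np.sqrt(r_emp) + np.sqrt(sensitivity) + A) ** 2 + eps

# Architecture selection (sketch): among candidate numbers of hidden neurons,
# train an RBFNN for each candidate, measure r_emp and the sensitivity, and
# keep the candidate whose trained model gives the smallest bound.
```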
The stochastic sensitivity measure can be calculated as follows [9]:

$$E_{S_Q}\big((\Delta y)^2\big) = \frac{Q^2}{3}\sum_{j=1}^{M}\varphi_j + \frac{0.2}{9}\,Q^4\,n\sum_{j=1}^{M}\psi_j \qquad (30)$$

where $M$ denotes the number of hidden neurons. Here $s_j = \|x - u_j\|^2$; then we have

$$\varphi_j = \xi_j\sum_{i=1}^{n}\big(\sigma_{x_i}^2 + (\mu_{x_i} - u_{ji})^2\big)\big/v_j^4 \qquad (31)$$

$$\psi_j = \xi_j\big/v_j^4 \qquad (32)$$

$$\xi_j = (w_j)^2\exp\!\Big(\frac{\operatorname{var}(s_j)}{2v_j^4} - \frac{E(s_j)}{v_j^2}\Big) \qquad (33)$$

$$\operatorname{var}(s_j) = \sum_{i=1}^{n}\Big(E_D\big((x_i-\mu_{x_i})^4\big) - \sigma_{x_i}^4 + 4E_D\big((x_i-\mu_{x_i})^3\big)(\mu_{x_i}-u_{ji}) + 4\sigma_{x_i}^2(\mu_{x_i}-u_{ji})^2\Big) \qquad (34)$$

$$E(s_j) = \sum_{i=1}^{n}\big(\sigma_{x_i}^2 + (\mu_{x_i}-u_{ji})^2\big) \qquad (35)$$

where $u_j$ and $v_j$ denote the center and width of the $j$-th RBFNN hidden neuron, $w_j$ the weight between the $j$-th hidden neuron and its corresponding output neuron, $\mu_{x_i}$ and $\sigma_{x_i}^2$ the expectation and variance of the $i$-th input feature, $u_{ji}$ the $i$-th input feature of the $j$-th hidden neuron center, and $D$ the training data set.
-
Results and Discussion
-
Image set
In our experiment, 1000 images were downloaded from http://www.freefoto.com, 1000 images were taken from the BOWS2 image database and 1000 images from the UCID database. All 3000 images were combined to form a new image database, and all of them were resized to 256 x 256 from the center of the original images. About 80% of the images were used for training and 20% for testing. Although all the images were colour images, we converted them into gray scale images in our experiment. Figure 2 shows some of the sample images used in this experiment.
Fig. 2 Sample Images used in this Experiment
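The image preparation described above can be sketched as follows; PIL, the random seed and the interpretation of "resized to 256 x 256 from the center" as a centre crop are our choices.

```python
import glob
import random
from PIL import Image

def prepare_image_set(paths, size=256, train_fraction=0.8, seed=1):
    """Centre-crop every image to size x size, convert to grayscale and split
    80/20 into training and test sets, as described in Section 5.1."""
    images = []
    for p in paths:
        img = Image.open(p).convert('L')                # grayscale
        w, h = img.size
        left, top = (w - size) // 2, (h - size) // 2
        images.append(img.crop((left, top, left + size, top + size)))
    random.Random(seed).shuffle(images)
    cut = int(train_fraction * len(images))
    return images[:cut], images[cut:]

# Example (hypothetical path): train, test = prepare_image_set(glob.glob('images/*.jpg'))
```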
-
Stego Image Generation
Stego images were generated using four embedding algorithms, namely MB1 [19], MB2 [20], F5 [14] and JSteg [16], at four different embedding rates: 5%, 10%, 15% and 20% bits per non-zero AC coefficient. Figure 3 shows a stego image generated using the JSteg embedding algorithm. The embedded messages used in this experiment are text and images.
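The stego images themselves are produced by the external tools listed above; the only quantity needed on our side is the message length implied by a given embedding rate, which is expressed in bits per non-zero AC coefficient. A small sketch follows; the blockwise layout of `dct_coeffs`, with DC terms at positions (8u, 8v), is an assumption about how the coefficients are stored.

```python
import numpy as np

def message_bits_for_rate(dct_coeffs, rate):
    """Number of message bits to embed for a rate given in bits per non-zero
    AC coefficient (e.g. rate=0.05 for 5% bpac)."""
    ac_mask = np.ones_like(dct_coeffs, dtype=bool)
    ac_mask[::8, ::8] = False                     # exclude the DC coefficient of every block
    n_nonzero_ac = np.count_nonzero(dct_coeffs[ac_mask])
    return int(rate * n_nonzero_ac)
```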
Fig. 3 Stego image generated using the JSteg embedding algorithm: the original image plus an embedded text message (a short description of the Ramakrishna Mission) gives the stego image
5.3 Experimental Results
In our experiment we evaluated the detection accuracy for different feature sets: the Markov features proposed in [22] and the modified Markov features proposed in this paper. The classifier used is the RBFNN with the Localized Generalization Error Model. In the proposed method the threshold value is set to 4. Experimental results show that the detection accuracy of the proposed method is better than that obtained with the Markov features proposed in [22]. The proposed method is tested against the MB1, MB2, F5 and JSteg algorithms. Table 1 shows the results obtained in our experiment; the detection accuracy of the modified Markov features dominates that of the features proposed in [22].
| Embedding Algorithm | Embedding Rate | Modified Markov features (this paper) | Markov features in [22] |
| --- | --- | --- | --- |
| MB1 | 5% | 98.9% | 98.5% |
| MB1 | 10% | 99.1% | 98.8% |
| MB1 | 15% | 99.4% | 99.1% |
| MB1 | 20% | 99.9% | 99.2% |
| MB2 | 5% | 96.2% | 92.6% |
| MB2 | 10% | 98.1% | 94.8% |
| MB2 | 15% | 98.9% | 96.1% |
| MB2 | 20% | 99.5% | 98.8% |
| JSteg | 5% | 95.4% | 94.71% |
| JSteg | 10% | 97.9% | 96.77% |
| JSteg | 15% | 98.89% | 98.23% |
| JSteg | 20% | 99.88% | 99.46% |
| F5 | 5% | 98.4% | 96.2% |
| F5 | 10% | 98.2% | 97.4% |
| F5 | 15% | 98.8% | 98.1% |
| F5 | 20% | 99.4% | 98.6% |

Table 1. Comparison of detection accuracy of the modified Markov features and the original Markov features
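For reference, the average gain of the modified Markov features over the features of [22] can be read off Table 1 programmatically:

```python
import numpy as np

# Detection accuracies from Table 1 (rows: 5%, 10%, 15%, 20% embedding rates).
modified = {'MB1':   [98.9, 99.1, 99.4, 99.9],
            'MB2':   [96.2, 98.1, 98.9, 99.5],
            'JSteg': [95.4, 97.9, 98.89, 99.88],
            'F5':    [98.4, 98.2, 98.8, 99.4]}
original = {'MB1':   [98.5, 98.8, 99.1, 99.2],
            'MB2':   [92.6, 94.8, 96.1, 98.8],
            'JSteg': [94.71, 96.77, 98.23, 99.46],
            'F5':    [96.2, 97.4, 98.1, 98.6]}

for algo in modified:
    gain = np.mean(np.array(modified[algo]) - np.array(original[algo]))
    print(f'{algo}: average gain of the modified Markov features = {gain:.2f} percentage points')
```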
-
Conclusion
In this paper, we improve steganalysis performance by expanding the Markov process to both the DCT and DWT domains and applying the L-GEM based RBFNN. The RBFNN is trained online with several training images. The percentage of correctly identified hidden information is more than 98%, an improvement over the previous work. The high performance of this algorithm shows that it can be used as an important tool in steganalysis for the detection of stego images. Future work may improve the feature extraction by investigating third-order Markov chains and merging further statistics of the DCT coefficients to improve the detection accuracy of blind steganalysis.
References
-
R. Chandramouli, M. Kharrazi and N. Memon, Image Steganography and Steganalysis: Concepts and Practice, Springer-Verlag, Lecture Notes in Computer Science, Digital Watermarking, Vol. 2939, pp. 204-211, 2004.
-
Kang Leng Chiew, Josef Pieprzyk, Blind Steganalysis: A Countermeasure for Binary Image Steganography, International Conference on Availability, Reliability and Security, 2010.
-
Alondra Gabriela Hernandez Chamorro and Mariko Nakano Miyatake A New Methodology Of Image Steganalysis including for JPEG Steganography, Electronics and Automotive Mechanics Conference, 2010
-
Wenqiong Yu, Zhuo Li, Lingdi Ping, Blind Detection for JPEG Steganography, International Conference on Networking and Information Technology, 2010.
-
Qingzhong Liu, Andrew H. Sung, Mengyu Qiao, Zhongxue Chen, Bernardete Ribeiro, An Improved Approach to Steganalysis of JPEG Images, Information Sciences, Vol. 180, pp. 1643-1655, 2010.
-
Mei-Ching Chen, Sos S. Agaian, C. L. Philip Chen, Benjamin M. Rodriguez, Steganography Detection Using RBFNN, International Conference on Machine Learning and Cybernetics, pp. 3720-3725, 2008.
-
Yun Q. Shi, Chunhua Chen, Wen Chen A Markov Process Based Approach to Effective Attacking JPEG Steganography, Information Science, Vol. 180, pp. 1643-1655, 2010
-
Tomas Pevny, Jessica Fridrich Merging Markov and DCT Features for Multi-Class JPEG Steganalysis, In Proceedings of SPIE : Security, Steganography and Watermarking of Multimedia Contents IX, San Jose, CA, USA, 2007
-
Daniel S. Yeung, Wing W. Y. Ng, Defeng Wang, Eric C. C. Tsang, Xi-Zhao Wang, Localized Generalization Error Model and Its Application to Architecture Selection for Radial Basis Function Neural Network, IEEE Trans. on Neural Networks, Vol. 18, pp. 1294-1305, 2007.
-
Siwei Lyu, Hany Farid, Steganalysis Using Higher-Order Image Statistics, IEEE Trans. on Information Forensics and Security, Vol. 1, pp. 111-119, 2006.
-
Jennifer Davidson, Clifford Bergman and Eric Barlett, An Artificial Neural Network for Wavelet Steganalysis, Proc. of SPIE: the International Society for Optical Engineering, Vol. 5916, Mathematical Methods in Pattern and Image Analysis, pp. 1-10, 2005.
-
Siwei Lyu, Hany Farid, Steganalysis Using Color Wavelet Statistics and One-Class Support Vector Machines, In Proc. SPIE Symp. Electronic Imaging, 2004.
-
Ismail Avcibas, Nasir Memon, Bulent Sankur, Image Steganalysis With Binary Similarity Measures, In Proc. of 2002 IEEE International Conference on Image Processing, Vol. 3, pp. 645-648, 2002.
-
Andreas Westfeld, F5: A Steganographic Algorithm. High Capacity Despite Better Steganalysis, I.S. Moskowitz (Ed.): IH 2001, LNCS 2137, pp. 289-302, 2001.
-
N. Provos, Defending against Statistical Steganalysis, 10th USENIX Security Symposium, Washington DC, USA, 2001.
-
D. Upham, Jsteg. Available from: ftp://ftp.funet.fi/pub/crypt/steganography/, 2002.
-
J. Fridrich, Feature-based steganalysis for JPEG images and its implications for future design of steganographic schemes, in Information Hiding, Springer, 2005, pp. 67-81.
-
J. Fridrich, M. Goljan, and D. Hogea, Steganalysis of JPEG images: Breaking the F5 algorithm, Lecture Notes in Computer Science, pp. 310-323, 2003.
-
P. Sallee, Model-based steganography, Digital Watermarking, pp. 254-260, 2004.
-
P. Sallee, Model Based Methods for Steganography and Steganalysis, International Journal of Image and Graphics, Vol. 5, No. 1, pp. 167-189, 2005.
-
S. Hetzl, Steghide, http://steghide.sourceforge.net/, 2003.
-
Wing W. Y. Ng, Zhi-Min He, Patrick P. K. Chan, Daniel S. Yeung, Blind Steganalysis With High-Generalization Capability for Different Image Databases Using L-GEM, In Proc. of the 2011 International Conference on Machine Learning and Cybernetics, Guilin, 10-13 July, 2011.
-
I. Avcibas, N. Memon, and B. Sankur, Steganalysis using image quality metrics, in E. Delp and P. W. Wong, editors, In Proc. of SPIE Electronic Imaging, Security and Watermarking of Multimedia Contents III, volume 4314, pages 523-531, 2001.
-
Harmsen, J. and Pearlman, W., Higher-order statistical steganalysis of palette images, In Security and Watermarking of Multimedia Contents V, Proc. SPIE 5020, pp. 131-142, 2003.
-
Ker, A., Steganalysis of LSB matching in grayscale images, IEEE Signal Processing Letters 12(6), pp. 441-444, 2005.
-
Xuan, G., Shi, Y., Gao, J., Zou, D., Yang, C., Zhang, Z., Chai, P., Chen, C., and Chen, W., Steganalysis based on multiple features formed by statistical moments of wavelet characteristic functions, In Proc. 7th Information Hiding Workshop, Springer LNCS 3727, pp. 262-277, 2005.
-
K. Sullivan, U. Madhow, S. Chandrasekaran, and B. S. Manjunath, Steganalysis of spread spectrum data hiding exploiting cover memory, in E. J. Delp and P. W. Wong, editors, Proc. SPIE, Electronic Imaging, Security, Steganography, and Watermarking of Multimedia Contents VII, volume 5681, pages 38-46, San Jose, CA, January 16-20, 2005.
-
http://bows2.gipsa-lab.inpg.fr/
-
http://www-staff.lboro.ac.uk/~cogs/datasets/UCID
-
http://www.freefoto.com