A Probability Distribution Kernel based on Whitening Transformation

Jiangsheng Gui, Yuanfeng Chi, Qing Zhang, Xiaoan Bao

College of Information, Zhejiang Sci-Tech University, Hangzhou 310018, China

Corresponding Author Email: dewgjs@126.com
Pages: 93-109
DOI: https://doi.org/10.18280/ama_b.600106
Received: 15 March 2017
Accepted: 15 April 2017

OPEN ACCESS

Abstract: 

A kernel function can make linearly inseparable vectors linearly separable by mapping the original feature space into another feature space. However, the dimension of the new feature space is usually several times that of the original one, which increases the computational complexity. This paper introduces a Dirichlet probability distribution kernel based on whitening transformation, called the DPWT kernel for short. The DPWT kernel first maps feature vectors of the original feature space into new vectors in a feature space of the same dimension, and then classifies the new vectors in that space, thereby classifying the original feature vectors. The DPWT kernel therefore does not increase the dimension of the feature space. Moreover, it effectively eliminates the correlation between vector components and reduces data redundancy, which further improves classification accuracy. In this paper, we compare the DPWT kernel with five other commonly used kernels on three benchmark datasets (VOC2007, UIUCsport and Caltech101) in image classification experiments. The experiments show that the DPWT kernel exhibits superior performance compared to the other state-of-the-art kernels.
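As a rough illustration of the approach described in the abstract, the sketch below (Python/NumPy) whitens a set of feature vectors without increasing their dimension and then builds a kernel (Gram) matrix from the whitened vectors. This is a minimal sketch only: the plain inner product used here is a placeholder, since the actual Dirichlet-probability-distribution-based similarity of the DPWT kernel is defined in the body of the paper, and the function names and parameters are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np


def fit_whitener(X, eps=1e-5):
    """Estimate a ZCA whitening transform from X (n_samples, n_features).

    Whitening decorrelates the feature components and rescales them to unit
    variance, without changing the dimension of the feature space.
    """
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return mean, W


def whiten(X, mean, W):
    """Apply the fitted whitening transform; the output has the same shape as X."""
    return (X - mean) @ W


def whitened_gram_matrix(X_train, X_test=None, eps=1e-5):
    """Gram matrix of a plain inner-product kernel computed on whitened vectors.

    NOTE: the inner product is only a stand-in; the DPWT kernel replaces it
    with a Dirichlet-probability-distribution-based similarity.
    """
    mean, W = fit_whitener(X_train, eps)
    A = whiten(X_train, mean, W)
    B = A if X_test is None else whiten(X_test, mean, W)
    return B @ A.T


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((100, 16))       # e.g., 100 image feature vectors of dimension 16
    K = whitened_gram_matrix(X)     # 100 x 100 kernel matrix
    print(K.shape)                  # usable by an SVM that accepts a precomputed kernel
```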

Keywords: 

whitening transformation, Dirichlet probability distribution, kernel function, image classification

1. Introduction
2. Foundations of the DPWT Kernel
3. DPWT Kernel
4. Parameter Optimization
5. Experiments
6. Conclusion and Further Work
  References
