Setting lower bounds on Jensen–Shannon divergence and its application to nearest neighbor document search
Vestnik Sankt-Peterburgskogo universiteta. Prikladnaâ matematika, informatika, processy upravleniâ, Vol. 14 (2018), no. 4, pp. 334-345
This article was harvested from the Math-Net.Ru source

The Jensen–Shannon divergence provides a mechanism for determining the nearest neighbours of a query document in a document collection. It is an effective measure, but exhaustive search can be time-consuming. In this paper we show that, by setting lower bounds on the Jensen–Shannon divergence, the amount of calculation can be reduced by up to 60% for exhaustive search and by up to 98% for approximate search, based on nearest neighbour search in a real-world document collection. In these experiments we used a corpus of 1 854 654 articles published in the New York Times from 1987-01-01 to 2007-06-19 (The New York Times Annotated Corpus); 100 documents selected at random from the same corpus served as queries. We assess the effect on performance through the reduction in the number of log function calculations. Approximate nearest neighbour search is based on clustering the documents with the Contextual Document Clustering algorithm: we find the set of cluster attractors that best match the query and limit the search for documents to the attractors' corresponding clusters.
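The paper's specific lower bounds are not reproduced here, but the following minimal Python sketch illustrates the general pruning idea, under the assumption that document language models are stored as sparse term-to-probability dictionaries (the function and variable names are illustrative, not taken from the paper). It relies on the fact that every per-term contribution to the Jensen–Shannon divergence is non-negative, so any partial sum is a valid lower bound; in particular, the probability mass of terms occurring in only one of the two documents gives a bound that needs no log evaluations at all, and a candidate can be discarded as soon as the running sum exceeds the best divergence found so far.

import math

LOG2 = math.log(2.0)

def jsd_with_cutoff(p, q, cutoff=float("inf")):
    # Jensen-Shannon divergence between two sparse term distributions
    # given as {term: probability} dicts, with early termination.
    # Every per-term contribution is non-negative, so the running partial
    # sum is a lower bound on the final value; once it exceeds `cutoff`
    # (the best divergence seen so far) the candidate can be pruned.
    # Returns the exact divergence (in nats), or None if pruned.

    # Terms present in only one distribution each contribute
    # 0.5 * mass * ln 2, so this part of the bound needs no per-term logs.
    p_only = sum(w for t, w in p.items() if t not in q)
    q_only = sum(w for t, w in q.items() if t not in p)
    total = 0.5 * (p_only + q_only) * LOG2
    if total > cutoff:
        return None

    # Shared terms require the full per-term computation.
    for t, pw in p.items():
        qw = q.get(t)
        if qw is None:
            continue
        m = 0.5 * (pw + qw)
        total += 0.5 * (pw * math.log(pw / m) + qw * math.log(qw / m))
        if total > cutoff:
            return None
    return total

def nearest_neighbour(query, documents):
    # Exhaustive 1-NN search over documents (dict: doc id -> distribution),
    # pruning candidates with the partial-sum lower bound.
    best_id, best_d = None, float("inf")
    for doc_id, dist in documents.items():
        d = jsd_with_cutoff(query, dist, cutoff=best_d)
        if d is not None and d < best_d:
            best_id, best_d = doc_id, d
    return best_id, best_d

In the approximate setting described above, the same routine would be applied only to the documents in the clusters whose attractors best match the query, rather than to the whole collection.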
Keywords: nearest neighbors search, dimensionality reduction, Jensen–Shannon divergence.
@article{VSPUI_2018_14_4_a5,
     author = {V. Yu. Dobrynin and N. Rooney and J. A. Serdyuk},
     title = {Setting lower bounds on {Jensen{\textendash}Shannon} divergence and its application to nearest neighbor document search},
     journal = {Vestnik Sankt-Peterburgskogo universiteta. Prikladna\^a matematika, informatika, processy upravleni\^a},
     pages = {334--345},
     year = {2018},
     volume = {14},
     number = {4},
     language = {en},
     url = {http://geodesic.mathdoc.fr/item/VSPUI_2018_14_4_a5/}
}
TY  - JOUR
AU  - V. Yu. Dobrynin
AU  - N. Rooney
AU  - J. A. Serdyuk
TI  - Setting lower bounds on Jensen–Shannon divergence and its application to nearest neighbor document search
JO  - Vestnik Sankt-Peterburgskogo universiteta. Prikladnaâ matematika, informatika, processy upravleniâ
PY  - 2018
SP  - 334
EP  - 345
VL  - 14
IS  - 4
UR  - http://geodesic.mathdoc.fr/item/VSPUI_2018_14_4_a5/
LA  - en
ID  - VSPUI_2018_14_4_a5
ER  - 
%0 Journal Article
%A V. Yu. Dobrynin
%A N. Rooney
%A J. A. Serdyuk
%T Setting lower bounds on Jensen–Shannon divergence and its application to nearest neighbor document search
%J Vestnik Sankt-Peterburgskogo universiteta. Prikladnaâ matematika, informatika, processy upravleniâ
%D 2018
%P 334-345
%V 14
%N 4
%U http://geodesic.mathdoc.fr/item/VSPUI_2018_14_4_a5/
%G en
%F VSPUI_2018_14_4_a5
V. Yu. Dobrynin; N. Rooney; J. A. Serdyuk. Setting lower bounds on Jensen–Shannon divergence and its application to nearest neighbor document search. Vestnik Sankt-Peterburgskogo universiteta. Prikladnaâ matematika, informatika, processy upravleniâ, Vol. 14 (2018), no. 4, pp. 334-345. http://geodesic.mathdoc.fr/item/VSPUI_2018_14_4_a5/
