Feature and instance reduction for PNN classifiers based on fuzzy rough sets

  • Original Article
International Journal of Machine Learning and Cybernetics

Abstract

Instance reduction for K-nearest-neighbor (KNN) classification rules has attracted much attention in recent years, yet most existing approaches lose the probabilistic semantics of the original data. In this work, we propose a new reduced KNN rule, called FAIR-KNN, which performs feature and instance reduction based on fuzzy rough set theory. First, we use fuzzy rough sets to evaluate candidate features and select the most informative ones. The feature selection algorithm returns the selected features together with the membership values of the samples to the lower approximations of their classes. These values reflect the distances of the samples from the classification boundary and are used to compute the probabilities with which samples are subsampled. We then introduce a weighted Parzen window technique to estimate probabilities from the weighted, subsampled data. In this way, both the features and the samples of the original data are reduced without losing the probabilistic semantics. Finally, the memberships of samples to the lower and upper approximations of the decisions are interpreted as the certainty and possibility degrees, respectively, of the samples belonging to the corresponding decisions. The probability-weighted averages of these lower- and upper-approximation memberships are then output as the certainty and possibility degrees of unseen samples belonging to each decision, which enriches the semantics of KNN. Numerical experiments on artificial and real-world data validate the effectiveness of the proposed technique.
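
For illustration only, the sketch below (not the authors' implementation) shows in Python how the pipeline described above could be organized: instances are subsampled with probabilities proportional to their membership to the fuzzy lower approximation of their own class, and a weighted Parzen window with a Gaussian kernel then scores a query point from the retained, weighted instances. The lower-approximation memberships are assumed to come from a preceding fuzzy-rough feature selection stage that is not shown here; the function names, bandwidth h, and keep rate are hypothetical choices.

```python
import numpy as np

def gaussian_kernel(sq_dist, h):
    """Isotropic Gaussian Parzen kernel with bandwidth h."""
    return np.exp(-sq_dist / (2.0 * h ** 2))

def subsample_by_membership(X, y, lower_membership, keep_rate, rng):
    """Draw a fraction of the instances without replacement, with
    probabilities proportional to their membership to the fuzzy lower
    approximation of their own class (a proxy for distance from the
    classification boundary, assumed to be produced by a fuzzy-rough
    feature selection step that is not shown here)."""
    p = lower_membership / lower_membership.sum()
    n_keep = max(1, int(keep_rate * len(X)))
    idx = rng.choice(len(X), size=n_keep, replace=False, p=p)
    return X[idx], y[idx], lower_membership[idx]

def weighted_parzen_scores(x, X_kept, y_kept, weights, h):
    """Per-class weighted Parzen window score for a query point x."""
    scores = {}
    for c in np.unique(y_kept):
        mask = y_kept == c
        sq_dist = np.sum((X_kept[mask] - x) ** 2, axis=1)
        scores[c] = np.sum(weights[mask] * gaussian_kernel(sq_dist, h)) / weights[mask].sum()
    return scores

# Hypothetical usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
# Stand-in for the lower-approximation memberships that the
# fuzzy-rough feature selection stage would normally supply.
mu_lower = np.clip(np.abs(X[:, 0] + X[:, 1]), 0.05, 1.0)
X_s, y_s, w_s = subsample_by_membership(X, y, mu_lower, keep_rate=0.3, rng=rng)
scores = weighted_parzen_scores(np.zeros(4), X_s, y_s, w_s, h=0.5)
predicted = max(scores, key=scores.get)
```

A full FAIR-KNN-style classifier would additionally report certainty and possibility degrees by averaging, with the same probability weights, the memberships of the retained neighbors to the lower and upper approximations of each decision.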


Author information

Corresponding author

Correspondence to Eric C. C. Tsang.

About this article

Cite this article

Tsang, E.C.C., Hu, Q. & Chen, D. Feature and instance reduction for PNN classifiers based on fuzzy rough sets. Int. J. Mach. Learn. & Cyber. 7, 1–11 (2016). https://doi.org/10.1007/s13042-014-0232-6

DOI: https://doi.org/10.1007/s13042-014-0232-6
