An Improved Dynamic Based Incremental Clustering in Affinity Propagation  
  Authors : P. Sujitha; R. Suresh

 

Affinity propagation based clustering assigns each object to a specific cluster through a message-passing technique. Affinity Propagation (AP) clustering has proven effective in a wide range of clustering problems. This paper considers how to apply AP to incremental clustering problems. We first discuss the difficulties that arise in Incremental Affinity Propagation (IAP) clustering, and then propose two techniques to solve them; correspondingly, two IAP clustering algorithms, IAPKM and IAPNA, are proposed. Five popular labeled data sets, real-world time series, and a video are used to test the performance of IAPKM and IAPNA, with standard AP clustering implemented to produce benchmark results. Experimental results show that IAPKM and IAPNA achieve clustering performance comparable to standard AP clustering on all of the data sets, while their time cost is dramatically reduced. Both the effectiveness and the efficiency make IAPKM and IAPNA well suited to incremental clustering tasks.
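As background to the abstract, standard AP identifies exemplars by iteratively exchanging "responsibility" and "availability" messages between points based on a similarity matrix. A minimal sketch of the standard (non-incremental) message-passing updates, not the IAPKM/IAPNA variants proposed in the paper:

```python
import numpy as np

def affinity_propagation(S, max_iter=200, damping=0.5):
    """Standard affinity propagation on a similarity matrix S.

    Diagonal entries of S hold the 'preference' of each point to be
    an exemplar. Returns, for each point, the index of its exemplar.
    """
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i, k)
    A = np.zeros((n, n))  # availabilities a(i, k)
    for _ in range(max_iter):
        # Responsibility: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf        # mask the max to find the runner-up
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * R_new
        # Availability: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        # and self-availability a(k,k) = sum_{i' != k} max(0, r(i',k))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())      # keep r(k,k) unclipped in the column sum
        A_new = Rp.sum(axis=0)[None, :] - Rp
        dA = A_new.diagonal().copy()
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, dA)
        A = damping * A + (1 - damping) * A_new
    # Each point's exemplar maximises a(i,k) + r(i,k)
    return (A + R).argmax(axis=1)
```

The incremental setting studied in the paper differs in that new points arrive after a clustering result already exists; the sketch above recomputes everything from scratch, which is exactly the cost IAPKM and IAPNA avoid.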

 

Published In : IJCAT Journal Volume 2, Issue 8

Date of Publication : August 2015

Pages : 288 - 292

Figures : 01

Tables : 01

Publication Link : An Improved Dynamic Based Incremental Clustering in Affinity Propagation

 

 

 

P. Sujitha : M.Tech 2nd year, Department of CSE, JNTUA, CREC Tirupati, AP, India

R. Suresh : Professor & Head, Department of CSE, JNTUA, CREC Tirupati, AP, India


Keywords : Cluster, Affinity Propagation, IAP, IAPNA, IAPKM

This paper describes a novel approach to incremental K-means clustering. The method stems from the pyramid K-means algorithm; in contrast to the pyramid approach, the sampling is done without replacement, and the sampling size is fixed. Because each data block is processed only once, two measures are applied to the data to compensate. First, the algorithm starts with a relatively large number of clusters and scales the number down in the last stage. Second, the dynamic approach can be used to mitigate another inherent problem of K-means, namely that the number of clusters has to be predetermined. In both cases the algorithms work on "chunks" of data referred to as blocks.
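The block-wise processing described above can be sketched as a running-mean update over chunks. This is an illustrative single-pass scheme under assumed details (fixed block size, centroids seeded from the first k points of the first block), not the authors' exact pyramid-based algorithm:

```python
import numpy as np

def incremental_kmeans(blocks, k):
    """Single-pass K-means sketch over data 'blocks' (chunks).

    Each block is seen exactly once; centroids are maintained as
    running weighted means of all points assigned to them so far.
    Seeding from the first k points of the first block is a naive
    assumption for illustration.
    """
    centroids = None
    counts = np.zeros(k)
    for block in blocks:
        block = np.asarray(block, dtype=float)
        if centroids is None:
            centroids = block[:k].copy()  # naive seeding from the first block
        # assign every point in the block to its nearest centroid
        d = ((block[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # fold the block into the running centroid means
        for j in range(k):
            pts = block[labels == j]
            if len(pts):
                total = counts[j] + len(pts)
                centroids[j] = (centroids[j] * counts[j] + pts.sum(0)) / total
                counts[j] = total
    return centroids
```

Starting with more clusters than needed, as the paragraph describes, would correspond to choosing a large k here and merging centroids in a final stage.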

