RoughKMeans_LW {SoftClustering}    R Documentation
Lingras & West's Rough k-Means
Description
RoughKMeans_LW performs Lingras & West's rough k-means clustering algorithm. The commonly accepted relative threshold is applied, i.e., an object is assigned to the upper approximation of every cluster whose mean lies within threshold times the object's distance to its closest cluster mean.
Usage
RoughKMeans_LW(dataMatrix, meansMatrix, nClusters, maxIterations, threshold, weightLower)
Arguments
dataMatrix
Matrix with the objects to be clustered. Dimension: [nObjects x nFeatures].
meansMatrix
Specifies the initial means: 1 = random means in the unit interval, 2 = means derived from maximum distances, or a matrix of dimension [nClusters x nFeatures] with self-defined means. Default: 2 = maximum distances.
nClusters
Number of clusters: integer in [2, nObjects). Note that nClusters must be set even when meansMatrix is a matrix; for transparency, nClusters is not overridden by the number of clusters derived from meansMatrix. Default: nClusters = 2.
maxIterations
Maximum number of iterations. Default: maxIterations = 100.
threshold
Relative threshold in rough k-means algorithms (threshold >= 1.0). Default: threshold = 1.5.
weightLower
Weight of the lower approximation in rough k-means algorithms (0.0 <= weightLower <= 1.0). Default: weightLower = 0.7.
Value
$upperApprox: Obtained upper approximations [nObjects x nClusters]. Note: apply the function createLowerMShipMatrix() to obtain the lower approximations; the boundary is given by boundary = upperApprox - lowerApprox.
$clusterMeans: Obtained means [nClusters x nFeatures].
$nIterations: Number of iterations.
Author(s)
M. Goetz, G. Peters, Y. Richter, D. Sacker, T. Wochinger.
References
Lingras, P. and West, C. (2004) Interval Set Clustering of web users with rough k-means. Journal of Intelligent Information Systems 23, 5–16. <doi:10.1023/b:jiis.0000029668.88665.1a>.
Peters, G. (2006) Some refinements of rough k-means clustering. Pattern Recognition 39, 1481–1491. <doi:10.1016/j.patcog.2006.02.002>.
Lingras, P. and Peters, G. (2011) Rough Clustering. WIREs Data Mining and Knowledge Discovery 1, 64–72. <doi:10.1002/widm.16>.
Lingras, P. and Peters, G. (2012) Applying rough set concepts to clustering. In: Peters, G.; Lingras, P.; Slezak, D. and Yao, Y. Y. (Eds.) Rough Sets: Selected Methods and Applications in Management and Engineering, Springer, 23–37. <doi:10.1007/978-1-4471-2760-4_2>.
Peters, G.; Crespo, F.; Lingras, P. and Weber, R. (2013) Soft clustering – fuzzy and rough approaches and their extensions and derivatives. International Journal of Approximate Reasoning 54, 307–322. <doi:10.1016/j.ijar.2012.10.003>.
Peters, G. (2014) Rough clustering utilizing the principle of indifference. Information Sciences 277, 358–374. <doi:10.1016/j.ins.2014.02.073>.
Peters, G. (2015) Is there any need for rough clustering? Pattern Recognition Letters 53, 31–37. <doi:10.1016/j.patrec.2014.11.003>.
Examples
# An illustrative example clustering the sample data set DemoDataC2D2a.txt
RoughKMeans_LW(DemoDataC2D2a, 2, 2, 100, 1.5, 0.7)
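# A further sketch illustrating the three initialization modes of meansMatrix.
# It assumes the package is attached, the sample data object DemoDataC2D2a is
# available, and that DemoDataC2D2a has two features; the means 0.2/0.8 below
# are illustrative values only.

# 1 = random initial means drawn from the unit interval
RoughKMeans_LW(DemoDataC2D2a, 1, 2, 100, 1.5, 0.7)

# 2 = initial means derived from maximum distances (default)
RoughKMeans_LW(DemoDataC2D2a, 2, 2, 100, 1.5, 0.7)

# self-defined initial means: a [nClusters x nFeatures] matrix; nClusters must still be set
ownMeans <- matrix(c(0.2, 0.2, 0.8, 0.8), nrow = 2, byrow = TRUE)
RoughKMeans_LW(DemoDataC2D2a, ownMeans, 2, 100, 1.5, 0.7)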