Clustering algorithms are designed to identify groups in data where the traditional emphasis has been on numeric data. In consequence, many existing algorithms are devoted to this kind of data even though a combination of numeric and categorical data is more common in most business applications. Recently, new algorithms for clustering mixed-type data have been proposed based on Huang’s k-prototypes algorithm. This paper describes the R package clustMixType which provides an implementation of k-prototypes in R.
Clustering algorithms are designed to identify groups in data where the
traditional emphasis has been on numeric data. In consequence, many
existing algorithms are devoted to this kind of data even though a
combination of numeric and categorical data is more common in most
business applications. For an example in the context of credit scoring,
see, e.g. Szepannek (2017). The standard way to tackle mixed-type data
clustering problems in R is to use either (1) Gower distance
(Gower 1971) via the
gower package (van der Loo 2017) or
the daisy() function with method = "gower" in the
cluster package
(Maechler et al. 2018); or (2) hierarchical clustering through hclust()
or the
agnes()
function in cluster. Recent innovations include the package
CluMix (Hummel et al. 2017), which
combines both Gower distance and hierarchical clustering with some
functions for visualization. As this approach requires computation of
distances between any two observations, it is not feasible for large
data sets. The package
flexclust (Leisch 2006)
offers a flexible framework for k-centroids clustering through the
function kcca()
which allows for arbitrary distance functions. Among
the currently pre-implemented kccaFamilies, there is no distance
measure for mixed-type data. Alternative approaches based on
expectation-maximization are given by the function flexmixedruns()
in
the fpc package (Hennig 2018) and
the package clustMD
(McParland 2017). Both require the variables in the data set to be ordered
according to their data type and categorical variables to be
preprocessed into integers. The clustMD algorithm (McParland and Gormley 2016) also
allows ordinal variables but is quite computationally intensive. The
kamila package (Foss and Markatou 2018)
implements the KAMILA clustering algorithm which uses a kernel density
estimation technique in the continuous domain, a multinomial model in
the categorical domain, and the Modha-Spangler weighting of variables in
which categorical variables have to be transformed into indicator
variables in advance (Modha and Spangler 2003).
Recently, more algorithms for clustering mixed-type data have been proposed in the literature (He et al. 2005; Amir and Dey 2007; Pham et al. 2011; Dutta et al. 2012; Ji et al. 2012, 2013, 2015; Lim et al. 2012; Foss et al. 2016; HajKacem et al. 2016; Liu et al. 2017). Many of these are based on the idea of Huang’s k-prototypes algorithm (Huang 1998). This paper describes the R package clustMixType (Szepannek 2018), which provides, to the author’s knowledge, the first implementation of this algorithm in R. The k-modes algorithm (Huang 1997a) has been implemented in the package klaR (Weihs et al. 2005; Roever et al. 2018) for purely categorical data, but not for the mixed-data case. The rest of the paper is organized as follows: a brief description of the algorithm is followed by an overview of the functions in the clustMixType package. Some extensions to the original algorithm are discussed, as well as a worked example application.
The k-prototypes algorithm belongs to the family of partitional cluster algorithms. Its objective function is given by: \[E = \sum_{i=1}^n \sum_{j=1}^k u_{ij} d\left(x_i, \mu_j\right),\] where \(x_i, i = 1, \ldots, n\) are the observations in the sample, \(\mu_j, j = 1, \ldots, k\) are the cluster prototype observations, and \(u_{ij}\) are the elements of the binary partition matrix \(U_{n \times k}\) satisfying \(\sum_{j=1}^k u_{ij} = 1,\, \forall\, i\). The distance function is given by: \[d(x_i, \mu_j) = \sum_{m=1}^q \left(x_i^m - \mu_j^m\right)^2 + \lambda \sum_{m=q+1}^p \delta\left(x_i^m,\mu_j^m\right),\] where \(m\) is an index over all variables in the data set, the first \(q\) variables being numeric and the remaining \(p-q\) variables categorical. Note that \(\delta(a,b) = 0\) for \(a=b\) and \(\delta(a,b) = 1\) for \(a \ne b\), so that \(d()\) corresponds to a weighted sum of squared Euclidean distance for the numeric variables and simple matching distance (i.e. the count of mismatches) for the categorical variables. The trade-off between both terms is controlled by the parameter \(\lambda\), which, like the number of clusters \(k\), has to be specified in advance. For larger values of \(\lambda\), the impact of the categorical variables increases. For \(\lambda = 0\), the impact of the categorical variables vanishes and only numeric variables are taken into account, just as in traditional k-means clustering.
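To make the distance definition concrete, the following minimal R sketch computes \(d()\) between one observation and one prototype. It is for illustration only; the function name mixed_dist() and its arguments are not part of the package.

# Illustrative sketch, not the package's internal implementation.
# obs, proto: one-row data frames; num, cat: column indices; lambda: weight of the categorical part.
mixed_dist <- function(obs, proto, num, cat, lambda) {
  d_num <- sum((unlist(obs[num]) - unlist(proto[num]))^2)            # squared Euclidean part
  d_cat <- sum(mapply(function(a, b) as.character(a) != as.character(b),
                      obs[cat], proto[cat]))                          # simple matching part
  d_num + lambda * d_cat
}

Applied to the worked example below, mixed_dist(x[1, ], kpres$centers[1, ], num = 3:4, cat = 1:2, lambda = kpres$lambda) should reproduce the corresponding entry of the dists matrix returned by kproto().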
The algorithm iterates in a manner similar to the k-means algorithm (MacQueen 1967), where for the numeric variables the cluster-specific mean and for the categorical variables the cluster-specific mode minimize the total within cluster distance. The steps of the algorithm are:

1. Initialization with random cluster prototypes.
2. For each observation do:
   a. Assign the observation to its closest prototype according to \(d()\).
   b. Update the cluster prototypes by cluster-specific means/modes for all variables.
3. As long as any observations have swapped their cluster assignment in step 2 or the maximum number of iterations has not been reached: repeat from step 2.
An implementation of the k-prototypes algorithm is given by the function
kproto(x, k, lambda = NULL, iter.max = 100, nstart = 1, na.rm = TRUE)
where

- x is a data frame with both numeric and factor variables. As opposed to other existing R packages, the factor variables do not need to be preprocessed in advance and the order of the variables does not matter.
- k is the number of clusters, which has to be pre-specified. Alternatively, it can also be a vector of observation indices or a data frame of prototypes with the same columns as x. If identical prototypes occur, either at initialization or during the iteration process, the number of clusters is reduced accordingly.
- lambda \(> 0\) is a real-valued parameter that controls the trade-off between Euclidean distance for numeric variables and simple matching distance for factor variables for cluster assignment. If no \(\lambda\) is specified, the parameter is set automatically based on the data and a heuristic using the function lambdaest(). Alternatively, a vector of length ncol(x) can be passed to lambda (cf. the section on extensions of the original algorithm below).
- iter.max sets the maximum number of iterations, just as in kmeans(). The algorithm may stop prior to iter.max if no observations swap clusters.
- nstart may be set to a value \(> 1\) to run k-prototypes multiple times. Similar to k-means, the result of k-prototypes depends on its initialization. If nstart \(> 1\), the best solution (i.e. the one that minimizes \(E\)) is returned.
- na.rm: generally, the algorithm can deal with missing data, but as a default NAs are removed by na.rm = TRUE.
Two additional arguments, verbose and keep.data, can control whether
information on missing values should be printed and whether the original
data should be stored in the output object. The keep.data = TRUE option
is required for the default call to the summary() function, but in
case of large x, it can be set to FALSE to save memory.
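As an illustration of the interface (the data frame dat and all argument values below are placeholders, not taken from the package documentation):

library(clustMixType)
# dat: a data frame containing both numeric and factor columns, in any order
res  <- kproto(dat, k = 4, nstart = 5)              # lambda is estimated automatically
res2 <- kproto(dat, k = 4, lambda = 2, iter.max = 200,
               keep.data = FALSE)                   # user-specified lambda, no data stored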
The output is an object of class "kproto"
. For convenience, the
elements are designed to be compatible with those of class "kmeans"
:
- cluster is an integer vector of cluster assignments
- centers stores the prototypes in a data frame
- size is a vector of the corresponding cluster sizes
- withinss returns, for each cluster, the sum over all within cluster distances to the prototype
- tot.withinss is their sum over all clusters, which corresponds to the objective function \(E\)
- dists returns a matrix of all observations’ distances to all prototypes, in order to investigate the crispness of the clustering
- lambda and iter store the specified arguments of the function call
- trace lists the objective function \(E\) as well as the number of swapped observations during the iteration process
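Continuing the illustrative call above, these elements can be accessed just as for a "kmeans" object:

res$size           # cluster sizes
res$centers        # prototypes (cluster-specific means and modes)
res$tot.withinss   # value of the objective function E
head(res$dists)    # distances of the first observations to all prototypes
res$trace          # E and number of swapped observations per iteration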
Unlike "kmeans"
, the "kproto"
class is accompanied corresponding
predict()
and summary()
methods. The predict.kproto()
method can
be used to assign clusters to new data. Like many of its cousins, it is
called by
predict(object, newdata)
The output again consists of two elements: a vector cluster
of cluster
assignments and a matrix dists
of all observations’ distances to all
prototypes.
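For example (newdat is assumed to be a data frame with the same columns as the training data; res is the illustrative object from above):

pred <- predict(res, newdata = newdat)
pred$cluster       # cluster assignments for the new observations
head(pred$dists)   # their distances to all prototypes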
The investigation resulting from a cluster analysis typically consists
of identifying the differences between the clusters, or in this specific
case, those of the k prototypes. For practical applications besides the
cluster sizes, it is of further interest to take into account the
homogeneity of the clusters. For numeric variables, this can be done by
calling the R function summary(). For categorical variables, the
representativity of the prototypes is given by their frequency distributions
obtained by prop.table()
. The summary.kproto()
method applies these
methods to the variables conditional to the resulting clusters and
returns a comparative results table of the clusters for each variable.
The summary is not restricted to the training data but it can further be
applied to new data by calling summary(object, data)
where data
are
new data that will be internally passed to the predict()
method on the
object
of class "kproto"
. If no new data is specified (default:
data = NULL
), the function requires object
to contain the original
data (argument keep.data = TRUE
). In addition, a function
clprofiles(object, x, vars = NULL)
supports the analysis of the clusters by visualization of cluster
profiles based on an object
of class "kproto"
and data x
. Note
that the latter may also have different variables compared to the
original data, such as for profiling variables that were not used for
clustering. As opposed to summary.kproto()
, no new prediction is done
but the cluster assignments of the "kproto"
object given by
object$cluster
are used. For this reason, the observations in x
must
be the same as in the original data. Via the vars
argument, a subset
of variables can be specified either by a vector of indices or variable
names. Note that clprofiles()
is not restricted to objects of class
"kproto"
but can also be applied to other cluster objects as long as
they are of a "kmeans"
-like structure with elements cluster
and
size
.
For unsupervised clustering problems, the user typically has to specify the impact of the specific variables on the desired cluster solution which is controlled by the parameter \(\lambda\). Generally, small values \(\lambda \sim 0\) emphasize numeric variables and will lead to results similar to standard k-means whereas larger values of \(\lambda\) lead to an increased influence of categorical variables. In Huang (1997b), the average standard deviation \(\sigma\) of the numeric variables is the suggested choice for \(\lambda\) and in some practical applications in the paper values of \(\frac{1}{3}\sigma \le \lambda \le \frac{2}{3}\sigma\) are used. The function
lambdaest(x, num.method = 1, fac.method = 1, outtype = "numeric")
provides different data-based heuristics for the choice of \(\lambda\):
The average variance \(\sigma^2\) (num.method = 1
) or standard deviation
\(\sigma\) (num.method = 2
) over all numeric variables is related to the
average concentration \(h_{cat}\) of all categorical variables. We compute
\(h_{cat}\) by averaging either \(h_m = 1-\sum_c p_{mc}^2\)
(fac.method = 1
) or \(h_m = 1-\max_c p_{mc}\) (fac.method = 2
) over
all variables \(m\) where \(c\) are the categories of the factor variables.
We set \(\lambda = \frac{\sigma^t}{h_{cat}}, t \in\{1,2\}\) as a
user-friendly default choice to prevent over-emphasizing either numeric
or categorical variables. If kproto()
is called without specifying
lambda
, the parameter is automatically set using
num.method = fac.method = 1
. Note that this should be considered a
starting point for further analysis; the explicit choice of \(\lambda\)
should be done carefully based on the application context.
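For example, the different heuristics can be compared on the data before clustering (dat as in the illustrative calls above):

lambdaest(dat)                                  # default: average variance vs. 1 - sum(p^2)
lambdaest(dat, num.method = 2, fac.method = 2)  # average sd vs. 1 - max(p)
res <- kproto(dat, k = 4, lambda = lambdaest(dat))  # essentially what kproto() does when lambda is unspecified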
Originally, \(\lambda\) is real-valued, but in order to up- or downweight
the relevance of single variables in a specific application context, the
function kproto()
is extended to accept vectors as input where each
element corresponds to a variable specific weight, \(\lambda_m\). The
formula for distance computation changes to:
\[d(x_i, \mu_j) = \sum_{m=1}^q \lambda_m \left(x_i^m - \mu_j^m\right)^2 + \sum_{m=q+1}^p \lambda_m \delta\left(x_i^m,\mu_j^m\right).\]
Note that the choice of \(\lambda\) only affects the assignment step for
the observations but not the computation of the prototype given a
cluster of observations. By changing the outtype argument, the function
lambdaest() can also return a vector of \(\lambda_m\)s instead of a single value. In
order to support a user-specific definition of \(\lambda\) based on the
variables’ variabilities, outtype = "variation"
returns a vector of
original values of variability for all variables in terms of the
quantities described above.
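A sketch of variable-specific weighting (the weights chosen here are arbitrary illustrative values; dat as above):

lambdaest(dat, outtype = "variation")     # per-variable variabilities as a starting point
w <- rep(1, ncol(dat))
w[1] <- 2                                 # up-weight the first variable
res_w <- kproto(dat, k = 4, lambda = w)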
An issue of great practical relevance is the ability of an algorithm to
deal with missing values which can be solved by an intuitive extension
of k-prototypes. During the iterations, cluster prototypes can be
updated by ignoring NA
s: Both means for numeric variables as well as
modes for factors can be computed based on the available data.
Similarly, distances for cluster assignment can be computed for each
observation based on the available variables only. This not only allows
cluster assignment for observations with missing values, but already
takes these observations into account when the clusters are formed. In
kproto(), this behaviour is obtained by setting the argument
na.rm = FALSE. Note that in practice, this option should be handled
with care unless the number of missing values is very small. The
representativeness of a prototype might become questionable if its means
and modes do not represent the major part of the cluster’s observations.
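A minimal sketch (the injected missing values are artificial; dat as in the earlier illustrative calls):

dat_na <- dat
dat_na[sample(nrow(dat_na), 5), 1] <- NA          # introduce a few missing values
res_na <- kproto(dat_na, k = 4, na.rm = FALSE)    # NAs are kept; distances use available variables only
res_na$cluster[is.na(dat_na[, 1])]                # affected observations still receive a cluster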
Finally, a modification of the original algorithm presented in Section 2 allows for vector-wise computation of the iteration steps, reducing computation time. The update of the prototypes is not done after each new cluster assignment, but once each time the whole data set has been reassigned. The modified k-prototypes algorithm consists of the following steps:
1. Initialization with random cluster prototypes.
2. Assign all observations to their closest prototypes according to \(d()\).
3. Update the cluster prototypes.
4. As long as any observations have swapped their cluster assignment in step 2 or the maximum number of iterations has not been reached: repeat from step 2.
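This batch iteration can be sketched in a few lines of R. The following toy re-implementation is for illustration only (complete data assumed, empty clusters not handled); the package's kproto() remains the reference implementation.

# toy_kproto(): illustrative batch k-prototypes, num/cat are numeric/factor column indices
toy_kproto <- function(x, k, lambda, num, cat, iter.max = 100) {
  protos <- x[sample(nrow(x), k), , drop = FALSE]   # 1. random initial prototypes
  clus <- rep(0, nrow(x))
  for (iter in seq_len(iter.max)) {
    # 2. assign every observation to its closest prototype according to d()
    d <- sapply(seq_len(k), function(j) {
      dn <- rowSums(sweep(as.matrix(x[num]), 2, unlist(protos[j, num]))^2)
      dc <- rowSums(as.matrix(sapply(cat, function(m)
                    as.character(x[[m]]) != as.character(protos[j, m]))))
      dn + lambda * dc
    })
    new_clus <- max.col(-d)                         # index of the smallest distance per row
    if (all(new_clus == clus)) break                # 4. stop if no observation has swapped
    clus <- new_clus
    # 3. update prototypes by cluster-specific means / modes
    for (j in seq_len(k)) {
      protos[j, num] <- colMeans(x[clus == j, num, drop = FALSE])
      protos[j, cat] <- lapply(x[clus == j, cat, drop = FALSE],
                               function(v) names(which.max(table(v))))
    }
  }
  list(cluster = clus, centers = protos)
}
# e.g. toy_kproto(x, k = 4, lambda = 2, num = 3:4, cat = 1:2) for the example data below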
As an example, data x
with two numeric and two categorical variables
are simulated according to the documentation in ?kproto
: Using
set.seed(42)
, four clusters \(j = 1, \ldots, 4\) are designed such that
two pairs can be separated only by their numeric variables and the other
two pairs only by their categorical variables. The numeric variables are
generated as normally distributed random variables with cluster specific
means \(\mu_1 = \mu_2 = -\mu_3 = -\mu_4 = \Phi^{-1}(0.9)\) and the
categorical variables have two levels (\(A\) and \(B\)) each with a cluster
specific probability \(p_1(A) = p_3(A) = 1-p_2(A) = 1-p_4(A) = 0.9\).
Table 1 summarizes the clusters. It can be seen that
both numeric and categorical variables are needed in order to identify
all four clusters.
| cluster | numeric | categorical |
|---------|---------|-------------|
| 1       | +       | +           |
| 2       | +       | -           |
| 3       | -       | +           |
| 4       | -       | -           |
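Such data can be generated roughly along the following lines. This is only a sketch based on the description above (100 observations per cluster and the vector of true labels clusid are assumptions); the exact code is given in ?kproto, so the outputs shown below need not be reproduced identically by this sketch.

set.seed(42)
n      <- 100                          # assumed observations per cluster
mu     <- qnorm(0.9)                   # cluster-specific mean Phi^{-1}(0.9)
clusid <- rep(1:4, each = n)           # true cluster memberships

# numeric variables: clusters 1, 2 centred at +mu, clusters 3, 4 at -mu
x3 <- rnorm(4 * n, mean = rep(c(mu, mu, -mu, -mu), each = n))
x4 <- rnorm(4 * n, mean = rep(c(mu, mu, -mu, -mu), each = n))
# categorical variables: clusters 1, 3 favour level A, clusters 2, 4 favour level B
pA <- rep(c(0.9, 0.1, 0.9, 0.1), each = n)
x1 <- factor(ifelse(runif(4 * n) < pA, "A", "B"))
x2 <- factor(ifelse(runif(4 * n) < pA, "A", "B"))
x  <- data.frame(x1, x2, x3, x4)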
Given the knowledge that there are four clusters in the data, a straightforward call for k-prototypes clustering of the data will be:
kpres <- kproto(x = x, k = 4)
kpres          # output 1
summary(kpres) # output 2
library(wesanderson)
par(mfrow=c(2,2))
clprofiles(kpres, x, col = wes_palette("Royal1", 4, type = "continuous")) # figure 1
The resulting output is of the form:
# Output 1:
Numeric predictors: 2
Categorical predictors: 2
Lambda: 5.52477

Number of Clusters: 4
Cluster sizes: 100 95 101 104
Within cluster error: 332.909 267.1121 279.2863 312.7261

Cluster prototypes:
    x1 x2         x3        x4
92   A  A  1.4283725  1.585553
54   A  A -1.3067973 -1.091794
205  B  B -1.4912422 -1.654389
272  B  B  0.9112826  1.133724
# Output 2 (only for variables x1 and x3):
x1
cluster     A     B
      1 0.890 0.110
      2 0.905 0.095
      3 0.069 0.931
      4 0.144 0.856
-----------------------------------------------------------------
x3
     Min. 1st Qu.  Median    Mean 3rd Qu.   Max.
1 -0.7263  0.9314  1.4080  1.4280  2.1280 4.5110
2 -3.4200 -1.9480 -1.3170 -1.3070 -0.6157 2.2450
3 -4.2990 -2.0820 -1.4600 -1.4910 -0.7178 0.2825
4 -1.5300  0.2788  0.9296  0.9113  1.5000 3.1480
-----------------------------------------------------------------
The first two as well as the last two cluster prototypes share the same
mode in the factor variables but they can be distinguished by their
location with respect to the numeric variables. For clusters 1 & 4 (as
opposed to 2 & 3), it is vice versa. Note that the order of the
identified clusters is not necessarily the same as in cluster
generation. Calling summary()
and clprofiles()
provides further
information on the homogeneity of the clusters. A color palette can be
passed to represent the clusters across the different variables for the
plot; here it is taken from the package
wesanderson
(Ram and Wickham 2018).
By construction, taking into account either only the numeric variables (k-means) or only the factor variables (k-modes) will not identify the underlying cluster structure in this example without further preprocessing of the data. A performance comparison using the Rand index (Rand 1971) as computed by the package clusteval (Ramey 2012) results in Rand indices of \(0.728\) (k-means) and \(0.733\) (k-modes). As already seen above, the prototypes identified by clustMixType represent the true clusters quite well, and the corresponding Rand index improves to \(0.870\).
library(klaR)
library(clusteval)
kmres  <- kmeans(x[,3:4], 4)  # kmeans using numerics only
kmores <- kmodes(x[,1:2], 4)  # kmodes using factors only
cluster_similarity(kpres$cluster, clusid, similarity = "rand")
cluster_similarity(kmres$cluster, clusid, similarity = "rand")
cluster_similarity(kmores$cluster, clusid, similarity = "rand")
The runtime of kproto()
is linear in the number of observations
(Huang 1997b) and thus it is also applicable to large data sets.
Figure 2 (left) shows the behaviour of the runtime
for 50 repeated simulations of the example data with an increased number
of variables (half of them numeric and half of them categorical). It is
possible to run kproto()
for more than a hundred variables, which is far
beyond most practical applications where human interpretation of the
resulting clusters is desired. Note that clustMixType is written in R
and currently no C++ code is used to speed up computations, which could
be a subject of future work.
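The linear scaling in the number of observations can be checked roughly with base R timing (illustrative; absolute numbers depend on hardware and the data at hand):

xx <- x[rep(1:nrow(x), 10), ]                     # inflate the example data tenfold
system.time(kproto(x,  k = 4, verbose = FALSE))
system.time(kproto(xx, k = 4, verbose = FALSE))   # should take roughly ten times as long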
In order to determine the appropriate number of clusters for a data set,
we can use the standard scree test. In this case, the objective function
\(E\) is given by the output’s tot.withinss
element. The kproto()
function is run multiple times for varying numbers of clusters (but
fixed \(\lambda\)) and the number of clusters is chosen as the minimum \(k\)
from whereon no strong improvements of \(E\) are possible. In
Figure 2 (right), an elbow is visible at the
correct number of clusters in the sample data; recall that we simulated
from four clusters. Note that from a practitioner’s point of view, an
appropriate solution requires clusters that are well represented by
their prototypes. For this reason, the choice of the number of clusters
should further take into account a homogeneity analysis of the clusters
as returned by summary()
, clprofiles()
or the withinss
element of
the "kproto"
output.
Es <- numeric(10)
for(i in 1:10){
  kpres <- kproto(x, k = i, nstart = 5)
  Es[i] <- kpres$tot.withinss
}
plot(1:10, Es, type = "b", ylab = "Objective Function", xlab = "# Clusters",
     main = "Scree Plot") # figure 2
The clustMixType package provides a user-friendly way of clustering mixed-type data in R by means of the k-prototypes algorithm. As opposed to other packages, no preprocessing of the data is necessary, and in contrast to standard hierarchical approaches, it is not restricted to moderate data sizes. Its application requires the specification of two hyperparameters: the number of clusters \(k\) as well as a second parameter \(\lambda\) that controls the interplay of the different data types for distance computation. As an extension to the original algorithm, the presented implementation allows for a variable-specific choice of \(\lambda\) and can deal with missing data. Furthermore, with regard to business purposes, functions for profiling a cluster solution are provided.
This paper is based on clustMixType version 0.1-36. Future work may focus on the development of further guidance regarding the choice of the parameter \(\lambda\), such as using stability considerations (cf. Hennig 2007), or the investigation of possible improvements in computation time by integrating Rcpp (Eddelbuettel and François 2011; Eddelbuettel et al. 2018).