Most clustering strategies have not changed considerably since their initial definition. The common improvements are either related to the distance measure used to assess dissimilarity, or the function used to calculate prototypes. Time-series clustering is no exception, with the Dynamic Time Warping distance being particularly popular in that context. This distance is computationally expensive, so many related optimizations have been developed over the years. Since no single clustering algorithm can be said to perform best on all datasets, different strategies must be tested and compared, so a common infrastructure can be advantageous. In this manuscript, a general overview of shape-based time-series clustering is given, including many specifics related to Dynamic Time Warping and associated techniques. At the same time, a description of the dtwclust package for the R statistical software is provided, showcasing how it can be used to evaluate many different time-series clustering procedures.
Cluster analysis is a task that concerns itself with the creation of groups of objects, where each group is called a cluster. Ideally, all members of the same cluster are similar to each other, but are as dissimilar as possible from objects in a different cluster. There is no single definition of a cluster, and the characteristics of the objects to be clustered vary. Thus, there are several algorithms to perform clustering. Each one uses a specific notion of what a cluster is, how to measure similarities, how to find groups efficiently, etc. Additionally, each application might have different goals, so a certain clustering algorithm may be preferred depending on the type of clusters sought (Kaufman and Rousseeuw 1990).
Clustering algorithms can be organized differently depending on how they handle the data and how the groups are created. When it comes to static data, i.e., if the values do not change with time, clustering methods can be divided into five major categories: partitioning (or partitional), hierarchical, density-based, grid-based, and model-based methods (Liao 2005; Rani and Sikka 2012). They may be used as the main algorithm, as an intermediate step, or as a preprocessing step (Aghabozorgi et al. 2015).
Time-series are a common type of dynamic data that naturally arise in many different scenarios, such as stock data, medical data, and machine monitoring, just to name a few (Aggarwal and Reddy 2013; Aghabozorgi et al. 2015). They pose some challenging issues due to the large size and high dimensionality commonly associated with time-series (Aghabozorgi et al. 2015). In this context, dimensionality of a series is related to time, and it can be understood as the length of the series. Additionally, a single time-series object may be constituted by several values that change on the same time scale, in which case they are identified as multivariate time-series.
There are many techniques to modify time-series in order to reduce dimensionality, and they mostly deal with the way time-series are represented. Changing representation can be an important step, not only in time-series clustering, and it constitutes a wide research area on its own (cf. Table 2 in Aghabozorgi et al. (2015)). While the choice of representation can directly affect clustering, it can be considered as a different step, and as such it will not be discussed further here.
Time-series clustering is a type of clustering algorithm made to handle dynamic data. The most important elements to consider are the (dis)similarity or distance measure, the prototype extraction function (if applicable), the clustering algorithm itself, and cluster evaluation (Aghabozorgi et al. 2015). In many cases, algorithms developed for time-series clustering take static clustering algorithms and either modify the similarity definition, or the prototype extraction function to an appropriate one, or apply a transformation to the series so that static features are obtained (Liao 2005). Therefore, the underlying basis for the different clustering procedures remains approximately the same across clustering methods. The most common approaches are hierarchical and partitional clustering (cf. Table 4 in Aghabozorgi et al. (2015)), the latter of which includes fuzzy clustering.
Aghabozorgi et al. (2015) classify time-series clustering algorithms based on the way they treat the data and how the underlying grouping is performed. One classification depends on whether the whole series, a subsequence, or individual time points are to be clustered. On the other hand, the clustering itself may be shape-based, feature-based, or model-based. Aggarwal and Reddy (2013) make an additional distinction between online and offline approaches, where the former usually deals with grouping incoming data streams on-the-go, while the latter deals with data that no longer change.
In the context of shape-based time-series clustering, it is common to utilize the Dynamic Time Warping (DTW) distance as dissimilarity measure (Aghabozorgi et al. 2015). The calculation of the DTW distance involves a dynamic programming algorithm that tries to find the optimum warping path between two series under certain constraints. However, the DTW algorithm is computationally expensive, both in time and memory utilization. Over the years, several variations and optimizations have been developed in an attempt to accelerate or optimize the calculation. Some of the most common techniques will be discussed in more detail in Section 2.1.
The choice of time-series representation, preprocessing, and clustering algorithm has a big impact on performance with respect to cluster quality and execution time. Similarly, different programming languages have different runtime characteristics and user interfaces, and even though many authors make their algorithms publicly available, combining them is far from trivial. As such, it is desirable to have a common platform on which clustering algorithms can be tested and compared against each other. The dtwclust package, developed for the R statistical software, and part of its TimeSeries view, provides such functionality, and includes implementations of recently developed time-series clustering algorithms and optimizations. It serves as a bridge between classical clustering algorithms and time-series data, additionally providing visualization and evaluation routines that can handle time-series. All of the included algorithms are custom implementations, except for the hierarchical clustering routines. A great amount of effort went into implementing them as efficiently as possible, and the functions were designed with flexibility and extensibility in mind.
Most of the included algorithms and optimizations are tailored to the DTW distance, hence the package’s name. However, the main clustering function is flexible so that one can test many different clustering approaches, using either the time-series directly, or by applying suitable transformations and then clustering in the resulting space. We will describe the new algorithms that are available in dtwclust, mentioning the most important characteristics of each, and showing how the package can be used to evaluate them, as well as how other packages complement it. Additionally, the variations related to DTW and other common distances will be explored.
There are many available R packages for data clustering. The flexclust package (Leisch 2006) implements many partitional procedures, while the cluster package (Maechler et al. 2019) focuses more on hierarchical procedures and their evaluation; neither of them, however, is specifically targeted at time-series data. Packages like TSdist (Mori et al. 2016) and TSclust (Montero and Vilar 2014) focus solely on dissimilarity measures for time-series, the latter of which includes a single algorithm for clustering based on \(p\)-values. Another example is the pdc package (Brandmaier 2015), which implements a specific clustering algorithm, namely one based on permutation distributions. The dtw package (Giorgino 2009) implements extensive functionality with respect to DTW, but does not include the lower bound techniques that can be very useful in time-series clustering. New clustering algorithms like k-Shape (Paparrizos and Gravano 2015) and TADPole (Begum et al. 2015) are available to the public upon request, but were implemented in MATLAB, making their combination with other R packages cumbersome. Hence, the dtwclust package is intended to provide a consistent and user-friendly way of interacting with classic and new clustering algorithms, taking into consideration the nuances of time-series data.
The rest of this manuscript presents the different logical units required for a time-series clustering workflow, and specifies how they are implemented in dtwclust. These build on top of each other and are not entirely independent, so their coherent combination is critical. The information relevant to the distance measures will be presented in Section 2. Supported algorithms for prototype extraction will be discussed in Section 3. The main clustering algorithms will be introduced in Section 4. Information regarding cluster evaluation will be provided in Section 5. The provided tools for a complete time-series clustering workflow will be described in Section 6, and the final remarks will be given in Section 7. Note that code examples are intentionally brief, and do not necessarily represent a thorough procedure to choose or evaluate a clustering algorithm. The data used in all examples is included in the package (saved in a list called CharTraj), and is a subset of the character trajectories dataset found in Lichman (2013): they are pen tip trajectories recorded for individual characters, and the subset contains 5 examples of the \(x\) velocity for each considered character.
Distance measures provide quantification for the dissimilarity between two time-series. Calculating distances, as well as cross-distance matrices, between time-series objects is one of the cornerstones of any time-series clustering algorithm. The proxy package (Meyer and Buchta 2019) provides an extensible framework for these calculations, and is used extensively by dtwclust; Section 2.5 will elaborate in this regard.
The \(l_1\) and \(l_2\) vector norms, also known as Manhattan and Euclidean distances respectively, are the most commonly used distance measures, and are arguably the only competitive \(l_p\) norms when measuring dissimilarity (Aggarwal et al. 2001; Lemire 2009). They can be efficiently computed, but are only defined for series of equal length and are sensitive to noise, scale, and time shifts. Thus, many other distance measures tailored to time-series have been developed in order to overcome these limitations, as well as other challenges associated with the structure of time-series, such as multiple variables, serial correlation, etc.
In the following sections a description of the distance functions included in dtwclust will be provided; these are associated with shape-based time-series clustering, and either support DTW or provide an alternative to it. The included distances are a basis for some of the prototyping functions described in Section 3, as well as the clustering routines from Section 4, but there are many other distance measures that can be used for time-series clustering and classification (Montero and Vilar 2014; Mori et al. 2016). It is worth noting that, even though some of these distances are also available in other R packages, e.g., DTW in dtw or Keogh’s DTW lower bound in TSdist (see Section 2.1), the implementations in dtwclust are optimized for speed, since all of them are implemented in C++ and have custom loops for computation of cross-distance matrices, including multi-threading support; refer to Section 2.6 for more information.
To facilitate notation, we define a time-series as a vector (or set of vectors in the case of multivariate series) \(x\). Each vector must have the same length for a given time-series. In general, \(x^v_i\) represents the \(i\)-th element of the \(v\)-th variable of the (possibly multivariate) time-series \(x\). We will assume that all elements are equally spaced in time, so that the time index can be omitted.
DTW is a dynamic programming algorithm that compares two series and tries to find the optimum warping path between them under certain constraints, such as monotonicity (Berndt and Clifford 1994). It started being used by the data mining community to overcome some of the limitations associated with the Euclidean distance (Ratanamahatana and Keogh 2004).
The easiest way to get an intuition of what DTW does is graphically. Figure 1 shows the alignment between two sample time-series \(x\) and \(y\). In this instance, the initial and final points of the series must match, but other points may be warped in time in order to find better matches.
DTW is computationally expensive. If \(x\) has length \(n\) and \(y\) has length \(m\), the DTW distance between them can be computed in \(O(nm)\) time, which is quadratic if \(m\) and \(n\) are similar. Additionally, DTW is prone to implementation bias since its calculations are not easily vectorized and tend to be very slow in non-compiled programming languages. A custom implementation of the DTW algorithm is included with dtwclust in the dtw_basic function, which has only basic functionality but still supports the most common options, and it is faster (see Section 2.6).
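As a minimal sketch (using two series from the CharTraj data included in the package; argument defaults as in recent versions of dtwclust and dtw), both implementations can be compared directly:

```r
library("dtwclust")

x <- CharTraj[[1L]]
y <- CharTraj[[2L]]

# Custom implementation in dtwclust; returns a single numeric value
dtw_basic(x, y)

# Reference implementation from the dtw package; both use the
# symmetric2 step pattern by default, so the results should agree
dtw::dtw(x, y)$distance
```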
The DTW distance can potentially deal with series of different length directly. This is not necessarily an advantage, as it has been shown before that performing linear reinterpolation to obtain equal length may be appropriate if \(m\) and \(n\) do not vary significantly (Ratanamahatana and Keogh 2004). For a more detailed explanation of the DTW algorithm see, e.g., Giorgino (2009). However, there are some aspects that are worth discussing here.
The first step in DTW involves creating a local cost matrix (LCM or \(lcm\)), which has \(n \times m\) dimensions. Such a matrix must be created for every pair of series compared, meaning that memory requirements may grow quickly as the dataset size grows. Considering \(x\) and \(y\) as the input series, for each element \((i,j)\) of the LCM, the \(l_p\) norm between \(x_i\) and \(y_j\) must be computed. This is defined in (1), explicitly denoting that multivariate series are supported as long as they have the same number of variables (note that for univariate series, the LCM will be identical regardless of the used norm). Thus, it makes sense to speak of a \(\text{DTW}_p{}\) distance, where \(p\) corresponds to the \(l_p\) norm that was used to construct the LCM.
\[\label{eq:lcm} lcm(i,j) = \left( \sum_v \lvert x^v_i - y^v_j \rvert ^ p \right) ^ {1/p} \tag{1}\]
In the second step, the DTW algorithm finds the path that minimizes the alignment between \(x\) and \(y\) by iteratively stepping through the LCM, starting at \(lcm(1,1)\) and finishing at \(lcm(n,m)\), and aggregating the cost. At each step, the algorithm finds the direction in which the cost increases the least under the chosen constraints.
The way in which the algorithm traverses through the LCM is primarily dictated by the chosen step pattern. It is a local constraint that determines which directions are allowed when moving ahead in the LCM as the cost is being aggregated, as well as the associated per-step weights. Figure 2 depicts two common step patterns and their names in the dtw package. Unfortunately, very few articles from the data mining community specify which pattern they use, although in the author’s experience, the symmetric1 pattern seems to be standard. By contrast, the dtw and dtw_basic functions use the symmetric2 pattern by default, but it is simple to modify this by providing the appropriate value in the step.pattern argument. The choice of step pattern also determines whether the corresponding DTW distance can be normalized or not (which may be important for series with different length). See Giorgino (2009) for a complete list of step patterns and to know which ones can be normalized.
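As an illustrative sketch (argument names as in recent versions of the packages), the step pattern can be selected through step.pattern, and normalization requested where the pattern allows it:

```r
library("dtwclust")
library("dtw") # provides the step pattern objects

x <- CharTraj[[1L]]
y <- CharTraj[[2L]]

# symmetric2 (the default) supports normalization by n + m
dtw_basic(x, y, step.pattern = symmetric2, normalize = TRUE)

# symmetric1 does not support normalization
dtw_basic(x, y, step.pattern = symmetric1)
```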


It should be noted that the DTW distance does not satisfy the triangle inequality, and it is not symmetric in general, e.g., for asymmetric step patterns (Giorgino 2009). The patterns in Figure 2 can result in a symmetric DTW calculation, provided no constraints are used (see the next section), or all series have the same length if a constraint is indeed used.
One of the possible modifications of DTW is to use global constraints, also known as window constraints. These limit the area of the LCM that can be reached by the algorithm. There are many types of windows (see, e.g., Giorgino (2009)), but one of the most common ones is the Sakoe-Chiba window (Sakoe and Chiba 1978), with which an allowed region is created along the diagonal of the LCM (see Figure 3). These constraints can marginally speed up the DTW calculation, but they are mainly used to avoid pathological warping. It is common to use a window whose size is 10% of the series’ length, although sometimes smaller windows produce even better results (Ratanamahatana and Keogh 2004).
Strictly speaking, if the series being compared have different lengths, a constrained path may not exist, since the Sakoe-Chiba band may prevent the end point of the LCM from being reached (Giorgino 2009). In these cases a slanted band window may be preferred, since it stays along the diagonal for series of different length and is equivalent to the Sakoe-Chiba window for series of equal length. If a window constraint is used with dtwclust, a slanted band is employed.
It is not possible to know a priori what window size, if any, will be best for a specific application, although it is usually agreed that using no constraint at all is a poor choice. For this reason, it is better to perform tests with the data one wants to work with, perhaps taking a subset to avoid excessive running times.
It should be noted that, when reported, window sizes are always integers greater than zero. If we denote this number with \(w\), and for the specific case of the slanted band window, the valid region of the LCM will be constituted by all valid points in the range \(\left[ (i, j - w), (i, j + w) \right]\) for all \((i,j)\) along the LCM diagonal. Thus, at each step, at most \(2w + 1\) elements may fall within the window for a given window size \(w\). This is the convention followed by dtwclust.
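As a brief sketch, with a hypothetical window size of 20 the convention above means that at most \(2(20) + 1 = 41\) LCM elements are considered at each step:

```r
library("dtwclust")

x <- CharTraj[[1L]]
y <- CharTraj[[2L]]

# w = 20: at most 2 * 20 + 1 = 41 LCM elements fall within
# the slanted band at each step
dtw_basic(x, y, window.size = 20L)
```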
Due to the fact that DTW itself is expensive to compute, lower bounds (LBs) for the DTW distance have been developed. These lower bounds are guaranteed to be less than or equal to the corresponding DTW distance. They have been exploited when indexing time-series databases, classification of time-series, clustering, etc. (Keogh and Ratanamahatana 2005; Begum et al. 2015). Out of the existing DTW LBs, the two most effective are termed LB_Keogh (Keogh and Ratanamahatana 2005) and LB_Improved (Lemire 2009). The reader is referred to the respective articles for detailed definitions and proofs of the LBs; however, some important considerations will be further discussed here.
Each LB can be computed with a specific \(l_p\) norm. Therefore, it follows that the \(l_p\) norms used for DTW and LB calculations must match, such that \(\text{LB}_p \leq \text{DTW}_p{}\). Moreover, \(\text{LB\_Keogh}_p \leq \text{LB\_Improved}_p \leq \text{DTW}_p{}\), meaning that LB_Improved can provide a tighter LB. It must be noted that the LBs are only defined for series of equal length and are not symmetric regardless of the \(l_p\) norm used to compute them. Also note that the choice of step pattern affects the value of the DTW distance, thereby changing the tightness of a given LB.
One crucial step when calculating the LBs is the computation of the so-called envelopes. These envelopes require a window constraint, and are thus dependent on both the type and size of the window. Based on these, a running minimum and maximum are computed, and a lower and upper envelope are generated respectively. Figure 4 depicts a sample time-series with its corresponding envelopes for a Sakoe-Chiba window of size 15.
In order for the LBs to be worth it, they must be computed in significantly less time than it takes to calculate the DTW distance. Lemire (2009) developed a streaming algorithm to calculate the envelopes using no more than \(3n\) comparisons when using a Sakoe-Chiba window. This algorithm has been ported to dtwclust using the C++ programming language, ensuring an efficient calculation, and it is exposed in the compute_envelope function.
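A minimal usage sketch (the returned object should contain the lower and upper envelopes, although the element names shown by str may vary across package versions):

```r
library("dtwclust")

x <- CharTraj[[1L]]

# Envelopes for a Sakoe-Chiba window of size 15
env <- compute_envelope(x, window.size = 15L)
str(env)
```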
LB_Keogh requires the calculation of one set of envelopes for every pair of series compared, whereas LB_Improved must calculate two sets of envelopes for every pair of series. If the LBs must be calculated between several time-series, some envelopes can be reused when a given series is compared against many others. This optimization is included in the LB functions registered with proxy by dtwclust.
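The following sketch illustrates the LB hierarchy; since the LBs require equal lengths, the series are first reinterpolated. It assumes that lb_keogh returns a list whose element d contains the bound, while lb_improved returns the bound directly, as in recent versions of dtwclust.

```r
library("dtwclust")

# The LBs are only defined for series of equal length
x <- reinterpolate(CharTraj[[1L]], new.length = 100L)
y <- reinterpolate(CharTraj[[2L]], new.length = 100L)

lbk <- lb_keogh(x, y, window.size = 15L)
lbi <- lb_improved(x, y, window.size = 15L)
d <- dtw_basic(x, y, window.size = 15L)

# LB_Keogh <= LB_Improved <= DTW should hold
c(lbk$d, lbi, d)
```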
Cuturi (2011) proposed an algorithm to assess similarity between time-series by using kernels. He began by formalizing an alignment between two series \(x\) and \(y\) as \(\pi\), and defined the set of all possible alignments as \(\mathcal{A}(n,m)\), which is constrained by the lengths of \(x\) and \(y\). It is shown that the DTW distance can be understood as the cost associated with the minimum alignment.
A Global Alignment (GA) kernel that considers the cost of all possible alignments by computing their exponentiated soft-minimum is defined, and it is argued that it quantifies similarities in a more coherent way. However, the GA kernel has associated limitations, namely diagonal dominance and a complexity of \(O(nm)\). With respect to the former, Cuturi (2011) states that diagonal dominance should not be an issue as long as one of the series being compared is not longer than twice the length of the other.
In order to reduce the GA kernel’s complexity, Cuturi (2011) proposed using the triangular local kernel for integers shown in (2), where \(T\) represents the kernel’s order. By combining it with the kernel \(\kappa\) in (3) (which is based on the Gaussian kernel \(\kappa_\sigma\)), the Triangular Global Alignment Kernel (TGAK) in (4) is obtained. Such a kernel can be computed in \(O(T \min(n,m))\), and is parameterized by the triangular constraint \(T\) and the Gaussian kernel’s width \(\sigma\).
\[\label{eq:intkernel} \omega(i,j) = \left( 1 - \frac{\lvert i - j \rvert}{T} \right)_{+} \tag{2}\]
\[\label{eq:phikernel} \begin{gather} \kappa (x,y) = e ^ {-\phi_\sigma(x,y)} \\ \phi_\sigma(x,y) = \frac{1}{2 \sigma ^ 2} \left\lVert x - y \right\rVert ^ 2 + \log \left( 2 - e ^ {-\frac{\left\lVert x - y \right\rVert ^ 2}{2 \sigma ^ 2}} \right) \end{gather} \tag{3}\]
\[\label{eq:tgakernel} \text{TGAK}(x,y,\sigma,T) = \tau ^ {-1} \left( \omega \otimes \frac{1}{2} \kappa \right) (i,x;j,y) = \frac{\omega(i,j) \kappa (x,y)}{2 - \omega(i,j) \kappa (x,y)} \tag{4}\]
The triangular constraint is similar to the window constraints that can be used in the DTW algorithm. When \(T = 0\) or \(T \rightarrow \infty\), the TGAK converges to the original GA kernel. When \(T = 1\), the TGAK becomes a slightly modified Gaussian kernel that can only compare series of equal length. If \(T > 1\), then only the alignments that fulfill \(-T < \pi_1(i) - \pi_2(i) < T\) are considered.
Cuturi (2011) also proposed a strategy to estimate the value of \(\sigma\) based on the time-series themselves and their lengths, namely \(c \cdot \text{med}(\left\lVert x - y \right\rVert) \cdot \sqrt{\text{med}(\lvert x \rvert)}\), where \(\text{med}(\cdot)\) is the empirical median, \(c\) is some constant, and \(x\) and \(y\) are subsampled vectors from the dataset. This, however, introduces some randomness into the algorithm when the value of \(\sigma\) is not provided, so it might be better to estimate it once and reuse it in subsequent function evaluations. In dtwclust, the value of \(c\) is set to 1.
The similarity returned by the TGAK can be normalized with (5) so that its values lie in the range \([0,1]\). Hence, a distance measure for timeseries can be obtained by subtracting the normalized value from 1. The algorithm supports multivariate series and series of different length (with some limitations). The resulting distance is symmetric and satisfies the triangle inequality, although it is more expensive to compute in comparison to DTW.
\[\label{eq:tgaknorm} \exp \left( \log\left( \text{TGAK}(x,y,\sigma,T) \right) - \frac{\log\left( \text{TGAK}(x,x,\sigma,T) \right) + \log\left( \text{TGAK}(y,y,\sigma,T) \right)}{2} \right) \tag{5}\]
A C implementation of the TGAK algorithm is available at its author’s website^{1}. An R wrapper has been implemented in dtwclust in the GAK function, performing the aforementioned normalization and subtraction in order to obtain a distance measure that can be used in clustering procedures.
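A brief sketch of its use follows; it assumes that, when sigma is not provided, the estimated value is attached to the result as an attribute so that it can be reused:

```r
library("dtwclust")

x <- CharTraj[[1L]]
y <- CharTraj[[2L]]

# Estimate sigma once (this step involves some randomness) ...
d <- GAK(x, y, window.size = 18L)
sigma <- attr(d, "sigma")

# ... and reuse it in subsequent evaluations
GAK(x, y, sigma = sigma, window.size = 18L)
```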
Following with the idea of the TGAK, i.e., of regularizing DTW by smoothing it, Cuturi and Blondel (2017) proposed a unified algorithm using a parameterized soft-minimum as shown in (6) (where \(\Delta(x,y)\) represents the LCM), and called the resulting discrepancy a soft-DTW, discussing its differentiability. Thanks to this property, a gradient function can be obtained, and Cuturi and Blondel (2017) developed a more efficient way to compute it. This can then be used to calculate centroids with numerical optimization as discussed in Section 3.3.
\[\label{eq:softdtw} \begin{gather} \text{dtw}_\gamma(x,y) = \text{min} ^ \gamma \lbrace \langle A, \Delta(x,y) \rangle, A \in \mathcal{A}(n,m) \rbrace \\ \text{min} ^ \gamma \lbrace a_1, \ldots, a_n \rbrace = \begin{cases} \text{min}_{i \leq n} a_i, \quad \gamma = 0 \\ -\gamma \log \sum_{i=1}^{n} e^{-a_i / \gamma}, \quad \gamma > 0 \end{cases} \end{gather} \tag{6}\]
However, as a standalone distance measure, the soft-DTW distance has some disadvantages: the distance can be negative, the distance between \(x\) and itself is not necessarily zero, it does not fulfill the triangle inequality, and it also has quadratic complexity with respect to the series’ lengths. On the other hand, it is a symmetric distance, it supports multivariate series as well as different lengths, and it can provide differently smoothed results by means of a user-defined \(\gamma\).
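These properties can be observed directly; a minimal sketch using the sdtw function (the gamma values here are chosen arbitrarily for illustration):

```r
library("dtwclust")

x <- CharTraj[[1L]]

# The soft-DTW "distance" between a series and itself
# is not necessarily zero
sdtw(x, x, gamma = 0.01)

# Larger gamma values smooth the result more strongly
sdtw(x, CharTraj[[2L]], gamma = 1)
```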
The shape-based distance (SBD) was proposed as part of the k-Shape clustering algorithm (Paparrizos and Gravano 2015); this algorithm will be further discussed in Sections 3.4 and 4.0.2. SBD is presented as a faster alternative to DTW. It is based on the cross-correlation with coefficient normalization (NCCc) sequence between two series, and is thus sensitive to scale, which is why Paparrizos and Gravano (2015) recommend z-normalization. The NCCc sequence is obtained by convolving the two series, so different alignments can be considered, but no point-wise warpings are made. The distance can be calculated with the formula shown in (7), where \(\left\lVert \cdot \right\rVert_2\) is the \(l_2\) norm of the series. Its range lies between 0 and 2, with 0 indicating perfect similarity.
\[\label{eq:sbd} SBD(x,y) = 1 - \frac{\max \left( NCCc(x,y) \right)}{\left\lVert x \right\rVert_2 \left\lVert y \right\rVert_2} \tag{7}\]
This distance can be efficiently computed by utilizing the Fast Fourier Transform (FFT) to obtain the NCCc sequence, although that might make it sensitive to numerical precision, especially in 32-bit architectures. It can be very fast, it is symmetric, it was very competitive in the experiments performed in Paparrizos and Gravano (2015) (although the runtime comparison was slightly biased due to the slow MATLAB implementation of DTW), and it supports (univariate) series of different length directly. Additionally, some FFTs can be reused when computing the SBD between several series; this optimization is also included in the SBD function registered with proxy by dtwclust.
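A minimal sketch; in recent versions of dtwclust, the SBD function returns both the distance and a version of \(y\) shifted to best align with \(x\):

```r
library("dtwclust")

x <- CharTraj[[1L]]
y <- CharTraj[[2L]]

# z-normalization is recommended since SBD is sensitive to scale
sbd <- SBD(x, y, znorm = TRUE)
sbd$dist # the distance, which lies between 0 and 2
```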
The distances described in this section are the ones implemented in dtwclust, which serve as basis for the algorithms presented in Sections 3 and 4. Table 1 summarizes the salient characteristics of these distances.
Distance    | Computational cost | Normalized | Symmetric | Multivariate support | Support for length differences
----------- | ------------------ | ---------- | --------- | -------------------- | ------------------------------
LB_Keogh    | Low                | No         | No        | No                   | No
LB_Improved | Low                | No         | No        | No                   | No
DTW         | Medium             | Can be*    | Can be*   | Yes                  | Yes
GAK         | High               | Yes        | Yes       | Yes                  | Yes
Soft-DTW    | High               | Yes        | Yes       | Yes                  | Yes
SBD         | Low                | Yes        | Yes       | No                   | Yes
Nevertheless, there are many other measures that can be used. In order to account for this, the proxy package is leveraged by dtwclust, as well as by other packages (e.g., TSdist). It aggregates all its measures in a database object called pr_DB, which has the advantage that all registered functions can be used with the proxy::dist function. For example, registering the autocorrelation-based distance provided by package TSclust could be done in the following way.
require("TSclust")

proxy::pr_DB$set_entry(FUN = diss.ACF, names = c("ACFD"),
                       loop = TRUE, distance = TRUE,
                       description = "Autocorrelation-based distance")

proxy::dist(CharTraj[3L:8L], method = "ACFD", upper = TRUE)

          A.V3      A.V4      A.V5      B.V1      B.V2      B.V3
A.V3           0.7347970 0.7269654 1.3365966 0.9022004 0.6204877
A.V4 0.7347970           0.2516642 2.0014314 1.5712718 1.2133404
A.V5 0.7269654 0.2516642           2.0178486 1.6136650 1.2901999
B.V1 1.3365966 2.0014314 2.0178486           0.5559639 0.9937621
B.V2 0.9022004 1.5712718 1.6136650 0.5559639           0.4530352
B.V3 0.6204877 1.2133404 1.2901999 0.9937621 0.4530352
Any distance function registered with proxy can be used for time-series clustering with dtwclust. More details are provided in Section 4.1.
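Conversely, the distances included in dtwclust are themselves registered with proxy upon loading the package, so cross-distance matrices can be computed with any of them; a sketch (assuming the shown registration name), with additional arguments such as window.size passed along:

```r
library("dtwclust")

# DTW cross-distance matrix for the first 5 series,
# using a slanted band window of size 15
proxy::dist(CharTraj[1L:5L], method = "dtw_basic",
            window.size = 15L)
```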
As mentioned before, one of the advantages of the distances implemented as part of dtwclust is that the core calculations are performed in C++, making them faster. The other advantage is that the calculations of cross-distance matrices leverage multi-threading. In the following, a series of comparisons against implementations in other packages is presented, albeit without the consideration of parallelization. Further information is available in the vignettes included with the package^{2}.
One of DTW’s lower bounds, LB_Keogh, is also available in TSdist as a pure R implementation. We can see how it compares to the C++ version included in dtwclust in Figure 5, considering different series lengths and window sizes. The time for each point in the graph was computed by repeating the calculation 100 times and extracting the median time.
Similarly, the DTW distance is also available in the dtw package, and possesses a C core. The dtw_basic version included with dtwclust only supports a slanted band window constraint (or none at all), and the symmetric1 and symmetric2 step patterns, so it performs fewer checks, and uses a memory-saving version where only 2 rows of the LCM are saved at a time. As with LB_Keogh, a comparison of the DTW implementations’ execution times can be seen in Figure 6.
The time difference in single calculations is not so dramatic, but said differences accumulate when calculating cross-distance matrices, and become much more significant. The behavior of LB_Keogh can be seen in Figure 7, with a fixed window size of 30 and series of length 100. The implementation in dtwclust performs the whole calculation in C++, and only calculates the necessary warping envelopes once, although it can be appreciated that this does not have a significant effect.
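Comparisons of this kind can be reproduced along the following lines (a sketch assuming the microbenchmark package is installed; exact timings will naturally vary by machine):

```r
library("dtwclust")
library("microbenchmark") # assumed to be installed

x <- reinterpolate(CharTraj[[1L]], new.length = 100L)
y <- reinterpolate(CharTraj[[2L]], new.length = 100L)

# Median times over 100 repetitions of each implementation
microbenchmark(
    dtw = dtw::dtw(x, y, distance.only = TRUE),
    dtw_basic = dtw_basic(x, y),
    times = 100L
)
```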