dbcsp: User-friendly R package for Distance-Based Common Spatial Patterns

Common Spatial Patterns (CSP) is a widely used method to analyse electroencephalography (EEG) data, concerning the supervised classification of brain activity. More generally, it can be useful to distinguish between multivariate signals recorded during a time span for two different classes. CSP is based on the simultaneous diagonalization of the average covariance matrices of the signals from both classes, and it allows the data to be projected into a low-dimensional subspace. Once the data are represented in this low-dimensional subspace, a classification step must be carried out. The original CSP method is based on the Euclidean distance between signals; here we extend it so that any distance appropriate for the data at hand can be applied. Both the classical CSP and the new Distance-Based CSP (DB-CSP) are implemented in an R package called dbcsp.

Itsaso Rodríguez (University of the Basque Country UPV/EHU), Itziar Irigoien (University of the Basque Country UPV/EHU), Basilio Sierra (University of the Basque Country UPV/EHU), Concepción Arenas (University of Barcelona UB)
2022-12-20

1 Background

Eigenvalue and generalized eigenvalue problems are highly relevant techniques in data analysis. The well-known Principal Component Analysis (PCA), with the eigenvalue problem at its roots, was already established by the late seventies (Mardia et al. 1979). In mathematical terms, Common Spatial Patterns (CSP) is based on the generalized eigenvalue decomposition, or the simultaneous diagonalization of two matrices, to find projections in a low-dimensional space. Although PCA and CSP share several algebraic similarities, their main aims are different: PCA follows an unsupervised approach, whereas CSP is a two-class supervised technique. Besides, PCA is suitable for standard quantitative data arranged in \(`individuals \times variables'\) tables, while CSP is designed to handle multivariate signal time series. That means that, while for PCA each individual or unit is represented by a classical numerical vector, for CSP each individual is represented by several signals recorded during a time span, i.e., by a \(`number of signals\times time span'\) matrix. CSP allows the individuals to be represented in a reduced-dimension space, a crucial step given the high-dimensional nature of the original data. CSP computes the average covariance matrices of the signals from the two classes to yield features whose variances are optimal to discriminate between the classes of measurements. Once the data are projected into a low-dimensional space, a classification step is carried out. The CSP technique was first proposed under the name Fukunaga-Koontz Transform (Fukunaga and Koontz 1970) as an extension of PCA, and Müller-Gerking et al. (1999) used it to discriminate electroencephalography (EEG) data in a movement task. Since then, it has been a widely used technique to analyze EEG data and to develop Brain-Computer Interfaces (BCI), with different variations and extensions (Blankertz et al. 2007a,b; Grosse-Wentrup and Buss 2008; Lotte and Guan 2011; Wang et al. 2012; Astigarraga et al. 2016; Darvish Ghanbar et al. 2021). In Wu et al. (2013), the subject-specific best time window and number of CSP features are fitted through a two-level cross-validation scheme within the Linear Discriminant classifier. Samek et al. (2014) offer a divergence-based framework including several extensions of CSP. In general terms, the CSP filter maximizes the variance of the filtered or projected EEG signals of one class of movements while minimizing it for the signals of the other class. Similarly, it can be used to detect epileptic activity (Khalid et al. 2016) or other brain activities. BCI systems can also be of great help to people with cerebral palsy, or with other diseases or disabilities that prevent the normal use of their motor skills. These systems can considerably improve the quality of life of these people, for whom small advances and changes imply big improvements. BCI systems can also contribute to human vigilance detection, connected with occupations involving sustained-attention tasks. Among others, CSP and variations of it have been applied to the vigilance estimation task (Yu et al. 2019).

The original CSP method is based on the Euclidean distance between signals. However, as far as we know, a generalization allowing the use of any appropriate distance has not been introduced before. The aim of the present work is to introduce such a novel Distance-Based generalization (DB-CSP). This generalization is of great interest, since these techniques can also offer good solutions in other fields where multivariate time series arise, beyond pure electroencephalography data (Poppe 2010; Rodríguez-Moreno et al. 2020).

Although CSP in its classical version is a very well-known technique in the field of BCI, it is not implemented in R. In addition, as DB-CSP is a new extension of it, it is worth building an R package that includes both the CSP and DB-CSP techniques. The package offers functions in a user-friendly way for users less familiar with R, but it also gives complete access to its objects, so that reproducible analyses can be carried out and more advanced, customised analyses can be performed taking advantage of well-known R packages.

The paper is organized as follows. First, we review the mathematical formulation of the Common Spatial Patterns method. Next, we present the core of our contribution, describing both the novel distance-based extension of CSP and the dbcsp package. Then, the main functions in dbcsp are introduced along with reproducible examples of their use. Finally, some conclusions are drawn.

2 CSP and DB-CSP

Let us consider that we have \(n\) statistical individuals or units classified in two classes \(C_1\) and \(C_2\), with \(\#C_1 = n_1\) and \(\#C_2 = n_2\). For each unit \(i\) in class \(C_k\), data from \(c\) sources or signals are collected during \(T\) time units, and therefore unit \(i\) is represented by the matrix \(X_{ik}\) (\(i = 1, \ldots, n_k;\; k=1, 2\)). For instance, for electroencephalograms, data are recorded by a \(c\)-sensor cap at \(T\) time points (\(t=1, \ldots, T\)). As usual, we consider that each \(X_{ik}\) is already scaled or given the appropriate pre-processing in the context of application; for instance, when working with EEG data, each signal should be band-pass filtered before its use.

The goal is to classify a new unit \(X\) in \(C_1\) or \(C_2\). To this end, a projection into a low-dimensional subspace is first carried out. Then, as a standard approach, the Linear Discriminant Analysis (LDA) classifier is applied, taking as input the log-variances of the projections obtained in the first step. The importance of the technique lies mainly in the first step; once it is done, LDA or any other classifier can be applied. Based on that, we focus on how this projection into a low-dimensional space is done, from the classical CSP point of view as well as from its novel extension DB-CSP (see Figure 1).

Figure 1: Flow chart showing the steps to classify a new data unit. First, the filtering is done along with the feature extraction; this is the core of the procedure (CSP or DB-CSP). Then, a classifier is built to decide the classification of the new data.

Classical CSP

The main idea is to use a linear transform to project or filter the data into a low-dimensional subspace with a projection matrix, in such a way that each row consists of weights for the signals. This transformation maximizes the variance of the projected signals of one class while minimizing it for the other class. The method performs a simultaneous diagonalization of the average covariance matrices of both classes. Given data \(X_{11}, \ldots, X_{n_1 1}\) (matrices \(c \times T\)) from class \(C_1\) and \(X_{12}, \ldots, X_{n_2 2}\) (also matrices \(c \times T\)) from class \(C_2\), the following steps are needed:

  1. Compute the covariance matrix of each \(X_{ik}\) and average them by class, obtaining \(B_1\) and \(B_2\).

  2. Solve the generalized eigenvalue problem \(B_1 \mathbf{w} = \lambda (B_1 + B_2)\mathbf{w}\), which simultaneously diagonalizes \(B_1\) and \(B_2\), and order the resulting eigenvectors \(\mathbf{w}_1, \ldots, \mathbf{w}_c\) by decreasing eigenvalue.

  3. Retain the first \(q\) and the last \(q\) eigenvectors as the projection matrix \(W\).

The vectors \(\mathbf{w}_j\) provide weights such that the new signals \(X_{i 1}'\mathbf{w}_j\) and \(X_{i 2}'\mathbf{w}_j\) have high and low variability, respectively, for the first \(q\) vectors (\(j=1, \ldots, q\)), and vice versa for the last \(q\) vectors (\(j=c-q+1, \ldots, c\)). To clarify the notation and interpretation, let us denote by \(\mathbf{a}_j=\mathbf{w}_j\) the first \(q\) vectors and by \(\mathbf{b}_j=\mathbf{w}_{c+1-j}\) the last \(q\). That way, and broadly speaking, the variability of elements in \(C_1\) is high when projecting on vectors \(\mathbf{a}_j\) and low on vectors \(\mathbf{b}_j\), and vice versa for elements in class \(C_2\).
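
To make the computation concrete, the following minimal R sketch obtains the projection vectors for the Euclidean case; csp_weights and its arguments are illustrative names, and the internal implementation of dbcsp may differ (for instance, in how covariances are normalized).

# Minimal sketch of the classical CSP decomposition (Euclidean case).
# X1 and X2 are lists of c x T matrices, one per unit; q is the number
# of vectors a_j (and b_j) to retain. Illustrative only, not package code.
csp_weights <- function(X1, X2, q) {
  norm_cov <- function(x) tcrossprod(x) / sum(diag(tcrossprod(x)))
  B1 <- Reduce(`+`, lapply(X1, norm_cov)) / length(X1)  # average covariance, class C1
  B2 <- Reduce(`+`, lapply(X2, norm_cov)) / length(X2)  # average covariance, class C2
  # Simultaneous diagonalization: solve B1 w = lambda (B1 + B2) w
  eig <- eigen(solve(B1 + B2) %*% B1)
  W <- Re(eig$vectors)  # columns ordered by decreasing eigenvalue
  W[, c(1:q, (ncol(W) - q + 1):ncol(W))]  # a_1..a_q and b_1..b_q
}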

Finally, the log-variances of these \(2q\) new signals are considered as input for the classification, which classically is performed with Linear Discriminant Analysis (LDA). Obviously, any other classification technique can be used, as illustrated in the section Extending the example.

Distance-based CSP

Following these ideas, Distance-Based CSP (DB-CSP) is an extension of the classical CSP method. In the same way as classical CSP, DB-CSP assigns weights to the original sources or signals and obtains \(2q\) new signals which are useful for the discrimination between the two classes. Nevertheless, the distance considered between the signals can be any appropriate one, not only the Euclidean. The steps mirror the classical procedure: for each unit, a covariance matrix related to the chosen distance between its signals is computed, and these distance-based covariance matrices are averaged by class before the simultaneous diagonalization.

Once we have the covariance matrices related to the chosen distance, the directions are found as in classical CSP and the new signals \(X_{i k}'\mathbf{a}_j\) and \(X_{i k}'\mathbf{b}_j\) are built (\(j=1, \ldots, q\)). Again, for individuals in class \(C_1\) the variabilities of the projections on vectors \(\mathbf{a}_j\) and \(\mathbf{b}_j\) are high and low, respectively; for individuals in class \(C_2\) it is the other way round.

It is important to note that if the chosen distance does not produce a positive definite covariance matrix, it must be replaced by a similar one that is positive definite.
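
Since Matrix is among the packages used, one way to carry out such a replacement is via the nearest positive definite matrix, as computed by Matrix::nearPD. The following sketch illustrates the idea (make_pd is an illustrative name; the package's internal handling may differ):

# Sketch: replace a covariance estimate that is not positive definite
# by the nearest positive definite matrix (illustrative, not package code)
make_pd <- function(B, tol = 1e-8) {
  if (min(eigen(B, symmetric = TRUE, only.values = TRUE)$values) <= tol)
    B <- as.matrix(Matrix::nearPD(B)$mat)
  B
}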

When the selected distance is the Euclidean one, DB-CSP reduces to classical CSP.

Once the \(q\) directions \(\mathbf{a}_j\) and the \(q\) directions \(\mathbf{b}_j\) are calculated, the \(2q\) new signals are built. Many interesting characteristics of the new signals could be extracted, although the most important one in the procedure is the variance. These characteristics of the new signals are the input data for the classification step.

3 Implementation

In this section, the structure of the package and the implemented functions are explained. The dbcsp package was developed for the free R statistical environment and is available from the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/dbcsp/index.html.

Input

The input data are the corresponding \(n_1\) and \(n_2\) matrices \(X_{i k}\) of the \(n\) units classified in classes \(C_1\) and \(C_2\), respectively (\(i=1, \ldots, n_k\); \(k=1, 2\)). Let x1 and x2 be two lists of length \(n_1\) and \(n_2\), respectively, with the \(X_{i k}\) matrices (\(c \times T\)) as elements. NA values are allowed; they are imputed by interpolation with the surrounding values via the na.approx function of package zoo. To ensure that the user is aware of the missing values and their imputation, a warning is printed. We also consider that the new items to be classified are in list xt. The aforementioned first step of the method is carried out by building an object of class "dbcsp".
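
As a toy illustration of this imputation (not package code), na.approx linearly interpolates the interior missing values of a signal:

# Toy example of the linear interpolation performed by zoo::na.approx
zoo::na.approx(c(1, 2, NA, NA, 5, 6))
# [1] 1 2 3 4 5 6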

dbcsp object

The dbcsp object is an S4 class created to compute the projection vectors \(W\). Its slots store the inputs given to the constructor, among them the lists X1 and X2, the number of dimensions q, the class labels, the type of distance and the training settings (training, fold, seed), together with the list out, which collects the results: the projection vectors (out$vectors) and, once the training is performed, the accuracies and composition of the folds (out$folds_acc, out$used_folds).
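
For instance, once the object mydbcsp of the following sections is created, its slots can be inspected with the standard S4 accessors:

# List all slots of the S4 object and access the projection vectors
slotNames(mydbcsp)
mydbcsp@out$vectors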

Functions plot and boxplot

For exploratory and descriptive purposes, the original signals \(X_{i k}\) and the projected ones can be plotted for a selected individual \(i\) in class \(k\) and a selected pair of dimensions \(\mathbf{a}_j\) and \(\mathbf{b}_j\) (\(i= 1, \ldots, n_k\), \(k=1,2\)).

Besides, the log-variances of the projected signals of both classes can be shown in boxplots. This graphic helps to understand the discriminative power present in the low-dimensional space.

It is worth taking into account that, in the aforementioned functions, the values in the argument vectors must lie between 1 and \(2q\), where \(q\) is the number of dimensions used to perform the DB-CSP algorithm when creating the dbcsp object. Values 1 to \(q\) correspond to vectors \(\mathbf{a}_1\) to \(\mathbf{a}_q\), and values \(q+1\) to \(2q\) correspond to vectors \(\mathbf{b}_1\) to \(\mathbf{b}_q\). Hence, if pairs=TRUE, it is recommended that the values in vectors lie in \(\{1, \ldots, q\}\), since their pairs are plotted as well. When a value is above \(q\), note that it corresponds to one of the vectors \(\mathbf{b}_1\) to \(\mathbf{b}_q\): for instance, if q=15 and boxplot(object, vectors=16, pairs=FALSE), vector \(\mathbf{b}_1\) \((16-q=1)\) is shown.
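
To make the indexing concrete, and assuming an object built with q=15 as above, the following calls illustrate the two ways of reaching \(\mathbf{b}_1\):

# With q = 15, indices 16 to 30 refer to b_1 to b_15
boxplot(object, vectors = 16, pairs = FALSE)  # projection on b_1 only
boxplot(object, vectors = 1, pairs = TRUE)    # projections on a_1 and its pair b_1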

Function selectQ, Function train and Function predict

The functions in this section support the classification step of the procedure. Function selectQ helps to find an appropriate dimension for the classification: given different candidate dimensions, the accuracy related to each one is computed, so that the user can assess which dimension of the reduced space is sufficient. Either a \(k\)-fold cross-validation or a holdout approach can be followed. Function train performs the Linear Discriminant classification based on the log-variances of the dimensions built in the dbcsp object. Since LDA has a geometric interpretation that makes the classifier sensible in more general situations (Duda et al. 2001), neither the normality nor the homoscedasticity of the data is checked. The accuracy of the classifier is computed by \(k\)-fold cross-validation. Finally, function predict performs the classification of new individuals.

Function selectQ returns the accuracy values related to each dimension set in Q. If CV=TRUE, the mean accuracy as well as the standard deviation among folds is also returned.

It is important to note that in this way a classical analysis can be carried out, in the sense of: building the dbcsp object to obtain the projection vectors \(W\), selecting an appropriate dimension with selectQ, training the LDA classifier on the log-variances with train, and classifying new units with predict.

However, it may well be of interest to use other classifiers, or characteristics in addition to or different from the log-variances. This more advanced procedure is explained below. See the Basic/classic analysis subsection of the User guide with a real example section in order to visualize and follow the process of a first basic/classic analysis.

4 User guide with a real example

To show an example beyond pure electroencephalography data, Action Recognition data are considered. Besides providing a reproducible example of the implemented functions and the results they offer, this Action Recognition data set is included in the package. The data set contains skeleton data extracted from videos of people performing six different actions, recorded by a semi-humanoid robot. It consists of a total of 272 videos across 6 action categories, with around 45 clips per category, performed by 46 different people. Each instance is composed of 50 signals (\(xy\) coordinates of 25 body key points extracted using OpenPose (Cao et al. 2019)), where each signal has 92 values, one per frame. These are the six categories included in the data set:

  1. Come: gesture for telling the robot to come to you. There are 46 instances for this class.

  2. Five: gesture of ‘high five’. There are 45 instances for this class.

  3. Handshake: gesture of handshaking with the robot. There are 45 instances for this class.

  4. Hello: gesture for saying hello to the robot. There are 44 instances for this class.

  5. Ignore: ignore the robot, pass by. There are 46 instances for this class.

  6. Look at: stare at the robot in front of it. There are 46 instances for this class.

The data set is accessible via AR.data and more specific information can be found in (Rodríguez-Moreno et al. 2020). Each class is a list of matrices of dimension \(K \times num\_frames\), where \(K=50\) signals and \(num\_frames=92\) values. As mentioned before, the 50 signals represent the \(xy\) coordinates of the 25 body key points extracted by OpenPose.

For example, two different classes can be accessed this way:

x1 <- AR.data$come
x2 <- AR.data$five

where x1 is a list of the 46 instances of \(50 \times 92\) matrices of the come class and x2 is a list of the 45 instances of \(50 \times 92\) matrices of the five class. An example of the skeleton sequences for both classes is shown in Figure 2 (left, class come; right, class five).
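
These dimensions are easy to verify directly in R:

length(x1)    # 46 instances of class 'come'
length(x2)    # 45 instances of class 'five'
dim(x1[[1]])  # 50 92: one 50 x 92 matrix per video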

Figure 2: Sequences of the skeleton extracted from the videos. Left: sequence for action ‘come’. Right: sequence for action ‘(high) five’. For each frame, \(x\) and \(y\) coordinates of the 25 body key points of the skeleton are extracted by OpenPose.

Next, the use of functions in dbcsp is shown based on this data set. First a basic/classic analysis is performed.

Basic/classic analysis

Let us consider an analysis using 15-dimensional projections and the Euclidean distance. As a first step, the user can obtain the vectors \(W\) by:

x1 <- AR.data$come
x2 <- AR.data$five
mydbcsp <- new('dbcsp', X1=x1, X2=x2, q=15, labels=c("C1", "C2"))
summary(mydbcsp)

When creating the object mydbcsp, the vectors \(W\) are calculated. As indicated by the parameter q=15, the first and last 15 eigenvectors are retained. With summary, the following output is obtained:

There are 46 instances of class C1 with [50x92] dimension.
There are 45 instances of class C2 with [50x92] dimension.
The DB-CSP method has used 15 vectors for the projection.
EUCL distance has been used.
Training has not been performed yet.

Now, if the user knows from the beginning that 3 is an appropriate dimension, the classification step can be carried out while creating the object. Using the classical analysis with, for instance, 10-fold cross-validation, LDA as the classifier and log-variances as characteristics, the corresponding input and summary output are:

mydbcsp <- new('dbcsp', X1=x1, X2=x2, q=3, labels=c("C1", "C2"),
               training=TRUE, fold = 10, seed = 19)
summary(mydbcsp)
There are 46 instances of class C1 with [50x92] dimension.
There are 45 instances of class C2 with [50x92] dimension.
The DB-CSP method has used 3 vectors for the projection.
EUCL distance has been used.
An accuracy of 0.9130556 has been obtained with 10 fold cross validation and using 3 vectors when training.

If a closer view of the accuracies among the folds is needed, the user can obtain them from the out slot of the object:

# Accuracy in each fold
mydbcsp@out$folds_acc

# Instances belonging to each fold
mydbcsp@out$used_folds
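
Since folds_acc stores one accuracy value per fold, the overall accuracy reported by summary and the variability among the folds can be recovered, for instance, as:

# Mean and standard deviation of the accuracies over the 10 folds
mean(mydbcsp@out$folds_acc)
sd(mydbcsp@out$folds_acc)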

Basic/classic analysis selecting the value of \(q\)

Furthermore, it is clear that the optimal value of \(q\) should be chosen based on the percentage of correct classification. It is worth mentioning that the LDA is applied on the \(2q\) projections, as set in the object-building step. It is interesting to measure how many dimensions would be enough using the selectQ function:

mydbcsp <- new('dbcsp', X1=x1, X2=x2, labels=c("C1", "C2"))
selectDim <- selectQ(mydbcsp, seed=30, CV=TRUE, fold = 10) 
selectDim
   Q       acc         sd
1  1 0.7663889 0.12607868
2  2 0.9033333 0.09428818
3  3 0.8686111 0.11314534
4  5 0.8750000 0.13289537
5 10 0.8797222 0.09513230
6 15 0.8250000 0.05257433

Since the \(10\)-fold cross-validation approach is chosen, the mean accuracies as well as the corresponding standard deviations are returned. Thus, with Linear Discriminant Analysis (LDA) and log-variances as characteristics, it seems that the dimensions related to the first and last \(q=2\) eigenvectors (\(2\times 2\) dimensions in total) are enough to obtain a good classification, with an accuracy of 90%. Nevertheless, it can also be observed that the variation among folds can be relevant.
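
Since selectDim is returned as a data frame with columns Q, acc and sd (as printed above), this trade-off between dimension and accuracy can also be inspected graphically with base R; a minimal sketch:

# Mean accuracy against q, with +/- one standard deviation among folds
plot(selectDim$Q, selectDim$acc, type = "b", ylim = c(0.5, 1),
     xlab = "q", ylab = "mean accuracy")
arrows(selectDim$Q, selectDim$acc - selectDim$sd,
       selectDim$Q, selectDim$acc + selectDim$sd,
       angle = 90, code = 3, length = 0.05)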

To visualize the representation in the reduced-dimension space, the function plot can be used. For instance, to visualize the first unit of the first class, based on the projections along the first and last 2 vectors (\(\mathbf{a}_1, \mathbf{a}_2\) and \(\mathbf{b}_1, \mathbf{b}_2\)):

plot(mydbcsp, index=1, class=1, vectors=1:2)

In the top graphic of Figure 3, the representation of the first video of class \(C_1\), given by the non-standardized matrix \(X_{11}\), can be seen: the horizontal axis represents the frames of the video and the lines are the positions of the body key points (50 lines). In the bottom graphic, the same video is represented in the reduced space, where it is described by the new signals (only 4 lines).

Figure 3: Representation of the first video of class \(C_1\). Top: original version where each line corresponds to the signal of a body key point. Bottom: the projections on vectors \(\mathbf{a}_1\) and \(\mathbf{a}_2\) (continuous lines) and \(\mathbf{b}_1\) and \(\mathbf{b}_2\) (dotted lines). Being a video of class \(C_1\), variabilities of the projections on vectors \(\mathbf{a}_1\) and \(\mathbf{a}_2\) are big whereas on vectors \(\mathbf{b}_1\) and \(\mathbf{b}_2\) are small, as expected.

To gain better insight into the discriminating power of the new signals in the reduced-dimension space, we can plot their corresponding log-variances. The parameter vectors in the function boxplot selects the eigenvectors whose projections are plotted.

boxplot(mydbcsp, vectors=1:2)
Figure 4: Log-variabilities of the projected signals on vectors \(\mathbf{a}_1\) and \(\mathbf{a}_2\) and \(\mathbf{b}_1\) and \(\mathbf{b}_2\), separated by classes \(C_1\) and \(C_2\). By construction, variabilities of the projections on vectors \(\mathbf{a}_1\) and \(\mathbf{a}_2\) are big for units in class \(C_1\) and small for units \(C_2\); opposite pattern can be seen for projections on vectors \(\mathbf{b}_1\) and \(\mathbf{b}_2\).

In Figure 4 it can be seen that the variability of the projections on the first eigenvector direction (\(\log(VAR(X_{i k}'\mathbf{a}_1))\)) is high for elements of x1 but small for elements of x2. Analogously, when projecting on the last dimension (\(\log(VAR(X_{i k}'\mathbf{b}_1))\)), the variability is low for x1 and high for x2. The same pattern holds when projecting on vectors \(\mathbf{a}_2\) and \(\mathbf{b}_2\).

Basic/classic analysis new unit classification

Once the value of \(q\) has been decided and the accuracy of the classification is known, the classifier should be built (through train()) so that the user can proceed to predict the class of the action recorded in a new video, using the function predict. For instance, for illustrative purposes only, we can classify the first 5 videos stored in x1.

mydbcsp <- train(mydbcsp, selected_q=2, verbose=FALSE)
xtest <- x1[1:5]
outpred <- predict(mydbcsp, X_test=xtest)

If the labels of the testing items are known, the latter function returns the accuracy.

outpred <- predict(mydbcsp, X_test=xtest, true_targets= rep("C1", 5))

Finally, notice that the user could use any other distance between the signals instead of the Euclidean one to compute the important directions \(\mathbf{a}_j\) and \(\mathbf{b}_j\). For instance, in this case it could be appropriate to use the Dynamic Time Warping (DTW) distance (Giorgino 2009), setting so in the argument type="dtw":

# Distance DTW
mydbcsp.dtw <- new('dbcsp', X1=x1, X2=x2, labels=c("C1", "C2"), type="dtw")
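
The dimension-selection step can then be repeated on this new object to compare the accuracies obtained under the DTW distance (bear in mind that computing all pairwise DTW distances between signals makes this step noticeably slower than the Euclidean case):

# Assess the accuracies with the DTW-based projections, as before
selectQ(mydbcsp.dtw, seed = 30, CV = TRUE, fold = 10)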

5 Extending the example

In the previous section, a basic workflow using the functions implemented in dbcsp was presented. Nevertheless, it is straightforward to extend the procedure. Once the interesting directions in \(W\) are calculated through dbcsp, summarizing characteristics beyond the variance can be extracted from the projected signals, and other classifiers can be used in the classification step. For those purposes, dbcsp is used to compute the directions in \(W\), which are then the basis for calculating other features and for building the input of other classifiers. Here it is shown how, once the eigenvectors are extracted from a dbcsp object, several characteristics can be computed from the signals and a new data.frame can be built, so that any other classification technique can be applied. In this example we work with the caret package to apply different classifiers. It is important to keep track of which instances form the training and test sets, so that the vectors are computed based only on training set instances.

# Establish training and test data
n1 <- length(x1)
trainind1 <- rep(TRUE, n1)
n2 <- length(x2)
trainind2 <- rep(TRUE, n2)
set.seed(19)
trainind1[sample(1:n1, 10, replace=FALSE)] <- FALSE
trainind2[sample(1:n2, 10, replace=FALSE)] <- FALSE
x1train <- x1[trainind1]
x2train <- x2[trainind2]

# Extract the interesting directions
vectors <- new('dbcsp', X1=x1train, X2=x2train, q=5, labels=c("C1", "C2"))@out$vectors

# Function to calculate the desired characteristics from signals
calc_info <- function(proj_X, type){
  values <- switch(type,
                   'var' = plyr::laply(proj_X, function(x) apply(x, 1, var)),
                   'max' = plyr::laply(proj_X, function(x) apply(x, 1, max)),
                   'min' = plyr::laply(proj_X, function(x) apply(x, 1, min)),
                   'iqr' = plyr::laply(proj_X, function(x){
                     apply(x, 1, function(y){
                       q <- quantile(y, probs = c(0.25, 0.75))
                       q[2] - q[1]
                     })
                   })
  )
  return(values)
}

By means of this latter function, besides the variance of the new signals, the maximum, the minimum, and the interquartile range can be extracted.

Next, imagine we want to perform our classification step with the interquartile range information along with the log-variance.

# Project units of class C1
projected_x1 <- plyr::llply(x1, function(x,W) t(W)%*%x, W=vectors)

# Extract the characteristics
logvar_x1 <- log(calc_info(projected_x1,'var'))
iqr_x1 <- calc_info(projected_x1,'iqr')
new_x1 <- data.frame(logvar=logvar_x1, iqr=iqr_x1)

# Similarly for units of class C2
projected_x2 <- plyr::llply(x2, function(x,W) t(W)%*%x, W=vectors)
logvar_x2 <- log(calc_info(projected_x2,'var'))
iqr_x2 <- calc_info(projected_x2,'iqr')
new_x2 <- data.frame(logvar=logvar_x2, iqr=iqr_x2)


# Create dataset for classification
labels <- rep(c('C1','C2'), times=c(n1,n2))
new_data <- rbind(new_x1,new_x2)
new_data$label <- factor(labels)
new_data_train <- new_data[c(trainind1, trainind2), ]
new_data_test <- new_data[!c(trainind1, trainind2), ]

# Random forest
trControl <- caret::trainControl(method = "none")
rf_default <- caret::train(label~.,
                           data = new_data_train,
                           method = "rf",
                           metric = "Accuracy",
                           trControl = trControl)
rf_default

# K-NN
knn_default <- caret::train(label~.,
                            data = new_data_train,
                            method = "knn",
                            metric = "Accuracy",
                            trControl = trControl)
knn_default

# Predictions and accuracies on test data
# Based on random forest classifier
pred_labels <- predict(rf_default, new_data_test)
predictions_rf <- caret::confusionMatrix(table(pred_labels,new_data_test$label))
predictions_rf

# Based on knn classifier
pred_labels <- predict(knn_default, new_data_test)
predictions_knn <- caret::confusionMatrix(table(pred_labels,new_data_test$label))
predictions_knn

Thus, the results and objects that dbcsp builds can easily be integrated with other R packages and functions. This is interesting for more advanced users who want to perform their own customized analyses.

6 Conclusions

In this work a new Distance-Based Common Spatial Patterns (DB-CSP) method is introduced. It performs the classical Common Spatial Patterns analysis when the Euclidean distance between signals is considered, but it can be extended to any other appropriate distance between signals as well. All of it is implemented in the dbcsp package. The package is easy to use for non-specialised users but, for the sake of flexibility, more advanced analyses can be carried out by combining the created object and the obtained results with well-known R packages, such as caret.

7 Acknowledgements

This research was partially supported as follows: IR by the Spanish Ministry of Science, Innovation and Universities (FPU18/04737 predoctoral grant); II by the Spanish Ministerio de Economia y Competitividad (RTI2018-093337-B-I00; PID2019-106942RB-C31); CA by the Spanish Ministerio de Economia y Competitividad (RTI2018-093337-B-I00, RTI2018-100968-B-I00) and by Grant 2017SGR622 (GRBIO) from the Departament d'Economia i Coneixement de la Generalitat de Catalunya; and BS by the Spanish Ministerio de Economia y Competitividad (RTI2018-093337-B-I00).

8 Author’s contributions

II and CA designed the study. IR and II wrote and debugged the software. IR, II and CA checked the software. II, CA, IR and BS wrote and reviewed the manuscript. All authors have read and approved the final manuscript.

CRAN packages used

zoo, TSdist, parallelDist, RcppXPtrUtils, Matrix, caret

CRAN Task Views implied by cited packages

Econometrics, Environmetrics, Finance, HighPerformanceComputing, MachineLearning, MissingData, NumericalMathematics, TimeSeries

Note

This article is converted from a Legacy LaTeX article using the texor package. The pdf version is the official version. To report a problem with the html, refer to CONTRIBUTE on the R Journal homepage.

References

A. Astigarraga, A. Arruti, J. Muguerza, R. Santana, J. I. Martin and B. Sierra. User adapted motor-imaginary brain-computer interface by means of EEG channel selection based on estimation of distributed algorithms. Mathematical Problems in Engineering, 1435321, 2016. URL https://doi.org/10.1155/2016/1435321.
B. Blankertz, M. Kawanabe, R. Tomioka, F. U. Hohlefeld, V. V. Nikulin and K.-R. Müller. Invariant common spatial patterns: Alleviating nonstationarities in brain-computer interfacing. In NIPS’07: Proceedings of the 20th international conference on neural information processing, pages. 113–120 2007a.
B. Blankertz, R. Tomioka, S. Lemm, M. Kawanabe and K.-R. Muller. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Processing Magazine, 25(1): 41–56, 2007b. URL https://doi.org/10.1109/MSP.2008.4408441.
Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei and Y. Sheikh. OpenPose: Realtime multi-person 2D pose estimation using part affinity fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(1): 172–186, 2019. URL https://doi.org/10.1109/TPAMI.2019.2929257.
K. Darvish Ghanbar, T. Yousefi Rezaii, A. Farzamnia and I. Saad. Correlation-based common spatial pattern (CCSP): A novel extension of CSP for classification of motor imagery signal. PLOS ONE, 16: 1–18, 2021. URL https://doi.org/10.1371/journal.pone.0248511.
R. O. Duda, P. E. Hart and D. G. Stork. Pattern classification. New York: John Wiley & Sons, 2001.
K. Fukunaga and W. L. Koontz. Application of the Karhunen-Loève expansion to feature selection and ordering. IEEE Transactions on Computers, C-19(4): 311–318, 1970.
T. Giorgino. Computing and visualizing dynamic time warping alignments in R: The dtw package. Journal of Statistical Software, 31(7): 1–24, 2009. URL http://dx.doi.org/10.18637/jss.v031.i07.
M. Grosse-Wentrup and M. Buss. Multiclass common spatial patterns and information theoretic feature extraction. IEEE Transactions on Biomedical Engineering, 55(8): 1991–2000, 2008. URL https://doi.org/10.1109/TBME.2008.921154.
M. I. Khalid, T. Alotaiby, S. A. Aldosari, S. A. Alshebeili, M. H. Al-Hameed, F. S. Y. Almohammed and T. S. Alotaibi. Epileptic MEG spikes detection using common spatial patterns and linear discriminant analysis. IEEE Access, 4: 4629–4634, 2016. URL https://doi.org/10.1109/access.2016.2602354.
F. Lotte and C. Guan. Regularizing common spatial patterns to improve BCI designs: Unified theory and new algorithms. Transactions on Biomedical Engineering, 58(2): 355–362, 2011. URL https://doi.org/10.1109/TBME.2010.2082539.
K. V. Mardia, J. T. Kent and J. M. Bibby. Multivariate analysis. London: Academic Press, 1979.
J. Müller-Gerking, G. Pfurtscheller and H. Flyvbjerg. Designing optimal spatial filters for single-trial EEG classification in a movement task. Clinical Neurophysiology, 110(5): 787–798, 1999. URL https://doi.org/10.1016/S1388-2457(98)00038-8.
R. Poppe. Common spatial patterns for real-time classification of human actions. In Machine learning for human motion analysis: Theory and practice, pages. 55–73 2010. IGI Global.
I. Rodríguez-Moreno, J. M. Martínez-Otzeta, B. Sierra, I. Irigoien, I. Rodriguez-Rodriguez and I. Goienetxea. Using common spatial patterns to select relevant pixels for video activity recognition. Applied Sciences, 10(22): 8075, 2020. URL https://www.mdpi.com/2076-3417/10/22/8075.
I. Rodríguez-Moreno, J. M. Martínez-Otzeta, I. Goienetxea, I. Rodriguez-Rodriguez and B. Sierra. Shedding light on people action recognition in social robotics by means of common spatial patterns. Sensors, 20(8): 2436, 2020.
W. Samek, M. Kawanabe and K.-R. Müller. Divergence-based framework for common spatial patterns algorithms. IEEE Reviews in Biomedical Engineering, 7: 50–72, 2014. URL https://doi.org/10.1109/RBME.2013.2290621.
H. Wang, Q. Tang and W. Zheng. L1-norm-based common spatial patterns. IEEE Transactions on Biomedical Engineering, 59(3): 653–662, 2012. URL https://doi.org/10.1109/TBME.2011.2177523.
S.-L. Wu, C.-W. Wu, N. R. Pal, C.-Y. Chen, S.-A. Chen and C.-T. Lin. Common spatial pattern and linear discriminant analysis for motor imagery classification. In 2013 IEEE symposium on computational intelligence, cognitive algorithms, mind, and brain (CCMB), pages. 146–151 2013. IEEE.
H. Yu, H. Lu, S. Wang, K. Xia, Y. Jiang and P. Qian. A general common spatial patterns for EEG analysis with applications to vigilance detection. IEEE Access, 7: 111102–111114, 2019. URL https://doi.org/10.1109/ACCESS.2019.2934519.


Reuse

Text and figures are licensed under Creative Commons Attribution CC BY 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

Citation

For attribution, please cite this work as

Rodríguez, et al., "dbcsp: User-friendly R package for Distance-Based Common Spatial Patterns", The R Journal, 2022

BibTeX citation

@article{RJ-2022-044,
  author = {Rodríguez, Itsaso and Irigoien, Itziar and Sierra, Basilio and Arenas, Concepción},
  title = {dbcsp: User-friendly R package for Distance-Based Common Spatial Patterns},
  journal = {The R Journal},
  year = {2022},
  note = {https://rjournal.github.io/},
  volume = {14},
  issue = {3},
  issn = {2073-4859},
  pages = {80-94}
}