This article explains the usage of the R package tvReg, publicly available for download from the Comprehensive R Archive Network, via its application to problems in economics and finance. The six basic functions in this package cover the kernel estimation of semiparametric panel data, seemingly unrelated equations, vector autoregressive, impulse response and linear regression models whose coefficients may vary with time or with any random variable. Moreover, the package provides methods for the graphical display of results, forecasting, prediction, extraction of residuals and fitted values, bandwidth selection and nonparametric estimation of the time-varying variance-covariance matrix of the error term. Applications to risk management, portfolio management, asset management and monetary policy are used as examples of the usage of these functions.
Kernel smoothing applied to linear models with time-varying coefficients has become a very active research area. In econometrics, Robinson (1989) was the first to analyse these models, for linear regressions with time-varying coefficients and stationary variables. Since then, this literature has been extended to models with fewer restrictions on the dependence of the variables, to models with time dependence in the error term, and to multi-equation models. Although these models are potentially applicable to a large number of areas, no comprehensive computational implementation was, to our knowledge, formally available in any of the commercial programming languages. The package tvReg contains the aforementioned functionality, an input and output interface, and user-friendly documentation.
Parametric multi-equation linear models have increased in popularity in recent decades due to increased access to multiple datasets. Their application extends to, perhaps, every field of quantitative research. To mention just a few, they are found in biostatistics, finance, economics, business, climate, linguistics, psychology, engineering and oceanography. Panel linear models (PLM) are widely used to account for the heterogeneity in the cross-section and time dimensions. Seemingly unrelated equations (SURE) and vector autoregressive models (VAR) are the extensions of linear regressions and autoregressive models to the multi-equation framework. Programs with these algorithms are found in all major programming languages. Particularly in R, the package plm (Croissant and Millo 2008, 2018) contains comprehensive functionality for panel data models. The package systemfit (Henningsen and Hamann 2007) allows the estimation of coefficients in systems of linear regressions, with error terms either correlated across equations (SURE) or uncorrelated. Finally, the package vars (Pfaff 2008) provides the tools to fit VAR models and impulse response functions (IRF). All these functions assume that the coefficients are constant. This assumption might not hold when a time series runs over a long period and the relationships among the variables do change. The package tvReg is relevant in this case.
In comparison to parametric models, the appeal of nonparametric models is their flexibility and robustness to functional form misspecification, with spline-based and kernel-based regression methods being the two main nonparametric estimation techniques (e.g. Eubank 1999). However, fully nonparametric models are not appropriate when many regressors are in play, as their rate of convergence decreases with the number of regressors, the infamous "curse of dimensionality". In the case of cross-section data, a popular alternative to avoid this problem are the generalised additive models (GAM), introduced by Hastie and Tibshirani (1993). The GAM is a family of semiparametric models that extends parametric linear models by allowing for non-linear relationships of the explanatory variables while still retaining the additive structure of the model. In the case of time-series data, the most suitable alternative to nonparametric models are linear models whose coefficients change over time or follow the dynamics of another random variable. This functionality is coded in R, within the single-equation framework, in packages mgm (Haslbeck and Waldorp 2020) and MARSS (Holmes et al. 2012). Package tvReg uses the same kernel smoothing estimation as package mgm when the latter uses a Gaussian kernel to estimate a VAR model with varying coefficients (TVVAR). However, the interpretation of their results is different because they are aimed at different audiences: mgm focuses on the field of network models, producing network plots to represent relationships between current variables and their lags, whereas tvReg focuses on the field of economics, where a direct interpretation of the TVVAR coefficients is not meaningful and may instead be done via the time-varying impulse response function (TVIRF).
Models with coefficients varying over time can also be expressed in state space form, which assumes that the coefficients change over time in a predetermined way, for example as a Brownian motion. These models can be estimated using the Kalman filter or Bayesian techniques (for instance, Primiceri 2005; Liu and Guo 2020). Packages MARSS and bvarsv (Krueger 2015) implement this approach, based on the Carter and Kohn (1994) algorithm, to estimate the TVVAR. On top of all this, and as far as we can tell, tvReg is the only R package containing tools to estimate time-varying coefficients seemingly unrelated equations (TVSURE) and panel linear models (TVPLM).
Simply, the main objective of tvReg is to provide tools to estimate and forecast linear models with time-varying coefficients in the framework of kernel smoothing estimation, which may be difficult for the non-specialised end-user to code. For completeness, tvReg also implements methods for the time-varying coefficients linear model (TVLM) and the time-varying coefficients autoregressive model (TVAR). Often, these can be estimated using packages gam (Hastie 2022) and mgcv (Wood 2017), which combine (restricted) marginal likelihood techniques with nonparametric methodologies. However, the advantage of tvReg is that it can handle dependency and any kind of distribution in the error term, because it combines least squares techniques with nonparametric methodologies. An example of this is shown in Section 8.
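As a minimal illustration of the TVLM interface, the sketch below simulates a model whose slope varies smoothly over rescaled time and fits it with tvLM; the data and the bandwidth value are arbitrary choices for the sketch, not an example from the package.

```r
# Sketch, assuming tvReg is installed: a TVLM whose slope varies
# smoothly with rescaled time, fitted by kernel smoothing.
library(tvReg)
set.seed(42)
tau <- seq_len(200) / 200               # rescaled time
slope <- sin(2 * pi * tau)              # true time-varying coefficient
x <- rnorm(200)
y <- slope * x + rnorm(200, sd = 0.1)
fit <- tvLM(y ~ 0 + x, bw = 0.1)        # bandwidth fixed by hand for speed
plot(tau, fit$coefficients, type = "l") # estimated coefficient curve
```

The estimated curve can be compared with the true coefficient by adding `lines(tau, slope)` to the plot.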
Summing up, this paper presents a review of the most common time-varying coefficient linear models studied in the econometrics literature during the last two decades, their estimation using kernel smoothing techniques, the usage of functions and methods in the package tvReg, and their latest applications. Along these lines, Table 1 offers a glimpse of the full functionality of tvReg, displaying a summary of its methods, classes and functions.
Function | Class | Functions and methods for class | Based on
---|---|---|---
tvPLM | "tvPLM" | tvRE, tvFE, coef, confint, fitted, forecast, plot, predict, print, resid, summary | plm::plm
tvSURE | "tvsure" | tvGLS, bw, coef, confint, fitted, forecast, plot, predict, print, resid, summary | systemfit::systemfit
tvVAR | "tvvar" | tvAcoef, tvBcoef, tvIRF, tvOLS, tvPhi, tvPsi, bw, coef, confint, fitted, forecast, plot, predict, print, resid, summary | vars::VAR
tvIRF | "tvirf" | coef, confint, plot, print, summary | vars::irf
tvLM | "tvlm" | tvOLS, bw, coef, confint, fitted, forecast, plot, predict, print, resid, summary | stats::lm
tvAR | "tvar" | tvOLS, bw, coef, confint, fitted, forecast, plot, predict, print, resid, summary | stats::ar.ols
A multi-equation model formed by a set of linear models is defined when each equation has its own dependent variable and possibly different regressors. Seemingly unrelated equations, panel data models and vector autoregressive models are included in this category.
The seemingly unrelated equations model (SURE) was proposed by Zellner (1962). It is useful to exploit the correlation structure between the error terms of the equations. Suppose that there are \(N\) linear regressions with different dependent variables, \[\begin{aligned} \label{eq:tvsure} Y_{t} = X_t \beta(z_t)+U_{t}, \quad t=1,\ldots ,T, \end{aligned} \tag{1}\] where \(Y_{t}=(y_{1t},\ldots, y_{Nt})^\top\) and \(y_{i} = (y_{i1}, \ldots, y_{iT})^\top\) denotes the values over the recorded time period of the \(i\)-th dependent variable. Each equation in ((1)) may have a different number of exogenous variables, \(p_{i}\). The regressor matrix is \(X_t=diag(x_{1t},\ldots, x_{Nt})\), where \(x_{it}=( x_{i1t},\ldots,x_{ip_{i}t})^\top\) contains the regressors of equation \(i\) at time \(t\), and \(\beta(z_t)=( \beta_{1}(z_t)^\top,\ldots,\beta_{N}(z_t)^\top)^\top\) is a vector of order \(P = p_1+p_2+ \ldots+p_N\). The error vector, \(U_{t}=(u_{1t},\ldots, u_{Nt})^\top\), has zero mean and covariance matrix \(\mathbb{E}(U_tU^\top_t)=\Sigma_t\) with elements \(\sigma_{ii^\prime t}\).
It is important to differentiate between two types of smoothing variables: 1) \(z_t = \tau = t/T\) is the rescaled time with \(\tau \in [0, 1]\), and 2) \(z_t\) is the value at time \(t\) of the random variable \(Z = \{z_t\}_{t=1}^T\). In other words, time-varying coefficients may be defined as unknown functions of time, \(\beta(z_t)= f(\tau)\), or as unknown functions of a random variable, \(\beta(z_t) = f(z_t)\). The estimation of the TVSURE has been studied by Henderson et al. (2015) for a random \(z_t\), and by Orbe et al. (2005) and Casas et al. (2019) for \(z_t = \tau\). These estimators are consistent and asymptotically normal under certain assumptions on the size of the bandwidth, kernel regularity, error moments and dependency. Details are left out of this text as they can easily be found in the related literature.
The estimation of system ((1)) may be done separately for each equation as if there were no correlation in the error term across equations, i.e. system ((1)) has a total of \(N\) different TVLMs with possibly \(N\) different bandwidths, \(b_i\). In this case, the time-varying coefficients are obtained by combining ordinary least squares (OLS) and the local polynomial kernel estimator, which is extensively studied in Fan and Gijbels (1996). The result is the time-varying OLS, denoted by TVOLS herein. Two versions of this estimator are implemented in tvReg: i) the TVOLS that uses the local constant (lc) kernel method, also known as the Nadaraya-Watson estimator; and ii) the TVOLS that uses the local linear (ll) method. Focusing on a single equation \(i\), and assuming that \(\beta_i(\cdot)\) is twice differentiable, an approximation of \(\beta_i(z_t)\) around \(z\) is given by the Taylor expansion, \(\beta_i(z_t) \approx \beta_i(z) + \beta_i^{(1)}(z) (z_t -z)\), where \(\beta_i^{(1)}(z) = d\beta_i(z)/dz\) is its first derivative. The estimates solve the following minimisation: \[(\hat \beta_i(z), \hat \beta_i^{(1)}(z))= \arg \min_{\theta_0, \theta_1} \sum_{t=1}^T \left[ y_{it} - x_{it}^\top \theta_0 - (z_t -z)\, x_{it}^\top\theta_1\right]^2 K_{b_i}(z_t -z).\] Roughly, these methodologies fit a set of weighted local regressions with an optimally chosen window size. The size of these windows is given by the bandwidth \(b_i\), and the weights are given by \(K_{b_i}(z_t -z)= b_i^{-1} K(\frac{z_t-z}{b_i})\), for a kernel function \(K(\cdot)\).
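Roughly, the local constant (lc) version of this minimisation amounts to one weighted least squares fit per grid point. A self-contained base-R sketch on simulated data (all names here are illustrative, not tvReg internals):

```r
# Local constant TVOLS: a weighted least squares fit at each time point,
# with Epanechnikov weights centred on that point.
epa <- function(u) 0.75 * (1 - u^2) * (abs(u) <= 1)  # Epanechnikov kernel
set.seed(1)
n <- 300
tau <- seq_len(n) / n                        # rescaled time
X <- cbind(1, rnorm(n))                      # intercept + one regressor
beta <- cbind(0.5 * tau, cos(2 * pi * tau))  # true coefficient curves
y <- rowSums(X * beta) + rnorm(n, sd = 0.1)
b <- 0.1                                     # bandwidth fixed by hand
est <- t(sapply(tau, function(t0) {
  w <- epa((tau - t0) / b)                   # kernel weights around t0
  lm.wfit(X, y, w)$coefficients              # weighted least squares
}))
```

Each row of `est` contains the two coefficient estimates at one grid point, so the columns of `est` approximate the columns of `beta`.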
The general expression of the local linear estimator is \[\left(\begin{array}{c} \hat\beta_{i}(z_t)\\ \hat\beta_{i}^{(1)}(z_t) \end{array}\right) = \left (\begin{array}{cc} S_{T,0}(z_t) & {S^\top_{T,1}}(z_t)\\ S_{T,1}(z_t) & S_{T,2}(z_t)\end{array}\right)^{-1} \left (\begin{array}{c} T_{T,0}(z_t) \\ T_{T,1}(z_t)\end{array}\right) \label{eq:tvols} \tag{2}\] with \[\begin{aligned} S_{T, s}(z_t) = &\frac{1}{T}\sum_{r=1}^T x_{ir} x_{ir}^\top (z_r -z_t)^s K_{b_i}\left(z_r -z_t\right) \\ T_{T, s}(z_t) = &\frac{1}{T}\sum_{r=1}^T x_{ir} (z_r - z_t)^s K_{b_i}\left(z_r -z_t\right) y_{ir} \end{aligned}\] and \(s= 0, 1, 2\). The particular case of the local constant estimator is calculated as \(\hat\beta_{i}(z_t) = S_{T,0}^{-1}(z_t) T_{T, 0} (z_t)\) and only requires \(\beta_i(\cdot)\) to be once differentiable.
A second option is to use the correlation matrix of the error term in the estimation of system ((1)). This is called the time-varying generalised least squares (TVGLS) estimation. Its mathematical expression is the same as ((2)) with the following matrix components: \[\begin{aligned} S_{T, s}(z_t) = &\frac{1}{T}\sum_{r=1}^T X_r^{\top} K_{B,rt}^{1/2}\Sigma _{r}^{-1} K_{B,rt}^{1/2} X_r (z_r -z_t)^s \nonumber\\ T_{T, s}(z_t) = &\frac{1}{T}\sum_{r=1}^T X_r^{\top} K_{B,rt}^{1/2} \Sigma _{r}^{-1} K_{B, rt}^{1/2} Y_r (z_r - z_t)^s, \label{eq:tvgls} \end{aligned} \tag{3}\] where \(K_{B,rt}= diag( K_{b_1,rt}, \ldots , K_{b_N,rt})\) and \(K_{b_i,rt}= (T b_i)^{-1} K((z_r-z_t)/(T b_i))\) is the matrix of weights introducing smoothness according to the vector of bandwidths, \(B=(b_1,\ldots,b_N)^\top\). Note that this minimisation problem accounts for the time-varying structure of the variance-covariance matrix of the errors, \(\Sigma_t\).
The TVGLS assumes that the error variance-covariance matrix is known. In practice, this is unlikely and it must be estimated, resulting in the Feasible TVGLS estimator (TVFGLS). This estimator consists of two steps:
Estimate \(\Sigma_t\) based on the residuals of a line-by-line estimation (i.e., setting \(\Sigma_t\) to the identity matrix). If \(\Sigma_t\) is known to be constant, the sample variance-covariance matrix of the residuals is a consistent estimator of it. If \(\Sigma_t\) changes over time, a nonparametric estimator such as the one explained in Section 6 is a consistent alternative.
Estimate the coefficients of the TVSURE by plugging \(\hat \Sigma_t\) from step 1 into Equation ((3)).
To ensure a good estimation of \(\Sigma_t\), the iterative TVFGLS may be used: perform steps 1 and 2 as above, re-estimate \(\Sigma_t\) from the residuals of step 2, and repeat step 2 until the estimates of \(\Sigma_t\) converge or the maximum number of iterations is reached.
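In pseudo-R, the iterative scheme can be sketched as below; `tv_gls` and `estimate_sigma` are hypothetical placeholders for the line-by-line estimator and the covariance estimator of Section 6, not functions exported by tvReg.

```r
# Iterative TVFGLS sketch; tv_gls() and estimate_sigma() are
# hypothetical placeholders, not tvReg functions.
tv_fgls <- function(y, x, tol = 1e-5, maxiter = 100) {
  fit <- tv_gls(y, x, Sigma = NULL)            # step 1: Sigma = identity
  for (iter in seq_len(maxiter)) {
    Sigma <- estimate_sigma(residuals(fit))    # re-estimate Sigma_t
    fit.new <- tv_gls(y, x, Sigma = Sigma)     # step 2: plug-in TVGLS
    if (max(abs(coef(fit.new) - coef(fit))) < tol) break
    fit <- fit.new
  }
  fit.new
}
```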
Panel data linear models (PLM) are a particular case of SURE models with the same variables in each equation, but measured for different cross-section units, such as countries, and for different points in time. All equations have the same coefficients, apart from the intercept, which can differ across cross-sections. Therefore, the data from all cross-sections can be pooled together. The individual effects, \(\alpha_i\), account for the heterogeneity embedded in the cross-section dimension. This package only handles balanced panel datasets, i.e. datasets with the same number of time observations for each cross-section unit.
Coefficient dynamics can be added to classical PLM models using a time-varying coefficients panel data model, TVPLM. Recent developments in this type of model can be found in Sun et al. (2009), Dong et al. (2015), Casas et al. (2021) and Dong et al. (2021), among others, with general model \[y_{it} = \alpha_i + x_{it}^\top \beta(z_{t}) +u_{it} \quad i=1,\ldots,N , \quad t = 1, \ldots, T. \label{eq:tvpanel} \tag{4}\] Note that the smoothing variable only changes in the time dimension, unlike in the SURE model, where it changed over \(i\) and \(t\). The three estimators of Equation ((4)) in tvReg are:
The time-varying pooled ordinary least squares (TVPOLS) estimator has the same expression as estimator ((2)) with the following terms: \[\begin{aligned} S_{T, s}(z_t) = & X^{\top} K_{b,t}^* X (Z - z_t)^s \nonumber\\ T_{T, s}(z_t) = &X^{\top} K_{b,t}^* Y (Z - z_t)^s, \label{eq:tvpols} \end{aligned} \tag{5}\] where \(K_{b, t}^*= I_N \otimes diag\{K_b(z_1-z_t),\ldots, K_b(z_T-z_t)\}\). Note that it is not possible to ignore the panel structure in the semiparametric model because the coefficients change over time. The consistency and asymptotic normality of this estimator require the classical assumptions about the kernel and the regularity of the coefficients, available in the related literature.
The time-varying random effects (TVRE) estimator is also given by estimator ((2)), now with a non-identity \(\Sigma_t\): \[\begin{aligned} S_{T, s}(z_t) = &X^{\top} K_{b,t}^{*1/2} \Sigma_t^{-1} K_{b,t}^{*1/2} X (Z - z_t)^s \nonumber\\ T_{T, s}(z_t) = &X^{\top} K_{b,t}^{*1/2} \Sigma_t^{-1} K_{b,t}^{*1/2} Y (Z - z_t)^s. \label{eq:tvRE} \end{aligned} \tag{6}\] Note that this is a simpler case of ((3)) with the same bandwidth for all equations. The variance-covariance matrix is estimated in the same way, using the residuals from the TVPOLS, and the algorithm may be iterated until convergence of the coefficients.
The time-varying fixed effects (TVFE) estimator. Unfortunately, the transformation used in the within estimation does not work in the time-varying coefficients model because the coefficients depend on time (Sun et al. 2009 explain the issue in detail). Therefore, it is necessary to impose the assumption \(\sum_{i=1}^N \alpha_i=0\) for identification. The terms in the TVFE estimator are: \[\begin{aligned} S_{T, s}(z_t) = &X^{\top} W_{b,t} X (Z - z_t)^s \nonumber\\ T_{T, s}(z_t) = &X^{\top} W_{b,t} Y (Z - z_t)^s, \label{eq:tvFE} \end{aligned} \tag{7}\] where \(W_{b,t}=D_{t}^\top K_{b, t}^*D_{t}\), \(D_{t}=I_{NT} - D(D^\top K_{b, t}^* D)^{-1} D^\top K_{b,t}^*\), \(D=(-1_{N-1},I_{N-1})^\top \otimes 1_T\), and \(1_k\) is the unit vector of length \(k\). The fixed effects are given by \(\hat \alpha = (D^\top K_{b,t}^*D)^{-1}D^\top K_{b,t}^*(Y - X^\top \beta)\). Finally, \(\hat \alpha_i = \frac{1}{T} \sum_{t=1}^T \alpha_{it}\) for \(i= 2, \ldots, N\).
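The three estimators above are selected through the method argument of tvPLM. The sketch below uses the Produc panel shipped with package plm purely for illustration; the variable names belong to that dataset, and the index argument and bandwidth value are assumptions of the sketch rather than recommended settings.

```r
# Sketch, assuming tvReg and plm are installed; Produc is a balanced
# panel of US states. method = "pooling" gives the TVPOLS,
# method = "random" the TVRE and method = "within" the TVFE.
library(tvReg)
data("Produc", package = "plm")
fit.tvfe <- tvPLM(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp,
                  index = c("state", "year"), data = Produc,
                  method = "within", bw = 0.5)
```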
Macroeconomic econometrics experienced a revolution when Sims (1980) presented the vector autoregressive (VAR) model: a new way of summarising relationships among several variables while getting around the problem of endogeneity of structural models. The VAR model has lagged values of the dependent variable, \(y_t\), as regressors, to which further exogenous variables can be added. Unless the model is constrained, all variables are the same for every equation, which simplifies the algebra. The model coefficients and variance-covariance matrix may be estimated by maximum likelihood, OLS or GLS. VAR coefficients and the variance-covariance matrix do not have a direct economic interpretation. However, it is possible to use them to recover a structural model by imposing a number of restrictions, and so analyse the transmission of a shock, for example a new monetary policy, to the macroeconomy using the impulse response function (IRF). Lütkepohl (2005) dives into the theoretical properties of these models in detail.
The TVVAR(\(p\)) is an \(N\)-dimensional system of time-varying autoregressive processes of order \(p\), \[Y_{t}= A_{0,t}+ A_{1,t} Y_{t-1} + \ldots+ A_{p,t} Y_{t-p} + U_t, \ \ t= 1, 2,\ldots, T. \label{eq:tvvar} \tag{8}\] In Equation ((8)), \(Y_t=(y_{1t}, \ldots, y_{Nt})^\top\), and the coefficient matrices at each point in time, \(A_{j,t}=(a_{1t}^j, \ldots, a_{Nt}^j)\), \(j=1, \ldots, p\), are of dimension \(N\times N\). The notation \(A_{j,t}\) means that the elements of this matrix are unknown functions of either the rescaled time, \(\tau\), or of a random variable at time \(t\). The innovation, \(U_t=(u_{1t}, \ldots, u_{Nt})^\top\), is an \(N\)-dimensional identically distributed random variable with \(E(U_t) = 0\) and possibly a time-varying positive definite variance-covariance matrix, \(E(U_t U_s^\top) = \Sigma_t\) for \(t=s\) and \(E(U_t U_s^\top)=0\) otherwise. If matrix \(A_{j, t}\) is a function of \(\tau\), then process ((8)) is locally stationary in the sense of Dahlhaus (1997), which occurs when the functions in matrices \(A_{j, t}\) are constant or change smoothly over time. Then, process ((8)) at time \(t\) has a well-defined unique solution given by the Wold representation, \[\bar y_t = \sum_{j = 0}^\infty \Phi_{j, t}{U}_{t-j}, \label{eq:6} \tag{9}\] such that \(|Y_t - \bar y_t|\rightarrow 0\) almost surely. Matrix \(\Phi_{0, t} = I_N\) and \(\Phi_{s,t}= \sum_{j=1}^s \Phi_{s-j,t} A_{j,t}\) for horizons \(s = 1, 2,\ldots\) As for the constant model, the \(\Phi_{s,t}\) are the time-varying coefficient matrices of the impulse response function (TVIRF). When the innovations are orthogonal, element \((i, j)\) of \(\Phi_{s,t}\) may be interpreted as the expected response of \(y_{i, t+s}\) to an exogenous shock of \(y_{j,t}\), ceteris paribus the lags of \(y_t\). Otherwise, an orthogonal TVIRF can be obtained as \(\Psi_{j,t} = \Phi_{j,t} P_t\), where \(\Sigma_t = P_t P_t^\top\) is the Cholesky decomposition of \(\Sigma_t\) at time \(t\). More theoretical details can be found in Yan et al. (2021).
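For a fixed time \(t\), the recursion for \(\Phi_{s,t}\) is identical to the constant-coefficient case and is easy to state in code. A base-R sketch, where A is a list holding the coefficient matrices \(A_{1,t}, \ldots, A_{p,t}\) at that time point (with \(A_{j,t}=0\) for \(j > p\) implied by the recursion):

```r
# Phi_0 = I_N; Phi_s = sum_{j=1}^{min(s,p)} Phi_{s-j} A_j
phi_matrices <- function(A, n.ahead) {
  N <- nrow(A[[1]])
  p <- length(A)
  Phi <- vector("list", n.ahead + 1)
  Phi[[1]] <- diag(N)                       # Phi_0 = identity
  for (s in seq_len(n.ahead)) {
    Phi[[s + 1]] <- matrix(0, N, N)
    for (j in seq_len(min(s, p)))           # A_j = 0 beyond lag p
      Phi[[s + 1]] <- Phi[[s + 1]] + Phi[[s - j + 1]] %*% A[[j]]
  }
  Phi
}
```

For a univariate AR(1) with coefficient 0.5, for example, the recursion returns \(\Phi_s = 0.5^s\).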
In the macroeconomic literature, the Bayesian estimation of process ((8)) has attracted a lot of attention in recent years, driven by results in Cogley and Sargent (2005), Primiceri (2005) and Kapetanios et al. (2012). In their approach, the coefficients are assumed to follow a random walk. Recently, Kapetanios et al. (2017) studied the inference of the local constant estimator of a TVVAR(\(p\)) for large systems, and they found an increase in forecast accuracy in comparison to that of the VAR(\(p\)).
tvSURE
The main argument of this function is a list of formulas, one for each equation. The formula follows the format of formula in the package systemfit, which implements estimators of parametric multi-equation models with constant coefficients. The tvSURE wraps the tvOLS and tvGLS methods to estimate the coefficients of system ((1)). The tvOLS method is used by default, calculating estimates for each equation independently with different bandwidths, bw. The user is able to enter a set of bandwidths or a single bandwidth to be used in the estimation instead. The tvGLS method has argument Sigma, where a known variance-covariance matrix of the error can be entered in Equation ((3)). Otherwise, if Sigma = NULL, the variance-covariance matrix \(\Sigma_t\) is estimated using function tvCov, which is discussed in Section 6.
In addition to formula, function tvSURE has other arguments to control and choose the desired estimation procedure:
- z: All methods assume by default that the coefficients are unknown functions of \(\tau = t/T\), and therefore argument z is set to NULL. The user can modify this setting by entering a numeric vector in argument z with the values of the random smoothing variable over the corresponding time period. Note that the current version only allows one single smoothing random variable, z, common to all equations, and balanced panels.
- bw: When argument bw is set to NULL, it is automatically selected by leave-one-out cross-validation. It is possible to select it by leave-\(k\)-out cross-validation (Chu and Marron 1991) by setting argument cv.block = k (k = 0 by default). This minimisation can be slow for large datasets, and it should be avoided if the user knows an appropriate value of the bandwidth for the problem at hand.
- tkernel: The three choices for this argument are tkernel = "Triweight" (default), tkernel = "Epa" and tkernel = "Gaussian". The first two options refer to the Triweight and Epanechnikov kernels, which have compact support on [-1, 1]. The authors recommend using either of those two instead of the Gaussian kernel, which, in general, requires more calculations.
- est: The default estimation methodology is the Nadaraya-Watson or local constant estimator, set with est = "lc", which fits a constant in each interval defined by the bandwidth. The argument est = "ll" can be chosen to perform a local linear estimation (i.e., to fit a polynomial of order 1).
- singular.ok: The tvOLS method used in the estimation wraps the lm.wfit method, which by default allows the fitting of a low-rank model, in which case some coefficient estimates can be NAs. The user can change the argument singular.ok to FALSE, so that the program stops in case of a low-rank model.
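Combining these arguments, and with system, df and zvar as illustrative placeholders for a list of formulas, a data frame and a smoothing variable, a call might look as follows:

```r
# Illustrative call (system, df and zvar are placeholders): local linear
# estimation, Epanechnikov kernel, a random smoothing variable and
# leave-5-out cross-validation for the bandwidth selection.
fit <- tvSURE(system, data = df, z = zvar, est = "ll",
              tkernel = "Epa", cv.block = 5)
fit$bw  # bandwidths selected by cross-validation
```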
The user can restrict certain coefficients in the TVSURE model using arguments R and r. Note that the restriction is done by setting those coefficients to a constant. Furthermore, argument method defines the type of estimator to be used. The possible choices in argument method are:

- "tvOLS" for a line-by-line estimation, i.e., with \(\Sigma\) the identity matrix.
- "tvGLS" to estimate the coefficients of the system using \(\Sigma_t\), which the user must enter in argument Sigma. Argument Sigma takes either a symmetric matrix or an array. If Sigma is a matrix (constant over time), then it must have dimensions neq \(\times\) neq, where neq is the number of equations in the system. If \(\Sigma_t\) changes with time, then argument Sigma is an array of dimension neq \(\times\) neq \(\times\) obs, where the last dimension measures the number of time observations. Note that if the user enters a diagonal variance-covariance matrix with diagonal values different from one, then a time-varying weighted least squares is performed. If method = "tvGLS" is entered but Sigma = NULL, then tvSURE is fitted as if method = "tvOLS" and a warning is issued.
- "tvFGLS" to estimate the coefficients of the system using an estimate of \(\Sigma_t\). By default, only one iteration is performed in the estimation, unless argument control indicates otherwise. The user can choose the maximum number of iterations or the level of tolerance in the estimation of \(\Sigma_t\). See the example below for details.
The package systemfit contains the Kmenta dataset, first described in Kmenta (1986), to show the usage of the function systemfit to fit SURE models. This example has two equations: i) a demand equation, which explains how food consumption per capita, consump, depends on the ratio of food prices, price, and disposable income, income; and ii) a supply equation, which shows how consumption depends on price, the ratio of prices received by farmers to general consumer prices, farmPrice, and a possible time trend, trend. Mathematically, this SURE model is
\[\begin{aligned}
consump_t = &\beta_{10} + \beta_{11} price_t + \beta_{12} income_t + u_{1t}\nonumber\\
consump_t = &\beta_{20} + \beta_{21} price_t+ \beta_{22} farmPrice_t +\beta_{23}\, t+ u_{2t}.
\label{eq:Kmenta}
\end{aligned} \tag{10}\]
The code below defines the system of equations using two formula calls, which are put into a "list".
> data("Kmenta", package = "systemfit")
> eqDemand <- consump ~ price + income
> eqSupply <- consump ~ price + farmPrice + trend
> system <- list(demand = eqDemand, supply = eqSupply)
Two parametric models are fitted to the data using the function systemfit: one assuming that there is no correlation in the errors (the default), OLS.fit below; and another one assuming the existence of correlation in the system error term, setting method = "SUR", FGLS1.fit below. Arguing that the coefficients in ((10)) may change over time, the corresponding TVSUREs are fitted using the function tvSURE with the default in argument method and with method = "tvFGLS", respectively. They are denoted by TVOLS.fit and TVFGLS1.fit.
> OLS.fit <- systemfit::systemfit(system, data = Kmenta)
> FGLS1.fit <- systemfit::systemfit(system, data = Kmenta, method = "SUR")
> TVOLS.fit <- tvSURE(system, data = Kmenta)
> TVFGLS1.fit <- tvSURE(system, data = Kmenta, method = "tvFGLS")
In the previous chunk, the FGLS and TVFGLS estimators use only one iteration. However, the user can choose the iterative FGLS and the iterative TVFGLS models, which estimate the coefficients iteratively until convergence. The convergence level can be chosen with the argument tol (1e-05 by default) and the maximum number of iterations with the argument maxiter. The following chunk illustrates their usage:
> FGLS2.fit <- systemfit::systemfit(system, data = Kmenta, method = "SUR",
+ maxiter = 100)
> TVFGLS2.fit <- tvSURE(system, data = Kmenta, method = "tvFGLS",
+ control = list(tol = 0.001, maxiter = 100))
Some of the coefficients can be restricted to a certain constant value in tvSURE. This can aid statistical inference to test certain conditions; see the example below. Matrix R has as many rows as restrictions in r and as many columns as regressors in the model. In this case, Model ((10)) has 7 coefficients, which are ordered as they appear in the list of formulas. Note that the time-varying coefficient of the variable trend is redundant when an intercept is included in the second equation of the TVSURE. Therefore, we want to restrict its coefficient to zero. For illustration, we also impose \(\beta_{11, t} - \beta_{21, t} = 0.5\):
> Rrestr <- matrix(0, 2, 7)
> Rrestr[1, 7] <- 1; Rrestr[2, 2] <- 1; Rrestr[2, 5] <- -1
> qrestr <- c(0, 0.5)
> TVFGLS.rest <- tvSURE(system, data = Kmenta, method = "tvFGLS",
+ R = Rrestr, r = qrestr,
+ bw = TVFGLS1.fit$bw, bw.cov = TVFGLS1.fit$bw.cov)
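The restrictions can then be checked on the estimates. Assuming that coef returns the full matrix of time-varying estimates, with columns ordered as the coefficients in the list of formulas (the exact shape of the returned object is described in the package documentation):

```r
# Columns follow the order of the 7 coefficients in the system: the 7th
# (trend) is restricted to 0 and the difference between the 2nd and 5th
# (the two price coefficients) to 0.5.
beta.rest <- coef(TVFGLS.rest)
range(beta.rest[, 7])                   # should be 0 at all time points
range(beta.rest[, 2] - beta.rest[, 5])  # should be 0.5 at all time points
```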
Several studies have argued that the three-factor model of Fama and French (1993) does not explain the whole variation in average returns. In this line, Fama and French (2015) added two new factors that measure the differences in profitability (robust and weak) and investment (conservative and aggressive), creating their five-factor model (FF5F). This model was applied in Fama and French (2017) to analyse international markets. A time-varying coefficients version of the FF5F has been studied in Casas et al. (2019), whose dataset is included in tvReg under the name FF5F. The TVFF5F model is
\[\begin{aligned}
R_{it} - RF_{it} = & a_{it}+ b_{it} \ (RM_{it} -RF_{it}) + s_{it} \ SMB_{it} + h_{it} \ HML_{it} \nonumber\\
&+ r_{it}\ RMW_{it}+c_{it} \ CMA_{it}+u_{it},
\label{eq:tvff5}
\end{aligned} \tag{11}\]
where \(R_{it}\) refers to the price return of the asset of a certain portfolio for market \(i\) at time \(t\), \(RF_{it}\) is the risk-free return rate, and \(RM_{it}\) represents the total market portfolio return. Therefore, \(R_{it} - RF_{it}\) is the expected excess return and \(RM_{it} -RF_{it}\) is the excess return on the market portfolio. Among the other factors, \(SMB_{it}\) stands for "small minus big" and represents the size premium, \(HML_{it}\) stands for "high minus low" and represents the value premium, \(RMW_{it}\) is a profitability factor, and \(CMA_{it}\) accounts for the investment capabilities of the company. Finally, the error term structure is
\[\begin{aligned}
\nonumber
E(u_{it}u_{js})=\left\{\begin{array}{lll} \sigma_{iit}= \sigma^2_{it}&\qquad& i=j,\quad t=s\\ \sigma_{ijt}&\qquad& i\neq j,\quad t=s\\ 0&\qquad& t\neq s. \end{array} \right.
\end{aligned}\]
The FF5F dataset has been downloaded from the Kenneth R. French (2016) data library. It contains the five factors for four different international markets: North America (NA), Japan (JP), Europe (EU) and Asia Pacific (AP). For the dependent variable, the excess returns of portfolios formed on size and book-to-market have been selected. The period runs from July 1990 to August 2016 at a monthly frequency. The data contain the Small/Low, Small/High, Big/Low and Big/High portfolios. The factors in the TVFF5F model explain the variation in returns well if the intercept is statistically zero. The lines of code below illustrate how to fit a TVSURE to the Small/Low portfolio.
> data("FF5F")
> eqNA <- NA.SMALL.LoBM - NA.RF ~ NA.Mkt.RF + NA.SMB + NA.HML + NA.RMW + NA.CMA
> eqJP <- JP.SMALL.LoBM - JP.RF ~ JP.Mkt.RF + JP.SMB + JP.HML + JP.RMW + JP.CMA
> eqAP <- AP.SMALL.LoBM - AP.RF ~ AP.Mkt.RF + AP.SMB + AP.HML + AP.RMW + AP.CMA
> eqEU <- EU.SMALL.LoBM - EU.RF ~ EU.Mkt.RF + EU.SMB + EU.HML + EU.RMW + EU.CMA
> system2 <- list(NorthA = eqNA, JP = eqJP, AP = eqAP, EU = eqEU)
> TVFF5F <- tvSURE(system2, data = FF5F, method = "tvFGLS",
+ bw = c(0.56, 0.27, 0.43, 0.18), bw.cov = 0.12)
The package tvReg also includes the functionality to compute confidence intervals for the coefficients of class attributes "tvlm", "tvar", "tvplm", "tvsure" and "tvirf" by extending the confint method. The algorithm in Fan and Zhang (2000) and Chen et al. (2017) to calculate bootstrap confidence intervals has been adapted to all these class attributes. Argument level is set to 0.95 (a 95% confidence interval) by default. Argument runs (100 by default) is the number of resamples used in the bootstrap calculation. Note that the calculation using runs = 100 can take long, so we suggest trying a small value of runs first to get an initial intuition of the results. Because the coefficients are time-varying, only wild bootstrap residual resampling is implemented. Two choices of wild bootstrap are allowed in argument tboot: the default proposed in Mammen (1993) (tboot = "wild") and the standard normal (tboot = "wild2").
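The arguments above can be combined for a quick exploratory run. The sketch below is only an illustration of the interface just described; runs = 30 is an arbitrary small value chosen for speed, not a recommendation:

```r
# Exploratory 95% interval: few bootstrap resamples (runs = 30 is an
# assumption for speed) and the standard-normal wild bootstrap
# (tboot = "wild2") instead of the Mammen (1993) default.
TVFF5F.quick <- confint(TVFF5F, level = 0.95, runs = 30, tboot = "wild2")
```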
In the backend code, coefficient estimates from all replications are stored in the BOOT variable. In this way, the calculations are not repeated if the user chooses a different level for the same object. In the chunk below, the confint method calculates the 90% confidence interval of the object TVFF5F. Subsequently, the 95% interval is calculated quickly because the resample calculations from the first interval are re-used for the second. Thus, the 90% confidence interval calculation takes around 318 seconds on a 2.2 GHz Intel Core i7 processor, while the posterior 95% confidence interval takes only around 0.7 seconds.
> TVFF5F.90 <- confint(TVFF5F, level = 0.90)
> TVFF5F.95 <- confint(TVFF5F.90)
The plot method is implemented for each of the six class attributes in tvReg. For example, the 95% confidence intervals of the intercept for the North American, Japanese, Asia Pacific and European markets in Figure 1 are obtained with the plot statement below, which produces four independent plots of the first variable (the intercept in this case) in each equation due to argument vars = 1.
> plot(TVFF5F.95, vars = 1)
The user can also choose to plot the coefficients of several variables and/or equations. Plots are grouped by equation, with a maximum of three variables per plot. The piece of code below shows how to plot the coefficients of the second and third variables from the Japan market equation, the results of which can be seen in Figure 2.
> plot(TVFF5F.95, vars = c(2, 3), eqs = 2)
tvPLM
The tvPLM method is inspired by the plm method from the package plm. It converts data into an object of the class attribute "pdata.frame", using argument index to define the cross-section and time dimensions. If index = NULL (default), the first two columns of data define the dimensions. The tvPLM method wraps the tvRE and tvFE methods to estimate the coefficients of time-varying panel data models. The user can provide additional optional arguments to modify the default estimation. See Section 3 for details on arguments z, bw, est and tkernel. Furthermore, argument method defines the estimator used; the possible choices, mirroring those of package plm, are "pooling" (default), "random" and "within".
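As a hedged sketch of the alternative estimators, the call below fits the healthcare specification used later in this section with method = "random"; because bw is omitted, the bandwidth is selected automatically, so the fitted values shown here are not fixed in advance:

```r
# Time-varying random effects sketch on the OECD dataset shipped with
# tvReg; bw is omitted, so the bandwidth is selected automatically.
data("OECD")
elast.tvre <- tvPLM(lhe ~ lgdp + pop65 + pop14 + public, data = OECD,
                    index = c("country", "year"), method = "random")
```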
The income elasticity of healthcare expenditure is defined as the percentage change in healthcare expenditure in response to a percentage change in income per capita. If this elasticity is greater than one, then healthcare expenditure grows faster than income, as luxury goods do, and is driven by market forces alone. The heterogeneity of health systems among countries and time periods has motivated the use of panel data models, for example in Gerdtham et al. (1992), who use a FE model. Recently, Casas et al. (2021) have investigated the problem from the time-varying panel models perspective using the TVFE estimation. In addition to the income per capita, measured by the log GDP, the authors use the proportion of population over 65 years old, the proportion of population under 15 years old and the share of public funding of healthcare. The income elasticity estimate with the FE estimator implemented in plm is greater than 1, a counter-intuitive result. This issue is resolved using the TVFE estimator implemented in tvReg. The code below estimates the coefficients with the parametric and semiparametric models:
> data("OECD")
> elast.fe <- plm::plm(lhe ~ lgdp + pop65 + pop14 + public, data = OECD,
+ index = c("country", "year"), model = "within")
> elast.tvfe <- tvPLM(lhe ~ lgdp + pop65 + pop14 + public, data = OECD,
+ index = c("country", "year"), method = "within",
+ bw = 0.67)
> elast.fe <- confint(elast.fe)
> elast.tvfe <- confint(elast.tvfe)
Figure 3 shows the elasticity estimates using the FE and TVFE estimators. The constant coefficients model (dashed line) suggests that healthcare is a luxury good (elasticity above 1), while the time-varying coefficients model (solid line) suggests an elasticity below 0.8.
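A figure in the style of Figure 3 can presumably be drawn with the plot method for "tvplm" objects; the call below is an assumption by analogy with the "tvsure" plots above, not the exact code behind the figure:

```r
# Hypothetical call: plot the time-varying income elasticity (first
# variable, selected with vars = 1) together with the confidence band
# stored in the object by the previous confint call.
plot(elast.tvfe, vars = 1)
```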
tvVAR and tvIRF
A TVVAR(\(p\)) model is a system of time-varying autoregressive equations of order \(p\). The dependent variable, y, is of the class attribute "matrix" or "data.frame", with as many columns as equations. The regressors are the same for all equations: an intercept, included if the argument type = "const" (default) and excluded if type = "none"; lagged values of y; and other exogenous variables in exogen. Econometrically, the tvOLS method is called to calculate the estimates of each equation independently, using one bandwidth per equation. The user can choose between automatic bandwidth selection; entering a single value in bw, meaning that all equations are estimated with the same bandwidth; or entering a vector of bandwidths, one for each equation. The tvVAR function returns a list of the class attribute "tvvar", which can be used to estimate the TVIRF model with the function tvIRF.
The assessment and forecast of the effects of monetary policy on macroeconomic variables, such as inflation, economic output and employment, are commonly modelled within the econometric framework of VAR models and interpreted through the IRF. In recent years, scholars of macroeconometrics have searched intensely for a way to include time variation in the coefficients and covariance matrix of the VAR model. The reason is that the macroeconomic climate evolves over time and the effects of monetary policy must be identified locally rather than globally. In the Bayesian framework, Primiceri (2005) used the Carter and Kohn (1994) algorithm to fit the TVP-VAR to this monetary policy problem. Results of the latter can be replicated with the functions in the package bvarsv and compared with the results of tvReg, which fits the following TVVAR(4): \[\text{inf}_t = a_{t}^1 +\sum_{i=1}^4 b_{it}^1 \ \text{inf}_{t-i} +\sum_{i=1}^4 c_{it}^1 \ \text{une}_{t-i} +\sum_{i=1}^4 d_{it}^1\ \text{tbi}_{t-i} +u_{t}^1\]
\[\text{une}_t = a_{t}^2 +\sum_{i=1}^4 b_{it}^2 \ \text{inf}_{t-i} +\sum_{i=1}^4 c_{it}^2 \ \text{une}_{t-i} +\sum_{i=1}^4 d_{it}^2 \ \text{tbi}_{t-i} +u_{t}^2\]
\[\text{tbi}_t = a_{t}^3 +\sum_{i=1}^4 b_{it}^3 \ \text{inf}_{t-i} +\sum_{i=1}^4 c_{it}^3 \ \text{une}_{t-i} +\sum_{i=1}^4 d_{it}^3 \ \text{tbi}_{t-i} +u_{t}^3.\]
Central banks commonly regulate the money supply by changing interest rates to keep inflation growth stable. The R code below uses macroeconomic data from the United States, exactly the data used in Primiceri (2005), with the following three variables: inflation rate (inf), unemployment rate (une) and the three-month treasury bill interest rate (tbi). For illustration, a VAR(4) model is estimated using the function VAR from the package vars, a TVVAR(4) model is estimated using the function tvVAR from the package tvReg, and a TVP-VAR(4) model is estimated using the function bvar.sv.tvp from the package bvarsv. Furthermore, their corresponding impulse response functions with horizon 20 are calculated to forecast how inflation responds to a positive shock in interest rates. The TVVAR(4) can also be estimated with the function tvmvar from the R package mgm, which gives the same coefficient estimates as tvVAR for the Gaussian kernel and the same bandwidth. However, package mgm does not provide an impulse response function and, for this reason, it is left out of the example.
> data(usmacro, package = "bvarsv")
> VAR.usmacro <- vars::VAR(usmacro, p = 4, type = "const")
> TVVAR.usmacro <- tvVAR(usmacro, p = 4, bw = c(1.14, 20, 20), type = "const")
> TVPVAR.usmacro <- bvarsv::bvar.sv.tvp(usmacro, p = 4, pdrift = TRUE, nrep = 1000,
+ nburn = 1000, save.parameters = TRUE)
The user can provide additional optional arguments to modify the default estimation. See Section 3 to understand the usage of arguments bw, tkernel, est and singular.ok. In addition, the function tvVAR has the following arguments:
The number of lags is given by the model order set in the argument p.
Other exogenous variables can be included in the model using the argument exogen, which accepts a vector or a matrix with the same number of rows as the argument y.
The default model contains an intercept (i.e., it has a mean different from zero). The user can set argument type = "none", so the model has mean zero.
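The optional arguments above can be sketched on simulated data; the series names y1, y2 and x1 below are placeholders, not part of the example that follows:

```r
# Hedged sketch: two-equation TVVAR(2) without intercept (type = "none")
# and one exogenous regressor passed through exogen, which must have the
# same number of rows as y.
set.seed(42)
y <- matrix(rnorm(400), ncol = 2, dimnames = list(NULL, c("y1", "y2")))
x <- matrix(rnorm(200), ncol = 1, dimnames = list(NULL, "x1"))
TVVAR.sim <- tvVAR(y, p = 2, type = "none", exogen = x)
```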
The variance-covariance matrix from the residuals of a TVVAR(\(p\)) can be used to calculate the orthogonal TVIRF. The plot method for an object of class attribute "tvvar" displays as many plots as equations, each plot containing the fitted values and residuals, as shown in Figure 4, which is obtained with:
> plot(TVVAR.usmacro)
Figure 4 shows that the residuals of the inflation equation have a mean close to zero and that the fitted values track the observed values closely.
Function tvIRF estimates the TVIRF, with main argument x, an object of class attribute "tvvar" returned by the function tvVAR. The user can provide additional optional arguments to modify the default estimation, as explained below.
The user has the option to pick a subset of impulse variables and/or response variables using arguments impulse and response.
The horizon of the TVIRF coefficients can be chosen by the user with argument n.ahead; the default is 10.
The orthogonalised impulse response function is computed by default (ortho = TRUE). In the orthogonal case, the variance-covariance matrix of the errors is estimated as time-varying (ortho.cov = "tv") by default (see Section 6 for theoretical details). Note that the user can enter a value of the bandwidth for the variance-covariance matrix estimation in bw.cov. It is possible to use a constant variance-covariance matrix by setting ortho.cov = "const".
If the user desires the cumulative TVIRF values, then argument cumulative must be set to TRUE.
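Combining the arguments above, a hedged sketch of a cumulative TVIRF with a constant error variance-covariance matrix, reusing the TVVAR.usmacro object fitted earlier, would be:

```r
# Cumulative orthogonal TVIRF with a constant variance-covariance
# matrix of the errors (ortho.cov = "const"); horizon set to 20.
TVIRF.cum <- tvIRF(TVVAR.usmacro, impulse = "tbi", response = "inf",
                   n.ahead = 20, ortho.cov = "const", cumulative = TRUE)
```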
Following the previous example, the lines of code below estimate the IRF using the package vars, the TVP-IRF using the package bvarsv and the TVIRF using the package tvReg.
> IRF.usmacro <- vars::irf(VAR.usmacro, impulse = "tbi", response = "inf", n.ahead = 20)
> TVIRF.usmacro <- tvIRF(TVVAR.usmacro, impulse = "tbi", response = "inf", n.ahead = 20)
> TVPIRF.usmacro <- bvarsv::impulse.responses(TVPVAR.usmacro, impulse.variable = 3,
+ response.variable = 1, draw.plot = FALSE)
A comparison of impulse response functions from the three estimations is plotted in Figure 5, whose R code is shown below:
> irf1 <- IRF.usmacro$irf[["tbi"]]
> irf2 <- TVIRF.usmacro$irf[["tbi"]]
> irf3 <- TVPIRF.usmacro$irf
> ylim <- range(irf1, irf2[150,,], irf3[50,])
> plot(1:20, irf1[-1], ylim = ylim, main = "Impulse variable: tbi from 1990Q2",
+ xlab ="horizon", ylab ="inf", type ="l", lwd = 2)
> lines(1:20, irf2[150,,-1], lty = 2, lwd = 2)
> lines(1:20, irf3[50,], lty = 3, lwd = 2)
Figure 5 displays the IRF, the TVIRF and the TVP-IRF (the latter two at time 150 in our database, which corresponds to the second quarter of 1990) for horizons 1 to 20. The IRF and TVIRF follow a similar pattern: a positive shock of one unit in the short-term interest rates (tbi) during 1990Q2 results in an initial drop in inflation during the first three months, followed by an increase for two or three months and finally a steady decrease until it plateaus after one year. The left plot shows an increase in inflation during the first three months and a drop afterwards.