Abstract
Despite the increasing relevance of forecasting methods, causal implications of these algorithms remain largely unexplored. This is concerning considering that, even under simplifying assumptions such as causal sufficiency, the statistical risk of a model can differ significantly from its causal risk. Here, we study the problem of causal generalization—generalizing from the observational to interventional distributions—in forecasting. Our goal is to find answers to the question: How does the efficacy of a vector autoregressive (VAR) model in predicting statistical associations compare with its ability to predict under interventions? To this end, we introduce the framework of causal learning theory for forecasting. Using this framework, we obtain a characterization of the difference between statistical and causal risks, which helps identify sources of divergence between them. Under causal sufficiency, the problem of causal generalization amounts to learning under covariate shifts, albeit with additional structure (restriction to interventional distributions). This structure allows us to obtain uniform convergence bounds on causal generalizability for the class of VAR models. To the best of our knowledge, this is the first work that provides theoretical guarantees for causal generalization in the time series setting.
Causal Forecasting:
Generalization Bounds for Autoregressive Models

Leena C. Vankadara (University of Tübingen), Philipp M. Faller (Karlsruhe Institute of Technology), Lenon Minorics (Amazon Research), Debarghya Ghoshdastidar (Technical University of Munich), Dominik Janzing (Amazon Research)
1 Introduction
Forecasting algorithms have gained increasing relevance for a broad variety of applications like meteorology, climatology, economics, and business, just to name a few. While traditional economic modelling relies on relatively simple time series models \parencitebrockwell1991time (for example, autoregressive models, or methods like cointegration), modern business planning heavily uses neural network based forecasting \parenciteFaloutsos2018,Januschowski2020,Salinas2021. The advancement of forecast quality, however, sometimes blurs the fact that the causal implications of forecasting models remain unclear, although Granger's seminal work \parenciteGranger1969 already described a tight connection between causality and predictive relevance: subject to assumptions like causal sufficiency and the absence of contemporaneous influence, one time series causes another if the past of the former helps predict the latter from its own past (see \textcite[Chapter 10]peters2017elements for a justification of Granger causality from a modern perspective using graphical models). Under appropriate conditions, this justifies interpreting autoregressive models as causal, that is, as structural equations, instead of equations describing statistical associations only. Theoretical understanding of the causal implications of deep learning based forecasting is significantly more complex. After all, making forecasts 'explainable' in the sense of feature relevance \parenciteLundberg2017,Molnar2019,janzing2020feature,wang2020 seems the natural first step before asking causal questions (see, however, \textciteNauta2019 for a proposal for deep learning based Granger causality).
In this work, we are interested in the following question:
When can we causally interpret forecasting models?
Specifically, we discuss the relation between the predictive quality of a model and the validity of its causal implications for the simple class of vector autoregressive (VAR) models. These models are widely applied in domains ranging from econometrics \parencitelutkepohl2009econometric, grabowski2020tobit and finance \parencitezivot2006vector to neuroscience \parencitevaldes2005estimating. We argue here that even for very simple classes of models, and even under simplifying assumptions such as causal sufficiency, the causal interpretation of forecasting models is nontrivial.
To motivate the problem of causal generalization informally first, consider a simple setting with two strongly correlated observations $X$ and $Y$, where $X \approx Y$, and hence each variable predicts the other almost perfectly. These observations can be explained either by a causal model with a strong influence of $X$ on $Y$ or a causal model with a strong influence of $Y$ on $X$. The difference between the models becomes apparent when an intervention randomizes $X$ and $Y$ independently. Then, predictions become hard, particularly when $X$ and $Y$ are set to significantly different values. While both models are similar in their statistical predictions, they differ substantially in their causal predictions. This example already shows that, even in a simple setting, causal and statistical predictability can differ significantly.
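The divergence in this example can be reproduced numerically. The sketch below is purely illustrative (the variable names, coefficients, and noise levels are our own choices, not the paper's): it fits a linear predictor in the anti-causal direction, predicting the cause $X$ from its effect $Y$, which works well observationally but fails once an intervention randomizes $Y$ independently of $X$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Ground truth of the toy example: X causes Y with a strong influence,
# so the two observations are strongly correlated (Y ≈ X).
X = rng.normal(size=n)
Y = X + 0.1 * rng.normal(size=n)

# Predicting X from Y works very well on observational data (OLS slope) ...
beta = (Y @ X) / (Y @ Y)
statistical_mse = np.mean((X - beta * Y) ** 2)

# ... but under an intervention that randomizes Y independently of X,
# the same predictor fails badly: X is a cause, not an effect, of Y,
# so do(Y) leaves X untouched and the learned association is useless.
Y_int = 3.0 * rng.normal(size=n)   # do(Y): values far from observational ones
causal_mse = np.mean((X - beta * Y_int) ** 2)
```

On observational data the residual error is tiny, while under the intervention it is orders of magnitude larger, exactly the gap between statistical and causal predictability discussed above.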
Connection to Covariate Shift. The problem of causal generalization is closely related to the problem of covariate shift. To see this, we first ignore the time series setting and consider the scenario where a variable $Y$ should be predicted from a variable $X$, which is known not to be an effect of $Y$. If there is no common cause of $X$ and $Y$, that is, if we assume causal sufficiency \parenciteSpirtes1993, the statistical relation between $X$ and $Y$ is entirely due to the influence of $X$ on $Y$. Therefore, the observational and interventional conditionals coincide ($P(Y \mid X) = P(Y \mid do(X))$ in Pearl's language \parencitepearl2009causality) and the true parameters would be optimal both from a statistical and a causal perspective. However, due to estimation bias, a prediction model learned using finite samples may perform poorly when randomized interventions draw values of $X$ from a different distribution, which is the usual covariate shift scenario \parenciteMasashi2012. In our setting, $X$ and $Y$ are represented by the past and the present values of a (possibly multivariate) time series, respectively. Accordingly, we focus on interventional distributions that are natural for this setting: independent interventions at different time points and components of the multivariate process. Hence, we have additional structure in comparison with the standard covariate shift problem. We are not aware of any theoretical work on covariate shift in the time series setting. Nevertheless, we later describe the connections to learning theory in the standard covariate shift setting. To improve the readability of the paper, we defer this discussion and other related work to Section 6.
Our Contributions. Our central goal in this work is to develop a formal and thorough understanding of causal generalization for the class of VAR models. In particular, we take a theoretical approach to addressing the following fundamental question:
How does the efficacy of an autoregressive model in predicting statistical associations compare with its ability to predict under interventions?


To this end, we introduce a new framework of causal learning theory for forecasting to analyze when forecasting models can generalize from the observational to the interventional distributions (Section 3). This is closely related to the setting of learning under domain adaptation.

Using this framework, we provide a thorough characterization of the difference between the statistical risk and the causal risk (Section 4). Such a characterization allows us to identify the sources of divergence between the two quantities. Our results show that the strength of correlation of the underlying process plays a key role in determining causal generalizability. This result also highlights that, already for very simple models, causal and statistical errors can diverge.

Further, we provide finite-sample, uniform convergence bounds on causal generalization for the class of VAR models (Section 4). Our bounds suggest that standard regularization, typically used for better statistical generalizability, may also help causal generalization. To the best of our knowledge, this is the first work that provides theoretical guarantees for causal (or even out-of-distribution) generalization of any kind in the time series setting.

As a byproduct of our analysis, we provide an explicit characterization of the powers of a companion matrix (see Section 2) using symmetric Schur polynomials \parencitemacdonald1998symmetric of its eigenvalues (Lemma 2), which, to the best of our knowledge, has not been noted in the literature. This result could be of independent interest in theoretical endeavors that build upon companion matrices, which, for instance, are ubiquitous in stochastic processes and in Linear Time-Invariant dynamical systems \parencitedavison1976robust, melnyk2016estimating.
2 Preliminaries: Structural Causal Models and Autoregressive Models
In this section, we provide relevant background on structural causal models and vector autoregressive models. We begin with some relevant notation.
Notation. For any stochastic process $(X_t)_{t \in \mathbb{Z}}$, we write $\bar{X}_t$ for the set consisting of $X_t$ and the variables in its past. We distinguish this from the vector $(X_t, X_{t-1}, \ldots, X_{t-p+1})$ of the most recent $p$ observations. When it is clear from context, we drop indices to reduce cumbersome notation. For any random variable $X$, $\mathbb{E}[X]$ denotes its expectation. For any matrix $A$, we use $A_{i \cdot}$ and $A_{\cdot j}$ to denote the $i$th row and $j$th column of $A$, respectively, and $A_{ij}$ to denote the $(i,j)$th element of $A$. For any vector $X_t$ at time $t$, we use $X_t^{(i)}$ to denote its $i$th element. We use $\lambda_{\max}(A)$, $\lambda_{\min}(A)$, and $\kappa(A)$ to denote the maximum and minimum eigenvalues and the condition number of $A$, respectively, where $\kappa(A) := \lambda_{\max}(A)/\lambda_{\min}(A)$. $I_d$ denotes the identity matrix of size $d$, $\mathbb{N}$ and $\mathbb{Z}$ denote the sets of natural numbers and integers, respectively, and $[n]$ denotes the set $\{1, \ldots, n\}$.
Structural Causal Models and Interventions. Structural Causal Models (SCMs) \parencitepearl2009causality are helpful in formalizing causal relationships and hence provide a framework to formally express the effects of manipulations. For a set of variables $X_1, \ldots, X_d$, an SCM consists of a set of causal assignments of the form
$$X_j := f_j(\mathrm{PA}_j, N_j), \qquad j = 1, \ldots, d,$$
where $\mathrm{PA}_j$ denotes the parents of $X_j$ and the noise variables $N_j$ are mutually independent.
Interventions. For causal questions, it is of fundamental interest to investigate the behavior of a model under interventions. We consider interventions that are particularly natural in time series: atomic and relative interventions. By an atomic intervention on a variable $X$, we refer to setting the variable to some constant $x$. We use Pearl's do notation and denote this by $do(X := x)$. Interventions of this form detach a variable entirely from its parents, and sometimes it is instructive to consider interventions that still preserve dependencies on the parents. For instance, when politicians discuss the impact of changes to the tax system, they would not discuss a change that lets all people pay the same amount of tax regardless of their income. A simple, more feasible intervention in this setting would be to keep the dependence on the income but slightly change the function (for different types of 'soft' interventions, see \textciteMarkowetz2005,CIC2020). In particular, we consider 'shift interventions', that is, additive perturbations of strength $c$ on some random variable $X$, and denote this by $do(X := X + c)$.
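For concreteness, the two intervention types can be sketched on a scalar AR(1) process. The helper below is our own hypothetical illustration (the function name, coefficient, and intervention values are not from the paper): an atomic intervention overwrites a variable and detaches it from its parent, while a shift intervention adds a perturbation on top of the structural assignment.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1(a, T, do_at=None, do_value=None, shift_at=None, shift=0.0):
    """Simulate X_t = a * X_{t-1} + N_t with optional interventions.

    do_at / do_value: atomic intervention do(X_{do_at} := do_value), which
    detaches X_{do_at} entirely from its parent X_{do_at - 1}.
    shift_at / shift: shift intervention do(X_t := X_t + shift), which
    preserves the dependence on the parent.
    """
    X = np.zeros(T)
    for t in range(1, T):
        X[t] = a * X[t - 1] + rng.normal()
        if shift_at == t:
            X[t] += shift       # soft intervention: parent dependence kept
        if do_at == t:
            X[t] = do_value     # atomic intervention: parent detached
    return X

x_obs = simulate_ar1(0.9, 200)
x_do = simulate_ar1(0.9, 200, do_at=100, do_value=5.0)
x_shift = simulate_ar1(0.9, 200, shift_at=100, shift=5.0)
```

After the intervention point, both paths evolve again according to the structural equations; only the mechanism at the intervened time step differs.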
Vector Autoregressive Models, Stability, and Stationarity. Our focus is to understand causal generalization in vector autoregressive (VAR) models, which we introduce in Definition 2.1. For a thorough exposition, we refer the reader to \textcitelutkepohl2013vector.
Definition 2.1 (Vector Autoregressive Model).
A vector autoregressive model (VAR(p)) of dimension $d$ and order $p$ is defined as
(1)  $X_t = \sum_{i=1}^{p} A_i X_{t-i} + N_t,$
where $(X_t)_{t \in \mathbb{Z}}$ is a $d$-dimensional vector-valued time series, $A_i \in \mathbb{R}^{d \times d}$ for all $i \in [p]$ are the coefficients of the VAR model, and $N_t$ denotes the noise vector such that $\mathbb{E}[N_t] = 0$, $\mathbb{E}[N_t N_t^\top] = \Sigma_N$, and $\mathbb{E}[N_t N_s^\top] = 0$ for $s \neq t$. For some $\sigma > 0$, we simply set $\Sigma_N = \sigma^2 I_d$ for enhanced readability. Our results can be easily generalized to arbitrary covariance matrices by means of the spectral properties ($\lambda_{\min}(\Sigma_N), \lambda_{\max}(\Sigma_N)$) of $\Sigma_N$.
Definition 2.2 (Weak Stationarity).
A stochastic process $(X_t)_{t \in \mathbb{Z}}$ is weakly stationary if the mean and the covariance of the process do not change over time, that is, for all $t, \tau \in \mathbb{Z}$,
(2)  $\mathbb{E}[X_t] = \mu \quad \text{and} \quad \mathrm{Cov}(X_t, X_{t-\tau}) = \Gamma(\tau),$
where $\Gamma(\cdot)$ denotes the autocovariance function.
The autocovariance matrix of $(X_t)$ plays a central role in our results and analysis. For any $s \in \mathbb{N}$, we use $\Gamma_s$ to denote the autocovariance matrix of size $ds \times ds$ of the stacked vector of $s$ consecutive observations, defined as $\Gamma_s := \mathbb{E}\big[(X_t^\top, \ldots, X_{t-s+1}^\top)^\top (X_t^\top, \ldots, X_{t-s+1}^\top)\big]$.
It is often quite convenient to rewrite a VAR model of order $p$ in Equation (1) as a VAR(1) model, $\tilde{X}_t = \mathbf{A} \tilde{X}_{t-1} + \tilde{N}_t$, where $\tilde{X}_t, \tilde{N}_t$ are defined as $\tilde{X}_t := (X_t^\top, \ldots, X_{t-p+1}^\top)^\top$, $\tilde{N}_t := (N_t^\top, 0, \ldots, 0)^\top$, and $\mathbf{A}$ is a (multi-)companion matrix defined as:
(3)  $\mathbf{A} := \begin{pmatrix} A_1 & A_2 & \cdots & A_{p-1} & A_p \\ I_d & 0 & \cdots & 0 & 0 \\ 0 & I_d & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & I_d & 0 \end{pmatrix}.$
The eigenvalues of the multi-companion matrix $\mathbf{A}$ fully characterize the stability and stationarity of the VAR process. For a VAR(p) process to be weakly stationary, the eigenvalues $\lambda_1, \ldots, \lambda_{dp}$ of $\mathbf{A}$, which satisfy
(4)  $\det\big(\lambda^p I_d - \lambda^{p-1} A_1 - \cdots - \lambda A_{p-1} - A_p\big) = 0,$
are constrained to not lie on the unit circle. If the magnitudes of the eigenvalues satisfy $|\lambda_i| < 1$ for all $i \in [dp]$, then the underlying process is stable, that is, its values do not diverge \parencitelutkepohl2013vector.
3 Causal Learning Theory for Forecasting with VAR models
In this section, we introduce a framework to formalize the question we asked earlier: When can we causally interpret forecasting models? We refer to this framework as causal learning theory for forecasting. Consider the standard framework of statistical learning in time series prediction. For any stochastic process $(X_t)$ taking values in $\mathbb{R}^d$, given a loss function $\ell$, the goal of statistical learning is to learn a function $f$ from a function class $\mathcal{F}$ that achieves the smallest generalization risk $R(f)$:
(5)  $R(f) := \mathbb{E}\big[\ell\big(X_{t+\tau}, f(X_t, X_{t-1}, \ldots)\big)\big].$
Since the true process is unknown, the empirical average $\hat{R}(f)$ of the loss is used to estimate $R(f)$. Statistical generalization bounds of the form $\sup_{f \in \mathcal{F}} \lvert R(f) - \hat{R}(f) \rvert \le \varepsilon$ (with high probability) are then used to provide guarantees on the uniform deviation of empirical risk from expected risk, given sufficiently many samples and when the "complexity" of the function class $\mathcal{F}$ is small.
Analogously, the goal of causal learning is to find a function $f$ that achieves the smallest causal generalization risk $R_c(f)$, defined in a natural way as the maximal expected loss over all distributions induced by a class of admissible interventions $\mathcal{I}$, that is,
(6)  $R_c(f) := \max_{I \in \mathcal{I}} \mathbb{E}_{P^{do(I)}}\big[\ell\big(X_{t+\tau}, f(X_t, X_{t-1}, \ldots)\big)\big].$
In contrast to statistical learning, empirical averages of the causal error cannot be utilized to estimate $R_c(f)$, since we often do not have access to data from the interventional distributions. Instead, we are only provided with data from the observational/statistical distribution of the stochastic process, and the goal of causal learning theory is to understand to what extent it is possible to provide causal generalization guarantees of the form $\sup_{f \in \mathcal{F}} \lvert R_c(f) - \hat{R}(f) \rvert \le \varepsilon$.
To summarize, we ask: can the predictors in the function class generalize from the empirical observational distribution to the true interventional distribution, assuming that we control the complexity of the class and that we observe sufficiently many samples drawn from the observational distribution? One cannot address this question in a very general setting; model assumptions are needed to make any meaningful statements. To this end, we now formally introduce our problem setup.
Statistical and Causal Models. We assume that $(X_t)$ follows a weakly stationary VAR(p) model for some $p \in \mathbb{N}$, in the sense of Definitions 2.1 and 2.2. A causal interpretation of the structural equations of the VAR model in (1) naturally yields the corresponding causal model. Analyzing and quantifying the effect of interventions in time series models is already challenging due to the inherent observed confounding between the variables. The addition of a hidden common cause renders the analysis infeasible, and therefore we assume causal sufficiency of the model, that is, that there are no hidden common causes. We consider the family of all VAR models as our function class of statistical and causal estimators.
To evaluate the statistical and causal efficacy of an estimator, we introduce the notions of statistical and causal forecast risks (Definitions 3.1 and 3.2). To define the statistical forecast risk, we consider the setting of $\tau$-step forecasting, where the goal is to predict $X_{t+\tau}$ from past observations for some $\tau \geq 1$. To define the causal forecast risk, we consider interventions on single variables in the past of the target. (The results for simultaneous atomic interventions are qualitatively similar to those for atomic interventions on single variables, and for ease of exposition, we present our discussion in the latter case.)
Definition 3.1 (Statistical forecast error).
The statistical forecast error of an estimator $f$ in the prediction of a target variable $X_{t+\tau}$ from past observations drawn from the observational distribution can be defined as
(7)  $R_{st}(f) := \mathbb{E}\big[\big(X_{t+\tau} - f(X_t, X_{t-1}, \ldots)\big)^2\big].$
The empirical counterpart, which we denote by $\hat{R}_{st}(f)$, is defined naturally by replacing the expectation by the empirical mean.
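The empirical statistical forecast error is straightforward to compute from a sample path. Below is a minimal sketch for a scalar AR(1) model under square loss with $\tau = 1$; the process parameter, sample size, and estimators are illustrative choices of ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# A simulated AR(1) sample path (the coefficient is an illustrative choice).
a_true, T = 0.8, 2000
X = np.zeros(T)
for t in range(1, T):
    X[t] = a_true * X[t - 1] + rng.normal()

def empirical_forecast_error(a_hat, X):
    """Empirical one-step statistical forecast error under square loss:
    the sample analogue of E[(X_{t+1} - a_hat * X_t)^2]."""
    return np.mean((X[1:] - a_hat * X[:-1]) ** 2)

a_ols = (X[:-1] @ X[1:]) / (X[:-1] @ X[:-1])  # least-squares estimate
err_ols = empirical_forecast_error(a_ols, X)
err_bad = empirical_forecast_error(0.0, X)    # predictor ignoring the past
```

The least-squares fit attains an empirical error close to the innovation variance, while the trivial predictor pays the full marginal variance of the process.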
Definition 3.2 (Causal errors).
The interventional forecast error of $f$ in predicting the effect of an intervention $do(X_s := x)$ on the target variable $X_{t+\tau}$ is defined as
(8)  $R_{do(x)}(f) := \mathbb{E}_{P^{do(x)}}\big[\big(X_{t+\tau} - f(X_t, X_{t-1}, \ldots)\big)^2\big],$
where $do(x)$ is shorthand for $do(X_s := x)$ and $P^{do(x)}$ denotes the distribution induced by the intervention.
To isolate from the dependence on the specific values that the intervened variables are set to, we present our results via the notion of average causal error. It is defined as the expected interventional error for interventions drawn from the marginal distribution of the intervened variable:
(9)  $\bar{R}_c(f) := \mathbb{E}_{x \sim P(X_s)}\big[R_{do(x)}(f)\big].$
4 Causal Generalization for VAR
In this section, we present causal generalization bounds for the family of VAR models under atomic interventions. We defer our results under relative interventions to the Supplementary Material and instead focus on providing a clear interpretation of the results for atomic interventions. We first provide an overview of our results in the more general case of VAR(p) models and later provide a thorough interpretation of the results, often by deriving simplified versions for AR(p) models. We begin by providing an exact characterization of the difference between statistical and causal errors in terms of the true and estimated model parameters and the autocovariance matrix of the underlying process.
Lemma 1 (Difference in Causal and Statistical errors (VAR)).
Consider a vector-valued time series $(X_t)$, following a VAR(q) process parameterized by $A_1, \ldots, A_q$. Let $m := \max(p, q)$. For any VAR(p) model with parameters $\hat{A}_1, \ldots, \hat{A}_p$,
where $\Gamma_m$ denotes the autocovariance matrix of $(X_t)$ of size $dm \times dm$, $\mathbf{A}$ is a multi-companion matrix of the form described in (3) with the first $d$ rows populated by $\bar{A}_1, \ldots, \bar{A}_m$, with $\bar{A}_i$ defined as $A_i$ for all $i \le q$ and as $0$ for all $q < i \le m$; $\hat{\mathbf{A}}$ is analogously defined.
Building on Lemma 1, we establish that the stability of the underlying process controls causal generalizability from the observational to interventional distributions.
Proposition 1 (Stability Controls Causal Generalization (VAR)).
Let $(X_t)$ follow a VAR(q) process for some $q \in \mathbb{N}$. For any VAR(p) model,
(10) 
where $\kappa(\Gamma_m)$ denotes the condition number of the autocovariance matrix $\Gamma_m$ with $m := \max(p, q)$.
The result states that the difference between expected causal and statistical errors is controlled by the condition number of the autocovariance matrix $\Gamma_m$. The condition number of the autocovariance matrix can get arbitrarily large as the process gets closer to the boundary of the stability domain. This result therefore shows that, even for very simple classes of forecasting models, causal interpretations can be challenging. We later provide a detailed interpretation of this result and an explicit bound on $\kappa(\Gamma_m)$ in terms of the stability parameter for AR(p) models (Corollary 2).
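The blow-up of the condition number near the stability boundary is easy to see for an AR(1) process, whose autocovariance matrix has the closed form $\Gamma_p[i,j] = \sigma^2 a^{|i-j|}/(1 - a^2)$. The small sketch below is our own illustration (the coefficients and matrix size are arbitrary choices), not an experiment from the paper.

```python
import numpy as np

def ar1_autocov_matrix(a, p, sigma2=1.0):
    """Autocovariance matrix Gamma_p of a stationary AR(1) process with
    coefficient |a| < 1: Gamma_p[i, j] = sigma2 * a^|i-j| / (1 - a^2)."""
    idx = np.arange(p)
    return sigma2 * a ** np.abs(idx[:, None] - idx[None, :]) / (1 - a ** 2)

# The condition number grows without bound as the process approaches
# the boundary of the stability domain (|a| -> 1).
conds = {a: np.linalg.cond(ar1_autocov_matrix(a, p=5)) for a in (0.5, 0.9, 0.99)}
```

Running this shows the condition number increasing by orders of magnitude as $a$ moves from 0.5 towards 1, matching the interpretation of Proposition 1.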
Proposition 1 allows us to employ generalization bounds for time series \parenciteyu1994rates, meir2000nonparametric, mohriRademacher, mcdonald2017nonparametric to derive finite-sample causal generalization bounds for VAR models. In particular, we utilize Rademacher complexity bounds for generalization in time series under mixing conditions \parencitemohriRademacher (see the supplementary material for some background on Rademacher complexity) to derive Theorem 1.
Theorem 1 (Finite sample bounds for VAR(p) models).
Let $\mathcal{F}$ denote the family of all VAR models of dimension $d$ and order $p$. For any $\delta > 0$, let $\mu$ and $a$ be integers satisfying a blocking condition for a fixed constant determined by the underlying process. Let $(x_1, \ldots, x_n)$ be a finite sample drawn from a VAR(q) process. Then, simultaneously for every $f \in \mathcal{F}$, under the square loss truncated at $M$, with probability at least $1 - \delta$,
(11) 
where $\hat{\mathfrak{R}}(\mathcal{F})$ denotes the empirical Rademacher complexity of $\mathcal{F}$.
Our causal generalization bound in Theorem 1 suggests that, given sufficiently many samples, the true causal error can be guaranteed to be close to the empirical statistical error if our VAR models come from a class with small Rademacher complexity, particularly when the process has a small stability parameter. Our bounds indicate that standard regularization techniques that help improve statistical generalizability may also help causal generalizability.
Interpreting The Results. We now focus on providing a detailed interpretation of our results. First, we take a minor detour to present a technical result (Lemma 2) which is useful both in deriving some of our main results as well as in interpreting them.
Lemma 2 (Expressing powers of a companion matrix using symmetric polynomials).
For a companion matrix $\mathbf{A}$ with distinct eigenvalues $\lambda_1, \ldots, \lambda_p$, and for any $\tau \in \mathbb{N}$, the $(i, j)$th element of $\mathbf{A}^\tau$ can be expressed using Schur polynomials of the eigenvalues of $\mathbf{A}$, that is, as $s_{\mu}(\lambda_1, \ldots, \lambda_p)$, where $s_{\mu}$ refers to the Schur polynomial indexed by a partition $\mu$ that depends on $i$, $j$, and $\tau$.
Lemma 2 shows that the coefficients of the powers of a companion matrix can be fully characterized using symmetric Schur polynomials of its eigenvalues. A good overview of these polynomials can be found in \textcitechaugule2019schur. An advantage of expressing the coefficients using symmetric Schur polynomials is that these polynomials have been a subject of extensive research in combinatorics and an equivalence between several alternative definitions has been established. To name a few, Cauchy's bialternant expression \parencitecauchy1815memoire,jacobi1841functionibus, the combinatorial formula \parencitemacdonald1998symmetric, and the Jacobi–Trudi identity \parencitejacobi1841functionibus are all equivalent ways to define Schur polynomials. It is therefore possible, and often beneficial, to choose the definition that yields the most useful notion for the context. We utilize this connection to interpret our results. First, for easier interpretation, we simplify Lemma 1 to the following result for scalar AR models.
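The connection can be checked numerically in the simplest nontrivial case. For an AR(2) companion matrix with distinct eigenvalues, the $(1,1)$ entry of $\mathbf{A}^\tau$ equals the single-row Schur polynomial $s_{(\tau)}$ of the eigenvalues, computed below via the Cauchy bialternant in two variables. This verifies only one entry of the pattern with coefficients of our choosing, not the full lemma.

```python
import numpy as np

# Companion matrix of an AR(2) model X_t = a1 X_{t-1} + a2 X_{t-2} + N_t
# (coefficients are illustrative and chosen so the eigenvalues are distinct).
a1, a2 = 0.5, 0.3
C = np.array([[a1, a2],
              [1.0, 0.0]])
lam = np.linalg.eigvals(C)

def schur_row(lams, tau):
    """Schur polynomial s_(tau) in two variables via the Cauchy bialternant:
    (l1^(tau+1) - l2^(tau+1)) / (l1 - l2), which equals the complete
    homogeneous symmetric polynomial of degree tau in l1, l2."""
    l1, l2 = lams
    return (l1 ** (tau + 1) - l2 ** (tau + 1)) / (l1 - l2)

# The (1, 1) entry of C^tau matches the Schur polynomial of the eigenvalues.
for tau in range(1, 8):
    entry = np.linalg.matrix_power(C, tau)[0, 0]
    assert abs(entry - schur_row(lam, tau).real) < 1e-10
```

For instance, at $\tau = 2$ both sides equal $a_1^2 + a_2 = \lambda_1^2 + \lambda_1 \lambda_2 + \lambda_2^2$, the complete homogeneous symmetric polynomial of degree two.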
Corollary 1 (Difference in Causal and Statistical errors (AR)).
Let $(X_t)$ follow an AR(q) process. Then, for any AR(p) model with parameters $\hat{a}_1, \ldots, \hat{a}_p$,
(12) 
where, for any $s \in \mathbb{Z}$, $\gamma(s)$ denotes the autocovariance of $(X_t)$ with lag $s$, and $\mathbf{A}$ and $\hat{\mathbf{A}}$ are the corresponding companion matrices of the true and estimated parameters as defined in Lemma 1.
Lemma 1 identifies several factors that control causal generalizability. We now describe them.
Poor causal generalizability is a finite-sample effect. First, as we would expect, our result shows that differences between statistical and causal losses arise only as a finite-sample effect. If the true model parameters are known, as can be seen from Lemma 1, the causal error coincides with the statistical error. Although this does not hold in a general setting, recall that the assumption of causal sufficiency, along with the fact that we always intervene on causes of the target variable (due to time ordering), implies invariance of the conditional distribution of the target given its past, that is, $P(X_{t+\tau} \mid X_t, X_{t-1}, \ldots) = P(X_{t+\tau} \mid do(X_t, X_{t-1}, \ldots))$, and therefore the difference in losses arises purely due to shifts in the joint distribution of the covariates. However, as in the covariate shift setting, if the model parameters are fully specified, then both statistical and causal predictability are optimal (independent of shifts in the covariate distribution) and, as our result shows, the two losses coincide. Note that, in the presence of hidden confounding, the difference in losses would persist even in the population limit; that is, even if the model parameters are known, the presence of systematic bias induces differences between causal and statistical predictability.
Correlations control causal generalizability. Recall our motivating example of the two highly correlated time series where the causal and statistical errors diverge. Intuitively, one would therefore expect that large correlations among time series potentially induce large differences between observational and interventional distributions. The quantitative dependence of causal generalizability on the correlation structure of the process is, however, less obvious. Lemma 1 confirms this intuition and shows that correlations of the intervened time series across both components and time instances control generalizability from the observational to the interventional distributions.
High-dimensional and higher-order processes can hurt generalization. For high-dimensional processes, it is not unlikely to have strong correlations across components, which may obscure causal relations in the same way as strong correlations across time do for univariate processes. Lemma 1 also supports this intuition and shows that strong correlations across components as well as across time instances play a role. With increasing order or dimension of the process, higher orders of covariances across time and dimensions can entail poor causal generalizability.
Dependence on $\tau$. The dependence of the error on $\tau$ arises through the elements of the matrix power $\mathbf{A}^\tau$. A simple computation shows that, even for an AR(2) model, the dependence of these coefficients on the model parameters is asymmetric and highly intricate. However, using the Weyl formulation of Schur polynomials, for any AR(p) model the coefficients can be expressed in terms of elementary symmetric polynomials in the eigenvalues $\lambda_1, \ldots, \lambda_p$. While this is not the most interpretable definition per se, the dependence of the coefficients on $\tau$ is easily understood: if the underlying model as well as the estimated model are both stable ($|\lambda_i| < 1$ for all $i$), the coefficients, and hence the difference in errors, decay exponentially for interventions arbitrarily far in the past of the target variable; if either process is not stable, the difference can indeed diverge.
Proposition 1 allows us to obtain a high-level perspective on causal generalizability. It states that the condition number of the autocovariance matrix controls causal generalizability. Both the maximum and the minimum eigenvalues of the autocovariance matrix (and hence its condition number) can be used as measures of stability and hence of the strength of correlation of the underlying process \parencitebasu2015regularized, melnyk2016estimating. As the process gets closer to the boundary of the stability domain, the autocovariance matrix becomes singular and hence its condition number can get arbitrarily large. Proposition 1 can therefore be interpreted as follows: as the underlying process approaches the boundary of the stability domain, the causal and statistical errors can diverge. This result further highlights that even for simple classes of forecasting models, and under simplifying assumptions such as causal sufficiency, causal interpretation can be challenging. To show this formally, by means of Lemma 2, we derive an explicit upper bound on the condition number of the autocovariance matrix for AR(p) models and arrive at Corollary 2.
Corollary 2 (Stability Controls Causal Generalization (AR)).
Consider an AR(q) process such that the eigenvalues of its companion matrix satisfy $|\lambda_i| \le \eta < 1$ for all $i$. For any AR(q) model,
(13) 
where $C_q$ is some finite constant that depends on the order $q$ of the underlying process.
The bound in Corollary 2 is elegant due to its simplicity and generality. However, the cost of this generality, relying only on the stability parameter, is clearly that the bound cannot explain the variations in behavior exhibited by individual processes with the same stability parameter. For instance, consider an AR(2) model with parameters $a_1, a_2$ with $a_2 \approx 0$, so that it is essentially an AR(1) model; then it is easy to verify that one eigenvalue of its companion matrix is close to $0$. The combinatorial definition of the Schur polynomials \parencitemacdonald1998symmetric allows us to express the corresponding coefficients explicitly. Combining this with Corollary 1, it is easy to see that if the estimated model is also close to an AR(1) model, then the relevant coefficients, and hence the difference in statistical and causal errors, are close to those of the AR(1) case. The bound in (13), which relies only on the stability parameter, does not capture this. For tighter bounds that utilize additional information about the spectrum of the companion matrix, we can exploit the connection to Schur polynomials to arrive at the following bound.
where $C$ is a constant that depends on $p$ and $q$, $\eta$ and $\hat{\eta}$ are the stability parameters of the true and estimated processes, respectively, and $\Lambda$ and $\hat{\Lambda}$ denote the sets of eigenvalues of the true and estimated companion matrices, respectively.
5 Simulations
To verify the practical behavior of causal and statistical risks, we provide some simple simulations to study the errors of different estimators under AR processes. For each presented plot, we draw parameters for 10,000 stationary processes. We draw the coefficients of each process independently and uniformly and reject sets of parameters that yield a nonstationary process. For each process, we draw a training sample with 100 time steps and a test sample with 1000 time steps. We predict one time step ahead, that is, $\tau = 1$. We also investigate the case where the assumption of causal sufficiency is violated. To this end, we draw the parameters for a two-dimensional VAR process analogously to the scalar AR case. We then train and evaluate scalar AR models on one of the two dimensions. This is the setting for the leftmost plot in Figure 2. In the supplement, we provide additional plots with hidden confounders, as well as varying order, sample size, and $\tau$. To estimate the coefficients, we use Ordinary Least Squares, Ridge, Lasso, and ElasticNet regression; in Figures 1 and 2 we only show Ridge regression, and the other estimators can be found in the supplement. Ridge regression minimizes the empirical statistical error plus a penalty on the squared length of the estimated coefficient vector, that is, it minimizes $\sum_t \big(x_{t+1} - \hat{f}(x_t, \ldots, x_{t-p+1})\big)^2 + \lambda \lVert \hat{a} \rVert_2^2$, where $\hat{f}$ denotes the model prediction with estimated parameters $\hat{a}$ and $\lambda$ denotes the regularization strength. To find the optimal regularization strength with respect to the statistical error, we use standard grid search with 5-fold cross-validation. We further perform a second grid search with cross-validation where we train on the observational data (as in the normal grid search) but evaluate the models after intervening on the held-out fold; this, of course, would be hard to do in practice. We refer to the optimal strength from the latter grid search as the causally optimal regularization strength.
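The simulation pipeline described above can be sketched compactly. The code below is our own condensed approximation, not the paper's exact protocol: it rejection-samples stationary AR(2) coefficients, fits a ridge estimator, and computes a crude average causal error by randomizing the lag vector from an approximate marginal; the sample sizes, coefficient distribution, regularization strength, and intervention distribution are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def draw_stationary_ar(p):
    """Rejection sampling: draw coefficients uniformly and reject
    parameter sets whose companion matrix is not stable."""
    while True:
        a = rng.uniform(-1, 1, size=p)
        C = np.vstack([a, np.eye(p)[:-1]])
        if np.all(np.abs(np.linalg.eigvals(C)) < 1):
            return a

def simulate(a, T):
    """Sample path of X_t = a_1 X_{t-1} + ... + a_p X_{t-p} + N_t."""
    p = len(a)
    X = np.zeros(T + p)
    for t in range(p, T + p):
        X[t] = a @ X[t - p:t][::-1] + rng.normal()
    return X[p:]

def lag_matrix(X, p):
    # Rows: (X_{t-1}, ..., X_{t-p}) for each target X_t, t = p, ..., len(X)-1.
    return np.column_stack([X[p - 1 - k: len(X) - 1 - k] for k in range(p)])

def ridge_fit(X, p, lam):
    """Ridge estimate of AR(p) coefficients from a sample path."""
    Z, y = lag_matrix(X, p), X[p:]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

p = 2
a_true = draw_stationary_ar(p)
a_hat = ridge_fit(simulate(a_true, 100), p, lam=1.0)

# Statistical error: one-step prediction on a fresh observational sample.
X_test = simulate(a_true, 1000)
stat_err = np.mean((X_test[p:] - lag_matrix(X_test, p) @ a_hat) ** 2)

# Average causal error: set the lag vector by an intervention drawn from
# (an approximation of) its marginal, then predict the intervened outcome.
Z_int = rng.normal(0.0, X_test.std(), size=(1000, p))
y_int = Z_int @ a_true + rng.normal(size=1000)
causal_err = np.mean((y_int - Z_int @ a_hat) ** 2)
```

Repeating this over many drawn processes and regularization strengths yields scatter plots of statistical versus causal error of the kind shown in Figures 1 and 2.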
In line with our theoretical results, we find that even for simple scalar AR processes of small orders, the causal error of the estimators is often several times larger than the statistical error (see Figure 1). The fact that we can have such strong deviations even in such a simplistic setting further emphasizes the need for theoretical guarantees for causal generalization.
Our uniform convergence bounds in Theorem 1 suggest that standard regularization techniques which help improve statistical generalization can also help causal generalization. The comparison of the statistically and causally cross-validated regularization strengths in Figure 2 shows a high concentration along the diagonal. It suggests that choosing the regularization strength by cross-validating on observational data yields regularization strengths similar to the optimal regularization strength for causal purposes, even under mild violations of causal sufficiency. This might indicate that, in our restricted setting, performing hyperparameter optimization with respect to observational data could be a reasonable approach even for causal purposes. It should be noted here that a bias towards regularizing more strongly than considerations of statistical predictability alone would suggest seems advisable when hidden common causes are allowed, since \textcitejanzingRegularization shows (in the i.i.d. setting) that regularization helps mitigate the impact of confounding. In our preliminary experiments, we do not yet find sufficient evidence to support this. However, we do not recommend interpreting this result beyond our model assumptions, since a thorough investigation is necessary to make any further claims.
6 Discussion and Related Work
Our work highlights that even for very simple models, and even under simplifying assumptions such as causal sufficiency, causal and statistical errors can diverge. This challenges the causal interpretation of forecasting models used in practice, which are typically far more complex. It also emphasizes the need for providing guarantees for causal generalization in a similar vein as providing guarantees for statistical learning. To this end, we initiate a first analysis in this direction by introducing a framework for causal learning theory for forecasting and providing conditions under which one can guarantee generalization in the causal sense for the class of VAR models. We hope that this work inspires more theoretical work that allows certifying the validity of causally interpreting forecasting models.
From a theoretical point of view, relaxing the assumption of causal sufficiency is considerably more intricate in the time series setting and deserves a separate treatment. For instance, even computing conditional (and interventional) distributions can be highly involved without additional assumptions such as Gaussianity of the innovations. Even under this assumption, the dependency of the statistical and causal risks on the underlying process becomes very technical, and deriving meaningful bounds on causal generalization poses a challenge. We leave this analysis for future work.
Relation to Theory of Domain Adaptation. The literature that is perhaps most relevant to our context is that of learning theory for domain adaptation, in particular, for covariate shift. Theoretical analysis of domain adaptation when labelled samples from the source distribution and unlabelled samples from the target distribution are generated i.i.d. was initiated by \textciteben2007analysis, who provided VC bounds for binary classification under covariate shifts based on a discrepancy measure between source and target distributions that depends on the hypothesis class and is estimable from finite samples. \textcitemansour2009domain extended this work to regression in the i.i.d. setting by adapting the discrepancy measure to more general loss functions and by providing tighter, data-dependent Rademacher bounds. Despite the i.i.d. assumption that is necessary to derive their finite-sample bounds, the results in \textcitemansour2009domain are perhaps the most relevant to our setting.
Comparison with existing bounds. We can utilize one of the main results of \textcite[Theorem 8]mansour2009domain, which does not rely on the i.i.d. assumption, to arrive at a population-level bound for our setting in terms of the discrepancy between the observational and interventional distributions and the true VAR(q) model. This upper bound can be quite large, since it measures the worst-case error over all possible pairs of true and estimated predictors in the family of stationary VAR models. Such bounds are naturally pessimistic, since they do not incorporate structural knowledge of the class of interventional distributions. For instance, as we would expect, our bounds in Lemma 1 show that the difference between the causal and statistical errors vanishes asymptotically, while the bound from \textcitemansour2009domain does not reflect this.
Relation to Estimation of Treatment Effects. Another related problem is that of estimating treatment effects in the potential outcomes framework \parencitehill2006interval, shi2019adapting, where the goal is to estimate the effects of binary-valued treatments from observational data under a multivariate confounding model. Our setting is more general in that variables in the multivariate process can take a continuum of interventions and play a multiplicity of roles: each variable plays the role of treatment, confounder, and the target variable. Of particular relevance is the work of \textciteshalit2017estimating, johansson2020generalization, who prove generalization error bounds on estimating individual-level treatment effects in terms of the standard generalization error and a distance measure between the treated and control distributions. This result is similar to the domain adaptation bounds in \parenciteben2007analysis, mansour2009domain and may also be interpreted as causal learning theory in the sense of our paper.
Supplementary to Causal Forecasting
Appendix A Background
Notation. We recall the notation and some key definitions here for the reader's convenience. For any stochastic process $(X_t)_{t \in \mathbb{Z}}$, we use $\overline{X}_t$ to denote the set consisting of $X_t$ and the variables in the past of $X_t$. We distinguish this from $\mathbf{X}_t$, which denotes the vector $(X_t, X_{t-1}, \ldots)$. When it is clear from context, to reduce cumbersome notation, we simply use $X$. For any random variable $X$, $\mathbb{E}[X]$ denotes its expectation. For any matrix $A$, we use $A_{i \cdot}$ and $A_{\cdot j}$ to denote the $i$-th row and the $j$-th column of $A$, respectively, and $A_{ij}$ to denote the $(i,j)$-th element of $A$. For any vector $X_t$ at time $t$, we use $X_t^{(i)}$ to denote its $i$-th element. We use $\lambda_{\max}(M)$, $\lambda_{\min}(M)$, and $\kappa(M)$ to denote the maximum eigenvalue, the minimum eigenvalue, and the condition number of $M$, respectively, where $\kappa(M) = \lambda_{\max}(M) / \lambda_{\min}(M)$. $I_n$ denotes the identity matrix of size $n$, $\mathbb{N}$ and $\mathbb{Z}$ denote the sets of natural numbers and integers, respectively, and $[n]$ denotes the set $\{1, \ldots, n\}$.
Definition A.1 (Vector Autoregressive Model).
A vector autoregressive model (VAR(p)) of dimension $d$ and order $p$ is defined as
(14) $X_t = \sum_{i=1}^{p} A_i X_{t-i} + \varepsilon_t,$
where $(X_t)_{t \in \mathbb{Z}}$ is a $d$-dimensional vector-valued time series, $A_i \in \mathbb{R}^{d \times d}$ for all $i \in [p]$ are the coefficients of the VAR model, and $\varepsilon_t$ denotes the noise vector such that $\mathbb{E}[\varepsilon_t] = 0$, $\mathbb{E}[\varepsilon_t \varepsilon_t^\top] = \Sigma_\varepsilon$, and $\mathbb{E}[\varepsilon_t \varepsilon_s^\top] = 0$ for all $t \neq s$. For some $\sigma > 0$, we simply set $\Sigma_\varepsilon = \sigma^2 I_d$ for enhanced readability. Our results can be easily generalized to arbitrary covariance matrices by means of the spectral properties ($\lambda_{\min}, \lambda_{\max}$) of $\Sigma_\varepsilon$.
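As an illustrative sketch (in Python; not part of the paper), the definition above can be simulated directly, using the simplifying isotropic Gaussian innovations with covariance $\sigma^2 I$:

```python
import numpy as np

def simulate_var(coeffs, T, sigma=1.0, burn_in=200, seed=0):
    """Draw T samples from a VAR(p) process X_t = sum_i A_i X_{t-i} + eps_t
    with isotropic Gaussian innovations of covariance sigma^2 * I.
    `coeffs` is the list [A_1, ..., A_p] of d x d coefficient matrices."""
    rng = np.random.default_rng(seed)
    p, d = len(coeffs), coeffs[0].shape[0]
    X = np.zeros((p + burn_in + T, d))
    for t in range(p, len(X)):
        X[t] = sum(A @ X[t - i - 1] for i, A in enumerate(coeffs))
        X[t] += sigma * rng.standard_normal(d)
    return X[p + burn_in:]  # discard burn-in so the sample is close to stationary

# A stable bivariate VAR(2): all companion-matrix eigenvalues lie inside the unit circle.
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.0, 0.1]])
X = simulate_var([A1, A2], T=1000)
```

The burn-in period is a pragmatic choice so that the returned sample is approximately drawn from the stationary distribution rather than from the arbitrary all-zero initialization.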
Definition A.2 (Weak Stationarity).
A stochastic process $(X_t)_{t \in \mathbb{Z}}$ is weakly stationary if the mean and the covariance of the process do not change over time, that is, for all $t, h \in \mathbb{Z}$,
(15) $\mathbb{E}[X_t] = \mu \quad \text{and} \quad \mathrm{Cov}(X_t, X_{t+h}) = \Gamma(h),$
where $\Gamma(\cdot)$ denotes the autocovariance function.
The autocovariance matrix of $(X_t)_{t \in \mathbb{Z}}$ plays a central role in our results and analysis. For any $m \in \mathbb{N}$, we use $\Gamma_m$ to denote the autocovariance matrix of size $md \times md$ defined blockwise as $(\Gamma_m)_{ij} = \Gamma(i - j)$ for $i, j \in [m]$.
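These quantities are straightforward to estimate from a finite sample; the following sketch (Python; estimator names are ours) computes the empirical lag-$h$ autocovariance and assembles the block autocovariance matrix from it, using that $\Gamma(-h) = \Gamma(h)^\top$ for a weakly stationary process:

```python
import numpy as np

def autocov(X, h):
    """Empirical Gamma(h) = Cov(X_t, X_{t+h}) from a sample X of shape (T, d)."""
    T = X.shape[0]
    Xc = X - X.mean(axis=0)
    return Xc[: T - h].T @ Xc[h:] / (T - h)

def autocov_matrix(X, m):
    """Block matrix whose (i, j) block is Gamma(i - j). Since
    Gamma(-h) = Gamma(h)^T under weak stationarity, the result is symmetric."""
    d = X.shape[1]
    G = np.zeros((m * d, m * d))
    for i in range(m):
        for j in range(m):
            B = autocov(X, abs(i - j))
            G[i * d:(i + 1) * d, j * d:(j + 1) * d] = B if i >= j else B.T
    return G

# Sanity check on white noise: the block matrix is close to the identity.
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 2))
G = autocov_matrix(X, 3)
```

For i.i.d. standard normal noise, $\Gamma(0) = I$ and $\Gamma(h) = 0$ for $h \neq 0$, so the estimate should be close to a $6 \times 6$ identity in this example.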
It is often quite convenient to rewrite a VAR model of order $p$ in Equation (14) as a VAR(1) model, $\tilde{X}_t = \mathbf{A} \tilde{X}_{t-1} + \tilde{\varepsilon}_t$, where $\tilde{X}_t, \tilde{\varepsilon}_t \in \mathbb{R}^{pd}$ are defined as $\tilde{X}_t = (X_t^\top, X_{t-1}^\top, \ldots, X_{t-p+1}^\top)^\top$, $\tilde{\varepsilon}_t = (\varepsilon_t^\top, 0, \ldots, 0)^\top$, and $\mathbf{A}$ is a (multi-)companion matrix defined as:
(16) $\mathbf{A} = \begin{pmatrix} A_1 & A_2 & \cdots & A_{p-1} & A_p \\ I_d & 0 & \cdots & 0 & 0 \\ 0 & I_d & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & I_d & 0 \end{pmatrix}.$
The eigenvalues of the multi-companion matrix $\mathbf{A}$ fully characterize the stability and stationarity of the VAR process. For a VAR(p) process to be weakly stationary, the eigenvalues $\lambda$ of $\mathbf{A}$, which satisfy
(17) $\det\left(\lambda^p I_d - \lambda^{p-1} A_1 - \cdots - \lambda A_{p-1} - A_p\right) = 0,$
are constrained to not lie on the unit circle. If the magnitudes of the eigenvalues satisfy $|\lambda_i| < 1$ for all $i \in [pd]$, then the underlying process is stable, that is, its values do not diverge \parencitelutkepohl2013vector.