PIE: The Fundamental Problem of Causal Inference

We evaluate policies for a multitude of reasons. On the one hand, we wish to increase our knowledge and learn about a programme’s underlying functioning in order to improve programme design and effectiveness. On the other hand, economic, social, and political considerations motivate evaluation. These include allocation decisions via cost-benefit analysis (economic), transparency and accountability (social), public sector reform and innovation (social), credibility (political), and overcoming ideological (fact-free) bickering (political).

Impact Evaluation can offer some answers. In particular,

  • the effect of a programme can be measured,
  • the effectiveness of the programme can be measured (i.e. how much better off beneficiaries are),
  • how much outcomes change under alternative designs can be assessed,
  • how differently different people are impacted can be examined, and
  • whether the programme is cost-effective can be determined.

A causal framework is required to obtain these answers. However, there are risks inherent to evaluation: evaluations are neither free nor unproblematic. The main issues are

  • the cost in time and resources,
  • distorted incentives from equating the measurable with the valuable, whereby intrinsic motivation is crowded out,
  • Goodhart’s law, i.e. that a measure that becomes a target ceases to be a good measure (because people optimise towards the measure rather than the underlying objective that was evaluated via the measure), and
  • a tendency towards encrustation and self-fulfilling prophecy.

Not every programme needs or should be evaluated: the potential benefits should outweigh the costs.

Causal Inference

Hume identified three basic criteria for causation: spatial and temporal contiguity, temporal succession, and constant conjunction (Hume, 1740, 1748). Mill exposed the shortcomings of this account, noting that observation without experimentation (supposing no aid from deduction) can ascertain sequences and co-existences, but cannot prove causation (Mill, 1843). Lastly, Lewis refined the notion: “a cause is something that makes a difference, and the difference it makes must be a difference from what would have happened without it” (Lewis, 1973).

Causal Claim

A causal claim is a statement about what did not happen. The statement “X caused Y” means that Y is present, but Y would not have been present if X had not been present. This is the counterfactual approach to causality. In this approach, the claim that X caused Y does not imply that X is the main or the only reason why Y happened, or even that X is “responsible” for Y.

This leads to a fundamental distinction between attribution and contribution. Attribution would claim that X is the one and only cause of Y, whereas contribution merely states that X contributed towards the outcome Y. The counterfactual approach cannot identify all the causes of Y, only whether some X contributed to bringing Y about. The reason is that there is never a single cause of Y, and there is no reason that the effects of different causes should add up to 100%; causes are not rival. The question should therefore always be “how much does X contribute to Y”, not “does X cause Y”.

Causality and Causal Pathways

Causal mechanisms or causal chains are often used to illustrate causality. This can be misleading, as Holland points out: if A is planning action Y and B tries to prevent it, but C intervenes to stop B, then both A and C contribute to Y, yet Hume’s criteria are not fulfilled for the contribution of C to Y (Holland, 1986).

Necessary and Sufficient Conditions

A necessary condition demands that for Y to occur, X needs to have happened: [latex]Y \implies X[/latex] (equivalently, [latex]\neg X \implies \neg Y[/latex]). A sufficient condition demands that if X occurs, then Y occurs: [latex]X \implies Y[/latex] (equivalently, [latex]\neg Y \implies \neg X[/latex]).

In causal frameworks, these conditions need to be relaxed to allow for probabilistic causation (the probability of Y is higher if X is present) and contingencies (X causes Y if Z is present, but not otherwise).
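Both relaxations can be written compactly in conditional-probability notation (a minimal sketch; the notation anticipates the potential outcome section below):

  • probabilistic: [latex]Pr\[Y=1|X=1\] > Pr\[Y=1|X=0\][/latex], i.e. X raises the probability of Y without guaranteeing it, and
  • contingent: [latex]Pr\[Y=1|X=1,Z=1\] > Pr\[Y=1|X=0,Z=1\][/latex] while [latex]Pr\[Y=1|X=1,Z=0\] = Pr\[Y=1|X=0,Z=0\][/latex], i.e. X makes a difference only where Z is present.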

No Causation Without Manipulation

The counterfactual approach requires one to be able to think through how things might look in different conditions. Causal claims should be restricted to conditions that can conceivably (not necessarily practically) be manipulated (Holland, 1986).

Rubin’s Potential Outcome Framework

The framework considers a dichotomous treatment variable [latex]X[/latex] with [latex]x \in \{0,1\}[/latex], where [latex]x=1[/latex] means treated, and a dichotomous outcome variable [latex]Y[/latex] with [latex]y \in \{0,1\}[/latex]. Furthermore, we define [latex]Y^{x=1}[/latex] as the potential outcome under treatment and [latex]Y^{x=0}[/latex] as the counterfactual outcome under no treatment.

The quantity of interest would be the individual causal/treatment effect (ITE): [latex]X[/latex] has a causal effect on unit [latex]i[/latex]’s outcome [latex]Y[/latex] if and only if [latex]Y_i^{x=1}\neq Y_i^{x=0}[/latex]. However, only one of the two potential outcomes is ever observable (the factual one), so ITEs cannot be computed. This is referred to as the fundamental problem of causal inference (Holland, 1986).
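A small simulation makes the missing-data character of the problem concrete. The sketch below (all numbers and variable names are illustrative assumptions, not part of the framework itself) draws both potential outcomes for every unit and then shows that the realised assignment reveals only one of them:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10  # illustrative number of units

    # Both potential outcomes exist for every unit...
    y1 = rng.binomial(1, 0.7, size=n)  # Y^{x=1}: outcome under treatment
    y0 = rng.binomial(1, 0.4, size=n)  # Y^{x=0}: outcome under no treatment
    ite = y1 - y0                      # individual treatment effects

    # ...but the realised assignment reveals only one of them per unit.
    x = rng.binomial(1, 0.5, size=n)
    y_observed = np.where(x == 1, y1, y0)

    for i in range(n):
        hidden = "Y^{x=0}" if x[i] == 1 else "Y^{x=1}"
        print(f"unit {i}: x={x[i]}, observed y={y_observed[i]}, missing: {hidden}")

The ite array exists here only because the simulation plays omniscient observer; with real data it could never be constructed.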

We need an alternative measure of the causal effect. It is still possible to determine whether [latex]X[/latex] causes [latex]Y[/latex] on average if [latex]Pr\[Y=1|X=1\]-Pr\[Y=1|X=0\]=Pr\[Y^{x=1}=1\]-Pr\[Y^{x=0}=1\][/latex], i.e. treatment and control units are exchangeable (statistically speaking, [latex]Y^x[/latex] is stochastically independent of [latex]X[/latex]). Then, and only then, is the average treatment effect (ATE) of [latex]X[/latex] on [latex]Y[/latex] for a finite population of size [latex]N[/latex] given by [latex]Pr\[Y^{x=1}=1\]-Pr\[Y^{x=0}=1\]=E\[Y^{x=1}-Y^{x=0}\]=\frac{1}{N}\sum_{i=1}^N (Y_i^{x=1}-Y_i^{x=0})[/latex].
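Under random assignment, exchangeability holds by construction, and the observed difference in means recovers the ATE. A minimal sketch in the same spirit as above (population size and outcome probabilities are made-up assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000  # large N keeps sampling noise small

    # Potential outcomes with a true ATE of roughly 0.3.
    y1 = rng.binomial(1, 0.7, size=n)
    y0 = rng.binomial(1, 0.4, size=n)
    true_ate = (y1 - y0).mean()  # (1/N) sum_i (Y_i^{x=1} - Y_i^{x=0})

    # Randomisation makes Y^x stochastically independent of X.
    x = rng.binomial(1, 0.5, size=n)
    y = np.where(x == 1, y1, y0)

    # Associational contrast: Pr[Y=1|X=1] - Pr[Y=1|X=0].
    estimate = y[x == 1].mean() - y[x == 0].mean()

    print(f"true ATE: {true_ate:.3f}, difference in means: {estimate:.3f}")

If assignment instead depended on the potential outcomes (say, units with [latex]Y^{x=1}=1[/latex] were more likely to be treated), the two numbers would diverge; that divergence is precisely the failure of exchangeability.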

Stochastic independence can be achieved in one of two ways. The scientific approach relies on a homogeneity assumption, which is impossible with heterogeneous units. The statistical approach relies on large numbers and can only achieve exchangeability on average. Furthermore, for the ATE we also need the Stable Unit Treatment Value Assumption (SUTVA) to hold, i.e. no variation in treatment across units and non-interference between the units of observation (the treatment of one unit does not influence others). A sketch of how interference breaks the usual comparison follows below.
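Interference can be illustrated by letting each unit’s outcome depend on the share of all units treated (the outcome model below is a made-up assumption). The observed difference in means then captures only the direct effect and no longer answers the question of what would happen if the programme did not exist at all:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    def outcomes(x):
        # SUTVA violation: outcome depends on own treatment AND on the
        # overall share treated (non-interference fails).
        return 0.4 + 0.3 * x + 0.2 * x.mean() + rng.normal(0, 0.05, size=x.size)

    x = rng.binomial(1, 0.5, size=n)
    y = outcomes(x)

    diff_in_means = y[x == 1].mean() - y[x == 0].mean()  # ~0.3: direct effect only
    all_vs_none = outcomes(np.ones(n)).mean() - outcomes(np.zeros(n)).mean()  # ~0.5

    print(f"difference in means:        {diff_in_means:.2f}")
    print(f"everyone vs no one treated: {all_vs_none:.2f}")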

References

Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945–960.
Hume, D. (1740). An Abstract of a Book Lately Published; Entituled, A Treatise of Human Nature, &c.: Wherein the Chief Argument of that Book is Farther Illustrated and Explained. London: C. Borbet [i.e. Corbet].
Hume, D. (1748). Philosophical Essays Concerning Human Understanding: By the Author of the Essays Moral and Political. London: A. Millar.
Lewis, D. (1973). Causation. The Journal of Philosophy, 70(17), 556–567.
Mill, J. S. (1843). A System of Logic. London: John W. Parker.