ETH, STP

QPAM: Uncertainty

A first form of uncertainty is randomness: stochastic behaviour that can be dealt with using sensitivity analysis, estimates from experience (actuarial methods), or hedging.

A more complicated form of uncertainty is indeterminacy. It describes situations that are qualitatively known, but cannot be reliably quantified. It is often addressed by attempting to quantify it anyway, using heuristics or stylised facts.

Another form of uncertainty is based on reductionism. Reductionism arises when a complex system is not completely understood and proxy relationships are established. It is a form of epistemological uncertainty and is often addressed with lay knowledge (bringing in lay people) and mixed methods (quantitative and qualitative).

Yet another form of uncertainty is paradigmatic. Expert knowledge can narrow perspectives and neglect the unseen. The resulting paradigmatic blind spots can only be dealt with by interdisciplinary co-production of knowledge and by staying curious.

The last form of uncertainty is based on unknown relations. It arises when something has not happened before (e.g. cybercrime was unimaginable 30 years ago). It can be summed up as ontological uncertainty and can only be addressed with humility and the ability to adapt.

Type III errors

Uncertainty may also arise from committing errors. The better-known types are:

  • Type I: false positive, rejecting the null hypothesis when it is true
  • Type II: false negative, failing to reject the null hypothesis when it is false

An additional, third type of error can be summed up as the correct answer to the wrong question. Such errors usually arise from using the wrong method (i.e. model design) or from applying a discipline-specific approach (i.e. context) to a field where it does not apply. A classic example: rats that could choose between heroin-laced and normal water kept drinking the heroin-laced water until they died, and it was concluded that addiction was so strong it would make them kill themselves. Follow-up studies showed that rats with other rats and entertainment around do not kill themselves on heroin. The original research therefore actually answered the question whether rats would commit suicide when alone and without entertainment.

In the worst case, a Type III error is committed intentionally to distract from the real problem, a form of mental bait-and-switch.

Framing

Yet another source of uncertainty is the frame in which a discussion takes place. Describing a problem often circumscribes the solution: it determines which methods and options are open for debate, and it recasts a subjective reality as "objective". This is an unusual situation for engineers and natural scientists, who assume an objective reality (e.g. physics). Any issue that comes up for policy analysis has most likely been framed before it is handed to analysts and scientists to process. For instance, economic growth is a usual assumption that cannot be challenged by any solution proposed.

Value conflict resolution

Another source of uncertainty is that value conflicts need to be resolved. Previously mentioned was the problem space: any solution is essentially political and will always be a negotiation of social forces. This is not typically an academic arena. It often involves red lines (deeply vs. weakly held values), a shift from why to how (a proposal solves the problem), procedural vs. substantive fairness, and obfuscated players (grassroots vs. astroturfing), and it is often a space for unrelated issues to be attached. Academics are usually hidden players who get called in after the fact to compare minor differences.


CGSS: Introduction to Game Theory

A fundamental problem is over-usage. Usually, nobody wants over-usage to occur. On an individual level, however, companies want to maximise their profit while neglecting the social cost. The problem is also known as the Tragedy of the Commons, which is based on the free-rider dilemma.

A game is defined by three components: players, actions, and payoffs. Solution concepts are usually in the form of a Nash equilibrium.

“Unless the number of individuals (micro) in a group is quite small, or unless there is coercion or some other special device to make individuals act in their common interest, rational, self-interested individuals will not act to achieve their common or group interest.” – Mancur Olson (1965)

Players N = \{1,2,...,n\} are a discrete set or a continuum population of individuals.

Actions S = S_1 \times ... \times S_n \subset \mathbb{R}^+ form a compact and bounded set, to fulfil the assumptions of Nash’s proof; some variants still yield a fixed-point result.

Payoffs \Pi_i : S \mapsto \mathbb{R} ~ \forall i \in N map strategy profiles into the real numbers.

Let \Gamma = \{ N, S, \Pi \} be a game with N = \{1,2,...,n\} players. Each i \in N has a strategy set S_i where S = S_1 \times ... \times S_n is the set of all possible strategy profiles.

Let x_i \in S_i be a strategy action for i and x_{-i} \in S_{-i} be a strategy profile for all players except i. When each i \in N chooses strategy x_i resulting in x = (x_1, ..., x_n) then i obtains profit \Pi_i(x).

A strategy profile x^* \in S is a Nash Equilibrium (NE) if no unilateral deviation in strategy by any single player is profitable for that player, that is, \Pi_i(x_i^*, x_{-i}^*) \geq \Pi_i(x_i,x_{-i}^*) ~ \forall x_i \in S_i and \forall i \in N.

Prisoner’s Dilemma

In the prisoner’s dilemma each of the two players has two actions: to cooperate (C) or to defect (D). The payoffs are defined as CC=(3,3), CD=(1,4), DC=(4,1), and DD=(2,2). If the players start off with CC, either player can gain an advantage by defecting. Once one player defects, the other has no choice but to defect as well to optimise his payoff. The Nash Equilibrium DD is reached, where neither player can improve his payoff by switching strategy on his own. However, if both were to switch at once, they could get a better result. Since no trust exists between the players, this outcome is not possible without breaking the rationality assumption.
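The equilibrium argument above can be verified by brute force over all strategy profiles. A minimal sketch (payoff numbers as given above; function and variable names are my own):

```python
from itertools import product

# Payoffs from the text: (row player, column player)
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (1, 4),
    ("D", "C"): (4, 1),
    ("D", "D"): (2, 2),
}

def is_nash(profile):
    """A profile is a NE if no player gains by unilaterally deviating."""
    for player in (0, 1):
        for alt in ("C", "D"):
            deviation = list(profile)
            deviation[player] = alt
            if PAYOFFS[tuple(deviation)][player] > PAYOFFS[profile][player]:
                return False
    return True

equilibria = [p for p in product("CD", repeat=2) if is_nash(p)]
print(equilibria)  # [('D', 'D')]
```

Enumerating all four profiles confirms that DD is the only equilibrium, even though CC would be better for both.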

Public Goods Game

The Public Goods Game is the N-person generalisation of the Prisoner’s Dilemma. The players are a finite population of N individuals. Each i is endowed with a budget B, common to all players. The action consists of each player i choosing some amount a_i \in \mathbb{R}^+_0 to invest in a shared investment account. The collected investments are a = (a_i)_{i\in N}. The payoff is \phi_i(a_i,a_{-i}) = B - a_i + r \cdot \sum_{j\in N} a_j. For r < 1 the optimal decision (i.e. the Nash Equilibrium) is not to invest, a_i = 0 ~ \forall i \in N, which leaves each player with just the endowment B. Even though there would be a positive outcome for everybody if all invested (for r > 1/N), free-riding allows individuals to make even larger gains. A rational individual would therefore not invest.
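The dominance argument can be illustrated numerically. A sketch assuming N = 4, B = 10 and r = 0.4 (so that 1/N < r < 1); all numbers are illustrative:

```python
def payoff(i, a, B=10.0, r=0.4):
    """phi_i = B - a_i + r * sum_j a_j (the payoff formula from the text)."""
    return B - a[i] + r * sum(a)

n = 4
nobody = [0.0] * n
everybody = [10.0] * n
freeride = [0.0] + [10.0] * (n - 1)  # player 0 keeps B, others invest

print(payoff(0, nobody))     # 10.0 -> each just keeps the endowment B
print(payoff(0, everybody))  # 16.0 -> full cooperation beats no investment
print(payoff(0, freeride))   # 22.0 -> but unilateral free-riding pays most
```

Not investing strictly dominates whatever the others do (the marginal return on one's own franc is r - 1 < 0), which is exactly why the NE is universal non-investment.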

Side note:

The lecture on game theory in QPAM covers the same topic and is therefore not covered in detail here.


QPAM: Investment Appraisal

To perform an investment appraisal we need to analyse costs and revenues. First we need to find profitability indicators, then assess the life cycle cost. Then we can perform a cost effectiveness analysis, and finally we need to consider dynamics and sensitivities.

Profitability Indicators

On the cost side we have investment costs, O&M costs, taxes and running costs. On the revenue side we have the quantity of the output and the price of the output. The cash flow is the sum of expenses and revenues over a period of time. It is, however, not representative of the investment as it does not include discounting (not to be confused with the social discount rate discussed later). The payback period is the time needed to recover the investment costs based on the cash flow.

Based on discounting, real\ cashflow_t = \frac{nominal\ cashflow_t}{(1+discount\ rate)^t}, the real value of nominal cash decreases over time. The discount rate therefore represents the opportunity cost of capital. This raises the question whether a “similar” investment could bring in more or less return.
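The discounting formula can be sketched in a few lines; the 5 % rate and the 100-unit cash flow are arbitrary illustrations:

```python
def real_value(nominal, r, t):
    """Real (present) value of a nominal cash flow received in year t."""
    return nominal / (1 + r) ** t

# The same nominal 100 is worth less the further away it is.
for t in (0, 5, 10, 20):
    print(t, round(real_value(100.0, 0.05, t), 2))
```

The printed values shrink roughly geometrically, which is what makes far-future revenues contribute little to an appraisal.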

Investment is based on equity and debt. The Weighted Average Cost of Capital (WACC) combines the capital structure with the cost of debt and the cost of equity: r = WACC_{pretax} = \frac{E}{V}\cdot k_E + \frac{D}{V} \cdot k_D where V = E + D is the investment volume. The Net Present Value (NPV) is NPV = -investment_0 + \sum_{t=1}^T \frac{cashflow_t}{(1+r)^t}. The NPV is expressed in a currency, and alternatives are usually chosen based on the highest NPV for the least investment.
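A minimal sketch combining the WACC and NPV formulas above; the capital structure, costs of capital and cash flows are invented:

```python
def wacc(E, D, k_E, k_D):
    """r = E/V * k_E + D/V * k_D with V = E + D."""
    V = E + D
    return E / V * k_E + D / V * k_D

def npv(investment, cashflows, r):
    """NPV = -investment_0 + sum_t cashflow_t / (1+r)^t, t = 1..T."""
    return -investment + sum(cf / (1 + r) ** (t + 1)
                             for t, cf in enumerate(cashflows))

r = wacc(E=600.0, D=400.0, k_E=0.08, k_D=0.04)  # 0.064
print(round(npv(1000.0, [300.0] * 5, r), 2))    # positive -> worth doing
```

Discounting at the WACC rather than at a risk-free rate is what makes the NPV reflect the opportunity cost of the invested capital.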

The Internal Rate of Return (IRR) is the discount rate at which the NPV turns to 0. It allows comparing investments without having to compute the NPV. The IRR has disadvantages in complex scenarios (e.g. cash flows with multiple sign changes), where its meaning becomes unclear.
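Since the IRR is a root of the NPV, it can be found numerically, e.g. by bisection (a sketch with invented cash flows; it relies on the NPV being decreasing in r, which holds for conventional cash flows):

```python
def npv(investment, cashflows, r):
    return -investment + sum(cf / (1 + r) ** (t + 1)
                             for t, cf in enumerate(cashflows))

def irr(investment, cashflows, lo=0.0, hi=1.0, tol=1e-9):
    """Bisect for NPV = 0; assumes NPV > 0 at lo and NPV < 0 at hi."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(investment, cashflows, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(irr(1000.0, [300.0] * 5), 4))  # about 15.2 %
```

With multiple sign changes in the cash flows the NPV can have several roots, which is exactly the ambiguity mentioned above.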

The Profitability Index (PI) is the NPV relative to invested capital.

Life Cycle Cost

The Life Cycle Costs (LCC) consider all costs and savings over the entire lifetime. Costs need to be discounted: LCC = C_0 + \sum_{t=1}^T\frac{c_t}{(1+r)^t}.

Levelised Cost of Electricity (LCOE) is the constant electricity price over the entire life of an asset that covers all operating expenses, debt and interest, and returns to investors: LCOE = \frac{\sum_{t=0}^T \frac{CAPEX_t + OPEX_t}{(1+r)^t}}{\sum_{t=0}^T \frac{kWh_{initial,net} \cdot (1-Degrade)^t}{(1+r)^t}} where CAPEX is the investment cost and OPEX the operation cost. It was first discussed for electricity, but the concept applies to any other good produced over the lifetime of an asset. It is commonly used by policy makers, planners, researchers and investors. It can compare technologies (with different lifetimes) as long as they produce the same output. A famous application are the Feed-in Tariffs (FIT) in Denmark and Germany.
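A sketch of the LCOE formula with made-up numbers, assuming a one-off CAPEX in year 0, constant OPEX and production starting in year 1 (a simplification of the general sums above):

```python
def lcoe(capex, opex, kwh_initial, degrade, r, T):
    """Discounted lifetime costs per discounted lifetime kWh."""
    costs = capex + sum(opex / (1 + r) ** t for t in range(1, T + 1))
    energy = sum(kwh_initial * (1 - degrade) ** t / (1 + r) ** t
                 for t in range(1, T + 1))
    return costs / energy

# Hypothetical small PV plant: 25-year life, 1 % annual degradation.
print(round(lcoe(capex=1500.0, opex=20.0, kwh_initial=1200.0,
                 degrade=0.01, r=0.05, T=25), 3))  # currency per kWh
```

Note that the output (not only the costs) is discounted; this is what makes LCOE the break-even constant price rather than a simple cost average.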

Cost Effectiveness Analysis

Cost Effectiveness Analysis (CEA) has two starting points: one is to reach a certain target at minimal cost, the other is to achieve maximal impact for a given cost.

First an LCC is performed. Based on the LCC, the baseline cost and the costs of different policy options can be assessed. The incremental cost of an option is its difference to the baseline. Summed over the different LCCs, the abatement/relative costs can be computed and the options can finally be compared.
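These steps can be sketched as follows; the option names, LCC values and avoided emissions are invented:

```python
# Hypothetical LCCs (currency) and avoided emissions (tCO2) per option.
baseline_lcc = 1000.0
options = {
    "insulation": {"lcc": 1150.0, "avoided_t_co2": 10.0},
    "heat_pump":  {"lcc": 1300.0, "avoided_t_co2": 30.0},
}

for name, o in options.items():
    incremental = o["lcc"] - baseline_lcc          # cost over the baseline
    abatement = incremental / o["avoided_t_co2"]   # cost per tonne avoided
    print(name, incremental, abatement)
# insulation: 150.0 extra at 15.0 per tonne; heat_pump: 300.0 extra at 10.0
```

The more expensive option can still win the comparison because the ranking is by abatement cost, not by incremental cost alone.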

Computing the baseline is difficult and is often an issue of political contention.

Dynamics and Sensitivities

Dynamic developments in technology have so far shown that technology gets cheaper over time. Can this development be forecast to get a grasp on the discount rate?

Sensitivity analysis judges the different factors of an analysis and tries to rank them according to their impact.



QPAM: Problem Definition

Before a policy analysis can be performed, the underlying problem needs to be defined. Any problem definition is a function of what the author of the definition cares about and what they assume in terms of necessary relationships. Bardach suggested being clear about what you care about and what you assume about the facts (Bardach, 2012). He claims that problems are caused by market failures, inequalities and inefficient government solutions.

Problem space

An alternative approach is to think about a two-dimensional problem space defined by the involvement of money and social consensus (legislation could be passed and the law will remain long-term). The solution to a problem defined in this space falls into one of the following categories:

  • Private (companies): if no social consensus is there, but money can be made
  • Non-Profit (organisations): if no social consensus is there, but no money can be made
  • Public (government): if a social consensus is there, but no money can be made
  • Ambivalent (companies/government/organisations): if a social consensus is there and money can be made

References

Bardach, E. (2012). A Practical Guide for Policy Analysis (4th ed.). Thousand Oaks, California: Sage.


QPAM: Introduction

Quantitative Policy Analysis and Modelling (QPAM) concerns itself with the goals that we set ourselves as a society and how government can attain these goals.

The “Grüne Wirtschaft” initiative currently up for a referendum in Switzerland can be said to have the goal of a sustainable economy by 2050. If accepted, it mandates an assessment of progress every four years. Based on the assessment, the government is authorised to raise taxes, put subsidies in place, support research and impose regulation to achieve the goal. It is a policy with a policy target and a set of policy instruments (the mandate to review and the authority to enact other instruments). To answer the question whether it is a good policy, it needs to be assessed whether it takes Switzerland in a desired direction and whether it is an effective means to get there. Both the desired direction and the effectiveness of a policy can be highly contested, as seen in the current public debate in Switzerland. The particular policy proposed is fairly vague except for the mandate to review (roughly 6 months of assessment every 4 years). The vagueness of the means leaves them open to interpretation by different groups and hence makes the policy’s meaning contested.

Policy Analysis

The term is confusing inasmuch as it has two major meanings attached. This class focuses on the analysis of the effects of policies put in place. For the “Grüne Wirtschaft” initiative this would mean assessing the means (raising taxes, putting subsidies in place, supporting research and imposing regulation) by their effectiveness. This approach involves a lot of economics and modelling and will drive the majority of the classes.

In Political Science departments there is a second meaning that focuses on the analysis of the political factors that lead to a policy: it analyses the different coalitions that drive the creation of a policy. For our example, it would probably focus on how the Green Party drove the policy and how other parties react to it. This topic is covered in Environmental Governance classes and will not be developed further here.

If the referendum were to pass, the Swiss Environmental Office would be tasked with implementing it and with developing and analysing the means, which is the focus of this course.

Policy analysis is also used by NGOs such as the WWF to identify policies that they want to advocate for.

Eugene Bardach’s book “A Practical Guide for Policy Analysis” (Bardach, 2012) is the main resource for the course and I reviewed it here.

References

Bardach, E. (2012). A Practical Guide for Policy Analysis (4th ed.). Thousand Oaks, California: Sage.
