Off to the Chicago Forum on Global Cities

Today I write to you as part of a mini-series on my stay at the Chicago Forum on Global Cities (CFGC). I have been kindly sponsored by ETH Zurich and the Chicago Forum to participate in the event. I am currently sitting on the train to Zurich airport and I am looking forward to three days of intensive discussions on the future of global cities. You will also find a post about this event on the ETH Ambassadors blog and the ETH Global Facebook page, and you may look out for some tweets.

I hope for many interesting meetings and conversations at the Forum, especially about my main topics of interest: Big Data in Smart Cities – for which I have brought a short policy brief designed in the Argumentation and Science Communication course of the ISTP – as well as ways to design better cities based on Big Data and knowledge of human (navigation) behaviour, the topic of my soon-to-start PhD.

PIE: Ex Post Evaluation: Establishing Causality without Experimentation

So far, we discussed evaluation based on ex ante Randomised Control Trials (RCTs). Ex post evaluations offer another opportunity for assessment. However, there are strong limitations:

  • Treatment manipulation is no longer possible,
  • only observational data are available (i.e. the outcomes of social processes), and
  • baseline data may be missing.

To address these issues, the idea is to exploit naturally occurring randomisation (as if randomly assigned) and to try to construct a valid counterfactual. In essence, we try to construct an ex post RCT based on historical data. The advantage of such an experiment is that it allows us to learn from the past.

Natural experiments

The randomisation has arisen naturally, for instance after a natural disaster, an infrastructure failure or indiscriminate forms of violence. The key task here is to establish that the treatment variation is random, with the limitation that this can only be checked for observable parameters.

These natural experiments are also called quasi-experiments.

Regression Discontinuity Design (RDD)

An RDD exploits the fact that the treatment variable [latex]T[/latex] is determined, either completely or partially, by the value of an assignment variable [latex]X[/latex] being on either side of a fixed cutpoint [latex]c[/latex]. In the limit at the cutpoint [latex]c[/latex], the assignment of treatment is random/exogenous. The assumption is that units just left and right of the cutpoint [latex]c[/latex] are identical except with regard to the treatment assignment.

An RDD and an RCT are closely related: an RCT can be seen as an RDD in which each participant is assigned a randomly generated number [latex]v[/latex] from a uniform distribution over the range [latex][0,1][/latex] such that [latex]T_i = 1[/latex] if [latex]v\geq0.5[/latex] and [latex]T_i=0[/latex] otherwise.
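To make this concrete, here is a minimal sketch of a sharp RDD estimate on simulated data, using a local linear regression with an interaction term around the cutpoint (the data-generating process, the bandwidth and the effect size of 1.5 are illustrative assumptions, not part of the lecture material):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    c = 0.0                                         # cutpoint
    x = rng.uniform(-1, 1, 2000)                    # assignment variable
    t = (x >= c).astype(int)                        # sharp treatment assignment
    y = 2.0 * x + 1.5 * t + rng.normal(0, 1, 2000)  # true jump of 1.5 at c

    # local linear regression within a bandwidth around the cutpoint,
    # allowing different slopes on each side
    h = 0.25
    m = np.abs(x - c) <= h
    X = sm.add_constant(np.column_stack([t[m], x[m] - c, t[m] * (x[m] - c)]))
    fit = sm.OLS(y[m], X).fit()
    print(fit.params[1])  # estimated discontinuity (treatment effect) at c

The estimate should recover the simulated jump of 1.5; fitting locally around the cutpoint also guards against the functional-form issue listed below.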

However, RDDs are more prone to several issues:

  • Omitted variable bias is possible (in contrast to well-designed RCTs), because a variable [latex]Z[/latex], which may affect [latex]T[/latex], could change discontinuously at the cutpoint [latex]c[/latex].
  • Units may be able to manipulate their value on the assignment variable [latex]X[/latex] to influence treatment assignment around [latex]c[/latex].
  • Global functional form misspecification may lead to non-linearities being interpreted as discontinuities.

Instrumental Variable Regression (IV)

IV regression addresses a set of problems where endogeneity or joint determination of [latex]X[/latex] and [latex]Y[/latex], omitted variable bias (other relevant variables) or measurement error in [latex]X[/latex] may be an issue.

An instrumental variable [latex]Z[/latex] is introduced. It is considered a valid instrument if and only if:

  • Instrument relevance: [latex]Z[/latex] must be correlated with [latex]X[/latex],
  • Instrument exogeneity: [latex]Z[/latex] must be uncorrelated with all other determinants of [latex]Y[/latex].

Potential sources for instruments are:

  • Nature: e.g. geography, weather, biology in which a truly random source of variation influences [latex]X[/latex] (no endogeneity).
  • History: e.g. things determined a long time ago, which were possibly endogenous contemporaneously, but no longer plausibly influence [latex]Y[/latex].
  • Institutions/Policies: e.g. formal or informal rules that influence the assignment of [latex]X[/latex] in a way unrelated to [latex]Y[/latex].

Potential issues for IV regressions are:

  • Conditional unconfoundedness of [latex]Z[/latex] regarding [latex]X[/latex] (ideally [latex]Z[/latex] as if random with regard to [latex]X[/latex] such as eligibility rule or encouragement design).
  • Weak instrument: [latex]Z[/latex] and [latex]X[/latex] are only weakly correlated.
  • Violation of exclusion restriction: [latex]Z[/latex] affects [latex]Y[/latex] independent of [latex]X[/latex].
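As an illustration, the following sketch simulates an endogenous regressor and recovers its effect with a manual two-stage least squares procedure. The data-generating process and coefficients are assumptions for the example; in practice a dedicated IV routine should be used so that the standard errors are computed correctly.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5000
    z = rng.normal(size=n)                        # instrument
    u = rng.normal(size=n)                        # unobserved confounder
    x = 0.8 * z + u + rng.normal(size=n)          # endogenous regressor
    y = 1.0 * x + 2.0 * u + rng.normal(size=n)    # true effect of x on y is 1.0

    # naive OLS is biased because u affects both x and y
    print(sm.OLS(y, sm.add_constant(x)).fit().params[1])

    # two-stage least squares: first stage x ~ z, second stage y ~ x_hat
    x_hat = sm.OLS(x, sm.add_constant(z)).fit().fittedvalues
    print(sm.OLS(y, sm.add_constant(x_hat)).fit().params[1])  # close to 1.0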

Difference-in-Differences Estimation

Instead of comparing only one point in time, changes are compared over time (i.e. before and after the policy intervention) between participating and non-participating units. This requires panel data with at least two time periods for participating and non-participating units, before and after the policy intervention. Ideally, we have more than two pre-intervention periods.

All participating units should be included, but there are no particular assumptions about how non-participating units are selected. This allows for an arbitrary comparison group as long as they are a valid counterfactual.

However, as always, there are several issues:

  • Time-varying confounders could be an alternative explanation, since the estimation only removes time-invariant differences and any time-varying omitted variable would bias the estimate.
  • The parallel trends assumption is required: participating and non-participating units must follow a similar trajectory before the intervention, so that the post-intervention difference can be attributed to the intervention.
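A minimal sketch of a two-period, two-group diff-in-diff estimate on simulated panel data in which the parallel trends assumption holds by construction (all numbers are illustrative assumptions); the coefficient on the interaction term is the diff-in-diff estimate:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 500
    df = pd.DataFrame({
        "unit": np.repeat(np.arange(n), 2),
        "post": np.tile([0, 1], n),
        "treated": np.repeat(rng.integers(0, 2, n), 2),
    })
    # common time trend for everyone, treatment effect of 2.0 after the intervention
    df["y"] = (1.0 * df["treated"] + 0.5 * df["post"]
               + 2.0 * df["treated"] * df["post"] + rng.normal(0, 1, 2 * n))

    fit = smf.ols("y ~ treated + post + treated:post", data=df).fit()
    print(fit.params["treated:post"])  # diff-in-diff estimate, close to 2.0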

Synthetic Control Methods (SCM)

While related to the diff-in-diff estimation strategy, there are a few differences, as SCM

  • can only have one participating unit;
  • does not need a fixed time period and can be applied more flexibly;
  • requires a continuous outcome;
  • relaxes the modelling assumptions of diff-in-diff; and
  • does not have a method for formal inference (yet).

Non-participating units can be chosen freely (as in diff-in-diff), but the method works best with many homogeneous units. It also requires panel data, but with multiple pre-intervention years: the longer the available time frame, the better SCM can construct a valid counterfactual. The synthetic control is constructed as a weighted combination of the non-participating units.
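A minimal sketch of how such weights could be obtained, assuming a toy pre-intervention outcome matrix: the weights are constrained to be non-negative and to sum to one, and are chosen to minimise the distance between the treated unit and the weighted controls in the pre-intervention period.

    import numpy as np
    from scipy.optimize import minimize

    # pre-intervention outcomes: rows = years, columns = control units (toy numbers)
    Y_controls = np.array([[2.0, 3.1, 4.0],
                           [2.2, 3.3, 4.1],
                           [2.4, 3.6, 4.3]])
    y_treated = np.array([3.0, 3.2, 3.5])   # treated unit, pre-intervention

    def loss(w):
        return np.sum((y_treated - Y_controls @ w) ** 2)

    k = Y_controls.shape[1]
    res = minimize(loss, x0=np.full(k, 1 / k),
                   bounds=[(0, 1)] * k,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    print(res.x)  # convex weights defining the synthetic control

A full SCM implementation would also match on pre-intervention covariates; that step is omitted in this sketch.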

Typical issues can arise because the quality of the synthetic control depends on:

  • number of potential controls,
  • homogeneity of potential controls,
  • richness of time varying dataset to create synthetic control,
  • number of pre-intervention period observations, and
  • smoothness of the outcome.

Matching

The idea behind matching is to find pairs of units that are identical on key confounders; with multiple confounders across multiple dimensions this becomes exceedingly difficult, and the proposed solution is to estimate each unit's participation propensity given observable characteristics. There are a variety of matching estimators with different advantages and disadvantages (e.g. nearest neighbour, coarsened exact matching, genetic matching, etc.).

In matching, we look at the distributions of the treated and the untreated. Observations outside the region of common support (i.e. where only treated or only untreated units are observed) should be excluded.
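A minimal propensity score matching sketch on simulated data (the data-generating process is an assumption for illustration): each unit's participation propensity is estimated from observables, and every treated unit is matched to its nearest control on that score.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 2000
    x = rng.normal(size=(n, 3))                         # observed confounders
    t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))     # treatment depends on x
    y = x[:, 0] + 1.0 * t + rng.normal(size=n)          # true treatment effect of 1.0

    # estimate each unit's participation propensity from observables
    ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

    # 1-nearest-neighbour matching of treated units to controls on the propensity score
    treated, controls = np.where(t == 1)[0], np.where(t == 0)[0]
    nearest = np.abs(ps[controls][None, :] - ps[treated][:, None]).argmin(axis=1)
    att = np.mean(y[treated] - y[controls[nearest]])
    print(att)  # average treatment effect on the treated, close to 1.0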

Usual pitfalls include:

  • The quality of the matching estimate requires similar assumptions to hold as for regular regression (a complete understanding of which factors affect the programme outcome).
  • Matching can be considered a non- or semi-parametric regression; hence, from a causal inference perspective, it is not substantially different from multivariate regression.

Conclusion

The quality of an ex post evaluation relies on the validity of the counterfactual. RCTs are the gold standard, but the ex post methods have the advantage of allowing us to learn from the past. There is no technical/statistical fix that will create a valid counterfactual: it is always a question of design. Finding valid counterfactuals in observational data requires innovative thinking and deep substantive knowledge.

PIE: Ex Ante Evaluations: Randomised Control Trials

For a Randomised Control Trial (RCT), several elements are necessary. Evaluators need to be involved long before the programme ends – ideally from its conception. Randomisation must take place, the operationalisation and measurement must be defined, and the data collection and data analysis must be performed rigorously. Randomisation and the data collection process are what make the difference compared to other evaluation approaches.

To run an RCT, partners are needed. Often firms and non-governmental organisations (NGOs) are partners, since they benefit from evaluating their work. Governments are still rare partners, but the number of government-sponsored RCTs is increasing. The programme under evaluation can be either an actual programme (allowing only a simple impact evaluation) or a pilot programme (where the impact evaluation can become a field experiment).

Randomisation needs to be chosen carefully. Usually, access, the timing of access or the encouragement to participate is randomised. The optimal test would randomise access, but ethical concerns may make that impossible. Relaxations are obtained by introducing the treatment in waves or by encouraging the population to take up the treatment (and measuring the people who did not take it up as “non-accessed”).

A randomised trial can be run in many circumstances, for instance:

  • New program design,
  • New program,
  • New services,
  • New people,
  • New locations,
  • Over- or under-subscription of existing programs,
  • Rotation of program benefits or burdens,
  • Admission cutoffs, and
  • Admission in phases.

The choice of the randomisation level is another important parameter. Often the type of treatment or randomisation opportunities determine the randomisation level. However, the best choice usually would be the individual level. If the level can be picked, there are still several considerations that need to be made when determining the level:

  • Unit of measurement (experiment constraint),
  • Spillovers (interaction between observed units),
  • Attrition (loss of units throughout the observation),
  • Compliance (will units take up the treatment as assigned),
  • Statistical power (number of units available), and
  • Feasibility (can the unit be observed (cost-)effectively).

Often a cross-cutting design is used, where several treatments are applied and distributed across the units such that all combinations are observed. This allows assessing the individual treatments as well as the interactions between treatments.
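A small sketch of such a cross-cutting (factorial) design with two independently randomised binary treatments; with independent assignment, all four combinations appear in the sample and their interaction can be estimated (the sample size is an arbitrary assumption):

    import numpy as np

    rng = np.random.default_rng(4)
    n = 400
    t1 = rng.integers(0, 2, n)   # first treatment, randomised independently
    t2 = rng.integers(0, 2, n)   # second treatment, randomised independently

    # all four combinations (0/0, 0/1, 1/0, 1/1) are observed
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, int(np.sum((t1 == a) & (t2 == b))))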

The data collection process can be described in three steps.

  1. Baseline Measurement (assesses whether the randomisation worked and allows gauging the bias from non-compliance and attrition),
  2. Midstream Measurement (only in long-term projects), and
  3. Endline Measurement (in combination with the baseline measurement, this allows estimating unit fixed effects, e.g. via difference-in-differences estimation).

Threats to RCTs

The four main threats to RCTs are partial compliance, attrition, spill-overs and evaluation-driven effects.

Partial compliance can be caused by several issues: implementation staff may depart from the allocation or treatment procedures; units in the treatment group may not be treated, or units in the control group may be treated; units in the treatment group may not receive the complete treatment; and units may exhibit the opposite of compliance (so-called defiers).

Attrition may occur for specific reasons; however, the reasons for dropping out often cannot be measured (or the answer is refused).

Spillovers may occur on different levels: physical, behavioural, informational, and general-equilibrium (market-wide) effects (i.e. long-term, system-wide effects).

Evaluation-driven effects have been observed. The most important ones include:

  • Hawthorne effect (the treatment group changes behaviour due to being observed; to counter this effect, something that cannot be changed by participants should be measured [alternatively, not telling participants that they are observed would help, but is often unethical]);
  • John Henry effect (the control group changes behaviour due to believing being disadvantaged and trying to compensate);
  • resentment and demoralisation effect (selection into treatment and control changes behaviour);
  • demand effect (participants want to produce the result required by the observer or impress the observer);
  • anticipation effect (the psychological state of participants influences their performance [e.g. if they expect to be good at something, they score better]); and
  • survey effect (the framing and order of tasks/questions will influence the response).

Summary

RCTs are seen as the gold standard when it comes to impact evaluation, but they are no panacea. Designing a rigorous impact evaluation requires innovative thinking and substantive knowledge of the program and policy area.

Funders in the US and the UK have increasingly begun to ask for RCT evaluations of programmes, especially in certain domestic policy areas (e.g. education) and in development. Continental Europe is still somewhat lagging in this respect.

PIE: The Fundamental Problem of Causal Inference

We evaluate policies for a multitude of reasons. On the one hand, we wish to increase our knowledge and learn about a policy's underlying workings to improve programme design and effectiveness. On the other hand, economic, social and political considerations motivate evaluation. These may include allocation decisions via cost-benefit analysis (economic), transparency and accountability (social), public sector reform and innovation (social), credibility (political), and overcoming ideological (fact-free) bickering (political).

Impact Evaluation can offer some answers. In particular,

  • the effect of a programme can be measured,
  • the effectiveness of the programme can be measured (i.e. how much better off beneficiaries are),
  • how much outcomes change under alternative designs,
  • how differently different people are impacted, and
  • whether the programme is cost-effective.

A causal framework is required to obtain these answers. However, there are risks inherent to evaluation; evaluations are not free and unproblematic. The main issues are

  • costs in time and resources,
  • distorted incentives by equating measurable and valuable whereby intrinsic motivation is crowded out,
  • Goodhart’s law, i.e. that a measure that becomes a target ceases to be a good measure (because people optimise towards the measure rather than the underlying objective that was evaluated via the measure), and
  • a tendency towards encrustation and self-fulfilling prophecy.

Not every programme needs or should be evaluated: the potential benefits should outweigh the costs.

Causal Inference

Three basic criteria for causation were identified by Hume: spatial and temporal contiguity, temporal succession, and constant conjunction (Hume, 1740, 1748). The shortcomings were shown by Mill, who noted that observation without experimentation (supposing no aid from deduction) can ascertain sequences and co-existences, but cannot prove causation (Mill, 1843). Lastly, Lewis refined the notion: “a cause is something that makes a difference, and the difference it makes must be a difference from what would have happened without it” (Lewis, 1973).

Causal Claim

A causal claim is a statement about what did not happen. A statement “X caused Y” means that Y is present, but Y would not have been present if X had not been present. This is the counterfactual approach to causality. In this approach there is no implication that, just because X caused Y, X is the main or the only reason why Y happened, or even that X is “responsible” for Y.

This points to a fundamental distinction between attribution and contribution, which is often misunderstood. Attribution would claim that X is the one and only cause of Y, whereas contribution merely states that X contributed towards the outcome Y. The approach cannot figure out the causes of Y, only whether some X contributed to bringing Y about. The reason is that there is never a single cause of Y, and there is no reason why the effects of different causes should add up to 100% unless all causes could be enumerated. Furthermore, causes are not rival. The question should always be “how much does X contribute to Y”, not “does X cause Y”.

Causality and Causal Pathways

Causal mechanisms or causal chains are often used to illustrate causality. This can be misleading, as Holland points out: if A is planning action Y and B tries to prevent it, but C intervenes to stop B, then both A and C contribute to Y, yet Hume's criteria are not fulfilled for the contribution of C to Y (Holland, 1986).

Necessary and sufficient conditions

A necessary condition X demands that, for Y to occur, X needs to have happened: [latex]Y \implies X[/latex] (equivalently [latex]\neg X \implies \neg Y[/latex]). A sufficient condition X demands that if X occurs, then Y occurs: [latex]X \implies Y[/latex].

In causal frameworks the conditions need to be relaxed to allow for probabilistic statements (the probability of Y is higher if X is present) and contingencies (X causes Y if Z is present, but not otherwise).

No Causation Without Manipulation

The counterfactual approach requires one to be able to think through how things might look in different conditions. Causal claims should be restricted to conditions that can conceivably (not necessarily practically) be manipulated (Holland, 1986).

Rubin’s Potential Outcome Framework

The framework considers a dichotomous treatment variable [latex]X[/latex] with [latex]x \in \{0,1\}[/latex], where [latex]x=1[/latex] means treated, and a dichotomous outcome variable [latex]Y[/latex] with [latex]y \in \{0,1\}[/latex]. Furthermore, we define [latex]Y^{x=1}[/latex] as the potential outcome under treatment and [latex]Y^{x=0}[/latex] as the counterfactual outcome under no treatment.

The outcome of interest would be the individual causal/treatment effect (ITE): [latex]X[/latex] has a causal effect on unit [latex]i[/latex]’s outcome [latex]Y[/latex] if and only if [latex]Y^{x=1}\neq Y^{x=0}[/latex]. However, only one of the two outcomes is actually observable (the factual one), so ITEs cannot be computed; this is referred to as the fundamental problem of causal inference (Holland, 1986).

We need an alternative measure of the causal effect. It is still possible to figure out whether [latex]X[/latex] causes [latex]Y[/latex] on average if treatment and control units are exchangeable (statistically speaking, [latex]Y^x[/latex] is stochastically independent of [latex]X[/latex]), i.e. if [latex]\Pr[Y=1|X=1]-\Pr[Y=1|X=0]=\Pr[Y^{x=1}=1]-\Pr[Y^{x=0}=1][/latex]. Then and only then is the average treatment effect (ATE) of [latex]X[/latex] on [latex]Y[/latex] for a finite population of size [latex]N[/latex] given by [latex]\Pr[Y^{x=1}=1]-\Pr[Y^{x=0}=1] = E[Y^{x=1}-Y^{x=0}] = \frac{1}{N}\sum_{i=1}^N (Y_i^{x=1}-Y_i^{x=0})[/latex].

Stochastic independence can be achieved either with the scientific approach, which relies on a homogeneity assumption (impossible with heterogeneous units), or with the statistical approach, which relies on large numbers and can only achieve exchangeability on average. Furthermore, for the ATE we also need the Stable Unit Treatment Value Assumption (SUTVA) to hold, i.e. no variation in treatment across units and non-interference between the units of observation (the treatment of one unit does not influence the others).
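A small simulation sketch of the framework (the probabilities are arbitrary assumptions): both potential outcomes are generated for every unit, only one of them is observed, and under randomised treatment the difference in observed means recovers the ATE.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000
    y0 = rng.binomial(1, 0.3, n)          # potential outcome Y^{x=0}
    y1 = rng.binomial(1, 0.5, n)          # potential outcome Y^{x=1}
    x = rng.binomial(1, 0.5, n)           # randomised treatment => exchangeability
    y_obs = np.where(x == 1, y1, y0)      # only the factual outcome is observed

    ate_true = np.mean(y1 - y0)                            # needs both potential outcomes
    ate_est = y_obs[x == 1].mean() - y_obs[x == 0].mean()  # difference in observed means
    print(ate_true, ate_est)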

References

Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945–960.
Hume, D. (1740). An Abstract of a Book Lately Published; Entituled, A Treatise of Human Nature, &c: Wherein the Chief Argument of that Book is Farther Illustrated and Explained. (C. Borbet [ie Corbet], Ed.). over-against St. Dunstan’s Church, in Fleetstreet: Addison’s Head.
Hume, D. (1748). Philosophical Essays Concerning Human Understanding: By the Author of the Essays Moral and Political. opposite Katherine-Street, in the Strand: Millar A.
Lewis, D. (1973). Causation. The Journal of Philosophy, 70(17), 556–567.
Mill, J. S. (1843). A System of Logic. Parker.

ASC: Concepts and Arguments

The evaluation of the correctness of arguments is the core of this blog post.

We will focus on justifications, as premises are to be evaluated with the scientific method. Nevertheless, the quality of the premises must be considered: only true premises can guarantee the truth of the conclusion, so the reasons must be impeccable, and acceptable premises can provide for the acceptance of the conclusion. Additionally, all premises must be consistent with each other to support the conclusion.

Deductive inference (validity) is then used to come to the conclusion. An inference is valid if, whenever all premises are true, the conclusion must be true (where the “must” refers to the relation between premises and conclusion, not to the conclusion itself). Consequently, a valid inference cannot lead from true premises to a false conclusion.

A central feature of valid inferences is that if the conclusion is false, then at least one of the premises must be false. Conversely, if all premises are true but the conclusion is false, the inference must be invalid.

Formal validity is based on the structure of the assertion, not its meaning: [latex](X \in A \lor X \in B) \land \neg(X \in B) \implies X \in A[/latex].

Material validity is based on the relation between the concepts. E.g. a square has four sides of equal length.

Conditional Claims

A is a sufficient condition for B: [latex]A\implies B[/latex]. Logically, B must be true if A occurs, but B could also be true due to a different condition C. B is a necessary condition for A: [latex]\neg B\implies \neg A[/latex]. A can only be true if B is true; however, there could be a C that is also necessary for A to be true.

Inferential schemes for the conditional claim “if [latex]A[/latex] then [latex]B[/latex]” are modus ponens ([latex]A\implies B[/latex]) and modus tollens ([latex]\neg B \implies \neg A[/latex]). Invalid schemes, namely denying the antecedent ([latex]\neg A\implies \neg B[/latex]) and affirming the consequent ([latex]B\implies A[/latex]), are formal fallacies in reasoning (non sequiturs).

In the fallacy of equivocation, the same expression is used in a different sense in the premises than in the conclusion.

In the naturalistic fallacy a normative claim is deduced from a descriptive claim.

Non-deductive inferences claim to be correct (but not valid). An inference is correct if and only if its premises together provide a good reason for accepting its conclusion. However, a central characteristic of correct non-deductive inferences is that the conclusion can be false even if the premises are true. The conclusion is supported to different degrees and can be strengthened or weakened with additional premises. Non-formal fallacies may occur if the reasons are too weak to support the conclusion.

Inductive inferences are an important class of non-deductive inferences, in which the premises are analysed with the help of the theory of probability and statistics. Enumerative induction concludes from the property distribution in a sample to the property distribution in the whole population. Statistical syllogism starts from a population in which two properties have been observed together and concludes that one implies the other in an individual case. Predictive induction observes two properties together in a sample and concludes that one implies the other in further cases. Usual fallacies include too small samples, non-representative samples, relevant information not being considered, and false deliberation regarding probabilities.

Argument by analogy

A claim is justified by analogy to another claim. This argument is often fallacious, for instance through illustrative analogies (which do not justify conclusions), irrelevant analogies, weak analogies, or not considering a relevant disanalogy.

Causal inferences

A factor F is considered causally relevant for an event if two situations differ only in that the event occurs in the situation in which F is present. Typical fallacies include inference from a temporal sequence, inference from a positive correlation, and inferring the inverse causal relevance.

Inference to Best Explanation

A hypothesis is justified because it is the best (closest) explanation for the obtaining of certain facts.

Rules of reasoning

Shifting the burden of proof means that, instead of justifying a controversial claim, one attacks the opponent's position or demands justification from them. Another way of shifting the burden is an appeal to authority.

The relevance of reasoning demands that an argument is in favour of one's own claim. Throwing in arguments that are not related to the claim breaks relevance.

The accuracy of reasoning is undermined by the “straw man” fallacy, where an exaggeration or alteration of the opponent's claim is introduced to make it more susceptible to criticism. More generally, a different claim is attributed to the opponent in order to attack them.

The freedom of speech needs to be preserved by allowing criticism and justification of arguments. Fallacies include the argument ad baculum (appeal to force or threat), the argument ad misericordiam (appeal to pity: have pity because of X), and the argument ad hominem (attacking the person rather than the argument).

Implicit premises must be stated if they complete the argument. Fallacies include attributing false implicit premises to opponents and not accepting implicit premises in one's own arguments.

Shared premises have to be accepted to reach a reasonable agreement about a controversial claim. Fallacies include retreating from shared premises or falsely presenting claims as shared premises.

Accepting the results of previous argumentation is necessary. Otherwise, fallacies arise such as the argument ad ignorantiam (taking absence of evidence as evidence of absence) or equating the defence of a claim with its acceptance.


SMABSC: Cognitive Agents

Cognitive models are representations of an agent's control mechanism resembling the cognitive architecture of a mind. Such a model can be understood as a control system (e.g. a flow graph of how to react) that takes sensory inputs and produces motor outputs (Piaget, 1985).

More advanced models include adaptive memory (Anderson, 1983).

Famous models include Soar: State, Operator And Result (Laird, Newell, & Rosenbloom, 1987); BDI: Belief, Desire, and Intention (Bratman, Israel, & Pollack, 1988); PECS: Physics, Emotions, Cognitive, Social (Urban & Schmidt, 2001); ACT-R: Adaptive Control of Thought – Rational (Anderson, Matessa, & Lebiere, 1997); CLARION: Connectionist Learning with Adaptive Rule Induction On-line (Sun, 2006); and Agent Zero (Epstein, 2014).

The commonality of all of these models is summed up in a slide from the University of Michigan.


References

Anderson, J. R. (1983). A spreading activation theory of memory. Journal of Verbal Learning and Verbal Behavior, 22(3), 261–295.
Anderson, J. R., Matessa, M., & Lebiere, C. (1997). ACT-R: A theory of higher level cognition and its relation to visual attention. Human-Computer Interaction, 12(4), 439–462.
Bratman, M. E., Israel, D. J., & Pollack, M. E. (1988). Plans and resource‐bounded practical reasoning. Computational Intelligence, 4(3), 349–355.
Epstein, J. M. (2014). Agent_Zero: Toward neurocognitive foundations for generative social science. Princeton University Press.
Laird, J. E., Newell, A., & Rosenbloom, P. S. (1987). Soar: An architecture for general intelligence. Artificial Intelligence, 33(1), 1–64.
Piaget, J. (1985). The equilibration of cognitive structures: The central problem of intellectual development. University of Chicago Press.
Sun, R. (2006). The CLARION cognitive architecture: Extending cognitive modeling to social simulation. In Cognition and multi-agent interaction: From cognitive modeling to social simulation (pp. 79–99). Cambridge University Press.
Urban, C., & Schmidt, B. (2001). PECS–agent-based modelling of human behaviour. In Emotional and Intelligent–The Tangled Knot of Social Cognition. Presented at the AAAI Fall Symposium Series, North Falmouth, MA.

SMABSC: Disease Propagation

The SIR model was introduced as a mathematical model with differential equations (Kermack & McKendrick, 1927). The basic states are Susceptible, Infected, and Recovered.

[latex]S(t) + I(t) + R(t) = N, \qquad \frac{dS}{dt}+\frac{dI}{dt}+\frac{dR}{dt} = 0[/latex]

The SIR model captures the fundamental trajectory of disease propagation, assuming that immunity is acquired after the disease and that the population is homogeneous.
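A minimal numerical sketch of the SIR dynamics (the transmission rate, recovery rate and population size are arbitrary assumptions):

    import numpy as np
    from scipy.integrate import solve_ivp

    beta, gamma, N = 0.3, 0.1, 1000.0   # transmission rate, recovery rate, population

    def sir(t, state):
        S, I, R = state
        dS = -beta * S * I / N
        dI = beta * S * I / N - gamma * I
        dR = gamma * I
        return [dS, dI, dR]

    sol = solve_ivp(sir, (0, 160), [N - 1, 1, 0], t_eval=np.linspace(0, 160, 161))
    S, I, R = sol.y
    print(I.max())                  # peak number of infected
    print(S[-1] + I[-1] + R[-1])    # total population stays constant at N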

But the SIR model has shortcomings:

  • populations are not infinite and increase/decrease over time,
  • populations are spatial objects and have (voluntary) spatial interactions,
  • populations are heterogeneous, including isolated sub-populations with irregular interactions as well as significant distances, and
  • populations are driven by endogenous social factors and constrained by exogenous environmental circumscription.

Additional states were introduced, such as Exposed (i.e. a dormant infection that does not yet infect others) and Maternal immunity (i.e. individuals that cannot be infected). The order of the states has also been rearranged, yielding variants such as SIS, SEIS and SIRS.

These models still did not address explicit spatial reference to population loci & transportation networks, distribution of social & medical information (temporal and spatial), mechanisms to simulate voluntary & forced quarantines, treatment options and their delivery (temporal and spatial), and characteristics of pathogens and disease vector.

Questions that need to be answered about a model concern the form of circumscription (Carneiro, 1961, 1987, 1988) (e.g. social and environmental forces), the instantiation topology (i.e. abstract or logical relationships, social networks, or space), the activation scheme (e.g. random, uniform random, Poisson), and encapsulation.

References

Carneiro, R. L. (1961). Slash-and-burn cultivation among the Kuikuru and its implications for cultural development in the Amazon Basin. In J. Wilbert (Ed.), The evolution of horticultural systems in native South America, causes and consequences. “Antropológica” (pp. 47–67). Caracas, Venezuela: Editorial Sucre.
Carneiro, R. L. (1987). Further reflections on resource concentration and its role in the rise of the state. In L. Manzanilla (Ed.), Studies in the Neolithic and Urban Revolutions (pp. 245–260). Oxford, UK: Archaeopress.
Carneiro, R. L. (1988). The Circumscription Theory: Challenge and response. The American Behavioral Scientist, 31(4), 497–511.
Kermack, W. O., & McKendrick, A. G. (1927). A contribution to the mathematical theory of epidemics. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 115(772).

ISN: Communities and cliques

Dyads are not yet particularly interesting for network research. However, starting with triads, interesting behaviour appears: in triads, balance and control emerge. Triads appear more commonly in social networks than in random graphs.

Clustering coefficient

The clustering coefficient measures the amount of transitivity in a network: when A is related to B, and B is in turn related to C, then A is also related to C. The index ranges from 0 to 1. In social networks it is usually between 0.3 and 0.6.
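As a small illustration, both the global and the average local clustering coefficient can be computed with networkx, here on Zachary's karate club network that ships with the library:

    import networkx as nx

    G = nx.karate_club_graph()        # classic small social network
    print(nx.transitivity(G))         # global clustering coefficient (transitivity)
    print(nx.average_clustering(G))   # mean of the local clustering coefficients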

Triad counts

Four patterns are counted across all triples of vertices: empty (three vertices with no connections), 1-edge (two vertices are connected via an edge and the third is not), 2-edge (one vertex is connected to both other vertices by an edge) and triangle (all three vertices are connected).

Groups of more than 3

There are a few technical descriptions that are extensions (and relaxations) of the complete subgraph for groups of more than 3 nodes:

  • Clique: Maximal complete subgraph of [latex]n \geq 3[/latex] nodes
  • k-clique: Relaxation of the clique in which the geodesic distance between any two members is at most [latex]k[/latex] (with [latex]k > 1[/latex])
  • k-core: Subgraph in which each node is adjacent to at least [latex]k[/latex] other nodes
  • k-plex: Maximal subgraph of [latex]g[/latex] nodes in which each node is adjacent to no fewer than [latex]g-k[/latex] nodes

Communities

Communities are densely connected within and sparsely connected to the rest of the network. Community structure can affect individuals, groups and networks, and gives insights into how social systems work.

Community detection

Community detection is a computationally difficult problem. Knowing the optimal solution is not always possible. Algorithmic approximations are often used to detect communities.

Modularity

Modularity is always smaller than 1, but can also take negative values. Higher values mean more edges within modules.

[latex]Q = \frac{1}{2m}\sum_{ij}\delta(C_i,C_j)(A_{ij}-P_{ij})[/latex]

where [latex]A_{ij}[/latex] encodes whether an edge exists between nodes [latex]i[/latex] and [latex]j[/latex], [latex]P_{ij}[/latex] is the probability of an edge existing under the null model, [latex]m[/latex] is the number of edges, and [latex]\delta(C_i,C_j)[/latex] indicates whether the two nodes are inside the same module.
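As an illustration, the modularity of a given partition can be computed with networkx; here the two factions recorded in Zachary's karate club network serve as the candidate communities:

    import networkx as nx
    from networkx.algorithms import community

    G = nx.karate_club_graph()
    # partition the nodes by the 'club' attribute from the original study
    part = [{n for n in G if G.nodes[n]["club"] == "Mr. Hi"},
            {n for n in G if G.nodes[n]["club"] == "Officer"}]
    print(community.modularity(G, part))   # Q for this two-community partition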

Kernighan-Lin Algorithm

A pre-determined number of communities is randomly assigned, and the modularity score is computed for switching any node. The switch achieving the highest modularity is applied. The process is repeated until no more switches can improve the score. The solution, however, is not necessarily optimal: a local maximum may be reached depending on the random initial assignment, so the algorithm should be repeated with different initial assignments.

Edge-Betweenness Clustering Algorithm

The edge-betweenness of each edge in the network is evaluated; the edge with the highest score is deleted. As long as splitting the network into further components increases modularity, the algorithm continues. While there is no random variation involved, it may not find the optimal solution, it may not maximise modularity, and it is slow.

Fast-Greedy Clustering Algorithm

The algorithm starts with an empty graph where each node is its own community. The modularity for each possible join between two communities is computed, and the join with the highest modularity is chosen. The process is repeated until no further increase in modularity is possible. One issue is that small communities are easily missed; however, a dendrogram allows judging how many communities could be present.
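A minimal sketch using the greedy modularity maximisation implemented in networkx (the Clauset-Newman-Moore algorithm, which corresponds to this fast-greedy approach):

    import networkx as nx
    from networkx.algorithms import community

    G = nx.karate_club_graph()
    parts = community.greedy_modularity_communities(G)    # agglomerative, modularity-greedy
    print(len(parts), [sorted(c) for c in parts])         # detected communities
    print(community.modularity(G, parts))                 # modularity of the final partition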


PE: Redistribution

The focus of today’s lecture will be on redistribution as discussed in Chapter 3 (Mueller, 2003). Additionally, we will discuss papers quantitatively assessing the situation (De Haan & Sturm, 2017; Sturm & de Haan, 2015).

A justification for the state can be redistribution. But redistribution itself can be argued for based on different reasons. In this post, we will illuminate the main arguments. First, voluntary redistribution arguments will be covered; then we will have a look at involuntary redistribution.

Redistribution as insurance

If one assumes Rawls’ veil of ignorance (Rawls, 2009), redistribution can be seen as an insurance against the uncertainty about the role one will assume in society. Insurance can be provided privately, so at first state intervention may seem unnecessary. However, since people can assess their own risk, high-risk individuals would select into the insurance whereas low-risk individuals would shun it. To overcome this problem of adverse selection, public insurance is introduced. The issue of adverse selection was introduced by Akerlof (Akerlof, 1970), who showed that information asymmetry can break markets. Public insurance overcomes the issue by enforcing a Pareto-optimal outcome at the societal level. Typical cases are health insurance, unemployment insurance, and retirement insurance.

Redistribution as public good

Another justification comes from altruism or empathy (“warm glow”). The utility equation is expanded to [latex]\max U_m + \alpha U_o[/latex] where [latex]0\leq\alpha\leq 1[/latex].

Redistribution as fairness norm

The assumption that fairness is an important norm is the basis for this redistribution argument. The classical example is the dictator game, where anonymous individuals are paired, and one gets an amount of money and may share it with the other. Usually, individuals share around 30% with the other, despite being able to keep everything and not knowing anything about the other. So far, the assumption is that the random element of the game makes people share their gain because they could also have ended up on the other side.

Redistribution as allocative efficiency

Suppose two individuals ([latex]P[/latex] and [latex]U[/latex]) work a fixed amount of land. The productivity of [latex]P[/latex] is 100 whereas [latex]U[/latex]’s productivity is 50. The connecting curve describes the production possibility frontier. An initial allocation (e.g. [latex]A[/latex]) may not be optimal on a societal level (i.e. [latex]A[/latex] is not tangential to a [latex]45°[/latex] line); the societal optimum would be at [latex]B[/latex], which is, however, unacceptable for [latex]U[/latex]. The inefficient allocation would end up at [latex]A'[/latex]. The state could either redistribute land to reach [latex]B[/latex] or redistribute production to reach [latex]C[/latex] (note that [latex]C[/latex] in the graph should amount to a value above 100). Alternatively, private contracting could reach the same result, given that the state enforces property rights and contracts.

The example is based on (Bös & Kolmar, 2003).

Redistribution as taking

Groups can lobby to increase their utility [latex]U[/latex] by increasing their income [latex]Y[/latex] based on the political resources [latex]R[/latex] available to them. However, if two antagonistic groups lobby, their policies may cancel each other out, leaving both only with the additional cost of lobbying and without any gains.

Measuring redistribution

To measure redistribution, inequality needs to be measured first. Inequality is typically measured via the Lorenz curve and the Gini coefficient (Gini, 1912); the Gini coefficient is the ratio of the area between the Lorenz curve and the line of equality to the total area under the line of equality. The Gini market coefficient (before taxes) and the Gini net coefficient (after taxes and transfers) are variants whose comparison helps to assess redistribution.
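A minimal sketch of the Gini coefficient computed from a sorted income vector (the incomes and the stylised flat tax-and-transfer scheme are assumptions for illustration):

    import numpy as np

    def gini(incomes):
        """Gini coefficient via the sorted-income formula."""
        x = np.sort(np.asarray(incomes, dtype=float))
        n = x.size
        # equivalent to the ratio of the area between the Lorenz curve and the
        # line of equality to the total area under the line of equality
        return np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * x.sum())

    market = np.array([10, 20, 30, 40, 100], dtype=float)   # pre-tax incomes
    net = 0.7 * market + 0.3 * market.mean()                # flat 30% tax, equal transfer
    print(gini(market), gini(net))   # the net Gini is lower after redistribution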

The causation of inequality is difficult to assess. Some argue for politics (Stiglitz, 2014), whereas others argue for the market-based economies (Muller, 2013). A new line of inquiry attributes inequality to ethno-linguistic fractionalisation reducing the interest in redistribution (Desmet, Ortuño-Ortín, & Wacziarg, 2012).

Sturm and de Haan (Sturm & de Haan, 2015) follow up on this argument and examine the relationship between capitalism and income inequality. A large sample of countries is analysed using an adjusted economic freedom (EF) index as a proxy for capitalism and Gini coefficients as a proxy for income inequality. Additionally, they analyse the relation between income inequality and fractionalisation given similar capitalist systems. For the first analysis, there is no conclusive evidence that capitalism and income inequality are linked. However, once fractionalisation is taken into account, inequality can be explained by the level of fractionalisation: the more fractionalised a society is, the less redistribution takes place and, consequently, the higher inequality remains.

In a second paper, de Haan and Sturm (De Haan & Sturm, 2017) analyse how financial development impacts income inequality. Previous research on financial development, financial liberalisation and banking crises (theoretical and empirical) has been ambiguous. TBC.

References

Akerlof, G. A. (1970). The market for “lemons”: Quality uncertainty and the market mechanism. The Quarterly Journal of Economics, 488–500.
Bös, D., & Kolmar, M. (2003). Anarchy, efficiency, and redistribution. Journal of Public Economics, 87(11), 2431–2457.
De Haan, J., & Sturm, J.-E. (2017). Finance and Income Inequality: A review and new evidence. European Journal of Political Economy, Forthcoming.
Desmet, K., Ortuño-Ortín, I., & Wacziarg, R. (2012). The Political Economy of Linguistic Cleavages. Journal of Development Economics, 97(2), 322–338.
Gini, C. (1912). Variabilità e mutabilità. In E. Pizetti & T. Salvemini (Eds.), Memorie di metodologica statistica (p. 1). Rome: Libreria Eredi Virgilio Veschi.
Mueller, D. C. (2003). Public Choice III. Cambridge, UK: Cambridge University Press.
Muller, J. Z. (2013). Capitalism and inequality: What the right and the left get wrong. Foreign Affairs, 92(2), 30–51.
Rawls, J. (2009). A theory of justice. Harvard university press.
Stiglitz, J. (2014). Inequality is not inevitable. New York Times, pp. 1–2.
Sturm, J.-E., & de Haan, J. (2015). Income Inequality, Capitalism and Ethno-Linguistic Fractionalization. American Economic Review: Papers and Proceedings, 105(5), 593–597.

ISN: Data Collection Strategies

Data collection here refers to the collection of data on an offline social network. Information about a particular community is collected. A group needs to be defined (boundaries), which may be easy (e.g. a school class or a company) or difficult (e.g. needle-sharing partners).

Complete network data

For a group with clear boundaries, such as a formal group or organisation, all information is collected, either via a roster (e.g. a class list) or via a name generator (e.g. each person lists their contacts).

Snowball-sampled network

For a population of unknown size or with unclear boundaries, a step-wise sampling technique is applied to reveal ever larger parts of the network until the sample is large enough.

Ego-centered network data

Samples of individuals and the structure of their personal relationships are collected. For instance, a person mentions their friends (ego-alter relations) and optionally the relations amongst them (or even others; alter-alter relations).

Informed consent and ethics

For any data collection, the individuals need to be informed about the goals of the study and must be able to withdraw. A participant must be aware that she/he is being studied. Furthermore, the data collected must be anonymous; this is increasingly difficult in social network analysis, as the names of people are intrinsic to the analysis. Personal data must be kept secure and separate from the results.