# CGSS: Complex Networks

Behind complex systems, there are networks that describe the interactions between their components.

Basics

A network is a set of nodes interconnected by a set of links. The adjacency matrix $A$ of a network contains a non-zero element $A_{ij}$ if there exists an edge between nodes $i$ and $j$. The resulting graph is undirected if the adjacency matrix $A$ is symmetric ($A_{ij} = A_{ji}$) and directed otherwise.
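As a minimal sketch (the example graph is made up), an adjacency matrix can be built with numpy, and undirectedness checked via symmetry:

```python
import numpy as np

# Edges of a small undirected example network:
# a triangle 0-1-2 plus a pendant node 3 attached to node 2.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # symmetric entry: the graph is undirected

# Undirected  <=>  A is symmetric (A_ij == A_ji for all i, j).
assert (A == A.T).all()
```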

A cycle exists in a graph if there is an edge that can be removed without disconnecting the graph. A graph without cycles is a tree.

A bipartite graph connects two different kinds of nodes. It can be projected onto either type of node via the matrix products $AA^T$ and $A^TA$, respectively.
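A small illustrative sketch of both projections (the people/groups interpretation is an assumption for the example):

```python
import numpy as np

# Bipartite matrix A: rows = type-1 nodes (say, people), columns = type-2
# nodes (say, groups). A[i, g] = 1 if person i belongs to group g.
A = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
])

# Projection onto the people: (A A^T)[i, j] counts groups shared by i and j.
people = A @ A.T
# Projection onto the groups: (A^T A)[g, h] counts people shared by g and h.
groups = A.T @ A
```

The diagonals of the projections count each node's memberships, while off-diagonal entries count shared neighbours in the other node type.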

Centrality Measures

To assess a network, the centrality of each node needs to be analysed. Several measures are available:

• Degree: The number of incident edges (incoming edges in a directed graph) is easy to measure, but does not capture centrality over a larger neighbourhood of nodes.
• Eigenvector: The importance $x_i$ of a vertex $i$ is defined recursively via its neighbours' importance, $x_i \propto \sum_k A_{ik}x_k$, i.e. $x$ is the leading eigenvector of $A$.
• Closeness: A geodesic path connects two vertices with the lowest number of edges. The length of a geodesic path between $i$ and $j$, measured in edges, is denoted $d_{ij}$. Closeness can then be expressed as the harmonic mean of the distances between $i$ and all other nodes: $C_i = \frac{1}{n-1}\sum_{j(\neq i)}\frac{1}{d_{ij}}$.
• Betweenness: The number of geodesic paths between $s$ and $t$ is denoted $\sigma_{st}$, and the number of those paths passing through node $v_i$ is denoted $\sigma_{st}(v_i)$. Betweenness is then defined as $C(v_i) = \sum_{s,t} \frac{\sigma_{st}(v_i)}{\sigma_{st}}$.
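Three of these measures can be computed directly from the adjacency matrix; a minimal sketch in plain numpy (the example graph is made up, and eigenvector centrality is obtained by power iteration):

```python
import numpy as np
from collections import deque

# Small undirected example graph: triangle 0-1-2 plus a chain 2-3-4.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
n = len(A)

# Degree centrality: number of edges incident to each node.
degree = A.sum(axis=1)

# Eigenvector centrality via power iteration: at convergence x ∝ A x,
# so x is the leading eigenvector of A.
x = np.ones(n)
for _ in range(200):
    x = A @ x
    x /= np.linalg.norm(x)

# Geodesic distances d_ij via breadth-first search from a source node.
def bfs_distances(src):
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in range(n):
            if A[u, v] and v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

# Harmonic closeness: C_i = 1/(n-1) * sum_{j != i} 1/d_ij.
closeness = []
for i in range(n):
    d = bfs_distances(i)
    closeness.append(sum(1.0 / d[j] for j in d if j != i) / (n - 1))
```

In this example node 2 scores highest on all three measures, since it sits in the triangle and bridges to the chain.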

For a power-law degree distribution $P(k=i) \propto i^{-\gamma}$, the first two moments are $\langle k \rangle = \sum_i P(k=i)\, i = \sum_{i=1}^\infty i^{-\gamma}\, i$ and $\langle k^2 \rangle = \sum_i P(k=i)\, i^2 = \sum_{i=1}^\infty i^{-\gamma}\, i^2$.

# CGSS: Introduction to Mechanism Design

Game Theory outlines the problem of the free-rider dilemma in public goods. In order to overcome the tragedy of the commons, mechanism design was proposed. The basic idea is to define the payoffs and actions in order to drive people towards a preferred behaviour.

Public Goods Game

From the mechanism design perspective, two individuals $k \in \{i, j\}$ receive a private benefit $b_k$ from a public project with cost $c \in \mathbb{R}_+$. A decision is denoted $d \in \{0, 1\}$, where 0 is the decision not to invest and 1 is the decision to invest. Each individual $k$ pays a tax $t_k$. The outcome is determined from the individuals' messages: $M = M(m_i, m_j) = (d, t_i, t_j)$.

Based on this setup a good mechanism would at least consist of:

• Individual Rationality: individuals weakly prefer to participate
• Efficiency: maximizing the sum of utilities of all participating individuals
• Balanced: the taxes cover the costs
• Simplicity: easy to understand and practical (not well defined)

Additional properties could be defined, but are left for more in-depth studies.

Simple Public Goods Mechanism

Based on the work of Jackson & Moulin (1992), a two-stage mechanism can be defined. In the first stage, each individual $k \in \{i, j\}$ submits a bid $v_k$, an estimate of the joint benefit of the project. If the largest bid satisfies $v^* > c$, the mechanism proceeds to the second stage, in which each individual $k \in \{i, j\}$ submits a bid $\beta_k$ indicating their private benefit. If the cumulative private benefit $\sum_k \beta_k$ is larger than the threshold $v^*$, the project goes ahead; if it is smaller, the project does not go ahead; equality leaves the decision undecided.
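The decision rule of the two stages can be sketched as follows (illustrative only; the tax and cost-sharing details of Jackson & Moulin's full mechanism are omitted, and the function name is made up):

```python
# Sketch of the two-stage public goods decision rule described above.
def public_goods_decision(stage1_bids, stage2_bids, cost):
    """Return True (build), False (don't build), or None (undecided)."""
    v_star = max(stage1_bids)      # largest first-stage estimate v*
    if v_star <= cost:
        return False               # nobody estimates benefits above cost
    total = sum(stage2_bids)       # cumulative reported private benefit
    if total > v_star:
        return True
    if total < v_star:
        return False
    return None                    # exact tie: decision left undecided

# Example: estimates 8 and 10 with cost 6; reported benefits 7 and 5,
# so the cumulative benefit 12 exceeds v* = 10 and the project goes ahead.
assert public_goods_decision([8, 10], [7, 5], cost=6) is True
```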

The mechanism has the properties of being individually rational, efficient, balanced, and simple; it is implementable in dominant strategies and reveals true preferences.

Proof of optimality TBA.

# CGSS: Introduction to Game Theory

A fundamental problem is over-usage of shared resources. Usually, nobody wants over-usage to occur. However, on an individual level companies want to maximise their profit while neglecting the social cost. The problem is also known as the Tragedy of the Commons, which is based on the free-rider dilemma.

A game is defined by three components: players, actions, and payoffs. Solution concepts are usually in the form of a Nash equilibrium.

“Unless the number of individuals (micro) in a group is quite small, or unless there is coercion or some other special device to make individuals act in their common interest, rational, self-interested individuals will not act to achieve their common or group interest.” – Mancur Olson (1965)

Players $N = \{1,2,…,n\}$ are a discrete set or a continuum population of individuals.

Actions $S = S_1 \times … \times S_n \subset \mathbb{R}^+$ form a compact and bounded set, to fulfil Nash’s proof, but some variants are possible that still yield a fixed-point result.

Payoffs $\Pi_i : S \mapsto \mathbb{R} ~ \forall i \in N$ map each strategy profile to a real number.

Let $\Gamma = \{ N, S, \Pi \}$ be a game with $N = \{1,2,…,n\}$ players. Each $i \in N$ has a strategy set $S_i$ where $S = S_1 \times… \times S_n$ is the set of all possible strategy profiles.

Let $x_i \in S_i$ be a strategy for $i$ and $x_{-i} \in S_{-i}$ be a strategy profile for all players except $i$. When each $i \in N$ chooses strategy $x_i$, resulting in $x = (x_1, …, x_n)$, then $i$ obtains payoff $\Pi_i(x)$.

A strategy profile $x^* \in S$ is a Nash Equilibrium (NE) if no unilateral deviation in strategy by any single player is profitable for that player, that is, $\Pi_i(x_i^*, x_{-i}^*) \geq \Pi_i(x_i,x_{-i}^*) ~ \forall x_i \in S_i$ and $i \in N$.

Prisoner’s Dilemma

In the prisoner’s dilemma each of the two players has two actions: to cooperate (C) or to defect (D). The payoffs are defined as CC = (3,3), CD = (1,4), DC = (4,1), and DD = (2,2). If the players start with CC, either player can gain an advantage by defecting. Once one player defects, the other has no choice but to defect as well to optimise their payoff. The Nash equilibrium DD is then reached, where neither player can improve their payoff by switching strategy on their own. However, if both were to switch at once, they could obtain a better result. Since no trust exists between the players, this outcome is not possible without breaking the rationality assumption.
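The equilibrium can be verified by enumerating all unilateral deviations (a minimal sketch; the encoding C = 0, D = 1 is an arbitrary choice for the example):

```python
# Prisoner's dilemma payoffs (row player, column player); C = 0, D = 1.
payoff = {
    (0, 0): (3, 3),  # CC
    (0, 1): (1, 4),  # CD
    (1, 0): (4, 1),  # DC
    (1, 1): (2, 2),  # DD
}

def is_nash(profile):
    """A profile is a NE if no player strictly gains by deviating alone."""
    a, b = profile
    if payoff[(1 - a, b)][0] > payoff[(a, b)][0]:
        return False  # row player would deviate
    if payoff[(a, 1 - b)][1] > payoff[(a, b)][1]:
        return False  # column player would deviate
    return True

equilibria = [p for p in payoff if is_nash(p)]
# Only mutual defection (D, D) survives, even though (C, C) pays both more.
```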

Public Goods Game

The public goods game is the N-person generalisation of the prisoner’s dilemma. The players are a finite population of $N$ individuals. Each $i$ is endowed with a budget $B$, common to all players. The action consists of each player $i$ choosing some amount $a_i \in \mathbb{R}^+_0$ to invest in a shared investment account. The collected investments are $a = (a_i)_{i\in N}$. The payoff is $\phi_i(a_i,a_{-i}) = B - a_i + r \cdot \sum_{j\in N} a_j$. For $r < 1$ the optimal decision (i.e. the Nash equilibrium) is not to invest, $a_i = 0 ~ \forall i \in N$, which leaves each player with only the endowment $\phi_i = B$. Even though there would be a positive outcome if everybody invested, free-riding would allow some to make larger gains. A rational individual would therefore not invest.
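The free-rider incentive follows directly from the payoff formula, since $\partial \phi_i / \partial a_i = r - 1 < 0$ for $r < 1$. A short sketch with made-up parameter values ($B = 10$, $r = 0.5$, $N = 4$):

```python
# Payoff phi_i = B - a_i + r * sum_j a_j for the public goods game.
# B and r are illustrative values, not from the lecture.
def payoff(i, a, B=10.0, r=0.5):
    return B - a[i] + r * sum(a)

everyone_invests = [10.0, 10.0, 10.0, 10.0]
nobody_invests = [0.0, 0.0, 0.0, 0.0]
free_rider = [0.0, 10.0, 10.0, 10.0]  # player 0 withholds their budget

assert payoff(0, nobody_invests) == 10.0    # NE payoff: just the endowment B
assert payoff(0, everyone_invests) == 20.0  # 10 - 10 + 0.5 * 40
assert payoff(0, free_rider) == 25.0        # 10 -  0 + 0.5 * 30: free-riding pays most
```

Full cooperation beats the equilibrium (20 vs. 10), but free-riding beats cooperation (25 vs. 20), so rational players defect to the zero-investment equilibrium.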

Side note:

The lecture on game theory in QPAM covers the same topic; it is therefore not covered in detail here.

# CGSS: Introduction

Complexity and Global Systems Science (CGSS) will cover game theory and mechanism design, complex networks, socio-physics, and critical-thinking essays on these topics.

Complexity science is concerned with systems that are made up of thousands of interacting units, whereas global systems science describes systems of global scale. Systemic instabilities are of major interest and need to be understood.

Collateral costs are hard to track but can be associated with financial crises, conflicts, terrorism, crime and corruption, epidemics, and cybercrime. Reducing collateral costs provides a new opportunity to tackle each problem. However, this requires an understanding of complex systems.

In a complex system a large number of interacting system elements follow non-linear interdependencies. They are dynamic and probabilistic and therefore elude easy descriptions.

It is necessary to distinguish between complicated and complex systems. On the one hand, a car is a complicated system made up of thousands of parts. However, each part performs a specific task and can (mostly) be understood. On the other hand, car traffic is complex, and predicting traffic jams is fiendishly hard (e.g. the phenomenon of phantom traffic jams).

Complex systems often exhibit self-organisation (e.g. pedestrians forming lanes to walk in different directions); however, self-organisation is no guarantee of an efficient solution (e.g. the Love Parade disaster).

The predictability of complex systems is limited. The dynamics of such systems are usually highly sensitive, so small differences in the initial setup can cause largely different results (e.g. the butterfly effect in weather forecasting).

Control over a complex system is an illusion due to irreducible randomness and delays in consequences, together with regime shifts (i.e. a change only becomes (catastrophically) visible once a threshold is crossed). Goodhart’s law and Le Chatelier’s principle state that a system tends to counteract external control attempts.

Unstable supply chains and phantom traffic jams are caused by delays in the system that are then amplified and sustained without any possibility of stopping them. However, they can be modelled. Such models often show oscillations that propagate and make it difficult to reach a specific state. Tragedies of the Commons are another classical case where oscillations eventually cause a breakdown.

Strongly coupled systems behave differently: they show faster dynamics, extreme events, self-organisation, emergent system behaviour, and low predictability.

Cascade effects in networks, together with probabilistic events and delays, make causal analysis difficult. A blackout is such an event, where the failure of a single node can stop the whole system. Whereas more connectivity allows for quicker results in positive ways, it also allows for quicker spreading of negative results. In addition to a catastrophic event, secondary and tertiary disasters may follow. A causality network can be modelled to identify the n-ary disasters following from a specific disaster. If an effective decoupling strategy could be set up, the catastrophic spread could be interrupted.

Decentralisation seems to be a useful tool in reducing inherent risk.

Big Data is a double-edged sword. The more data you have, the more patterns you find. However, those patterns are mere correlations and do not represent causation; simply sifting through data therefore does not reveal causation. The idea was to create AIs that are able to detect patterns and find causation. However, AIs are themselves driven by data and can therefore be manipulated by data (chat bots learn racism from users on the internet; police machine-learning systems discriminate against Black and Hispanic people in the US).