IAP: Introduction

The internet is a global-scale, technically complex artefact of immense international social and political importance. It is formed by the interaction of technical constraints (e.g. speed of light, number of addresses), usage models and behaviour, technological design choices and policy decisions.

This course focuses on the Internet; other networks (mobile networks, local networks, etc.) are only marginally mentioned, even though they are converging. Applications of the Internet such as the Web, social networks and other online services are not covered.

Networking History

One of the earliest networks, a mechanical semaphore telegraph, was developed by Claude Chappe (1763-1805). In 1837 the electrical telegraph allowed transmission of Morse code. In 1866 the transatlantic telegraph cable connected London and New York, at a price of $100 for 20 words. In 1863 Reis developed telephony. In 1895 the first wireless communication was demonstrated, and in 1906 radio was broadly introduced. Television was first broadcast in 1928.

The Internet is based on packet switching, which was first described by Kleinrock in 1961 (Kleinrock, 1961). In 1964 Baran proposed military networks using packet switching (Baran, 1964). In 1967 the ARPAnet was conceived; it was installed in 1969. By 1972 the ARPAnet had 15 nodes. In 1973 Metcalfe proposed Ethernet (Metcalfe & Boggs, 1976). Vinton G. Cerf and Robert E. Kahn's internetworking principles were developed in 1974 (Cerf & Kahn, 1974). By 1979 the ARPAnet had 200 nodes.

The Internet was commercialised in the 1990s and the ARPAnet was decommissioned. From 1991 the NSFnet allowed commercial use; it was itself shut down in 1995, by which time the World Wide Web (WWW) had taken off. In the 2000s the dotcom bubble showed for the first time the impact potential of the Internet on the real world.

Internet Basics

The Internet carries packets. Packets have headers that describe them and a payload that contains their contents. Officially, Internet routers only care about the headers. The standard analogy is mailing a letter: the payload is the letter inside the envelope and the headers are the address written on the envelope. This differs from the telephone network, where traffic is analysed to optimise transmission (fax versus voice call).

IP addresses have 32 bits and can therefore connect approximately 4 billion devices. IP addresses have become a scarce resource. The questions arise: who allocates addresses, who can be reached globally, and should a new protocol be adopted? IP version 6 has been proposed as the solution.
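The arithmetic behind these numbers is easy to check. A quick sketch using Python's standard `ipaddress` module (the address used comes from the 192.0.2.0/24 documentation range; this is an illustration, not part of the lecture):

```python
import ipaddress

# A 32-bit address space yields roughly 4.3 billion distinct addresses.
print(2 ** 32)  # 4294967296

# An IPv4 address is just a 32-bit integer in dotted-decimal notation.
addr = ipaddress.IPv4Address("192.0.2.1")
print(int(addr))                          # 3221225985
print(ipaddress.IPv4Address(3221225985))  # 192.0.2.1

# IPv6 widens addresses to 128 bits.
print(ipaddress.IPv6Address("2001:db8::1").max_prefixlen)  # 128
```

The same module parses IPv6 addresses, whose 128 bits remove the scarcity problem for any foreseeable number of devices.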


I think the IETF hit the right balance with the 128 bits thing. We can fit MAC addresses in a /64 subnet, and the nanobots will only be able to devour half the planet.


A protocol defines a set of messages that are sent between end-points, what these messages mean, and what end-points should do with them. The Internet protocol stack consists of five layers: physical, link, network, transport and application. Throughout this course we will focus on the transport and network layers.

The data sent in a message gets an additional header for each layer it traverses. The Internet has IP at its core and does not change it (the “narrow waist” model). The layers above (transport, application) or below (link, physical) can be changed arbitrarily. Side note: in Germany carrier pigeons were successfully used to transmit messages. The rigidity of IP is claimed to be the reason for the success of the Internet. In reality there are many more layers: real-world packets with 12 and more layers have been observed, in which the IP layer repeats multiple times. HTTP has become the main protocol and other protocols are often blocked; consequently, much traffic that is not actually text (e.g. video) is sent over it.
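Per-layer header wrapping can be sketched as nested encapsulation; the header markers below (`ETH|`, `IP|`, `TCP|`, `HTTP|`) are made-up placeholders, not real protocol formats:

```python
# Toy encapsulation: each layer prepends its own header to the payload.
# The header markers are made-up placeholders, not real protocol formats.
def encapsulate(payload: bytes) -> bytes:
    for header in [b"HTTP|", b"TCP|", b"IP|", b"ETH|"]:  # application -> link
        payload = header + payload
    return payload

def decapsulate(packet: bytes) -> bytes:
    # The receiver strips the headers again, layer by layer.
    for header in [b"ETH|", b"IP|", b"TCP|", b"HTTP|"]:
        assert packet.startswith(header)
        packet = packet[len(header):]
    return packet

packet = encapsulate(b"hello")
print(packet)               # b'ETH|IP|TCP|HTTP|hello'
print(decapsulate(packet))  # b'hello'
```

A router sitting in the middle would only read the outer headers, never the innermost payload, matching the envelope analogy above.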

The Internet consists of many autonomous systems (e.g. Internet Service Providers (ISPs)) that communicate via the Border Gateway Protocol (BGP). Each system advertises where it can deliver messages to; however, they need not be truthful. Incidents include advertising optimal routes to everywhere in order to attract all traffic (including to special regions). Another abuse is to advertise a cheap route but never deliver the packets. It is not clear how to resolve such misuse of the system.

The Internet was designed insulated from commercial and political pressures, but reality has changed. The idea of the Internet and its real-world use have diverged. The course focuses on the tension between technology, policy, commerce and politics.


Baran, P. (1964). On distributed communications networks. IEEE Transactions on Communications Systems, 12(1), 1–9.
Cerf, V., & Kahn, R. (1974). A protocol for packet network intercommunication. IEEE Transactions on Communications, 22, 627–641.
Kleinrock, L. (1961). Information flow in large communication nets. RLE Quarterly Progress Report, 1.
Metcalfe, R. M., & Boggs, D. R. (1976). Ethernet: Distributed packet switching for local computer networks. Communications of the ACM, 19(7), 395–404.

CSD: Introduction

The course “Cognition in Studio Design – analytic tools for evidence-based design” will discuss readings of space syntax (Bafna, 2003), navigation issues (Carlson, Hölscher, Shipley, & Dalton, 2010) as well as functions and applications of spatial cognition (Montello & Raubal, 2013).

To compute space syntax measures, DepthmapX will be used.


Bafna, S. (2003). Space syntax: A brief introduction to its logic and analytical techniques. Environment and Behavior, 35(1), 17–59.
Carlson, L. A., Hölscher, C., Shipley, T. F., & Dalton, R. C. (2010). Getting lost in buildings. Current Directions in Psychological Science, 19(5), 284–289.
Montello, D. R., & Raubal, M. (2013). Functions and applications of spatial cognition. In Handbook of Spatial Cognition (pp. 249–264). American Psychological Association (APA).

SMADSC: Introduction

Complex systems are the core topic of Social Modelling, Agent-Based Simulation, and Complexity. Complex systems usually emerge as an artefact of interaction. The output of a complex system often follows a power law and may exhibit regime or phase changes, known as tipping points. Emergent properties and scale-free organisation are typical features of complex systems. A complex system could be analysed top-down, but is best studied bottom-up.

In general, a social system is analysed by creating a mental model of it, deriving hypotheses regarding the endogenous and exogenous forces that drive it, and finally instantiating an agent-based model (ABM) in code that is simulated in silico.

Recommended reading for the week is Chapter 9 in Complex adaptive systems: An introduction to computational models of social life (Miller & Page, 2009) and Chapter 8 in Introduction to computational social science: principles and applications (Cioffi-Revilla, 2013).

Agent-based Models (ABM)

An ABM is usually an object-oriented software system that instantiates a model of living systems or social entities. Agent-based models go beyond numerical analysis; instead, they observe emergent behaviour. Broad paradigms that influence ABMs are cellular automata, big data, social networks, and generative models. The key concepts are emergence and bottom-up computation: micro-level rules lead to macro-level behaviours. There are two dominant characteristics of ABMs:

  1.  A positive representation attempts to closely recreate or capture the abstract or detailed essence  of a prototype system.
  2. A normative representation provides input control for exogenous steering of internal feedback loops.

Generative ABMs are useful in three general cases:

  1. Modelling historical systems, that cannot be revisited
  2. Long-lived systems, that span a longer time than can be observed
  3. Unethical, illegal, unsafe or unlikely environmental  settings or exogenous  stimuli to the system

The Game of Life (Conway, 1970)

A game with two states {dead, alive} and the rules:

Each cell checks its own Life State and those of the cells in its local neighbourhood at a Moore distance of 1. If alive, it displays a pixel; if dead, it does not. If the cell has fewer than two or more than three live neighbours, its Life State is set to dead. If it has exactly three live neighbours, its Life State is set to alive. Randomised activation of cells continues “forever.”

It uses the concepts of cellular automata, Moore or von Neumann distance, and distance-based neighbourhoods.
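The rules above translate almost directly into code. A minimal sketch of the Game of Life on a wrapping grid, assuming synchronous updates of all cells (rather than randomised activation) for simplicity:

```python
def step(alive, width, height):
    """One synchronous Game of Life update on a wrapping grid.
    `alive` is the set of (x, y) cells that are currently alive."""
    new = set()
    for x in range(width):
        for y in range(height):
            # Count live cells at Moore distance 1 (8 neighbours, wrapping).
            n = sum(((x + dx) % width, (y + dy) % height) in alive
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
            # Exactly three live neighbours: the cell becomes (or stays) alive.
            # Two live neighbours: a live cell survives; otherwise it dies.
            if n == 3 or (n == 2 and (x, y) in alive):
                new.add((x, y))
    return new

# Emergence in miniature: a glider reappears shifted by (1, 1) every 4 steps.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state, 8, 8)
print(state == {(x + 1, y + 1) for x, y in glider})  # True
```

The macro-level behaviour (a moving glider) is nowhere stated in the micro-level rules, which is exactly the bottom-up point made above.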

Other famous ABMs are Flocking (Reynolds, 1987), Swarming (Bonabeau & Meyer, 2001), Residential Segregation (Schelling, 1969), and Residential Segregation using vector-based GIS (Crooks, 2010).


Bonabeau, E., & Meyer, C. (2001). Swarm intelligence. Harvard Business Review, 79(5), 106–114.
Cioffi-Revilla, C. (2013). Introduction to computational social science: principles and applications. Springer Science & Business Media.
Conway, J. (1970). The game of life. Scientific American, 223(4), 4.
Crooks, A. T. (2010). Constructing and implementing an agent-based model of residential segregation through vector GIS. International Journal of Geographical Information Science, 24(5), 661–675.
Miller, J. H., & Page, S. E. (2009). Complex adaptive systems: An introduction to computational models of social life. Princeton University Press.
Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. ACM SIGGRAPH Computer Graphics, 21(4), 25–34.
Schelling, T. C. (1969). Models of segregation. The American Economic Review, 59(2), 488–493.

ISN: What are Social Networks?

Social networks are based on relations between two or a few individuals, ranging from friendships over contracts to work contacts.

Throughout the course, the theory behind social networks will be put into context with methods of comparing and applying social networks. Examples from different scientific disciplines will be used to illustrate the social networks.

Network descriptives

Mathematical descriptions of networks are useful descriptives. An adjacency matrix can be used to represent a graph of nodes and edges.

Networks can be analysed on different levels:

  • Dyad level ([latex]O(n^2)[/latex]) or connections between nodes
  • Node level ([latex]O(n)[/latex]) or properties of nodes
  • Network level ([latex]O(1)[/latex]) or clustering of nodes.

Centrality can capture access to resources, connections between parts, or participation in interactions. For a detailed report on centrality measures, look at this post in my Complexity and Global Systems Sciences lecture notes. Centrality measures often differ, and in larger networks different measures will rank nodes differently. The choice of centrality measure depends on the research question.


Generally, for any network, one should start with the following descriptives, before continuing to more advanced analysis.

  1. Start with a visualisation of a network.
  2. Compute the density of the network (number of edges divided by the maximal number of edges; note that the maximum differs for directed ([latex]e_{max} = n(n-1)[/latex]) and undirected ([latex]e_{max} = n(n-1)/2[/latex]) graphs).
  3. Measure centrality in social networks.
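These descriptives take only a few lines to compute. A sketch on a made-up undirected toy graph, using the density formula above and degree as the centrality measure:

```python
# Toy undirected graph as an edge list (made-up data for illustration).
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
nodes = sorted({v for edge in edges for v in edge})
n, e = len(nodes), len(edges)

# Density: edges divided by the undirected maximum n(n-1)/2.
density = e / (n * (n - 1) / 2)
print(round(density, 3))  # 0.667

# Degree centrality: neighbour count, normalised by n - 1.
degree = {v: sum(v in edge for edge in edges) for v in nodes}
centrality = {v: d / (n - 1) for v, d in degree.items()}
print(degree)  # {'A': 2, 'B': 2, 'C': 3, 'D': 1}
print(centrality["C"])  # 1.0 -- C is the most central node
```

For a directed graph the density denominator would be [latex]n(n-1)[/latex] instead, as noted above.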

PE: Institutions and Economic principles

The main references for today are Mueller's Public Choice III, Chapters 1 and 2 (Mueller, 2003), as well as Acemoglu's Political Economy Lecture Notes, Chapter 1 (Acemoglu, 2009). Additional readings are Acemoglu's Chapter 2 and work by Ostrom (Ostrom, 1998) and Schnellenbach (Schnellenbach & Schubert, 2015).

Political Economy joins the fields of Political Science and Economics. To illustrate the case, consider the Trump administration: politics affects the economy, and the choice of policy areas benefits some sectors over others. So far, the stock markets have reacted positively to the Trump administration in expectation of reduced “red tape”. But upon closer inspection specific sectors benefit whereas others linger or decline. Political decisions will influence which sectors flourish.


As always, the term is defined in widely different ways. We will rely on the definition of institutions as the “rules of the game” provided by Acemoglu (p. 5ff). This includes political institutions (constitution, electoral rules, separation of powers, checks and balances, etc.) and economic institutions (property rights, commercial law, contract law, etc.). Acemoglu also distinguishes between formal (de jure) and informal (de facto) institutions.

Four different views on institutions compete according to Acemoglu (in no particular order):

  1. Efficient institutions view: institutions maximise total surplus, with compensation via the Coase theorem; institutional differences should then have no impact on output (not supported empirically), and the view is troubled by commitment problems (imperfect contracts about compensation).
  2. Social conflict view: institutions are chosen by those holding political power (rent maximisation).
  3. Ideology/belief view: different views on what is best for society.
  4. Incidental institutions view: By-product of other social interactions (taxation implies representation and ultimately leads to parliamentary representation).

These views on their own are often not fully explanatory. Inefficient institutions can be explained by hold-up problems (future elites cannot credibly commit to compensate current elites), political losers (parliaments usually cannot shrink themselves) or economic losers (reforms benefit some more than others, and those losing out may hold political power). Consequences of these limitations are that 1) constraints on political power and a broad distribution of political power make secure property rights more likely, 2) stable economic institutions are more likely if rents are limited, and 3) institutional reforms are more likely to succeed if they do not threaten incumbents.

Economic Principles

Homo oeconomicus assumes rationality and utility maximisation, but is undermined by findings in Behavioural Economics (Schnellenbach & Schubert, 2015).

Trade and markets are usually a good way to organise economic activity.

Perfect competition/perfect markets assume well-defined property rights, a large number of buyers and sellers, perfect information, homogeneous products, no barriers to entry and exit, all participants being price-takers (no market power), participants being homines oeconomici (and firms profit-maximising), no externalities, no transaction costs, and constant returns to scale. Under these assumptions the market produces a Pareto-optimal allocation.

Pareto-efficiency/optimality implies that no one can be made better off without making anyone else worse off.

Kaldor-Hicks optimality is a relaxation of Pareto-optimality in which the benefiting side could hypothetically compensate the worse-off side.
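A tiny numeric check of the two criteria, with made-up utility changes for two individuals:

```python
# Utility changes from a hypothetical policy (made-up numbers).
gains = {"A": +10, "B": -4}

# Pareto improvement: nobody may be made worse off.
pareto_improvement = all(g >= 0 for g in gains.values())
print(pareto_improvement)  # False -- B loses 4

# Kaldor-Hicks improvement: winners could hypothetically compensate losers.
kaldor_hicks = sum(gains.values()) > 0
print(kaldor_hicks)  # True -- A could pay B 4 and still keep 6
```

The policy fails the Pareto test but passes the Kaldor-Hicks test, which is exactly the sense in which the latter is a relaxation of the former.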

Market failures occur with public goods, externalities, monopolies, unequal distribution of resources, etc.

Types of Goods: [table id=7 /]

The free-rider problem arises with public goods: an individual does not have to contribute to receive the public good, and consequently the public good is underprovided.

Externalities are unintended impacts on another individual or firm: the social cost is not covered by the private cost. Pareto-optimality can be restored by a Pigouvian tax or Coase bargaining.

The Coase theorem states that if an externality is tradable and transaction costs are sufficiently low, bargaining between the involved parties will lead to a Pareto-optimal solution regardless of the initial allocation of property. However, difficulties in assigning property rights and large numbers of involved individuals undermine the Coase theorem.

The Tragedy of the Commons is that individual, rational self-interest runs contrary to the common good.

The n-person social dilemma (Ostrom, 1998) stipulates that the more persons cooperate, the higher the benefit; but for a given number of cooperating players, a single defecting player benefits more. This induces non-cooperation, which lowers the overall achievable benefit. To overcome non-cooperation, social conventions can be applied, ranging from adaptive learning over reputation to social sanctions (in contrast to homo oeconomicus).
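The dilemma can be written down as a simple payoff function; the linear form and parameter values below are illustrative assumptions, not Ostrom's formulation:

```python
# n-person social dilemma (illustrative linear payoffs): every player
# receives benefit b per cooperator; each cooperator pays cost c.
# The dilemma requires b * n > c > b.
def payoff(cooperates, k, b=0.4, c=1.0):
    """Payoff of one player when k players cooperate in total."""
    return b * k - (c if cooperates else 0)

n = 10
# For any number of cooperators, a defector does better than a cooperator...
for k in range(1, n):
    assert payoff(False, k) > payoff(True, k)
# ...yet full cooperation beats full defection for everyone.
print(payoff(True, n), payoff(False, 0))  # 3.0 0.0
```

Defection dominates individually while mutual cooperation dominates collectively, which is the tension the social conventions above are meant to resolve.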

In a Game of Chicken, mutual cooperation would be a good outcome and each individual does better by unilaterally defecting, but mutual defection is the worst state for all.

[table id=8 /]

Conceptions of the state

States are often justified normatively: from survival (better than the state of natural anarchy) over efficiency (e.g. provision of public goods) to equity (e.g. social fairness).

According to Acemoglu (Acemoglu, 2009) the state is often conceptualised as:

  1. State without agency: no interests of its own, rectifies market failures
  2. State as nexus of cooperation: Hobbesian/Rousseau’s view of the state (as compared to anarchy)
  3. State as agent of a social group: Capitalists, financial sector, ethnic group, men, etc.
  4. State as grabbing hand: Members of the state look after their own interest
  5. State as autonomous bureaucracy: represents interests beyond those of its members


Acemoglu, D. (2009). Political Economy Lecture Notes. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=
Mueller, D. C. (2003). Public Choice III (3rd ed.). Cambridge, UK: Cambridge University Press.
Ostrom, E. (1998). A Behavioral Approach to the Rational Choice Theory of Collective Action. American Political Science Review, 92(1), 1–22.
Schnellenbach, J., & Schubert, C. (2015). Behavioral Political Economy: A Survey. European Journal of Political Economy, 40, 395–417.

Urban Design I: Tools

Throughout the course Urban Design I several “tools” were introduced that impact urbanity.


Tools of this kind belong to top-down approaches and usually give form to the urbanscape in a radical way.

Megascale-planing (Berlin)

Berlin was an early example of a politically motivated re-organisation of administrative units. Berlin grew from nearly 2 million to 4 million people due to the administrative rearrangement. Infrastructure was created to join the adjacent cities and towns.

Horizontal-vertical grid (New York)

To tackle shanty towns and hygiene problems, New York proposed a grid layout for the city in 1811. The grid was superimposed over the old city, and only main roads like Broadway give a glimpse of previous layouts. The grid structure was complemented in the early 20th century by vertical zoning laws that created the concept of high-rises with private plazas and required residential areas in the buildings. To compensate for the high density, Central Park was created as a contrasting void.


This set of tools focuses on cities with destroyed urban fabric and potential ways of reconstructing the fabric.

Critical Reconstruction (Berlin)

The “Planwerk Innenstadt Berlin” was a combined effort to fill the holes left in the city by the division and the war. The main idea was to rediscover the historic character of the city and modernise it.

De-urbanisation (Sarajevo)

The urban fabric of the city was not only damaged by the on-going war within the city, but also intentionally destroyed to remove signs of urban co-existence of different ethnic groups. The process has been called urbicide and is intrinsically connected to ethnic cleansing. The use of urban space was shifted and attained new meaning. Open spaces became dangerous due to the constant sniper fire and new spaces had to be acquired. The cold winters forced people to cut all trees for firewood. The city’s transformation was consequently two-fold: enforced by destruction and new uses of the remainders.

Shrinking City (Detroit)

In a shrinking city the core loses its role and the periphery becomes dominant, often accompanied by generating suburbia. The decreasing role reduces the services provided by the city and requires a drastic rearrangement of budgeting. It is an often-ignored reality that is only confronted when all potential alternatives have failed; see the bankruptcy of Detroit.

Micro/Temporary programmes

This set of tools is limited in either time or space. It focuses on action-driven approaches where either events in the near past triggered the programme or the programme is an answer to an issue of missing urban functionality.

Temporary Urbanism (Berlin)

The empty/negative spaces of Berlin offer room for temporary and spontaneous use, and temporary urbanism arises as a consequence. An example is the “Kitchen Monument”, a mobile kitchen that is temporarily installed in empty spaces throughout Berlin.

Turbo Urbanism (Sarajevo)

The negative spaces created by the war and the urbicide created the need for many urban functions that were not being fulfilled. New architectural and urban interventions materialised and transformed economic identities, accompanied by gentrification.

User-generated Urbanism (Athens)

Small scale, user-generated, architectural solutions to urban problems such as self-managed parks, occupation/squatting movements and alternative economy networks. They include new programs for meetings and open assemblies, and new models of production, such as the formation of urban plantations.

Cooperation and Dialogue (Cape Town)

Following apartheid segregation and the redistributive attempts of the 1990s, a new paradigm was introduced in the early 2000s. To contain the sprawl, amenity requirements were introduced (access to public infrastructure, social and economic facilities). Instead of attempting total redistribution, intermediate solutions, such as upgrading the infrastructure of informal settlements rather than completely rebuilding them, were added to the policy portfolio. Local needs were examined and localised solutions actively sought.

Street Renaissance (New York)

The lack of funding in the Department of Transportation forced New York to become creative in adapting to new urban realities. Street paint was used to reduce space for cars, broaden sidewalks and introduce bicycle lanes. The reclaimed space was then occupied by pedestrians, restaurants and street furniture. Urban re-engineering enabled a fluid transformation of New York.

Microplanning (São Paulo)

Transforming unused micro-spaces into highly functional pieces of a city. For instance, the Garrido Boxing Gym is situated in the unused space below an elevated highway. It exploits the urban morphology and can be considered a micro-intervention that enables local residents to participate in sports. Benches, skate parks, mini lawns and planters can all be considered micro-interventions improving unused public space.

Active Infill (Detroit)

A reaction to empty, decaying space in a city. It can be tackled either top-down with infrastructure restructuring or bottom-up through community-driven projects. Infrastructure restructuring includes condensing services (thereby dis-servicing certain areas and effectively shrinking the city) and offering incentives in “condensing areas”. Community projects make use of the empty space and give new meaning to the local urban fabric.

Informal/Hybrid City

These tools focus on actors at the border of formal and informal and highlight how the two interact and can even be combined.

Reactivating the city (Sarajevo)

Political paralysis has caused neglect and destruction due to disagreement over how to proceed. This opened up the city as a new urban frontier. The Historical Museum of Bosnia and Herzegovina epitomises the phenomenon: to overcome decay caused by budget cuts, supporters of the museum suggested stretching transparent vinyl over scaffolding to stop water from causing further damage. The museum is tasked with cultural preservation and offers a venue for society to deal with its traumatic past.

Hybrid City (Caracas)

An interplay of formal and informal settlements characterises housing settlements such as “23 de Enero”. The formal housing structures have been “improved” with informal settlements around them to optimise the use of space and to accommodate social and economic functions (such as shops and restaurants) required by the inhabitants.

Repurposing infrastructure (New York)

Rail viaducts were a common necessity in the early 20th century. In the 1950s trucks displaced trains and robbed the elevated tracks of their purpose. The High Line showcases the repurposing of infrastructure. The viaduct became a linear park that offers a green space through which people can move around the city.

Public Infrastructure/Mobility

These tools demonstrate the interconnectedness between mobility and urbanity and highlight the interaction (both negative and positive).

Oil and Automobile City (Caracas)

A car-centric approach to public infrastructure focuses on freeways and elevated highways that partition the city and thereby segregate it.

Multiple Hubs (Caracas)

The inaccessibility of Slums such as San Augustin up in the mountain hills requires new approaches to urban mobility. Cable cars were introduced and transformed the urban landscape. Not only did they provide transport to the residents, they offered a functional space for formal services (postal, banking, government) in an otherwise informal environment. The overlay of functionalities popularised the method throughout slums of Latin America.

Urban Mobility (São Paulo)

The vector of mobility defines the kind of urban space that will be created. São Paulo showcases a steady move away from public transport towards individual transport reducing public space and decreasing traffic flow.


These tools exert influence on an abstract but fundamental level. Often they set the rules of the game and indirectly enforce specific outcomes.

Developer as Architect (Athens)

The deregulation of the construction industry, combined with regulation of building structure, effectively removed the architect from the equation. Polykatoikia were constructed throughout Athens without a masterplan. An abstract legislative framework enforced the practice of self-building.

Masterplanning Segregation (Cape Town)

Apartheid planning consisted of deliberately developing the city based on ethnic segregation. Planning was completely top-down and racially motivated and permeated through political and administrative processes.


These tools tackle how to embed humans in the urban fabric and showcase different approaches to creating urbanity.

Post-olympic Urbanism (Athens)

Olympic Games are considered a potential catalyst for urban development. They can intervene in short- and long-term development activities, and they require urban functions that may previously have been lacking. Applied correctly, they can be used to address urban issues (such as inner-city decline or sprawl). However, Athens is a prime example of how not to do it, as today most of the Olympic facilities are decaying.

Development through Distribution (Cape Town)

To overcome inequalities, distribution policies can be enacted that equalise the urban realm. In Cape Town, social housing and Mandela's promise of “one house per family” drove the creation of new houses. However, the development happened within the geographical constraints set out by the previous apartheid regime and consequently reinforced social segregation. Additionally, due to the space required per house, the city began to sprawl, diffusing the urban core.

Community Projects

This tool focusses on urban functionalities on the community level.

(Infra)Cultural Design (São Paulo)

To strengthen urban fabric Unified Educational Centres (CEU, Centro Educational Unificado) were placed strategically in diffuse urban locations. They offer new socio-cultural opportunities and enable communities to express themselves. Identities can be formed around those hubs and enable the city to become more coherent.


This tool focusses on the interaction between suburban and urban.

Generating Suburbia (Detroit)

To accommodate single houses per family, suburban developments were created that rely on roads, cars and telephones to cover distance. The space requirements increase the surface area of a city disproportionately and require large infrastructure spending to maintain roads. In the case of Detroit this had aggravating side-effects: people moved out of the administrative boundaries of the city into suburbia, reducing the city's tax base and accelerating its decline. The overstretching thins the urban fabric and distributes and diffuses it.

PIPP: Governance beyond the state

International politics differs from state politics in how the question of sovereignty is answered. States have internal and external sovereignty; a consequence is that they are formally equal entities. Therefore states have to coordinate horizontally and negotiate an order based mostly on the power they can display.

International politics would be similar to national politics if there was a world state with a monopoly of force, rule-making and rule enforcement. However, that is not the case.

Global governance is contrasted with anarchy: it requires institutionalised forms of coordinated action to produce collectively binding agreements. In contrast to government, it is non-hierarchical.

Challenges to global governance are posed by security, welfare and freedom.


Security

The absence of a world police causes insecurity in the form of threats of violence, war, terrorism, arms races and competing alliances.


Welfare

The absence of a regulator and provider of collective goods causes inefficiency and inequality, which in turn cause market failure, resource depletion, protectionism and underdevelopment.


Freedom

The principle of non-interference allows oppression in the form of human rights violations and autocratic rule.

Game Theoretic Analysis

Game theory is used for the analysis as it offers an analytical framework that captures interactions between independent, rational, utility-maximising agents in an interdependent decision-making process.

[expand title=”Coordination game without distributional conflict”]

A game with two Pareto-efficient equilibria of equal utility. It is essentially a communication problem that is solved by codification (an institution). No bargaining takes place. A typical example is which side of the road to drive on (left or right). Real-world examples usually involve next-generation technologies with no endowment.

[/expand]


[expand title=”Coordination game with distributional conflict”]

Again a game with two Pareto-efficient equilibria, but with unequal utility. There are problems of communication and distribution. The resolving institutions involve codification and distributional rules. Bargaining involves the first-mover advantage and the size of the gain. A typical example is the “Battle of the Sexes”, where each partner prefers a different activity but wants to spend time together, so the utility of doing something together one doesn't like is still higher than doing something alone one likes. Coordinating deep free trade agreements often takes the form of this game.

[/expand]


[expand title=”Coordination game with rivalry”]

A game with three Pareto-efficient equilibria and unequal utility. The problems involve communication, distribution and reputation. Institutions provide prevention of non-cooperation. In the bargaining process there is a last-mover advantage (brinkmanship). The game of “Chicken” (two drivers drive head-on towards each other; the loser swerves first) is a famous example; a real-world case is the euro crisis.

[/expand]


[expand title=”Dilemma game without distributional conflict”]

A game with two equilibria, only one of which is Pareto-efficient, also known as the Assurance Game. The problems are mistrust and uncertainty. Institutions can offer monitoring and capacity-building. There is no bargaining. The textbook example is the stag hunt by Jean-Jacques Rousseau. In the real world, international infrastructure cooperation projects often run into this kind of game.

[/expand]


[expand title=”Dilemma game with distributional conflict”]

A game in which the optimal and the equilibrium solutions separate, also known as the Prisoners' Dilemma. The problems are mistrust and credibility of commitment. Institutions can offer monitoring and sanctioning. Bargaining takes the form of betrayal and non-compliance. The textbook example is the Prisoners' Dilemma. In the real world, the Tragedy of the Commons is the common appearance of this game (e.g. climate change negotiations).

[/expand]


[expand title=”Asymmetrical domination game”]

A game with one equilibrium and unequal utility. The problem is a lack of incentive to cooperate due to an “upstream/downstream” situation. Institutions can offer an increase of scope. Bargaining takes the form of side payments and issue-linkage. The historical case is a country polluting a river upstream while the downstream country has to deal with it.
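The game types above can be made concrete with a small sketch (not from the lecture; the payoff numbers are illustrative assumptions): a function that finds the pure-strategy Nash equilibria of a 2×2 game and is applied to two of the games discussed.

```python
# Minimal sketch: pure-strategy Nash equilibria of a 2x2 game.
# payoffs[i][j] = (u_row, u_col) when the row player plays i and the
# column player plays j. Payoff numbers below are illustrative.

def pure_nash(payoffs):
    """Return all pure-strategy Nash equilibria of a 2x2 game."""
    equilibria = []
    for i in range(2):
        for j in range(2):
            u_row, u_col = payoffs[i][j]
            # Neither player can gain by unilaterally switching strategy.
            row_best = u_row >= payoffs[1 - i][j][0]
            col_best = u_col >= payoffs[i][1 - j][1]
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Battle of the Sexes: two equilibria with unequal utilities.
battle = [[(2, 1), (0, 0)],
          [(0, 0), (1, 2)]]
print(pure_nash(battle))    # → [(0, 0), (1, 1)]

# Prisoners' Dilemma: the unique equilibrium (defect, defect) is not optimal.
dilemma = [[(3, 3), (0, 4)],
           [(4, 0), (1, 1)]]
print(pure_nash(dilemma))   # → [(1, 1)]
```

The same function recovers the three equilibria of “Chicken” (two pure ones plus a mixed one it cannot see) and the single risk-dominant outcome of the stag hunt, depending on the payoff matrix supplied.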


PIPP: Democracy & Governance

Mechanistic institutional definitions of democracy are based on the electoral system and the powers it hands to officials. There are also soft definitions of democracy that focus on citizens’ rights to form interest groups (pressure groups, political parties, etc.) and on the judicial protection of citizens.

The quality of a democracy can be rated based on different criteria. Different organisations use different criteria (e.g. Freedom House, Economist Intelligence Unit, Polity IV), which leads to disparate interpretations of which countries have a good democracy. The implications of these criteria have been well studied and can be used to make inferences about economic, social and environmental conditions.

Democracies are a reflection of the history of a country in their institutions as much as in their party landscape.

Change in democracies

Erosion of democracy is usually accompanied by restrictions on media and group formation as well as interference with the judicial system (e.g. Hungary). On the other hand strengthening of democracy is usually founded on more free media and an independent judicial system (e.g. Brazil).

Mono-causal explanations for the rise and fall of democracy fail to show a strong relation; the complexity behind change remains difficult to grasp. Nonetheless, particular development trajectories from autocracy to democracy and vice versa are well understood, but they cannot be generalised, as no generalisable necessary or sufficient conditions exist.

Principles of Economics: Imperfect Competition


Barriers to entry are the fundamental cause of the rise of monopoly. They appear in three forms: ownership of key resources, exclusive production rights, and economies of scale (an efficient scale that leaves room for only one firm).

A firm’s ability to influence the market price is called market power. It entails that a firm can profitably raise the price above some competitive level. The lowest price a firm can profitably charge equals the marginal cost of production, so market power can be expressed as the difference between the price a firm charges and its marginal cost. A firm is considered a price maker if it exercises its market power; formally defined as [latex]P'(Q) \neq 0[/latex].

Given an inverse demand function [latex]P(Q)[/latex], a monopoly earns the profit [latex]\pi(Q) = P(Q)Q-C(Q)[/latex], which has the derivative [latex]\frac{d\pi}{dQ}=P(Q) + P'(Q)Q-C'(Q)[/latex].

In perfect competition, the marginal cost is equal to the price. For the monopoly, the price exceeds the marginal cost by a markup depending on the slope of demand. Monopolies make use of the fact that increased output decreases the price: the marginal revenue is [latex]P(Q)+P'(Q)Q < P(Q)[/latex], and therefore the optimal production of the monopoly satisfies [latex]P(Q)+P'(Q)Q = C'(Q)[/latex]. Reformulated, [latex]P-C'(Q) = -P'(Q)Q[/latex] and thus [latex]\frac{P-C'(Q)}{P} = -\frac{1}{\epsilon}[/latex], where [latex]\epsilon[/latex] is the price elasticity of demand. Consequently, the relative difference between the price and the marginal cost is inversely proportional to the price elasticity of demand: the more sensitive demand is to the price, the lower the relative difference between price and marginal cost. Close substitutes for a monopoly’s product induce high demand sensitivity, so prices will not rise much above marginal cost.
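The markup rule can be checked numerically. The following sketch assumes a linear demand [latex]P(Q) = a - bQ[/latex] and a constant marginal cost [latex]c[/latex] (illustrative numbers, not from the lecture), and verifies that the Lerner index [latex]\frac{P-C'}{P}[/latex] equals [latex]-\frac{1}{\epsilon}[/latex] at the monopoly optimum.

```python
# Assumed linear example: demand P(Q) = a - b*Q, constant marginal cost c.
# The monopoly sets marginal revenue equal to marginal cost: a - 2*b*Q = c.

a, b, c = 10.0, 1.0, 2.0

q_mono = (a - c) / (2 * b)     # optimal monopoly quantity
p_mono = a - b * q_mono        # resulting monopoly price
q_comp = (a - c) / b           # competitive output, where P(Q) = c

# Price elasticity of demand at the monopoly point: (dQ/dP) * (P/Q).
elasticity = (-1 / b) * (p_mono / q_mono)

lerner = (p_mono - c) / p_mono  # relative markup over marginal cost
print(q_mono, p_mono)           # monopoly produces less, charges more
print(lerner, -1 / elasticity)  # both sides of the markup rule agree
```

With these numbers the monopoly produces 4 units at price 6, while the competitive output would be 8, and the Lerner index 2/3 matches [latex]-1/\epsilon[/latex] exactly.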

The market power of a monopoly has two consequences. There is a redistributive effect, as the profits of the firm increase at the expense of the consumers. There is also a loss of efficiency, as the deadweight loss increases (i.e. the difference between the surplus in the competitive and the monopolistic case). The charge of allocative inefficiency does not judge whether consumers or producers are more deserving of the surplus, but criticises the deadweight loss: market power causes market inefficiency because the reduction of output induces a welfare loss.

Rent-seeking behaviour

The existence of a potential rent may entice companies into rent-seeking behaviour. Acquiring a monopoly is a major advantage and therefore highly sought after. Firms increase spending on monopoly-generating activities such as strategic and administrative expenses (lobbying, bribing, etc.) that do not generate social welfare.

Side note:

Competition laws sometimes prohibit the exercise of market power above some threshold; below the threshold the rules do not apply. The thresholds may also differ between practices.


Further reading: Leibenstein (1966) on X-inefficiency, Hart (1983) on managerial incentives under competition, Nickell (1996) on UK manufacturing 1972-1986.

Natural Monopoly

Efficient scales leave room for only one company and therefore cause natural monopolies in network industries (water, electricity, internet, social networks, etc.). A natural monopoly can usually produce at lower average cost than multiple firms could.

If a monopoly prices at average cost, profits are zero. If it prices at marginal cost, welfare is maximal but profits are negative, since the fixed costs are not covered. There is thus a trade-off between allocative efficiency and the firm’s financial viability. Ramsey-Boiteux pricing is a policy rule setting out how a monopolist should set prices in order to maximise social welfare subject to a constraint on profits.
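The trade-off can be illustrated with a stylised natural monopoly (assumed numbers, not from the lecture): cost [latex]C(Q) = F + cQ[/latex] with a fixed cost [latex]F[/latex], and linear demand [latex]P(Q) = a - Q[/latex].

```python
import math

# Stylised natural monopoly: cost C(Q) = F + c*Q, demand P(Q) = a - Q.
a, c, F = 10.0, 2.0, 7.0

# Marginal-cost pricing: welfare-maximal, but the fixed cost is not covered.
q_mc = a - c
profit_mc = (c - c) * q_mc - F      # = -F, a loss

# Average-cost pricing: solve P(Q) = c + F/Q, i.e. Q^2 - (a-c)*Q + F = 0.
# The larger root gives the higher-output (higher-welfare) break-even point.
q_ac = ((a - c) + math.sqrt((a - c) ** 2 - 4 * F)) / 2
p_ac = a - q_ac
profit_ac = p_ac * q_ac - (F + c * q_ac)

print(profit_mc)            # negative: marginal-cost pricing loses F
print(round(profit_ac, 9))  # zero: average-cost pricing just breaks even
print(q_ac < q_mc)          # True: break-even output is below the optimum
```

With these numbers, marginal-cost pricing sells 8 units at a loss of 7, while average-cost pricing sells only 7 units but breaks even, which is exactly the allocative-efficiency-versus-viability trade-off described above.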

Price Discrimination

Restrictively formulated, price discrimination occurs when the “same good” is sold at different prices. A broader definition expands this to differences in prices that cannot be entirely explained by variation in marginal costs. Price discrimination is only feasible if consumers cannot resell the good to each other.

Price discrimination has been categorised by Pigou (1932):

  • 1st degree (complete discrimination): each unit is sold at a different price. The producer captures the whole surplus and no deadweight loss occurs. Output is optimal, but complete discrimination is never fully realised in practice.
  • 2nd degree (indirect segmentation): a proxy for a group is used (e.g. package size).
  • 3rd degree (direct segmentation): general attributes of a buyer (e.g. age or gender) are considered.

Double marginalisation

Assume two firms, each with a monopoly: an upstream firm [latex]U[/latex] with production cost [latex]c[/latex] and a downstream firm [latex]D[/latex] with distribution cost [latex]d[/latex]. The marginal revenue of the downstream firm becomes the demand function of the upstream firm; the upstream firm then uses its own marginal revenue to determine the quantity produced. Each monopoly in such a chain adds a markup and reduces the total quantity. For consumers (and welfare), a single monopoly controlling the whole production chain (vertical integration) is better: larger consumer surplus and less deadweight loss.
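A textbook linear sketch makes the comparison concrete (assumed demand [latex]P(Q) = a - Q[/latex]; the standard closed-form results are that an integrated monopoly produces [latex]\frac{a-c-d}{2}[/latex] while the chain of two monopolies produces only [latex]\frac{a-c-d}{4}[/latex]).

```python
# Double marginalisation under assumed linear demand P(Q) = a - Q,
# upstream production cost c, downstream distribution cost d.

a, c, d = 10.0, 1.0, 1.0

# Vertically integrated monopoly: one markup over the total cost c + d.
q_int = (a - c - d) / 2

# Chain of two monopolies: the downstream firm's marginal revenue becomes
# the upstream firm's demand, so the monopoly markup is applied twice.
q_chain = (a - c - d) / 4

def consumer_surplus(q):
    # With demand slope -1, consumer surplus is the triangle (1/2) * q^2.
    return 0.5 * q ** 2

print(q_chain < q_int)   # True: the chain produces less
print(consumer_surplus(q_chain) < consumer_surplus(q_int))  # True
```

With these numbers the integrated monopoly sells 4 units against the chain’s 2, and consumer surplus falls from 8 to 2, illustrating why vertical integration can benefit consumers here.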


Oligopoly

Situated between monopoly and perfect competition, an oligopoly is characterised by few producers with market power (albeit less than a monopoly).

In 1838 Cournot introduced the first model of oligopoly.

Cournot assumes that each firm takes into account the best response of the other firms. The aggregate production lies between the competitive and the monopoly outcome. Consumers are better off than under a monopoly. The sum of the profits of all firms is lower than the monopoly profit. With each additional firm:

  • the individual production decreases, total production increases
  • consumers are better off
  • the profit of each firm and of the industry decreases
  • welfare increases tending towards the optimum (i.e. perfect competition)
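These comparative statics follow from the standard closed-form solution of the symmetric Cournot model; the sketch below assumes a linear demand [latex]P = a - Q[/latex] and a common marginal cost [latex]c[/latex] (illustrative numbers, not from the lecture).

```python
# Symmetric Cournot with assumed linear demand P = a - Q and marginal cost c.
# Standard result: each of n firms produces (a - c) / (n + 1).

a, c = 10.0, 2.0

def cournot(n):
    """Per-firm quantity, total quantity and per-firm profit with n firms."""
    q = (a - c) / (n + 1)
    total = n * q
    profit = (a - total - c) * q    # (price - marginal cost) * quantity
    return q, total, profit

for n in (1, 2, 5, 100):
    q, total, profit = cournot(n)
    print(n, round(q, 3), round(total, 3), round(profit, 3))

# Total output rises towards the competitive level a - c = 8 as n grows,
# while per-firm quantity and per-firm profit fall, as listed above.
```

For n = 1 the model reproduces the monopoly outcome (4 units, profit 16); for large n it approaches the competitive output of 8 with vanishing profits.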

The model was challenged by Bertrand in 1883: without cooperation, the price will settle at the marginal cost. However, this result rests on several assumptions, each of which can be relaxed:

  • Goods are perfect substitutes
  • Consumers can identify the cheapest producer without cost and switch
  • Firms compete and do not collude
  • Firms interact once and not repeatedly
  • The marginal costs of firms are constant and there is no capacity constraint
  • The actions available to firms are limited to price changes.

In 1925 Bertrand was criticised by Edgeworth for not considering productive capacities. In 1983 Edgeworth’s critique was limited by Kreps and Scheinkman, who showed that if firms first choose capacities and then set prices, the outcome equals the Cournot outcome.

There is no general model of oligopoly.


Oligopolies arise due to barriers to entry. In contrast to monopolies the barriers to entry are not completely prohibitive, but high enough to keep out a large number of producers. Barriers are constituted by:

  • Cost advantage (key resources)
  • Regulation
  • Economies of scale

In the long run the number of firms is endogenous. Incumbents will try to deter the entry of new competitors. Whether they succeed depends on whether the market is contestable. Baumol (1982) argued that what matters is not the number of firms in the market, but whether a new firm can enter (and exit) the market at no cost.

A hit-and-run entry is characteristic of contestable markets: a firm enters a market, collects profits, and exits before prices change.

QPAM: Uncertainty

A first form of uncertainty is randomness: stochastic behaviour that can be dealt with via sensitivity analysis, actuarial estimates from experience, or hedging.

A more complicated form of uncertainty is indeterminacy. It describes situations that are qualitatively known but cannot be reliably quantified. It is often addressed by attempting to quantify it anyway, using heuristics or stylised facts.

Another form of uncertainty is based on reductionism, which arises when a complex system is not completely understood and proxy relationships are established. It is a form of epistemological uncertainty and is often addressed with lay knowledge (bringing in lay people) and mixed methods (quantitative and qualitative).

Yet another form of uncertainty is paradigmatic. Expert knowledge can narrow perspectives and neglect the unseen. Consequently, paradigmatic blind spots arise, which can only be dealt with by interdisciplinary co-production of knowledge and by staying curious.

The last form of uncertainty is based on unknown relations, arising when something has not happened before (e.g. how cyber crimes work was unimaginable 30 years ago). It can be summed up as ontological uncertainty and can only be addressed with humility and the ability to adapt.

Type III errors

Uncertainty may also arise from committing errors. The most commonly known are:

  • Type I: False positive, reject null hypothesis when true
  • Type II: False negative, accept null hypothesis when false

An additional third type of error can be summed up as the correct answer to the wrong question. These errors usually arise from using the wrong method (i.e. model design) or from applying a discipline-specific approach to a non-applicable field (i.e. context). A classic example: rats given a choice between heroin-laced and normal water chose the former until death, and it was concluded that addiction was strong enough to make them kill themselves. Follow-up studies showed that rats surrounded by other rats and entertainment do not kill themselves on heroin. The original research had actually answered the question of whether rats kill themselves when they are alone and without stimulation.

In the worst case, the wrong question is posed intentionally to distract from the real problem, in a form of mental bait-and-switch.


Framing

Yet another source of uncertainty is the frame in which a discussion takes place. Describing a problem often circumscribes the solution: it determines what kinds of methods and options are open for debate, and it recasts a subjective reality as “objective”. This is unfamiliar terrain for engineers and natural scientists, who assume an objective reality (e.g. physics). Any issue that comes up for policy analysis has most likely been framed before it is handed to analysts and scientists to process. For instance, economic growth is a common assumption that no proposed solution may challenge.

Value conflict resolution

Another source of uncertainty is that value conflicts need to be resolved. As mentioned above, the framing defines the problem space. Any solution is essentially political and will always be a negotiation of social forces. This is not typically an academic field; it often deals with red lines (deeply vs. weakly held values), shifting from why to how (a solution addresses the problem), procedural vs. substantive fairness, and obfuscated players (grassroots vs. astroturfing), and it is a space to which missing issues often get attached. Academics are usually hidden players who get called in after the fact to compare minor differences.