PIPP: Democracy & Governance

Mechanistic institutional definitions of democracy are based on the electoral system and the powers it hands to officials. There are also softer definitions of democracy that focus on citizens’ rights to form interest groups (pressure groups, political parties, etc.) and on the judicial protection of citizens.

The quality of a democracy can be rated against different criteria. Different organisations use different criteria (e.g. Freedom House, Economist Intelligence Unit, Polity IV), which leads to disparate interpretations of which countries have a good democracy. The implications of these criteria have been well studied and can be used to make inferences about economic, social and environmental conditions.

Democracies are a reflection of the history of a country in their institutions as much as in their party landscape.

Change in democracies

The erosion of democracy is usually accompanied by restrictions on media and group formation as well as interference with the judicial system (e.g. Hungary). Conversely, the strengthening of democracy is usually founded on freer media and an independent judiciary (e.g. Brazil).

Mono-causal explanations for the rise and fall of democracies fail to establish a strong relation; the complexity behind such change remains difficult to grasp. Particular development trajectories from autocracy to democracy and vice versa are well understood, but they cannot be generalised, as no generalisable necessary or sufficient conditions exist.

Principles of Economics: Imperfect Competition

Monopoly

Barriers to entry are the fundamental cause of monopoly. They appear in three forms: ownership of key resources, exclusive production rights and economies of scale (an efficient scale that is large relative to the market).

A firm’s ability to influence the market price is called market power: the firm can profitably raise the price above the competitive level. The lowest price a firm can profitably charge equals the marginal cost of production, so market power can be expressed as the difference between the price a firm charges and its marginal cost. A firm is considered a price maker if it exercises its market power; formally, [latex]P'(Q) \neq 0[/latex].

Given an inverse demand function [latex]P(Q)[/latex], a monopolist’s profit is [latex]\pi(Q) = P(Q)Q-C(Q)[/latex], with derivative [latex]\frac{d\pi}{dQ}=P(Q) + P'(Q)Q-C'(Q)[/latex].

In perfect competition the price equals marginal cost. A monopoly exploits the fact that increased output decreases the price: its marginal revenue is [latex]P(Q)+P'(Q)Q < P(Q)[/latex], and its optimal production therefore satisfies [latex]P(Q)+P'(Q)Q = C'(Q)[/latex]. Reformulated, [latex]P-C'(Q) = -P'(Q)Q[/latex] and thus [latex]\frac{P-C'(Q)}{P} = -\frac{1}{\epsilon}[/latex], where [latex]\epsilon[/latex] is the price elasticity of demand. Consequently, the relative difference between price and marginal cost is inversely proportional to the price elasticity of demand: the more sensitive demand is to the price, the lower the mark-up. Close substitutes to a monopoly product induce high demand sensitivity, so the price will not rise much above marginal cost.
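
As a quick numerical check of this condition, the hedged Python sketch below (all parameter values are illustrative assumptions, not from the lecture) computes the optimum of a monopolist facing a linear inverse demand P(Q) = a - bQ with constant marginal cost c and verifies that the mark-up equals -1/ε.

```python
# Minimal sketch with assumed parameters: monopoly optimum and the Lerner condition.
a, b, c = 100.0, 1.0, 20.0          # assumed demand intercept, slope, marginal cost

# Monopoly optimum from P(Q) + P'(Q)Q = C'(Q):  a - 2bQ = c
Q_m = (a - c) / (2 * b)
P_m = a - b * Q_m

# Price elasticity of demand at the monopoly point: epsilon = (dQ/dP) * P/Q = -(1/b) * P/Q
epsilon = -(1 / b) * P_m / Q_m

lerner = (P_m - c) / P_m
print(f"Q_m = {Q_m:.2f}, P_m = {P_m:.2f}")
print(f"Lerner index (P - C')/P = {lerner:.3f}")
print(f"-1/epsilon              = {-1 / epsilon:.3f}")   # matches the Lerner index
```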

The market power of a monopoly has two consequences. There is a redistributive effect, as the profits of the firm increase at the expense of consumers. There is also a loss of efficiency, as a deadweight loss arises (the difference between total surplus in the competitive and the monopolistic case). Allocative inefficiency does not judge whether consumers or producers are more deserving of the surplus; it criticises the deadweight loss itself: market power causes market inefficiency because the reduction of output induces a welfare loss.
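
Continuing the same assumed linear-demand example, the following sketch makes the redistribution and the deadweight loss explicit by comparing total surplus under competition and under the monopoly.

```python
# Continuation of the illustrative linear-demand sketch (all numbers are assumptions).
a, b, c = 100.0, 1.0, 20.0          # same assumed demand and cost parameters as above
Q_m = (a - c) / (2 * b)             # monopoly output
P_m = a - b * Q_m                   # monopoly price
Q_c = (a - c) / b                   # competitive output, where P(Q) = c

cs_comp = 0.5 * b * Q_c ** 2        # consumer surplus under competition (whole triangle)
cs_mono = 0.5 * b * Q_m ** 2        # consumer surplus under monopoly
ps_mono = (P_m - c) * Q_m           # monopoly profit: surplus redistributed from consumers

dwl = cs_comp - (cs_mono + ps_mono) # welfare lost because the monopoly restricts output
print(f"total surplus: competition = {cs_comp:.0f}, monopoly = {cs_mono + ps_mono:.0f}")
print(f"deadweight loss = {dwl:.0f}")
```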

Rent-seeking behaviour

The existence of a potential rent may entice companies into rent-seeking behaviour. Acquiring a monopoly is a major advantage and therefore highly sought after. Firms increase spending on monopoly-generating activities such as strategic and administrative expenses (lobbying, bribing, etc.) that do not generate social welfare.

Side note:

Competition laws sometimes prohibit the exercise of market power only above some minimum threshold; below the threshold the rules do not apply. Those thresholds may also differ for different practices.

Readings

Leibenstein (1966) on X-inefficiency; Hart (1983) on managers under competition; Nickell (1996) on UK manufacturing 1972-1986.

Natural Monopoly

An efficient scale that leaves room for only one company causes natural monopolies in network industries (water, electricity, internet, social networks, etc.). A natural monopoly can usually produce at lower average cost than multiple firms could.

If a monopoly prices at average cost, profits are zero. If it prices at marginal cost, welfare is maximal but profits are negative (marginal cost lies below average cost). There is thus a trade-off between allocative and productive efficiency. Ramsey-Boiteux pricing is a policy rule setting out how a monopolist should set prices in order to maximise social welfare subject to a constraint on profits (typically breaking even).
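
A minimal sketch of this trade-off, assuming a made-up cost function C(Q) = F + cQ (large fixed cost F, constant marginal cost c) and linear demand: marginal-cost pricing maximises welfare but loses F, while average-cost pricing breaks even at the cost of a deadweight loss.

```python
# Hypothetical natural-monopoly sketch: declining average cost C(Q) = F + c*Q
# with linear demand P(Q) = a - b*Q. Compares marginal-cost and average-cost pricing.
import math

a, b = 100.0, 1.0        # assumed demand parameters
F, c = 1500.0, 20.0      # assumed fixed and marginal cost

# Marginal-cost pricing: welfare-maximal, but the firm runs a loss of -F.
Q_mc = (a - c) / b
print(f"P = MC: Q = {Q_mc:.1f}, profit = {-F:.0f}")

# Average-cost pricing: P(Q) = F/Q + c  ->  b*Q^2 - (a - c)*Q + F = 0 (take the larger root).
disc = (a - c) ** 2 - 4 * b * F
Q_ac = ((a - c) + math.sqrt(disc)) / (2 * b)
P_ac = a - b * Q_ac
dwl = 0.5 * (P_ac - c) * (Q_mc - Q_ac)   # welfare given up to let the firm break even
print(f"P = AC: Q = {Q_ac:.1f}, P = {P_ac:.2f}, profit = 0, deadweight loss = {dwl:.1f}")
```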

Price Discrimination

Restrictively formulated, price discrimination occurs when the “same good” is sold at different prices. A broader definition extends this to differences in prices that cannot be entirely explained by variation in marginal costs. Price discrimination is only feasible if consumers cannot resell the good to each other.

Price discrimination has been categorised by Pigou (1932):

  • 1st degree (complete discrimination): Each unit is sold at a different price. The producer captures the whole surplus and no deadweight loss occurs. Production is optimal, but complete discrimination is never fully realised in practice.
  • 2nd degree (indirect segmentation): a proxy for a group is used (e.g. package size).
  • 3rd degree (direct segmentation): general attributes of a buyer (e.g. age or gender) are considered.

Double marginalisation

Assume two firms with monopolies: an upstream firm [latex]P[/latex] with production cost [latex]c[/latex] and a downstream firm [latex]D[/latex] with distribution cost [latex]d[/latex]. The marginal revenue curve of the downstream firm acts as the demand function for the upstream firm, and the upstream firm in turn uses its own marginal revenue to choose the quantity produced. Each monopoly in such a chain of marginalisation reduces the total quantity. For consumers (and welfare) a single monopoly controlling the whole production chain (vertical integration) is better: a larger consumer surplus and less deadweight loss.
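
The hedged sketch below illustrates double marginalisation with an assumed linear demand and made-up costs: the chain of two monopolies produces half the quantity of the vertically integrated monopoly and leaves consumers with a smaller surplus.

```python
# Double marginalisation with linear demand P(Q) = a - b*Q, upstream marginal cost c,
# downstream cost d. All parameter values are illustrative assumptions.
a, b, c, d = 100.0, 1.0, 10.0, 10.0

# Vertically integrated monopoly: maximise (P(Q) - c - d) * Q
Q_int = (a - c - d) / (2 * b)
P_int = a - b * Q_int

# Two separate monopolies: the downstream firm's marginal revenue (a - d - 2bQ) acts as
# the upstream firm's demand curve, and the upstream firm adds its own mark-up on top.
Q_chain = (a - c - d) / (4 * b)          # half the integrated quantity
P_chain = a - b * Q_chain

def consumer_surplus(Q):
    return 0.5 * b * Q ** 2              # triangle under linear demand above the price

print(f"integrated: Q = {Q_int:.0f}, P = {P_int:.0f}, CS = {consumer_surplus(Q_int):.0f}")
print(f"chain     : Q = {Q_chain:.0f}, P = {P_chain:.0f}, CS = {consumer_surplus(Q_chain):.0f}")
```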

Oligopoly

Situated between monopoly and perfect competition, an oligopoly is characterised by few producers with market power (albeit less than monopolies).

In 1838 Cournot introduced the first model of oligopoly.

Cournot assumes that each firm chooses its quantity taking into account the best responses of the other firms. The aggregate production lies between the competitive and the monopoly outcome: consumers are better off than under a monopoly, and the sum of the profits of all firms is lower than the monopoly profit. With each additional firm (see the sketch after this list):

  • the individual production decreases, total production increases
  • consumers are better off
  • the profit of each firm and of the industry decreases
  • welfare increases tending towards the optimum (i.e. perfect competition)
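
A minimal sketch of these comparative statics, assuming symmetric firms, a linear demand P(Q) = a - bQ and a constant marginal cost c (all values illustrative):

```python
# Symmetric Cournot equilibrium for n firms with assumed linear demand and cost.
a, b, c = 100.0, 1.0, 20.0

for n in (1, 2, 5, 20):
    q = (a - c) / (b * (n + 1))      # individual equilibrium output falls with n
    Q = n * q                        # total output rises with n
    P = a - b * Q                    # price falls towards marginal cost
    profit = (P - c) * q             # per-firm profit falls with n
    print(f"n={n:>2}: q={q:6.2f}  Q={Q:6.2f}  P={P:6.2f}  "
          f"firm profit={profit:7.2f}  industry profit={n * profit:8.2f}")
```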

The model was challenged by Bertrand in 1883: without cooperation, the price settles at marginal cost. However, this result rests on several assumptions, each of which can be relaxed:

  • Goods are perfect substitutes
  • Consumers can identify the cheapest producer without cost and switch
  • Firms compete and do not collude
  • Firms interact once and not repeatedly
  • The marginal costs of firms are constant and there is no capacity constraint
  • The actions available to firms are limited to price changes.

In 1925 Edgeworth criticised Bertrand for not considering productive capacities. In 1983 Kreps and Scheinkman qualified Edgeworth’s critique by showing that if firms first choose capacities and then set prices, the outcome corresponds to the Cournot result.

There is no general model of oligopoly.

Entry

Oligopolies arise due to barriers to entry. In contrast to monopolies, the barriers to entry are not completely prohibitive, but they are high enough to keep out a large number of producers. Barriers consist of:

  • Cost advantage (key resources)
  • Regulation
  • Economies of scale

In the long run the number of firms is endogenous. Incumbents will try to deter the entry of new competitors. Whether they succeed depends on whether a market is contestable: Baumol (1982) argued that what matters is not the number of firms in the market but whether a new firm can enter (and exit) the market at no cost.

Hit-and-run entry is characteristic of contestable markets: a firm enters a market, earns profits, and exits before prices change.

QPAM: Uncertainty

A first form of uncertainty is randomness: stochastic behaviour that can be dealt with through sensitivity analysis, estimates from experience (actuarial methods) or hedging.

A more complicated form of uncertainty is indeterminacy. It describes situations that are qualitatively known but cannot be reliably quantified. It is often addressed by attempting to quantify them anyway, using heuristics or stylised facts.

Another form of uncertainty stems from reductionism, which arises when a complex system is not completely understood and proxy relationships are established. It is a form of epistemological uncertainty and is often addressed with lay knowledge (bringing in lay people) and mixed methods (quantitative and qualitative).

Yet another form of uncertainty is paradigmatic. Expert knowledge can narrow perspectives and neglect the unseen. The resulting paradigmatic blind spots can only be dealt with by interdisciplinary co-production of knowledge and by staying curious.

The last form of uncertainty is based on unknown relations. It arises when something has not happened before (e.g. how cyber crimes work was unimaginable 30 years ago) and can be summed up as ontological uncertainty. It can only be addressed with humility and the ability to adapt.

Type III errors

Uncertainty may also arise from committing errors. The more commonly known errors are:

  • Type I: False positive, reject null hypothesis when true
  • Type II: False negative, accept null hypothesis when false

An additional third type of error can be summed up as the correct answer to the wrong question. Such errors usually arise from using the wrong method (i.e. model design) or from applying a discipline-specific approach (i.e. context) to a field where it does not apply. For example, rats offered heroin-laced water alongside normal water chose the heroin and died, and it was concluded that addiction is so strong that it makes them kill themselves. Follow-up studies showed that rats with other rats and entertainment around do not kill themselves on heroin, so the original research actually answered the question of whether rats kill themselves when alone and without stimulation.

In the worst case, a Type III error is committed intentionally to distract from the real problem, a form of mental bait-and-switch.

Framing

Yet another source of uncertainty is the frame in which a discussion takes place. Describing a problem often circumscribes its solution: it determines which methods and options are open for debate and recasts a subjective reality as “objective”. This is unfamiliar territory for engineers and natural scientists, who assume an objective reality (e.g. physics). Any issue that comes up for policy analysis has most likely been framed before it is handed to analysts and scientists; for instance, economic growth is usually an assumption that no proposed solution may challenge.

Value conflict resolution

Another source of uncertainty is the need to resolve value conflicts; this relates to the previously mentioned problem space. Any solution is essentially political and will always be a negotiation of social forces. This is not a typically academic arena: it often involves red lines (deeply vs. weakly held values), a shift from why (a problem matters) to how (it is solved), procedural vs. substantive fairness, and obfuscated players (grassroots vs. astroturfing), and it frequently becomes a space to which otherwise missing issues are attached. Academics are usually hidden players who get called in after the fact to compare minor differences.

Principles of Economics: Public and Common Goods

To define Public Goods we need two concepts: Excludable goods and Rival goods.

  • Excludable goods are goods whose use can be prevented (food), in contrast to non-excludable goods that anyone can consume (radio or air).
  • Rival goods cannot be consumed without diminishing others’ use of them (food), in contrast to non-rival goods (mp3 files).

Based on the two properties four types of goods can be defined:

  • Private: Excludable & rival
  • Public: Non-excludable & non-rival
  • Common: Non-Excludable & rival
  • Club: Excludable & non-rival

As public goods are non-excludable and non-rival, it is hard to provide them through private markets because of the free-rider problem: a free rider receives the benefits of a good but avoids paying for it.

If the benefit of a public good exceeds the cost of providing it, the government should provide the good and collect taxes to pay for it. However, measuring the benefit is usually difficult. Cost-benefit analysis is an approach to this problem, but such analyses are imprecise, so the provision of public goods is generally less efficient than the private market provision of private goods.

In contrast, common goods are non-excludable and rival. This causes the Tragedy of the Commons, as free-riding is the best option for any rational, self-interested actor (i.e. consuming as much as possible without contributing). Several policies are used to contain the tragedy:

  • Regulated resources
  • Corrective taxes
  • Auctioning of permits
  • Privatisation (e.g. make land private, sell in parcels)

Elinor Ostrom developed 8 principles to govern commons:

  • Clearly defined boundaries;
  • Rules regarding the appropriation and provision of common resources that are adapted to local conditions;
  • Collective-choice arrangements that allow most resource appropriators to participate in the decision-making process;
  • Effective monitoring by monitors who are part of or accountable to the appropriators;
  • A scale of graduated sanctions for resource appropriators who violate community rules;
  • Mechanisms of conflict resolution that are cheap and of easy access;
  • Self-determination of the community recognized by higher-level authorities;
  • In the case of larger common-pool resources (CPR), organization in the form of multiple layers of nested enterprises, with small local CPRs at the base level.

This illustrates that institutions are necessary to manage common goods and they require a high level of administration.

BSTP: Computing

The digital revolution was carried by the development of transistors. The first triode was created in 1907 (not long after the airplane in 1903), followed by the concept of the field-effect transistor (FET) in 1925 and finally by today’s standard, the silicon transistor, in 1954.

A first digital computer (ENIAC, still based on vacuum tubes) was built in 1947 and required the first compiler in 1949 to operate it efficiently. This enabled the first high-level programming languages, FORTRAN (begun 1953-54) and later COBOL.

The internet is the rise of networks of computers based on the TCP/IP protocol (adopted in 1983). Text-based interaction was enabled by the development of HTML (1990) at CERN.

Moore’s law was a marketing campaign to create parallel industries (the software industry). The problem was that software development is slow (up to 3 years), so companies targeting today’s hardware would have a hard time selling. Intel postulated its doubling of capacity to allow software developers to develop for future machines. Moore’s law is a corporate policy that revolutionised the software industry by setting a target.

The computing industry is globally diversified. Put simply, semiconductor printers (lithography machines) are developed in Europe, semiconductors are printed in Asia, and the software that uses the semiconductors is developed in the US.

The technology behind semiconductor production has evolved in small steps taken every year since the 1950s. Currently only three companies (Intel, Samsung, TSMC) are able to produce leading-edge semiconductors, and in the near future this might drop to two.

CGSS: Complex Networks

Behind complex systems, there are networks that describe the interactions between components.

Basics

A network is a set of nodes interconnected by a set of links. The adjacency matrix [latex]A[/latex] of a network contains a non-zero element [latex]A_{ij}[/latex] if there exists an edge between nodes [latex]i[/latex] and [latex]j[/latex]. The resulting graph is undirected if the adjacency matrix is symmetric ([latex]A_{ij} = A_{ji}[/latex]) and directed otherwise.

A cycle exists in a graph if there is an edge that can be removed without disconnecting the graph. A connected graph without cycles is a tree.

A bipartite graph connects two different kinds of nodes. It can be projected onto either type of node by the matrix multiplications [latex]AA^T[/latex] and [latex]A^TA[/latex] respectively.
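
A small numpy sketch (the example matrix is made up) of the two projections:

```python
import numpy as np

# Rows are one node type, columns the other node type of the bipartite graph.
A = np.array([[1, 1, 0],   # e.g. person 0 belongs to groups 0 and 1
              [0, 1, 1],   # person 1 belongs to groups 1 and 2
              [1, 0, 1]])  # person 2 belongs to groups 0 and 2

row_projection = A @ A.T   # off-diagonal entries count shared groups between two persons
col_projection = A.T @ A   # off-diagonal entries count shared members between two groups
print(row_projection)      # diagonal entries are the node degrees
print(col_projection)
```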

Centrality Measures

To assess a network, the centrality of each node needs to be analysed. Several options are available (see the sketch after the list):

  • Degree: The number of incoming edges is easy to measure, but does not capture centrality over a larger neighbourhood of nodes.
  • Eigenvector: For a vertex [latex]i[/latex], its importance can be defined as a score [latex]x_i[/latex] proportional to the scores of its neighbours: [latex]x_i = \frac{1}{\lambda}\sum_k A_{ik}x_k[/latex], where [latex]\lambda[/latex] is the largest eigenvalue of [latex]A[/latex].
  • Closeness: A geodesic path connects two vertices with the fewest edges. The length of the geodesic path between [latex]i[/latex] and [latex]j[/latex], measured in edges, is denoted [latex]d_{ij}[/latex]. Closeness can then be expressed as the harmonic mean of the distances between [latex]i[/latex] and all other nodes: [latex]C_i = \frac{1}{n-1}\sum_{j(\neq i)}\frac{1}{d_{ij}}[/latex].
  • Betweenness: The number of geodesic paths between [latex]s[/latex] and [latex]t[/latex] is denoted [latex]\sigma_{st}[/latex], and the number of those paths passing through node [latex]v_i[/latex] is denoted [latex]\sigma_{st}(v_i)[/latex]. Betweenness is then defined as [latex]C(v_i) = \sum_{s,t} \frac{\sigma_{st}(v_i)}{\sigma_{st}}[/latex].
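
The sketch below computes the four measures on a small made-up graph using networkx, an assumed dependency rather than something prescribed by the lecture:

```python
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)])   # toy graph: a triangle with a tail

print(nx.degree_centrality(G))        # degree, normalised by n - 1
print(nx.eigenvector_centrality(G))   # solves x_i = (1/lambda) * sum_k A_ik x_k
print(nx.harmonic_centrality(G))      # sum_j 1/d_ij; divide by n - 1 for C_i as defined above
print(nx.betweenness_centrality(G))   # fraction of geodesic paths passing through each node
```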

For a power-law degree distribution [latex]P(k=i) \propto i^{-\gamma}[/latex], the first two moments of the degree are [latex]\langle k \rangle = \sum_i P(k=i)\, i \propto \sum_{i=1}^\infty i^{-\gamma}\, i[/latex] and [latex]\langle k^2 \rangle = \sum_i P(k=i)\, i^2 \propto \sum_{i=1}^\infty i^{-\gamma}\, i^2[/latex].
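
The short numerical sketch below (cut-off values and exponent are illustrative assumptions) evaluates these sums for a truncated power law: the first moment stays finite while the second keeps growing with the cut-off when γ ≤ 3.

```python
# Moments of a truncated power-law degree distribution P(k) ∝ k^(-gamma), k = 1..k_max.
import numpy as np

def moments(gamma, k_max):
    k = np.arange(1, k_max + 1)
    p = k ** (-float(gamma))
    p /= p.sum()                         # normalise so the probabilities sum to 1
    return (p * k).sum(), (p * k ** 2).sum()

for k_max in (10**2, 10**4, 10**6):
    k1, k2 = moments(gamma=2.5, k_max=k_max)
    # <k> stays finite while <k^2> keeps growing with the cutoff when gamma <= 3
    print(f"k_max={k_max:>7}: <k> = {k1:6.3f}   <k^2> = {k2:10.1f}")
```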

Principles of Economics: Externalities

Externality

An uncompensated impact of one person’s action on the well-being of a bystander. It is a type of market failure as it reduces the efficiency of the market. In general, it is caused by self-interested buyers and sellers neglecting the external costs or benefits of their actions. However, public policy can reduce externalities and increase efficiency.

On the one hand, positive externalities include herd immunity by vaccination, R&D or higher education. On the other hand, negative externalities include air and water pollution.

Internalisation

The idea is to alter incentives so that people take into account the external effects of their actions. Negative externalities are usually larger than socially desirable, whereas positive externalities are usually smaller than socially desirable; the corresponding remedies are to tax the former and subsidise the latter. A tax on production can make a firm’s private costs equal to the social costs.

Public policies can be roughly divided in two:

  • Command-and-control policies regulate behaviour directly;
  • Market-based policies provide incentives so that private decision-makers will choose to solve the problem on their own.

Taxes and Subsidies

A corrective tax is a tax designed to induce private decision-makers to take account of the social costs that arise from a negative externality.

Arthur Pigou (1877-1959) introduced the Pigovian tax, which is set equal to the external cost; subsidies are correspondingly negative taxes that compensate for external benefits. Such taxes align private incentives with society’s interests. An important fact is that Pigovian taxes move an economy toward a more efficient allocation of resources, in contrast to most other taxes and subsidies, which distort incentives and move an economy away from the social optimum.

The marginal damage is the additional damage caused by one more unit of pollution. The marginal cost of abatement is the cost of reducing pollution by one more unit. Small (initial) reductions in pollution are cheap, but the cost of further reductions rises steeply. The optimal level of pollution is the equilibrium where the marginal abatement cost equals the marginal damage. A tax set at this equilibrium value forces companies to reduce pollution down to the equilibrium point, because below that point they would pay taxes higher than the cost of abatement; where abatement costs exceed the tax, the company pays the tax instead, as this is cheaper than abating.

However, each company has different costs of pollution abatement. Firms with the lowest abatement costs reduce pollution the most, while firms with high abatement costs have a greater willingness to pay the tax. This contrasts with a regulation that caps every firm’s emissions at the same amount: under the tax, the reduction in pollution is obtained in the most efficient way.
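
As a hedged illustration (firms and cost curves are made up), the sketch below lets each firm abate until its marginal abatement cost equals the tax and compares the total cost with a uniform regulation demanding the same total reduction from every firm.

```python
# Under a corrective tax each firm abates until its marginal abatement cost equals the tax,
# so cheap abaters cut the most. Firms and cost slopes are illustrative assumptions.
tax = 30.0
firms = {"A": 1.0, "B": 3.0}            # slope of marginal abatement cost MAC(r) = slope * r

total_reduction, total_cost = 0.0, 0.0
for name, slope in firms.items():
    r = tax / slope                     # abate while MAC(r) < tax, stop where MAC(r) = tax
    cost = 0.5 * slope * r ** 2         # area under the MAC curve
    total_reduction += r
    total_cost += cost
    print(f"firm {name}: reduces {r:.1f} units at abatement cost {cost:.1f}")

# A uniform rule demanding the same total reduction, split equally, is more expensive:
r_each = total_reduction / len(firms)
uniform_cost = sum(0.5 * s * r_each ** 2 for s in firms.values())
print(f"tax: total cost {total_cost:.1f}  vs  uniform regulation: {uniform_cost:.1f}")
```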

Corrective taxes are better for the environment:

  • firms reduce pollution until the marginal abatement cost reaches the level of the tax
  • cleaner technologies are adopted quickly to reduce the tax burden.

Tradable pollution permits

Tradable pollution permit systems reduce pollution at lower cost than regulation. Firms with low cost of reducing pollution do so and sell their unused permits. Firms with high cost of reducing pollution buy permits. Pollution reduction is concentrated among those firms with the lowest cost.

The permit price settles at the marginal cost of abatement. At the optimal pollution level the marginal damage is again equal to the marginal abatement cost. A firm that wants to pollute more than its allocation must buy additional permits; a firm that pollutes less can sell its spare permits.

However, in the example of the EU ETS, the profits of companies increased and so did prices for end-users. Pollution permits, while similar to Pigovian taxes in design, are not a form of taxation.

There are limits: environmentalists argue that no one should be able to buy the right to pollute and that pricing the environment is inherently impossible. However, without pricing it becomes difficult to judge the value of clean air and water. So far, market-based approaches have reduced the cost of environmental protection and therefore seem at least partially justified.

The Coase Theorem

Assume that everyone has perfect information, consumers and producers are price-takers, enforcing agreements in court is costless, there are no transaction costs and there is no strategic behaviour. Under those assumptions the initial assignment of property rights regarding externalities does not matter for efficiency. In such a world, who pays is a secondary issue and all externalities are internalised; property rights merely serve efficiency (not redistribution).

In reality, pollution causes potentially large transaction costs, which depend on the initial allocation of property rights. Free-riding also causes problems due to asymmetric information about damages and costs. Lastly, the theorem says nothing about compensation for victims, which together with the previous points makes the Coase Theorem unrealistic.

To contain these problems (and get closer to the circumstances of the Coase Theorem), environmental damage is evaluated. Indirect methods infer the effect of pollution from related markets and thus rely on revealed preferences. Direct methods (contingent valuation) survey consumers’ willingness to pay for a better environment and rely on stated preferences.

BSTP: Further Industrial Revolution

Technological development occurs in two forms: intensive growth (development of new methods) and extensive growth (improvement of current methods). In 1930 Keynes predicted 15-hour work weeks by 2030, based on the reduction in work necessary to reach the same economic output; the prediction relied on the intensive growth of the 19th and 20th centuries.

The Second Industrial Revolution covers roughly the period from the telephone to the airplane and was mostly driven by the rise of electricity.

The scientific and information revolutions moved from FM radio to the transistor, the PC, lasers, cell phones and, recently, the World Wide Web. The combination of mechanical and communication technology led to rapid advancement in mobility, easing the transport of people and goods: travel by sea, air and land changed rapidly.

The Panama and Suez canals changed transportation by sea and reduced average travel distances by more than half (e.g. the route from the US East coast to the West coast dropped from 21,000 km to 8,000 km). Another factor was ship propulsion in the doldrums: sail boats would get stranded in those low-wind areas. Coal was too heavy to take along, but engines based on liquid fuel could store energy densely enough to carry it.

The power-to-weight ratio of engines has doubled every 7 years since the 1950s, a phenomenon similar to Moore’s law, and it underpins the technological drive behind airplanes. While the power increased, the noise decreased; this was not due to technological interest but to policy demands for less noise in cities.

Along with the capability to move comes the willingness to move. Human capital flight (or “brain drain”) is a consequence of the ability to move: easy access to foreign labour markets drives these movements, mostly from developing to developed countries.

Another consequence is multinational enterprises. There are around 20 companies that are probably more influential than the bottom quarter of countries; they trade among each other independently of nation states, in an international vacuum.

BSTP: Of cars

In the context of Geels’s book (specifically cars) we discuss the following questions:

How did niches emerge in the context of the existing technology regime?

Horse-based transport gave way to electric-based transport in the 19th century, before the internal combustion engine took off. Electric batteries and plugs were not standardised, which made electric mobile units difficult to use, and the low energy density of batteries would have required a very dense charging network that was unrealistic at the time. At the same time the bicycle was created, and its technology (chains, rubber wheels, gears, etc.) spilled over into the development of early cars. Electric trams and bicycles allowed people to get used to faster transportation (still a topic of public discussion at the time) and created a desire for mobility. Cars also filled a leisure niche (touring and racing) that allowed combustion engines to overtake electric approaches.

To which ongoing processes at the level of the existing technology regime and the landscape did developments in the niche link up?

On a side note, electric trams could overtake horse-based transport because trams created a daytime market for electricity (which was otherwise used mainly at night) and lowered the cost of transport (no feeding and housing of horses). Suburbanisation drove the development of cars as well, since the lower density of people in the suburbs made trams less efficient (in terms of price).

What role did policy play in affecting the development of technologies?

Subsidies for suburbanisation in the US made cars nearly the only option for transport. In conjunction with a population density nearly four times lower than in Europe, cars were needed more.

CGSS: Introduction to Mechanism Design

Game theory outlines the free-rider dilemma in public goods. Mechanism design was proposed as a way to overcome the tragedy of the commons: the basic idea is to define the payoffs and actions so as to drive people towards a preferred behaviour.

Public Goods Game

From the mechanism design perspective, two individuals [latex]k \in \{i, j \}[/latex] receive a private benefit [latex]b_k[/latex] from a public project with cost [latex]c \in \mathbb{R}_+[/latex]. A decision is denoted [latex]d\in \{ 0, 1 \}[/latex], where 0 is the decision not to invest and 1 the decision to invest, and each individual [latex]k[/latex] pays a tax [latex]t_k[/latex]. The mechanism maps the individuals’ messages to an outcome [latex]M(m_i,m_j) = (d,t_i,t_j)[/latex].

Based on this setup, a good mechanism should at least satisfy:

  • Individual Rationality: individuals weakly prefer to participate
  • Efficiency: maximizing the sum of utilities of all participating individuals
  • Balanced: the taxes cover the costs
  • Simplicity: easy to understand and practical (not well defined)

Additional properties could be defined, but are left for more in-depth studies.

Simple Public Goods Mechanism

Based on the work of Jackson & Moulin (1992), a two-stage mechanism can be defined. In the first stage each individual [latex]k \in \{i, j \}[/latex] submits a bid [latex]v_k[/latex], an estimate of the joint benefit of the project. If the largest bid satisfies [latex]v^* > c[/latex], the mechanism moves to the second stage, in which each individual [latex]k \in \{i, j \}[/latex] submits a bid [latex]\beta_k[/latex] indicating its private benefit. If the cumulative private benefit [latex]\sum_k \beta_k[/latex] is larger than [latex]v^*[/latex], the project goes ahead; if it is smaller, the project does not go ahead; equality leaves the decision undecided.
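
A minimal simulation of this two-stage decision rule, under the assumption that only the build/no-build decision matters (the notes do not specify how taxes are split, so that part is omitted; all numbers are illustrative):

```python
# Two-stage public goods decision rule as described above; tax shares are not modelled.
def public_goods_mechanism(stage1_bids, stage2_bids, cost):
    v_star = max(stage1_bids)            # largest stage-1 estimate of the joint benefit
    if v_star <= cost:                   # stage 1: only proceed if v* > c
        return 0
    total_beta = sum(stage2_bids)        # stage 2: compare claimed private benefits to v*
    if total_beta > v_star:
        return 1                         # build the project
    if total_beta < v_star:
        return 0                         # do not build
    return None                          # equality: decision left undecided

# Example: i and j estimate the joint benefit at 8 and 10; the cost is 7;
# their stage-2 claims of private benefit are 6 and 5.
print(public_goods_mechanism(stage1_bids=[8, 10], stage2_bids=[6, 5], cost=7))  # -> 1
```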

The mechanism is individually rational, efficient, balanced and simple, it is implementable in dominant strategies, and it induces truthful revelation of preferences.

Proof of optimality TBA.