CSD: Space Syntax Theory

Space syntax is a social theory on the use of space. It encompasses a set of theories and techniques that examine relationships between people (individual, user, society) and the environment (indoor and outdoor).

Recommended basic readings are Lynch’s “The Image of the City” (Lynch, 1960) and “Space is the Machine” (Hillier, 2007). An advanced reading is “The Social Logic of Space” (Hillier & Hanson, 1989), which introduced space syntax.

Spatial Configuration

Spatial configuration defines how the relation between spaces A and B is modified by their relation to space C (Hillier, 2007).

Representation of Space

Isovists, also called viewsheds in geography, are the volume (3D) or area (2D) of the 360° field of view from a particular point of view. Lines of sight are used to construct isovists, and partial isovists can also be constructed to mimic the human field of view. Psychologists have suggested (though not yet conclusively shown) that the shapes of isovist polygons influence human behaviour. Each point of view generates its own isovist. Visibility Graph Analysis (VGA) converts the set of isovists into measures of visibility.
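
As an illustration of this construction, the following minimal sketch (assuming Python; the rectangular "room" and the function names are hypothetical, not from the lecture) casts rays from a viewpoint against wall segments, takes the nearest hit per ray, and estimates the isovist area with the shoelace formula:

```python
import math

def ray_segment_t(px, py, dx, dy, ax, ay, bx, by):
    """Distance t along the ray (px,py)+t*(dx,dy) to segment (a,b), or None if missed."""
    sx, sy = bx - ax, by - ay
    denom = dx * sy - dy * sx
    if abs(denom) < 1e-12:                       # ray parallel to the wall segment
        return None
    t = ((ax - px) * sy - (ay - py) * sx) / denom  # distance along the ray
    u = ((ax - px) * dy - (ay - py) * dx) / denom  # position along the segment
    return t if t >= 0 and 0 <= u <= 1 else None

def isovist_area(point, walls, n_rays=360, max_dist=100.0):
    """Approximate the area of the 360-degree isovist polygon around `point`."""
    px, py = point
    vertices = []
    for i in range(n_rays):
        angle = 2 * math.pi * i / n_rays
        dx, dy = math.cos(angle), math.sin(angle)
        hits = [t for w in walls if (t := ray_segment_t(px, py, dx, dy, *w)) is not None]
        t = min(hits) if hits else max_dist      # nearest wall, or an open-space cutoff
        vertices.append((px + t * dx, py + t * dy))
    # shoelace formula for the polygon spanned by the ray endpoints
    area = 0.0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2

# a hypothetical 10x10 room given by its four walls as (ax, ay, bx, by)
room = [(0, 0, 10, 0), (10, 0, 10, 10), (10, 10, 0, 10), (0, 10, 0, 0)]
print(isovist_area((5, 5), room))   # close to 100 for the empty room
```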

When people move, they prefer to move in straight lines (a finding confirmed in spatial cognition research). Axial lines capture the potential for long straight lines along which one could walk. A typical analysis chooses the minimal set of longest axial lines that covers the complete space.

The major assumptions of space syntax are that people move along lines (axial lines), perceive changes in visual fields (VGA), and interact in convex spaces (not covered here).

Measuring centrality in graphs

To convert a road network into a graph, roads are taken as nodes and connections between roads as edges. Curves are replaced by a set of straight lines that mimic the curvature. Segment angular analysis splits roads into segments (according to connections to other roads) and additionally weights the connections by changes of direction. Essentially, degree centrality is measured. Other measures of network centrality are used as well: closeness centrality is called Integration in space syntax, and betweenness centrality is called Choice. Other centrality measures are currently not applied in space syntax.
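
A minimal sketch of these two measures, assuming Python's networkx and a made-up axial/segment graph (space syntax software such as DepthmapX computes the same measures directly on the map):

```python
import networkx as nx

# hypothetical axial map: nodes are axial lines / segments, edges are intersections
G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("b", "d"), ("d", "e")])

integration = nx.closeness_centrality(G)    # "Integration" in space syntax terms
choice = nx.betweenness_centrality(G)       # "Choice" in space syntax terms

print(integration)
print(choice)
```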

References

Hillier, B. (2007). Space is the machine: a configurational theory of architecture. Space Syntax.
Hillier, B., & Hanson, J. (1989). The social logic of space. Cambridge university press.
Lynch, K. (1960). The image of the city. MIT press.


SMADSC: Social Networks

Social networks give structure to relations. They can be considered abstract, mathematically tractable, and computationally instantiable systems. Social networks have become a field of study in their own right, and a highly interdisciplinary one, touching mathematics (graph theory), computer science (algorithms), sociology (population group trends), psychology (individual and social behaviour), and complex network theory.

Social networks emerge from interpersonal contact and can be understood as descriptors of social trends (Cioffi-Revilla, 2013). The basic elements are nodes (units of observation), edges (relationships), and aggregations (dyads, triads, cliques, clusters, etc.). More advanced elements are descriptive properties (e.g. centrality measures).

A network can also be seen as an abstract topology and as “social glue”. Agents can move around the network by jumping from node to node, either only along connecting edges or freely. Alternatively, nodes can be mapped onto agents, which then move around a raster or along the edges.

A network trades off regularity against complexity, relative size against relative complexity, and network complexity against network connectivity.

Social Network Analysis

Social Network Analysis (SNA) is based on a machine-readable representation of a social network, i.e. an adjacency matrix. While there is no “best measure” to describe a node or edge, there are several useful descriptive properties.

Bridging and spanning nodes can be identified. Cliques and clusters can also be identified, which gives the relative density of the network. Lastly, measures of relative connectedness and centrality are often used.
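
For illustration, a short sketch of these descriptive properties, assuming Python's numpy and networkx and a made-up adjacency matrix:

```python
import numpy as np
import networkx as nx

# machine-readable representation of a small social network: an adjacency matrix
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]])
G = nx.from_numpy_array(A)

print(nx.density(G))                 # relative density of the network
print(list(nx.find_cliques(G)))      # maximal cliques
print(list(nx.bridges(G)))           # bridging edges between otherwise separate parts
print(nx.degree_centrality(G))       # one of several centrality measures
```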

Social Psychology

Instead of observing the network as a whole, it can be analysed from the node perspective. Nodes can be grouped into “self” (ego) and “other” (alter). The “self”’s purpose is “self-motivated” action relative to its role and its subjective knowledge of the network. “Other” nodes function as arbiters or reactive agents. In this view, edges represent social connectivity in the network: evidence of physical, informational, or some other material or non-material transfer or contact between nodes. Typically, edges suggest some social binding between individuals and/or groups of nodes, and an edge often connotes implicit temporal properties. Dyads are any two connected nodes in the network, triads are any three connected nodes, and cliques are larger fully connected groups. Simmelian ties are strong, bidirectional social bindings.
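
One common operationalisation of Simmelian ties, sketched here as an assumption rather than the lecture's definition: a reciprocated directed edge that is embedded in at least one fully reciprocated triad (Python's networkx assumed):

```python
import networkx as nx

# toy directed network: a<->b, a<->c, b<->c are mutual; a->d is one-way
D = nx.DiGraph([("a", "b"), ("b", "a"), ("a", "c"), ("c", "a"),
                ("b", "c"), ("c", "b"), ("a", "d")])

def is_simmelian(D, u, v):
    if not D.has_edge(v, u):                        # must be a reciprocated dyad
        return False
    for w in set(D.successors(u)) & set(D.successors(v)):
        if D.has_edge(w, u) and D.has_edge(w, v):   # a third node mutually tied to both
            return True
    return False

print([(u, v) for u, v in D.edges() if is_simmelian(D, u, v)])
```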


ISN: Network visualisation

Today’s topic is the visualisation of networks and centrality measures. We visualise a network to better understand the underlying data. A visualisation should be driven by the question we would like to answer. Nonetheless, visualisations are by their nature exploratory, and they do not provide evidence for hypotheses.

Visualisation usually tries to convey information through the layout: density conveys cohesion, distance conveys graph-theoretic distance, tie length conveys attached values, and geometric symmetries convey structural symmetries.

General rules of graph visualisation are that edge crossings, overlaps, asymmetries, and meaningless edge lengths or node positions should be avoided.

Visualisation in R

We will use either the “igraph” or “sna” library to visualise the data.
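
For illustration only (the course itself works in R with igraph or sna), an equivalent sketch in Python's networkx and matplotlib: a force-directed layout with betweenness centrality encoded in node size, on a standard example data set:

```python
import networkx as nx
import matplotlib.pyplot as plt

G = nx.karate_club_graph()                       # example network shipped with networkx
pos = nx.spring_layout(G, seed=42)               # force-directed layout: distance ~ graph distance
betweenness = nx.betweenness_centrality(G)

nx.draw(G, pos,
        node_size=[3000 * betweenness[n] + 50 for n in G],  # centrality encoded in node size
        with_labels=True)
plt.show()
```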


PE: Public Good Game

Public Good Game

Each subject secretly chooses how much of their initial endowment to put into a public pot. The joint value in this pot is multiplied by a factor (1 < factor < N) and paid out evenly across all N subjects. All unspent endowment is kept by the respective subject. In one-shot games a non-cooperative strategy is usually applied. In infinitely repeated games, subjects eventually cooperate.
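
A minimal payoff sketch in Python with illustrative numbers (four subjects, endowment 20, multiplier 1.6 are assumptions, not the lecture's parameters) to make the dilemma concrete:

```python
def public_goods_payoffs(endowments, contributions, factor):
    """Each subject keeps what they did not contribute, plus an equal share of the multiplied pot."""
    pot = sum(contributions) * factor
    share = pot / len(contributions)
    return [e - c + share for e, c in zip(endowments, contributions)]

payoffs = public_goods_payoffs([20, 20, 20, 20], [20, 10, 5, 0], 1.6)
print(payoffs)  # [14.0, 24.0, 29.0, 34.0]: the free-rider earns most,
                # yet full cooperation by everyone would pay 32 each
```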

However, experimental results are not in line with the predictions of rational choice theory. Even in one-shot, two-person prisoner’s dilemma games, half the participants cooperate. Voluntary contributions to public goods fall if the game is repeated, and participants have therefore been called “adaptive egoists” (Mueller, 2003) or “conditionally cooperative” (Gächter, 2006).

People seem to contribute because of “warm glow” preferences (utility from the act of contributing) (Palfrey & Prisbrey, 1997), altruistic preferences (wanting to increase others’ utility), or due to error and learning (testing the best approach).

Gächter found that voluntary cooperation is fragile, that there exist social interaction effects in voluntary cooperation, that group composition matters (like-mindedness), and that the management of beliefs matters. Also, path dependency has been observed, where the first round is the most important.

In other field experiments, students were informed how many others had contributed to social funds (64% versus 46%) and were influenced by these numbers (Frey & Meier, 2004).

The cooperative environment (Ostrom, 1998)

Models of complete rationality do not work well in non-competitive situations. Verbal communication can improve trust and allow groups to reciprocate. There is also some capacity to solve second-order social dilemmas that change the structure of the first-order dilemma (e.g. instituting punishment). Ostrom proposed a model of bounded rationality in which individuals use heuristics, norms, and rules to improve outcomes of non-competitive, not frequently repeated situations.

According to Ostrom, reciprocity, reputation, and trust can overcome strong temptations of short-run self-interest. Consequently, a self-reinforcing process can be created that increases the level of cooperation and leads to higher net benefits for all, but it is very fragile.

The system is stable if it has a small size, symmetry of assets and resources, a long time horizon, and a low-cost production function. As the group size increases, marginal gains from contributing fall and it becomes more difficult to identify and punish defectors. As the stakes increase, cooperation is reduced. Other issues may arise as well: monitoring becomes more costly, economies of scale may not apply, and marginalisation behaviour within the group may be questioned (is the punishment fair?).

Current Research (Lanz, Wurlod, Panzone, & Swanson, 2017)

A field experiment in supermarkets in the greater London area compared the quantitative impact of three measures to reduce the footprint of consumption (welfare analysis). The experiment tested conditions on four types of goods: soda, milk, spreads, and meat. Three conditions were used:

  1. Information label
  2. Pigouvian tax based on relatively higher footprint of “dirty-type” product alternative
  3. Neutrally framed price change.

The main findings include that the effectiveness of all policy interventions is higher if substitutability is higher, and that motivation crowding-out due to taxation is relevant only for low-effort products.

Critiques were raised because it was a “one-shot” game testing “warm glow” and because it requires a state-imposed solution.

Cognitive biases

Cognitive biases are ways of thinking that can lead to systematic deviations from the benchmark of rationality. The concept originates in psychology and behavioural economics. Classic biases include the availability heuristic (remembered events seem more likely), confirmation bias (new information is consumed as complementing preconceptions), the endowment effect (ownership changes value perception), and the framing effect (the presentation affects the conclusion).

Preference Aggregation

The problem with the state without agency and the state as a nexus of cooperation (Acemoglu, 2009) is that the government is treated as a black box that takes individual preferences as input and provides a collective choice as output. Arrow’s Impossibility Theorem (Arrow, 1963) is often summarised as “the only voting method that is not flawed is a dictatorship.” Arrow showed that, even under reasonable requirements, there cannot exist a ranked voting system (i.e. a social welfare function or preference aggregation rule) that transforms the set of individual preferences into a single global societal ordering for two or more participants with three or more options. Arrow specifies the reasonable requirements as:

  1. Non-dictatorship
  2. Universality (unique and complete ranking)
  3. Independence of irrelevant alternatives
  4. Pareto efficiency (unanimity)

Pairwise voting over alternatives does not lead to a complete ordering. Only by violating one of the reasonable requirements (dictatorship, Kaldor-Hicks efficiency, non-universality) can preference aggregation be performed. Cox (Cox & McCubbins, 2000) showed that voting rules shape policy outcomes even if voter preferences are fixed.
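
A classic three-voter illustration (an invented example, not from the lecture) of why pairwise majority voting fails to produce a complete ordering:

```python
from itertools import combinations

# three voters with cyclic preferences (a Condorcet profile)
ballots = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

def pairwise_winner(x, y, ballots):
    # x beats y if a majority of ballots ranks x above y
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return x if wins > len(ballots) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y, ballots)}")
# A beats B, B beats C, but C beats A -> no complete societal ordering exists
```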

Nudges or “Libertarian paternalism”

A nudge can be understood as a change in the choice context (one that would be irrelevant to the homo oeconomicus) intended to steer real-world agents’ behaviour towards the homo oeconomicus benchmark (see the organ donor debate). Critiques of such “behavioural welfare economics” are, again, that no global social preference exists and that, without appropriate information, manipulation could occur. It also implies a benevolent paternalist state, which is debatable.

Endogenous versus exogenous institutions

The distinction is between analysing how institutions arose (endogenous) and how institutions impact policy (exogenous). This course focuses on the latter.

References

Acemoglu, D. (2009). Political economy lecture notes. Retrieved February 27, 2017, from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.682.3171
Arrow, K. J. (1963). Social Choice and Individual Values. New York: Wiley.
Cox, G. W., & McCubbins, M. D. (2000). Political structure and economic policy: The institutional determinants of policy outcomes. In Presidents, Parliaments and Policy (pp. 21–96). Cambridge University Press.
Frey, B. S., & Meier, S. (2004). Social comparisons and pro-social behavior: Testing “conditional cooperation” in a field experiment. The American Economic Review, 94(5), 1717–1722.
Gächter, S. (2006). Conditional cooperation: Behavioral regularities from the lab and the field and their policy implications. CeDEX Discussion Paper, 3.
Lanz, B., Wurlod, J.-D., Panzone, L., & Swanson, T. (2017). The behavioural effect of Pigovian regulation: Evidence from a field experiment. IRENE Working Paper.
Mueller, D. C. (2003). Public Choice. Springer.
Ostrom, E. (1998). A behavioral approach to the rational choice theory of collective action: Presidential address, American Political Science Association. American Political Science Review, 1, 1–22.
Palfrey, T. R., & Prisbrey, J. E. (1997). Anomalous behavior in public goods experiments: How much and why? The American Economic Review, 829–846.


ASC: Introduction

 Argumentation and Science Communication will discuss how scientific arguments are made and how they are eventually communicated.

The first week’s readings are listed in the references (Bradley & Steele, 2015; Lempert, Nakicenovic, Sarewitz, & Schlesinger, 2004). A particular focus will be on Mueller (2010), for which the following questions should be answered:

  1. What is a computer model?
  2. What are the basic reasons for limits/errors/uncertainties of (climate) models or model predictions? Which ones, do you think, are most important, and for which situations?
  3. What kind of results are generated by models?
  4. What are reasons for and against the claim that computer models inform us about the world?

Scientific knowledge makes sense in its context, but communicating it across fields, or indeed beyond science, makes it necessary to pick an appropriate language. There is another issue: science is asked to be informative rather than prescriptive; however, it is usually presented and perceived in a prescriptive manner.

Argumentation

Argumentation is needed in policy analysis because decisions are made under deep uncertainty. Argumentation analysis is a philosophical method to address the uncertainty.

Predict-then-act (Lempert, Nakicenovic, Sarewitz, & Schlesinger, 2004)

Predict-then-act describes a rational decision among policy alternatives, which are ranked on the basis of their expected utility, contingent on the probabilities of alternative future states of the world. Under rational choice theory the option with the highest expected utility would be picked. Rational choice would therefore require no democratic deliberation, if the science is done properly. However, each step of the scientific endeavour requires deliberation: science itself is agenda-setting and therefore cannot be performed unquestioned. The rational choice assumptions are not fulfilled.
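
A toy predict-then-act ranking, with invented states, probabilities, and utilities, purely to make the expected-utility calculation concrete:

```python
# illustrative assumptions, not the lecture's numbers
states = {"low_warming": 0.6, "high_warming": 0.4}      # probabilities of future states
utility = {
    "mitigate":          {"low_warming": 70, "high_warming": 60},
    "business_as_usual": {"low_warming": 90, "high_warming": 10},
}

def expected_utility(policy):
    # expected utility = sum over states of probability * utility
    return sum(p * utility[policy][s] for s, p in states.items())

for policy in sorted(utility, key=expected_utility, reverse=True):
    print(policy, expected_utility(policy))   # mitigate: 66.0, business_as_usual: 58.0
```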

[Deep uncertainty exists when] decision-makers do not know or cannot agree on: (i) the system models, (ii) the prior probability distributions for inputs to the system model(s) and their inter-dependencies, and/or (iii) the value system(s) used to rank alternatives. (Lempert et al., 2004)

Such a reductive approach is an idealisation that abstracts away many aspects. On the upside, it is a smart formal approach that can represent diverse measures within one holistic measure (expected utility). On the downside, it may not matter for a decision (e.g. different statistical values of life for different countries) or it may not apply (the requirements may not be fulfilled due to a lack of knowledge beyond information about outcomes).

Summa summarum, rational choice should be seen as a special case rather than a general paradigm for conceiving a policy decision problem. Argumentation is necessary to delineate the problem and frame the options, characterise uncertainties of outcomes, characterise value uncertainties, evaluate uncertainties, and deliberate a fair decision from plural perspectives (Hansson & Hirsch Hadorn, 2016).

Philosophy

Philosophical methods address two questions in a systematic way:

  1. What do you mean? Answer requires an analysis of respective concepts.
  2. How do you know? Answer requires an analysis of respective arguments.

Philosophy does not generate new knowledge, but refers to what is known. It makes relevant distinctions and describes consequences for normative claims about knowledge (epistemology) and action (ethics). It points at the limits of what we can believe and can therefore be considered a method for critical thinking.

Critical Thinking

The goal is to argue, communicate and act responsibly. It requires the ability to accept criticism as well as a willingness to reach an understanding. Based on expertise in a scientific field or regarding real-world problems, critical thinking tries to apply cognitive skills to answer the two basic questions of philosophy mentioned above.

Short Guide to Analysing Texts (Brun & Hirsch Hadorn, 2014)

A guide that structures the analysis of texts in five steps:

  1. Rules for Working and Principles of Understanding
    • Refer explicitly to the analysed text.
    • Write down results and your main reasoning.
    • Develop text analyses and results which are understandable to others.
    • Give reasons for your text analyses/results.
    • Read texts several times to test and revise understanding of both the parts and the whole.
    • Start with the assumption that the author makes true statements and gives sound arguments and go on to test this assumption (principle of charity).
  2. Preparing the Text Analysis: How to Proceed
    • Gather basic information about the text: Authors (ghostwriting?), publishing (scientific or popular?), topic, context of writing and publishing, target readership, type/function of text, and impact
  3. Reading: How to Work on the Text
  4. Structuring: How to Analyse the Structure of the Text
    • Divide the text into smaller passages (e.g. paragraphs) and number them consecutively
    • For every passage of text, consider the following questions:
      1. Content: What is this passage about?
      2. Function: Which function does this passage serve?
      3. Summary: How would you title the passage (content- and function-wise)?
  5. Summarising: How to Capture the Essential
    • Represent concisely essential statements, central arguments and the basic structure of the text.
    • Summaries have to be comprehensible without acquaintance with the original text.
    • Adapt the representation to the aim of your text analysis and to the question it should answer (a short text is not always the optimal solution).

References

Bradley, R., & Steele, K. (2015). Making climate decisions. Philosophy Compass, 10(11), 799–810.
Brun, G., & Hirsch Hadorn, G. (2014). Textanalyse in den Wissenschaften: Inhalte und Argumente analysieren und verstehen. vdf Hochschulverlag AG.
Hansson, S. O., & Hirsch Hadorn, G. (2016). The Argumentative Turn in Policy Analysis. Springer International Publishing.
Lempert, R., Nakicenovic, N., Sarewitz, D., & Schlesinger, M. (2004). Characterizing climate-change uncertainties for decision-makers. An editorial essay. Climatic Change, 65(1), 1–9.
Mueller, P. (2010). Constructing climate knowledge with computer models. Wiley Interdisciplinary Reviews: Climate Change, 1(4), 565–580.


Urban Design II: Los Angeles

Today’s topic will be the Urban Design of Los Angeles. The main tools will be top-down infrastructure (Ecology/landscape), fragmented sub-urban (suburbia) and places for experimentation (micro/temporary programs).

Los Angeles is a car city. It is the antagonist to New York, the incarnation of the battle between the East and West Coast. Hollywood is located in Los Angeles and Hollywood produces the modern understanding of America.

Los Angeles is also a (horizontal) grid city. The city grew out of property speculation, with Asian migrants building the railways in hope of a better future. Architects came to Los Angeles to build a small bungalow or two. Los Angeles is multiple cities in one place (not geographically, but in the imagination).

Los Angeles is also a postmodern city. The modern idea ended with the Second World War. Postmodernity interweaves the past with the present and expectations of the future, in contrast to modern concepts of Communism, scientific endeavour, and Socialism.

Ed Soja describes Los Angeles as highly fragmented and as inducing a feeling of being lost and dislocated, epitomised by the Bonaventure Hotel. Conventional understandings of a city are questioned by Los Angeles, and conventional standards of planning do not apply. Individuals, pressure groups, and planning authorities vie over how to develop the city. This non-planning has become a characteristic of the city. Frank Gehry is a product of Los Angeles. Angelenos spend one month per year in their cars.

Los Angeles is also plagued by skid rows (homeless housing in tents) and riots, both forming the character of the city. The Mexican past of the city mixes with the American identity and incoming Asian cultures (Korean, Chinese and Japanese).

The city of Los Angeles has a density of 3,000 inhabitants per square kilometre with 4 million inhabitants, whereas the county of Los Angeles drops to 900 inhabitants per square kilometre with 10 million inhabitants. Greater Los Angeles incorporates neighbouring counties, and the number of inhabitants rises to 18 million.

Top-down Infrastructure

Los Angeles laid down the grid and people were invited to come. Real estate taxes drove the engine of Los Angeles. 160 administrative subdivisions are contained within the five counties that Greater Los Angeles covers.

Los Angeles was unexplored and presented itself as a desirable opportunity. Route 66 was the road from Chicago to Los Angeles that people travelled in pursuit of happiness. Los Angeles was sold as an antidote to the urban city by promoting the suburban. Trams connected the suburbs to the centre, but were soon complemented and overtaken by highways. Within ten years in the 1920s, Los Angeles grew with the incoming migrants from empty fields to a full-blown city. As a side note, the same is happening today in most major cities in developing countries, from China over India to Nigeria. Los Angeles also planned quarters that enforced segregation.

Los Angeles had no notable economy before 1920, but soon produced 25% of US oil and expanded its importance via the film industry.

The floodwater infrastructure cuts through the city and carries valuable water to the ocean. The hostile environment (swamp, desert, mountains) made occupying the area a difficult task; vast and strategic infrastructure is necessary for the city to work. Water is imported through massive aqueducts from the Colorado River. Electricity is imported from the Pacific Northwest over the longest high-voltage electricity line. The hot summers make people use air conditioning on an incredible scale. The interstate highway connection to New York binds the city to the rest of the US. The Los Angeles River has been converted from a meandering river into a straight channel. Most of the time it is an empty eight-metre gap in the city, but during strong downpours it may completely fill with floodwater. Graffiti artists reclaim the floodwater system by filling it with art, and people reclaim it for sports and recreation. (Reclaiming) infrastructure will become a core task of architecture.

Fragmented Sub-urban

Suburbia was an attempt to decentralise cities in the face of nuclear threats during the Cold War. Los Angeles has no strong centralised core but spans equally in all directions. Fragmentation ensues as all suburban areas create their own nuclei.

The Pacific Electric Railway connected the disparate suburbs and enabled movement, followed later by the highways. Industry gathered along the river and fragmented the city even more: industrial suburbs.

Hollywood is a suburb. The suburb was projected onto the rest of the US with the help of the film industry. The suburb was also a dream in which each hard-working American could have a house.

The suburban and the exurban have become urban, as the density of suburbia and communication technologies remove any distance. Modern cities are hybrids, both urban and non-urban.

Places for experimentation

New architecture was tested in Los Angeles. Many architects went to Los Angeles to build bungalows and tested new approaches to building. Los Angeles is the playground of architecture. The Lovell Beach House was built in the 1930s and looks like contemporary 2010s architecture.

[Image: Lovell Beach House]

Architecture of single buildings is the first step towards urban design; houses are the atomic elements of urbanism. Frank Lloyd Wright became the greatest American architect and inspired many architects in Los Angeles, giving the city its urban characteristic. Architects pick up elements of the past, mix them with their own interpretation, and create buildings for the future in the present. Los Angeles was the core of postmodern architecture in the US. Experimental architecture became a core concept for creating new architecture, put forward by Gehry with buildings such as the Disney Concert Hall.

[Image: Disney Concert Hall]

IAP: Introduction

The internet is a global-scale, technically complex artefact of immense international social and political importance. It is formed by the interaction of technical constraints (e.g. speed of light, number of addresses), usage models and behaviour, technological design choices and policy decisions.

This course will focus on the Internet; other networks (mobile networks, local networks, etc.) will be mentioned only marginally, even though they are converging. Applications of the Internet such as the Web, social networks, and other online services are not covered.

Networking History

One of the earliest networks, a mechanical semaphore, was developed by Claude Chappe (1763-1805). In 1837 the electrical telegraph allowed the transmission of Morse code. In 1866 there was a connection between London and New York at a price of $100 for 20 words. In 1863 Reis developed telephony. In 1895 the first wireless communication was demonstrated; in 1906 radio was broadly introduced. Television was broadcast in 1928.

The Internet is based on packet switching, first described by Kleinrock in 1961 (Kleinrock, 1961). In 1964 Baran proposed military networks using packet switching (Baran, 1964). In 1967 the ARPAnet was conceived; it was installed in 1969. In 1972 the ARPAnet had 15 nodes. In 1973 Metcalfe proposed Ethernet (Metcalfe & Boggs, 1976). Vinton G. Cerf and Robert E. Kahn developed their internetworking principles in 1974 (Cerf & Kahn, 1974). By 1979 the ARPAnet had 200 nodes.

The Internet was commercialised in the 1990s and the ARPAnet was decommissioned. The NSFnet allowed commercial use from 1991 and was itself shut down in 1995, replaced by commercial networks carrying the World Wide Web (WWW). In the 2000s the dot-com bubble showed for the first time the potential impact of the Internet on the real world.

Internet Basics

The Internet carries packets. Packets have headers that describe them and a payload that contains their content. Officially, Internet routers only care about packets. The explicit analogy is mailing a letter: the letter inside the envelope is the payload, and the address on the envelope corresponds to the headers. This differs from telephone networks, where the traffic is analysed to optimise the network (fax versus voice call).

IPv4 addresses have 32 bits and can therefore address approximately 4 billion devices. An IP address has become a scarce resource. The questions arise: who allocates addresses, who can be reached globally, and should a new protocol be adopted? IP version 6, with 128-bit addresses, has been proposed as the solution.
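
The scarcity argument is simple arithmetic; the sketch below (plain Python, using the standard ipaddress module) also shows the /64 subnet size referred to in the quote further down:

```python
import ipaddress

ipv4_addresses = 2 ** 32       # ~4.3 billion addresses in total
ipv6_addresses = 2 ** 128      # ~3.4e38 addresses
print(f"IPv4: {ipv4_addresses:,}")
print(f"IPv6: {ipv6_addresses:.3e}")

# a standard IPv6 subnet is a /64, i.e. 2**64 addresses per subnet
subnet = ipaddress.ip_network("2001:db8::/64")
print(subnet.num_addresses)
```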

 

I think the IETF hit the right balance with the 128 bits thing. We can fit MAC addresses in a /64 subnet, and the nanobots will only be able to devour half the planet.

Nanobots

A protocol defines a set of messages that are sent between end-points, what these messages mean, and what end-points should do with them. The Internet protocol stack consists of 5 layers: physical, link, network, transport, and application. Throughout this course we will focus on the transport and network layers.

The data sent in a message gets an additional header for each layer it traverses. The Internet has IP at its core and does not change it (the “narrow waist” model). The layers above (transport, application) or below (link, physical) can be changed arbitrarily. Side note: in Germany, carrier pigeons were successfully used to send a message. The rigidity of IP is claimed to be the reason for the success of the Internet. In reality, there are many more layers; real-world packets with 12 layers and more have been observed, where the IP layer is repeated multiple times. HTTP has become the main protocol, and other protocols are often blocked; consequently, much traffic that is not actually text (e.g. video) is sent over it.
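
A toy sketch of this per-layer encapsulation (the header strings are purely illustrative, not real wire formats):

```python
def encapsulate(payload: bytes) -> bytes:
    """Wrap an application payload in one illustrative header per layer."""
    app = b"HTTP|" + payload        # application-layer header
    transport = b"TCP|" + app       # transport-layer header
    network = b"IP|" + transport    # network-layer header (the "narrow waist")
    link = b"ETH|" + network        # link-layer header
    return link

print(encapsulate(b"GET /index.html"))
```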

The Internet consists of many autonomous systems (e.g. Internet Service Providers, ISPs) that communicate via the Border Gateway Protocol (BGP). Each system advertises where it can deliver messages to; however, they need not be truthful. Incidents include advertising optimal routes to everywhere in order to attract all traffic (including traffic from special regions). Another alternative is to advertise a route that is cheap but never deliver the packets. It is not clear how to resolve such misuse of the system.

The Internet was designed insulated from commercial and political pressures, but the reality has changed: the original idea of the Internet and its real-world use have diverged. The course focuses on the tension between technology, policy, commerce, and politics.

References

Baran, P. (1964). On distributed communications networks. IEEE Transactions on Communications Systems, 12(1), 1–9.
Cerf, V., & Kahn, R. (1974). A protocol for packet network intercommunication. IEEE Transactions on Communications, 22, 627–641.
Kleinrock, L. (1961). Information flow in large communication nets. RLE Quarterly Progress Report, 1.
Metcalfe, R. M., & Boggs, D. R. (1976). Ethernet: Distributed packet switching for local computer networks. Communications of the ACM, 19(7), 395–404.


CSD: Introduction

The course “Cognition in Studio Design – analytic tools for evidence-based design” will discuss readings on space syntax (Bafna, 2003), navigation issues (Carlson, Hölscher, Shipley, & Dalton, 2010), as well as functions and applications of spatial cognition (Montello & Raubal, 2013).

To compute space syntax measures, DepthmapX will be used.

References

Bafna, S. (2003). Space syntax: A brief introduction to its logic and analytical techniques. Environment and Behavior, 35(1), 17–59.
Carlson, L. A., Hölscher, C., Shipley, T. F., & Dalton, R. C. (2010). Getting lost in buildings. Current Directions in Psychological Science, 19(5), 284–289.
Montello, D. R., & Raubal, M. (2013). Functions and applications of spatial cognition. In Handbook of Spatial Cognition (pp. 249–264). American Psychological Association (APA).


SMADSC: Introduction

Complex systems are the core topic of Social Modelling, Agent-Based Simulation, and Complexity. Complex systems usually emerge as an artefact of interaction. The output of a complex system often follows a power law and may exhibit regime or phase changes, known as tipping points. Emergent properties and scale-free organisation are typical features of complex systems. Such a system could be analysed top-down, but is best studied bottom-up.

In general, a social system is analysed by creating a mental model of it, deriving hypotheses regarding the endogenous and exogenous forces that drive it, and finally instantiating an agent-based model (ABM) in code that is simulated in silico.

Recommended reading for the week is Chapter 9 in Complex adaptive systems: An introduction to computational models of social life (Miller & Page, 2009) and Chapter 8 in Introduction to computational social science: principles and applications (Cioffi-Revilla, 2013).

Agent-based Models (ABM)

An ABM is usually an object-oriented software system that instantiates a model of living systems or social entities. Agent-based models go beyond numerical analysis; rather, they observe emergent behaviour. Broad paradigms that influence ABMs are cellular automata, big data, social networks, and generative models. Key concepts are emergence and bottom-up computation, where micro-level rules lead to macro-level behaviours. There are two dominant characteristics of ABM representations:

  1.  A positive representation attempts to closely recreate or capture the abstract or detailed essence  of a prototype system.
  2. A normative representation provides input control for exogenous steering of internal feedback loops.

Generative ABMs are useful in three general cases:

  1. Modelling historical systems, that cannot be revisited
  2. Long-lived systems, that span a longer time than can be observed
  3. Unethical, illegal, unsafe or unlikely environmental  settings or exogenous  stimuli to the system

The Game of Life (Conway, 1970)

A game with two states {dead, alive} and the rules:

Each cell checks its own Life State and those of the cells in its local neighbourhood at a Moore distance of 1. If alive, a pixel is displayed; if dead, it is not. If a cell has fewer than two or more than three live neighbours, it is set to dead. If a cell has exactly three live neighbours, its Life State is set to alive. Randomised activation of cells continues “forever.”

It uses the concepts of cellular automata, Moore or von Neumann distance, and distance-based neighbourhoods.
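
A minimal implementation of these rules, assuming Python/numpy, a toroidal (wrap-around) grid, and synchronous updates rather than the randomised activation mentioned above:

```python
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """One synchronous Game of Life update on a toroidal grid of 0/1 cells."""
    # count live neighbours in the Moore neighbourhood (distance 1)
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))  # 2-3 neighbours: stay alive
    born = (grid == 0) & (neighbours == 3)                           # exactly 3: come alive
    return (survive | born).astype(int)

grid = (np.random.random((20, 20)) < 0.3).astype(int)   # random initial state
for _ in range(50):
    grid = step(grid)
print(grid.sum(), "cells alive after 50 steps")
```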

Other famous ABMs are flocking (Reynolds, 1987), swarming (Bonabeau & Meyer, 2001), residential segregation (Schelling, 1969), and residential segregation using vector-based GIS (Crooks, 2010).

References

Bonabeau, E., & Meyer, C. (2001). Swarm intelligence. Harvard Business Review, 79(5), 106–114.
Cioffi-Revilla, C. (2013). Introduction to computational social science: principles and applications. Springer Science & Business Media.
Conway, J. (1970). The game of life. Scientific American, 223(4), 4.
Crooks, A. T. (2010). Constructing and implementing an agent-based model of residential segregation through vector GIS. International Journal of Geographical Information Science, 24(5), 661–675.
Miller, J. H., & Page, S. E. (2009). Complex adaptive systems: An introduction to computational models of social life. Princeton University Press.
Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. ACM SIGGRAPH Computer Graphics, 21(4), 25–34.
Schelling, T. C. (1969). Models of segregation. The American Economic Review, 59(2), 488–493.
