Imaginary beliefs

In a discussion of the utility of religious beliefs, Norm makes this claim:

A person can’t intelligibly say, ‘I know that p is false, but it’s useful for me to think it’s true, so I will.’

(Here, p is some proposition – that is, some statement about the world which may be either true or false, but not both and not neither.)
In fact, a person can indeed intelligibly say this, and pure mathematicians do it all the time.  Perhaps the example in mathematics which is easiest to grasp is the use of the square root of minus one, the number usually denoted by the symbol i.  Negative numbers cannot have real square roots, since no real number multiplied by itself yields a negative result.  However, it turns out that believing that these imaginary numbers do exist leads to a beautiful and subtle mathematical theory, called the theory of complex numbers.  This theory has multiple practical applications, from mathematics to physics to engineering.  One area of application, known for about a century, is the theory of alternating current in electricity; blogging – among much else of modern life – would perhaps be impossible, or at least very different, without this belief in imaginary entities underpinning the theory of electricity.
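The practical payoff of taking i seriously can be sketched in a few lines of Python, whose built-in complex type implements exactly this arithmetic.  The application is the standard one from alternating-current theory: the impedance of a series RLC circuit.  (The component values below are invented purely for illustration.)

```python
import cmath

# Treating sqrt(-1) as a number that "exists" lets us compute with it directly.
j = complex(0, 1)      # engineers write j rather than i
assert j * j == -1     # the defining property: j squared is minus one

# Impedance of a series RLC circuit driven by alternating current at
# angular frequency w.  Component values are illustrative only.
R, L, C = 100.0, 0.5, 1e-6     # ohms, henries, farads
w = 2 * cmath.pi * 50          # 50 Hz mains supply

Z = R + j * w * L + 1 / (j * w * C)   # complex impedance

# The magnitude gives the effective resistance; the phase gives the lag
# between voltage and current -- both fall out of the complex arithmetic.
magnitude = abs(Z)
phase_degrees = cmath.phase(Z) * 180 / cmath.pi
print(f"|Z| = {magnitude:.1f} ohms, phase = {phase_degrees:.1f} degrees")
```

The "imaginary" part of Z never appears in any measurement; it is a belief-like bookkeeping device that makes the real, measurable answers easy to compute.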
And, as I have argued before (eg, here and here), effective business strategy development and planning under uncertainty requires holding multiple incoherent beliefs about the world simultaneously.  The scenarios created by scenario planners are examples of such mutually inconsistent beliefs about the world.  Most people – and most companies – find it difficult to maintain and act upon mutually-inconsistent beliefs.  For that reason the company that pioneered the use of scenario planning, Shell, has always tried to ensure that probabilities are never assigned to scenarios, because managers tend to give greater credence, and hence attention, to scenarios having higher probabilities.  The utilitarian value of scenario planning is greatest when planners consider seriously the consequences of low-likelihood, high-impact scenarios (as Shell found after the OPEC oil-price shock of 1973), not the scenarios they think are most probable.  To do this well, planners need to believe statements that they judge to be false, or at least act as if they believe these statements.
Here and here I discuss another example, taken from espionage history.

Alan Greenspan in 2004

Alan Greenspan, then Chairman of the US Federal Reserve System, speaking in January 2004, discussed the failure of traditional methods in econometrics to provide adequate guidance to monetary policy decision-makers.  His words included:

Given our inevitably incomplete knowledge about key structural aspects of an ever-changing economy and the sometimes asymmetric costs or benefits of particular outcomes, a central bank needs to consider not only the most likely future path for the economy but also the distribution of possible outcomes about that path. The decisionmakers then need to reach a judgment about the probabilities, costs, and benefits of the various possible outcomes under alternative choices for policy.
. . .
The product of a low-probability event and a potentially severe outcome was judged a more serious threat to economic performance than the higher inflation that might ensue in the more probable scenario.
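Greenspan's reasoning here is, at bottom, an expected-cost comparison in which a severe tail outcome dominates the choice.  A minimal sketch, with probabilities and costs invented purely for illustration (these are not Greenspan's figures):

```python
# Hypothetical numbers: compare two policy options by expected cost,
# where each outcome is a (probability, severity) pair in arbitrary units.
scenarios_hold = [
    (0.90, 1.0),    # likely: benign outcome if rates are held
    (0.10, 50.0),   # unlikely but severe: a deflationary spiral
]
scenarios_ease = [
    (0.95, 3.0),    # likely: somewhat higher inflation
    (0.05, 5.0),    # unlikely: inflation overshoots further
]

def expected_cost(scenarios):
    # Sum of probability-weighted severities across all outcomes.
    return sum(p * cost for p, cost in scenarios)

# Easing has the worse most-likely outcome (3.0 vs 1.0), yet the lower
# expected cost, because it avoids the low-probability, high-severity tail.
print(expected_cost(scenarios_hold))   # 0.90*1 + 0.10*50 = 5.9
print(expected_cost(scenarios_ease))   # 0.95*3 + 0.05*5  = 3.1
```

The decision thus turns not on the most probable scenario but on the product of probability and severity, exactly as the quoted passage describes.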

Self-fulfilling prophecies

It has always struck me that Karl Marx’s prediction that capitalism would be eclipsed by socialism and then by communism was a self-denying prophecy: because he made this prediction, and because of the widespread popularity of his (and other socialists’) ideas, politicians and businessmen were moved to act in ways which allowed capitalism to adapt, rather than to die. It seems that the end of communism may have been partly due to similar reflective-system effects.
In her book, Stasiland: True Stories from Behind the Berlin Wall, Anna Funder writes the following about the opposition to the Socialist Unity Party (SED) in the former German Democratic Republic (the DDR):

I once saw a note on a Stasi file from early 1989 that I would never forget. In it a young lieutenant alerted his superiors to the fact that there were so many informers in church opposition groups at demonstrations that they were making these groups appear stronger than they really were. In one of the most beautiful ironies I have ever seen, he dutifully noted that it appeared that, by having swelled the ranks of the opposition, the Stasi was giving the people heart to keep demonstrating against them. (pp. 197-198)

NOTE:  A comment about the processes which led to the end of communism in the USSR is contained in this post.
Anna Funder [2003]: Stasiland: True Stories from Behind the Berlin Wall. (London, UK: Granta Books).

Resilient capitalism

Yesterday began with a meeting at an investment bank in Paternoster Square, London, which turned out to be inaccessible to visitors and the public.  The owners of the Square had asked the police to close public access to prevent its occupation by the anti-capitalist Occupy protesters encamped between the Square and St Paul’s Cathedral.  So our meeting took place in a cafe beside the square.

The day ended with a debate at the Royal Society, organized by The Foundation for Science and Technology, on developing adaptation policy in response to climate change.  The speakers were Dr Rupert Lewis of DEFRA, Sir Graham Wynne of the Sub-Committee on Adaptation, UK Committee on Climate Change, and Tom Bolt, Director of Performance Management at Lloyd’s of London.  (Their presentations will eventually be posted here.)  As Bolt remarked, insurance companies have to imagine potential global futures in which climate change has wreaked social and economic havoc, and so are major consumers of scientific prognoses.  One commentator from the audience suggested that insurers, particularly, may have a vested short-term financial interest in us all being pessimistic about the long-term future, although this inference was not obvious to me: one human reaction to a belief in a certainly-ruinous future is not to save or insure for it, but rather to spend today.
A very interesting issue raised by some audience members is just how we engineer and build infrastructure for adaptability.  What would a well-adapted society look like?  One imagines that the floating houses built in the Netherlands to survive floods would fit any such description.  Computer scientists have some experience in designing and managing robust, resilient and adaptive systems, and so it may be useful to examine that experience for lessons for design and engineering efforts for other infrastructure.

Markets as feedback mechanisms

I just posted after hearing a talk by economic journalist Tim Harford at LSE.  At the end of that post, I linked to a critical review of Harford’s latest book,  Adapt – Why Success Always Starts with Failure, by Whimsley.  This review quotes Harford talking about markets as feedback mechanisms:

To identify successful strategies, Harford argues that “we should not try to design a better world. We should make better feedback loops” (140) so that failures can be identified and successes capitalized on. Harford just asserts that “a market provides a short, strong feedback loop” (141), because “If one cafe is ordering a better combination of service, range of food, prices, decor, coffee blend, and so on, then more customers will congregate there than at the cafe next door“, but everyday small-scale examples like this have little to do with markets for credit default swaps or with any other large-scale operation.

Yes, indeed.  The lead-time between undertaking initial business planning in order to raise early capital investments and the launching of services to the public for global satellite communications networks is on the order of 10 years (since satellites, satellite networks and user devices need to be designed, manufactured, approved by regulators, deployed, and connected before they can provide service).  The time between initial business planning and the final decommissioning of an international gas or oil pipeline is about 50 years.  The time between initial business planning and the final decommissioning of an international undersea telecommunications cable may be as long as 100 years.  As I remarked once previously, the design of Transmission Control Protocol (TCP) packets, the primary engine of communication in the 21st-century Internet, is closely modeled on the design of telegrams first sent in the middle of the 19th century.  Some markets, if they work at all, only work over the long run, but as Keynes famously said, in the long run we are all dead.
I have experience of trying to design telecoms services for satellite networks (among others), knowing that any accurate feedback for design decisions may come late or not at all, and when it comes may be vague and ambiguous, or even misleading.   Moreover, the success or failure of the selected marketing strategy may not ever be clear, since its success may depend on the quality of execution of the strategy, so that it may be impossible to determine what precisely led to the outcome.   I have talked about this issue before, both regarding military strategies and regarding complex decisions in general.  If the quality of execution also influences success (as it does), then just who or what is the market giving feedback to?
In other words, these coffees are not always short and strong (in Harford’s words), but may be cold, weak, very very slow in arriving, and even their very nature contested.  I’ve not yet read Harford’s book, but if he thinks all business is as simple as providing FMCG (fast-moving consumer goods) services, his book is not worth reading.
Once again, an economist argues by anecdote and example.  And once again, I wonder at the world:  That economists have a reputation for talking about reality, when most of them evidently know so little about it, or reduce its messy complexities to homilies based on the operation of suburban coffee shops.

Tim Harford at LSE: Dirigisme in action

This week I heard economic journalist Tim Harford talk at the London School of Economics (LSE), on a whirlwind tour (7 talks, I think he told us, this week) to promote his new book.  Each talk is on one topic covered in the book, and at LSE he talked about the Global Financial Crisis (GFC) and his suggestions for preventing its recurrence.
Harford’s talk itself was chatty, anecdotal, and witty.  Economics is still in deep thrall to its 19th-century fascination with physical machines, and this talk was no exception.  The anecdotes mostly concerned Great Engineering Disasters of our time, with Harford emphasizing the risks that arise from tight coupling of components in systems and, ironically, from frequent misguided attempts to improve their safety which only worsen it.
Anecdotal descriptions of failed engineering artefacts may have relevance to preventing a repeat of the GFC, but Harford did not make any case that they do.  He just gave examples from engineering and from financial markets, and asserted that these were examples of the same conceptual phenomena.  However, as metaphors for economies, machines and mechanical systems are worse than useless, since they emphasize in people’s minds, especially in the minds of regulators and participants, mechanical and stand-alone aspects of systems which are completely inappropriate here.  Economies and marketplaces are NOT like machines, with inanimate parts whose relationships are static and that move when levers are pulled, or effects which can be known or predicted when causes are instantiated, or components designed centrally to achieve some global objectives.  Autonomous, intelligent components having dynamic relationships describe few machines or mechanical systems, and certainly none from the 19th century.
A better category of failure metaphors would be ecological and biological.  We introduced cane toads to North Queensland to prey upon a sugar-cane pest, and the cane toads, having no predators themselves, took over the country.  Unintended and unforeseen consequences of actions arise not merely because the system is complex or its parts tightly-coupled, but because the system comprises multiple autonomous and goal-directed actors with different beliefs, histories and motivations, and whose relationships with one another change as a result of their interactions.
Where, I wanted to shout to Harford, were the ecological metaphors?  Why, I wanted to ask, does this 19th-century fascination with deterministic, centralized machines and mechanisms persist in economics, despite its obvious irrelevance and failings? Who, if not rich FT journalists with time to write books, I wanted to know, will think differently about these problems?
Finally, only economists strongly in favour of allowing market forces to operate unfettered would have used the dirigismic methods that the LSE did to allocate people to seats for this lecture.  We were forced to sit in rows in our order of arrival in the auditorium. Why was this?  When I asked an usher for the reason, the answer I was given made no sense:   Because we expect a full hall.    Why were the organizers so afraid of allowing people to exercise their own preferences as to where to sit?  We don’t all have the same hearing and sight capabilities, we don’t all have the same preferences as to side of the hall, or side  of the aisle, etc. We don’t all arrive in parties of the same size.  We don’t all want to sit behind a tall person or near a noisy group.
The hall was not full, as it happened, so we were crammed into place in part of the hall like passive objects in a consumer choice model of voting, instead of as free, active citizens in a democracy occupying whatever position we most preferred of those still available.  But even if the hall had been full, there are less-centralized and less-unfriendly methods of matching people to seats.  The 20 or so LSE student ushers on hand, for instance, could have been scattered about the hall to direct latecomers to empty seats, rather than lining the aisles like red-shirted troops to prevent people sitting where they wanted to.
What hope is there that our economic problems will be solved when the London School of Economics, of all places, uses central planning to sit people in public lectures?
Update: There is an interesting critical review of Harford’s latest book, here.

What use are models?

What are models for?  Most developers and users of models, in my experience, seem to assume the answer to this question is obvious and thus never raise it.  In fact, modeling has many potential purposes, and some of these conflict with one another.  Some of the criticisms made of particular models arise from misunderstandings or misperceptions of the purposes of those models, and the modeling activities which led to them.
Liking cladistics as I do, I thought it useful to list all the potential purposes of models and modeling.  The only discussion of this topic that I know is a brief one by game theorist Ariel Rubinstein in an appendix to a book on modeling rational behaviour (Rubinstein 1998).  Rubinstein considers several alternative purposes for economic modeling, but ignores many others.  My list is as follows (to be expanded and annotated in due course):

  • 1. To better understand some real phenomena or existing system.   This is perhaps the most commonly perceived purpose of modeling, in the sciences and the social sciences.
  • 2. To predict (some properties of) some real phenomena or existing system.  A model aiming to predict some domain may be successful without aiding our understanding of the domain at all.  Isaac Newton’s model of the motion of planets, for example, was predictive but not explanatory.  I understand that physicist David Deutsch argues that predictive ability is not an end of scientific modeling but a means, since it is how we assess and compare alternative models of the same phenomena.  This is wrong on both counts: prediction IS an end of much modeling activity (especially in business strategy and public policy domains), and it is not the only means we use to assess models.  Indeed, for many modeling activities, calibration and prediction are problematic, and so predictive capability may not even be possible as a form of model assessment.
  • 3. To manage or control (some properties of) some real phenomena or existing system.
  • 4. To better understand a model of some real phenomena or existing system.  Arguably, most of economic theorizing and modeling falls into this category, and Rubinstein’s preferred purpose is this type.  Macro-economic models, if they are calibrated at all, are calibrated against artificial, human-defined variables such as employment, GDP and inflation, variables which may themselves bear a tenuous and dynamic relationship to any underlying economic reality.  Micro-economic models, if they are calibrated at all, are often calibrated with stylized facts, abstractions and simplifications of reality which economists have come to regard as representative of the domain in question.  In other words, economic models are not usually calibrated against reality directly, but against other models of reality.  Similarly, large parts of contemporary mathematical physics (such as string theory and brane theory) have no access to any physical phenomena other than via the mathematical model itself: our only means of apprehension of vibrating strings in inaccessible dimensions beyond the four we live in, for instance, is through the mathematics of string theory.  In this light, it seems nonsense to talk about the effectiveness, reasonable or otherwise, of mathematics in modeling reality, for how could we tell?
  • 5. To predict (some properties of) a model of some real phenomena or existing system.
  • 6. To better understand, predict or manage some intended (not-yet-existing) artificial system, so to guide its design and development.   Understanding a system that does  not yet exist is qualitatively different to understanding an existing domain or system, because the possibility of calibration is often absent and because the model may act to define the limits and possibilities of subsequent design actions on the artificial system.  The use of speech act theory (a model of natural human language) for the design of artificial machine-to-machine languages, or the use of economic game theory (a mathematical model of a stylized conceptual model of particular micro-economic realities) for the design of online auction sites are examples here.   The modeling activity can even be performative, helping to create the reality it may purport to describe, as in the case of the Black-Scholes model of options pricing.
  • 7. To provide a locus for discussion between relevant stakeholders in some business or public policy domain.  Most large-scale business planning models have this purpose within companies, particularly when multiple partners are involved.  Likewise, models of major public policy issues, such as epidemics, have this function.  In many complex domains, such as those in public health, models provide a means to tame and domesticate the complexity of the domain.  This helps stakeholders to jointly consider concepts, data, dynamics, policy options, and assessment of potential consequences of policy options,  all of which may need to be socially constructed. 
  • 8. To provide a means for identification, articulation and potentially resolution of trade-offs and their consequences in some business or public policy domain.   This is the case, for example, with models of public health risk assessment of chemicals or new products by environmental protection agencies, and models of epidemics deployed by government health authorities.
  • 9. To enable rigorous and justified thinking about the assumptions and their relationships to one another in modeling some domain.   Business planning models usually serve this purpose.   They may be used to inform actions, both to eliminate or mitigate negative consequences and to enhance positive consequences, as in retroflexive decision making.
  • 10. To enable a means of assessment of managerial competencies of the people undertaking the modeling activity. Investors in start-ups know that the business plans of the company founders are likely to be out of date very quickly.  The function of such business plans is not to model reality accurately, but to force rigorous thinking about the domain, and to provide a means by which potential investors can challenge the assumptions and thinking of management as way of probing the managerial competence of those managers.    Business planning can thus be seen to be a form of epideictic argument, where arguments are assessed on their form rather than their content, as I have argued here.
  • 11. As a means of play, to enable the exercise of human intelligence, ingenuity and creativity, in developing and exploring the properties of models themselves.  This purpose is true of that human activity known as doing pure mathematics, and perhaps of most of that academic activity known as doing mathematical economics.   As I have argued before, mathematical economics is closer to theology than to the modeling undertaken in the natural sciences. I see nothing wrong with this being a purpose of modeling, although it would be nice if academic economists were honest enough to admit that their use of public funds was primarily in pursuit of private pleasures, and any wider social benefits from their modeling activities were incidental.
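What it means to calibrate a model against another model (purposes 4 and 5 above), rather than against reality, can be sketched concretely.  In this entirely synthetic toy, a simple linear model is fitted to, and assessed against, data generated by a richer "reference" model, so that reality never enters the exercise at any point:

```python
import random

random.seed(42)

# A richer "reference" model stands in for reality: a noisy nonlinear
# process.  The simple model below is judged against this artefact,
# never against any real-world observations.
def reference_model(x):
    return 2.0 * x + 0.3 * x * x + random.gauss(0, 0.5)

xs = [i / 10 for i in range(50)]
ys = [reference_model(x) for x in xs]

# Fit a linear model y = a*x + b by ordinary least squares: we are
# "understanding" and "predicting" the reference model, not the world.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

# Goodness of fit is assessed purely against the other model's output.
residual = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / n
print(f"fitted y = {a:.2f}x + {b:.2f}, mean squared residual = {residual:.3f}")
```

Every assessment statistic here is a statement about the relationship between two models; whether either resembles any underlying reality is a separate question the calculation cannot answer.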

POSTSCRIPT (Added 2011-06-17):  I have just seen Joshua Epstein’s 2008 discussion of the purposes of modeling in science and social science.   Epstein lists 17 reasons to build explicit models (in his words, although I have added the label “0” to his first reason):

0. Prediction
1. Explain (very different from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound (bracket) outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time. [Presumably, Epstein means “crisis-response options” here.]
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialog
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple).

These are at a lower level than my list, and I believe some of his items are the consequences of purposes rather than purposes themselves, at least for honest modelers (eg, #11, #12, #16).
Joshua M Epstein [2008]: Why model? Keynote address to the Second World Congress on Social Simulation, George Mason University, USA.  Available here (PDF).
Robert E Marks [2007]:  Validating simulation models: a general framework and four applied examples. Computational Economics, 30 (3): 265-290.
David F Midgley, Robert E Marks and D Kunchamwar [2007]:  The building and assurance of agent-based models: an example and challenge to the field. Journal of Business Research, 60 (8): 884-893.
Robert Rosen [1985]: Anticipatory Systems. Pergamon Press.
Ariel Rubinstein [1998]: Modeling Bounded Rationality. Cambridge, MA, USA: MIT Press.  Zeuthen Lecture Book Series.
Ariel Rubinstein [2006]: Dilemmas of an economic theorist. Econometrica, 74 (4): 865-883.

Distributed cognition

Some excerpts from an ethnographic study of the operations of a Wall Street financial trading firm, bearing on distributed cognition and joint-action planning:

This emphasis on cooperative interaction underscores that the cognitive tasks of the arbitrage trader are not those of some isolated contemplative, pondering mathematical equations and connected only to a screen-world.  Cognition at International Securities is a distributed cognition.  The formulas of new trading patterns are formulated in association with other traders.  Truly innovative ideas, as one senior trader observed, are slowly developed through successions of discreet one-to-one conversations.
. . .
An idea is given form by trying it out, testing it on others, talking about it with the “math guys,” who, significantly, are not kept apart (as in some other trading rooms), and discussing its technical intricacies with the programmers (also immediately present).  (p. 265)
The trading room thus shows a particular instance of Castells’ paradox: as more information flows through networked connectivity, the more important become the kinds of interactions grounded in a physical locale.  New information technologies, Castells (2000) argues, create the possibility for social interaction without physical contiguity.  The downside is that such interactions can become repetitive and programmed in advance.  Given this change, Castells argues that as distanced, purposeful, machine-like interactions multiply, the value of less-directed, spontaneous, and unexpected interactions that take place in physical contiguity will become greater (see also Thrift 1994; Brown and Duguid 2000; Grabher 2002).  Thus, for example, as surgical techniques develop together with telecommunications technology, the surgeons who are intervening remotely on patients in distant locations are disproportionately clustering in two or three neighbourhoods of Manhattan where they can socialize with each other and learn about new techniques, etc.  (p. 266)
“One exemplary passage from our field notes finds a senior trader formulating an arbitrageur’s version of Castells’ paradox:
“It’s hard to say what percentage of time people spend on the phone vs. talking to others in the room.   But I can tell you the more electronic the market goes, the more time people spend communicating with others inside the room.”  (p. 267)
Of the four statistical arbitrage robots, a senior trader observed:
“We don’t encourage the four traders in statistical arb to talk to each other.  They sit apart in the room.  The reason is that we have to keep diversity.  We could get really hammered if the different robots would have the same P&L [profit and loss] patterns and the same risk profiles.”  (p. 283)
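The trader’s worry about identical P&L patterns is the standard diversification point: combining strategies reduces risk only to the extent that their returns are imperfectly correlated.  A toy calculation, with all numbers invented, makes the arithmetic of "keeping diversity" explicit:

```python
import math

# Toy illustration: four trading strategies, each with the same
# standalone P&L volatility, combined into one book.
n = 4
sigma = 1.0   # standalone volatility of each strategy's P&L

def book_volatility(correlation):
    # Variance of a sum of n equally correlated, equal-variance returns:
    # n*sigma^2 + n*(n-1)*correlation*sigma^2
    variance = n * sigma**2 + n * (n - 1) * correlation * sigma**2
    return math.sqrt(variance)

# Identical robots (correlation 1) give no diversification at all;
# independent robots (correlation 0) mean risk grows only as sqrt(n).
print(book_volatility(1.0))   # 4.0 -- the four risks simply add
print(book_volatility(0.0))   # 2.0 -- half the risk of the identical case
```

Keeping the four statistical-arbitrage traders apart is, on this reading, an organisational device for holding the correlation term down.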

Daniel Beunza and David Stark [2008]:  Tools of the trade: the socio-technology of arbitrage in a Wall Street trading room.  In: Trevor Pinch and Richard Swedberg (Editors): Living in a Material World: Economic Sociology Meets Science and Technology Studies. Cambridge, MA, USA: MIT Press.  Chapter 8, pp. 253-290.
Manuel Castells [2000]:  The Information Age: Economy, Society and Culture. Oxford, UK: Blackwell.  Second Edition.

In defence of futures thinking

Norm at Normblog has a post defending theology as a legitimate area of academic inquiry, after an attack on theology by Oliver Kamm.  (Since OK’s post is behind a paywall, I have not read it, so my comments here may be awry with respect to that post.)  Norm argues, very correctly, that it is legitimate for theology, considered as a branch of philosophy, to reflect, inter alia, on the properties of entities whose existence has not yet been proven.  In strong support of Norm, let me add: not just in philosophy!
In business strategy, good decision-making requires consideration of the consequences of potential actions, which in turn requires consideration of the potential actions of other actors and stakeholders in response to the first set of actions.  These actors may include entities whose existence is not yet known or even suspected, for example, future competitors to a product whose launch creates a new product category.  Why, there’s even a whole branch of strategy analysis devoted to scenario planning, a discipline that began in the military analysis of alternative post-nuclear worlds, and whose very essence involves the creation of imagined futures (for forecasting and prognosis) and/or imagined pasts (for diagnosis and analysis).  Every good air-crash investigation, medical diagnosis, and police homicide investigation, for instance, involves the creation of imagined alternative pasts, and often the creation of imaginary entities in those imagined pasts, whose fictional attributes we may explore at length.  Arguably, in one widespread view of the philosophy of mathematics, pure mathematicians do nothing but explore the attributes of entities without material existence.
And not just in business, medicine, the military, and the professions.   In computer software engineering, no new software system development is complete without due and rigorous consideration of the likely actions of users or other actors with and on the system, for example.   Users and actors here include those who are the intended target users of the system, as well as malevolent or whimsical or poorly-behaved or bug-ridden others, both human and virtual, not all of whom may even exist when the system is first developed or put into production.      If creative articulation and manipulation of imaginary futures (possible or impossible) is to be outlawed, not only would we have no literary fiction or much poetry, we’d also have few working software systems either.