What use are models?

What are models for?   Most developers and users of models, in my experience, seem to assume the answer to this question is obvious and thus never raise it.   In fact, modeling has many potential purposes, and some of these conflict with one another.   Some of the criticisms made of particular models arise from misunderstandings or misperceptions of the purposes of those models and of the modeling activities which led to them.
Liking cladistics as I do, I thought it useful to list all the potential purposes of models and modeling.   The only discussion of this topic that I know of is a brief one by game theorist Ariel Rubinstein, in an appendix to a book on modeling rational behaviour (Rubinstein 1998).  Rubinstein considers several alternative purposes for economic modeling, but ignores many others.   My list is as follows (to be expanded and annotated in due course):

  • 1. To better understand some real phenomena or existing system.   This is perhaps the most commonly perceived purpose of modeling, in the sciences and the social sciences.
  • 2. To predict (some properties of) some real phenomena or existing system.  A model aiming to predict some domain may be successful without aiding our understanding of the domain at all.  Isaac Newton’s model of the motion of planets, for example, was predictive but not explanatory.   I understand that physicist David Deutsch argues that predictive ability is not an end of scientific modeling but a means, since it is how we assess and compare alternative models of the same phenomena.    This is wrong on both counts:  prediction IS an end of much modeling activity (especially in business strategy and public policy domains), and it is not the only means we use to assess models.  Indeed, for many modeling activities, calibration and prediction are problematic, and so predictive capability may not even be possible as a form of model assessment.
  • 3. To manage or control (some properties of) some real phenomena or existing system.
  • 4. To better understand a model of some real phenomena or existing system.  Arguably, most of economic theorizing and modeling falls into this category, and Rubinstein’s preferred purpose is this type.   Macro-economic models, if they are calibrated at all, are calibrated against artificial, human-defined, variables such as employment, GDP and inflation, variables which may themselves bear a tenuous and dynamic relationship to any underlying economic reality.   Micro-economic models, if they are calibrated at all, are often calibrated with stylized facts, abstractions and simplifications of reality which economists have come to regard as representative of the domain in question.    In other words, economic models are not usually calibrated against reality directly, but against other models of reality.  Similarly, large parts of contemporary mathematical physics (such as string theory and brane theory) have no access to any physical phenomena other than via the mathematical model itself:  our only means of apprehension of vibrating strings in inaccessible dimensions beyond the four we live in, for instance, is through the mathematics of string theory.    In this light, it seems nonsense to talk about the effectiveness, reasonable or otherwise, of mathematics in modeling reality, for how could we tell?
  • 5. To predict (some properties of) a model of some real phenomena or existing system.
  • 6. To better understand, predict or manage some intended (not-yet-existing) artificial system, so as to guide its design and development.   Understanding a system that does not yet exist is qualitatively different to understanding an existing domain or system, because the possibility of calibration is often absent and because the model may act to define the limits and possibilities of subsequent design actions on the artificial system.  The use of speech act theory (a model of natural human language) for the design of artificial machine-to-machine languages, or the use of economic game theory (a mathematical model of a stylized conceptual model of particular micro-economic realities) for the design of online auction sites are examples here.   The modeling activity can even be performative, helping to create the reality it may purport to describe, as in the case of the Black-Scholes model of options pricing.
  • 7. To provide a locus for discussion between relevant stakeholders in some business or public policy domain.  Most large-scale business planning models have this purpose within companies, particularly when multiple partners are involved.  Likewise, models of major public policy issues, such as epidemics, have this function.  In many complex domains, such as those in public health, models provide a means to tame and domesticate the complexity of the domain.  This helps stakeholders to jointly consider concepts, data, dynamics, policy options, and assessment of potential consequences of policy options,  all of which may need to be socially constructed. 
  • 8. To provide a means for identification, articulation and potentially resolution of trade-offs and their consequences in some business or public policy domain.   This is the case, for example, with models of public health risk assessment of chemicals or new products by environmental protection agencies, and models of epidemics deployed by government health authorities.
  • 9. To enable rigorous and justified thinking about the assumptions and their relationships to one another in modeling some domain.   Business planning models usually serve this purpose.   They may be used to inform actions, both to eliminate or mitigate negative consequences and to enhance positive consequences, as in retroflexive decision making.
  • 10. To enable a means of assessment of managerial competencies of the people undertaking the modeling activity. Investors in start-ups know that the business plans of the company founders are likely to be out of date very quickly.  The function of such business plans is not to model reality accurately, but to force rigorous thinking about the domain, and to provide a means by which potential investors can challenge the assumptions and thinking of management as a way of probing the managerial competence of those managers.    Business planning can thus be seen to be a form of epideictic argument, where arguments are assessed on their form rather than their content, as I have argued here.
  • 11. As a means of play, to enable the exercise of human intelligence, ingenuity and creativity, in developing and exploring the properties of models themselves.  This purpose is true of that human activity known as doing pure mathematics, and perhaps of most of that academic activity known as doing mathematical economics.   As I have argued before, mathematical economics is closer to theology than to the modeling undertaken in the natural sciences. I see nothing wrong with this being a purpose of modeling, although it would be nice if academic economists were honest enough to admit that their use of public funds was primarily in pursuit of private pleasures, and any wider social benefits from their modeling activities were incidental.

POSTSCRIPT (Added 2011-06-17):  I have just seen Joshua Epstein’s 2008 discussion of the purposes of modeling in science and social science.   Epstein lists 17 reasons to build explicit models (in his words, although I have added the label “0” to his first reason):

0. Prediction
1. Explain (very different from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound (bracket) outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time. [Presumably, Epstein means “crisis-response options” here.]
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialog
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple).

These are at a lower level than my list, and I believe some of his items are the consequences of purposes rather than purposes themselves, at least for honest modelers (eg, #11, #12, #16).
References:
Joshua M Epstein [2008]: Why model? Keynote address to the Second World Congress on Social Simulation, George Mason University, USA.  Available here (PDF).
Robert E Marks [2007]:  Validating simulation models: a general framework and four applied examples. Computational Economics, 30 (3): 265-290.
David F Midgley, Robert E Marks and D Kunchamwar [2007]:  The building and assurance of agent-based models: an example and challenge to the field. Journal of Business Research, 60 (8): 884-893.
Robert Rosen [1985]: Anticipatory Systems. Pergamon Press.
Ariel Rubinstein [1998]: Modeling Bounded Rationality. Cambridge, MA, USA: MIT Press.  Zeuthen Lecture Book Series.
Ariel Rubinstein [2006]: Dilemmas of an economic theorist. Econometrica, 74 (4): 865-883.

The otherness of the other

In previous posts (eg, here and here), I have talked about the difficulty of assessing the intentions of others, whether for marketing or for computer network design or for national security. The standard English phrase speaks of “putting ourselves in the other person’s shoes”.  But this is usually not sufficient:  we have to put them into their shoes, with their beliefs, their history, their desires, and their constraints, not ourselves, in order to understand their goals and intentions, and to anticipate their likely strategies and actions.    In a fine political thriller by Henry Porter, I came across this statement (page 220):

‘Motive is always difficult to read,’ he replied.  ‘We make a rational assumption about someone’s behaviour based on what we would, or would not, do in the same circumstances, ignoring the otherness of the other. We consider only influences that make us what we are and impose those beliefs on them.  It is the classic mistake of intelligence analysis.’

Reference:
Henry Porter [2009]: The Dying Light. London, UK:  Orion Books.
Obscure fact:  Porter (born 1953) is the grand-nephew of novelist Howard Sturgis (1855-1920), step-cousin to George Santayana (1863-1952).

On Getting Things Done

New York Times op-ed writer David Brooks has two superb articles about the skills needed to be a success in contemporary technological society, the skills I refer to as Getting-Things-Done Intelligence.  One is a short article in The New York Times (2011-01-17), reacting to the common, but wrong-headed, view that technical skill is all you need for success, and the other a long, fictional disquisition in The New Yorker (2011-01-17) on the social skills of successful people.  From the NYT article:

Practicing a piece of music for four hours requires focused attention, but it is nowhere near as cognitively demanding as a sleepover with 14-year-old girls. Managing status rivalries, negotiating group dynamics, understanding social norms, navigating the distinction between self and group — these and other social tests impose cognitive demands that blow away any intense tutoring session or a class at Yale.
Yet mastering these arduous skills is at the very essence of achievement. Most people work in groups. We do this because groups are much more efficient at solving problems than individuals (swimmers are often motivated to have their best times as part of relay teams, not in individual events). Moreover, the performance of a group does not correlate well with the average I.Q. of the group or even with the I.Q.’s of the smartest members.
Researchers at the Massachusetts Institute of Technology and Carnegie Mellon have found that groups have a high collective intelligence when members of a group are good at reading each others’ emotions — when they take turns speaking, when the inputs from each member are managed fluidly, when they detect each others’ inclinations and strengths.
Participating in a well-functioning group is really hard. It requires the ability to trust people outside your kinship circle, read intonations and moods, understand how the psychological pieces each person brings to the room can and cannot fit together.
This skill set is not taught formally, but it is imparted through arduous experiences. These are exactly the kinds of difficult experiences Chua shelters her children from by making them rush home to hit the homework table.”

These articles led me to ask exactly what is involved in reading a social situation?  Brooks mentions some of the relevant aspects, but not all.   To be effective, a manager needs to parse the social situation of the groups he or she must work with – those under, those over and peer groups to the side – to answer questions such as the following:

  • Who has power or influence over each group?  Is this exercised formally or informally?
  • What are the norms and practices of the group, both explicit and implicit, known and unconscious?
  • Who in the group is reliable as a witness?   Whose stories can be believed?
  • Who has agendas and what are these?
  • Who in the group is competent or capable or intelligent?  Whose promises to act can be relied upon?  Who, in contrast, needs to be monitored or managed closely?
  • What constraints does the group or its members operate under?  Can these be removed or side-stepped?
  • What motivates the members of the group?  Can or should these motivations be changed, or enhanced?
  • Who is open to new ideas, to change, to improvements?
  • What obstacles and objections will arise in response to proposals for change?  Who will raise these?  Will these objections be explicit or hidden?
  • Who will resist or oppose change?  In what ways? Who will exercise pocket vetoes?

Parsing new social situations – ie, answering these questions in a specific situation – is not something done in a few moments.  It may take years of observation and participation to understand a new group in which one is an outsider.  People who are good at this may be able to parse the key features of a new social landscape within a few weeks or months, depending on the level of access they have, and the willingness of the group members to trust them.     Good management consultants, provided their sponsors are sufficiently senior, can often achieve an understanding within a few weeks.   Experience helps.
Needless to say, most academic research is pretty useless for these types of questions.  Management theory has either embarked on the reduce-and-quantify-and-replicate model of academic psychology, or else produced the narrative descriptions of successful organizations found in most books by business gurus.   Narrative descriptions of failures would be far more useful.
The best training for being able to answer such questions – apart from experience of life – is the study of anthropology or literature:  Anthropology because it explores the social structures of other cultures and the factors within a single lifetime which influence these structures, and Literature because it explores the motivations and consequences of human actions and interactions.   The golden age of television drama we are currently fortunate to be witness to also provides good training for viewers in human motivations, actions and interactions.  It is no coincidence, in my view, that the British Empire was created and run by people mostly trained in Classics, with its twofold combination of the study of alien cultures and literatures, together with the analytical rigor and intellectual discipline acquired through the incremental learning of those difficult subjects, Latin and Ancient Greek languages.
UPDATE (2011-02-16): From Noam Scheiber’s profile of US Treasury Secretary Timothy Geithner in The New Republic (2011-02-10):

“Tim’s real strength … is that he’s really quick at reading the culture of any institutions,” says Leslie Lipschitz, a former Geithner deputy.

The profile also makes evident Geithner’s agonistic planning approach to policy – seeking to incorporate opposition and minority views into both policy formation processes and the resulting policies.

Coupling preferences and decision-processes

I have expressed my strong and long-held criticisms of classical decision theory – that based on maximum expected utility (MEU) –  before and again before that.  I want to expand here on one of my criticisms.
One feature of MEU theory is that the preferences of a decision-maker are decoupled from the decision-making process itself.  The MEU process works independently of the preferences of the decision-maker, which are assumed to be independent inputs to the decision-making process.    This may be fine for some decisions, and for some decision-makers, but there are many, many real-world decisions where this decoupling is infeasible or undesirable, or both.
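To make the decoupling concrete, here is a minimal sketch of the MEU procedure in Python.  The umbrella example and all the numbers are my own illustration, not drawn from any particular text; the point is that the probability and utility functions are handed to the procedure as fixed, independent inputs which nothing in the procedure can revise:

```python
def meu_choice(actions, outcomes, prob, utility):
    """Return the action with maximum expected utility.

    prob and utility are fixed inputs: the procedure consults them
    but never revises them -- the decoupling discussed above.
    """
    def expected_utility(a):
        return sum(prob(o) * utility(o, a) for o in outcomes)
    return max(actions, key=expected_utility)

# Illustrative example: whether to carry an umbrella.
p = {"rain": 0.3, "dry": 0.7}                          # fixed beliefs
u = {("rain", "umbrella"): 0,    ("dry", "umbrella"): -1,
     ("rain", "walk free"): -10, ("dry", "walk free"): 0}

best = meu_choice(["umbrella", "walk free"], ["rain", "dry"],
                  prob=lambda o: p[o],
                  utility=lambda o, a: u[(o, a)])
print(best)   # "umbrella": expected utility -0.7 beats -3.0
```

However the probabilities or utilities were arrived at, the procedure is indifferent: it simply aggregates and maximizes.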
For example, I have talked before about network goods, goods for which the utility received by one consumer depends on the utility received by other consumers.   A fax machine, in the paradigm example, provides no benefits at all to someone whose network of contacts or colleagues includes no one else with a fax machine.   A rational consumer (rational in the narrow sense of MEU theory, as well as rational in the prior sense of being reason-based) would wait to see whether other consumers in her network decide to purchase such a good (or are likely to decide to purchase it) before deciding to do so herself.   In this case, her preferences are endogenous to the decision-process, and it makes no sense to model preferences as logically or chronologically prior to the process.   Like most people in marketing, I have yet to encounter a good or service which is not a network good:  even so-called commodities, like coal, are subject to fashion, to peer-group pressures, and to imitative purchase behaviors.  (In so far as something looks like a commodity in the real world, some marketing manager is not doing his or her job.)
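The fax-machine story can be sketched as a simple threshold-adoption model, in the spirit of the Schelling/Granovetter threshold tradition.  The network and thresholds below are invented for illustration; the point is that each consumer adopts only once a sufficient fraction of her contacts has adopted, so her effective preference for the good emerges during the process rather than existing prior to it:

```python
def diffuse(contacts, threshold, seeds):
    """Iterate adoption until no further consumer adopts.

    A person adopts once the fraction of her contacts who have
    already adopted reaches her personal threshold.
    """
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for person, friends in contacts.items():
            if person in adopted or not friends:
                continue
            share = sum(f in adopted for f in friends) / len(friends)
            if share >= threshold[person]:
                adopted.add(person)
                changed = True
    return adopted

# A toy four-person network (names and thresholds are invented)
contacts = {"ann": ["bob", "cat"], "bob": ["ann", "cat"],
            "cat": ["ann", "bob"], "dan": ["cat"]}
threshold = {"ann": 0.0, "bob": 0.5, "cat": 0.5, "dan": 1.0}

print(sorted(diffuse(contacts, threshold, seeds={"ann"})))
# -> ['ann', 'bob', 'cat', 'dan']: one early adopter cascades
```

Remove the seed and nobody moves: with no prior adopters, no one's threshold is met, which is exactly the sense in which preferences here cannot be treated as inputs fixed before the process begins.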
A second class of decisions also requires us to consider preferences and decision-processes as strongly coupled.  These are situations where there are multiple decision-makers or stakeholders.     A truly self-interested agent (such as those assumed by mainstream micro-economics) cares not a jot for the interests of other stakeholders, but for those of us out here in the real world, this is almost never the case.  In any multiple-stakeholder decision – ie, any decision where the consequences accrue to more than one party – a non-selfish decision-maker would first seek to learn of the consequences of the different decision-options to other stakeholders as well as to herself, and of the preferences of those other stakeholders over these consequences.  Thus, any sensible decision-making process needs to allow for the elicitation and sharing of consequences and preferences between stakeholders.  In any reasonably complex decision – such as deciding whether to restrict use of some chemical on public health grounds, or deciding on a new distribution strategy for a commercial product  – these consequences will be dispersed and non-uniform in their effects.   This is why democratic government regulatory agencies, such as environmental agencies, conduct public hearings, enquiries and consultation exercises prior to making determinations.  And this is why even the most self-interested of corporate decision-makers invariably consider the views of shareholders, of regulators, of funders, of customers, of supply chain partners (both upstream and downstream), and of those internal staff who will be carrying out the decision, when they want the selected decision-option to be executed successfully.    No CEO is an island.
The fact that the consequences of major regulatory and corporate decisions are usually non-uniform in their impacts on stakeholders  – each decision-option advantaging some people or groups, while disadvantaging others – makes the application of any standard, context-independent decision-rule nonsensical.   Applying standard statistical tests as decision rules falls into this nonsensical category, something statisticians have known all along, but others seem not to. (See the references below for more on this.)
Any rational, feasible decision-process intended for the sorts of decisions we citizens, consumers and businesses face every day needs to allow preferences to emerge as part of the decision-making process, with preferences and the decision-process strongly coupled together.  Once again, as on so many other aspects, MEU theory fails.   Remind me again why it stays in economics textbooks and MBA curricula.
References:
L. Atkins and D. Jarrett [1979]:  The significance of “significance tests”.  In:  J. Irvine, I. Miles and J. Evans (Editors): Demystifying Social Statistics. London, UK: Pluto Press.
D. J. Fiorino [1989]:  Environmental risk and democratic process:  a critical review.  Columbia Journal of Environmental Law,  14: 501-547.  (This paper presents reasons why deliberative democratic processes are necessary in environmental regulation.)
T. Page [1978]:  A generic view of toxic chemicals and similar risks.  Ecology Law Quarterly.  7 (2): 207-244.

Distributed cognition

Some excerpts from an ethnographic study of the operations of a Wall Street financial trading firm, bearing on distributed cognition and joint-action planning:

This emphasis on cooperative interaction underscores that the cognitive tasks of the arbitrage trader are not those of some isolated contemplative, pondering mathematical equations and connected only to a screen-world.  Cognition at International Securities is a distributed cognition.  The formulas of new trading patterns are formulated in association with other traders.  Truly innovative ideas, as one senior trader observed, are slowly developed through successions of discreet one-to-one conversations.
. . .
An idea is given form by trying it out, testing it on others, talking about it with the “math guys,” who, significantly, are not kept apart (as in some other trading rooms),  and discussing its technical intricacies with the programmers (also immediately present).”   (p. 265)
The trading room thus shows a particular instance of Castells’s paradox:  As more information flows through networked connectivity, the more important become the kinds of interactions grounded in a physical locale. New information technologies, Castells (2000) argues, create the possibility for social interaction without physical contiguity.  The downside is that such interactions can become repetitive and programmed in advance.  Given this change, Castells argues that as distanced, purposeful, machine-like interactions multiply, the value of less-directed, spontaneous, and unexpected interactions that take place in physical contiguity will become greater (see also Thrift 1994; Brown and Duguid 2000; Grabher 2002).  Thus, for example, as surgical techniques develop together with telecommunications technology, the surgeons who are intervening remotely on patients in distant locations are disproportionately clustering in two or three neighbourhoods of Manhattan where they can socialize with each other and learn about new techniques, etc.” (p. 266)
“One exemplary passage from our field notes finds a senior trader formulating an arbitrageur’s version of Castells’s paradox:
“It’s hard to say what percentage of time people spend on the phone vs. talking to others in the room.   But I can tell you the more electronic the market goes, the more time people spend communicating with others inside the room.”  (p. 267)
Of the four statistical arbitrage robots, a senior trader observed:
“We don’t encourage the four traders in statistical arb to talk to each other.  They sit apart in the room.  The reason is that we have to keep diversity.  We could be really hammered if the different robots would have the same P&L [profit and loss] patterns and the same risk profiles.”  (p. 283)

References:
Daniel Beunza and David Stark [2008]:  Tools of the trade:  the socio-technology of arbitrage in a Wall Street trading room.  In:  Trevor Pinch and Richard Swedberg (Editors):  Living in a Material World:  Economic Sociology Meets Science and Technology Studies. Cambridge, MA, USA: MIT Press.  Chapter 8, pp. 253-290.
M. Castells [1996]:  The Information Age:  Economy, Society and Culture. Blackwell, Second Edition.

Agonistic planning

One key feature of the Kennedy and Johnson administrations identified by David Halberstam in his superb account of the development of US policy on Vietnam, The Best and the Brightest, was groupthink:  the failure of White House national security, foreign policy and defense staff to propose or even countenance alternatives to the prevailing views on Vietnam, especially when these alternatives were in radical conflict with the prevailing wisdom.   Among the junior staffers working in those administrations was Richard Holbrooke, now the US Special Representative for Afghanistan and Pakistan in the Obama administration.  A New Yorker profile of Holbrooke last year included this statement by him, about the need for policy planning processes to incorporate agonism:

“You have to test your hypothesis against other theories,” Holbrooke said. “Certainty in the face of complex situations is very dangerous.” During Vietnam, he had seen officials such as McGeorge Bundy, Kennedy’s and Johnson’s national-security adviser, “cut people to ribbons because the views they were getting weren’t acceptable.” Washington promotes tactical brilliance framed by strategic conformity—the facility to outmaneuver one’s counterpart in a discussion, without questioning fundamental assumptions. A more farsighted wisdom is often unwelcome. In 1975, with Bundy in mind, Holbrooke published an essay in Harper’s in which he wrote, “The smartest man in the room is not always right.” That was one of the lessons of Vietnam. Holbrooke described his method to me as “a form of democratic centralism, where you want open airing of views and opinions and suggestions upward, but once the policy’s decided you want rigorous, disciplined implementation of it. And very often in the government the exact opposite happens. People sit in a room, they don’t air their real differences, a false and sloppy consensus papers over those underlying differences, and they go back to their offices and continue to work at cross-purposes, even actively undermining each other.”  (page 47)
Of course, Holbrooke’s positing of policy development as distinct from policy implementation is itself a dangerous simplification of the reality for most complex policy, both private and public, where the relationship between the two is usually far messier.    The details of policy, for example, are often only decided, or even able to be decided, at implementation-time, not at policy design-time.    Do you sell your new hi-tech product via retail outlets, for instance?  The answer may depend on whether there are outlets available to collaborate with you (not tied to competitors) and technically capable of selling it, and these facts may not be known until you approach the outlets.  Moreover, if the stakeholders implementing (or constraining implementation) of a policy need to believe they have been adequately consulted in policy development for the policy to be executed effectively (as is the case with major military strategies in democracies, for example here), then a further complication to this reductive distinction exists.
 
 
UPDATE (2011-07-03):
British MP Rory Stewart recounts another instance of Holbrooke’s agonistic approach to policy in this post-mortem tribute: Holbrooke, although disagreeing with Stewart on policy toward Afghanistan, insisted that Stewart present his case directly to US Secretary of State Hillary Clinton in a meeting that Holbrooke arranged.
 
References:

David Halberstam [1972]:  The Best and the Brightest.  New York, NY, USA: Random House.
George Packer [2009]:  The last mission: Richard Holbrooke’s plan to avoid the mistakes of Vietnam in Afghanistan.  The New Yorker, 2009-09-28, pp. 38-55.

Crowd-sourcing for scientific research

Computers are much better than most humans at some tasks (eg, remembering large amounts of information, tedious and routine processing of large amounts of data), but worse than many humans at others (eg, generating new ideas, spatial pattern matching, strategic thinking). Progress may come from combining both types of machine (humans, computers) in ways which make use of their specific skills.  The journal Nature yesterday carried a report of a good example of this:  video-game players are able to assist computer programs tasked with predicting protein structures.  The abstract:

People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully ‘crowd-sourced’ through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.”
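The computational heart of the problem the abstract describes, searching a vast space for a low-energy conformation, can be caricatured with a toy one-dimensional energy landscape and a simulated-annealing search.  This is purely an illustration of stochastic local search; it is not Foldit's or Rosetta's actual algorithm, and the landscape below is invented:

```python
import random
from math import exp, sin

def refine(start, energy, steps=5000, temp=2.0, cooling=0.999, seed=1):
    """Toy simulated annealing: accept uphill moves with a
    temperature-controlled probability, and keep the best point seen."""
    rng = random.Random(seed)
    x, best = start, start
    for _ in range(steps):
        candidate = x + rng.uniform(-0.5, 0.5)
        delta = energy(candidate) - energy(x)
        if delta < 0 or rng.random() < exp(-delta / temp):
            x = candidate               # accept the move
        if energy(x) < energy(best):
            best = x                    # remember the best conformation
        temp *= cooling                 # gradually become greedier
    return best

# An invented rugged "energy landscape" with several local minima
energy = lambda x: (x - 3) ** 2 + 2 * sin(5 * x)
x_star = refine(0.0, energy)
print(round(energy(x_star), 2))   # well below energy(0.0) == 9.0
```

What the Foldit result suggests is that human players do not merely run such a fixed search faster: they invent new move types and search strategies, exploring the space of algorithms as well as the space of conformations.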

References:
Seth Cooper et al. [2010]: Predicting protein structures with a multiplayer online game.  Nature, 466: 756–760.  Published 2010-08-05.
Eric Hand [2010]:  Citizen science: people power.  Nature, 466: 685-687.  Published 2010-08-04.
The Foldit game is here.

Railtrack and the Joint-Action Society


For some time, I have been writing on these pages that the currently-fashionable paradigm of the Information Society is inadequate to describe what most of us do at work and play, or to describe how computing technologies support those activities (see, for example, recently here, with a collection of posts here).   Most work for most people in the developed world is about coordinating their actions with those of others  – colleagues, partners, underlings, bosses, customers, distributors, suppliers, publicists, regulators, and so on.   Information collection and transfer, while often important and sometimes essential to the co-ordination of actions,  is not usually itself the main game.
Given the extent to which computing technologies already support and enable human activities (landing our large aircraft automatically when there is fog, for example), the InfoSoc paradigm, although it may describe well the transmission of zeros and ones between machines, is of little value in understanding what these transmissions mean.  Indeed, the ur-text of the Information Society, Shannon’s mathematical theory of communication (Shannon 1948), explicitly ignores the semantics of messages!  In place of the InfoSoc metaphor, we need another new paradigm, a new way to construe what we are all doing.  For now, let me call it the Joint-Action Society, although this does not quite capture all that is intended.
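Shannon's indifference to semantics is easy to demonstrate: his entropy measure depends only on symbol frequencies, so two messages with quite different meanings but the same letter statistics are indistinguishable to it.  A small sketch (the example strings are my own):

```python
from collections import Counter
from math import log2

def entropy(message):
    """Shannon entropy in bits per symbol, computed from
    symbol frequencies alone -- meaning never enters into it."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Anagrams: identical symbol statistics, different meanings
print(entropy("the eyes"))    # same value
print(entropy("they see"))    # as this one
```

The two strings are anagrams of one another (including the space), so their symbol counts, and hence their entropies, are exactly equal, whatever either string means to a reader.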
I am pleased to learn that I am not alone in my views about InfoSoc.   I recently came across an article by the late Peter Martin, journalist, editor and e-businessman, about the lessons of that great British privatization disaster, Railtrack.  (In the 1980s and 90s, the French had grands projets while the British had great project-management disasters.)  Here is Martin, writing in the FT in October 2001 (the article does not seem to be available online):

“Railtrack had about a dozen prime contractors, which in turn farmed out the work to about 2,000 subcontractors.  Getting this web of relationships to work was a daunting task.  Gaps in communication, and the consequent “blame culture” are thought to be important causes of the track problems that led to the Hatfield crash which undermined Railtrack’s credibility.
.  .  .
These practical advantages of wholesale outsourcing rely, however, on unexamined assumptions.  It is these that the Railtrack episode comprehensively demolishes.
The first belief holds that properly specified contracts can replicate the operations of an integrated business.  Indeed, on this view, they may be better than integration because everyone understands what their responsibilities are, and their  incentives are clear and tightly defined.
This approach had a particular appeal to governments, as they attempted to step back from the minutiae of delivering public services.  British Conservative governments used the approach to break up monolithic nationalised industries into individual entities, such as power generators and distributors.
They put this approach into effect at the top level of the railway system by splitting the task of running the track and the signalling (Railtrack’s job) from the role of operating the trains.  It is not surprising that Railtrack, born into this environment, carried the approach to its logical conclusion in its internal operation.
.  .  .
In 1937, the Nobel prize-winning economist Ronald Coase had explained that companies perform internally those tasks for which the transactional costs of outsourcing are too high.
What fuelled the outsourcing boom of the 1990s was the second unexamined assumption – that the cost of negotiating, monitoring and maintaining contractual relationships with outsourcing partners had dropped sharply, thanks to the revolution in electronic communications.  The management of a much bigger web of contractors – indeed, the creation of a “virtual company” – became feasible.
In practice, of course, the real costs of establishing and maintaining contracts are not those of information exchange but of establishing trust, alignment of interests and a common purpose.  Speedy and cheap electronic communications have only a minor role to play in this process, as Coase himself pointed out in 1997.
.   .   .
And perhaps that is the most useful lesson from the Railtrack story: it is essential to decide what tasks are vital to your corporate purpose and to devote serious resources to achieving them.   Maintaining thousands of miles of steel tracks and stone chippings may be a dull, 19th century kind of task.   But as Railtrack found, if you can’t keep the railway running safely, you haven’t got a business.”

Reference:
Peter Martin [2001]: Lessons from Railtrack.  The collapse has demolished some untested assumptions about outsourcing.  Financial Times, 2001-10-09, page 21.
Claude E. Shannon [1948/1963]: The mathematical theory of communication. Bell System Technical Journal, October and November 1948.  Reprinted in:  C. E. Shannon and W. Weaver [1963]: The Mathematical Theory of Communication. pp. 29–125. Urbana, IL, USA: University of Illinois Press.

Complex Decisions

Most real-world business decisions are considerably more complex than the examples presented by academics in decision theory and game theory.  What makes some decisions more complex than others? Here I list some features, not all of which are present in all decision situations.

  • The problems are not posed in a form amenable to classical decision theory.

    Decision theory requires the decision-maker to know his or her action-options, the consequences of each, the uncertain events which may influence these consequences, and the probabilities of those events (and to know all of this in advance of the decision). Yet, for many real-world decisions, this knowledge is either absent, or known only in some vague, intuitive way. The drug thalidomide, for example, was tested thoroughly before it was sold commercially – on male and female human subjects, adults and children. The only group not tested was pregnant women, who were, unfortunately, the main group for which the drug had serious side effects. These side effects were consequences which had not been imagined before the decision to launch was made. Decision theory does not tell us how to identify the possible consequences of a decision, so what use is it in real decision-making?

  • There are fundamental domain uncertainties.

    None of us knows the future. Even with considerable investment in market research, future demand for new products may not be known because potential customers themselves do not know with any certainty what their future demand will be. Moreover, in many cases, we don’t know the past either. I have had many experiences where participants in a business venture have disagreed profoundly about the causes of failure, or even success, and so have taken very different lessons from the experience.

  • Decisions may be unique (non-repeated).

    It is hard to draw on past experience when something is being done for the first time. This does not stop people trying, and so decision-making by metaphor or by anecdote is an important feature of real-world decision-making, even though mostly ignored by decision theorists.

  • There may be multiple stakeholders and participants in the decision.

    In developing a business plan for a global satellite network, for example, a decision-maker would need to take account of the views of a handful of competitors, tens of major investors, scores of minor investors, approximately two hundred national and international telecommunications regulators, a similar number of national company law authorities, scores of upstream suppliers (e.g. equipment manufacturers), hundreds of employees, hundreds of downstream service wholesalers, thousands of downstream retailers, thousands or millions of shareholders (if listed publicly), and millions of potential customers. To ignore or oppose the views of any of these stakeholders could doom the business to failure. As it happens, Game Theory isn’t much use with this number and complexity of participants. Moreover, despite the view commonly held in academia, most large Western corporations operate with a form of democracy. (If the opinions of intelligent, capable staff are regularly overridden, those staff will simply leave, so competition ensures democracy. In addition, good managers know that decisions unsupported by their staff will often be executed poorly, so the success of a decision may depend on the extent to which staff believe it has been reached fairly.) Accordingly, all major decisions are made by groups or teams, not at the sole discretion of an individual. Decision theorists, it seems to me, have paid insufficient attention to group decisions: we hear lots about Bayesian decision theory, but where, for example, is the Bayesian theory of combining subjective probability assessments?

  • Domain knowledge may be incomplete and distributed across these stakeholders.
  • Beliefs, goals and preferences of the stakeholders may be diverse and conflicting.
  • Beliefs, goals and preferences of stakeholders, the probabilities of events and the consequences of decisions, may be determined endogenously, as part of the decision process itself.

    For instance, economists use the term network good to refer to a good for which one person’s utility depends on how many other people also consume it. A fax machine is an example, since being the sole owner of a fax machine is of little value to a consumer. Thus, a rational consumer would determine his or her preferences for such a good only AFTER learning the preferences of others. In other words, rational preferences are determined only in the course of the decision process, not beforehand.  Having considerable experience in marketing, I contend that ALL goods and services have a network-good component. Even so-called commodities, such as natural resources or telecommunications bandwidth, have demand which is subject to fashion and peer pressure. “You can’t get fired for buying IBM,” as the old saying went. And an important function of advertising is to allow potential consumers to infer the likely preferences of other consumers, so that they can then determine their own preferences. If an advertisement appeals to people like me, or people I aspire to be like, then I can infer that those others are likely to prefer the product being advertised, and thus I can determine my own preferences for it. Similarly, if the advertisement appeals to people I don’t aspire to be like, then I can infer that I won’t be subject to peer pressure or fashion trends, and can determine my preferences accordingly.
    This is commonsense to marketers, even if heretical to many economists.

  • The decision-maker may not fully understand what actions are possible until he or she begins to execute.
  • Some actions may change the decision-making landscape, particularly in domains where there are many interacting participants.

    A bold announcement by a company to launch a new product, for example, may induce competitors to follow and so increase (or decrease) the chances of success. For many goods, an ecosystem of critical size may be required for success, and bold initiatives may act to create (or destroy) such ecosystems.

  • Measures of success may be absent, conflicting or vague.
  • The consequences of actions, including their success or failure, may depend on the quality of execution, which in turn may depend on attitudes and actions of people not making the decision.

    Most business strategies are executed by people other than those who developed or decided the strategy. If the people undertaking the execution are not fully committed to the strategy, they generally have many ways to undermine or subvert it. In military domains, the so-called Powell Doctrine, named after Colin Powell, former Chairman of the US Joint Chiefs of Staff, says that foreign military actions undertaken by a democracy may be successful only if those actions have majority public support. (I have written on this topic before.)

  • As a corollary of the previous feature, success of an action may require extensive and continuing dialog with relevant stakeholders, before, during and after its execution.

    This is not news to anyone in business.

  • Success may require pre-commitments before a decision is finally taken.

    In the 1990s, many telecommunications companies bid for national telecoms licences in foreign countries. Often, an important criterion used by the Governments awarding these licences was how quickly each potential operator could launch commercial service. To ensure that they could launch service quickly, some bidders resorted to making purchase commitments with suppliers, and even to installing equipment, ahead of knowing the outcome of a bid – and, in at least one case I know of, ahead of deciding whether or not to bid.

  • The consequences of decisions may be slow to realize.

    Satellite mobile communications networks have typically taken ten years from serious inception to launch of service.  The oil industry usually works on 50+ year cycles for major investment projects.  BP is currently suffering the consequences in the Gulf of Mexico of what appears to be a decades-long culture which de-emphasized safety and adequate contingency planning.

  • Decision-makers may influence the consequences of decisions and/or the measures of success.
  • Intelligent participants may model each other in reaching a decision, a phenomenon I term reflexivity.

    As a consequence, participants are not only reacting to events in their environment, they are anticipating events and the reactions and anticipations of other participants, and acting proactively in response to these anticipated events and reactions. Traditional decision theory ignores this. Following Nash, traditional game theory has modeled the outcomes of one such reasoning process, but not the processes themselves. Evolutionary game theory may prove useful for modeling these reasoning processes, although its assumption of a sequence of identical, repeated interactions does not strike me as an obvious way to model a process of reflexivity.  This problem still awaits its Nash.

In my experience, classical decision theory and game theory do not handle these features very well; in some cases, indeed, not at all.  I contend that a new theory of complex decisions is necessary to cope with decision domains having these features.
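The contrast with the textbook machinery can be made concrete. Here is a minimal sketch, in Python, of the classical setup described in the first feature above: actions, uncertain states, probabilities and utilities, all enumerated in advance, from which the decision-maker simply maximizes expected utility. All names and numbers are invented for illustration; each of the features listed above removes or muddies at least one of these inputs.

```python
# Classical decision theory in miniature: enumerate actions, states,
# probabilities and utilities in advance, then pick the action with the
# highest expected utility. All names and numbers here are invented.

# Subjective probabilities over the uncertain states of the world,
# assumed known before deciding -- the assumption questioned above.
probs = {"demand_high": 0.3, "demand_low": 0.7}

# Utility of each (action, state) pair, also assumed known in advance.
utilities = {
    "launch": {"demand_high": 100, "demand_low": -40},
    "wait":   {"demand_high": 20,  "demand_low": 5},
}

def expected_utility(action):
    return sum(probs[s] * utilities[action][s] for s in probs)

best = max(utilities, key=expected_utility)
# With these numbers, 'wait' wins: EU(wait) = 9.5 versus EU(launch) = 2.0.
```

Everything interesting about a complex decision happens before this calculation can even be written down: discovering the action-options, imagining the consequences, and agreeing whose utilities count.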
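On the question of combining subjective probability assessments from a group: one simple and widely discussed device is the linear opinion pool, a weighted average of the individual assessments. This sketch uses invented weights, and it is an aggregation heuristic rather than a genuinely Bayesian theory, which would also need to model each assessor's reliability and the dependencies between assessors.

```python
# Linear opinion pool: combine several people's subjective probabilities
# for the same event as a weighted average. The weights are invented for
# illustration; choosing them well is itself an open problem.

def linear_opinion_pool(assessments, weights):
    """assessments: each person's probability for the event.
    weights: non-negative weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * p for w, p in zip(weights, assessments))

# Three colleagues assess the probability that a product launch succeeds.
pooled = linear_opinion_pool([0.9, 0.5, 0.2], [0.5, 0.3, 0.2])
# pooled = 0.5*0.9 + 0.3*0.5 + 0.2*0.2 = 0.64
```

Note that the pooled number conceals exactly what matters in a group decision: why the three assessments disagree in the first place.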

Silicon millenarianism

Here we go again! We have another blogger predicting the end of the office.   Funny how it’s almost always bloggers and journalists and thinktank-swimmers doing this – always people whose work, most of the time, is by themselves, and who therefore fail to understand the nature of actual work in modern organizations.   As I’ve argued before, workplace interactions are primarily about the co-ordination of actions and the assessment of people’s intentions concerning these actions, not (or not merely) about sharing information.  Why did Barack Obama summon the Chairman and CEO of BP to the Oval Office earlier this week?  Why was the CEO also called to testify before Congress?   Why didn’t the President or the Congressional Committee simply place a conference call?  Because it is very difficult, perhaps even impossible, to accurately assess another person’s intentions without immediate physical proximity and face-to-face interaction with said person.
If all you are doing is writing a blog or researching a story, perhaps you never appreciate this fact about work.  But anyone tasked with doing something other than writing knows it.   Seth Godin thinks that within 10 years TV programs about office work will seem to be “quaint antiques”.  I bet him they will not.  Moreover, I bet the people in those offices will still be using paper, still having meetings, and still talking by the water-cooler.   In fact, while you’re placing my bets, put me down for 100 years, not 10.