Coupling preferences and decision-processes

I have expressed my strong and long-held criticisms of classical decision theory – the theory based on maximum expected utility (MEU) – before, and again before that. I want to expand here on one of those criticisms.
One feature of MEU theory is that the preferences of a decision-maker are decoupled from the decision-making process itself. The MEU process operates independently of the decision-maker's preferences, which are assumed to be inputs fixed in advance of the decision-making process. This may be fine for some decisions and some decision-makers, but there are many, many real-world decisions where this decoupling is infeasible or undesirable, or both.
For example, I have talked before about network goods, goods for which the utility received by one consumer depends on the utility received by other consumers. A fax machine, in the paradigm example, provides no benefit at all to someone whose network of contacts or colleagues includes no one else with a fax machine. A rational consumer (rational in the narrow sense of MEU theory, as well as in the prior sense of being reason-based) would wait to see whether other consumers in her network decide to purchase such a good (or are likely to decide to purchase it) before deciding to do so herself. In this case, her preferences are endogenous to the decision-process, and it makes no sense to model preferences as logically or chronologically prior to the process. Like most people in marketing, I have yet to encounter a good or service which is not a network good: even so-called commodities, like coal, are subject to fashion, to peer-group pressures, and to imitative purchase behaviors. (Insofar as something looks like a commodity in the real world, some marketing manager is not doing his or her job.)
A second class of decisions also requires us to consider preferences and decision-processes as strongly coupled: situations where there are multiple decision-makers or stakeholders. A truly self-interested agent (such as those assumed by mainstream micro-economics) cares not a jot for the interests of other stakeholders, but for those of us out here in the real world, this is almost never the case. In any multiple-stakeholder decision – ie, any decision where the consequences accrue to more than one party – a non-selfish decision-maker would first seek to learn the consequences of the different decision-options for other stakeholders as well as for herself, and the preferences of those other stakeholders over these consequences. Thus, any sensible decision-making process needs to allow for the elicitation and sharing of consequences and preferences between stakeholders. In any reasonably complex decision – such as deciding whether to restrict use of some chemical on public health grounds, or deciding on a new distribution strategy for a commercial product – these consequences will be dispersed and non-uniform in their effects. This is why democratic government regulatory agencies, such as environmental agencies, conduct public hearings, enquiries and consultation exercises prior to making determinations. And this is why even the most self-interested of corporate decision-makers invariably consider the views of shareholders, of regulators, of funders, of customers, of supply chain partners (both upstream and downstream), and of those internal staff who will be carrying out the decision, when they want the selected decision-option to be executed successfully. No CEO is an island.
The fact that the consequences of major regulatory and corporate decisions are usually non-uniform in their impacts on stakeholders – each decision-option advantaging some people or groups, while disadvantaging others – makes the application of any standard, context-independent decision-rule nonsensical. Applying standard statistical tests as decision rules falls into this nonsensical category, something statisticians have known all along, but which others seem not to. (See the references below for more on this.)
Any rational, feasible decision-process intended for the sorts of decisions we citizens, consumers and businesses face every day needs to allow preferences to emerge as part of the decision-making process, with preferences and the decision-process strongly coupled together. Once again, as on so many other aspects, MEU theory fails. Remind me again why it stays in Economics textbooks and MBA curricula.
References:
L. Atkins and D. Jarrett [1979]:  The significance of “significance tests”.  In:  J. Irvine, I. Miles and J. Evans (Editors): Demystifying Social Statistics. London, UK: Pluto Press.
D. J. Fiorino [1989]:  Environmental risk and democratic process:  a critical review.  Columbia Journal of Environmental Law,  14: 501-547.  (This paper presents reasons why deliberative democratic processes are necessary in environmental regulation.)
T. Page [1978]: A generic view of toxic chemicals and similar risks. Ecology Law Quarterly, 7 (2): 207-244.

Distributed cognition

Some excerpts from an ethnographic study of the operations of a Wall Street financial trading firm, bearing on distributed cognition and joint-action planning:

This emphasis on cooperative interaction underscores that the cognitive tasks of the arbitrage trader are not those of some isolated contemplative, pondering mathematical equations and connected only to a screen-world. Cognition at International Securities is a distributed cognition. The formulas of new trading patterns are formulated in association with other traders. Truly innovative ideas, as one senior trader observed, are slowly developed through successions of discreet one-to-one conversations.
. . .
An idea is given form by trying it out, testing it on others, talking about it with the “math guys,” who, significantly, are not kept apart (as in some other trading rooms), and discussing its technical intricacies with the programmers (also immediately present). (p. 265)
The trading room thus shows a particular instance of Castells’ paradox: As more information flows through networked connectivity, the more important become the kinds of interactions grounded in a physical locale. New information technologies, Castells (2000) argues, create the possibility for social interaction without physical contiguity. The downside is that such interactions can become repetitive and programmed in advance. Given this change, Castells argues that as distanced, purposeful, machine-like interactions multiply, the value of less-directed, spontaneous, and unexpected interactions that take place in physical contiguity will become greater (see also Thrift 1994; Brown and Duguid 2000; Grabher 2002). Thus, for example, as surgical techniques develop together with telecommunications technology, the surgeons who are intervening remotely on patients in distant locations are disproportionately clustering in two or three neighbourhoods of Manhattan where they can socialize with each other and learn about new techniques, etc. (p. 266)
One exemplary passage from our field notes finds a senior trader formulating an arbitrageur’s version of Castells’ paradox:
“It’s hard to say what percentage of time people spend on the phone vs. talking to others in the room.   But I can tell you the more electronic the market goes, the more time people spend communicating with others inside the room.”  (p. 267)
Of the four statistical arbitrage robots, a senior trader observed:
“We don’t encourage the four traders in statistical arb to talk to each other. They sit apart in the room. The reason is that we have to keep diversity. We could really get hammered if the different robots would have the same P&L [profit and loss] patterns and the same risk profiles.” (p. 283)

References:
Daniel Beunza and David Stark [2008]: Tools of the trade: the socio-technology of arbitrage in a Wall Street trading room. In: Trevor Pinch and Richard Swedberg (Editors): Living in a Material World: Economic Sociology Meets Science and Technology Studies. Cambridge, MA, USA: MIT Press. Chapter 8, pp. 253-290.
M. Castells [2000]: The Information Age: Economy, Society and Culture. Oxford, UK: Blackwell. Second Edition.

Good decisions

Which decisions are good decisions?
Since 1945, mainstream economists have arrogated the word “rational” to describe a mode of decision-making which they consider to be best. This method is called maximum expected utility (MEU) decision-making. It assumes that the decision-maker has only a finite set of possible action-options and that she knows what these are; that she knows the possible consequences of each of these actions and can quantify (or at least estimate) these consequences, and can do so on a single, common, numerical scale of value (the payoffs); that she knows a finite and complete collection of the uncertain events which may impact the consequences and their values; and that she knows (or at least can estimate) the probabilities of these uncertain events, again on a common numerical scale of uncertainty. The MEU decision procedure is then to quantify the consequences of each action-option, weighting each consequence by the probability of the uncertain events which influence it.
The decision-maker then selects that action-option which has the maximum expected consequential value, ie the consequential value weighted by the probabilities of the uncertain events. Such decision-making, in an abuse of language that cries out for criminal charges, is then called rational by economists. Bayesian statistician Dennis Lindley even wrote a book about MEU which included the stunningly-arrogant sentence, “The main conclusion [of this book] is that there is essentially only one way to reach a decision sensibly.”
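To make the procedure concrete, here is a minimal sketch in Python (the action-options, events, payoffs and probabilities are all invented for illustration):

```python
# A minimal sketch of the MEU procedure (all numbers invented for illustration).
# probs[event] is the assumed-known probability of each uncertain event;
# payoffs[action][event] is the quantified consequence of each action under it.
probs = {"demand_high": 0.3, "demand_low": 0.7}
payoffs = {
    "launch": {"demand_high": 100.0, "demand_low": -40.0},
    "wait":   {"demand_high":  20.0, "demand_low":  10.0},
}

def expected_utility(action):
    return sum(probs[e] * payoffs[action][e] for e in probs)

for action in payoffs:
    print(action, expected_utility(action))    # launch: 2.0, wait: 13.0
print("MEU choice:", max(payoffs, key=expected_utility))    # wait
```

Note that everything contentious – the lists of options and events, the payoffs, the probabilities – must be supplied before the procedure starts; the maximization itself is trivial.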

Rational?  This method is not even feasible, let alone sensible or good!
First, where do all these numbers come from?  With the explicit assumptions that I have listed, economists are assuming that the decision-maker has some form of perfect knowledge.  Well, no one making any real-world decisions has that much knowledge.  Of course, economists often respond, estimates can be used when the knowledge is missing.  But whose estimates?   Sourced from where?   Updated when? Anyone with any corporate or public policy experience knows straight away that consensus on such numbers for any half-way important problem will be hard to find.  Worse than that, any consensus achieved should immediately be suspected and interrogated, since it may be evidence of groupthink.    There simply is no certainty about the future, and if a group of people all do agree on what it holds, down to quantified probabilities and payoffs, they deserve the comeuppance they are likely to get!
Second, the MEU principle simply averages across uncertain events. What of action-options with potentially catastrophic outcomes? Their small likelihood of occurrence may mean they disappear in the averaging process, but no real-world decision-maker – at least, none with any experience or common sense – would risk a catastrophic outcome, however low its estimated probability. Wall Street trading firms have off-street (and often off-city) backup IT systems, and sometimes even entire backup trading floors, ready for those rare events.
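A toy example (numbers invented) of how averaging hides catastrophe: under the sketch above, MEU strictly prefers an option that carries a small chance of ruin.

```python
# Two options with nearly identical expected values (invented numbers).
# MEU prefers the gamble, even though it carries a small chance of ruin.
gamble = {"probs": [0.999, 0.001], "payoffs": [1.0, -989.0]}
sure   = {"probs": [1.0],          "payoffs": [0.005]}

def ev(option):
    return sum(p * x for p, x in zip(option["probs"], option["payoffs"]))

print("gamble EV:", ev(gamble))   # 0.999*1.0 + 0.001*(-989.0) = 0.010
print("sure   EV:", ev(sure))     # 0.005
# MEU chooses the gamble; a decision-maker who cannot survive a -989 loss
# would, sensibly, not.
```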
Third, look at all the assumptions not made explicit in this framework. There is no mention of the time allowed for the decision, so apparently the decision-maker has infinities of time available. No mention is made of the processing or memory resources available for making the decision, so apparently she has infinities of those as well. That makes a change from most real-world decisions: what a pleasant utopia this MEU-land must be. Nothing is said – at least nothing explicit – about taking into account the historical or other contexts of the decision, such as past decisions by this or related decision-makers, technology standards, legacy systems, organizational policies and constraints, legal, regulatory or ethical constraints, or the strategies of the company or the society in which the decision-maker sits. How could a decision procedure which ignores such issues be considered, even for a moment, rational? I think only an academic could ignore context in this way; no business person I know would do so, since certain unemployment would be the result. And how could members of an academic discipline purporting to be a social science accept and disseminate a decision-making framework which ignores such social, contextual features?
And do the selected action-options just execute themselves? Nothing is said in this framework about consultation with stakeholders during the decision-process, so presumably the decision-maker has no one to report to, no board members or stockholders or division presidents or ward chairmen or electors to manage or inform or liaise with or mollify or reward or appease or seek re-election from, no technical departments to seek feasibility approval from, no implementation staff to motivate or inspire, no regulators or ethicists or corporate counsel to seek legal approval from, no funders or investors to raise finance from, no suppliers to persuade to accept orders, no distribution channels to persuade to schedule throughput, no competitors to second-guess or outwit, and no actual, self-immolating protesters outside one’s office window to avert one’s eyes from and feel guilt about for years afterward.*
For many complex decisions, the ultimate success or failure of the decision can depend significantly on the degree to which those having to execute the decision also support it. Consequently, the choice of a specific action-option (and the logical reasoning process used to select it) may be far less important for the success of the decision than that key stakeholders feel they have been consulted appropriately during the reasoning process. In other words, the quality of the decision may depend much more on how and with whom the decision-maker reasons than on the particular conclusion she reaches. Arguably this is true of almost all significant corporate strategy decisions and major public policy decisions: there is ultimately no point sending your military to prop up an anti-communist regime in South-East Asia, for example, if your own soldiers come to feel they should not be there (as I discuss here, regarding another decision to go to war).
Mainstream economists have a long way to go before they will have a theory of good decision-making.   In the meantime, it would behoove them to show some humility when criticizing the decision-making processes of human beings.**
Notes and Bibliography:
Oskar Lange [1945-46]:  The scope and method of economics.  The Review of Economic Studies, 13 (1): 19-32.
Dennis Lindley [1985]:  Making Decisions.  Second Edition. London, UK: John Wiley and Sons.
Leonard J. Savage [1954]: The Foundations of Statistics. New York, NY, USA: Wiley.
* I’m sure Robert McNamara, statistician and decision-theory whizz kid, never considered the reactions of self-immolating protesters when making decisions early in his career, but having seen one outside his office window late in his time as Secretary of Defense, he seems to have done so subsequently.
** Three-Toed Sloth comments dialogically and amusingly on MEU theory here.

In defence of futures thinking

Norm at Normblog has a post defending theology as a legitimate area of academic inquiry, after an attack on theology by Oliver Kamm. (Since OK’s post is behind a paywall, I have not read it, so my comments here may be awry with respect to that post.) Norm argues, very correctly, that it is legitimate for theology, considered as a branch of philosophy, to reflect, inter alia, on the properties of entities whose existence has not yet been proven. In strong support of Norm, let me add: not just in philosophy!
In business strategy, good decision-making requires consideration of the consequences of potential actions, which in turn requires consideration of the potential actions of other actors and stakeholders in response to the first set of actions. These actors may include entities whose existence is not yet known or even suspected, for example, future competitors to a product whose launch creates a new product category. Why, there’s even a whole branch of strategy analysis devoted to scenario planning, a discipline that began in the military analysis of alternative post-nuclear worlds, and whose very essence involves the creation of imagined futures (for forecasting and prognosis) and/or imagined pasts (for diagnosis and analysis). Every good air-crash investigation, medical diagnosis, and police homicide investigation, for instance, involves the creation of imagined alternative pasts, and often the creation of imaginary entities in those imagined pasts, whose fictional attributes we may explore at length. Arguably, in one widespread view of the philosophy of mathematics, pure mathematicians do nothing but explore the attributes of entities without material existence.
And not just in business, medicine, the military, and the professions.   In computer software engineering, no new software system development is complete without due and rigorous consideration of the likely actions of users or other actors with and on the system, for example.   Users and actors here include those who are the intended target users of the system, as well as malevolent or whimsical or poorly-behaved or bug-ridden others, both human and virtual, not all of whom may even exist when the system is first developed or put into production.      If creative articulation and manipulation of imaginary futures (possible or impossible) is to be outlawed, not only would we have no literary fiction or much poetry, we’d also have few working software systems either.

Agonistic planning

One key feature of the Kennedy and Johnson administrations identified by David Halberstam in his superb account of the development of US policy on Vietnam, The Best and the Brightest, was groupthink:  the failure of White House national security, foreign policy and defense staff to propose or even countenance alternatives to the prevailing views on Vietnam, especially when these alternatives were in radical conflict with the prevailing wisdom.   Among the junior staffers working in those administrations was Richard Holbrooke, now the US Special Representative for Afghanistan and Pakistan in the Obama administration.  A New Yorker profile of Holbrooke last year included this statement by him, about the need for policy planning processes to incorporate agonism:

“You have to test your hypothesis against other theories,” Holbrooke said. “Certainty in the face of complex situations is very dangerous.” During Vietnam, he had seen officials such as McGeorge Bundy, Kennedy’s and Johnson’s national-security adviser, “cut people to ribbons because the views they were getting weren’t acceptable.” Washington promotes tactical brilliance framed by strategic conformity—the facility to outmaneuver one’s counterpart in a discussion, without questioning fundamental assumptions. A more farsighted wisdom is often unwelcome. In 1975, with Bundy in mind, Holbrooke published an essay in Harper’s in which he wrote, “The smartest man in the room is not always right.” That was one of the lessons of Vietnam. Holbrooke described his method to me as “a form of democratic centralism, where you want open airing of views and opinions and suggestions upward, but once the policy’s decided you want rigorous, disciplined implementation of it. And very often in the government the exact opposite happens. People sit in a room, they don’t air their real differences, a false and sloppy consensus papers over those underlying differences, and they go back to their offices and continue to work at cross-purposes, even actively undermining each other.” (page 47)
Of course, Holbrooke’s positing of policy development as distinct from policy implementation is itself a dangerous simplification of the reality of most complex policy, both private and public, where the relationship between the two is usually far messier. The details of policy, for example, are often only decided, or even able to be decided, at implementation-time, not at policy design-time. Do you sell your new hi-tech product via retail outlets, for instance? The answer may depend on whether there are outlets available to collaborate with you (not tied to competitors) and technically capable of selling it, and these facts may not be known until you approach the outlets. Moreover, if the stakeholders implementing (or constraining the implementation of) a policy need to believe they have been adequately consulted during policy development for the policy to be executed effectively (as is the case with major military strategies in democracies, for example here), then a further complication to this reductive distinction exists.
UPDATE (2011-07-03):
British MP Rory Stewart recounts another instance of Holbrooke’s agonist approach to policy in this post-mortem tribute: Holbrooke, although disagreeing with Stewart on policy toward Afghanistan, insisted that Stewart present his case directly to US Secretary of State Hillary Clinton in a meeting that Holbrooke arranged.
References:

David Halberstam [1972]:  The Best and the Brightest.  New York, NY, USA: Random House.
George Packer [2009]: The last mission: Richard Holbrooke’s plan to avoid the mistakes of Vietnam in Afghanistan. The New Yorker, 2009-09-28, pp. 38-55.

Strategy vs. Tactics

What is the difference between strategy and tactics?  In my experience, many people cannot tell the difference, and/or speak as if they conflate the two. Personally, I have never had difficulty telling them apart.
The 18th-century British naval definition was that tactics are for when you can see the enemy’s ships, and strategies are for when you cannot. When you can see the enemy’s ships there are still important unknown variables, but you should know how many ships there are, where they are located, and (within some degree of accuracy) what hostile actions they are capable of. If you are close enough to identify the particular enemy ships that you can see, you may also know the identities of their captains. With knowledge of past engagements, you may thus be able to estimate the intentions, the likely behaviors, and the fighting will of the ships’ crews. None of these variables is known when the ships lie beyond the horizon.
Thus, tactics describe your possible actions when you know who the other stakeholders are in the situation you are in, and you have accurate (although not necessarily precise) information about their capabilities, goals, preferences, and intentions. The extent to which such knowledge is missing is the extent to which reasoning about potential actions becomes strategic rather than tactical. These distinctions are usually quite clear in marketing contexts. For instance, licking envelopes for a client’s direct marketing campaign is not strategic consultancy, nor is finding, cleaning, verifying, and compiling the addresses needed by the client to put on the envelopes. (This is not to say that either task can be done well without expertise and experience.) Advising a client to embark on a direct marketing campaign rather than (say) a television ad campaign is closer to strategic consultancy, although in some contexts it may be mere tactics. Determining ahead of time which segments of the potential customer population should be targeted with an advertising campaign is definitely strategic, as is deciding whether or not to enter (or stay in) the market.
The key difference between the two is that articulating a strategy requires taking a view on the values of significant uncertain variables, whereas articulating a tactic generally does not.

Bayesian statistics

One of the mysteries to anyone trained in the frequentist hypothesis-testing paradigm of statistics, as I was, and still adhering to it, as I do, is how Bayesian approaches seem to have taken the academy by storm. One wonders, first, how a theory based – and based explicitly – on a measure of uncertainty defined in terms of subjective personal beliefs could be considered even for a moment for an inter-subjective (ie, social) activity such as Science.

One wonders, second, how a theory justified by appeals to such socially-constructed, culturally-specific, and readily-contestable activities as gambling (ie, so-called Dutch-book arguments) could be taken seriously as the basis for an activity (Science) aiming for, and claiming to achieve, universal validity. One wonders, third, how the fact that such justifications, even if gambling presents no moral, philosophical or other qualms, require infinite sequences of gambles is not a little troubling for all of us living in this finite world. (You tell me you are certain to beat me if we play an infinite sequence of gambles? Then let me tell you that I have a religion promising eternal life that may interest you in turn.)
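For readers unfamiliar with the argument, here is a one-shot Dutch book in miniature (the beliefs and stakes are invented; the sequential versions invoked in the justifications are more elaborate):

```python
# A one-shot Dutch book (beliefs and stakes invented). An agent whose degrees
# of belief violate the probability axioms -- here P(A) + P(not-A) = 1.2 --
# regards a price of p * stake as fair for a bet paying `stake` if the event
# occurs. A bookie who sells the agent both bets profits whatever happens.
p_A, p_not_A = 0.7, 0.5   # incoherent beliefs: they sum to 1.2
stake = 10.0

cost_to_agent = (p_A + p_not_A) * stake   # agent pays 12.0 for the two bets
for outcome in ("A", "not-A"):
    winnings = stake                      # exactly one of the two bets pays off
    print(outcome, "agent's net:", winnings - cost_to_agent)   # -2.0 either way
```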

One wonders, fourth, where are recorded all the prior distributions of beliefs which this theory requires investigators to articulate before doing research. Surely someone must be writing them down, so that we consumers of science can know that our researchers are honest, and can hold them to account. That there is such a disconnect between what Bayesian theorists say researchers do and what those researchers demonstrably do should trouble anyone contemplating a choice of statistical paradigms, surely. Finally, one wonders how a theory that requires non-zero probabilities to be allocated to models of which the investigators have not yet heard, or which no one has yet articulated, in order for those models to be tested, passes muster at the statistical methodology corral.

To my mind, Bayesianism is a theory from some other world – infinite gambles, imagined prior distributions, models that disregard time or requirements for constructability,  unrealistic abstractions from actual scientific practice – not from our own.

So, how could the Bayesians have made as much headway as they have these last six decades? Perhaps it is due to an inherent pragmatism among statisticians – using whatever techniques work, without much regard to their underlying philosophy or any incoherence therein. Or perhaps the battle between the two schools of thought has simply been asymmetric: the Bayesians being more determined to prevail (in my personal experience, to the point of cultism and personal vitriol) than the adherents of frequentism. Greg Wilson’s 2001 PhD thesis explored this question, although without finding definitive answers.

Now, Andrew Gelman and the indefatigable Cosma Shalizi have written a superb paper, entitled “Philosophy and the practice of Bayesian statistics”.  Their paper presents another possible reason for the rise of Bayesian methods:  that Bayesianism, when used in actual practice, is most often a form of hypothesis-testing, and thus not as untethered to reality as the pure theory would suggest.  Their abstract:

A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism.  We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science.

Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
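The “model checking” the authors emphasize can be illustrated with a toy posterior predictive check (my sketch with invented data, not code from their paper):

```python
# A toy posterior predictive check (my own sketch, with invented data).
# Model: data ~ Normal(mu, 1) with a flat prior on mu, so the posterior for
# mu is approximately Normal(xbar, 1/n) -- a standard conjugate-style result.
import random
import statistics

random.seed(1)
observed = [2.1, 1.9, 2.3, 8.7, 2.0, 2.2]   # invented data with one outlier
n = len(observed)
xbar = statistics.mean(observed)

def replicated_dataset():
    mu = random.gauss(xbar, (1.0 / n) ** 0.5)   # draw mu from the posterior
    return [random.gauss(mu, 1.0) for _ in range(n)]

# Test statistic: the sample maximum. If the observed maximum is extreme
# relative to the replicated maxima, the model fails the check and needs revision.
t_obs = max(observed)
reps = [max(replicated_dataset()) for _ in range(5000)]
p_value = sum(r >= t_obs for r in reps) / len(reps)
print("posterior predictive p-value:", p_value)   # tiny => the outlier is flagged
```

The prior does real work here, but the verdict on the model comes from a frequentist-flavoured tail-area check, which is exactly the hypothetico-deductive reading Gelman and Shalizi give.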

References:
Andrew Gelman and Cosma Rohilla Shalizi [2010]: Philosophy and the practice of Bayesian statistics. Available from arXiv. Blog post here.

Gregory D. Wilson [2001]:   Articulation Theory and Disciplinary Change:  Unpacking the Bayesian-Frequentist Paradigm Conflict in Statistical Science.  PhD Thesis,  Rhetoric and Professional Communication Programme, New Mexico State University.  Las Cruces, NM, USA.  July 2001.

The glass bead game of mathematical economics

Over at the economics blog, A Fine Theorem, there is a post about economic modelling.
My first comment is that the poster misunderstands the axiomatic method in pure mathematics. It is not the case that “axioms are by assumption true”. Truth is a relationship between some language or symbolic expression and the world. Pure mathematicians using axiomatic methods make no assumptions about the relationship between their symbolic expressions of interest and the world. Rather, they deduce consequences from the axioms, as if those axioms were true, but without assuming that they are. How do I know they do not assume their axioms to be true? Because mathematicians often work with competing, mutually-inconsistent sets of axioms, for example when they consider both Euclidean and non-Euclidean geometries, or when looking at systems which assume the Axiom of Choice and systems which do not. Indeed, one could view parts of the meta-mathematical theory called Model Theory as the formal and deductive exploration of multiple, competing sets of axioms.
On the question of economic modeling, the blogger presents the views of Gerard Debreu on why the abstract mathematization of economics is something to be desired. One should also point out the very great dangers of this research program, some of which we are suffering now. The first is that people — both academic researchers and others — can become so intoxicated with the pleasures of mathematical modeling that they mistake the axioms and the models for reality itself. Arguably the widespread adoption of financial models assuming independent and normally-distributed errors was the main cause of the Global Financial Crisis of 2008, since the errors of complex derivative trades (such as credit default swaps) were neither independent nor as thin-tailed as Normal distributions are. The GFC led, inexorably, to the Great Recession we are all in now.
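To see how consequential the thin-tail assumption is, here is a small simulation of my own (not a model of any actual derivatives book) comparing tail losses under a Normal and under a fat-tailed Student-t distribution:

```python
# My own illustration (not a model of any actual trading book): how much
# likelier a 4-sigma loss is under a fat-tailed Student-t(3), rescaled to
# unit variance, than under a standard Normal.
import random

random.seed(0)
N = 1_000_000

def student_t3_unit_variance():
    # Standard construction: t = Z / sqrt(ChiSq_3 / 3); t(3) has variance 3,
    # so divide by sqrt(3) to compare like with like.
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))
    return (z / (chi2 / 3.0) ** 0.5) / 3.0 ** 0.5

def tail_freq(draws, threshold=4.0):
    return sum(x <= -threshold for x in draws) / len(draws)

normal_draws = [random.gauss(0.0, 1.0) for _ in range(N)]
fat_draws = [student_t3_unit_variance() for _ in range(N)]
print("P(loss beyond 4 sigma), Normal:", tail_freq(normal_draws))  # ~3e-5
print("P(loss beyond 4 sigma), t(3):  ", tail_freq(fat_draws))     # ~3e-3, ~100x larger
```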
Secondly, considered only as a research program, this approach has serious flaws. If you were planning to construct a realistic model of human economic behaviour in all its diversity and splendour, it would be very odd to start by modeling only that one very particular, and indeed pathological, type of behaviour exemplified by homo economicus, so-called rational economic man. Acting with infinite mental processing resources and time, with perfect knowledge of the external world, with perfect knowledge of his own capabilities, his own goals, his own preferences, and indeed his own internal knowledge, with perfect foresight or, if not, then with perfect knowledge of a measure of uncertainty overlaid on a pre-specified sigma-algebra of events, and completely unencumbered with any concern for others, with any knowledge of history, or with any emotions, homo economicus is nowhere to be found on any omnibus to Clapham. Starting economic theory with such a creature of fiction would be like building a general theory of human personality from a study only of convicted serial killers awaiting execution, or like articulating a general theory of evolution using only a handbook of British birds. Homo economicus is not where any reasonable researcher interested in modeling the real world would start in creating a theory of economic man.
And, even if this starting point were not on its face ridiculous, the fact that economic systems are complex adaptive systems should give economists great pause. Such systems are, typically, not continuously dependent on their initial conditions, meaning that a small change in input parameters can result in a large change in output values. In other words, you could have a model of economic man which was arbitrarily close to, but not identical with, homo economicus, and yet see wildly different behaviours between the two. Simply removing the assumption of infinite mental processing resources creates a very different economic actor from the assumed one, and consequently very different properties at the level of economic systems. Faced with such overwhelming non-continuity (and non-linearity), a naive person might expect economists to be humble about making predictions or giving advice to anyone living outside their models. Instead, we get an entire profession labeling those human behaviours which their models cannot explain as “irrational”.
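The logistic map is the standard classroom illustration of this sensitivity (it is not an economic model, just the generic phenomenon of sensitive dependence on initial conditions):

```python
# The logistic map at r = 4, a standard toy example of sensitive dependence
# on initial conditions: two starting points differing in the seventh decimal
# place diverge completely within a few dozen steps.
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.4000000)
b = trajectory(0.4000001)
for t in (0, 10, 30, 50):
    print(t, round(a[t], 4), round(b[t], 4))
# By step 30 or so the two trajectories are effectively uncorrelated.
```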
My anger at The Great Wen of mathematical economics arises because of the immorality this discipline evinces:   such significant and rare mathematical skills deployed, not to help alleviate suffering or to make the world a better place (as those outside Economics might expect the discipline to aspire to), but to explore the deductive consequences of abstract formal systems, systems neither descriptive of any reality, nor even always implementable in a virtual world.

Complex Decisions

Most real-world business decisions are considerably more complex than the examples presented by academics in decision theory and game theory.  What makes some decisions more complex than others? Here I list some features, not all of which are present in all decision situations.

  • The problems are not posed in a form amenable to classical decision theory.

    Decision theory requires the decision-maker to know what are his or her action-options, what are the consequences of these, what are the uncertain events which may influence these consequences, and what are the probabilities of these uncertain events (and to know all these matters in advance of the decision). Yet, for many real-world decisions, this knowledge is either absent, or may only be known in some vague, intuitive way. The drug thalidomide, for example, was tested thoroughly before it was sold commercially – on male and female human subjects, adults and children. The only group not to be tested were pregnant women, who were, unfortunately, the main group for which the drug had serious side effects. These side effects were consequences which had not been imagined before the decision to launch was made. Decision theory does not tell us how to identify the possible consequences of a decision, so what use is it in real decision-making?

  • There are fundamental domain uncertainties.

    None of us knows the future. Even with considerable investment in market research, future demand for new products may not be known because potential customers themselves do not know with any certainty what their future demand will be. Moreover, in many cases, we don’t know the past either. I have had many experiences where participants in a business venture have disagreed profoundly about the causes of failure, or even success, and so have taken very different lessons from the experience.

  • Decisions may be unique (non-repeated).

    It is hard to draw on past experience when something is being done for the first time. This does not stop people trying, and so decision-making by metaphor or by anecdote is an important feature of real-world decision-making, even though mostly ignored by decision theorists.

  • There may be multiple stakeholders and participants to the decision.

    In developing a business plan for a global satellite network, for example, a decision-maker would need to take account of the views of a handful of competitors, tens of major investors, scores of minor investors, approximately two hundred national and international telecommunications regulators, a similar number of national company law authorities, scores of upstream suppliers (eg equipment manufacturers), hundreds of employees, hundreds of downstream service wholesalers, thousands of downstream retailers, thousands or millions of shareholders (if listed publicly), and millions of potential customers. To ignore or oppose the views of any of these stakeholders could doom the business to failure. As it happens, Game Theory isn’t much use with this number and complexity of participants. Moreover, despite the view commonly held in academia, most large Western corporations operate with a form of democracy. (If the opinions of intelligent, capable staff are regularly over-ridden, these staff will simply leave, so competition ensures democracy. In addition, good managers know that decisions unsupported by their staff will often be executed poorly, so the success of a decision may depend on the extent to which staff believe it has been reached fairly.) Accordingly, all major decisions are decided by groups or teams, not at the sole discretion of an individual. Decision theorists, it seems to me, have paid insufficient attention to group decisions: we hear lots about Bayesian decision theory, but where, for example, is the Bayesian theory of combining subjective probability assessments? (A sketch of one simple pooling rule appears after this list.)

  • Domain knowledge may be incomplete and distributed across these stakeholders.
  • Beliefs, goals and preferences of the stakeholders may be diverse and conflicting.
  • Beliefs, goals and preferences of stakeholders, the probabilities of events and the consequences of decisions, may be determined endogenously, as part of the decision process itself.

    For instance, economists use the term network good to refer to a good where one person’s utility depends on the utility of others. A fax machine is an example, since being the sole owner of a fax machine is of little value to a consumer. Thus, a rational consumer would determine his or her preferences for such a good only AFTER learning the preferences of others. In other words, rational preferences are determined only in the course of the decision process, not beforehand. Having considerable experience in marketing, I contend that ALL goods and services have a network-good component. Even so-called commodities, such as natural resources or telecommunications bandwidth, have demand which is subject to fashion and peer pressure. “You can’t get fired for buying IBM,” was the old saying. And an important function of advertising is to allow potential consumers to infer the likely preferences of other consumers, so that they can then determine their own preferences. If an advertisement appeals to people like me, or people to whom I aspire to be like, then I can infer that those others are likely to prefer the product being advertised, and thus I can determine my own preferences for it. Similarly, if the advertisement appeals to people I don’t aspire to be like, then I can infer that I won’t be subject to peer pressure or fashion trends, and can determine my preferences accordingly.
    This is commonsense to marketers, even if heretical to many economists.

  • The decision-maker may not fully understand what actions are possible until he or she begins to execute.
  • Some actions may change the decision-making landscape, particularly in domains where there are many interacting participants.

    A bold announcement by a company to launch a new product, for example, may induce competitors to follow and so increase (or decrease) the chances of success. For many goods, an ecosystem of critical size may be required for success, and bold initiatives may act to create (or destroy) such ecosystems.

  • Measures of success may be absent, conflicting or vague.
  • The consequences of actions, including their success or failure, may depend on the quality of execution, which in turn may depend on attitudes and actions of people not making the decision.

    Most business strategies are executed by people other than those who developed or decided the strategy. If the people undertaking the execution are not fully committed to the strategy, they generally have many ways to undermine or subvert it. In military domains, the so-called Powell Doctrine, named after former US Secretary of State Colin Powell, says that foreign military actions undertaken by a democracy may only be successful if these actions have majority public support. (I have written on this topic before.)

  • As a corollary of the previous feature, success of an action may require extensive and continuing dialog with relevant stakeholders, before, during and after its execution.

    This is not news to anyone in business.

  • Success may require pre-commitments before a decision is finally taken.

    In the 1990s, many telecommunications companies bid for national telecoms licences in foreign countries. Often, an important criterion used by the Governments awarding these licences was how quickly each potential operator could launch commercial service. To ensure that they could launch service quickly, some bidders resorted to making purchase commitments with suppliers and even installing equipment ahead of knowing the outcome of a bid, and even ahead, in at least one case I know, of deciding whether or not to bid.

  • The consequences of decisions may be slow to realize.

    Satellite mobile communications networks have typically taken ten years from serious inception to launch of service. The oil industry usually works on 50+ year cycles for major investment projects. BP is currently suffering the consequences in the Gulf of Mexico of what appears to be a decades-long culture which de-emphasized safety and adequate contingency planning.

  • Decision-makers may influence the consequences of decisions and/or the measures of success.
  • Intelligent participants may model each other in reaching a decision, what I term reflexivity.

    As a consequence, participants are not only reacting to events in their environment, they are anticipating events and the reactions and anticipations of other participants, and acting proactively in response to these anticipated events and reactions. Traditional decision theory ignores this. Following Nash, traditional game theory has modeled the outcomes of one such reasoning process, but not the processes themselves. Evolutionary game theory may prove useful for modeling these reasoning processes, although assuming a sequence of identical, repeated interactions does not strike me as an obvious way to model a process of reflexivity. This problem still awaits its Nash.
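As promised above, here is a sketch of the linear opinion pool, one simple rule from the forecast-combination literature for merging subjective probabilities (the stakeholder names, weights and probabilities are all invented). It is a pragmatic device, not the missing Bayesian theory of group belief:

```python
# A linear opinion pool (one simple rule from the forecast-combination
# literature, not a full Bayesian theory of group belief): the group
# probability is a weighted average of each stakeholder's subjective
# probability. All names, weights and probabilities below are invented.
experts = {"engineer": 0.20, "marketer": 0.65, "cfo": 0.40}   # P(success) per expert
weights = {"engineer": 0.5,  "marketer": 0.3,  "cfo": 0.2}    # assumed credibilities

assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights must sum to one
pooled = sum(weights[k] * experts[k] for k in experts)
print("pooled P(success):", pooled)   # 0.5*0.20 + 0.3*0.65 + 0.2*0.40 = 0.375
```

Note that the weights themselves beg the very question at issue: who decides the credibility of each stakeholder, and by what process?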

In my experience, classical decision theory and game theory do not handle these features very well; in some cases, indeed, not at all.  I contend that a new theory of complex decisions is necessary to cope with decision domains having these features.

Vale: Don Day

This post is to mark the passing of Don Day (1924-2010), former member of the New South Wales Legislative Assembly (the so-called “Bearpit”, roughest of Australia’s 15 parliamentary assemblies) and former NSW Labor Minister. I knew Don when he was my local MLA in the 1970s and 1980s, when he won a seat in what was normally ultra-safe Country Party (now National Party) country – first the electorate of Casino, and then Clarence. Indeed, he was for a time the only Labor MLA in the 450 miles of the state north of Newcastle. His win was repeated several times, and his seat was crucial to Neville Wran’s surprise one-seat majority in May 1976, returning Labor to power in NSW after 11 years in opposition, and after a searing loss in the Federal elections of December 1975.

In his role as Minister for Primary Industries and Decentralisation, Don was instrumental in saving rural industries throughout NSW.   Far North Coast dairy farmers were finally allowed to sell milk to Sydney households, for example, breaking the quota system, a protectionist economic racket which favoured only a minority of dairy farmers and which was typical of the crony-capitalist policies of the Country Party.  Similarly, his actions saved the NSW sugar industry from closure.   NSW Labor’s rural policies were (and still are) better for the majority of people in the bush than those of the bush’s self-proclaimed champions.

Like many Labor representatives of his generation, Don Day had fought during WW II, serving in the RAAF.  After the war, he established a small business in Maclean.   He was one of the most effective meeting chairmen I have encountered:  He would listen carefully and politely to what people were saying, summarize their concerns fairly and dispassionately (even when he was passionate himself on the issues being discussed), and was able to identify quickly the nub of an issue or a way forward in a complex situation.  He could usually separate his assessment of an argument from his assessment of the person making it, which helped him be dispassionate.  Although The Grafton Daily Examiner has an obit here, I doubt he will be remembered much elsewhere on the web, hence this post.

Update (2010-06-12): SMH obit is here.