Retroflexive decision-making

How do companies make major decisions?  The gurus of classical Decision Theory – people like economist Jimmie Savage and statistician Dennis Lindley – tell us that there is only one correct way to make decisions:  List all the possible actions, list the potential consequences of each action, assign utilities  and probabilities of occurrence to each consequence, multiply these numbers together for each consequence and then add the resulting products for each action to get an expected utility for each action, and finally choose that action which maximizes expected utility.
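The classical recipe is mechanical enough to state in a few lines of code. Here is a minimal sketch; the actions, consequences, probabilities and utilities are invented for illustration, not drawn from any real decision.

```python
# Classical expected-utility decision rule, as prescribed by Savage and Lindley.
# All numbers here are hypothetical.

actions = {
    # each action maps to a list of (probability, utility) pairs,
    # one pair per potential consequence
    "launch_product": [(0.3, 100.0), (0.5, 20.0), (0.2, -50.0)],
    "do_nothing":     [(1.0, 0.0)],
}

def expected_utility(consequences):
    """Multiply probability by utility for each consequence, then sum."""
    return sum(p * u for p, u in consequences)

# finally, choose the action that maximizes expected utility
best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Notice how the catastrophic -50 consequence simply disappears into the weighted sum: the single summary number is exactly what hides the downside, a failing discussed later in this post.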
There are many, many problems with this model, not least that it is not what companies – or intelligent, purposive individuals for that matter – actually do.  Those who have worked in companies know that nothing so simplistic or static describes intelligent, rational decision-making, nor should it.  Moreover, that their model was flawed as a description of reality was known at the time to Savage, Lindley, et al., because it was pointed out to them six decades ago by people such as George Shackle, an economist who had actually worked in industry and who drew on his experience.  The mute, autistic behemoth that is mathematical economics, however, does not stop or change direction merely because its utter disconnection from empirical reality is noticed by someone, and so – TO THIS VERY DAY – students in business schools still learn the classical theory.  I guess for the students it's a case of:  Who are we going to believe – our textbooks, or our own eyes?  From my first year as an undergraduate taking Economics 101, I had trouble believing my textbooks.
So what might be a better model of decision-making?  First, we need to recognize that corporate decision-making is almost always something dynamic, not static – it takes place over time, not in a single stage of analysis, and we would do better to describe a process, rather than just giving a formula for calculating an outcome.   Second, precisely because the process is dynamic, many of the inputs assumed by the classical model do not exist, or are not known to the participants, at the start, but emerge in the course of the decision-making process.   Here, I mean things such as:  possible actions, potential consequences, preferences (or utilities), and measures of uncertainty (which may or may not include probabilities).     Third, in large organizations, decision-making is a group activity, with inputs and comments from many people.   If you believe – as Savage and Lindley did – that there is only one correct way to make a decision, then your model would contain no scope for subjective inputs or stakeholder revisions, which is yet another of the many failings of the classical model.    Fourth, in the real world, people need to consider – and do consider – the potential downsides as well as the upsides of an action, and they need to do this – and they do do this – separately, not merged into a summary statistic such as “utility”.   So, if  one possible consequence of an action-option is catastrophic loss, then no amount of maximum-expected-utility quantitative summary gibberish should permit a rational decision-maker to choose that option without great pause (or insurance).   Shackle knew this, so his model considers downsides as well as upsides.   That Savage and his pals ignored this one can only assume is the result of the impossibility of catastrophic loss ever occurring to a tenured academic.
So let us try to articulate a staged process for what companies actually do when they make major decisions, such as major investments or new business planning:

  1. Describe the present situation and the way or ways it may evolve in the future.  We call these different future paths scenarios.   Making assumptions about the present and the future is also called taking a view.
  2. For each scenario, identify a list of possible actions, able to be executed under the scenario.
  3. For each scenario and action, identify the possible upsides and downsides.
  4. Some actions under some scenarios will have attractive upsides.   What can be done to increase the likelihood of these upsides occurring?  What can be done to make them even more attractive?
  5. Some actions under some scenarios will have unattractive downsides.   What can be done to eliminate these downsides altogether or to decrease their likelihood of occurring?   What can be done to ameliorate, to mitigate, to distribute to others, or to postpone the effects of these downsides?
  6. In the light of what was learned in doing steps 1-5, go back to step 1 and repeat it.
  7. In the light of what was learned in doing steps 1-6, go back to step 2 and repeat steps 2-5.  For example, by modifying or combining actions, it may be possible to shift attractive upsides or unattractive downsides from one action to another.
  8. As new information comes to hand, occasionally repeat step 1. Repeat step 7 as often as time permits.
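For concreteness, the staged process above can be caricatured as a loop. Everything here is a toy: the scenario names, actions and scores are invented, and each helper stands in for human judgement, not anything genuinely computable.

```python
# Toy sketch of steps 1-8. Scenarios, actions and scores are hypothetical.

def assess(scenario, action):
    """Step 3: invented (upside, downside) scores for each pair."""
    table = {
        ("boom", "expand"): (80, -20),
        ("boom", "hold"):   (20,   0),
        ("bust", "expand"): (30, -90),
        ("bust", "hold"):   ( 5, -10),
    }
    return table[(scenario, action)]

def plan(rounds=2):
    scenarios = ["boom", "bust"]                  # step 1: take a view
    options = []
    for _ in range(rounds):                       # steps 6-8: revisit and repeat
        options = []
        for s in scenarios:
            for a in ("expand", "hold"):          # step 2: actions per scenario
                up, down = assess(s, a)           # step 3
                up = up + 5                       # step 4: strengthen upsides (toy)
                down = min(down + 10, 0)          # step 5: mitigate downsides (toy)
                # upsides and downsides are carried separately, never
                # collapsed into a single "utility" number
                options.append((s, a, up, down))
    return options

options = plan()
```

The point of the sketch is structural: assumptions, options and assessments are revised on each pass, and downsides survive to the end as their own quantity rather than being averaged away.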

This decision process will be familiar to anyone who has prepared a business plan for a new venture, whether for personal investment, for financial investors and bankers, or for business partners.   Having access to spreadsheet software such as Lotus 1-2-3 or Microsoft Excel has certainly made the process easier to undertake.  But, contrary to the beliefs of many, people made major decisions before the invention of spreadsheets, and they did so using processes similar to this, as Shackle's work evidences.
Because this model involves revision of initial ideas in repeated stages, it bears some resemblance to the retroflexive argumentation theory of philosopher Harald Wohlrapp.  Hence, I call it Retroflexive Decision Theory.  I will explore this model in more detail in future posts.
References:
D. Lindley [1985]:  Making Decisions.  Second Edition. London, UK: John Wiley and Sons.
L. J. Savage [1954]: The Foundations of Statistics.  New York, NY, USA:  Wiley.
G. L. S. Shackle [1961]: Decision, Order and Time in Human Affairs. Cambridge, UK:  Cambridge University Press.
H. Wohlrapp [1998]:  A new light on non-deductive argumentation schemes.  Argumentation, 12: 341-350.

Epideictic arguments

Suppose you are diagnosed with a serious medical condition, and you seek advice from two doctors.  The first doctor, let’s call him Dr Barack, says that there are three possible courses of treatment.   He labels these courses, A, B and C, and then proceeds to walk you methodically through each course – what separate basic procedures are involved, in what order, with what likely side effects, and with what costs and durations, what chances of success or failure, and what likely survival rates.   He finishes this methodical exposition by summing up each treatment, with pithy statements such as, “Course A is the cheapest and most proven.  Course B is an experimental treatment, which makes it higher risk, but it may be the most effective.  Course C . . .” etc.
The other doctor, let’s call him Dr John, in contrast talks in a manner which is apparently lacking all structure. He begins a long, discursive narrative about the many different basic procedures possible, not in any particular order, jumping back and forth between these as he focuses first on the costs of procedures, then switching to their durations, then back again to costs, then onto their expected side effects, with tangential discussions in the middle about the history of the experimental tests undertaken of one of the procedures and about his having suffered torture while a POW in Vietnam, etc, etc.  And he does all this without any indication that some procedures are part of larger courses of treatment, or are even linked in any way, and speaking without using any patient-friendly labelling or summarizing of the decision-options.
Which doctor would you choose to treat you?  If this description were all you knew, then Doctor Barack would appear to be the much better organized of the two.   Most of us would have more confidence being treated by a doctor who sounds well-organized, who appears to know what he is doing, than by one who sounds disorganized.   More importantly, it is also evident that Doctor Barack knows how to structure what he knows into a coherent whole, into a form which makes his knowledge easier to transmit to others, easier for a patient to understand, and which also facilitates the subsequent decision-making by the patient.  We generally have more confidence in the underlying knowledge and expertise of people able to explain their knowledge and expertise well than in those who cannot.
If we reasoned this way, we would be choosing between the two doctors on the basis of their different rhetorical styles:  we would be judging the contents of their arguments (in this case, the content is their ability to provide us with effective treatment) on the basis of the styles of their arguments.  Such reasoning processes, which use form to assess content, are called epideictic, as are arguments which draw attention to their own style.
Advertising provides many examples of epideictic arguments, particularly in cultures where the intended target audience is savvy regarding the form of advertisements.  In Britain, for instance, the film director Michael Winner starred in a series of TV advertisements for an insurance company in which the actors pretending to be real people giving endorsements revealed that they were just actors, pretending to be real people giving endorsements.   This was a glimpse behind the curtain of theatrical artifice, with the actors themselves pulling back the curtain.  Why do this?  Well, self-reference only works with a knowledgeable audience, perhaps so knowledgeable that they have even grown cynical with the claims of advertisers.   By winking at the audience, the advertisers are colluding with this cynicism, saying to the audience, “we know you think this and we agree, so our advert is pitched to you, you cynical sophisticates, not to those others who don’t get it.”
The world-weary tone of the narration of Apple's "Future" series of adverts is another example of advertisements which knowingly direct our attention to their own style.
Apple Future Advertisement – Robots
And Dr Barack and Dr John?  One argument against electing Senator Obama to the US Presidency was that he lacked executive experience.  A counter-argument, made even by the good Senator Obama himself, was that he demonstrated his executive capabilities through the competence, professionalism and effectiveness of his management of his own campaign.   This is an epideictic argument.
There is nothing necessarily irrational or fallacious about such arguments or such modes of reasoning; indeed, it is often the case that the only relevant information available for a decision on a claim of substantive content is the form of the claim.   Experienced investors in hi-tech start-ups, for example, know that the business plan they are presented with is most unlikely to be implemented, because the world changes too fast and too radically for any plan to endure.   A key factor in the decision to invest must therefore be an assessment of the capability of the management team to adjust the business plan to changing circumstances, from recognizing that circumstances have in fact changed, to acting quickly and effectively in response, through to evaluating the outcomes.   How to assess this capability for decision-making robustness?  Well, one basis is the past experience of the team.  But experience may well hinder managerial flexibility rather than facilitate it, especially in a turbulent environment.  Another way to assess this capability is to subject the team to a stress test – contesting the assumptions and reasoning of the business plan, being unreasonable in questions and challenges, prodding and poking and provoking the team to see how well and how quickly they can respond, in real time, without preparation.   In all of this, a decision on the substance of the investment is being made from evidence about the form – about how well the management team responds to such stress testing.   This is perfectly rational, given the absence of any other basis on which to make a decision and given our imperfect knowledge of the future.
Likewise, an assessment of Senator Obama's capabilities for high managerial office on the basis of his competence at managing his campaign was also eminently rational and perfectly justifiable.   The incoherent nature of Senator McCain's campaign and the panic-stricken, erratic manner in which he responded to surprising events (such as the financial crisis of September 2008) were similarly an indication of his likely style of government; the style here did not produce confidence in the content.  For many people, the choice between candidates in the US Presidential campaign was an epideictic one.
POSTSCRIPT (2011-12-14):
Over at Normblog, Norm has a nice example of epideictic reasoning:  deciding between two arguments on the basis of how the arguments were made (presented), rather than by their content.  As always with such reasoning – and contrary to much educated opinion – such reasoning can be perfectly rational, as is the case here.
PS2 (2016-09-05): 
John Lanchester, in a review of a book about investor activism, gives a nice example of attempting to influence people's opinions using epideictic means: Warren Buffett's annual letters to investors in Berkshire Hathaway:

Even the look of the letters – deliberately plain to the point of hokiness, with old-school fonts and layout hardly changed in fifty years – is didactic. The message is: no flash here, only substance. Go to the company's Web site, arguably the ugliest in the world, and you are greeted by "A Message from Warren E. Buffett" telling you that he doesn't make stock recommendations but that you will save money by insuring your car with GEICO and buying your jewelry from Borsheims. (page 78)

PS3 (2017-04-02):
Dale Russakoff, in a New Yorker profile of now-Senator Cory Booker, says:

Over lunch at Andros Diner, Booker told me that [fellow Yale Law School student Ed] Nicoll taught him an invaluable lesson: “Investors bet on people, not on business plans, because they know successful people will find a way to be successful.” (page 60)
 

Refs and Acks
The medical example is due to William Rehg.
John Lanchester [2016]: Cover letter. New Yorker, 5 September 2016, pp.76-79.
William Rehg [1997]: Reason and rhetoric in Habermas's theory of argumentation,  pp. 358-377 in:  W. Jost and M. J. Hyde (Editors): Rhetoric and Hermeneutics in Our Time: A Reader. New Haven, CT, USA: Yale University Press.
Dale Russakoff [2014]: Schooled. The New Yorker, 19 May 2014, pp. 58-73.

Kaleidic moments

Is the economy like a pendulum, gradually oscillating around a fixed point until it reaches a static equilibrium?  This metaphor, borrowed from Newtonian physics, still dominates mainstream economic thinking and discussion.  Not all economists have agreed, not least because the mechanistic Newtonian viewpoint seems to allow no place for new information to arrive or for changes in subjective responses.   The 20th-century economists George Shackle and Ludwig Lachmann, for example, argued that a much more realistic metaphor for the modern economy is a kaleidoscope.  The economy is a “kaleidic society, interspersing its moments or intervals of order, assurance and beauty with sudden disintegration and a cascade into a new pattern.” (Shackle 1972, p.76).
The arrival of new information, or changes in the perceptions and actions of marketplace participants, or changes in their subjective beliefs and intentions, are the events which trigger these sudden disruptions and discontinuous realignments.   Recent events in the financial markets show we are in a kaleidic moment right now.  If there’s an invisible hand, it’s not holding a pendulum but busy shaking the kaleidoscope.
Reference:
George L. S. Shackle [1972]: Epistemics and Economics.  Cambridge, UK:  Cambridge University Press.
 

Banking on Linda

Over at “This Blog Sits”, Grant McCracken has a nice post about a paradigm example often used in mainstream economics to chastise everyday human reasoners. A nice discussion has developed. I thought to re-post one of my comments, which I do here:

“The first point — which should be obvious to anyone who deals professionally with probability, but often seems not — is that the answer to a problem involving uncertainty depends very crucially on its mathematical formulation. We are given a situation expressed in ordinary English words and asked to use it to make a judgment. The probability theorists have arrived at a way of translating such situations from natural human language into a formal mathematical language, and using this formalism, to arrive at an answer to the situation which they deem correct. However, natural language may be imprecise (as in the example, as gek notes). Imprecision of natural language is a key reason for attempting a translation into a formal language, since doing so can clarify what is vague or ambiguous. But imprecision also means that there may be more than one reasonable translation of the same problem situation, even if we all agreed on what formal language to use and on how to do the translation. There may in fact be more than one correct answer.
There is much of background relevance here that may not be known to everyone.  First, note that it took some 270 years from the first mathematical formulations of uncertainty using probability (in the 1660s) to reach a sort-of consensus on a set of mathematical axioms for probability theory (the standard axioms, due to Andrei Kolmogorov, in 1933).   By contrast, the differential calculus, invented at about the same time as probability in the 17th century, was already rigorously formalized (using epsilon-delta arguments) by the mid-19th century.   Dealing formally with uncertainty is hard, and intuitions differ greatly, even for the mathematically adept.
Second, even now, the Kolmogorov axioms are not uncontested. Although it often comes as a surprise to statisticians and mathematicians, there is a whole community of intelligent, mathematically-adept people in Artificial Intelligence who prefer to use alternative formalisms to probability theory, at least for some problem domains. These alternatives (such as Dempster-Shafer theory and possibility theory) are preferred to probability theory because they are more expressive (more situations can be adequately represented) and because they are easier to manipulate for some types of problems than probability theory. Let no one believe, then, that probability theory is accepted by every mathematically-adept expert who works with uncertainty.
Historical aside: In fact, ever since the 1660s, there has been a consistent minority of people dissenting from the standard view of probability theory, a minority which has mostly been erased from the textbooks. Typically, these dissidents have tried unsuccessfully to apply probability theory to real-world problems, such as those encountered by judges and juries (eg, Leibniz in the 17th century), doctors (eg, von Kries in the 19th), business investors (eg, Shackle in the 20th), and now intelligent computer systems (since the 1970s). One can have an entire university education in mathematical statistics, as I did, and never hear mention of this dissenting stream. A science that was confident of its own foundations would surely not need to suppress alternative views.
Third, intelligent, expert, mathematically-adept people who work with uncertainty do not even yet agree on what the notion of "probability" means, or to what it may validly apply. Donald Gillies, a professor of philosophy at the University of London, wrote a nice book, Philosophical Theories of Probability, which outlines the main alternative interpretations. A key difference of opinion concerns the scope of probability expressions (eg, over which types of natural language statements may one validly apply the translation mechanism). Note that Gillies wrote his book almost 70 years after Kolmogorov's axioms. In addition, there are other social or cultural factors, usually ignored by mathematically-adept experts, which may inform one's interpretations of uncertainty and probability. A view that the universe is deterministic, or that one's spiritual fate is pre-determined before birth, may be inconsistent with any of these interpretations of uncertainty, for instance. I have yet to see a Taoist theory of uncertainty, but I am sure it would differ from anything developed so far.
I write this comment to give some context to our discussion. Mainstream economists and statisticians are fond of castigating ordinary people for being confused or for acting irrationally when faced with situations involving uncertainty, merely because the judgements of ordinary people do not always conform to the Kolmogorov axioms and the deductive consequences of these axioms. It is surely unreasonable to cast such aspersions when experts themselves disagree on what probability is, to what statements probabilities may be validly applied, and on how uncertainty should be formally represented."
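As one concrete instance of what "deductive consequences of the axioms" means here, consider the conjunction rule – presumably the rule at issue in the Linda example named in this post's title: under any Kolmogorov-consistent assignment, a conjunction can never be more probable than either of its conjuncts. A brute-force check over a grid of candidate joint distributions, offered purely as an illustration:

```python
# Brute-force check of the conjunction rule P(A and B) <= P(A): it holds for
# every valid probability assignment over two events, here verified on a
# coarse grid of candidate joint distributions.

from itertools import product

grid = [i / 20 for i in range(21)]
violations = 0
checked = 0
for p_ab, p_a_not_b in product(grid, repeat=2):
    if p_ab + p_a_not_b > 1.0:
        continue                       # probabilities must sum to at most 1
    checked += 1
    p_a = p_ab + p_a_not_b             # P(A) = P(A and B) + P(A and not-B)
    if p_ab > p_a + 1e-12:
        violations += 1

# violations stays 0: the rule is forced by the axioms themselves,
# not by any empirical claim about the world
```

The rule is a theorem of the formalism; whether a given natural-language judgement is properly translated into that formalism is exactly the contested step discussed above.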
Reference:
Donald Gillies [2000]: Philosophical Theories of Probability.  London, UK: Routledge.