In an IHT op-ed on puns, Joseph Tartakovsky mentions Richard Whately (1787-1863), Church of Ireland Archbishop of Dublin and former Oxford Professor of Political Economy, as being renowned for his puns. Whately was possibly the first person to represent an argument using a diagram, in his 1826 logic textbook, Elements of Logic. Philosopher Tim van Gelder has a reproduction of Whately’s diagram here.
How nice to be remembered almost 150 years after one’s death for both one’s wit and one’s visual thinking processes.
References:
Chris Reed, Douglas Walton and Fabrizio Macagno [2007]: Argument diagramming in logic, law and artificial intelligence. The Knowledge Engineering Review, 22 (1): 87-109.
Joseph Tartakovsky [2009]: Pun for the ages. International Herald Tribune. 28 March 2009.
Richard Whately [1826]: Elements of Logic. London, UK: Longmans, Green and Company, 1913. First published 1826.
On prophecy
“They know not what to make of the Words, little time, speedily, shortly, suddenly, soon. They would have me define the Time, in the Prophecies of my ancient Servants. Yet those Predictions carried in them my authority, and were fulfilled soon enough, for those that suffered under them . . . I have seen it best, not to assign the punctual Times, by their Definition among Men; that I might keep Men always in their due distance, and reverential Fear of invading what I reserve, in secret, to myself . . . The Tower-Guns are the Tormenta e Turre aethera, with which this City I have declared should be battered . . . I have not yet given a Key to Time in this Revelation.”
John Lacy, explaining to his followers among a millenarian French Huguenot sect in Britain in 1707 why his prophecies had not yet been fulfilled, cited in Schwartz 1980, p. 99.
Reference:
Hillel Schwartz [1980]: The French Prophets: The History of a Millenarian Group in Eighteenth-Century England. Berkeley, CA, USA: University of California Press.
Writing as thinking
Anyone who has done any serious writing knows that the act of writing is a form of thinking. Formulating vague ideas and half-articulated concepts into coherent, reasoned, justified, well-defended written arguments is not merely the reporting of thinking but is indeed the very doing of thinking. Michael Gerson, former policy advisor and chief speech-writer to President George W. Bush, has a nice statement of this view, in an article in the Washington Post defending President Barack Obama’s use of teleprompters, here. An excerpt:
“For politicians, the teleprompter has always been something of an embarrassing vice — the political equivalent of purchasing cigarettes, Haagen-Dazs and a Playboy at the convenience store.
This derision is based on the belief that the teleprompter exaggerates the gap between image and reality — that it involves a kind of deception. It is true that there is often a distinction between a president on and off his script. With a teleprompter, Obama can be ambitiously eloquent; without it, he tends to be soberly professorial. Ronald Reagan with a script was masterful; during news conferences he caused much wincing and cringing. It is the rare politician, such as Tony Blair, who speaks off the cuff in beautifully crafted paragraphs.
But it is a mistake to argue that the uncrafted is somehow more authentic. Those writers and commentators who prefer the unscripted, who use “rhetoric” as an epithet, who see the teleprompter as a linguistic push-up bra, do not understand the nature of presidential leadership or the importance of writing to the process of thought.
Governing is a craft, not merely a talent. It involves the careful sorting of ideas and priorities. And the discipline of writing — expressing ideas clearly and putting them in proper order — is essential to governing. For this reason, the greatest leaders have taken great pains with rhetoric. Lincoln continually edited and revised his speeches. Churchill practiced to the point of memorization. Such leaders would not have been improved by being “unplugged.” When it comes to rhetoric, winging it is often shoddy and self-indulgent — practiced by politicians who hear Mozart in their own voices while others perceive random cymbals and kazoos. Leaders who prefer to speak from the top of their heads are not more authentic, they are often more shallow — not more “real,” but more undisciplined.
. . .
The speechwriting process that puts glowing words on the teleprompter screen serves a number of purposes. Struggling over the precise formulations of a text clarifies a president’s own thinking. It allows others on his staff to have input — to make their case as a speech is edited. The final wording of a teleprompter speech often brings internal policy debates to a conclusion. And good teamwork between a president and his speechwriters can produce memorable rhetoric — the kind of words that both summarize a historical moment and transform it.”
Anyone (and this includes most everybody in management consulting) who has tried to achieve a team consensus over some issue knows the truth of this last paragraph. The writing of a jointly-agreed text or presentation enables different views to surface, to be identified, and to be accommodated (or explicitly set aside). Just as writing is a form of thinking, developing team presentations is a form of group cognition and group co-ordination.
Of quacking ducks and homeostasis
After reading a very interesting essay (PDF) by biologist J. Scott Turner discussing Intelligent Design (ID) and Evolution which presents an anti-anti-ID case, I was led to read Turner’s recent book, “The Tinkerer’s Accomplice: How Design Emerges from Life Itself”. Turner argues that Darwinian Evolution requires, but lacks, a notion of intentionality. Despite the use of an apparently teleological concept, he is no creationist: he argues that both Evolutionary theorists (who refuse to consider any such notions) and Creationists/IDers (who have such a notion, but refuse to examine it scientifically) are missing something important and necessary.
Turner’s key notion is that biological and ecological systems contain entities who create environments and seek to regulate them. Typically, such entities seek to maintain their environment in a particular state, i.e., they aim for environmental homeostasis. The concept of homeostasis is due to the French pioneer of physiology, Claude Bernard (1813-1878), who observed that the human body and its various organs seek to maintain various homeostatic states internally, for example, the chemical composition of the blood stream. That indefatigable complex systems theorist and statistician Cosma Shalizi has thus proposed calling entities which create and regulate environments, Bernard Machines, and Turner also uses this name. (Turner credits Shalizi for the name but provides no citation to anything written by Shalizi, not even a URL — I think this very unprofessional of Turner.)
For Turner, these entities have some form of intentionality, and thus provide the missing component of Darwinian evolution. For computer scientists, at least those who have kept up with research since 1990, a Bernard Machine is just an intelligent agent: such agents are reactive (they respond to changes in their environment), pro-active (i.e., goal-directed), and autonomous (in that they may decide, within some parameters, how, when, and whether to act). Some Bernard Machines may also have a sense of sociality, i.e., awareness of the existence of other agents in their environment, to complete the superfecta of the now-standard definition of agenthood due to Wooldridge and Jennings (1995).
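To make the analogy concrete, here is a minimal sketch in Python of a Bernard Machine as a homeostatic agent. Everything here is illustrative: the class name, the thermostat-style set-point example, and the simple proportional correction are my own choices, not anything proposed by Turner or Shalizi.

```python
import random

class BernardMachine:
    """A minimal homeostatic agent: it senses its environment and acts
    to keep a variable near a set-point (hypothetical example)."""

    def __init__(self, setpoint: float, tolerance: float):
        self.setpoint = setpoint    # the state the agent tries to maintain
        self.tolerance = tolerance  # deviation it is willing to ignore

    def decide(self, reading: float) -> float:
        """Return a corrective action. Autonomy: the agent may choose
        to do nothing when the deviation is within tolerance."""
        error = self.setpoint - reading
        if abs(error) <= self.tolerance:
            return 0.0              # goal satisfied; no action taken
        return 0.5 * error          # reactive: push back toward set-point

# A toy environment that drifts randomly; the agent regulates it.
temperature = 25.0
agent = BernardMachine(setpoint=20.0, tolerance=0.5)
for step in range(10):
    temperature += random.uniform(-1.0, 1.0)   # environmental disturbance
    temperature += agent.decide(temperature)   # agent's corrective action
    print(f"step {step}: {temperature:.2f}")
```

Even this toy agent exhibits the first three properties: it reacts to disturbances, it pursues the goal of maintaining its set-point, and it autonomously decides when a deviation is too small to be worth acting on.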
I understand that the more materialist biologists become agitated at any suggestion of non-human entities possibly having anything like intentionality (a concept with teleological or spiritual connotations, apparently), and thus they question whether goal-directedness can in fact be said to be the same as intentionality. But this argument is exactly like the one we witnessed over the last two decades in computer science over the concept of autonomy of software systems: If it looks like a duck, walks like a duck, and quacks like a duck, there is nothing to be gained, either in practice or in theory, by insisting that it isn’t really a duck. Indeed, as software agent people know very well (see Wooldridge 2000), one cannot ever finally verify the internal states of agents (or Bernard machines, or indeed ducks, for that matter), since any sufficiently clever software developer can design an agent with any required internal state. Indeed, the cleverest software developers can even design agents themselves sufficiently clever to be able to emulate insincerely, and wittingly insincerely, any required internal states.
POSTSCRIPT: Of course, with man-made systems such as economies and societies, we cannot assume all agents are homeostatic; some may simply seek to disrupt the system. For computational systems, we cannot even assume all agents always act in their own self-interest (however they perceive that), since they may simply have buggy code.
References:
J. Scott Turner [2007]: Signs of design. The Christian Century, June 12, 2007, 124: 18-22. Reprinted in: Jimmy Carter and Philip Zaleski (Editors): Best American Spiritual Writing 2008. Houghton Mifflin.
J. Scott Turner [2007]: The Tinkerer’s Accomplice: How Design Emerges from Life Itself. Cambridge, MA, USA: Harvard University Press.
Michael J. Wooldridge [2000]: Semantic issues in the verification of agent communication languages. Journal of Autonomous Agents and Multi-Agent Systems, 3 (1): 9-31.
Michael J. Wooldridge and Nicholas R. Jennings [1995]: Intelligent agents: theory and practice. The Knowledge Engineering Review, 10 (2): 115-152.
Evaluating prophecy
With the mostly-unforeseen global financial crisis uppermost in our minds, I am led to consider a question that I have pondered for some time: How should we assess forecasts and prophecies? Within the branch of philosophy known as argumentation, a lot of attention has been paid to the conditions under which a rational decision-maker would accept particular types of argument.
For example, although it is logically invalid to accept an argument only on the grounds that the person making it is an authority on the subject, our legal system does this all the time. Indeed, the philosopher Charles Willard has argued that modern society could not function without most of us accepting arguments-from-authority most of the time, and it is usually rational to do so. Accordingly, philosophers of argumentation have investigated the conditions under which a rational person would accept or reject such arguments. Douglas Walton (1996, pp. 64-67) presents an argumentation scheme for such acceptance/rejection decisions, the Argument Scheme for Arguments from Expert Opinion, as follows:
- Assume E is an expert in domain D.
- E asserts that statement A is known to be true.
- A is within D.
Therefore, a decision-maker may plausibly take A to be true, unless one or more of the following Critical Questions (CQ) is answered in the negative:
- CQ1: Is E a genuine expert in D?
- CQ2: Did E really assert A?
- CQ3: Is A relevant to domain D?
- CQ4: Is A consistent with what other experts in D say?
- CQ5: Is A consistent with known evidence in D?
One could add further questions to this list, for example:
- CQ6: Is E’s opinion offered without regard to any reward or benefit upon statement A being taken to be true by the decision-maker?
Walton himself presents some further critical questions first proposed by Augustus De Morgan in 1847 to deal with cases under CQ2 where the expert’s opinion is presented second-hand, or in edited form, or along with the opinions of others.
Clearly, some of these questions are also pertinent to assessing forecasts and prophecies. But the special nature of forecasts and prophecies may enable us to make some of these questions more precise. Here is my Argument Scheme for Arguments from Prophecy:
- Assume E is a forecaster for domain D.
- E asserts that statement A will be true of domain D at time T in the future.
- A is within D.
Therefore, a decision-maker may plausibly take A to be true at time T, unless one or more of the following Critical Questions (CQ) is answered in the negative:
- CQ1: Is E a genuine expert in forecasting domain D?
- CQ2: Did E really assert that A will be true at T?
- CQ3: Is A relevant to, and within the scope of, domain D?
- CQ4: Is A consistent with what is said by other forecasters with expertise in D?
- CQ5: Is A consistent with known evidence of current conditions and trends in D?
- CQ6: Is E’s opinion offered without regard to any reward or benefit upon statement A being adopted by the decision-maker as a forecast?
- CQ7: Do the benefits of adopting A being true at time T in D outweigh the costs of doing so, to the decision-maker?
In attempting to answer these questions, we may explore more detailed questions:
- CQ1-1: What is E’s experience as forecaster in domain D?
- CQ1-2: What is E’s track record as a forecaster in domain D?
- CQ2-1: Did E articulate conditions or assumptions under which A will become true at T, or under which it will not become true? If so, what are these?
- CQ2-2: How sensitive is the forecast of A being true at T to the conditions and assumptions made by E?
- CQ2-3: When forecasting that A would become true at T, did E assert a more general statement than A?
- CQ2-4: When forecasting that A would become true at T, did E assert a more general time than T?
- CQ2-5: Is E able to provide a rational justification (for example, a computer simulation model) for the forecast that A would be true at T?
- CQ2-6: Did E present the forecast of A being true at time T qualified by modalities, such as possibly, probably, almost surely, certainly, etc.?
- CQ4-1: If this forecast is not consistent with those of other forecasters in domain D, to what extent are they inconsistent? Can these inconsistencies be rationally justified or explained?
- CQ5-1: What are the implications of A being true at time T in domain D? Are these plausible? Do they contradict any known facts or trends?
- CQ6-1: Will E benefit if the decision-maker adopts A being true at time T as his/her forecast for domain D?
- CQ6-2: Will E benefit if the decision-maker does not adopt A being true at time T as his/her forecast for domain D?
- CQ6-3: Will E benefit if many decision-makers adopt A being true at time T as their forecast for domain D?
- CQ6-4: Will E benefit if few decision-makers adopt A being true at time T as their forecast for domain D?
- CQ6-5: Has E acted in such a way as to indicate that E had adopted A being true at time T as their forecast for domain D (eg, by making an investment betting that A will be true at T)?
- CQ7-1: What are the costs and benefits to the decision-maker for adopting statement A being true at time T in domain D as his or her forecast of domain D?
- CQ7-2: How might these costs and benefits be compared? Can a net benefit/cost for the decision-maker be determined?
Automating these questions and the process of answering them is on my list of next steps, because automation is needed to design machines able to reason rationally about the future. And rational reasoning about the future is needed if we want machines to make decisions about actions.
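As a first step toward that automation, here is one minimal sketch, in Python, of how the scheme might be represented: a forecast record plus a set of critical questions, each of which may pass, fail, or remain unanswered. The names and the two toy questions are mine, invented for illustration; nothing here is a standard of the argumentation literature.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Forecast:
    forecaster: str   # E
    statement: str    # A
    domain: str       # D
    time: str         # T

# A critical question is a test returning True (passes), False (fails),
# or None (cannot yet be answered, so it does not defeat the argument).
CriticalQuestion = Callable[[Forecast], Optional[bool]]

def presumptively_accept(forecast: Forecast,
                         questions: dict[str, CriticalQuestion]) -> bool:
    """Take A to be true at T unless some critical question is
    answered in the negative."""
    for name, question in questions.items():
        if question(forecast) is False:
            print(f"Rejected: {name} answered in the negative.")
            return False
    return True

# Hypothetical instantiations of CQ1 and CQ3, for illustration only.
known_experts = {("Dr. Doom", "finance")}
questions = {
    "CQ1 (genuine expert in D?)":
        lambda f: (f.forecaster, f.domain) in known_experts,
    "CQ3 (A within scope of D?)":
        lambda f: None,   # left open: typically requires human judgement
}

f = Forecast("Dr. Doom", "House prices will fall", "finance", "2010")
print(presumptively_accept(f, questions))  # True: no CQ answered negatively
```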
References:
Augustus De Morgan [1847]: Formal Logic. London, UK: Taylor and Walton.
Douglas N. Walton [1996]: Argument Schemes for Presumptive Reasoning. Mahwah, NJ, USA: Lawrence Erlbaum.
Charles A. Willard [1990]: Authority. Informal Logic, 12: 11-22.
Retroflexive decision-making
How do companies make major decisions? The gurus of classical Decision Theory – people like the statisticians Jimmie Savage and Dennis Lindley – tell us that there is only one correct way to make decisions: list all the possible actions, list the potential consequences of each action, assign utilities and probabilities of occurrence to each consequence, multiply these numbers together for each consequence, add the resulting products for each action to get an expected utility for each action, and finally choose the action which maximizes expected utility.
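In symbols, this is the standard maximum-expected-utility rule: for each action $a$ with possible consequences $c$,

$$EU(a) = \sum_{c} P(c \mid a)\, U(c), \qquad a^{*} = \arg\max_{a} EU(a).$$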
There are many, many problems with this model, not least that it is not what companies – or intelligent, purposive individuals for that matter – actually do. Those who have worked in companies know that nothing so simplistic or static describes intelligent, rational decision making, nor should it. Moreover, that their model was flawed as a description of reality was known at the time to Savage, Lindley, et al., because it was pointed out to them six decades ago by people such as George Shackle, an economist who had actually worked in industry and who drew on his experience. The mute, autistic behemoth that is mathematical economics, however, does not stop or change direction merely because its utter disconnection from empirical reality is noticed by someone, and so – TO THIS VERY DAY – students in business schools still learn the classical theory. I guess for the students it’s a case of: Who are we going to believe – our textbooks, or our own eyes? From my first year as an undergraduate taking Economics 101, I had trouble believing my textbooks.
So what might be a better model of decision-making? First, we need to recognize that corporate decision-making is almost always something dynamic, not static – it takes place over time, not in a single stage of analysis, and we would do better to describe a process, rather than just giving a formula for calculating an outcome. Second, precisely because the process is dynamic, many of the inputs assumed by the classical model do not exist, or are not known to the participants, at the start, but emerge in the course of the decision-making process. Here, I mean things such as: possible actions, potential consequences, preferences (or utilities), and measures of uncertainty (which may or may not include probabilities). Third, in large organizations, decision-making is a group activity, with inputs and comments from many people. If you believe – as Savage and Lindley did – that there is only one correct way to make a decision, then your model would contain no scope for subjective inputs or stakeholder revisions, which is yet another of the many failings of the classical model. Fourth, in the real world, people need to consider – and do consider – the potential downsides as well as the upsides of an action, and they need to do this – and they do do this – separately, not merged into a summary statistic such as “utility”. So, if one possible consequence of an action-option is catastrophic loss, then no amount of maximum-expected-utility quantitative summary gibberish should permit a rational decision-maker to choose that option without great pause (or insurance). Shackle knew this, so his model considers downsides as well as upsides. That Savage and his pals ignored this one can only assume is the result of the impossibility of catastrophic loss ever occurring to a tenured academic.
So let us try to articulate a staged process for what companies actually do when they make major decisions, such as major investments or new business planning (a schematic sketch of the resulting loop in code follows the list):
- Describe the present situation and the way or ways it may evolve in the future. We call these different future paths scenarios. Making assumptions about the present and the future is also called taking a view.
- For each scenario, identify a list of possible actions, able to be executed under the scenario.
- For each scenario and action, identify the possible upsides and downsides.
- Some actions under some scenarios will have attractive upsides. What can be done to increase the likelihood of these upsides occurring? What can be done to make them even more attractive?
- Some actions under some scenarios will have unattractive downsides. What can be done to eliminate these downsides altogether or to decrease their likelihood of occurring? What can be done to ameliorate, to mitigate, to distribute to others, or to postpone the effects of these downsides?
- In the light of what was learned in doing steps 1-5, go back to step 1 and repeat it.
- In the light of what was learned in doing steps 1-6, go back to step 2 and repeat steps 2-5. For example, by modifying or combining actions, it may be possible to shift attractive upsides or unattractive downsides from one action to another.
- As new information comes to hand, occasionally repeat step 1. Repeat step 7 as often as time permits.
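Here is that loop as a toy Python sketch. It is purely schematic, under heavy assumptions: each function stands in for a step of human analysis, and the strings returned are placeholders, so the point is only the shape of the iteration, not any real computation.

```python
# Illustrative only: scenarios, actions, upsides and downsides are toy
# strings. In practice each step below is human analysis, not computation.

def take_a_view():
    """Step 1: describe the present and the ways it may evolve."""
    return ["demand grows", "demand stalls"]

def identify_actions(scenario):
    """Step 2: actions executable under this scenario."""
    return ["invest now", "wait a year"]

def assess(scenario, action):
    """Step 3: possible upsides and downsides (placeholders)."""
    return ([f"upside of '{action}' if {scenario}"],
            [f"downside of '{action}' if {scenario}"])

def retroflexive_decision(rounds=3):
    scenarios = take_a_view()
    analysis = {}
    for _ in range(rounds):          # steps 6-8: repeat as time permits
        for s in scenarios:
            for a in identify_actions(s):
                ups, downs = assess(s, a)
                # Steps 4-5: enhance upsides, mitigate or shift downsides.
                downs = [d + " (mitigated)" for d in downs]
                analysis[(s, a)] = (ups, downs)
        scenarios = take_a_view()    # step 1 revisited with new learning
    return analysis

for key, value in retroflexive_decision().items():
    print(key, value)
```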
This decision process will be familiar to anyone who has prepared a business plan for a new venture, either for personal investment, or for financial investors and bankers, or for business partners. Having access to spreadsheet software such as Lotus 1-2-3 or Microsoft Excel has certainly made this process easier to undertake. But, contrary to the beliefs of many, people made major decisions before the invention of spreadsheets, and they did so using processes similar to this, as Shackle’s work evidences.
Because this model involves revision of initial ideas in repeated stages, it bears some resemblance to the retroflexive argumentation theory of philosopher Harald Wohlrapp. Hence, I call it Retroflexive Decision Theory. I will explore this model in more detail in future posts.
References:
D. Lindley [1985]: Making Decisions. Second Edition. London, UK: John Wiley and Sons.
L. J. Savage [1954]: The Foundations of Statistics. New York, NY, USA: Wiley.
G. L. S. Shackle [1961]: Decision, Order and Time in Human Affairs. Cambridge, UK: Cambridge University Press.
H. Wohlrapp [1998]: A new light on non-deductive argumentation schemes. Argumentation, 12: 341-350.
Organizational Cognition
Over at Unrepentant Generalist, Eric Nehrlich is asking interesting questions about organizational cognition. His post brings to mind the studies of decision-making by traders in financial markets undertaken in recent years by Donald MacKenzie at Edinburgh University, who argues that the locus of decision-making is usually not an individual trader, nor a group of traders working for a single company, nor even a group of traders working for a single company together with their computer models, but a group of traders working for a single company with their computer models and with their competitors. Information about market events, trends and opportunities is passed from traders at one company to traders from another through informal contacts and personal networks, and this information then informs decision-making by all of them.
It is possible, of course, for traders to pass false or self-serving information to competitors, but in an environment of repeated interactions and of job movements, the negative consequences of such actions will eventually be felt by the perpetrators themselves. As evolutionary game theory would predict, everyone thus has long-term incentives to behave honourably in the short-term. Of course, different market participants may evaluate this long-term/short-term tradeoff differently, and so we may still see the creation and diffusion of false rumours, something which financial markets regulators have tried to prevent.
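That game-theoretic claim can be illustrated with a toy iterated exchange in Python. The payoff numbers are invented for illustration and simply give the classic prisoner’s-dilemma structure; the point is that against a retaliating partner, the one-shot gain from passing false information is swamped by the long-run loss.

```python
# Toy iterated exchange of market information between two traders.
# Payoffs are invented for illustration (prisoner's-dilemma structure).

def payoff(my_move, their_move):
    # (honest, honest) -> 3 each; lying to an honest partner -> 5 vs 0;
    # mutual lying -> 1 each.
    table = {("honest", "honest"): 3, ("lie", "honest"): 5,
             ("honest", "lie"): 0, ("lie", "lie"): 1}
    return table[(my_move, their_move)]

def play(strategy_a, strategy_b, rounds=50):
    total_a = total_b = 0
    last_a = last_b = "honest"
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        total_a += payoff(move_a, move_b)
        total_b += payoff(move_b, move_a)
        last_a, last_b = move_a, move_b
    return total_a, total_b

def tit_for_tat(their_last):
    return their_last          # mirror the partner's previous move

def always_lie(their_last):
    return "lie"               # pass false information every time

print(play(tit_for_tat, tit_for_tat))  # (150, 150): honesty sustained
print(play(tit_for_tat, always_lie))   # (49, 54): one-shot gain, long-run loss
```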
Reference:
Donald MacKenzie [2009]: Material Markets: How Economic Agents are Constructed. Oxford, UK: Oxford University Press.
Presidential planning
Gordon Goldstein has some advice for President-elect Obama in managing his advisors. Goldstein prefaces his remarks by a potted history of John F. Kennedy’s experience with the CIA-planned Bay of Pigs action, an attempted covert invasion of Cuba. Although Goldstein’s general advice to Obama may be wise, he profoundly mis-characterizes the Bay of Pigs episode, and thus the management lessons it provides. As we have remarked before, one aspect of that episode was that although the action was planned and managed by CIA, staff in the White House – including JFK himself! – unilaterally revised the plans right up until the moment of the invasion. Indeed, the specific site in Cuba of the invasion was changed – at JFK’s order, and despite CIA’s reluctance – just 4 days before the scheduled date. This left insufficient time to revise the plans adequately, and all but guaranteed failure. The CIA man in charge, Dick Bissell, in his memoirs, regretted that he had not opposed the White House revisions more forcefully.
Anyone who has worked for a US multi-national will be familiar with this problem – bosses flying in, making profound, last-minute changes to detailed plans without proper analysis and apparently on whim, and then leaving middle management to fix everything. Middle management are also assigned the role of taking the blame. This has happened so often in my experience, I have come to see it as a specific trope of contemporary American culture — the supermanager, able to change detailed plans at a moment’s notice! Even Scott Adams has recorded the phenomenon. It is to JFK’s credit that he took the public blame for the Bay of Pigs fiasco (although he also ensured that senior CIA people were made to resign for his error). But so indeed he should have, since so much of the real blame rests squarely with the President himself and his White House national security staff.
The Bay of Pigs action had another, more existential, problem. CIA wished to scare the junta running Cuba into resigning from office, by making them think the island was being invaded by a vastly superior force. It was essential to the success of the venture that the Cuban government therefore think that the force was backed by the USA, the only regional power with such a capability and intent. It was also essential to the USA’s international reputation that the USA could plausibly deny that they were in any way involved in the action, in order for the venture not to escalate (via the Cold War with the USSR) into a larger military conflict. Thus, Kennedy ruled out the use of USAF planes to provide cover to the invading troops, and he continually insisted that the plans minimize “the noise level” of the invasion. These two objectives were essentially contradictory, since reducing the noise level decreased the likelihood of the invasion scaring Castro from office.
The Bay of Pigs fiasco provides many lessons for management, both to US Presidents and to corporate executives. One of these, seemingly forgotten in Vietnam and again in Iraq, is that plans do matter. Success is rarely something reached by accident, or by a series of on-the-fly, ad hoc, decisions, each undertaken without careful analysis, reflection and independent assessment.
Epideictic arguments
Suppose you are diagnosed with a serious medical condition, and you seek advice from two doctors. The first doctor, let’s call him Dr Barack, says that there are three possible courses of treatment. He labels these courses, A, B and C, and then proceeds to walk you methodically through each course – what separate basic procedures are involved, in what order, with what likely side effects, and with what costs and durations, what chances of success or failure, and what likely survival rates. He finishes this methodical exposition by summing up each treatment, with pithy statements such as, “Course A is the cheapest and most proven. Course B is an experimental treatment, which makes it higher risk, but it may be the most effective. Course C . . .” etc.
The other doctor, let’s call him Dr John, in contrast talks in a manner which is apparently lacking all structure. He begins a long, discursive narrative about the many different basic procedures possible, not in any particular order, jumping back and forth between these as he focuses first on the costs of procedures, then switching to their durations, then back again to costs, then onto their expected side effects, with tangential discussions in the middle about the history of the experimental tests undertaken of one of the procedures and about his having suffered torture while a POW in Vietnam, etc, etc. And he does all this without any indication that some procedures are part of larger courses of treatment, or are even linked in any way, and speaking without using any patient-friendly labelling or summarizing of the decision-options.
Which doctor would you choose to treat you? If this description were all you knew, then Doctor Barack would appear to be the much better organized of the two. Most of us would have more confidence being treated by a doctor who sounds better-organized, who appears to know what he is doing, than by a doctor who sounds disorganized. More importantly, it is also evident that Doctor Barack knows how to structure what he knows into a coherent whole, into a form which makes his knowledge easier to transmit to others, easier for a patient to understand, and which also facilitates the subsequent decision-making by the patient. We generally have more confidence in the underlying knowledge and expertise of people able to explain their knowledge and expertise well than in those who cannot.
If we reasoned this way, we would be choosing between the two doctors on the basis of their different rhetorical styles: we would be judging the contents of their arguments (in this case, the content is their ability to provide us with effective treatment) on the basis of the styles of their arguments. Such reasoning processes, which use form to assess content, are called epideictic, as are arguments which draw attention to their own style.
Advertising provides many examples of epideictic arguments, particularly in cultures where the intended target audience is savvy regarding the form of advertisements. In Britain, for instance, the film director Michael Winner starred in a series of TV advertisements for an insurance company in which the actors pretending to be real people giving endorsements revealed that they were just actors, pretending to be real people giving endorsements. This was a glimpse behind the curtain of theatrical artifice, with the actors themselves pulling back the curtain. Why do this? Well, self-reference only works with a knowledgeable audience, perhaps so knowledgeable that they have even grown cynical with the claims of advertisers. By winking at the audience, the advertisers are colluding with this cynicism, saying to the audience, “we know you think this and we agree, so our advert is pitched to you, you cynical sophisticates, not to those others who don’t get it.”
The world-weary tone of the narration of Apple’s “Future” series of adverts here is another example of advertisements which knowingly direct our attention to their own style.
[Video: Apple “Future” advertisement – Robots]
And Dr Barack and Dr John? One argument against electing Senator Obama to the US Presidency was that he lacked executive experience. A counter-argument, made even by the good Senator Obama himself, was that he demonstrated his executive capabilities through the competence, professionalism and effectiveness of his management of his own campaign. This is an epideictic argument.
There is nothing necessarily irrational or fallacious about such arguments or such modes of reasoning; indeed, it is often the case that the only relevant information available for a decision on a claim of substantive content is the form of the claim. Experienced investors in hi-tech start-ups, for example, know that the business plan they are presented with is most unlikely to be implemented, because the world changes too fast and too radically for any plan to endure. A key factor in the decision to invest must therefore be an assessment of the capability of the management team to adjust the business plan to changing circumstances, from recognizing that circumstances have in fact changed, to acting quickly and effectively in response, through to evaluating the outcomes. How to assess this capability for decision-making robustness? Well, one basis is the past experience of the team. But experience may well hinder managerial flexibility rather than facilitate it, especially in a turbulent environment. Another way to assess this capability is to subject the team to a stress test – contesting the assumptions and reasoning of the business plan, being unreasonable in questions and challenges, prodding and poking and provoking the team to see how well and how quickly they can respond, in real time, without preparation. In all of this, a decision on the substance of the investment is being made from evidence about the form – about how well the management team responds to such stress testing. This is perfectly rational, given the absence of any other basis on which to make a decision and given our imperfect knowledge of the future.
Likewise, an assessment of Senator Obama’s capabilities for high managerial office on the basis of his competence at managing his campaign was also eminently rational and perfectly justifiable. The incoherent nature of Senator McCain’s campaign and the panic-struck and erratic manner in which he responded to surprising events (such as the financial crisis of September 2008) was similarly an indication of his likely style of government; the style here did not produce confidence in the content. For many people, the choice between candidates in the US Presidential campaign was an epideictic one.
POSTSCRIPT (2011-12-14):
Over at Normblog, Norm has a nice example of epideictic reasoning: deciding between two arguments on the basis of how the arguments were made (presented), rather than by their content. As always with such reasoning – and contrary to much educated opinion – such reasoning can be perfectly rational, as is the case here.
PS2 (2016-09-05):
John Lanchester, reviewing a book about investor activism, gives a nice example of attempting to influence people’s opinions using epideictic means: Warren Buffett’s annual letters to investors in Berkshire Hathaway:
“Even the look of the letters – deliberately plain to the point of hokiness, with old-school fonts and layout hardly changed in fifty years – is didactic. The message is: no flash here, only substance. Go to the company’s Web site, arguably the ugliest in the world, and you are greeted by ‘A Message from Warren E. Buffett’ telling you that he doesn’t make stock recommendations but that you will save money by insuring your car with GEICO and buying your jewelry from Borsheims.” (page 78)
PS3 (2017-04-02):
Dale Russakoff, in a New Yorker profile of now-Senator Cory Booker, says:
Over lunch at Andros Diner, Booker told me that [fellow Yale Law School student Ed] Nicoll taught him an invaluable lesson: “Investors bet on people, not on business plans, because they know successful people will find a way to be successful.” (page 60)
Refs and Acks
The medical example is due to William Rehg.
John Lanchester [2016]: Cover letter. The New Yorker, 5 September 2016, pp. 76-79.
William Rehg [1997]: Reason and rhetoric in Habermas’s theory of argumentation, pp. 358-377 in: W. Jost and M. J. Hyde (Editors): Rhetoric and Hermeneutics in Our Time: A Reader. New Haven, CT, USA: Yale University Press.
Dale Russakoff [2014]: Schooled. The New Yorker, 19 May 2014, pp. 58-73.
Why vote?
Someone once joked that economists are people who see something working in practice, and then wonder if it will also work in theory. One practice that mainstream economists have long failed to explain theoretically is voting. Following the (so-called) rational choice models of Arrow and Downs, they calculate the likely net monetary benefit of voting to an individual voter, and compare that to the likely net costs to the voter. With long queues due to inadequately-resourced or incompetently-managed voting administrations (such as those in many US states), these costs can be considerable. Since one vote is very unlikely to have any marginal consequences, economists are stumped as to why any person votes.
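The calculation the rational-choice tradition has in mind is usually summarized in a simple inequality (the standard “calculus of voting” formulation): a self-interested citizen should vote only if

$$p\,B - C > 0,$$

where $p$ is the probability that one vote is decisive, $B$ the benefit to the voter if the preferred candidate wins, and $C$ the cost of voting. With an electorate of millions, $p$ is minuscule, so the inequality fails for almost everyone – whence the economists’ puzzle.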
One explanation for voting, of course, is that voters are indeed feeble-minded or irrational, unable to calculate the costs and benefits themselves, or, if they can, unable to act in their own self-interest. This is the standard explanation, and it strikes me as morally reprehensible: a failure to explain or model some phenomenon theoretically is justified on the grounds that the phenomenon should not exist.
Another explanation for voting may be that the rational-choice models understate the benefits or overstate the costs to individuals of voting. Some economists, as if in a parody of themselves, have now – in 2008! – discovered altruism. Factor in the benefits to others, this study claims, and the balance of costs and benefits may tip in favour of voting.
A third explanation for voting may be that rational-choice models are simply inappropriate to the phenomena under study. The rational choice model assumes that citizens in a democracy are passive consumers of political ideas and proposals, with their only action being the selection of representatives at election times. Since at least the English Peasants’ Revolt of 1381, this quaint notion of a passive citizenry has been rebutted repeatedly by direct political action by citizens. The most famous example, of course, was the uprising against colonial taxation known as the American War of Independence, which, one imagines, some economist or two may have heard speak of. There’s also the various revolutions and uprisings of 1789, 1791, 1848, 1854, 1871, 1905, 1910, 1917, 1926, 1949, 1953, 1956, 1968 and 1989, just to list the most important since economics began to be studied systematically.
An historically-informed observer would surely conclude that a model of voting in which citizens produce as well as consume political ideas is likely to have more calibrative traction than one in which citizens do nothing except (if they so choose) vote. Such a theory already exists in political science, where it goes under the name of deliberative democracy. One wonders what terrors would strike the earth were an economist to read the relevant literature before modeling some domain.
People vote not only out of their own self-interest (if they ever do that), but also to influence the direction of their country, to act in solidarity with others, to elect to join a group, to demonstrate membership of a group, to respond to peer pressure, because the law requires they do, or to exercise a hard-won civil right. Only a person with no sense of history – an economist, say – would fail to understand the importance – indeed, the extreme rationality – of this last factor, especially during a year when a major political party has nominated a black candidate for President of the USA, and the other party a woman for Veep. At the founding of the USA, neither candidate would have been allowed to vote.
Not for the first time, mainstream economics has ignored social structures and processes when studying social phenomena, focusing only on those factors which can be assigned to an individual (indeed, some idealized, self-interested, desiccated calculating machine) and, within these, only on factors able to be quantified. The big question here is not why people vote, which is obvious, but why economists seem unable to recognize social structures and processes which can be clearly seen by most everyone else. What is it about mainstream economists that makes them autistic in this regard? Do they simply have an under-supply of inter-personal intelligence, unable to empathize with or reason about others?
References and Acknowledgments:
Hat-tip to Normblog.
Kenneth J. Arrow [1951]: Social Choice and Individual Values. New York City, NY, USA: Wiley.
J. Bessette [1980]: “Deliberative Democracy: The majority principle in republican government”, pp. 102-116, in: R. A. Goldwin and W. A. Schambra (Editors): How Democratic is the Constitution? Washington, DC, USA: American Enterprise Institute.
James Bohman and William Rehg (Editors) [1997]: Deliberative Democracy: Essays on Reason and Politics. Cambridge, MA, USA: MIT Press.
Anthony Downs [1957]: An Economic Theory of Democracy. New York City, NY, USA: Harper and Row.