Economic historians Philip Mirowski and Edward Nik-Khah have published a new book on the role of information in post-war economics. The introductory chapter contains a nice, high-level summary of the failures of the standard model of decision-making in mainstream microeconomics: Maximum Expected Utility (MEU) theory, or so-called rational choice theory. Because the MEU model continues to dominate academic economics despite working neither in practice nor in theory, I have written about it often before, for example here. Listen to Mirowski and Nik-Khah:
Given the massive literature on so-called rationality in the social sciences, it gives one pause to observe what a dark palimpsest the annals of rational choice has become. The modern economist, who avoids philosophy and psychology as the couch potato avoids the gym, has almost no appreciation for the rich archive of paradoxes of rationality. This has come to pass primarily by insisting upon a distinctly peculiar template as the necessary starting point of all discussion, at least from the 1950s onwards. Neoclassical economists frequently characterize their schema as comprising three components: (a) a consistent well-behaved preference ordering reflecting the mindset of some individual; (b) the axiomatic method employed to describe mental manipulations of (a) as comprising the definition of “rational choice”; and (c) reduction of all social phenomena to be attributed to the activities of individual agents applying (b) to (a). These three components may be referred to in shorthand as: “utility” functions, formal axiomatic definitions (including maximization provisions and consistency restrictions), and some species of methodological individualism.
The immediate response is to marvel at how anyone could have confused this extraordinary contraption with the lush forest of human rationality, however loosely defined. Start with component (a). The preexistence of an inviolate preference order rules out of bounds most phenomena of learning, as well as the simplest and most commonplace of human experiences—that feeling of changing one’s mind. The obstacles that this doctrine pose for problems of the treatment of information turns out to be central to our historical account. People have been frequently known to make personally “inconsistent” evaluations of events both observed and unobserved; yet in rational choice theory, committing such a solecism is the only real mortal sin—one that gets you harshly punished at minimum and summarily drummed out of the realm of the rational in the final analysis. Now, let’s contemplate component (b). That dogma insists the best way to enshrine rationality is by mimicking a formal axiomatic system—as if that were some sterling bulwark against human frailty and oblique hidden flaws of hubris. One would have thought Gödel’s Theorem might have chilled the enthusiasm for this format, but curiously, the opposite happened instead. Every rational man within this tradition is therefore presupposed to conform to his own impregnable axiom system—something that comes pre-loaded, like Microsoft on a laptop. This cod-Bourbakism ruled out many further phenomena that one might otherwise innocently call “rational”: an experimental or pragmatic stance toward the world; a life where one understands prudence as behaving different ways (meaning different “rationalities”) in different contexts; a self-conception predicated on the possibility that much personal knowledge is embodied, tacit, inarticulate, and heavily emotion driven. 
Furthermore, it strangely banishes many computational approaches to cognition: for instance, it simply elides the fact that much algorithmic inference can be shown to be noncomputable in practice; or a somewhat less daunting proposition, that it is intractable in terms of the time and resources required to carry it out. The “information revolution” in economics primarily consisted of the development of Rube Goldberg–type contraptions to nominally get around these implications. Finally, contemplate component (c): complaints about methodological individualism are so drearily commonplace in history that it would be tedious to reproduce them here. Suffice it to say that (c) simply denies the very existence of social cognition in its many manifestations as deserving of the honorific “rational.”
There is nothing new about any of these observations. Veblen’s famous quote summed them up more than a century ago: “The hedonistic conception of man is that of a lightning calculator of pleasures and pains, who oscillates like a homogeneous globule of desire of happiness under the impulse of stimuli that shift him about the area, but leave him intact.” The roster of latter-day dissenters is equally illustrious, from Herbert Simon to Amartya Sen to Gerd Gigerenzer, if none perhaps is quite up to his snuff in stylish prose or withering skepticism. It is commonplace to note just how ineffectual their dissent has been in changing modern economic practice.
Why anyone would come to mistake this virtual system of billiard balls careening across the baize as capturing the white-hot conviction of rationality in human life is a question worthy of a few years of hard work by competent intellectual historians; but that does not seem to be what we have been bequeathed. In its place sits the work of (mostly) historians of economics and a few historians of science treating these three components of rationality as if they were more or less patently obvious, while scouring over fine points of dispute concerning the formalisms involved, and in particular, an inordinate fascination for rival treatments of probability theory within that framework. We get histories of ordinal versus cardinal utility, game theory, “behavioral” peccadillos, preferences versus “capacities,” social choice theory, experimental interventions, causal versus evidential decision theory, formalized management theory, and so forth, all situated within a larger framework of the inexorable rise of neoclassical economics. Historians treat components (a–c) as if they were the obvious touchstone of any further research, the alpha and omega of what it means to be “rational.” Everything that comes after this is just a working out of details or a cleaning up of minor glitches. If and when this “rational choice” complex is observed taking root within political science, sociology, biology, or some precincts of psychology, it is often treated as though it had “migrated” intact from the economists’ citadel. If that option is declined, then instead it is intimated that “science” and the “mathematical tools” made the figures in question revert to certain stereotypic caricatures of rationality.” [Mirowski and Nik-Khah 2017, locations 318-379 of the Kindle edition].
Philip Mirowski and Edward Nik-Khah (2017): The Knowledge We Have Lost in Information: The History of Information in Modern Economics. Oxford, UK: Oxford University Press.
This is a list of movies which play with alternative possible realities, in various ways:
- It’s a Wonderful Life [Frank Capra, USA 1946]
- Przypadek (Blind Chance) [Krzysztof Kieslowski, Poland 1987]
- Lola Rennt (Run Lola Run) [Tom Tykwer, Germany 1998]
- Sliding Doors [Peter Howitt, UK 1998]
- The Family Man [Brett Ratner, USA 2000]
- Me Myself I [Pip Karmel, Australia 2000]
On the topic of possible worlds, this post may be of interest.
Kriegsspiel 1914: A war game re-enactment of the battles between German and Allied (French, Belgian, BEF) forces on the Western Front between late August and late September 1914, organized by Philip Sabin of the War Studies Department at King’s College London. Our team comprised Evan Sterling, Nicholas Reynolds and myself, and played as Germany. We beat the Allies, capturing more territory than Germany had captured in actuality in 1914. In other words, we not only beat the Allies, we beat History.
The photo shows the final placement of German forces (black boxes) after 6 rounds of fighting, with the yellow boxes showing territory held by Germany. Cells without yellow boxes are held by the Allies. This was immense fun. (Photo credit: Nicholas Reynolds.)
The standard or classical model in decision theory is called Maximum Expected Utility (MEU) theory, which I have excoriated here and here (and which Cosma Shalizi satirized here). Its flaws and weaknesses for real decision-making have been pointed out by critics since its inception, six decades ago. Despite this, the theory is still taught in economics classes and MBA programs as a normative model of decision-making.
A key feature of MEU is that the decision-maker is required to identify ALL possible action options, and ALL consequential states of these options. He or she then reasons ACROSS these consequences by adding together the utilities of the consequential states, weighted by the likelihood that each state will occur.
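As a minimal sketch of what this procedure formally demands (the action options, consequential states, utilities and probabilities below are all invented for illustration):

```python
# Toy illustration of Maximum Expected Utility (MEU) decision-making.
# All actions, states, utilities and probabilities are invented.

# For each action: a list of (probability, utility) pairs over its
# consequential states. MEU demands this table be complete in advance.
actions = {
    "launch_product": [(0.3, 100), (0.7, -20)],
    "delay_launch":   [(0.6, 40),  (0.4, 10)],
    "cancel_project": [(1.0, 0)],
}

def expected_utility(outcomes):
    # Weighted sum of utilities over all consequential states.
    return sum(p * u for p, u in outcomes)

# The MEU-rational agent reasons across ALL options and picks the maximum.
best = max(actions, key=lambda a: expected_utility(actions[a]))
for a, outcomes in actions.items():
    print(a, expected_utility(outcomes))
print("MEU choice:", best)
```

Note that even this toy version presumes the table of options, states, probabilities and utilities is complete and fixed in advance, which is exactly the assumption the rest of this post challenges.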
However, financial and business planners do something completely contrary to this in everyday financial and business modeling. In developing a financial model for a major business decision or for a new venture, the collection of possible actions is usually infinite and the space of possible consequential states even more so. Making human sense of the possible actions and the resulting consequential states is usually a key reason for undertaking the financial modeling activity, and so cannot be an input to the modeling. Because of the explosion in the number of states and in their internal complexity, business planners cannot articulate all the actions and all the states, nor usually even a subset of these beyond a mere handful.
Therefore, planners typically choose to model just 3 or 4 states – usually called cases or scenarios – with each of these combining a complex mix of (a) assumed actions, (b) assumed stakeholder responses and (c) environmental events and parameters. The assumptions and parameter values are instantiated for each case, the model run, and the outputs of the 3 or 4 cases compared with one another. The process is usually repeated with different (but close) assumptions and parameter values, to gain a sense of the sensitivity of the model outputs to those assumptions.
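As a minimal sketch of this style of modelling (the toy NPV model, the three cases and all parameter values are invented for illustration):

```python
# Toy scenario model: three named cases, each bundling assumptions,
# run through the same simple model and compared. All numbers invented.

def npv(growth, margin, discount, base_revenue=100.0, years=5):
    # Net present value of a toy venture under the given assumptions.
    total = 0.0
    revenue = base_revenue
    for t in range(1, years + 1):
        revenue *= (1 + growth)
        total += (revenue * margin) / (1 + discount) ** t
    return total

cases = {
    "Best Case":  {"growth": 0.20,  "margin": 0.30, "discount": 0.08},
    "Base Case":  {"growth": 0.10,  "margin": 0.20, "discount": 0.10},
    "Worst Case": {"growth": -0.05, "margin": 0.10, "discount": 0.12},
}

for name, assumptions in cases.items():
    print(f"{name}: NPV = {npv(**assumptions):.1f}")

# Sensitivity: perturb one assumption slightly and re-run the Base Case.
base = cases["Base Case"]
for delta in (-0.02, 0.0, 0.02):
    v = npv(base["growth"] + delta, base["margin"], base["discount"])
    print(f"Base Case growth {base['growth'] + delta:+.2f}: NPV = {v:.1f}")
```

The point to notice is that only three bundles of assumptions are ever evaluated, plus a handful of nearby perturbations: nothing remotely like the exhaustive enumeration that MEU requires.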
Often the scenarios will be labeled “Best Case”, “Worst Case”, “Base Case”, etc, to identify the broad underlying principles used to make the relevant assumptions in each case. Actually adopting a financial model for (say) a new venture means assuming that one of these cases is close enough to current reality and its likely future development in the domain under study – ie, that one case is realistic. People in the finance world call this adoption of one case “taking a view” on the future.
Taking a view involves assuming (at least pro tem) that one trajectory (or one class of trajectories) describes the evolution of the states of some system. Such betting on the future is the complete opposite cognitive behaviour to reasoning over all the possible states before choosing an action, which the proponents of the MEU model insist we all do. Yet the MEU model continues to be taught as a normative model of decision-making to MBA students who will spend their post-graduation lives doing business planning by taking a view.
Bayesians are so prevalent in Artificial Intelligence (and, to be honest, so strident) that it can sometimes be lonely being a Frequentist. So it is nice to see a critical review of Nate Silver’s new book on prediction from a frequentist perspective. The reviewers are Gary Marcus and Ernest Davis from New York University, and here are some paras from their review in The New Yorker:
Silver’s one misstep comes in his advocacy of an approach known as Bayesian inference. According to Silver’s excited introduction,
Bayes’ theorem is nominally a mathematical formula. But it is really much more than that. It implies that we must think differently about our ideas.
Lost until Chapter 8 is the fact that the approach Silver lobbies for is hardly an innovation; instead (as he ultimately acknowledges), it is built around a two-hundred-fifty-year-old theorem that is usually taught in the first weeks of college probability courses. More than that, as valuable as the approach is, most statisticians see it as only a partial solution to a very large problem.
A Bayesian approach is particularly useful when predicting outcome probabilities in cases where one has strong prior knowledge of a situation. Suppose, for instance (borrowing an old example that Silver revives), that a woman in her forties goes for a mammogram and receives bad news: a “positive” mammogram. However, since not every positive result is real, what is the probability that she actually has breast cancer? To calculate this, we need to know four numbers. The fraction of women in their forties who have breast cancer is 0.014, which is about one in seventy. The fraction who do not have breast cancer is therefore 1 – 0.014 = 0.986. These fractions are known as the prior probabilities. The probability that a woman who has breast cancer will get a positive result on a mammogram is 0.75. The probability that a woman who does not have breast cancer will get a false positive on a mammogram is 0.1. These are known as the conditional probabilities. Applying Bayes’s theorem, we can conclude that, among women who get a positive result, the fraction who actually have breast cancer is (0.014 x 0.75) / ((0.014 x 0.75) + (0.986 x 0.1)) = 0.1, approximately. That is, once we have seen the test result, the chance is about ninety per cent that it is a false positive. In this instance, Bayes’s theorem is the perfect tool for the job.
This technique can be extended to all kinds of other applications. In one of the best chapters in the book, Silver gives a step-by-step description of the use of probabilistic reasoning in placing bets while playing a hand of Texas Hold ’em, taking into account the probabilities on the cards that have been dealt and that will be dealt; the information about opponents’ hands that you can glean from the bets they have placed; and your general judgment of what kind of players they are (aggressive, cautious, stupid, etc.).
But the Bayesian approach is much less helpful when there is no consensus about what the prior probabilities should be. For example, in a notorious series of experiments, Stanley Milgram showed that many people would torture a victim if they were told that it was for the good of science. Before these experiments were carried out, should these results have been assigned a low prior (because no one would suppose that they themselves would do this) or a high prior (because we know that people accept authority)? In actual practice, the method of evaluation most scientists use most of the time is a variant of a technique proposed by the statistician Ronald Fisher in the early 1900s. Roughly speaking, in this approach, a hypothesis is considered validated by data only if the data pass a test that would be failed ninety-five or ninety-nine per cent of the time if the data were generated randomly. The advantage of Fisher’s approach (which is by no means perfect) is that to some degree it sidesteps the problem of estimating priors where no sufficient advance information exists. In the vast majority of scientific papers, Fisher’s statistics (and more sophisticated statistics in that tradition) are used.
Unfortunately, Silver’s discussion of alternatives to the Bayesian approach is dismissive, incomplete, and misleading. In some cases, Silver tends to attribute successful reasoning to the use of Bayesian methods without any evidence that those particular analyses were actually performed in Bayesian fashion. For instance, he writes about Bob Voulgaris, a basketball gambler,
Bob’s money is on Bayes too. He does not literally apply Bayes’ theorem every time he makes a prediction. But his practice of testing statistical data in the context of hypotheses and beliefs derived from his basketball knowledge is very Bayesian, as is his comfort with accepting probabilistic answers to his questions.
But, judging from the description in the previous thirty pages, Voulgaris follows instinct, not fancy Bayesian math. Here, Silver seems to be using “Bayesian” not to mean the use of Bayes’s theorem but, rather, the general strategy of combining many different kinds of information.
To take another example, Silver discusses at length an important and troubling paper by John Ioannidis, “Why Most Published Research Findings Are False,” and leaves the reader with the impression that the problems that Ioannidis raises can be solved if statisticians use Bayesian approach rather than following Fisher. Silver writes:
[Fisher’s classical] methods discourage the researcher from considering the underlying context or plausibility of his hypothesis, something that the Bayesian method demands in the form of a prior probability. Thus, you will see apparently serious papers published on how toads can predict earthquakes… which apply frequentist tests to produce “statistically significant” but manifestly ridiculous findings.
But NASA’s 2011 study of toads was actually important and useful, not some “manifestly ridiculous” finding plucked from thin air. It was a thoughtful analysis of groundwater chemistry that began with a combination of naturalistic observation (a group of toads had abandoned a lake in Italy near the epicenter of an earthquake that happened a few days later) and theory (about ionospheric disturbance and water composition).
The real reason that too many published studies are false is not because lots of people are testing ridiculous things, which rarely happens in the top scientific journals; it’s because in any given year, drug companies and medical schools perform thousands of experiments. In any study, there is some small chance of a false positive; if you do a lot of experiments, you will eventually get a lot of false positive results (even putting aside self-deception, biases toward reporting positive results, and outright fraud)—as Silver himself actually explains two pages earlier. Switching to a Bayesian method of evaluating statistics will not fix the underlying problems; cleaning up science requires changes to the way in which scientific research is done and evaluated, not just a new formula.
It is perfectly reasonable for Silver to prefer the Bayesian approach—the field has remained split for nearly a century, with each side having its own arguments, innovations, and work-arounds—but the case for preferring Bayes to Fisher is far weaker than Silver lets on, and there is no reason whatsoever to think that a Bayesian approach is a “think differently” revolution. “The Signal and the Noise” is a terrific book, with much to admire. But it will take a lot more than Bayes’s very useful theorem to solve the many challenges in the world of applied statistics.” [Links in original]
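As an aside, the mammogram arithmetic in the passage quoted above is easy to check directly; the four input numbers are those given in the review, and the code is nothing more than Bayes’ theorem:

```python
# Check of the reviewers' mammogram example using Bayes' theorem.
# The four input numbers are as given in the quoted review.
p_cancer = 0.014              # prior: fraction of women in their forties with breast cancer
p_no_cancer = 1 - p_cancer
p_pos_given_cancer = 0.75     # probability of a positive result given cancer
p_pos_given_no_cancer = 0.1   # false-positive rate given no cancer

# P(cancer | positive) = P(pos | cancer) P(cancer) / P(pos)
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_no_cancer * p_no_cancer
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"P(cancer | positive) = {p_cancer_given_pos:.3f}")      # about 0.1
print(f"P(false positive)    = {1 - p_cancer_given_pos:.3f}")  # about 0.9
```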
Also worth adding here that there is a very good reason experimental sciences adopted Frequentist approaches (what the reviewers call Fisher’s methods) in journal publications. That reason is that science is intended to be a search for objective truth using objective methods. Experiments are – or should be – replicable by anyone. How can subjective methods play any role in such an enterprise? Why should the journal Nature or any of its readers care what the prior probabilities of the experimenters were before an experiment? If these prior probabilities make a difference to the posterior (post-experiment) probabilities, then this is the insertion of a purely subjective element into something that should be objective and replicable. And if the actual numeric values of the prior probabilities don’t matter to the posterior probabilities (as some Bayesian theorems would suggest), then why does the methodology include them?
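To make this worry concrete, here is a toy (and entirely hypothetical) coin-bias example using Beta priors, which are conjugate to binomial data: two analysts see the same ten tosses, but because they start from different priors they report different posteriors for the same experiment.

```python
# Two analysts, same data, different Beta priors over a coin's bias.
# For a Beta(a, b) prior, the posterior after h heads and t tails is
# Beta(a + h, b + t), whose mean is (a + h) / (a + b + h + t).

def posterior_mean(a, b, heads, tails):
    return (a + heads) / (a + b + heads + tails)

heads, tails = 7, 3  # the shared, objective data

skeptic = posterior_mean(50, 50, heads, tails)  # strong prior belief the coin is fair
agnostic = posterior_mean(1, 1, heads, tails)   # uniform ("know nothing") prior

print(f"skeptic's posterior mean:  {skeptic:.3f}")
print(f"agnostic's posterior mean: {agnostic:.3f}")
```

With hundreds of tosses the two posterior means would converge, but real experiments rarely have the luxury of data that swamp every reasonable prior, so the subjective choice of prior survives into the reported result.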
When the wild bird cries its melodies from the treetops,
Its voice carries the message of the patriarch.
When the mountain flowers are in bloom,
Their full meaning comes along with their scent.
I have remarked twice before that modern westerners, even very clever ones, fail to understand the nature of synchronicity in Taoist and Zen philosophy when discussing the art of John Cage. If you believe the universe is subject to invisible underlying forces, as Taoist and Zen adherents may do (and as Cage did), then there is no chance, no randomness, no lack of relationships between events, only a personal inability to perceive such relationships. The I Ching is intended as a means to reveal some of these hidden connections.
In a recent essay on Silence in the TLS, Paul Griffiths ends with:
Another of Cage’s favourite maxims, this one taken from Ananda Coomaraswamy and delivered five times in Silence, was that the purpose of art is to “imitate nature in her manner of operation”, which is almost another way of stating his first catchphrase, since natural objects and phenomena have nothing to say. They are not, of course, saying it. We say it for them. And in our doing so, experiencing their voicelessness and taking it into ourselves, a great deal comes to be said. There is no message in the changing pattern of cloud shadow and reflected sunlight on the sea. It may, nevertheless, thrill us, calm us, and fix our sustained attention.”
But, of course, for a Zen adherent there are indeed messages in the changing patterns of clouds and in sunlight reflected on the sea. Even more so are there messages in human artefacts such as musical compositions, even those (perhaps especially those!) using so-called random methods for creation. For Cage, the particular gamuts (clusters of sounds) that he selected for any particular one of his random compositions were selected as the direct result of the spiritual forces acting on him at that particular moment of selection, through his use of the I Ching, for instance. Similarly, under this world-view, the same forces are active in those compositions allowing apparently-random leeway to the performers or listeners.
One can criticize or reject this spiritual world-view, but first one has to understand it. Griffiths, like so many others, has failed to understand it.
In a discussion of the utility of religious beliefs, Norm makes this claim:
A person can’t intelligibly say, ‘I know that p is false, but it’s useful for me to think it’s true, so I will.’
(Here, p is some proposition – that is, some statement about the world which may be either true or false, but not both and not neither.)
In fact, a person can indeed intelligibly say this, and pure mathematicians do it all the time. Perhaps the example in mathematics which is easiest to grasp is the use of the square root of minus one, the number usually denoted by the symbol i. Negative numbers have no real square roots, since no real number when squared (multiplied by itself) yields a negative number. However, it turns out that believing that these imaginary numbers do exist leads to a beautiful and subtle mathematical theory, called the theory of complex numbers. This theory has many practical applications, from mathematics to physics to engineering. One area of application we have known for about a century is the theory of alternating current in electricity; blogging – among much else of modern life – would perhaps be impossible, or at least very different, without this belief in imaginary entities underpinning the theory of electricity.
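As a small illustration of the payoff, Python’s built-in complex type makes the alternating-current calculation a few lines (the circuit and component values below are invented; the formula for the impedance of a series RLC circuit is standard):

```python
import math

# Impedance of a series RLC circuit driven at frequency f, using complex
# arithmetic: Z = R + j(wL - 1/(wC)), where j is the square root of -1.
# Component values are invented for illustration.
R = 100.0   # resistance, ohms
L = 0.1     # inductance, henries
C = 1e-6    # capacitance, farads
f = 50.0    # mains frequency, hertz

w = 2 * math.pi * f
Z = complex(R, w * L - 1 / (w * C))  # the "imaginary" part does real work

print(f"impedance magnitude: {abs(Z):.1f} ohms")
print(f"phase shift: {math.degrees(math.atan2(Z.imag, Z.real)):.1f} degrees")
```

The engineer using this calculation need take no stand on whether i “really exists”; believing in it pro tem is simply useful.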
And, as I have argued before (eg, here and here), effective business strategy development and planning under uncertainty requires holding multiple incoherent beliefs about the world simultaneously. The scenarios created by scenario planners are examples of such mutually inconsistent beliefs about the world. Most people – and most companies – find it difficult to maintain and act upon mutually inconsistent beliefs. For that reason the company that pioneered the use of scenario planning, Shell, has always tried to ensure that probabilities are never assigned to scenarios, because managers tend to give greater credence, and hence attention, to scenarios having higher probabilities. The utilitarian value of scenario planning is greatest when planners consider seriously the consequences of low-likelihood, high-impact scenarios (as Shell found after the OPEC oil price shock of 1973), not the scenarios they think are most probable. To do this well, planners need to believe statements that they judge to be false, or at least act as if they believe these statements.
Here and here I discuss another example, taken from espionage history.
Alan Greenspan, then Chairman of the US Federal Reserve Bank System, speaking in January 2004, discussed the failure of traditional methods in econometrics to provide adequate guidance to monetary policy decision-makers. His words included:
Given our inevitably incomplete knowledge about key structural aspects of an ever-changing economy and the sometimes asymmetric costs or benefits of particular outcomes, a central bank needs to consider not only the most likely future path for the economy but also the distribution of possible outcomes about that path. The decisionmakers then need to reach a judgment about the probabilities, costs, and benefits of the various possible outcomes under alternative choices for policy.”
The product of a low-probability event and a potentially severe outcome was judged a more serious threat to economic performance than the higher inflation that might ensue in the more probable scenario.”
Many proponents of Bayesianism point to Cox’s theorem as the justification for arguing that there is only one coherent method for representing uncertainty. Cox’s theorem states that any representation of uncertainty satisfying certain assumptions is isomorphic to classical probability theory. As I have long argued, this claim depends upon the law of the excluded middle (LEM).
Mark Colyvan, an Australian philosopher of mathematics, published a paper in 2004 which examined the philosophical and logical assumptions of Cox’s theorem (assumptions usually left implicit by its proponents), and argued that these are inappropriate for many (perhaps even most) domains with uncertainty.
M. Colyvan (2004): The philosophical significance of Cox’s theorem. International Journal of Approximate Reasoning, 37: 71-85.
Colyvan’s work complements Glenn Shafer’s attack on the theorem, which noted that it assumes that belief should be represented by a real-valued function.
G. A. Shafer: Comments on “Constructing a logic of plausible inference: a guide to Cox’s theorem” by Kevin S. Van Horn. International Journal of Approximate Reasoning, 35: 97-105.
Although these papers are several years old, I mention them here for the record – and because I still encounter invocations of Cox’s Theorem.
IME, most statisticians, like most economists, have little historical sense. This absence means they will not appreciate a nice irony: the person responsible for axiomatizing classical probability theory – Andrei Kolmogorov – is also one of the people responsible for axiomatizing intuitionistic logic, a version of logic which dispenses with the law of the excluded middle. One such axiomatization is called BHK logic, in recognition of Brouwer, Heyting and Kolmogorov.
What are models for? Most developers and users of models, in my experience, seem to assume the answer to this question is obvious, and thus never raise it. In fact, modeling has many potential purposes, and some of these conflict with one another. Some of the criticisms made of particular models arise from misunderstandings or misperceptions of the purposes of those models, and of the modeling activities which led to them.
Liking cladistics as I do, I thought it useful to list all the potential purposes of models and modeling. The only discussion of this topic that I know of is a brief one by game theorist Ariel Rubinstein, in an appendix to a book on modeling rational behaviour (Rubinstein 1998). Rubinstein considers several alternative purposes for economic modeling, but ignores many others. My list is as follows (to be expanded and annotated in due course):
- 1. To better understand some real phenomena or existing system. This is perhaps the most commonly perceived purpose of modeling, in the sciences and the social sciences.
- 2. To predict (some properties of) some real phenomena or existing system. A model aiming to predict some domain may be successful without aiding our understanding of the domain at all. Isaac Newton’s model of the motion of planets, for example, was predictive but not explanatory. I understand that physicist David Deutsch argues that predictive ability is not an end of scientific modeling but a means, since it is how we assess and compare alternative models of the same phenomena. This is wrong on both counts: prediction IS an end of much modeling activity (especially in business strategy and public policy domains), and it is not the only means we use to assess models. Indeed, for many modeling activities, calibration and prediction are problematic, and so predictive capability may not even be possible as a form of model assessment.
- 3. To manage or control (some properties of) some real phenomena or existing system.
- 4. To better understand a model of some real phenomena or existing system. Arguably, most economic theorizing and modeling falls into this category, and Rubinstein’s preferred purpose is of this type. Macro-economic models, if they are calibrated at all, are calibrated against artificial, human-defined variables such as employment, GDP and inflation, variables which may themselves bear a tenuous and dynamic relationship to any underlying economic reality. Micro-economic models, if they are calibrated at all, are often calibrated with stylized facts, abstractions and simplifications of reality which economists have come to regard as representative of the domain in question. In other words, economic models are not usually calibrated against reality directly, but against other models of reality. Similarly, large parts of contemporary mathematical physics (such as string theory and brane theory) have no access to any physical phenomena other than via the mathematical model itself: our only means of apprehension of vibrating strings in inaccessible dimensions beyond the four we live in, for instance, is through the mathematics of string theory. In this light, it seems nonsense to talk about the effectiveness, reasonable or otherwise, of mathematics in modeling reality, for how could we tell?
- 5. To predict (some properties of) a model of some real phenomena or existing system.
- 6. To better understand, predict or manage some intended (not-yet-existing) artificial system, so to guide its design and development. Understanding a system that does not yet exist is qualitatively different to understanding an existing domain or system, because the possibility of calibration is often absent and because the model may act to define the limits and possibilities of subsequent design actions on the artificial system. The use of speech act theory (a model of natural human language) for the design of artificial machine-to-machine languages, or the use of economic game theory (a mathematical model of a stylized conceptual model of particular micro-economic realities) for the design of online auction sites are examples here. The modeling activity can even be performative, helping to create the reality it may purport to describe, as in the case of the Black-Scholes model of options pricing.
- 7. To provide a locus for discussion between relevant stakeholders in some business or public policy domain. Most large-scale business planning models have this purpose within companies, particularly when multiple partners are involved. Likewise, models of major public policy issues, such as epidemics, have this function. In many complex domains, such as those in public health, models provide a means to tame and domesticate the complexity of the domain. This helps stakeholders to jointly consider concepts, data, dynamics, policy options, and assessment of potential consequences of policy options, all of which may need to be socially constructed.
- 8. To provide a means for identification, articulation and potentially resolution of trade-offs and their consequences in some business or public policy domain. This is the case, for example, with models of public health risk assessment of chemicals or new products by environmental protection agencies, and models of epidemics deployed by government health authorities.
- 9. To enable rigorous and justified thinking about the assumptions and their relationships to one another in modeling some domain. Business planning models usually serve this purpose. They may be used to inform actions, both to eliminate or mitigate negative consequences and to enhance positive consequences, as in retroflexive decision making.
- 10. To enable a means of assessment of the managerial competencies of the people undertaking the modeling activity. Investors in start-ups know that the business plans of the company founders are likely to be out of date very quickly. The function of such business plans is not to model reality accurately, but to force rigorous thinking about the domain, and to provide a means by which potential investors can challenge the assumptions and thinking of management, as a way of probing the managerial competence of those managers. Business planning can thus be seen as a form of epideictic argument, where arguments are assessed on their form rather than their content, as I have argued here.
- 11. As a means of play, to enable the exercise of human intelligence, ingenuity and creativity, in developing and exploring the properties of models themselves. This purpose is true of that human activity known as doing pure mathematics, and perhaps of most of that academic activity known as doing mathematical economics. As I have argued before, mathematical economics is closer to theology than to the modeling undertaken in the natural sciences. I see nothing wrong with this being a purpose of modeling, although it would be nice if academic economists were honest enough to admit that their use of public funds was primarily in pursuit of private pleasures, and any wider social benefits from their modeling activities were incidental.
POSTSCRIPT (Added 2011-06-17): I have just seen Joshua Epstein’s 2008 discussion of the purposes of modeling in science and social science. Epstein lists 17 reasons to build explicit models (in his words, although I have added the label “0” to his first reason):
0. Prediction
1. Explain (very different from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound (bracket) outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time. [Presumably, Epstein means “crisis-response options” here.]
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialog
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple).
These are at a lower level than my list, and I believe some of his items are consequences of purposes rather than purposes themselves, at least for honest modelers (e.g., #11, #12, #16).
Joshua M Epstein [2008]: Why model? Keynote address to the Second World Congress on Social Simulation, George Mason University, USA. Available here (PDF).
Robert E Marks [2007]: Validating simulation models: a general framework and four applied examples. Computational Economics, 30 (3): 265-290.
David F Midgley, Robert E Marks and D Kunchamwar [2007]: The building and assurance of agent-based models: an example and challenge to the field. Journal of Business Research, 60 (8): 884-893.
Robert Rosen [1985]: Anticipatory Systems. Pergamon Press.
Ariel Rubinstein [1998]: Modeling Bounded Rationality. Cambridge, MA, USA: MIT Press. Zeuthen Lecture Book Series.
Ariel Rubinstein [2006]: Dilemmas of an economic theorist. Econometrica, 74 (4): 865-883.