Limits of Bayesianism

Many proponents of Bayesianism point to Cox’s theorem as the justification for arguing that there is only one coherent method for representing uncertainty. Cox’s theorem states that any representation of uncertainty satisfying certain assumptions is isomorphic to classical probability theory. As I have long argued, this claim depends upon the law of the excluded middle (LEM).
Mark Colyvan, an Australian philosopher of mathematics, published a paper in 2004 which examined the philosophical and logical assumptions of Cox’s theorem (assumptions usually left implicit by its proponents), and argued that these are inappropriate for many (perhaps even most) domains with uncertainty.
M. Colyvan [2004]: The philosophical significance of Cox’s theorem. International Journal of Approximate Reasoning, 37: 71-85.
Colyvan’s work complements Glenn Shafer’s attack on the theorem, which noted that it assumes that belief should be represented by a real-valued function.
G. A. Shafer [2004]: Comments on “Constructing a logic of plausible inference: a guide to Cox’s theorem” by Kevin S. Van Horn. International Journal of Approximate Reasoning, 35: 97-105.
Although these papers are several years old, I mention them here for the record –  and because I still encounter invocations of Cox’s Theorem.
In my experience, most statisticians, like most economists, have little historical sense. This absence means they will not appreciate a nice irony: the person responsible for axiomatizing classical probability theory – Andrei Kolmogorov – was also one of the people responsible for axiomatizing intuitionistic logic, a logic which dispenses with the law of the excluded middle. One such axiomatization is called BHK Logic (for Brouwer, Heyting and Kolmogorov) in recognition of its authors.

Automating prayer

I have recently re-read Michael Frayn’s The Tin Men, a superb satire of AI.  Among the many wonderful passages is this, on the semantic verification problem of agent communications:

“Ah,” said Rowe, “there’s a difference between a man and a machine when it comes to praying.”
“Aye. The machine would do it better. It wouldn’t pray for things it oughtn’t pray for, and its thoughts wouldn’t wander.”
“Y-e-e-s. But the computer saying the words wouldn’t be the same . . .”
“Oh, I don’t know. If the words ‘O Lord, bless the Queen and her Ministers‘ are going to produce any tangible effects on the Government, it can’t matter who or what says them, can it?”
“Y-e-e-s, I see that. But if a man says the words he means them.”
“So does the computer. Or at any rate, it would take a damned complicated computer to say the words without meaning them. I mean, what do we mean by ‘mean’? If we want to know whether a man or a computer means ‘O Lord, bless the Queen and her Ministers,’ we look to see whether it’s grinning insincerely or ironically as it says the words. We try to find out whether it belongs to the Communist Party. We observe whether it simultaneously passes notes about lunch or fornication. If it passes all the tests of this sort, what other tests are there for telling if it means what it says? All the computers in my department, at any rate, would pray with great sincerity and single-mindedness. They’re devout wee things, computers.” (pages 109-110).

Reference:
Michael Frayn [1995/1965]: The Tin Men. London, UK: Penguin (originally published by William Collins, 1965).

When are agent models or systems appropriate?


In July 2005, inspired by a talk by Sandor Veres on formation flying by unmanned aircraft, given at the Liverpool Agents in Space Symposium, I wrote down some rules of thumb I had been using informally for determining whether an agent-based modeling (ABM) approach is appropriate for a particular application domain. Appropriateness is assessed by answering the following questions:

1. Are there multiple entities in the domain, or can the domain be represented as if there are?
2. Do the entities have access to potentially different information sources or do they have potentially different beliefs? For example, differences may be due to geographic, temporal, legal, resource or conceptual constraints on the information available to the entities.
3. Do the entities have potentially different goals or objectives? This will typically be the case if the entities are owned or instructed by different people or organizations.
4. Do the entities have potentially different preferences (or utilities) over their goals or objectives?
5. Are the relationships between the entities likely to change over time?
6. Does a system representing the domain have multiple threads of control?

If the answers are YES to Question 1 and also YES to any other question, then an agent-based approach is appropriate. If the answer to Question 1 is NO, or if the answers are YES to Question 1 but NO to all other questions, then a traditional object-based approach is more appropriate.
Traditional object-oriented systems, by contrast, involve static relationships between non-autonomous entities which share the same beliefs, preferences and goals, within a system having a single thread of control.
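The screening rule above is mechanical enough to write down as code. Here is a minimal sketch; the parameter names are my own labels for the six questions, not terms from the original talk.

```python
# Hypothetical encoding of the six screening questions for choosing between
# an agent-based and an object-based approach.  Parameter names are my own
# labels for Questions 1-6, not terms from the original talk.

def recommend_approach(multiple_entities: bool,
                       differing_information: bool,
                       differing_goals: bool,
                       differing_preferences: bool,
                       dynamic_relationships: bool,
                       multiple_threads_of_control: bool) -> str:
    """Return 'agent-based' if Q1 is YES and any of Q2-Q6 is also YES;
    otherwise return 'object-based'."""
    if multiple_entities and any([differing_information,
                                  differing_goals,
                                  differing_preferences,
                                  dynamic_relationships,
                                  multiple_threads_of_control]):
        return "agent-based"
    return "object-based"

# Example: multiple entities owned by different organizations, hence with
# potentially different goals (YES to Q1 and Q3):
print(recommend_approach(True, False, True, False, False, False))  # agent-based
```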

Digital aspen forests

Brian Arthur has an article in the latest issue of The McKinsey Quarterly, here, about automated and intelligent machine-to-machine communications creating a second, digital economy underlying the first, physical one.

“I want to argue that something deep is going on with information technology, something that goes well beyond the use of computers, social media, and commerce on the Internet. Business processes that once took place among human beings are now being executed electronically. They are taking place in an unseen domain that is strictly digital. On the surface, this shift doesn’t seem particularly consequential—it’s almost something we take for granted. But I believe it is causing a revolution no less important and dramatic than that of the railroads. It is quietly creating a second economy, a digital one.
. . . .
We do have sophisticated machines, but in the place of personal automation (robots) we have a collective automation. Underneath the physical economy, with its physical people and physical tasks, lies a second economy that is automatic and neurally intelligent, with no upper limit to its buildout. The prosperity we enjoy and the difficulties with jobs would not have surprised Keynes, but the means of achieving that prosperity would have.
This second economy that is silently forming—vast, interconnected, and extraordinarily productive—is creating for us a new economic world. How we will fare in this world, how we will adapt to it, how we will profit from it and share its benefits, is very much up to us.”

Reference:
W. Brian Arthur [2011]: The Second Economy. The McKinsey Quarterly, October 2011.

Vale Dennis Ritchie (1941-2011)

A post to note the passing of Dennis Ritchie (1941-2011), co-developer of the C programming language and of the Unix operating system. The Guardian’s obituary is here, a brief note from Wired Magazine here, and John Naughton’s tribute in the Observer here. So much of modern technology we owe to just a few people, and Ritchie was one of them.
An index to posts about the Matherati is here.

Networks of Banks

The first plenary speaker at the 13th International Conference on E-Commerce (ICEC 2011) in Liverpool last week was Robert, Lord May, Professor of Ecology at Oxford University, former Chief UK Government Scientific Advisor, and former President of the Royal Society.  His talk was part of the special session on Robustness and Reliability of Electronic Marketplaces (RREM 2011), and it was insightful, provocative and amusing.
May began life as an applied mathematician and theoretical physicist (in the Sydney University Physics department of Harry Messel), then applied his models to food webs in ecology, and now finds the same types of network and lattice models useful for understanding inter-dependencies in networks of banks. Although, as he said in his talk, these models are very simplified, to the point of being toy models, they still have the power to demonstrate unexpected outcomes: for example, that actions which are individually rational may not be desirable from the perspective of a system containing those individuals. (It is one of the profound differences between Computer Science and Economics that such an outcome would be unlikely to surprise most computer scientists, yet seems to surprise mainstream Economists, imbued as they are with a belief in metaphysical carpal entities.)
From the final section of Haldane and May (2011):

“The analytic model outlined earlier demonstrates that the topology of the financial sector’s balance sheet has fundamental implications for the state and dynamics of systemic risk. From a public policy perspective, two topological features are key.
First, diversity across the financial system. In the run-up to the crisis, and in the pursuit of diversification, banks’ balance sheets and risk management systems became increasingly homogenous. For example, banks became increasingly reliant on wholesale funding on the liabilities side of the balance sheet; in structured credit on the assets side of their balance sheet; and managed the resulting risks using the same value-at-risk models. This desire for diversification was individually rational from a risk perspective. But it came at the expense of lower diversity across the system as a whole, thereby increasing systemic risk. Homogeneity bred fragility (N. Beale and colleagues, manuscript in preparation).
In regulating the financial system, little effort has as yet been put into assessing the system-wide characteristics of the network, such as the diversity of its aggregate balance sheet and risk management models. Even less effort has been put into providing regulatory incentives to promote diversity of balance sheet structures, business models and risk management systems. In rebuilding and maintaining the financial system, this systemic diversity objective should probably be given much greater prominence by the regulatory community.
Second, modularity within the financial system. The structure of many non-financial networks is explicitly and intentionally modular.  This includes the design of personal computers and the world wide web and the management of forests and utility grids. Modular configurations prevent contagion infecting the whole network in the event of nodal failure. By limiting the potential for cascades, modularity protects the systemic resilience of both natural and constructed networks.
The same principles apply in banking. That is why there is an ongoing debate on the merits of splitting banks, either to limit their size (to curtail the strength of cascades following failure) or to limit their activities (to curtail the potential for cross-contamination within firms). The recently proposed Volcker rule in the United States, quarantining risky hedge fund, private equity and proprietary trading activity from other areas of banking business, is one example of modularity in practice. In the United Kingdom, the new government have recently set up a Royal Commission to investigate the case for encouraging modularity and diversity in banking ecosystems, as a means of buttressing systemic resilience.
It took a generation for ecological models to adapt. The same is likely to be true of banking and finance.”

It would be interesting to consider network models which are more realistic than these toy versions, for instance, with nodes representing banks with goals, preferences and beliefs.
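To give a flavour of the toy models May describes, here is a minimal cascade sketch of my own devising (it is emphatically not the actual Haldane-May model): banks are nodes, an interbank loan is an edge, and a bank fails when its losses from failed debtors exceed its capital buffer.

```python
# A minimal toy contagion model, in the spirit of the network models May
# describes -- NOT the actual Haldane-May model.  An entry
# exposures[a][b] = s means bank a holds a loan of size s owed by bank b.
# When a bank fails, its creditors lose their whole exposure to it; any
# bank whose cumulative losses exceed its capital buffer also fails.

def cascade(exposures, capital, initial_failure):
    """Return the set of failed banks once the cascade settles.

    exposures: dict mapping creditor -> {debtor: loan size}
    capital:   dict mapping bank -> capital buffer
    """
    failed = {initial_failure}
    changed = True
    while changed:
        changed = False
        for creditor, loans in exposures.items():
            if creditor in failed:
                continue
            # Total losses from exposures to banks that have failed so far.
            losses = sum(size for debtor, size in loans.items()
                         if debtor in failed)
            if losses > capital[creditor]:
                failed.add(creditor)
                changed = True
    return failed

# A ring of four banks, each lending 10 to its neighbour, with thin buffers:
exposures = {"A": {"B": 10}, "B": {"C": 10}, "C": {"D": 10}, "D": {"A": 10}}
capital = {"A": 5, "B": 5, "C": 5, "D": 5}
print(sorted(cascade(exposures, capital, "A")))  # every bank fails in turn
```

Even this crude sketch shows the point about modularity: cut one edge of the ring (or fatten the buffers) and the cascade stops at the first failure, which is exactly the systemic argument for modular banking structures quoted above.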
 
References:
F. Caccioli, M. Marsili and P. Vivo [2009]: Eroding market stability by proliferation of financial instruments. The European Physical Journal B, 71: 467–479.
Andrew Haldane and Robert May [2011]: Systemic risk in banking ecosystems. Nature, 469:  351-355.
Robert May, Simon Levin and George Sugihara [2008]: Complex systems: ecology for bankers. Nature, 451, 893–895.
Also, the UK Government’s 2011 Foresight Programme on the Future of Computer Trading in Financial Markets has published its background and working papers, here.
 

The Matherati: Index

The psychologist Howard Gardner identified nine distinct types of human intelligence. It is perhaps not surprising that people with great verbal and linguistic dexterity have long had a word to describe themselves: the Literati. Those of us with mathematical and logical reasoning capabilities I have therefore been calling the Matherati, defined here. I have tried to salute members of this group as I recall or encounter them.
This page lists the people I have currently written about or mentioned, in alpha order:
Alexander d’Arblay, John Aris, John Atkinson, John Bennett, Christophe Bertrand, Matthew Piers Watt Boulton, Joan Burchardt, David Caminer, Boris N. Delone, the Delone family, Nicolas Fatio de Duillier, Michael Dummett, Sean Eberhard, Edward Frenkel, Martin Gardner, Kurt Gödel, Charles Hamblin, Thomas Harriott, Martin Harvey, Fritz John, Ernest Kaye, Robert May, Robin Milner, Isaac Newton, Henri Poincaré, Mervyn Pragnell, Malcolm Rennie, Dennis Ritchie, Ibn Sina, Adam Spencer, Bella Subbotovskaya, Bill Thurston, Alan Turing, Alexander Yessenin-Volpin.
And lists:
20th-Century Mathematicians.

What use are models?

What are models for? Most developers and users of models, in my experience, seem to assume the answer to this question is obvious, and thus never raise it. In fact, modeling has many potential purposes, and some of these conflict with one another. Some of the criticisms made of particular models arise from misunderstandings or misperceptions of the purposes of those models and of the modeling activities which led to them.
Liking cladistics as I do, I thought it useful to list all the potential purposes of models and modeling. The only discussion of this topic that I know of is a brief one by game theorist Ariel Rubinstein, in an appendix to a book on modeling rational behaviour (Rubinstein 1998). Rubinstein considers several alternative purposes for economic modeling, but ignores many others. My list is as follows (to be expanded and annotated in due course):

  • 1. To better understand some real phenomena or existing system.   This is perhaps the most commonly perceived purpose of modeling, in the sciences and the social sciences.
  • 2. To predict (some properties of) some real phenomena or existing system. A model aiming to predict some domain may be successful without aiding our understanding of the domain at all. Isaac Newton’s model of the motion of planets, for example, was predictive but not explanatory. I understand that physicist David Deutsch argues that predictive ability is not an end of scientific modeling but a means, since it is how we assess and compare alternative models of the same phenomena. This is wrong on both counts: prediction IS an end of much modeling activity (especially in business strategy and public policy domains), and it is not the only means we use to assess models. Indeed, for many modeling activities, calibration and prediction are problematic, and so predictive capability may not even be possible as a form of model assessment.
  • 3. To manage or control (some properties of) some real phenomena or existing system.
  • 4. To better understand a model of some real phenomena or existing system. Arguably, most of economic theorizing and modeling falls into this category, and Rubinstein’s preferred purpose is of this type. Macro-economic models, if they are calibrated at all, are calibrated against artificial, human-defined variables such as employment, GDP and inflation, variables which may themselves bear a tenuous and dynamic relationship to any underlying economic reality. Micro-economic models, if they are calibrated at all, are often calibrated with stylized facts, abstractions and simplifications of reality which economists have come to regard as representative of the domain in question. In other words, economic models are not usually calibrated against reality directly, but against other models of reality. Similarly, large parts of contemporary mathematical physics (such as string theory and brane theory) have no access to any physical phenomena other than via the mathematical model itself: our only means of apprehension of vibrating strings in inaccessible dimensions beyond the four we live in, for instance, is through the mathematics of string theory. In this light, it seems nonsense to talk about the effectiveness, reasonable or otherwise, of mathematics in modeling reality, since how could we tell?
  • 5. To predict (some properties of) a model of some real phenomena or existing system.
  • 6. To better understand, predict or manage some intended (not-yet-existing) artificial system, so as to guide its design and development. Understanding a system that does not yet exist is qualitatively different to understanding an existing domain or system, because the possibility of calibration is often absent and because the model may act to define the limits and possibilities of subsequent design actions on the artificial system. The use of speech act theory (a model of natural human language) for the design of artificial machine-to-machine languages, or the use of economic game theory (a mathematical model of a stylized conceptual model of particular micro-economic realities) for the design of online auction sites, are examples here. The modeling activity can even be performative, helping to create the reality it may purport to describe, as in the case of the Black-Scholes model of options pricing.
  • 7. To provide a locus for discussion between relevant stakeholders in some business or public policy domain.  Most large-scale business planning models have this purpose within companies, particularly when multiple partners are involved.  Likewise, models of major public policy issues, such as epidemics, have this function.  In many complex domains, such as those in public health, models provide a means to tame and domesticate the complexity of the domain.  This helps stakeholders to jointly consider concepts, data, dynamics, policy options, and assessment of potential consequences of policy options,  all of which may need to be socially constructed. 
  • 8. To provide a means for identification, articulation and potentially resolution of trade-offs and their consequences in some business or public policy domain.   This is the case, for example, with models of public health risk assessment of chemicals or new products by environmental protection agencies, and models of epidemics deployed by government health authorities.
  • 9. To enable rigorous and justified thinking about the assumptions and their relationships to one another in modeling some domain.   Business planning models usually serve this purpose.   They may be used to inform actions, both to eliminate or mitigate negative consequences and to enhance positive consequences, as in retroflexive decision making.
  • 10. To enable a means of assessment of the managerial competencies of the people undertaking the modeling activity. Investors in start-ups know that the business plans of the company founders are likely to be out of date very quickly. The function of such business plans is not to model reality accurately, but to force rigorous thinking about the domain, and to provide a means by which potential investors can challenge the assumptions and thinking of management, as a way of probing the managerial competence of those managers. Business planning can thus be seen to be a form of epideictic argument, where arguments are assessed on their form rather than their content, as I have argued here.
  • 11. As a means of play, to enable the exercise of human intelligence, ingenuity and creativity, in developing and exploring the properties of models themselves.  This purpose is true of that human activity known as doing pure mathematics, and perhaps of most of that academic activity known as doing mathematical economics.   As I have argued before, mathematical economics is closer to theology than to the modeling undertaken in the natural sciences. I see nothing wrong with this being a purpose of modeling, although it would be nice if academic economists were honest enough to admit that their use of public funds was primarily in pursuit of private pleasures, and any wider social benefits from their modeling activities were incidental.

POSTSCRIPT (Added 2011-06-17):  I have just seen Joshua Epstein’s 2008 discussion of the purposes of modeling in science and social science.   Epstein lists 17 reasons to build explicit models (in his words, although I have added the label “0” to his first reason):

0. Prediction
1. Explain (very different from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound (bracket) outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time. [Presumably, Epstein means “crisis-response options” here.]
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialog
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple).

These are at a lower level than my list, and I believe some of his items are the consequences of purposes rather than purposes themselves, at least for honest modelers (e.g., #11, #12, #16).
References:
Joshua M Epstein [2008]: Why model? Keynote address to the Second World Congress on Social Simulation, George Mason University, USA.  Available here (PDF).
Robert E Marks [2007]:  Validating simulation models: a general framework and four applied examples. Computational Economics, 30 (3): 265-290.
David F Midgley, Robert E Marks and D Kunchamwar [2007]:  The building and assurance of agent-based models: an example and challenge to the field. Journal of Business Research, 60 (8): 884-893.
Robert Rosen [1985]: Anticipatory Systems. Pergamon Press.
Ariel Rubinstein [1998]: Modeling Bounded Rationality. Cambridge, MA, USA: MIT Press.  Zeuthen Lecture Book Series.
Ariel Rubinstein [2006]: Dilemmas of an economic theorist. Econometrica, 74 (4): 865-883.

A salute to Charles Hamblin

This short biography of Australian philosopher and computer scientist Charles L. Hamblin was initially commissioned by the Australian Computer Museum Society.

Charles Leonard Hamblin (1922-1985) was an Australian philosopher and one of Australia’s first computer scientists. His main early contributions to computing, which date from the mid-1950s, were the development and application of reverse Polish notation and the zero-address store. He was also the developer of one of the first computer languages, GEORGE. Since his death, his ideas have become influential in the design of computer interaction protocols, and are expected to shape the next generation of e-commerce and machine-communication systems.
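For readers unfamiliar with reverse Polish notation, a small sketch may help. In RPN, operators follow their operands, so expressions need no parentheses and can be evaluated with nothing but a stack; this is the insight behind Hamblin's zero-address store, in which instructions name no operands but take them implicitly from the stack. The evaluator below is my own illustrative example, not Hamblin's code.

```python
# Illustrative reverse-Polish evaluator.  Operators follow their operands,
# so no parentheses are needed and evaluation requires only a stack --
# the "zero-address" idea: an operation names no operands, taking them
# implicitly from the top of the stack.

def eval_rpn(tokens):
    """Evaluate a reverse-Polish expression given as a list of tokens."""
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()            # operands come off the stack ...
            a = stack.pop()
            stack.append(ops[tok](a, b))   # ... and the result goes back on
        else:
            stack.append(float(tok))
    return stack.pop()

# (3 + 4) * 5 in conventional notation becomes "3 4 + 5 *" in RPN:
print(eval_rpn("3 4 + 5 *".split()))  # 35.0
```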