Michael Dummett RIP

The death has just occurred of the philosopher Michael Dummett (1925-2011), formerly Wykeham Professor of Logic at Oxford.  His writings on the philosophy of language and the philosophy of mathematics have influenced me, particularly his thorough book on intuitionism.  Having been educated by pure mathematicians who actively disparaged intuitionist and constructivist ideas, I found it liberating to see these ideas taken seriously and considered carefully.  The precision of Dummett’s writing and thought clearly marked him out as a member of the Matherati, as did his other formal work, such as that on voting procedures.
POSTSCRIPT (2012-01-21):  The logician Graham Priest remembers Dummett as follows:

It is clear that Dummett was one of the most important — perhaps the most important — British philosopher of the last half century. His work on the philosophy of language and metaphysics, inspired by themes in intuitionist logic, was truly groundbreaking. He took intuitionism from a somewhat esoteric doctrine in the philosophy of mathematics to a mainstream philosophical position.
Perhaps his greatest achievement, as far as I am concerned, was to demonstrate beyond doubt the intellectual respectability of a fully-fledged philosophical position based on a contemporary heterodox logic. Philosophers in the United Kingdom, even if they do not subscribe to Dummett’s views, no longer doubt the possibility of this. Dummett had an influence in Australia, too. It was quieter there than in the U.K., but the relevant philosophical lesson was amplified by logicians who endorsed heterodox logics of a different stripe (for which, I think, Dummett had little sympathy). The result has been much the same.
In the United States, though, Dummett had virtually no significant impact. Indeed, I am continually surprised how conservative philosophy in the United States is with regard to heterodox logics. It is still awaiting a Dummett to awaken it from its dogmatic logical slumbers.
Graham Priest, City University of New York Graduate Center, and the University of Melbourne (Australia)

References:
His Guardian obituary is here.  An index to posts about members of the Matherati can be found here.
M. Dummett [1977/2000]: Elements of Intuitionism. (Oxford: Clarendon Press, 1st edition 1977; 2nd edition 2000).

A brief history of mathematics

Australian category-theorist Ross Street has an elegant, one-page summary of the first 2,500 years of western mathematics, here.  This was apparently a handout given in a talk to the Macquarie University Philosophy Students Society in 1984.  I found Street’s high-level view of what (some important) mathematicians have (mostly) been doing illuminating and thought-provoking, and so I reproduce it here.
A nice way to think about topoi, of course, is that due to Rob Goldblatt:  a topos is the most general object that has all the properties of the category of sets.
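For concreteness, here is one standard textbook definition of an elementary topos (my paraphrase, not Goldblatt’s wording):

```latex
% An elementary topos is a category $\mathcal{E}$ such that:
\begin{itemize}
  \item $\mathcal{E}$ has all finite limits (in particular, a terminal object and pullbacks);
  \item $\mathcal{E}$ is cartesian closed (it has an exponential object $B^{A}$ for all objects $A$, $B$);
  \item $\mathcal{E}$ has a subobject classifier $\Omega$, generalising the two-element set of
        truth values in $\mathbf{Set}$.
\end{itemize}
% The category $\mathbf{Set}$ satisfies all three conditions, which is one way to read
% Goldblatt's slogan above.
```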
 

Space, Sets and Beyond
First Cycle:  General spaces advance the study of naive geometry
1. Naive geometry:  Zeno, Eudoxus.
2. Axiomatic geometry (unique model intended): Euclid, Apollonius (c. 300-200 BC).
3. Algebraic techniques (coordinate geometry): Descartes 1596-1650.
4. Non-Euclidean geometry (independence of the “parallels axiom”:  models without parallels axiom constructed from a model with it):  Gauss, Bolyai, Lobatchewski (early 19C).
5. Locally Euclidean spaces:  Riemann 1826-1866, Lie.
6. Relationships between spaces (continuity, linearity): Cauchy, Cayley, Weierstrass, Dedekind (1880-present).
 
Second Cycle:  Toposes can be viewed as even more general spaces
1. Naive set theory: Peano, Cantor (c. 1900).
2. Axiomatic set theory (unique model intended): Hilbert, Gödel, Bernays, Zermelo, Zorn, Fraenkel.
3. Abstract algebra (mathematical logic): Boole, Poincaré, Hilbert, Heyting, Brouwer, Noether, Church, Turing.
4. Non-standard set theories (independence of the “axiom of choice” and “continuum hypothesis”; Boolean-valued models; non-standard analysis): Gödel, Cohen, Robinson (1920-1950).
5. Local set theory (sheaves): Leray, Serre, Grothendieck, Lawvere, Tierney (1945-1970).
6. Relationships between toposes (a “topos” is a generalized set theory): 1970-present.

Suddenly, the fog lifts . . .

Andrew Wiles, prover of Wiles’ Theorem (aka Fermat’s Last Theorem), on the doing of mathematics:

Perhaps I could best describe my experience of doing mathematics in terms of entering a dark mansion. One goes into the first room, and it’s dark, completely dark. One stumbles around bumping into the furniture, and gradually, you learn where each piece of furniture is, and finally, after six months or so, you find the light switch. You turn it on, and suddenly, it’s all illuminated. You can see exactly where you were.

This describes my experience, over shorter time-frames, in studying pure mathematics as an undergraduate, with each new topic covered: epsilon-delta arguments in analysis; point-set topology; axiomatic set theory; functional analysis; measure theory; group theory; algebraic topology; category theory; statistical decision theory; integral geometry; etc.    A very similar process happens in learning a new language, whether a natural (human) language or a programming language.     Likewise, similar words describe the experience of entering a new organization (either as an employee or as a management consultant), and trying to understand how the organization works, who has the real power, what are the social relationships and dynamics within the organization, etc, something I have previously described here.
One encounters a new discipline or social organization, one studies it and thinks about it from as many angles and perspectives as one can, and eventually, if one is clever and diligent, or just lucky, a light goes on and all is illuminated.  Like visiting a new city and learning its layout by walking through it, frequently getting lost and finding one’s way again, enlightenment requires work.  Over time, one learns not to be afraid when encountering a new subject, but rather to relish the state of inchoateness and confusion in the period between starting and enlightenment.  The pleasure and wonder of the enlightenment is so great that all the prior pain is forgotten.
Reference:
Andrew Wiles [1996]: speaking in Fermat’s Last Theorem, a BBC Horizon documentary by S. Singh and John Lynch (BBC, 1996), cited in Frans Oort [2011]: Did earlier thoughts inspire Grothendieck? (Hat tip).

Limits of Bayesianism

Many proponents of Bayesianism point to Cox’s theorem as the justification for arguing that there is only one coherent method for representing uncertainty. Cox’s theorem states that any representation of uncertainty satisfying certain assumptions is isomorphic to classical probability theory. As I have long argued, this claim depends upon the law of the excluded middle (LEM).
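For readers who have not seen the theorem stated, the assumptions behind Cox-style derivations are usually given in roughly the following form (my paraphrase, not Cox’s own wording); note that the propositions are assumed to form a Boolean algebra, which is where the excluded middle enters:

```latex
% Cox-style assumptions (paraphrased); the propositions are assumed to form a Boolean algebra.
\begin{align*}
\text{(C1)}\quad & \mathrm{Pl}(A \mid C) \in \mathbb{R}
    \quad\text{(plausibility is a single real number)} \\
\text{(C2)}\quad & \mathrm{Pl}(\neg A \mid C) = f\bigl(\mathrm{Pl}(A \mid C)\bigr)
    \quad\text{for some fixed function } f \\
\text{(C3)}\quad & \mathrm{Pl}(A \wedge B \mid C) = g\bigl(\mathrm{Pl}(A \mid C),\ \mathrm{Pl}(B \mid A \wedge C)\bigr)
    \quad\text{for some fixed function } g
\end{align*}
% Conclusion, under further regularity conditions: Pl can be rescaled monotonically into a
% function P obeying the sum and product rules of classical probability.
```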
Mark Colyvan, an Australian philosopher of mathematics, published a paper in 2004 which examined the philosophical and logical assumptions of Cox’s theorem (assumptions usually left implicit by its proponents), and argued that these are inappropriate for many (perhaps even most) domains with uncertainty.
M. Colyvan [2004]: The philosophical significance of Cox’s theorem. International Journal of Approximate Reasoning, 37: 71-85.
Colyvan’s work complements Glenn Shafer’s attack on the theorem, which noted that it assumes that belief should be represented by a real-valued function.
G. Shafer [2004]: Comments on “Constructing a logic of plausible inference: a guide to Cox’s theorem” by Kevin S. Van Horn. International Journal of Approximate Reasoning, 35: 97-105.
Although these papers are several years old, I mention them here for the record –  and because I still encounter invocations of Cox’s Theorem.
IME, most statisticians, like most economists, have little historical sense. This absence means they will not appreciate a nice irony: the person responsible for axiomatizing classical probability theory – Andrei Kolmogorov – is also one of the people responsible for axiomatizing intuitionistic logic, a logic which differs from classical logic in dispensing with the law of the excluded middle. In recognition of this, the standard informal reading of the intuitionistic connectives is known as the BHK interpretation (for Brouwer, Heyting and Kolmogorov).
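For readers who have not met it, the BHK reading runs roughly as follows (a standard textbook paraphrase, not Kolmogorov’s own wording):

```latex
\begin{itemize}
  \item a proof of $A \wedge B$ is a proof of $A$ together with a proof of $B$;
  \item a proof of $A \vee B$ is a proof of $A$ or a proof of $B$, together with an indication of which;
  \item a proof of $A \rightarrow B$ is a construction converting any proof of $A$ into a proof of $B$;
  \item a proof of $\neg A$ is a construction converting any proof of $A$ into a proof of $\bot$ (absurdity).
\end{itemize}
% On this reading, a proof of $A \vee \neg A$ would require a method for deciding $A$,
% which is why intuitionistic logic does not assume the law of the excluded middle.
```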

O ignorance! O mores!

In the last few weeks, it was reported that mathematician Edward Nelson of Princeton had claimed to show that Peano Arithmetic, one of many possible axiomatic systems for the numbers, was internally inconsistent.   Within a short period, his claim and proof were subject to examination by other pure mathematicians, not least Terence Tao of UCLA, who thought Nelson’s argument had potential flaws.   Nelson initially defended himself and then, accepting the criticisms, retracted his claim.  More details can be found in a post by John Baez on the n-Category Café blog, which initiated a dialog in which both Tao and Nelson participated, and where Nelson announced his retraction.   A subsequent discussion of what happened in this dialog and the lessons for the philosophy of mathematics can be found on the blog of Catarina Dutilh Novaes, a discussion to which Tao again contributed, this time on his methods.
This example of fast proposal-criticism-retraction contrasts sharply with mainstream Economics, where an error in deductive reasoning may be pointed out, with neither retraction nor revision nor apparent learning from its adherents 70 years on.  Keynes’ criticisms of conventional austerity economics were first uttered in the 1930s, and yet they still have to be repeated.  Recalcitrant ignorance indeed.
One of the key insights of Keynesian economics is that a government is not like a household:  Governments can increase their income by increasing their spending, something most households cannot do.   Another key insight is that the effect of one person doing something may be very different if many people also do it.  To see better at a baseball stadium, for instance, you can stand up, but this only works if the people in front of you stay seated; if everyone stands, you will see no better than if everyone stayed seated.    Likewise, the economy-wide effects of individuals saving may be deleterious even when the effects are beneficial for an individual.   Economists call this the paradox of thrift.  Instead of learning from such insights, we get a British Prime Minister telling us all in 2011 to save hard and reduce our personal debt, and treating the national budget as if he were running a household in Grantham.
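A minimal numerical sketch of the savings point, using a toy Keynesian income-expenditure model with made-up numbers (an illustration of the idea only, not Keynes’s own formulation):

```python
# Toy income-expenditure model: Y = C + I + G, with consumption C = c * Y.
# All figures are illustrative only.

def equilibrium_income(c, investment, government):
    """Income Y solving Y = c*Y + I + G, i.e. Y = (I + G) / (1 - c)."""
    return (investment + government) / (1.0 - c)

investment, government = 20.0, 30.0

for c in (0.9, 0.8):  # households try to save more, so the propensity to consume falls
    income = equilibrium_income(c, investment, government)
    saving = (1.0 - c) * income  # aggregate private saving out of income
    print(f"propensity to consume {c}: income {income:.0f}, aggregate saving {saving:.0f}")

# With these numbers, a higher saving rate halves equilibrium income (500 to 250)
# while aggregate saving stays at I + G = 50: prudent for each household,
# self-defeating for all of them together.
```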

The Matherati: Index

The psychologist Howard Gardner identified nine distinct types of human intelligence. It is perhaps not surprising that people with great verbal and linguistic dexterity have long had a word to describe themselves, the Literati. I have therefore been calling those of us with mathematical and logical reasoning capabilities the Matherati, defined here. I have tried to salute members of this group as I recall or encounter them.

This page lists the people I have currently written about or mentioned, in alpha order:
Alexander d’Arblay, John Aris, John Atkinson, John Bennett, Christophe Bertrand, Matthew Piers Watt Boulton, Joan Burchardt, David Caminer, Boris N. Delone, the Delone family, Nicolas Fatio de Duillier, Michael Dummett, Sean Eberhard, Edward Frenkel, Martin Gardner, Kurt Gödel, Charles Hamblin, Thomas Harriott, Martin Harvey, Fritz John, Ernest Kaye, Robert May, Robin Milner, Isaac Newton, Henri Poincaré, Mervyn Pragnell, Malcolm Rennie, Dennis Ritchie, Ibn Sina, Adam Spencer, Bella Subbotovskaya, Bill Thurston, Alan Turing, Alexander Yessenin-Volpin.

And lists:
20th-Century Mathematicians.

Music as thought

I have remarked before that music is a form of thinking.  It is a form of thinking for the composer and may also be for the listener.  If the performers are to transmit its essence effectively and well, it will be a form of thinking for them also. Listening recently to the music of Prokofiev,  I realize I don’t think in the way he does, and so I find his music alien.

But what is the nature of this musical thought?

Reliable Knowledge

How little scientists know who only know science!  Thanks again to Norm, I learn about some statements by a retired professor of chemistry, Peter Atkins, about how we know what we know.   Atkins is quoted as saying:

The scientific method is the only reliable method of achieving knowledge.

Well, first, it is worth saying that the scientific method does not produce reliable knowledge.  One of the two defining features of science is that scientific claims are defeasible:  they may be contested, questioned, challenged, and even overthrown, if the evidence warrants.   There is nothing inherently reliable about any scientific claim or theory, since new evidence may be found at any time to overthrow it.  The history of science is littered with examples.   (The second key feature is that anyone may do this contesting; science is not, or rather should  not be, a priesthood.)

What use are models?

What are models for?   Most developers and users of models, in my experience, seem to assume the answer to this question is obvious and thus never raise it.   In fact, modeling has many potential purposes, and some of these conflict with one another.   Some of the criticisms made of particular models arise from misunderstandings or misperceptions of the purposes of those models, and the modeling activities which led to them.
Liking cladistics as I do, I thought it useful to list all the potential purposes of models and modeling.   The only discussion of this topic that I know of is a brief one by game theorist Ariel Rubinstein, in an appendix to a book on modeling rational behaviour (Rubinstein 1998).  Rubinstein considers several alternative purposes for economic modeling, but ignores many others.   My list is as follows (to be expanded and annotated in due course):

  • 1. To better understand some real phenomena or existing system.   This is perhaps the most commonly perceived purpose of modeling, in the sciences and the social sciences.
  • 2. To predict (some properties of) some real phenomena or existing system.  A model aiming to predict some domain may be successful without aiding our understanding of the domain at all.  Isaac Newton’s model of the motion of planets, for example, was predictive but not explanatory.   I understand that physicist David Deutsch argues that predictive ability is not an end of scientific modeling but a means, since it is how we assess and compare alternative models of the same phenomena.    This is wrong on both counts:  prediction IS an end of much modeling activity (especially in business strategy and public policy domains), and it is not the only means we use to assess models.  Indeed, for many modeling activities, calibration and prediction are problematic, and so predictive capability may not even be possible as a form of model assessment.
  • 3. To manage or control (some properties of) some real phenomena or existing system.
  • 4. To better understand a model of some real phenomena or existing system.  Arguably, most of economic theorizing and modeling falls into this category, and Rubinstein’s preferred purpose is this type.   Macro-economic models, if they are calibrated at all, are calibrated against artificial, human-defined, variables such as employment, GDP and inflation, variables which may themselves bear a tenuous and dynamic relationship to any underlying economic reality.   Micro-economic models, if they are calibrated at all, are often calibrated with stylized facts, abstractions and simplifications of reality which economists have come to regard as representative of the domain in question.    In other words, economic models are not usually calibrated against reality directly, but against other models of reality.  Similarly, large parts of contemporary mathematical physics (such as string theory and brane theory) have no access to any physical phenomena other than via the mathematical model itself:  our only means of apprehension of vibrating strings in inaccessible dimensions beyond the four we live in, for instance, is through the mathematics of string theory.    In this light, it seems nonsense to talk about the effectiveness, reasonable or otherwise, of mathematics in modeling reality, since how could we tell?
  • 5. To predict (some properties of) a model of some real phenomena or existing system.
  • 6. To better understand, predict or manage some intended (not-yet-existing) artificial system, so as to guide its design and development.   Understanding a system that does not yet exist is qualitatively different to understanding an existing domain or system, because the possibility of calibration is often absent and because the model may act to define the limits and possibilities of subsequent design actions on the artificial system.  The use of speech act theory (a model of natural human language) for the design of artificial machine-to-machine languages, or the use of economic game theory (a mathematical model of a stylized conceptual model of particular micro-economic realities) for the design of online auction sites are examples here.   The modeling activity can even be performative, helping to create the reality it may purport to describe, as in the case of the Black-Scholes model of options pricing (a sketch of that pricing formula appears after this list).
  • 7. To provide a locus for discussion between relevant stakeholders in some business or public policy domain.  Most large-scale business planning models have this purpose within companies, particularly when multiple partners are involved.  Likewise, models of major public policy issues, such as epidemics, have this function.  In many complex domains, such as those in public health, models provide a means to tame and domesticate the complexity of the domain.  This helps stakeholders to jointly consider concepts, data, dynamics, policy options, and assessment of potential consequences of policy options,  all of which may need to be socially constructed. 
  • 8. To provide a means for identification, articulation and potentially resolution of trade-offs and their consequences in some business or public policy domain.   This is the case, for example, with models of public health risk assessment of chemicals or new products by environmental protection agencies, and models of epidemics deployed by government health authorities.
  • 9. To enable rigorous and justified thinking about the assumptions and their relationships to one another in modeling some domain.   Business planning models usually serve this purpose.   They may be used to inform actions, both to eliminate or mitigate negative consequences and to enhance positive consequences, as in retroflexive decision making.
  • 10. To enable a means of assessment of managerial competencies of the people undertaking the modeling activity. Investors in start-ups know that the business plans of the company founders are likely to be out of date very quickly.  The function of such business plans is not to model reality accurately, but to force rigorous thinking about the domain, and to provide a means by which potential investors can challenge the assumptions and thinking of management as a way of probing the managerial competence of those managers.    Business planning can thus be seen to be a form of epideictic argument, where arguments are assessed on their form rather than their content, as I have argued here.
  • 11. As a means of play, to enable the exercise of human intelligence, ingenuity and creativity, in developing and exploring the properties of models themselves.  This purpose is true of that human activity known as doing pure mathematics, and perhaps of most of that academic activity known as doing mathematical economics.   As I have argued before, mathematical economics is closer to theology than to the modeling undertaken in the natural sciences. I see nothing wrong with this being a purpose of modeling, although it would be nice if academic economists were honest enough to admit that their use of public funds was primarily in pursuit of private pleasures, and any wider social benefits from their modeling activities were incidental.
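As promised above, here is a minimal sketch of the Black-Scholes call-price formula in its standard textbook form (the parameter values are illustrative only, and this is not intended as production pricing code):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, rate, vol, maturity):
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

# Illustrative values: an at-the-money one-year call, 5% interest rate, 20% volatility.
print(black_scholes_call(spot=100.0, strike=100.0, rate=0.05, vol=0.2, maturity=1.0))
```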

POSTSCRIPT (Added 2011-06-17):  I have just seen Joshua Epstein’s 2008 discussion of the purposes of modeling in science and social science.   Epstein lists 17 reasons to build explicit models (in his words, although I have added the label “0” to his first reason):

0. Prediction
1. Explain (very different from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound (bracket) outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time. [Presumably, Epstein means “crisis-response options” here.]
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialog
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple).

These are at a lower level than my list, and I believe some of his items are the consequences of purposes rather than purposes themselves, at least for honest modelers (eg, #11, #12, #16).
References:
Joshua M Epstein [2008]: Why model? Keynote address to the Second World Congress on Social Simulation, George Mason University, USA.  Available here (PDF).
Robert E Marks [2007]:  Validating simulation models: a general framework and four applied examples. Computational Economics, 30 (3): 265-290.
David F Midgley, Robert E Marks and D Kunchamwar [2007]:  The building and assurance of agent-based models: an example and challenge to the field. Journal of Business Research, 60 (8): 884-893.
Robert Rosen [1985]: Anticipatory Systems. Pergamon Press.
Ariel Rubinstein [1998]: Modeling Bounded Rationality. Cambridge, MA, USA: MIT Press.  Zeuthen Lecture Book Series.
Ariel Rubinstein [2006]: Dilemmas of an economic theorist. Econometrica, 74 (4): 865-883.

The Matherati: Martin Harvey

Writing in Bertinoro, Italy, I have just learnt of the death earlier this year of J. Martin Harvey (1949-2011), a friend and former colleague, and one of Zimbabwe’s great mathematicians.

Martin was the first black student to gain First Class Honours in Mathematics from the University of Rhodesia (as it then was, now the University of Zimbabwe), the first person to gain a doctorate in mathematics from that University, and the first black lecturer appointed to teach mathematics there. (Indeed, his three degree certificates name three different universities – London, Rhodesia, and Zimbabwe – but all were granted by the same physical institution.) He later became an actuary, one of the few of any colour in Zimbabwe, but this was a career that lost value with the declining Zimbabwe dollar: actuarial science is about financial planning under uncertainty, and planning is impossible and pointless in an economy with hyper-inflation. He then became part of the great Zimbabwean diaspora, lecturing at the University of the Western Cape, in South Africa.

Martin was a true child of the sixties, with all the best qualities of that generation – open, generous, tolerant, curious, unpompous, democratic, sincere. He was a category theorist, and like most, a deep thinker. Martin was a superb jazz flautist, and on his travels would seek out jazz musicians to jam with. He wrote and recited poetry, and indeed could talk with knowledge on a thousand topics.

I once spent a month traveling the country with him on a market research project we did together, and his conversation was endlessly fascinating. Despite our very different childhoods, I recall a long, enjoyable evening with him in a shebeen in rural Zimbabwe talking about the various American and Japanese TV series we had both seen growing up (which I mentioned here). Among many memories, I recall him once arguing that a university in a Marxist state should have only two faculties: a Faculty for the Forces of Production, and a Faculty for the Relations of Production. There was great laughter as he insisted that the arts and humanities were essential to effective production, and so belonged in the former faculty; this argument was typical of his wit and erudition.

Our mutual friend, Professor Heneri Dzinotyiweyi, another great Zimbabwean mathematician, has a tribute here. I send my condolences to his wife, Winnie Harvey, and family. Vale, Martin. It has been an honour to have known you.

Bibliography:

J. M. Harvey [1977]: T0-separation in topological categories. Quaestiones Mathematicae, 2 (1-3): 177-190. An earlier version appeared in: Proceedings of the Second Symposium on Categorical Topology, University of Cape Town, Cape Town, RSA, 1976.
J. M. Harvey [1979]: Topological functors from factorizations. In:  Categorical Topology, Proceedings Berlin 1978.  Lecture Notes in Mathematics #719, pp. 102-111.  Berlin, West Germany:  Springer.
J. M. Harvey [1982]: A note on topological hom-functors. Proceedings of the American Mathematical Society, 85 (4): 517-519. Available here.
J. M. Harvey (pseudonymously, as F. ka Yevrah) [1983]: Topological functors and factorizations.  In: Vojtech Srmr (Editor): Proceedings, Winter Meeting 1982, Durand Mathematical Association, pp. 35-38.

J. M. Harvey [1983]: Categorical characterization of uniform hyperspaces. Mathematical Proceedings of the Cambridge Philosophical Society, 94: 229-233. Available here.

J. M. Harvey [1985]: Reflective subcategories. Illinois Journal of Mathematics, 29 (3): 365-369. Available here.