A commentator on Andrew Sullivan’s blog asks: Where is the Darwinian theory of evil? Because modern biologists this last century or so have been very concerned to avoid teleological arguments, modern biology still has only an impoverished theory of intentionality. Living organisms are focused, in the standard evolutionary account, on their own survival in the here-and-now, apparently going through these daily motions unwittingly so that those diaphanous creatures, genes, can achieve THEIR replicative goals. Without a rich and subtle theory of intentionality, I don’t believe one can compellingly explain complex, abstract human phenomena such as evil or altruism or art or religion.
Asking for a theory of intentions and intentionality does not a creationist make, despite the vitriol often deployed by supporters of evolution. One non-creationist evolutionary biologist who has long been a critic of this absence of a subtle theory of intentionality in biology is J. Scott Turner, whose theories derive from the homeostasis he has observed in natural ecologies. I previously discussed some of his ideas here.
Alfred Gell [1998]: Art and Agency: An Anthropological Theory. Oxford, UK: Clarendon Press.
J. Scott Turner [2007]: The Tinkerer’s Accomplice: How Design Emerges from Life Itself. Cambridge, MA, USA: Harvard University Press.
William Safire, a speech-writer for Richard Nixon and later an op-ed columnist with The New York Times, has just died. To his memory, I retrieve a statement from his novel Full Disclosure, which nicely expresses a different model of decision-making to that taught in Decision Theory classes:
The truth about big decisions, Ericson mused, was that they never marched through logical processes, staff systems, option papers, and yellow pads to a conclusion. No dramatic bottom lines, no Thurberian captains with their voices like thin ice breaking, announcing, “We’re going through!” The big ones were a matter of mental sets, predispositions, tendencies – taking a lifetime to determine – followed by the battering of circumstance, the search for a feeling of what was right – never concluded at some finite moment of conclusion, but in the recollection of having “known” what the decision would be some indeterminate time before. For weeks now, Ericson knew he had known he was ready to do what he had to do, if only Andy or somebody could be induced to come up with a solution that the President could then put through his Decision-Making Process. That made his decision a willingness not to obstruct, rather than a decision to go ahead, much like Truman’s unwillingness to stop the train of events that led to the dropping of the A-bomb – not on the same level of magnitude, but the same type of reluctant going-along. (pp. 491-492)
William Safire [1977]: Full Disclosure. Garden City, NY, USA: Doubleday and Company.
Recently, I posted a salute to Mervyn Pragnell, a logician who was present in the early days of computer science. I was reminded of the late Malcolm Rennie, the person who introduced me to formal logic, and whom I acknowledged here. Rennie was the most enthusiastic and inspiring lecturer I ever had, despite using no multi-media wizardry, usually not even an overhead projector. Indeed, he mostly just sat and spoke, moving his body as little as possible and writing only sparingly on the blackboard, because he was in constant pain from chronic arthritis. He was responsible for part of an Introduction to Formal Logic course I took in my first year (the other part was taken by Paul Thom, for whom I wrote an essay on the notion of entailment in a system of Peter Geach). The students in this course were a mix of first-year honours pure mathematicians and later-year philosophers (the vast majority), and most of the philosophers struggled with non-linguistic representations (ie, mathematical symbols). Despite the diversity, Rennie managed to teach to all of us, providing challenging questions and discussions with and for both groups. He was also a regular entrant in the competitions which used to run in the weekly Nation Review (and a fellow-admirer of the My Sunday cartoons of Victoria Roberts), and I recall one occasion when a student mentioned seeing his name as a competition winner, and the class was then diverted into an enjoyable discussion of tactics for these competitions.
Continue reading ‘Australian logic: a salute to Malcolm Rennie’
Ode I:XI of Horace, Tu ne quaesieris (translated by David West), ending with the advice, carpe diem.
Don’t you ask, Leuconoe – the gods do not wish it to be known –
what end they have given me or to you, and don’t meddle with
Babylonian horoscopes. How much better to accept whatever comes,
whether Jupiter gives us other winters or whether this is our last
now wearying out the Tyrrhenian sea on the pumice stones
opposing it. Be wise, strain the wine and cut back long hope
into a small space. Even as we speak, envious time
flies past. Harvest the day and leave as little as possible for tomorrow.
Horace [1997 AD/23 BCE]: The Complete Odes and Epodes. Translation by David West. Oxford, UK: Oxford University Press.
Thanks to Normblog, I have seen Terry Eagleton’s recent interview on matters of religion, in which he is reported as saying:
All performatives imply propositions. There’s no point in my operating a performative like, say, promising, or cursing, unless I have certain beliefs about the nature of reality: that there is indeed such an institution as promising, that I am able to perform it, and so on. The performative and the propositional work into each other.
Before commenting on the substance here (ie, religion), some words on Eagleton’s evident misunderstanding of speech act theory and the philosophy of language, a misunderstanding that should have been clear had he tested his words against his own experience of life. His statement concerns performatives – utterances which potentially change the state of the world by the very fact of their being uttered. Examples include promises, commands, threats, entreaties, prayers, various legal declarations (eg, that a certain couple are now wed), etc. But propositional statements (assertions that some description of the world is true) may also change the state of the world by the mere fact of being uttered: telling you that your house is on fire, for example, merely asserts a proposition, yet the telling may well change what you believe and what you then do.
Continue reading ‘Speech acts’
I had mentioned previously the unusually close political relationship between geographic neighbours Australia and New Zealand. But would you let your neighbours use your bathroom when theirs was broken? I guess you would if they were getting dressed to meet the US President for lunch:
Speaking at a lunch in New York, [Australian Prime Minister] Kevin Rudd revealed that he had woken on Wednesday morning, New York time, to find [New Zealand Prime Minister John] Key and his Foreign Affairs Minister, Murray McCully, lining up in their dressing gowns to use his bathroom at the residence of the Australian ambassador to the United Nations. It seems that, in a very Brian-like moment, the plumbing in the Kiwis’ hotel next door had failed.
Normblog has a regular feature, Writer’s Choice, where writers give their opinions of books which have influenced them. Seeing this led me recently to think of the mathematical ideas which have influenced my own thinking. In an earlier post, I wrote about the writers whose books (and teachers whose lectures) directly influenced me. I left many pure mathematicians and statisticians off that list because most mathematics and statistics I did not receive directly from their books, but indirectly, mediated through the textbooks and lectures of others. It is time to make amends.
Here then is a list of mathematical ideas which have had great influence on my thinking, along with their progenitors. Not all of these ideas have yet proved useful in any practical sense, either to me or to the world – but there is still lots of time. Some of these theories are very beautiful, and it is their elegance and beauty and profundity to which I respond. Others are counter-intuitive and thus thought-provoking, and I recall them for this reason.
- Euclid’s axiomatic treatment of (Euclidean) geometry
- The various laws of large numbers, first proven by Jacob Bernoulli (which give a rational justification for reasoning from samples to populations)
- The differential calculus of Isaac Newton and Gottfried Leibniz (the first formal treatment of change)
- The Identity of Leonhard Euler, e^(i*π) + 1 = 0, which mysteriously links the two transcendental numbers π and e, and the imaginary number i (the square root of minus one), with the identity of the addition operation (zero) and the identity of the multiplication operation (one)
- The epsilon-delta arguments for the calculus of Augustin Louis Cauchy and Karl Weierstrass
- The non-Euclidean geometries of Janos Bolyai, Nikolai Lobachevsky and Bernhard Riemann (which showed that 2-dimensional (or plane) geometry would be different if the surface it was done on was curved rather than flat – the arrival of post-modernism in mathematics)
- The diagonalization proof of Georg Cantor that the Real numbers are not countable (showing that there is more than one type of infinity) (a proof-method later adopted by Godel, mentioned below)
- The axioms for the natural numbers of Giuseppe Peano
- The space-filling curves of Giuseppe Peano and others (mapping the unit interval continuously to the unit square)
- The axiomatic treatments of geometry of Mario Pieri and David Hilbert (releasing pure mathematics from any necessary connection to the real-world)
- The algebraic topology of Henri Poincare and many others (associating algebraic structures to topological spaces)
- The paradox of set theory of Bertrand Russell (asking whether the set of all sets contains itself)
- The Fixed Point Theorem of Jan Brouwer (which, inter alia, has been used to prove that certain purely-artificial mathematical constructs called economies under some conditions contain equilibria)
- The theory of measure and integration of Henri Lebesgue
- The constructivism of Jan Brouwer (which taught us to think differently about mathematical knowledge)
- The statistical decision theory of Jerzy Neyman and Egon Pearson (which enabled us to bound the potential errors of statistical inference)
- The axioms for probability theory of Andrey Kolmogorov (which formalized one common method for representing uncertainty)
- The BHK axioms for intuitionistic logic, associated with the names of Jan Brouwer, Arend Heyting and Andrey Kolmogorov (which enabled the formal treatment of intuitionism)
- The incompleteness theorems of Kurt Godel (which identified some limits to mathematical knowledge)
- The theory of categories of Sam Eilenberg and Saunders Mac Lane (using pure mathematics to model what pure mathematicians do, and enabling concise, abstract and elegant presentations of mathematical knowledge)
- Possible-worlds semantics for modal logics (due to many people, but often named for Saul Kripke)
- The topos theory of Alexander Grothendieck (generalizing the category of sets)
- The proof by Paul Cohen of the logical independence of the Axiom of Choice from the Zermelo-Fraenkel axioms of Set Theory (which establishes Choice as one truly weird axiom!)
- The non-standard analysis of Abraham Robinson and the synthetic differential geometry of Anders Kock (which formalize infinitesimal arithmetic)
- The non-probabilistic representations of uncertainty of Arthur Dempster, Glenn Shafer and others (which provide formal representations of uncertainty without the weaknesses of probability theory)
- The information geometry of Shunichi Amari, Ole Barndorff-Nielsen, Nikolai Chentsov, Bradley Efron, and others (showing that the methods of statistical inference are not just ad hoc procedures)
- The robust statistical methods of Peter Huber and others
- The proof by Andrew Wiles of The Theorem Formerly Known as Fermat’s Last (which proof I don’t yet follow).
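Euler’s Identity, at least, can be checked numerically in a couple of lines of Python (a floating-point illustration only; the rounding error is tiny but non-zero):

```python
import cmath

# e^(i*pi) + 1 should be exactly zero; in floating point it is merely tiny.
value = cmath.exp(1j * cmath.pi) + 1
print(abs(value))  # on the order of 1e-16
```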
Some of these ideas are among the most sublime and beautiful thoughts of humankind. Not having an education which has equipped one to appreciate these ideas would be like being tone-deaf.
This week I was invited to participate as an expert in a Delphi study of The Future Internet, being undertaken by an EC-funded research project. One of the aims of the project is to identify multiple plausible future scenarios for the socio-economic role(s) of the Internet and related technologies, after which the project aims to reach a consensus on a small number of these scenarios. Although the documents I saw were unclear as to exactly which population this consensus was to be reached among, I presume it was intended to be a consensus of the participants in the Delphi study.
I have a profound philosophical disagreement with this objective, and indeed with most of the EC’s many efforts in standardization. Tim Berners-Lee invented the Hyper-Text Transfer Protocol (HTTP), for example, in order to enable physicists to publish their research documents to one another in a manner which enabled author-control of document appearance. Like most new technologies, HTTP was not invented for the many other uses to which it has since been put; indeed, many of these other applications have required hacks or fudges to HTTP in order to work. For example, because HTTP is stateless – the protocol itself retains no memory of one request when the next arrives – fudges such as cookies are needed to simulate a continuing session. If we had all been in consensual agreement with The Greatest Living Briton about the purposes of HTTP, we would have no e-commerce, no blogging, no social networking, no easy remote access to databases, no large-scale distributed collaborations, no easy action-at-a-distance, in short no transformation of our society and life these last two decades, just the broadcast publishing of text documents.
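The cookie fudge can be seen in miniature: the server hands its state to the client in a Set-Cookie header, and the client must echo it back on every later request. A sketch using Python’s standard library (the session token here is invented for illustration):

```python
from http.cookies import SimpleCookie

# HTTP itself forgets each client between requests; cookies bolt state on.
# A hypothetical response header sent by the server:
set_cookie_header = "session=abc123; Path=/"

jar = SimpleCookie()
jar.load(set_cookie_header)  # parse the server's Set-Cookie header

# The client must now send this value back with every subsequent request:
cookie_header = "; ".join(f"{name}={morsel.value}" for name, morsel in jar.items())
print(cookie_header)  # session=abc123
```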
Let us put aside this childish, warm-and-fuzzy, touchy-feely seeking after consensus. Our society benefits most from a diversity of opinions and strong disagreements, a hundred flowers blooming, a cacophony of voices in the words of Oliver Wendell Holmes. This is particularly true of opinions regarding the uses and applications of innovations. Yet the EC persists, in some recalcitrant chasing after elusive certainty, in trying to force us all into straitjackets of standards and uniform practice. These efforts are misguided and wrong-headed, and deserve to fail.
What are the odds, eh? On the same day that the Guardian publishes an obituary of theoretical computer scientist, Peter Landin (1930-2009), pioneer of the use of Alonzo Church’s lambda calculus as a formal semantics for computer programs, they also report that the Government is planning only to fund research which has relevance to the real-world. This is GREAT NEWS for philosophers and pure mathematicians!
What might have seemed, for example, mere pointless musings on the correct way to undertake reasoning – by Aristotle, by Islamic and Roman Catholic medieval theologians, by numerous English, Irish and American abstract mathematicians in the 19th century, by an entire generation of Polish logicians before World War II, and by those real-world men-of-action Gottlob Frege, Bertrand Russell, Ludwig Wittgenstein and Alonzo Church – turned out to be EXTREMELY USEFUL for the design and engineering of electronic computers. Despite Russell’s Zen-influenced personal motto – “Just do! Don’t think!” (later adopted by IBM) – his work turned out to be useful after all. I can see the British research funding agencies right now, using their sophisticated and proven prognostication procedures to calculate the society-wide economic and social benefits we should expect to see from our current research efforts over the next 2300 years – ie, the length of time that Aristotle’s research on logic took to be implemented in technology. Thank goodness our politicians have shown no myopic utilitarianism this last couple of centuries, eh what?!
All while this man apparently received no direct state or commercial research funding for his efforts as a computer pioneer, playing with “pointless” abstractions like the lambda calculus.
And Normblog also comments.
POSTSCRIPT (2014-02-16): And along comes The Cloud and ruins everything! Because the lower layers of the Cloud – the physical infrastructure, operating system, even low-level application software – are fungible, and dynamically so, the Cloud is effectively “dark” to its users beneath some level. Specifying and designing applications that will run over it, or systems that will access it, thus requires specification and design to be undertaken at high levels of abstraction. If all you can say about your new system is that in 10 years time it will grab some data from the NYSE, and nothing (yet) about the format of that data, then you need to speak in abstract generalities, not in specifics. It turns out the lambda calculus is just right for this task, and so London’s big banks have been recruiting logicians and formal methods people to spec & design their next-gen systems. You can blame those action men, Church and Russell.
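To give a flavour of what specifying against a “dark” substrate looks like, here is a toy sketch in Python using higher-order functions, the programming descendants of the lambda calculus; the data feed and its format are entirely hypothetical:

```python
from typing import Callable

# Specify the system abstractly: it is a function OF a data source, with no
# commitment yet to where that data lives or how it will be formatted.
def make_reporter(fetch: Callable[[], dict]) -> Callable[[], str]:
    return lambda: f"latest price: {fetch()['price']}"

# Later, bind in a concrete (here invented) feed without changing the spec:
report = make_reporter(lambda: {"ticker": "XYZ", "price": 42})
print(report())  # latest price: 42
```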
The Asian scholar Arthur Waley once wrote:
All argument consists in proceeding from the known to the unknown, in persuading people that the new thing you want them to think is not essentially different from or at any rate is not inconsistent with the old things they think already. This is the method of science, just as much as it is the method of rhetoric and poetry. But, as between science and forms of appeal such as poetry, there is a great difference in the nature of the link that joins the new to the old. Science shows that the new follows from the old according to the same principles that built up the old. “If you don’t accept what I now ask you to believe,” the scientist says, “you have no right to go on believing what you believe already.” The link used by science is a logical one. Poetry and rhetoric are also concerned with bridging the gap between the new and the old; but they do not need to build a formal bridge. What they fling across the intervening space is a mere filament such as no sober foot would dare to tread. But it is not with the sober that poetry and eloquence have to deal. Their te, their essential power, consists in so intoxicating us that, endowed with the recklessness of drunken men, we dance across the chasm, hardly aware how we reached the other side. (Waley 1934, Introduction, pp. 96-97)
Arthur Waley [1934]: The Way and its Power: A Study of the Tao Te Ching and its Place in Chinese Thought. London, UK: George Allen and Unwin.