Chance would be a fine thing

Music critic Alex Ross discusses John Cage’s music in a recent article in The New Yorker.    Ross goes some way before he trips up, using those dreaded  – and completely inappropriate – words “randomness” and “chance”:

Later in the forties, he [Cage] laid out “gamuts” – gridlike arrays of preset sounds – trying to go from one to the next without consciously shaping the outcome.  He read widely in South Asian and East Asian thought, his readings guided by the young Indian musician Gita Sarabhai and, later, by the Zen scholar Daisetz Suzuki.  Sarabhai supplied him with a pivotal formulation of music’s purpose:  “to sober and quiet the mind, thus rendering it susceptible to divine influences.”  Cage also looked to Meister Eckhart and Thomas Aquinas, finding another motto in Aquinas’s declaration that “art imitates nature in the manner of its operation.”
. . .
In 1951, writing the closing movement of his Concerto for Prepared Piano, he finally let nature run its course, flipping coins and consulting the I Ching to determine which elements of his charts should come next.   “Music of Changes,” a forty-three-minute piece of solo piano, was written entirely in this manner, the labor-intensive process consuming most of a year.
As randomness took over, so did noise.  “Imaginary Landscape No. 4” employs twelve radios, whose tuning, [page-break] volume, and tone are governed by chance operations.  [pages 57-58]

That even such a sympathetic, literate, and erudite observer as Alex Ross should misconstrue what Cage was doing with the I Ching as based on chance events is disappointing.  But, as I’ve argued before about Cage’s music, the belief that the material world is all there is is so deeply entrenched in contemporary western culture that westerners seem rarely able to conceive of other ways of being.  Tossing coins may seem to be a chance operation to someone unversed in eastern philosophy, but was surely not to John Cage.
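For readers unfamiliar with the mechanics behind Ross’s description, the traditional three-coin oracle is easy to state: three coins are tossed six times, each toss yielding one line of a hexagram, and the resulting hexagram then serves as an index, in Cage’s case into his charts of sounds and durations.  Here is a minimal sketch in Python, purely for illustration: the chart contents are invented placeholders, the mapping to the traditional King Wen hexagram numbering is omitted, and the pseudo-random generator used here is, of course, exactly the “chance” reading of the procedure that this post disputes.

```python
import random

def toss_line():
    # Traditional three-coin method: heads counts 3, tails counts 2.
    # The sum of three coins is 6, 7, 8 or 9; an odd sum gives a yang
    # (solid) line, an even sum a yin (broken) line.
    total = sum(random.choice([2, 3]) for _ in range(3))
    return 1 if total % 2 == 1 else 0   # 1 = yang, 0 = yin

def cast_hexagram():
    # Six tosses, built from the bottom line upward, select one of 64 hexagrams,
    # read off here as a simple binary index rather than a King Wen number.
    lines = [toss_line() for _ in range(6)]
    return sum(bit << i for i, bit in enumerate(lines))   # 0..63

# Hypothetical chart of 64 sound-events standing in for Cage's gamuts.
chart = [f"sound-event {n}" for n in range(64)]
print(chart[cast_hexagram()])
```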
References:
Alex Ross [2010]:  Searching for silence.  John Cage’s art of noise.   The New Yorker, 4 October 2010, pp. 52-61.
James Pritchett [1993]:  The Music of John Cage.  Cambridge, UK:  Cambridge University Press.
Here are other posts on music and art.

In defence of futures thinking

Norm at Normblog has a post defending theology as a legitimate area of academic inquiry, after an attack on theology by Oliver Kamm.  (Since OK’s post is behind a paywall, I have not read it, so my comments here may be awry with respect to that post.)  Norm argues, very correctly, that it is legitimate for theology, considered as a branch of philosophy, to reflect, inter alia, on the properties of entities whose existence has not yet been proven.  In strong support of Norm, let me add:  Not just in philosophy!
In business strategy, good decision-making requires consideration of the consequences of potential actions, which in turn requires the consideration of the potential actions of other actors and stakeholders in response to the first set of actions.  These actors may include entities whose existence is not yet known or even suspected, for example, future competitors to a product whose launch creates a new product category.   Why, there’s even a whole branch of strategy analysis devoted to scenario planning, a discipline that began in the military analysis of alternative post-nuclear worlds, and whose very essence involves the creation of imagined futures (for forecasting and prognosis) and/or imagined pasts (for diagnosis and analysis).   Every good air-crash investigation, medical diagnosis, and police homicide investigation, for instance, involves the creation of imagined alternative pasts, and often the creation of imaginary entities in those imagined pasts, whose fictional attributes we may explore at length.   Arguably, in one widespread view of the philosophy of mathematics, pure mathematicians do nothing but explore the attributes of entities without material existence.
And not just in business, medicine, the military, and the professions.   In computer software engineering, for example, no new software system development is complete without due and rigorous consideration of the likely actions of users and other actors with and on the system.   Users and actors here include those who are the intended target users of the system, as well as malevolent or whimsical or poorly-behaved or bug-ridden others, both human and virtual, not all of whom may even exist when the system is first developed or put into production.   If creative articulation and manipulation of imaginary futures (possible or impossible) were outlawed, we would have not only no literary fiction and little poetry, but also few working software systems.

At Swim-two-birds


Brian Dillon reviews a British touring exhibition of the art of John Cage, currently at the Baltic Mill Gateshead.
Two quibbles:  First, someone who compares Cage’s 4′ 33” to a blank gallery wall hasn’t actually listened to the piece.  If Dillon had compared it to a glass window in the gallery wall allowing a view of the outside of the gallery, then he would have made some sense.  But Cage’s composition is not about silence, or even pure sound, for either of which a blank gallery wall might be an appropriate visual representation.  The composition is about ambient sound, and about what sounds count as music in our culture.
Second, Dillon rightly mentions that the procedures used by Cage for musical composition from 1950 onwards (and later for poetry and visual art) were based on the Taoist I Ching.  But he wrongly describes these procedures as being based on “the philosophy of chance.”   Although widespread, this view is nonsense, accurate neither as to what Cage was doing, nor even as to what he may have thought he was doing.   Anyone subscribing to the Taoist philosophy underlying them understands the I Ching procedures as exemplifying and manifesting hidden causal mechanisms, not chance.   The point of the underlying philosophy is that the random-looking events that result from the procedures express something unique, time-dependent, and personal to the specific person invoking the I Ching at the particular time they invoke it.  So, to a Taoist, the resulting music or art is not “chance” or “random” or “aleatoric” at all, but profoundly deterministic, being the necessary consequential expression of deep, synchronistic, spiritual forces.  I don’t know if Cage was himself a Taoist (I’m not sure that anyone does), but to an adherent of Taoist philosophy Cage’s own beliefs or attitudes are irrelevant to the workings of these forces.  I sense that Cage had sufficient understanding of Taoist and Zen ideas (Zen being the Japanese descendant of Chinese Chan Buddhism, itself deeply shaped by Taoism) to recognize this particular feature:  that to an adherent of the philosophy the beliefs of the invoker of the procedures are irrelevant.
In my experience, the idea that the I Ching is a deterministic process is a hard one for many modern westerners to understand, let alone to accept, so entrenched is the prevailing western view that the material realm is all there is.  This entrenched view is only historically recent in the west:  Isaac Newton, for example, was a believer in the existence of cosmic spiritual forces, and thought he had found the laws which governed their operation.   Conversely, many easterners in my experience have difficulty with notions of uncertainty and chance; if all events are subject to hidden causal forces, the concepts of randomness and of alternative possible futures make no sense.  My experience here includes making presentations and leading discussions on scenario analyses with senior managers of Asian multinationals.
We are two birds swimming, each circling the pond, warily, neither understanding the other, neither flying away.
References:
Kyle Gann [2010]: No Such Thing as Silence.  John Cage’s 4′ 33”.  New Haven, CT, USA:  Yale University Press.
James Pritchett [1993]:  The Music of John Cage.  Cambridge, UK:  Cambridge University Press.

Bayesian statistics

One of the mysteries to anyone trained in the frequentist hypothesis-testing paradigm of statistics, as I was, and still adhering to it, as I do, is how Bayesian approaches seem to have taken the academy by storm.   One wonders, first, how a theory based – and based explicitly – on a measure of uncertainty defined in terms of subjective personal beliefs could be considered even for a moment for an inter-subjective (ie, social) activity such as Science.

One wonders, second, how a theory justified by appeals to such socially-constructed, culturally-specific, and readily-contestable activities as gambling (ie, so-called Dutch-book arguments) could be taken seriously as the basis for an activity (Science) aiming for, and claiming to achieve, universal validity.   One wonders, third, how the fact that such justifications, even if gambling presents no moral, philosophical or other qualms, require infinite sequences of gambles is not a little troubling for all of us living in this finite world.  (You tell me you are certain to beat me if we play an infinite sequence of gambles?  Then let me tell you that I have a religion promising eternal life that may interest you in turn.)
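For readers who have not met a Dutch-book argument, the arithmetic behind the simplest case takes only a few lines.  Suppose an agent’s degrees of belief violate additivity, say P(A) = 0.6 and P(not-A) = 0.6; a bookmaker sells the agent, at those prices, a ticket paying 1 if A occurs and a ticket paying 1 if not-A occurs.  (This is a standard textbook example, not tied to any particular Bayesian author.)

```latex
% Cost to the agent of the two tickets, priced at her stated degrees of belief:
\[
\text{cost} = 0.6 + 0.6 = 1.2 .
\]
% Exactly one of A and \neg A occurs, so the agent collects exactly 1 whatever happens:
\[
\text{net outcome} = 1 - 1.2 = -0.2 \quad \text{with certainty.}
\]
% Coherent (finitely additive) degrees of belief admit no such guaranteed-loss book;
% extending the argument beyond finite additivity is where infinite families of bets enter.
```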

One wonders, fourth, where are recorded all the prior distributions of beliefs which this theory requires investigators to articulate before doing research.  Surely someone must be writing them down, so that we consumers of science can know that our researchers are honest, and hold them to potential account.   That there is such a disconnect between what Bayesian theorists say researchers do and what those researchers demonstrably do should trouble anyone contemplating a choice of statistical paradigms, surely.  Finally, one wonders how a theory that requires non-zero probabilities to be allocated to models of which the investigators have not yet heard, or which no one has yet articulated, in order for those models to be tested, passes muster at the statistical methodology corral.

To my mind, Bayesianism is a theory from some other world – infinite gambles, imagined prior distributions, models that disregard time or requirements for constructability,  unrealistic abstractions from actual scientific practice – not from our own.

So, how could the Bayesians make as much headway as they have these last six decades? Perhaps it is due to an inherent pragmatism of statisticians – using whatever techniques work, without much regard as to their underlying philosophy or incoherence therein.  Or perhaps the battle between the two schools of thought has simply been asymmetric:  the Bayesians being more determined to prevail (in my personal experience, to the point of cultism and personal vitriol) than the adherents of frequentism.  Greg Wilson’s 2001 PhD thesis explored this question, although without finding definitive answers.

Now, Andrew Gelman and the indefatigable Cosma Shalizi have written a superb paper, entitled “Philosophy and the practice of Bayesian statistics”.  Their paper presents another possible reason for the rise of Bayesian methods:  that Bayesianism, when used in actual practice, is most often a form of hypothesis-testing, and thus not as untethered to reality as the pure theory would suggest.  Their abstract:

A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism.  We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science.

Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
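A concrete illustration of the model checking Gelman and Shalizi have in mind is the posterior predictive check: simulate replicate data sets from the fitted model and ask whether a chosen test statistic of the observed data would be surprising among them.  The sketch below is my own toy example (Python with numpy, a conjugate normal model, invented data), not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=50)   # observed data (invented for illustration)

# Toy model: y_i ~ Normal(mu, 1), with prior mu ~ Normal(0, 10^2).
# The conjugate posterior for mu is Normal(post_mean, post_var).
n, sigma2, prior_var = len(y), 1.0, 100.0
post_var = 1.0 / (n / sigma2 + 1.0 / prior_var)
post_mean = post_var * (y.sum() / sigma2)

# Posterior predictive check: compare an observed test statistic (here the sample
# maximum) with its distribution under data replicated from the fitted model.
T_obs = y.max()
T_rep = np.empty(4000)
for i in range(T_rep.size):
    mu = rng.normal(post_mean, np.sqrt(post_var))      # draw mu from the posterior
    y_rep = rng.normal(mu, np.sqrt(sigma2), size=n)    # replicate a data set
    T_rep[i] = y_rep.max()

p_value = np.mean(T_rep >= T_obs)   # posterior predictive p-value
print(f"posterior predictive p-value for max(y): {p_value:.3f}")
```

An extreme p-value here would signal model misfit, and the decision to revise the model in response is the hypothetico-deductive step the paper emphasises; nothing in the Bayesian updating machinery itself forces it.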

References:
Andrew Gelman and Cosma Rohilla Shalizi [2010]:  Philosophy and the practice of Bayesian statistics.  Available from arXiv.  Blog post here.

Gregory D. Wilson [2001]:   Articulation Theory and Disciplinary Change:  Unpacking the Bayesian-Frequentist Paradigm Conflict in Statistical Science.  PhD Thesis,  Rhetoric and Professional Communication Programme, New Mexico State University.  Las Cruces, NM, USA.  July 2001.

Gray on Akerlof and Shiller

Philosopher John Gray has a review in the LRB of Akerlof and Shiller’s new book on the errors of mainstream economics, a review which mentions the sadly-neglected economist George Shackle.  Shackle, unlike most academic economists, actually worked in industry and Government and had made investment decisions, and knew whereof he wrote.

If Akerlof and Shiller’s grip on the history of economic thought is shaky, they also fail to grasp why Keynes rejected the idea that markets are self-stabilising. Throughout Animal Spirits they portray him as reintegrating psychology with economic theory. No doubt this was one of Keynes’s goals, but it is not his most fundamental revision of economic orthodoxy. Among his other accomplishments he was the author of A Treatise on Probability (1921), in which he tried to develop a theory of ‘rational degrees of belief’. By his own account he failed, and in his canonical General Theory of Employment, Interest and Money (1936) he concluded that there was no way anyone could make forecasts. Future interest rates and prices, new inventions and the likelihood of a European war cannot be predicted: there is no ‘basis on which to form any calculable probability whatever. We simply do not know!’ For Keynes, markets are unstable less because they are driven by emotion than because the future is unknowable. To suggest that the source of market volatility is unreason is to imply that if people were fully rational markets could be stable. But even if people were affectless calculating machines they would still be ignorant of the future, and markets would still be volatile. The root cause of market instability is the insuperable limitation of human knowledge.

Great mathematical ideas

Normblog has a regular feature, Writer’s Choice, where writers give their opinions of books which have influenced them.   Seeing this led me recently to think of the mathematical ideas which have influenced my own thinking.   In an earlier post, I wrote about the writers whose books (and teachers whose lectures) directly influenced me.  I left many pure mathematicians and statisticians off that list because I did not receive most of my mathematics and statistics directly from their books, but indirectly, mediated through the textbooks and lectures of others.  It is time to make amends.
Here then is a list of mathematical ideas which have had great influence on my thinking, along with their progenitors.  Not all of these ideas have yet proved useful in any practical sense, either to me or to the world – but there is still lots of time.   Some of these theories are very beautiful, and it is their elegance and beauty and profundity to which I respond.  Others are counter-intuitive and thus thought-provoking, and I recall them for this reason.

  • Euclid’s axiomatic treatment of (Euclidean) geometry
  • The various laws of large numbers, first proven by Jacob Bernoulli (which give a rational justification for reasoning from samples to populations)
  • The differential calculus of Isaac Newton and Gottfried Leibniz (the first formal treatment of change)
  • The Identity of Leonhard Euler:  e^(iπ) + 1 = 0, which mysteriously links the two transcendental numbers π and e, the imaginary unit i (the square root of minus one), the identity of the addition operation (zero), and the identity of the multiplication operation (1)  (a short derivation appears after this list)
  • The epsilon-delta arguments for the calculus of Augustin Louis Cauchy and Karl Weierstrass
  • The non-Euclidean geometries of Janos Bolyai, Nikolai Lobachevsky and Bernhard Riemann (which showed that 2-dimensional (or plane) geometry would be different if the surface it was done on was curved rather than flat – the arrival of post-modernism in mathematics)
  • The diagonalization proof of Georg Cantor that the real numbers are not countable (showing that there is more than one type of infinity) (a proof-method later adopted by Gödel, mentioned below)
  • The axioms for the natural numbers of Giuseppe Peano
  • The space-filling curves of Giuseppe Peano and others (mapping the unit interval continuously to the unit square)
  • The axiomatic treatments of geometry of Mario Pieri and David Hilbert (releasing pure mathematics from any necessary connection to the real world)
  • The algebraic topology of Henri Poincaré and many others (associating algebraic structures to topological spaces)
  • The paradox of set theory of Bertrand Russell (asking whether the set of all sets contains itself)
  • The Fixed Point Theorem of Jan Brouwer (which, inter alia, has been used to prove that certain purely-artificial mathematical constructs called economies under some conditions contain equilibria)
  • The theory of measure and integration of Henri Lebesgue
  • The constructivism of Jan Brouwer (which taught us to think differently about mathematical knowledge)
  • The statistical decision theory of Jerzy Neyman and Egon Pearson (which enabled us to bound the potential errors of statistical inference)
  • The axioms for probability theory of Andrey Kolmogorov (which formalized one common method for representing uncertainty)
  • The BHK interpretation of intuitionistic logic, associated with the names of Jan Brouwer, Arend Heyting and Andrey Kolmogorov (which enabled the formal treatment of intuitionism)
  • The incompleteness theorems of Kurt Gödel (which identified some limits to mathematical knowledge)
  • The theory of categories of Sam Eilenberg and Saunders Mac Lane (using pure mathematics to model what pure mathematicians do, and enabling concise, abstract and elegant presentations of mathematical knowledge)
  • Possible-worlds semantics for modal logics (due to many people, but often named for Saul Kripke)
  • The topos theory of Alexander Grothendieck (generalizing the category of sets)
  • The proof by Paul Cohen of the logical independence of the Axiom of Choice from the Zermelo-Fraenkel axioms of Set Theory (which establishes Choice as one truly weird axiom!)
  • The non-standard analysis of Abraham Robinson and the synthetic differential geometry of Anders Kock (which formalize infinitesimal arithmetic)
  • The non-probabilistic representations of uncertainty of Arthur Dempster, Glenn Shafer and others (which provide formal representations of uncertainty without the weaknesses of probability theory)
  • The information geometry of Shunichi Amari, Ole Barndorff-Nielsen, Nikolai Chentsov, Bradley Efron, and others (showing that the methods of statistical inference are not just ad hoc procedures)
  • The robust statistical methods of Peter Huber and others
  • The proof by Andrew Wiles of The Theorem Formerly Known as Fermat’s Last (which proof I don’t yet follow).
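As a small taste of the elegance referred to above, Euler’s identity in the list follows in one step from Euler’s formula, which can itself be proved from the power series for the exponential, sine and cosine functions:

```latex
% Euler's formula, valid for every real x:
\[
e^{ix} = \cos x + i \sin x .
\]
% Setting x = \pi, and using \cos\pi = -1 and \sin\pi = 0:
\[
e^{i\pi} = -1 + 0 \cdot i , \qquad \text{hence} \qquad e^{i\pi} + 1 = 0 .
\]
```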

Some of these ideas are among the most sublime and beautiful thoughts of humankind.  Not having an education which has equipped one to appreciate these ideas would be like being tone-deaf.

Epistemic modal logic at the CIA

Jim Angleton
A recent issue of the TLS ran a review by Terence Hawkes of the biography by Michael Holzman of Jim Angleton, head of counter-intelligence at the CIA.  Holzman’s book, although mostly written from secondary sources, is a fine summary of Angleton’s life and career.  It is marred, however, by (a) Holzman’s annoying (academic) habit of quoting something or somebody and then repeating, verbatim, key words from that very quotation in the following paragraph, as if we readers were idiots, unable to read for ourselves or contemplate an idea for longer than a paragraph; and (b) a casual sloppiness about dates.  Call me old-fashioned, but I think a historian should not simply say “in May that year” when the last mention of the specific year was some tens of pages and several anecdotes or set-pieces back.   No doubt Holzman always knows which of the 71 years of Angleton’s life and the various ones before or since he is currently referring to, but this is rarely obvious to the reader of this book, even to a careful reader.   In view of the subject matter and Holzman’s theme (that Angleton’s training in so-called practical criticism was invaluable to his career in counter-intelligence), one has to wonder if such sloppiness is deliberate.
Holzman also does not tell us much about the actual theory and practice of counter-intelligence, despite the title and the claims he makes up front.   In particular, his treatment of the Nosenko case is misleading, partly because he believes the official CIA line and partly because he does not refer to the most recent publication on the case, namely the book by Bagley.  Hawkes seems to have followed Holzman up this garden path.
Unlike literary criticism, espionage is not only about what to believe, it is also about what to do.  It may be the case that Yuri Nosenko was a genuine Soviet defector, as Holzman claims CIA eventually came to believe.  Others closely involved in the case, such as retired CIA agent Tennent Bagley (2007), have argued compellingly that Nosenko was in fact a KGB plant, not a genuine defector.
Whether or not Nosenko was a genuine defector, and whether or not CIA leadership believed him to be a genuine defector, CIA would also need to concern itself with what impact a revelation of their beliefs would have on KGB, as I have argued before, and thus on what proposition to seek to have KGB believe about CIA’s beliefs in the matter.   If CIA were seen by KGB to accept Nosenko’s testimony (inconsistent and incomplete, by his own admission) too quickly, KGB may not accept as genuine any CIA profession of belief in his bona fides.  So, some delay and equivocation in decision-making was called for.  If CIA professed to believe that Nosenko was a plant or allowed KGB to conclude that CIA believed Nosenko to be a plant, then CIA risked signalling to KGB that they (CIA) were also rejecting all the testimony he arrived in the west with, which included detailed protestations of KGB non-involvement in the assassination of President John F. Kennedy.    Whether or not CIA believed that KGB were involved in that assassination, they may or may not have wished to let KGB know what they believed, at least at that particular moment.  In any case, perhaps a clever (and cunning) CIA would seek to have KGB believe that Nosenko was believed, in order to see how the game played itself out.
So, one possible course of action for CIA was to signal to KGB that they accepted Nosenko as a genuine defector, but to signal also that they came to this decision only slowly and painfully.   How better to do this than to interrogate the man at length and (allegedly) harshly, and then, after years of apparent indecision and multiple internal investigations (some of which may even have been genuine), decide to accept him publicly as a true defector.   This public acceptance – consultancy fees, letters, flags, medals, and all – even now, four decades later, may have absolutely no connection whatever with what CIA leadership really believed then or, indeed, what they believe now.
It’s not only litcrit that gets an outing in these events.  If any philosopher reading this wonders about the practical usefulness of dynamic epistemic modal logic, wonder no more.
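For any such philosopher, here is a rough sketch (my own illustrative rendering, not anything drawn from the CIA record) of how the nested beliefs in the Nosenko reasoning above might be written down.  Let B_C and B_K be belief operators for CIA and KGB respectively, and let g stand for “Nosenko is a genuine defector”:

```latex
% First-order and nested beliefs at issue in the case:
\[
B_{C}\, g , \qquad B_{K} B_{C}\, g , \qquad B_{C} B_{K} B_{C}\, g .
\]
% Dynamic epistemic logic adds action modalities: [a]\varphi reads
% "after action a, \varphi holds".  The strategy sketched above is to find a
% public action a (the slow, harsh interrogation followed by public acceptance)
% such that
\[
[a]\, B_{K} B_{C}\, g
\quad \text{holds, whether or not} \quad
B_{C}\, g
\quad \text{does.}
\]
```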
References:
Tennent H. Bagley [2007]:  Spy Wars.  New Haven, CT, USA:  Yale University Press.
Terence Hawkes [2009]: “William Empson’s Influence on the CIA.”  Times Literary Supplement, 2009-06-10.
Michael Holzman [2008]:  James Jesus Angleton, the CIA and the Craft of Counterintelligence.  Boston, MA, USA: University of Massachusetts Press.

Black swans of trespass

Nassim Taleb has an article in the FinTimes presenting ten principles he believes would reduce the occurrence of rare, catastrophic events (events he has taken to calling black swans).  Many of his principles are not actionable, and several are ill-advised.  Take, for instance, # 3:

3. People who were driving a school bus blindfolded (and crashed it) should never be given a new bus.

If this principle were applied, the bus would have no drivers at all.   All of us are driving blindfolded, with our only guide to the road ahead being what we can apprehend from the rear-view mirror.  Past performance, as they say, is no guide to the future direction of the road.
Or take #6:

6. Do not give children sticks of dynamite, even if they come with a warning.  Complex derivatives need to be banned because nobody understands them and few are rational enough to know it. Citizens must be protected from themselves, from bankers selling them “hedging” products, and from gullible regulators who listen to economic theorists.

Well, what precisely is “complex”?  Surely, Dr Taleb is not suggesting the banning of plain futures and options, as these serve a valuable function in our economy (enabling the parceling and trading of risk).  But even these are too complex for some people (such as those farmers, dentists, and local government officials currently with burnt fingers), and surely such people need protection from themselves much more so than the quant-jocks and their masters on Wall Street.  So, where would one draw the line between allowed and disallowed derivatives?
Once again, it appears there has been a misunderstanding of the cause of the recent problems.  It is not complex derivatives per se that are the problem, but the fact that many of these financial instruments have, unusually, been highly correlated.  Thus, the failure of one instrument (and subsequently, one bank) brings down all the others with it – there is a systemic risk as well as a participant risk involved in their use.   Dr Taleb, who has long been a critic of the unthinking use of Gaussian models in finance, I am sure realises this.
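The point about correlation can be made with a toy simulation: hold each instrument’s marginal default probability fixed, vary only the correlation between them (via a single systemic factor, in the spirit of a one-factor Gaussian copula), and watch the probability of a large joint loss grow.  The sketch below is mine, with all numbers invented for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def prob_big_loss(n_instruments=100, p_default=0.02, rho=0.0,
                  loss_threshold=8, n_sims=100_000):
    """Probability that at least `loss_threshold` of `n_instruments` default.
    Each instrument defaults when a standard-normal latent variable falls below
    the p_default quantile; `rho` is the share of that variable's variance
    driven by one common systemic factor, so marginal default risk is unchanged."""
    z_default = norm.ppf(p_default)
    systemic = rng.standard_normal((n_sims, 1))
    idiosyncratic = rng.standard_normal((n_sims, n_instruments))
    latent = np.sqrt(rho) * systemic + np.sqrt(1.0 - rho) * idiosyncratic
    n_defaults = (latent < z_default).sum(axis=1)
    return (n_defaults >= loss_threshold).mean()

for rho in (0.0, 0.3, 0.6):
    print(f"rho = {rho:.1f}:  P(at least 8 of 100 default) ~ {prob_big_loss(rho=rho):.4f}")
```

The marginal risk of each instrument is identical in all three cases; only the dependence changes, and with it the weight of the tail.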

The decade around 1664

We noted before that one consequence of the rise of coffee-houses in 17th-century Europe was the development of probability theory as a mathematical treatment of reasoning with uncertainty.   Ian Hacking’s history of the emergence of probabilistic ideas in Europe has a nice articulation of the key events, all of which took place a decade either side of 1664:

  • 1654:  Pascal wrote to Fermat with his ideas about probability
  • 1657: Huygens wrote the first textbook on probability to be published, and Pascal was the first to apply probability ideas to problems other than games of chance
  • 1662: The Port Royal Logic was the first publication to mention numerical measurements of something called probability, and Leibniz applied probability to problems in legal reasoning
  • 1662:  London merchant John Graunt published the first set of statistics drawn from records of mortality
  • Late 1660s:  Probability theory was used by John Hudde and by Johan de Witt in Amsterdam to provide a sound basis for reasoning about annuities (Hacking 1975, p.11).

Developments in the use of symbolic algebra in Italy in the 16th century provided the technical basis upon which a formal theory of uncertainty could be erected.  And coffee-houses certainly aided the dissemination of probabilistic ideas, both in spoken and written form.   Coffee-houses may even have aided the creation of these ideas – new mathematical concepts are only rarely created by a solitary person working alone in a garret, but usually arise instead through conversation and debate among people each having only partial or half-formed ideas.
However, one aspect of the rise of probability in the mid 17th century is still a mystery to me:  what event or phenomenon led so many people across Europe to be interested in reasoning about uncertainty at this time?  Although 1664 saw the establishment of a famous brewery in Strasbourg, I suspect the main motivation was the prevalence of bubonic plague in Europe.   Although plague had been around for many centuries, the Catholic vs. Protestant religious wars of the previous 150 years had, I believe, led many intelligent people to abandon or lessen their faith in religious explanations of uncertain phenomena.   René Descartes, for example, was led to cogito, ergo sum when seeking beliefs which peoples of all faiths or none could agree on.  Without religion, alternative models to explain or predict human deaths, morbidity and natural disasters were required.   The insurance of ocean-going vessels provided a financial incentive for finding good predictive models of such events.
Hacking notes (pp. 4-5) that, historically, probability theory has mostly developed in response to problems about uncertain reasoning in other domains:  In the 17th century, these were problems in insurance and annuities, in the 18th, astronomy, the 19th, biometrics and statistical mechanics, and the early 20th, agricultural experiments.  For more on the connection between statistical theory and experiments in agriculture, see Hogben (1957).  For the relationship of 20th-century probability theory to statistical physics, see von Plato (1994).
POSTSCRIPT (ADDED 2011-04-25):
There appear to have been major outbreaks of bubonic plague in Seville, Spain (1647-1652), in Naples (1656), in Amsterdam, Holland (1663-1664), in Hamburg (1663), in London, England (1665-1666), and in France (1668).   The organist Heinrich Scheidemann, teacher of Johann Reincken, for example, died during the outbreak in Hamburg in 1663.   Wikipedia now has a listing of global epidemics (albeit incomplete).
 
POSTSCRIPT (ADDED 2018-01-19):
The number 1664 in Roman numerals is MDCLXIV, which uses every Roman numeric symbol precisely once.  The number 1666 has the same property, and for that number, the Roman symbols are in decreasing order.
 
References:
Ian Hacking [1975]:  The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference.  London, UK: Cambridge University Press.
Lancelot Hogben [1957]: Statistical Theory. W. W. Norton.
J. von Plato [1994]:  Creating Modern Probability:  Its Mathematics, Physics and Philosophy in Historical Perspective.  Cambridge Studies in Probability, Induction, and Decision Theory.  Cambridge, UK:  Cambridge University Press.

Evaluating prophecy

With the mostly-unforeseen global financial crisis uppermost in our minds, I am led to consider a question that I have pondered for some time:   How should we assess forecasts and prophecies?   Within the branch of philosophy known as argumentation, a lot of attention has been paid to the conditions under which a rational decision-maker would accept particular types of argument.

For example, although it is logically invalid to accept an argument only on the grounds that the person making it is an authority on the subject, our legal system does this all the time.  Indeed,  the philosopher Charles Willard has argued that modern society could not function without most of us accepting arguments-from-authority most of the time, and it is usually rational to do so.  Accordingly, philosophers of argumentation have investigated the conditions under which a rational person would accept or reject such arguments.   Douglas Walton (1996, pp. 64-67) presents an argumentation scheme for such acceptance/rejection decisions, the Argument Scheme for Arguments from Expert Opinion, as follows:

  • Assume E is an expert in domain D.
  • E asserts that statement A is known to be true.
  • A is within D.

Therefore, a decision-maker may plausibly take A to be true, unless one or more of the following Critical Questions (CQ) is answered in the negative:

  • CQ1:  Is E a genuine expert in D?
  • CQ2:  Did E really assert A?
  • CQ3:  Is A relevant to domain D?
  • CQ4:  Is A consistent with what other experts in D say?
  • CQ5:  Is A consistent with known evidence in D?

One could add further questions to this list, for example:

  • CQ6:  Is E’s opinion offered without regard to any reward or benefit upon statement A being taken to be true by the decision-maker?

Walton himself presents some further critical questions first proposed by Augustus De Morgan in 1847 to deal with cases under CQ2 where the expert’s opinion is presented second-hand, or in edited form, or along with the opinions of others.
Clearly, some of these questions are also pertinent to assessing forecasts and prophecies.  But the special nature of forecasts and prophecies may enable us to make some of these questions more precise.  Here is my  Argument Scheme for Arguments from Prophecy:

  • Assume E is a forecaster for domain D.
  • E asserts that statement A will be true of domain D at time T in the future.
  • A is within D.

Therefore, a decision-maker may plausibly take A to be true at time T, unless one or more of the following Critical Questions (CQ) is answered in the negative:

  • CQ1:  Is E a genuine expert in forecasting domain D?
  • CQ2:  Did E really assert that A will be true at T?
  • CQ3:  Is A relevant to, and within the scope of, domain D?
  • CQ4:  Is A consistent with what is said by other forecasters with expertise in D?
  • CQ5:  Is A consistent with known evidence of current conditions and trends in D?
  • CQ6:  Is E’s opinion offered without regard to any reward or benefit upon statement A being adopted by the decision-maker as a forecast?
  • CQ7:  Do the benefits of adopting A being true at time T in D outweigh the costs of doing so, to the decision-maker?

In attempting to answer these questions, we may explore more detailed questions:

  • CQ1-1:  What is E’s experience as forecaster in domain D?
  • CQ1-2: What is E’s track record as a forecaster in domain D?
  • CQ2-1: Did E articulate conditions or assumptions under which A will become true at T, or under which it will not become true?  If so, what are these?
  • CQ2-2:  How sensitive is the forecast of A being true at T to the conditions and assumptions made by E?
  • CQ2-3:  When forecasting that A would become true at T, did E assert a more general statement than A?
  • CQ2-4:  When forecasting that A would become true at T, did E assert a more general time than T?
  • CQ2-5:  Is E able to provide a rational justification (for example, a computer simulation model) for the forecast that A would be true at T?
  • CQ2-6:  Did E present the forecast of A being true at time T qualified by modalities, such as possibly, probably, almost surely, certainly, etc.?
  • CQ4-1:  If this forecast is not consistent with those of other forecasters in domain D, to what extent are they inconsistent?   Can these inconsistencies be rationally justified or explained?
  • CQ5-1: What are the implications of A being true at time T in domain D?  Are these plausible?  Do they contradict any known facts or trends?
  • CQ6-1:  Will E benefit if the decision-maker adopts A being true at time T as his/her forecast for domain D?
  • CQ6-2:  Will E benefit if the decision-maker does not adopt A being true at time T as his/her forecast for domain D?
  • CQ6-3:  Will E benefit if many decision-makers adopt A being true at time T as their forecast for domain D?
  • CQ6-4:  Will E benefit if few decision-makers adopt A being true at time T as their forecast for domain D?
  • CQ6-5:  Has E acted in such a way as to indicate that E had adopted A being true at time T as their forecast for domain D (eg, by making an investment betting that A will be true at T)?
  • CQ7-1:  What are the costs and benefits to the decision-maker for adopting statement A being true at time T in domain D as his or her forecast of domain D?
  • CQ7-2:  How might these costs and benefits be compared?  Can a net benefit/cost for the decision-maker be determined?

Automating these questions and the process of answering them is on my list of next steps, because automation is needed to design machines able to reason rationally about the future.   And rational reasoning about the future is needed if  we want machines to make decisions about actions.
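By way of a first step towards that automation, here is a minimal sketch of how the scheme might be represented in software: the forecast as data, the critical questions as named predicates returning yes, no, or not-yet-answered, and the conclusion accepted presumptively only while no answered question is negative.  All class and function names here are my own invention, not part of any existing argumentation toolkit:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class Forecast:
    expert: str      # E, the forecaster
    domain: str      # D
    statement: str   # A
    time: str        # T

@dataclass
class ProphecyScheme:
    """Argument Scheme for Arguments from Prophecy: the conclusion (A true at T)
    is accepted presumptively unless a critical question is answered in the negative."""
    critical_questions: Dict[str, Callable[[Forecast], Optional[bool]]] = field(default_factory=dict)

    def evaluate(self, forecast: Forecast) -> bool:
        for name, question in self.critical_questions.items():
            answer = question(forecast)    # True, False, or None (not yet investigated)
            if answer is False:
                print(f"{name} answered in the negative: presumption defeated")
                return False
        return True   # no defeater found: adopt A as true at T, presumptively

# Toy instantiation with two of the critical questions and hard-coded answers.
scheme = ProphecyScheme(critical_questions={
    "CQ1 (genuine expert in forecasting D?)": lambda f: f.expert == "known_forecaster",
    "CQ5 (consistent with current evidence?)": lambda f: None,   # not yet investigated
})
forecast = Forecast(expert="known_forecaster", domain="macro-economics",
                    statement="GDP falls", time="2010-Q4")
print(scheme.evaluate(forecast))   # True: no critical question answered negatively
```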
References:
Augustus De Morgan [1847]: Formal Logic.  London, UK:  Taylor and Walton.
Douglas N. Walton [1996]:  Argument Schemes for Presumptive Reasoning. Mahwah, NJ, USA: Lawrence Erlbaum.
Charles A. Willard [1990]: Authority.  Informal Logic, 12: 11-22.