Evaluating prophecy

With the mostly-unforeseen global financial crisis uppermost in our minds, I am led to consider a question that I have pondered for some time:   How should we assess forecasts and prophecies?   Within the branch of philosophy known as argumentation, a lot of attention has been paid to the conditions under which a rational decision-maker would accept particular types of argument.

For example, although it is logically invalid to accept an argument only on the grounds that the person making it is an authority on the subject, our legal system does this all the time.  Indeed,  the philosopher Charles Willard has argued that modern society could not function without most of us accepting arguments-from-authority most of the time, and it is usually rational to do so.  Accordingly, philosophers of argumentation have investigated the conditions under which a rational person would accept or reject such arguments.   Douglas Walton (1996, pp. 64-67) presents an argumentation scheme for such acceptance/rejection decisions, the Argument Scheme for Arguments from Expert Opinion, as follows:

  • Assume E is an expert in domain D.
  • E asserts that statement A is known to be true.
  • A is within D.

Therefore, a decision-maker may plausibly take A to be true, unless one or more of the following Critical Questions (CQ) is answered in the negative:

  • CQ1:  Is E a genuine expert in D?
  • CQ2:  Did E really assert A?
  • CQ3:  Is A relevant to domain D?
  • CQ4:  Is A consistent with what other experts in D say?
  • CQ5:  Is A consistent with known evidence in D?

One could add further questions to this list, for example:

  • CQ6:  Is E’s opinion offered without regard to any reward or benefit upon statement A being taken to be true by the decision-maker?

Walton himself presents some further critical questions, first proposed by Augustus De Morgan in 1847, to deal with cases under CQ2 where the expert’s opinion is presented second-hand, or in edited form, or along with the opinions of others.
Clearly, some of these questions are also pertinent to assessing forecasts and prophecies.  But the special nature of forecasts and prophecies may enable us to make some of these questions more precise.  Here is my  Argument Scheme for Arguments from Prophecy:

  • Assume E is a forecaster for domain D.
  • E asserts that statement A will be true of domain D at time T in the future.
  • A is within D.

Therefore, a decision-maker may plausibly take A to be true at time T, unless one or more of the following Critical Questions (CQ) is answered in the negative:

  • CQ1:  Is E a genuine expert in forecasting domain D?
  • CQ2:  Did E really assert that A will be true at T?
  • CQ3:  Is A relevant to, and within the scope of, domain D?
  • CQ4:  Is A consistent with what is said by other forecasters with expertise in D?
  • CQ5:  Is A consistent with known evidence of current conditions and trends in D?
  • CQ6:  Is E’s opinion offered without regard to any reward or benefit upon statement A being adopted by the decision-maker as a forecast?
  • CQ7:  Do the benefits to the decision-maker of adopting A as true at time T in domain D outweigh the costs of doing so?

In attempting to answer these questions, we may explore more detailed questions:

  • CQ1-1:  What is E’s experience as forecaster in domain D?
  • CQ1-2: What is E’s track record as a forecaster in domain D?
  • CQ2-1: Did E articulate conditions or assumptions under which A will become true at T, or under which it will not become true?  If so, what are these?
  • CQ2-2:  How sensitive is the forecast of A being true at T to the conditions and assumptions made by E?
  • CQ2-3:  When forecasting that A would become true at T, did E assert a more general statement than A?
  • CQ2-4:  When forecasting that A would become true at T, did E assert a more general time than T?
  • CQ2-5:  Is E able to provide a rational justification (for example, a computer simulation model) for the forecast that A would be true at T?
  • CQ2-6:  Did E present the forecast of A being true at time T qualified by modalities, such as possibly, probably, almost surely, or certainly?
  • CQ4-1:  If this forecast is not consistent with those of other forecasters in domain D, to what extent are they inconsistent?   Can these inconsistencies be rationally justified or explained?
  • CQ5-1: What are the implications of A being true at time T in domain D?  Are these plausible?  Do they contradict any known facts or trends?
  • CQ6-1:  Will E benefit if the decision-maker adopts A being true at time T as his/her forecast for domain D?
  • CQ6-2:  Will E benefit if the decision-maker does not adopt A being true at time T as his/her forecast for domain D?
  • CQ6-3:  Will E benefit if many decision-makers adopt A being true at time T as their forecast for domain D?
  • CQ6-4:  Will E benefit if few decision-makers adopt A being true at time T as their forecast for domain D?
  • CQ6-5:  Has E acted in such a way as to indicate that E had adopted A being true at time T as their forecast for domain D (eg, by making an investment betting that A will be true at T)?
  • CQ7-1:  What are the costs and benefits to the decision-maker for adopting statement A being true at time T in domain D as his or her forecast of domain D?
  • CQ7-2:  How might these costs and benefits be compared?  Can a net benefit/cost for the decision-maker be determined?

Automating these questions and the process of answering them is on my list of next steps, because automation is needed to design machines able to reason rationally about the future.   And rational reasoning about the future is needed if  we want machines to make decisions about actions.
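
As a very first, rough indication of what such automation might involve, here is a minimal sketch in Python which encodes the Argument Scheme for Arguments from Prophecy as a data structure and treats a few of the critical questions as simple checks against gathered evidence. All the names, the evidence format and the yes/no scoring are illustrative assumptions on my part, not a worked-out design:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Forecast:
    """A forecast: expert E asserts that statement A will hold in domain D at time T."""
    expert: str
    statement: str
    domain: str
    time: str

# A critical question is a predicate over a forecast plus whatever background
# evidence the decision-maker has gathered; False means the question fails.
CriticalQuestion = Callable[[Forecast, Dict], bool]

def cq1_genuine_expert(f: Forecast, evidence: Dict) -> bool:
    # CQ1: is E a genuine expert in forecasting domain D?
    return f.expert in evidence.get("recognised_forecasters", {}).get(f.domain, set())

def cq4_consistent_with_peers(f: Forecast, evidence: Dict) -> bool:
    # CQ4: is A consistent with what other forecasters with expertise in D say?
    return f.statement not in evidence.get("contradicted_by_peers", set())

def cq6_no_vested_interest(f: Forecast, evidence: Dict) -> bool:
    # CQ6: is E's opinion offered without regard to any reward or benefit?
    return f.expert not in evidence.get("experts_with_stake", set())

def plausibly_accept(f: Forecast, evidence: Dict,
                     questions: List[CriticalQuestion]) -> bool:
    """Presumptive acceptance: take A to be true at T unless some critical question fails."""
    return all(q(f, evidence) for q in questions)

# Hypothetical usage, with made-up evidence:
forecast = Forecast(expert="E", statement="house prices will fall",
                    domain="housing", time="2010")
evidence = {"recognised_forecasters": {"housing": {"E"}},
            "contradicted_by_peers": set(),
            "experts_with_stake": set()}
print(plausibly_accept(forecast, evidence,
                       [cq1_genuine_expert, cq4_consistent_with_peers, cq6_no_vested_interest]))
# True: no critical question is answered in the negative, so A-at-T is presumptively accepted
```

Of course, the hard part is answering the critical questions themselves; the sketch only shows how the presumptive structure of the scheme might be represented.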
References:
Augustus De Morgan [1847]: Formal Logic. London, UK: Taylor and Walton.
Douglas N. Walton [1996]:  Argument Schemes for Presumptive Reasoning. Mahwah, NJ, USA: Lawrence Erlbaum.
Charles A. Willard [1990]: Authority.  Informal Logic, 12: 11-22.

Hearing is (not necessarily) believing

Someone (let’s call her Alice) tells you that something is true, say the proposition P.  What can you validly infer from that utterance of Alice’s?  Not that P is necessarily true, since Alice may be mistaken.  You can’t even infer that Alice believes that P is true, since she may be aiming to mislead you.
Can you then infer that Alice wants you to believe that P is true?  Not always.  The two of you may have the sort of history of interactions which leads you mostly to distrust what she says; she may know this about you, and so may be counting on you believing that P is not true precisely because she told you that it is true.  But you, in turn, may know this about Alice (that she is counting on you not to believe her regarding P), and she may know that you know, in which case she is actually expecting you not to disbelieve her on P, but instead either to form no opinion on P or to believe that P is true.
So, let us try summarizing what you could infer from Alice telling you that P is true:

  • That P is true.
  • That Alice believes that P is true.
  • That Alice desires you to believe that P is true.
  • That Alice desires that you believe that Alice desires you to believe that P is true.
  • That Alice desires you to not believe that P is true.
  • That Alice desires that you believe that Alice desires you to not believe that P is true.
  • That Alice desires you to believe that P is not true.
  • That Alice desires that you believe that Alice desires you to believe that P is not true.
  • And so on, ad infinitum.

Apart from life, the universe and everything, you may be wondering where such ideas would find application.   Well, one place is in Intelligence.   Tennent H. Bagley, in his very thorough book on the Nosenko affair, for example, discusses the ructions in CIA caused by doubts about the veracity of the supposed KGB defector, Yuri Nosenko.    Was he a real defector?  Or was he sent by KGB as a fake defector, in order to lead CIA astray with false or misleading information?  If he was a fake defector, should CIA admit this publicly, or should they try to convince KGB that they believe Nosenko and his stories?  Does KGB actually want CIA to conclude that Nosenko is a fake defector, for instance so that CIA will then believe something said by an earlier defector which they might otherwise doubt?  In which case, should CIA pretend to be taken in by Nosenko (to make KGB think their plot was successful), or let KGB know that they were not taken in (in order to make KGB believe that CIA does not believe that other, earlier information)?  And so on, ad infinitum.
I have seen similar (although far less dramatic) ructions in companies when they learn of some exciting or important piece of competitor intelligence.   Quite often, the recipient company just assumes the information is true and launches itself into vast efforts executing new plans.  Before doing this, companies should explicitly ask, Is this information true?,  and also pay great attention to the separate question, Who would benefit if we (the recipients) were to believe it?
Another application of these ideas is in the design of computer communications systems.   Machines send messages to each other all the time (for example, via the various Internet protocols, whenever a web-page is browsed or email is sent), and most of these are completely believed by the recipient machine.   To the extent that this is so, the recipient machines can hardly be called intelligent.   Designing intelligent communications between machines requires machines able and willing to query and challenge information they receive when appropriate, and then able to reach an informed conclusion about what received information to believe.
Many computer scientists believe that a key component for such intelligent communications is an agreed semantics for communication interactions between machines, so that the symbols exchanged between different machines are understood by them all in the same way.   The most thoroughly-developed machine semantics to date is the Semantic Language SL of the Agent Communications Language ACL of the IEEE Foundation for Intelligent Physical Agents (IEEE FIPA), which has been formalized in a mix of epistemic and doxastic logics (ie, logics of knowledge and belief).   Unfortunately, the semantics of FIPA ACL requires the sender of information (ie, Alice) to believe that information herself.  This feature precludes the language being used for any interactions involving negotiations or scenario exploration.  The semantics of FIPA ACL also require Alice not to believe that the recipient believes one way or another about the information being communicated (eg, the proposition P).  Presumably this is to prevent Alice wasting the time of the recipient.  But this feature precludes the language being used for one of the most common interactions in computer communications – the citing of a password by someone (human or machine) seeking to access some resource, since the citer of the password assumes that the resource-controller already knows the password.
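To make these two problematic features concrete, here is a minimal illustrative sketch (in Python, with invented names; it illustrates the constraints described above rather than implementing the FIPA SL formal semantics) of a sender-side check for an inform message: the sender must believe the content herself, and must not believe the receiver already holds an opinion about it. The password-citing example then fails the second check:

```python
class Agent:
    """A toy agent: a set of believed propositions, plus the propositions about which
    the agent believes the intended receiver already holds some opinion."""
    def __init__(self, name, beliefs=None, believes_receiver_has_opinion=None):
        self.name = name
        self.beliefs = set(beliefs or [])
        self.believes_receiver_has_opinion = set(believes_receiver_has_opinion or [])

def may_inform(sender: Agent, proposition: str) -> bool:
    """Sender-side preconditions on an 'inform', as described above:
    (1) the sender believes the proposition herself (sincerity);
    (2) the sender does not believe the receiver already has an opinion on it."""
    sincere = proposition in sender.beliefs
    receiver_uncommitted = proposition not in sender.believes_receiver_has_opinion
    return sincere and receiver_uncommitted

# Hypothetical example: citing a password to a resource-controller.
# Alice believes the password, but she also believes the controller already
# knows it, so the second precondition fails and the act is ruled out.
alice = Agent("Alice",
              beliefs={"the password is p"},
              believes_receiver_has_opinion={"the password is p"})
print(may_inform(alice, "the password is p"))   # False
```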
More work clearly needs doing on the semantics of machine communications.  As the example above demonstrates, communication has many subtleties and complexities.
Reference:
Tennent H. Bagley [2007]: Spy Wars: Moles, Mysteries, and Deadly Games.  New Haven, CT: Yale University Press.

Chicago – this is your moment, too

The election of Senator Barack Obama as President of the USA has brought to the fore his adopted home-town, Chicago, now reinforced by his selection of Chicago-based Congressman Rahm Emanuel as his White House Chief-of-Staff.   Chicago, hog-butcher to the world, was known first in the 19th century for its dominance of the meat industry, and then for its dominance of the markets for other agricultural commodities.  In the 20th century this led to dominance of the financial markets where such commodities, and later more sophisticated financial products, were traded.  With all this money, it is not surprising that the world’s first modern skyscrapers were built there too.
But Chicago has also been a centre for business consulting – for example, via Arthur Andersen (founded Chicago, 1913), and its spin-off Andersen Consulting (now Accenture) – and a centre for marketing research and marketing data analysis.   That particular thread includes AC Nielsen (founded Chicago, 1923) and Information Resources, Inc. (IRI, founded Chicago, 1977).   The three founders of IRI, John Malec, Gerald Eskin and William Walter, sought to take advantage of newly-deployed supermarket scanners to analyse tactical marketing data for fmcg products.   (Modern supermarket scanners began operations in the US from June 1974.)
But there is an earlier fibre to this thread:  Before the invention of the electronic computer, Chicago was also a centre of manufacturing of adding machines.  Data, and its analysis – practical, no-nonsense, mid-western, even – has been a key Chicago strength.
Reference:
Peggy A. Kidwell [2001]: Yours for Improvement – The Adding Machines of Chicago, 1884-1930. IEEE Annals of the History of Computing, 23 (3): 3-21. July 2001.

A data architecture for spimes

Thinking some more about spimes, those product entities that exist individually in space and time, I can see that they could lead to major changes in the way in which marketing data is collected, collated, stored, analyzed, and used.   Clearly, individual spimes and their wranglers will generate a lot of data as they interact with the world and report back (eg, via RFID and GPS), and that data could usefully form the basis for marketing knowledge and marketing action.   But the web changes everything.  Spime wranglers, being intelligent human beings and companies, could comment and reflect on their interactions; the social web allows them to meet each other, across space and across time, in the same way that a houseowner can “meet” the previous or future occupants of his house.    Likewise, intelligent spimes could also reflect on their interactions, and even wrangle less-intelligent spimes.
What software architecture is appropriate for this mass of data?   Clearly, we’d want to store all the data, regardless of its format, in databases.  My question is pitched at a higher level of abstraction than that of the databases.  We desire that multiple, independent agents (both people and devices) are able to access the data, to read it and contribute to it, and maybe to over-write it (assuming they have the appropriate authorizations).  Moreover, we want to be able to combine and reason-across the data generated by one spime, say a particular motor vehicle, with that of other spimes — say, other vehicles of the same model, or other vehicles owned by the same person, or other vehicles purchased in the same year, etc.   We’d also like to combine and reason-across the data generated by spimes in different product categories — all the durables purchased by the Smith family in their life, for instance, or all the products purchased in Main Street, Anytown, last week.
An obvious data architecture for multiple, independent reading- and writing-entities is a blackboard.  A blackboard architecture is a shared memory space which enables agents sending and receiving messages to be decoupled from one another, both spatially and temporally.   Just as on a physical blackboard, messages left on the blackboard are stored until they are erased, and so the long-dead can communicate to the living, who can in turn communicate to the not-yet-born.   Tuple spaces and the associated Linda language are an example of a blackboard architecture (implemented in Java as JavaSpaces).  We could imagine that each spime has its own tuple space, partitioned into secure sub-spaces for different spime-wranglers, from manufacturers, through each spime owner or carer, to after-sales service providers and disposal agencies.  Access to spaces will need to be controlled, so that only authorized agents may write, read and erase data in their allocated partition.   Here we could use something called Law-Governed Linda, an enhancement of Linda designed to add security features, although this may be too rigid for products whose uses cannot be readily predicted in advance.   An architecture allowing access to a tuple space following an appropriate dialogue between the relevant agents may be more flexible.
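To make the idea concrete, here is a minimal sketch of such a blackboard in Python, with per-wrangler partitions and simple write/read/take operations. The class and names are invented for illustration; real systems such as JavaSpaces or Law-Governed Linda provide much more, including persistence, richer matching and proper security:

```python
from typing import Optional

class SpimeTupleSpace:
    """A toy blackboard for one spime: tuples are stored in named partitions until
    explicitly removed, so writers and readers are decoupled in both space and time."""

    def __init__(self):
        self._partitions = {}   # partition name -> list of tuples
        self._acl = {}          # partition name -> set of agents allowed to access it

    def grant(self, partition: str, agent: str) -> None:
        """Authorize an agent to use a partition (creating the partition if needed)."""
        self._acl.setdefault(partition, set()).add(agent)
        self._partitions.setdefault(partition, [])

    def _check(self, partition: str, agent: str) -> None:
        if agent not in self._acl.get(partition, set()):
            raise PermissionError(f"{agent} may not access partition '{partition}'")

    def write(self, partition: str, agent: str, tup: tuple) -> None:
        """Leave a tuple on the blackboard; it stays there until taken."""
        self._check(partition, agent)
        self._partitions[partition].append(tup)

    def read(self, partition: str, agent: str, pattern: tuple) -> Optional[tuple]:
        """Non-destructive read of the first tuple matching the pattern (None = wildcard)."""
        self._check(partition, agent)
        for tup in self._partitions[partition]:
            if len(tup) == len(pattern) and all(p is None or p == v for p, v in zip(pattern, tup)):
                return tup
        return None

    def take(self, partition: str, agent: str, pattern: tuple) -> Optional[tuple]:
        """Destructive read: like read, but also erases the matching tuple."""
        tup = self.read(partition, agent, pattern)
        if tup is not None:
            self._partitions[partition].remove(tup)
        return tup

# Hypothetical usage: a vehicle spime reports its position into its owner's partition,
# and the owner later reads it back -- possibly long after the report was written.
car = SpimeTupleSpace()
car.grant("owner", "vehicle-42")
car.grant("owner", "alice")
car.write("owner", "vehicle-42", ("gps", "2008-11-10T09:00", 51.5074, -0.1278))
print(car.read("owner", "alice", ("gps", None, None, None)))
```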
So far, so good for the data storage and access.  But spimes and spime wranglers will generate enormous quantities of data, and analyzing all this data will require some effort.  Better, then, to plan for this effort and automate as much of the data collation, aggregation, processing and analysis as possible.   Here, I suggest we should use so-called Tuple Centres, which are intelligent Tuple Spaces, able to reason over the data they hold.  Because we will want to combine and analyze data arising from different spimes, these tuple centres will need to communicate with one another, and agree (or not) to allow their data to be aggregated. A multi-agent system (MAS) with agents representing each spime-space (ie, the tuple space of each spime) and, for many spimes, each partition of each spime-space, seems the most effective architecture.  This is because the interests of the relevant stakeholders (spime-wranglers, marketing departments, manufacturers and service providers, data protection agencies, the state, the law) will vary, and a MAS is the most effective way to formally represent and accommodate these diverse interests in a software system.
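Continuing the illustrative sketch above (again with invented names, and standing in for a real platform such as TuCSoN), a tuple centre differs from a plain tuple space in that reactions fire when tuples arrive, so simple aggregation can happen inside the space itself rather than in every client:

```python
class SpimeTupleCentre(SpimeTupleSpace):
    """A toy tuple centre: a tuple space plus reactions that fire whenever a tuple
    is written, so aggregation can happen inside the space itself."""

    def __init__(self):
        super().__init__()
        self._reactions = []   # callables invoked as reaction(space, partition, tuple)

    def add_reaction(self, reaction) -> None:
        self._reactions.append(reaction)

    def write(self, partition: str, agent: str, tup: tuple) -> None:
        super().write(partition, agent, tup)
        for reaction in self._reactions:
            reaction(self, partition, tup)

# Hypothetical reaction: keep a running count of GPS reports in a 'summary' partition,
# which a (suitably authorized) marketing agent can then read and aggregate further.
def count_gps_reports(space, partition, tup):
    if tup[0] == "gps":
        old = space.take("summary", "centre", ("gps-count", None)) or ("gps-count", 0)
        space.write("summary", "centre", ("gps-count", old[1] + 1))

centre = SpimeTupleCentre()
for part, agent in [("owner", "vehicle-42"), ("summary", "centre"), ("summary", "marketing-agent")]:
    centre.grant(part, agent)
centre.add_reaction(count_gps_reports)
centre.write("owner", "vehicle-42", ("gps", "2008-11-10T09:05", 51.5081, -0.1260))
print(centre.read("summary", "marketing-agent", ("gps-count", None)))   # ("gps-count", 1)
```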
There are many details still to be worked out for this architecture.  But even at this level, it is clear that the traditional marketing data warehouse architecture is not sophisticated enough for what is needed for spimes. Hence my statement above that spimes could lead to major changes in the way in which marketing data is collected, collated, stored and analyzed.  Use of spime data I will leave for another post.
Reference:
TuCSoN, developed at the University of Bologna, Italy, is a platform which enables fast implementation of tuple centre applications.

The resonance of spimes

In 2004, Bruce Sterling coined the term “spime” for an object which tracked its own history and its own interactions with the world (using, for example, technologies such as RFID and GPS).  In Sterling’s words, spimes

“are precisely located in space and time. They have histories. They are recorded, tracked, inventoried, and always associated with a story. 
Spimes have identities, they are protagonists of a documented process.”

Spime wranglers are people willing to invest time and effort in managing the meta-data and narratives of their spimes. The always-interesting Russell Davies has been exploring the consequences of this idea for designers of commercial products.

Several thoughts have occurred to me:
Firstly, as with all new technologies, the future is unevenly distributed, and there have been spime wranglers for some artefacts for a very long time — for instance, for early industrial manufacturing technologies (eg, the 1785 Boulton and Watt steam engine, a diagram of which is above, in use for 102 years and then immediately shipped by an alert wrangler to a museum in Australia in 1888) and for Stradivarius violins.  The service log books of motor vehicles, legally required in most western countries, are a pre-computer version of the metadata and narrative which a spime and its wranglers can generate.
Secondly, spime wranglers, like lead-users, become co-designers and co-marketers of the product, because they help to vest the product with meaning-in-the-world.   Grant McCracken has written on the trend to greater democratization of meaning-creation in marketing.  (Note: I’ll try to find a specific post of Grant’s on this topic.)
Finally, it strikes me that the best way to conceive of the narrative and metadata generated and collated by a spime and, working with it, by the spime’s wranglers is through Rupert Sheldrake’s powerful (and sadly neglected) idea of morphic fields.   I hope to explore this idea, and its implications for quantitative marketing, in a future post.

Putting the "Tea" in IT

One of the key ideas in the marketing of high-tech products is due to Eric von Hippel of the MIT Sloan School, the idea that lead users often anticipate applications of new technologies before the market as a whole, and even before inventors and suppliers. This is because lead users have pressing or important problems for which they seek solutions, and turn to whatever technologies they can find to respond to their problems.
A good example is shown by the history of Information Technology. The company which pioneered business applications of the new computer technology in the early 1950s was not a computer hardware manufacturer nor even an electronic engineering firm, but a lead user, Lyons Tea Shops, a nationwide British chain of tea-and-cake shops.  Lyons specified, designed, built, deployed and operated their own computers, under the name of Leo (Lyons Electronic Office). Lyons, through Leo, was also the first to conceive and deploy many of the business applications which we now take for granted, such as automated payroll systems and logistics management systems. One of the leaders in that effort, David Caminer, has recently died at the age of 92. LEO was later part of ICL, itself later purchased by Fujitsu.
This post is intended to honour David Caminer, as a pioneer of automated business decision-making.

Putting the "Tea" in IT

One of the key ideas in the marketing of high-tech products is due to Eric von Hippel of the MIT Sloan School, the idea that lead users often anticipate applications of new technologies before the market as a whole, and even before inventors and suppliers. This is because [tag]lead users[/tag] have pressing or important problems for which they seek solutions, and turn to whatever technologies they can find to respond to their problems.
A good example is shown by the history of Information Technology. The company which pioneered business applications of the new computer technology in the early 1950s was not a computer hardware manufacturer nor even an electronic engineering firm, but a lead user, Lyons Tea Shops, a nationwide British chain of tea-and-cake shops. [tag]Lyons[/tag] specified, designed, built, deployed and operated their own computers, under the name of Leo (Lyons Electronic Office). Lyons, through [tag]Leo[/tag], was also the first to conceive and deploy many of the business applications which we now take for granted, such as automated payroll systems and logistics management systems. One of the leaders in that effort, David Caminer, has recently died at the age of 92. LEO was later part of ICL, itself later purchased by Fujitsu.
This post is intended to honour David Caminer, as a pioneer of [tag]automated business decision-making[/tag].