As we once thought


The Internet, the World-Wide-Web and hypertext were all forecast by Vannevar Bush, in a July 1945 article for The Atlantic, entitled  As We May Think.  Perhaps this is not completely surprising since Bush had a strong influence on WW II and post-war military-industrial technology policy, as Director of the US Government Office of Scientific Research and Development.  Because of his influence, his forecasts may to some extent have been self-fulfilling.
However, his article also predicted automated machine reasoning using both logic programming, the computational use of formal logic, and computational argumentation, the formal representation and manipulation of arguments.  These areas are both now important domains of AI and computer science which developed first in Europe and which are still much stronger there than in the USA.   An excerpt:

The scientist, however, is not the only person who manipulates data and examines the world about him by the use of logical processes, although he sometimes preserves this appearance by adopting into the fold anyone who becomes logical, much in the manner in which a British labor leader is elevated to knighthood. Whenever logical processes of thought are employed—that is, whenever thought for a time runs along an accepted groove—there is an opportunity for the machine. Formal logic used to be a keen instrument in the hands of the teacher in his trying of students’ souls. It is readily possible to construct a machine which will manipulate premises in accordance with formal logic, simply by the clever use of relay circuits. Put a set of premises into such a device and turn the crank, and it will readily pass out conclusion after conclusion, all in accordance with logical law, and with no more slips than would be expected of a keyboard adding machine.
Logic can become enormously difficult, and it would undoubtedly be well to produce more assurance in its use. The machines for higher analysis have usually been equation solvers. Ideas are beginning to appear for equation transformers, which will rearrange the relationship expressed by an equation in accordance with strict and rather advanced logic. Progress is inhibited by the exceedingly crude way in which mathematicians express their relationships. They employ a symbolism which grew like Topsy and has little consistency; a strange fact in that most logical field.
A new symbolism, probably positional, must apparently precede the reduction of mathematical transformations to machine processes. Then, on beyond the strict logic of the mathematician, lies the application of logic in everyday affairs. We may some day click off arguments on a machine with the same assurance that we now enter sales on a cash register. But the machine of logic will not look like a cash register, even of the streamlined model.

Edinburgh sociologist, Donald MacKenzie, wrote a nice history and sociology of logic programming and the use of logic in computer science, Mechanizing Proof: Computing, Risk, and Trust.  The only flaw of this fascinating book is an apparent misunderstanding throughout that theorem-proving by machines refers only to the proving (or not) of theorems in mathematics.    Rather, theorem-proving in AI refers to proving claims in any domain of knowledge represented by a formal, logical language.    Medical expert systems, for example, may use theorem-proving techniques to infer the presence of a particular disease in a patient; the claims being proved (or not) are theorems of the formal language representing the domain, not necessarily mathematical theorems.
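The distinction can be made concrete with a toy example: a minimal forward-chaining "theorem prover" over a non-mathematical domain.  The rules and symptom names below are invented purely for illustration, not drawn from any real expert system:

```python
# Toy forward-chaining inference: derive every conclusion that follows
# from a set of facts under a set of if-then rules. The "theorems" here
# are claims about a (hypothetical) medical domain, not mathematics.
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspected_pneumonia"),
]

def prove(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # apply rules until a fixed point
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = prove({"fever", "cough", "chest_pain"}, rules)
# "suspected_pneumonia" is now a theorem of this little domain theory
```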
References:
Donald MacKenzie [2001]:  Mechanizing Proof: Computing, Risk, and Trust.  Cambridge, MA, USA:  MIT Press.
Vannevar Bush [1945]:  As we may think.  The Atlantic, July 1945.

Vale: Sol Encel


I have just learnt of the death last month of Sol Encel, Emeritus Professor of Sociology at the University of New South Wales, and a leading Australian sociologist, scenario planner, and futures thinker.    I took a course on futurology with him two decades ago, and it was one of the most interesting courses I ever studied.  This was not due, at least not directly, to Encel himself, who appeared in human form only at the first lecture.
He told us he was a very busy and important man, and would certainly not have the time to spare to attend any of the subsequent lectures in the course.  Instead, he had arranged a series of guest lectures for us, on a variety of topics related to futures studies, futurology, and forecasting.  Because he was genuinely important, his professional network was immense and impressive, and so the guest speakers he had invited were a diverse group of prominent people, from different industries, academic disciplines, professions, politics and organizations, each with interesting perspectives or experiences on the topic of futures and prognosis.  The talks they gave were absolutely fascinating.
To accommodate the guest speakers, the lectures were held in the early evening, after normal working hours.  Because of this unusual timing, and because the course assessment comprised only an essay, student attendance at the lectures soon fell sharply.  Often I turned up to find I was the only student present.   These small classes presented superb opportunities to meet and talk with the guest speakers, conversations that usually adjourned to a cafe or a bar nearby.  I learnt a great deal about the subject of forecasting, futures, strategic planning, and prognosis, particularly in real organizations with real stakeholders, from these interactions.  Since he chose these guests, I thus sincerely count Sol Encel as one of the important influences on my thinking about futures.
Here, in a tribute from the Australian Broadcasting Commission, is a radio broadcast Encel made in 1981 about Andrei Sakharov. It is interesting that there appears to have been speculation in the West then as to how the so-called father of the Soviet nuclear bomb could have become a supporter of dissidents.   This question worried, too, the KGB, whose answer was one Vadim Delone, poet.  And here, almost a month after Solomon Encel’s death, is his obituary in the Sydney Morning Herald.  One wonders why this took so long to be published.

Complex Decisions

Most real-world business decisions are considerably more complex than the examples presented by academics in decision theory and game theory.  What makes some decisions more complex than others? Here I list some features, not all of which are present in all decision situations.

  • The problems are not posed in a form amenable to classical decision theory.

    Decision theory requires the decision-maker to know what are his or her action-options, what are the consequences of these, what are the uncertain events which may influence these consequences, and what are the probabilities of these uncertain events (and to know all these matters in advance of the decision). Yet, for many real-world decisions, this knowledge is either absent, or may only be known in some vague, intuitive, way. The drug thalidomide, for example, was tested thoroughly before it was sold commercially – on male and female human subjects, adults and children. The only group not tested was pregnant women, who were, unfortunately, the main group for which the drug had serious side effects. These side effects were consequences which had not been imagined before the decision to launch was made. Decision theory does not tell us how to identify the possible consequences of some decision, so what use is it in real decision-making?

  • There are fundamental domain uncertainties.

    None of us knows the future. Even with considerable investment in market research, future demand for new products may not be known because potential customers themselves do not know with any certainty what their future demand will be. Moreover, in many cases, we don’t know the past either. I have had many experiences where participants in a business venture have disagreed profoundly about the causes of failure, or even success, and so have taken very different lessons from the experience.

  • Decisions may be unique (non-repeated).

    It is hard to draw on past experience when something is being done for the first time. This does not stop people trying, and so decision-making by metaphor or by anecdote is an important feature of real-world decision-making, even though mostly ignored by decision theorists.

  • There may be multiple stakeholders and participants to the decision.

    In developing a business plan for a global satellite network, for example, a decision-maker would need to take account of the views of a handful of competitors, tens of major investors, scores of minor investors, approximately two hundred national and international telecommunications regulators, a similar number of national company law authorities, scores of upstream suppliers (eg equipment manufacturers), hundreds of employees, hundreds of downstream service wholesalers, thousands of downstream retailers, thousands or millions of shareholders (if listed publicly), and millions of potential customers. To ignore or oppose the views of any of these stakeholders could doom the business to failure. As it happens, Game Theory isn’t much use with this number and complexity of participants. Moreover, despite the view commonly held in academia, most large Western corporations operate with a form of democracy. (If opinions of intelligent, capable staff are regularly over-ridden, these staff will simply leave, so competition ensures democracy. In addition, good managers know that decisions unsupported by their staff will often be executed poorly, so success of a decision may depend on the extent to which staff believe it has been reached fairly.) Accordingly, all major decisions are decided by groups or teams, not at the sole discretion of an individual. Decision theorists, it seems to me, have paid insufficient attention to group decisions: We hear lots about Bayesian decision theory, but where, for example, is the Bayesian theory of combining subjective probability assessments?

  • Domain knowledge may be incomplete and distributed across these stakeholders.
  • Beliefs, goals and preferences of the stakeholders may be diverse and conflicting.
  • Beliefs, goals and preferences of stakeholders, the probabilities of events and the consequences of decisions, may be determined endogenously, as part of the decision process itself.

    For instance, economists use the term network good to refer to a good where one person’s utility depends on the utility of others. A fax machine is an example, since being the sole owner of a fax machine is of little value to a consumer. Thus, a rational consumer would determine his or her preferences for such a good only AFTER learning the preferences of others. In other words, rational preferences are determined only in the course of the decision process, not beforehand.  Having considerable experience in marketing, I contend that ALL goods and services have a network-good component. Even so-called commodities, such as natural resources or telecommunications bandwidth, have demand which is subject to fashion and peer pressure. “You can’t get fired for buying IBM,” was the old saying. And an important function of advertising is to allow potential consumers to infer the likely preferences of other consumers, so that they can then determine their own preferences. If the advertisement appeals to people like me, or people to whom I aspire to be like, then I can infer that those others are likely to prefer the product being advertised, and thus I can determine my own preferences for it. Similarly, if the advertisement appeals to people I don’t aspire to be like, then I can infer that I won’t be subject to peer pressure or fashion trends, and can determine my preferences accordingly.
    This is commonsense to marketers, even if heretical to many economists.

  • The decision-maker may not fully understand what actions are possible until he or she begins to execute.
  • Some actions may change the decision-making landscape, particularly in domains where there are many interacting participants.

    A bold announcement by a company to launch a new product, for example, may induce competitors to follow and so increase (or decrease) the chances of success. For many goods, an ecosystem of critical size may be required for success, and bold initiatives may act to create (or destroy) such ecosystems.

  • Measures of success may be absent, conflicting or vague.
  • The consequences of actions, including their success or failure, may depend on the quality of execution, which in turn may depend on attitudes and actions of people not making the decision.

    Most business strategies are executed by people other than those who developed or decided the strategy. If the people undertaking the execution are not fully committed to the strategy, they generally have many ways to undermine or subvert it. In military domains, the so-called Powell Doctrine, named after Colin Powell, then Chairman of the US Joint Chiefs of Staff, says that foreign military actions undertaken by a democracy may only be successful if these actions have majority public support. (I have written on this topic before.)

  • As a corollary of the previous feature, success of an action may require extensive and continuing dialog with relevant stakeholders, before, during and after its execution.

    This is not news to anyone in business.

  • Success may require pre-commitments before a decision is finally taken.

    In the 1990s, many telecommunications companies bid for national telecoms licences in foreign countries. Often, an important criterion used by the Governments awarding these licences was how quickly each potential operator could launch commercial service. To ensure that they could launch service quickly, some bidders resorted to making purchase commitments with suppliers and even installing equipment ahead of knowing the outcome of a bid, and even ahead, in at least one case I know, of deciding whether or not to bid.

  • The consequences of decisions may be slow to realize.

    Satellite mobile communications networks have typically taken ten years from serious inception to launch of service.  The oil industry usually works on 50+ year cycles for major investment projects.  BP is currently suffering the consequences in the Gulf of Mexico of what appears to be a decades-long culture which de-emphasized safety and adequate contingency planning.

  • Decision-makers may influence the consequences of decisions and/or the measures of success.
  • Intelligent participants may model each other in reaching a decision, what I term reflexivity.

    As a consequence, participants are not only reacting to events in their environment, they are anticipating events and the reactions and anticipations of other participants, and acting proactively to these anticipated events and reactions. Traditional decision theory ignores this. Following Nash, traditional game theory has modeled the outcomes of one such reasoning process, but not the processes themselves. Evolutionary game theory may prove useful for modeling these reasoning processes, although assuming a sequence of identical, repeated interactions does not strike me as an immediate way to model a process of reflexivity.  This problem still awaits its Nash.

In my experience, classical decision theory and game theory do not handle these features very well; in some cases, indeed, not at all.  I contend that a new theory of complex decisions is necessary to cope with decision domains having these features.
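On the question of combining subjective probability assessments across a group, the statistics literature does at least offer partial answers, such as linear and logarithmic opinion pools.  A minimal sketch, with invented expert opinions and weights:

```python
import math

def linear_pool(probs, weights):
    # weighted arithmetic mean of the experts' probabilities
    return sum(w * p for w, p in zip(weights, probs))

def log_pool(probs, weights):
    # weighted geometric mean, renormalised over {event, not-event}
    num = math.prod(p ** w for w, p in zip(weights, probs))
    den = math.prod((1 - p) ** w for w, p in zip(weights, probs))
    return num / (num + den)

probs = [0.9, 0.6, 0.3]       # three experts' P(success) -- illustrative
weights = [0.5, 0.3, 0.2]     # how much the group trusts each expert
group_p = linear_pool(probs, weights)   # 0.69
```

Neither pool is "the" Bayesian answer — how to choose the weights, and whether any pooling rule is coherent, is exactly the under-explored question.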

Straitjackets of Standards

This week I was invited to participate as an expert in a Delphi study of The Future Internet, being undertaken by an EC-funded research project.   One of the aims of the project is to identify multiple plausible future scenarios for the socio-economic role(s) of the Internet and related technologies, after which the project aims to reach a consensus on a small number of these scenarios.  Although the documents I saw were unclear as to exactly which population this consensus was to be reached among, I presume it was intended to be a consensus of the participants in the Delphi Study.
I have a profound philosophical disagreement with this objective, and indeed with most of the EC’s many efforts in standardization.   Tim Berners-Lee invented Hyper-Text Transfer Protocol (HTTP), for example, in order to enable physicists to publish their research documents to one another in a manner which enabled author-control of document appearance.    Like most new technologies, HTTP was not invented for the many other uses to which it has since been put; indeed, many of these other applications have required hacks or fudges to HTTP in order to work.  For example, because HTTP does not keep track of state across requests, fudges such as cookies are needed.  If we had all been in consensual agreement with The Greatest Living Briton about the purposes of HTTP, we would have no e-commerce, no blogging, no social networking, no easy remote access to databases, no large-scale distributed collaborations, no easy action-at-a-distance, in short no transformation of our society and life these last two decades, just the broadcast publishing of text documents.
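The cookie fudge can be sketched with Python's standard library: the server labels the client with a Set-Cookie header, and the client hauls that state back on every subsequent request.  The session value here is invented for illustration:

```python
from http.cookies import SimpleCookie

# Server side: HTTP itself forgets each request, so the server issues a
# cookie to label the session.
server_cookie = SimpleCookie()
server_cookie["session_id"] = "abc123"
set_cookie_header = server_cookie.output()   # "Set-Cookie: session_id=abc123"

# Client side: parse what the server sent, and echo the state back in
# the Cookie header of the next request.
client_cookie = SimpleCookie()
client_cookie.load("session_id=abc123")
request_header = "Cookie: " + "; ".join(
    f"{name}={morsel.value}" for name, morsel in client_cookie.items())
```

State thus lives at the endpoints, smuggled through headers, precisely because the protocol itself refuses to remember anything.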
Let us put aside this childish, warm-and-fuzzy, touchy-feely seeking after consensus.  Our society benefits most from a diversity of opinions and strong disagreements, a hundred flowers blooming, a cacophony of voices in the words of Oliver Wendell Holmes.  This is particularly true of opinions regarding the uses and applications of innovations.   Yet the EC persists, in some recalcitrant chasing after illusive certainty, in trying to force us all into straitjackets of standards and equal practice.    These efforts are misguided and wrong-headed, and deserve to fail.

Myopic utilitarianism

What are the odds, eh?  On the same day that the Guardian publishes an obituary of theoretical computer scientist, Peter Landin (1930-2009), pioneer of the use of Alonzo Church’s lambda calculus as a formal semantics for computer programs, they also report that the Government is planning only to fund research which has relevance to the real world.  This is GREAT NEWS for philosophers and pure mathematicians!
What might have seemed, for example,  mere pointless musings on the correct way to undertake reasoning – by Aristotle, by Islamic and Roman Catholic medieval theologians, by numerous English, Irish and American abstract mathematicians in the 19th century, by an entire generation of Polish logicians before World War II, and by those real-world men-of-action Gottlob Frege, Bertrand Russell, Ludwig Wittgenstein and Alonzo Church – turned out to be EXTREMELY USEFUL for the design and engineering of electronic computers.   Despite Russell’s Zen-influenced personal motto – “Just do!  Don’t think!” (later adopted by IBM) – his work turned out to be useful after all.   I can see the British research funding agencies right now, using their sophisticated and proven prognostication procedures to calculate the society-wide economic and social benefits we should expect to see from our current research efforts over the next 2300 years  – ie, the length of time that Aristotle’s research on logic took to be implemented in technology.   Thank goodness our politicians have shown no myopic utilitarianism this last couple of centuries, eh what?!
All while this man apparently received no direct state or commercial research funding for his efforts as a computer pioneer, playing with “pointless” abstractions like the lambda calculus.
And Normblog also comments.
POSTSCRIPT (2014-02-16):   And along comes The Cloud and ruins everything!   Because the lower layers of the Cloud – the physical infrastructure, operating system, even low-level application software – are fungible, and dynamically so, the Cloud is effectively “dark” to its users beneath some level.   Specifying and designing applications that will run over it, or systems that will access it, thus requires specification and design to be undertaken at high levels of abstraction.   If all you can say about your new system is that in 10 years’ time it will grab some data from the NYSE, and nothing (yet) about the format of that data, then you need to speak in abstract generalities, not in specifics.   It turns out the lambda calculus is just right for this task and so London’s big banks have been recruiting logicians and formal methods people to spec & design their next-gen systems.  You can blame those action men, Church and Russell.
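For readers who have never met the "pointless" abstraction in question, Church numerals give the flavour: numbers, and arithmetic on them, built from nothing but functions.  A minimal sketch in Python:

```python
# Church numerals: the number n is "apply a function f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # decode a Church numeral by counting applications of f
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
five = add(two)(three)
```

No data types, no numerals, no machine — which is exactly why the same calculus serves as a specification language when the machine underneath is unknowable.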

Bonuses yet again

Alex Goodall, over at A Swift Blow to the Head, has written another angry post about the bonuses paid to financial sector staff. I’ve been in several minds about responding, since my views seem to be decidedly minority ones in our present environment, and because there seems to be so much anger abroad on this topic.  But so much that is written and said, including by intelligent, reasonable people such as Alex, misunderstands the topic that I feel a response is again needed.  It behooves none of us to make policy on the basis of anger and ignorance.
Continue reading ‘Bonuses yet again’

Social forecasting: Doppio Software

Five years ago, back in the antediluvian era of Web 2.0 (the web as enabler and facilitator of social networks), we had the idea of  social-network forecasting.  We developed a product to enable a group of people to share and aggregate their forecasts of something, via the web.  Because reducing greenhouse gases was also becoming flavour-du-jour, we applied these ideas to social forecasts of the price for the European Union’s carbon emission permits, in a nifty product we called Prophets-360.  Sadly, due mainly to poor regulatory design of the European carbon emission market, supply greatly outstripped demand for emissions permits, and the price of permits fell quickly and has mostly stayed fallen.  A flat curve is not difficult to predict, and certainly there was little value in comparing one person’s forecast with that of another.  Our venture was also felled.
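The kind of aggregation such a product might use can be sketched simply — say, weighting each participant's point forecast by their inverse historical squared error.  The weighting rule and all numbers below are illustrative only, not the actual Prophets-360 method:

```python
def aggregate(forecasts, past_errors):
    # weight each forecaster by 1 / (historical RMS error)^2, so that
    # habitually wild forecasters count for little
    weights = [1.0 / (e ** 2) for e in past_errors]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

# three participants forecast next month's carbon-permit price (EUR)
forecasts = [14.0, 16.0, 30.0]
past_errors = [1.0, 2.0, 10.0]   # their (hypothetical) track records
estimate = aggregate(forecasts, past_errors)
```

The group estimate lands near the two reliable forecasters; the outlier barely moves it.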

But now the second generation of social networking forecasting tools has arrived.  I see that a French start-up, Doppio Software, has recently launched publicly.   They appear to have a product which has several advantages over ours:

  • Doppio Software is focused on forecasting demand along a supply chain.  This means the forecasting objective is very tactical, not the long-term strategic forecasting that CO2 emission permit prices became.   In the present economic climate, short-term tactical success is certainly more compelling to business customers than even looking five years hence.
  • The relevant social network for a supply chain is a much stronger community of interest than the amorphous groups we had in mind for Prophets-360.  Firstly, this community already exists (for each chain), and does not need to be created.  Secondly, the members of the community by definition have differential access to information, on the basis of their different positions up and down the chain.  Thirdly, although the interests of the partners in a supply chain are not identical, these interests are mutually-reinforcing:  everyone in the chain benefits if the chain itself is more successful at forecasting throughput.
  • In addition, Team Doppio (the Doppiogangers?) appear to have included a very compelling value-add:  their own automated modeling of causal relationships between the target demand variables of each client and general macro-economic variables, using  semantic-web data and qualitative modeling technologies from AI.  Only the largest manufacturing companies can afford their own econometricians, and such people will normally only be able to hand-craft models for the most important variables.  There are few companies IMO who would not benefit from Doppio’s offer here.

Of course, I’ve not seen the Doppio interface and a lot will hinge on its ease-of-use (as with all software aimed at business users).  But this offer appears to be very sophisticated, well-crafted and compelling, combining social network forecasting, intelligent causal modeling and semantic web technologies.

Well done, Team Doppio!  I wish you every success with this product!

PS:  I have just learnt that “doppio” means “double”, which makes it a very apposite name for this application – forecasts considered by many people, across their human network.  Neat!  (2009-09-16)

Article in The Observer (UK) about Doppio 2009-09-06 here. And here is an AFP TV news story (2009-09-15) about Doppio co-founder, Edouard d’Archimbaud.  Another co-founder is Benjamin Haycraft.

Black swans of trespass

Nassim Taleb has an article in the FinTimes presenting ten principles he believes would reduce the occurrence of rare, catastrophic events (events he has taken to calling black swans).  Many of his principles are not actionable, and several are ill-advised.  Take, for instance, # 3:

3. People who were driving a school bus blindfolded (and crashed it) should never be given a new bus.

If this principle were applied, the bus would have no drivers at all.   All of us are driving blindfolded, with our only guide to the road ahead being what we can apprehend from the rear-view mirror.  Past performance, as they say, is no guide to the future direction of the road.
Or take #6:

6. Do not give children sticks of dynamite, even if they come with a warning.  Complex derivatives need to be banned because nobody understands them and few are rational enough to know it. Citizens must be protected from themselves, from bankers selling them “hedging” products, and from gullible regulators who listen to economic theorists.

Well, what precisely is “complex”?  Surely, Dr Taleb is not suggesting the banning of plain futures and options, as these serve a valuable function in our economy (enabling the parceling and trading of risk).  But even these are too complex for some people (such as those farmers, dentists, and local government officials currently with burnt fingers), and surely such people need protection from themselves much more so than the quant-jocks and their masters on Wall Street.  So, where would one draw the line between allowed derivative and disallowed?
Once again, it appears there has been a misunderstanding of the cause of the recent problems.  It is not complex derivatives per se that are the problem, but the fact that many of these financial instruments have, unusually, been highly-correlated.  Thus, the failure of one instrument (and subsequently, one bank) brings down all the others with it — there is a systemic risk as well as a participant risk involved in their use.   Dr Taleb, who has long been a critic of the unthinking use of Gaussian models in finance, I am sure realises this.
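The point about correlation can be illustrated with a toy Monte Carlo experiment, a one-factor model with invented numbers: each instrument's individual failure probability is identical in both runs, yet the probability that all fail together explodes as correlation rises.

```python
import random

random.seed(0)

def joint_failure_prob(rho, trials=20_000, n_inst=10, thresh=-1.5):
    # One-factor model: each instrument's return mixes a shared market
    # factor with an idiosyncratic shock; rho sets the correlation.
    # An instrument "fails" when its return falls below thresh.
    count = 0
    for _ in range(trials):
        market = random.gauss(0, 1)
        all_fail = all(
            (rho ** 0.5) * market + ((1 - rho) ** 0.5) * random.gauss(0, 1)
            < thresh
            for _ in range(n_inst)
        )
        count += all_fail
    return count / trials

p_indep = joint_failure_prob(rho=0.0)   # independent instruments
p_corr = joint_failure_prob(rho=0.9)    # highly-correlated instruments
# the correlated portfolio suffers simultaneous failure far more often
```

With independence, ten simultaneous failures are essentially never observed; with high correlation, a bad draw of the shared factor takes the whole portfolio down at once — the systemic risk, as distinct from each participant's risk.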

The Better Angels

[WARNING:  In-jokes for telecoms people!]
Prediction, particularly of the future, is difficult, as we know.  We notice a good example of the difficulties reading Charles McCarry’s riveting political/spy thriller, The Better Angels.  Published in 1979 but set during the final US Presidential election campaign of the 20th Century (2000? 1996?), McCarry gets some of the big predictions spot on: suicide bombers, Islamic terrorism, oil-company malfeasance, an extreme right-wing US President, computer voting machines, a Greek-American in charge of the US foreign intelligence agency, uncollected garbage and wild animals in Manhattan’s streets, and, of course, the manned space mission to Jupiter’s moon, Ganymede, for instance.  But he makes a real howler with the telephone system:  a brief mention (p. 154) of “the Bell System” indicates he had no anticipation of the 1982 Modified Final Judgment of Judge Harold H. Greene.  How could he have failed to see that coming, when AT&T’s managers were preparing for decades for the competition which would follow, evident in the masterful way these managers and their companies have prospered since?!  A future with a unified Bell system was so weird, I was barely able to concentrate on the other events in the novel after this.
Reference:
Charles McCarry [1979]: The Better Angels. London, UK:  Arrow Books.