This Much I Know (about CS and AI)

Inspired by The Guardian column of the same name, I decided to list here my key learnings of the last several years regarding Computer Science and Artificial Intelligence (AI). Few of these are my own insights, and I welcome comments and responses. From arguments I have had, I know that some of these statements are controversial; this fact surprises me, since most of them seem obvious to me. Statements are listed, approximately, from the more general to the more specific.

    • The discipline of Computer Science has usually developed first through practice, and only later through theory.

      The first mechanical calculating machines were built in the seventeenth century (by Schickard, Pascal, and Leibniz), while the first mathematical theory of computing machines was that of Turing in 1937. The first widespread programmable device was Jacquard’s Loom, invented in 1801, while the first mathematical theory of programming languages did not appear until the 1960s. The Internet has been operating (first as Arpanet) since 1969, yet we still lack a formal theory of interaction. The first online friendships between people who never met were between telegraph operators, and later between telephonists, in the 19th century. The first e-commerce network was the Florists’ Telegraph Delivery Association, created in the USA in 1910.

    • It is therefore a profound misunderstanding to consider Computer Science to be a branch of Pure Mathematics.

      The scope (the content) of the discipline of Computer Science comprises the behaviours of human artefacts of a certain sort, along with some natural phenomena. Without the artefacts, we would likely not have the theory, or at least not yet (and perhaps not for a long time). Without the theory, we would not fully understand the performance of the artefacts, so both are needed. But the artefacts came first, and practice should dominate. If the theory dominates, our discipline will shrivel and die, becoming as desiccated and as useless as mathematical economics.

    • Artificial Intelligence (AI) is the study of thinking about ways of knowing and ways of acting.

      This statement updates a statement of Seymour Papert (1988, p.3), who considered only ways of knowing.
      Seymour Papert [1988]: One AI or Many? Daedalus, 117 (1) (Winter 1988): 1-14.

    • Not all ways of thinking are equally effective in all situations.

      In particular, some means of representing knowledge are more effective than others for some purposes. For instance, non-probabilistic formalisms for representing uncertainty, such as Dempster-Shafer Theory and Possibility Theory, are more effective than Probability Theory for domains where knowledge may be inconsistent or incomplete (ie, domains where the Law of the Excluded Middle cannot be presumed to hold, such as medical diagnosis and criminal forensics). That statement remains true even though many such non-probabilistic formalisms can be shown to be equivalent to second- or higher-order nested probabilistic formalisms. This equivalence is a quaint mathematical result; non-probabilistic formalisms are often easier for ordinary humans to understand than nested probabilistic formalisms, which is why the second sentence of this paragraph is an instance of the first.
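      To make the contrast concrete, the following is a minimal Python sketch (mine, not the post's) of Dempster's rule of combination; the diagnoses and mass values are invented for illustration. The point it shows is that a Dempster-Shafer mass function can assign belief to a set of hypotheses without dividing it among them, directly representing incomplete knowledge.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Each mass function maps frozensets of hypotheses to masses summing to 1.
    Mass may be assigned to non-singleton sets, expressing ignorance directly.
    """
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        meet = a & b
        if meet:
            combined[meet] = combined.get(meet, 0.0) + x * y
        else:
            conflict += x * y            # mass falling on contradictory evidence
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Illustrative (invented) medical-diagnosis example: each source commits some
# belief to a set of diagnoses and leaves the remainder as ignorance.
FLU, MEASLES = frozenset({"flu"}), frozenset({"measles"})
EITHER = FLU | MEASLES                   # the whole frame: "don't know"
m_lab    = {FLU: 0.6, EITHER: 0.4}       # lab test weakly indicates flu
m_doctor = {MEASLES: 0.3, EITHER: 0.7}   # doctor mildly suspects measles

print(combine(m_lab, m_doctor))
```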

    • Corollary: It behooves no one in AI to be dogmatic about ways of thinking.

      This is one reason why I am not a Bayesian.

    • Deductive reasoning over an abstract mathematical model will only provide information about the real world to the extent that the relationship between the model and reality is continuously dependent on the initial assumptions.

      In other words, the fact that the assumptions of a model are close to some real phenomenon tells us nothing about whether the outputs of the model are close to those of the real phenomenon if the relationship between model and reality is not continuous. If you derive some result from an assumption that participants in some interaction have infinite processing capabilities, for example, it does not follow that the same result, or one close to it, holds when their real processing capabilities are finite, even if very large. Economics has always suffered from forgetting this truth, but most mainstream economists seem to have realized it only after the Great Global Economic Crisis of 2007. Indeed, some of the so-called freshwater economists have still not realized it, alleging that their models remain good predictors.
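      A toy illustration of such a discontinuity at infinity (my example, not the post's, and concerning an infinite horizon rather than infinite processing power): in the finitely repeated Prisoner's Dilemma, defecting in the last round is profitable whatever the horizon, so cooperation unravels by backward induction; in the infinitely repeated game, a sufficiently patient player can sustain cooperation with the grim-trigger strategy. No finite horizon, however large, approximates the infinite-horizon result.

```python
# Illustrative (invented) Prisoner's Dilemma payoffs:
# T (temptation) > R (reward) > P (punishment) > S (sucker).
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def last_round_deviation_gain(horizon: int) -> float:
    """Gain from defecting in the final round of a finitely repeated game.

    There is no future after the last round, so the gain is T - R > 0 for
    every finite horizon: cooperation unravels by backward induction.
    """
    assert horizon >= 1
    return T - R

def grim_trigger_sustains_cooperation(delta: float) -> bool:
    """Infinitely repeated game with discount factor delta: cooperating forever
    (value R/(1-delta)) beats defecting once and being punished forever
    (value T + delta*P/(1-delta)) exactly when delta >= (T-R)/(T-P)."""
    return delta >= (T - R) / (T - P)

for n in (10, 1_000, 10**9):
    print(f"horizon {n}: gain from last-round defection = {last_round_deviation_gain(n)}")
print("infinite horizon, delta = 0.9:", grim_trigger_sustains_cooperation(0.9))
```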

    • In domains with intelligent participants (such as economics and computer science), models may be performative.

      In other words, participants may decide their modes of behaviour based on what modelers have suggested, so that modeling becomes, in effect, a form of self-fulfilling prophecy.
      In Economics, for example, the Black-Scholes model of options pricing allowed traders to price options rigorously (the pricing formula is sketched after this item). To do so, traders adopted the assumptions made by the modelers (eg, that errors are normally distributed, that decision-makers maximize expected utility, etc).
      Philip Mirowski [2002]: Machine Dreams: Economics Becomes a Cyborg Science. Cambridge, UK: Cambridge University Press.
      For doctrines of nuclear warfare, decision-makers adopted the modes of analysis, assumptions, and decision-options suggested to them by game theorists. Some of these assumptions were questionable during the Cold War – for example, that all participants know and agree on the game they are playing. As a consequence, the US Government appears to have embarked in the late 1950s on a mission to ensure the leaders of the USSR were also using game theory (mainly by issuing high-level public statements asserting that game theory was of no use in military applications).
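      Returning to the Black-Scholes example above, here is a minimal Python sketch (mine, not the post's) of the standard formula for a European call option; the parameter values used below are invented.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes price of a European call option.

    S: spot price, K: strike, r: risk-free rate (continuously compounded),
    sigma: volatility of returns, T: time to expiry in years.
    The model assumes, among other things, log-normally distributed prices
    and continuous frictionless hedging -- the kind of assumptions the post
    argues traders came to adopt.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Invented example values: spot 100, strike 105, 2% rate, 20% volatility, 1 year.
print(round(black_scholes_call(100.0, 105.0, 0.02, 0.20, 1.0), 2))
```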

    • We have entered an era when the prevailing paradigm for the notion of computation is computing-as-interaction.

      This paradigm follows earlier paradigms of computation as calculation (c. 1600 – 1965), of computation as information-processing (1965 – 1980), and of computation as cognition (1980 – 1995). The new paradigm changes everything. In particular, an abstract model of computers based on movie projectors (ie, Turing Machines) is woefully inadequate for computing where outputs may be needed before all the inputs arrive (a small sketch of such an interactive process follows the reference below), where there are multiple threads of control, where programs may be created, composed with one another, and compiled when invoked (ie, at run-time), and where computational devices and software exist together in ever-on, dynamic ecologies. We await an adequate formal, mathematical account of computing-as-interaction; the game semantics of Samson Abramsky and colleagues and the bigraphs of Robin Milner are possible candidates.

      M. Luck, P. McBurney, S. Willmott and O. Shehory [2005]: The AgentLink III Agent Technology Roadmap. AgentLink III, the European Co-ordination Action for Agent-Based Computing, Southampton, UK.
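      To make the contrast with computation-as-calculation concrete, here is a minimal Python sketch (mine, not from the post or the roadmap) of an interactive process: a running-average service that must emit an output after every input, and so cannot be described as a single function from a completed input to a completed output.

```python
def running_average():
    """An interactive process (invented toy example): it must respond after
    every input it receives, so its outputs are interleaved with its inputs
    rather than computed from a complete input tape, as a classical
    Turing-machine function would be."""
    total, count = 0.0, 0
    average = None
    while True:
        x = yield average          # wait for the next input from the environment
        total += x
        count += 1
        average = total / count

# The 'environment' drives the interaction, consuming each reply as it goes.
proc = running_average()
next(proc)                          # start the process; it now awaits input
for value in [4.0, 8.0, 6.0]:
    print("observed", value, "-> current average", proc.send(value))
```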

    • As a consequence, Computer Science has a lot to learn from disciplines that have studied interaction.

      Disciplines such as Argumentation Theory, the Philosophy of Language, Linguistics, Economics, Social Psychology, Sociology, Anthropology, Political Science, Marketing, Epidemiology, Ecology, and Biology.
      Hopefully, the learning will be in both directions. For example, it is possible (although, in my personal opinion, unwise and immoral) for economists to assume that all economic actors always act in their own self-interest, maximizing their perceived expected utility. No rational computer scientist could make this assumption, however (except pro tem), since we all know the prevalence of buggy code: such code means that software entities may act against their own self-interest, or against the interests of their principals. Creating a computational theory of interacting economic actors which does not make such false and unfalsifiable assumptions will surely benefit Economics, as well as Computer Science.
      The two-way interplay of Computer Science with these other disciplines of interaction provides further evidence that Computer Science is not a branch of Mathematics.

    • Conflict and disagreement are inevitable in open computer systems.

      It therefore seems absurd to use formal models in which conflict is not permitted. Instead, it behooves us to consider computational models which enable conflict to be identified, managed, mitigated, and possibly resolved. Argumentation theory, not classical logic, is appropriate here.
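      As a concrete illustration of machinery that represents conflict rather than forbidding it, here is a minimal Python sketch (mine, not the post's) of Dung-style abstract argumentation: arguments attack one another, and the grounded extension identifies the arguments that can ultimately be defended.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation framework.

    arguments: a set of argument labels.
    attacks:   a set of (attacker, target) pairs; conflict is represented,
               not forbidden, and then adjudicated.
    Repeatedly accept every argument all of whose attackers are already
    defeated by accepted arguments (the least fixed point of the
    characteristic function).
    """
    accepted = set()
    while True:
        defeated = {b for a in accepted for (x, b) in attacks if x == a}
        newly = {
            a for a in arguments - accepted
            if all(attacker in defeated for (attacker, t) in attacks if t == a)
        }
        if not newly:
            return accepted
        accepted |= newly

# Invented example: c attacks b, and b attacks a.
args = {"a", "b", "c"}
atts = {("b", "a"), ("c", "b")}
print(sorted(grounded_extension(args, atts)))   # c survives, b is defeated, a is reinstated
```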

    • The Killer App for multi-agent systems is Distributed Computing.

      I have lost count of the number of times I have been asked by people outside the agents community, particularly other computer scientists, to name the killer app for agent technologies. Look about! It is all around you! Agent methodologies and technologies enable us to model and simulate complex adaptive systems, such as distributed computer systems. They also enable us to engineer (to specify, to design, and to create) such systems. And they enable us to study the properties of such systems, and hence to manage and control them.
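      As an invented illustration of the kind of modelling and simulation meant here (a sketch of the idea, not a method from the post): treat each node of a distributed system as an agent, and simulate a simple gossip (epidemic) protocol to study how information spreads without any central controller.

```python
import random

def simulate_gossip(num_agents: int = 50, seed: int = 1) -> int:
    """Agent-based sketch of rumour spreading in a distributed system.

    Each agent is an autonomous node; in every round, every informed agent
    gossips to one peer chosen at random. Returns the number of rounds
    until every agent has heard the rumour.
    """
    rng = random.Random(seed)
    informed = {0}                       # agent 0 starts with the information
    rounds = 0
    while len(informed) < num_agents:
        rounds += 1
        for agent in list(informed):
            peer = rng.randrange(num_agents)
            if peer != agent:
                informed.add(peer)
        # no central controller: global behaviour emerges from local choices
    return rounds

print("rounds to full dissemination:", simulate_gossip())
```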

    • Agents are not objects.

      Objects always execute when invoked, and execute as expected; agents may not. Objects maintain persistent relationships with one another; agent relationships may be dynamic.
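      A minimal Python sketch (with invented class and method names) of that distinction: an object's method runs whenever it is called, whereas an agent decides, by reference to its own state and goals, whether to honour a request.

```python
# Invented toy classes, for illustration only.

class PrinterObject:
    """An object: invoking the method always executes it, as expected."""
    def print_document(self, doc: str) -> str:
        return f"printed: {doc}"

class PrinterAgent:
    """An agent: it has its own goals and may decline or renegotiate."""
    def __init__(self, accepts_jobs: bool = True):
        self.accepts_jobs = accepts_jobs   # the agent's own, changeable policy
    def request_print(self, doc: str) -> str:
        if not self.accepts_jobs or len(doc) > 100:
            return "request refused"       # autonomy: invocation is only a request
        return f"printed: {doc}"

print(PrinterObject().print_document("report"))
print(PrinterAgent(accepts_jobs=False).request_print("report"))
```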
