Syntax Attacks

Thanks to the ever-watchful Normblog, I encountered an article by Colin Tatz inveighing against talk about sport.  Norm is right to call Tatz to account for writing nonsense – talk about sport is just as meaningful as talk about politics, history, religion, nuclear deterrence, genocide, or any other real-world human activity.  Tatz says:

Sport is international phatic but also a crucial Australian (male) vehicle. It enables not just short, passing greetings but allows for what may seem like deep, passionate and meaningful conversations but which in the end are unmemorable, empty, producing nothing and enhancing no one.

Unmemorable?! Really?   What Australian could forget Norman May’s shouted “Gold! Gold for Australia! Gold!” commentary at the end of the men’s 4 x 100 metres medley relay at the 1980 Olympics in Moscow?  Only a churlish Gradgrind could fail to be enhanced by hearing this.   And what Australian of a certain age could forget the inimitable footie commentary of Rex Mossop, including, for example, such statements as, “That’s the second consecutive time he’s done that in a row one straight after the other.”  Mossop’s heat-of-the-moment sporting talk was commemorated with his many winning places in playwright Alex Buzo’s Australian Indoor Tautology Pennant, an annual competition held, as I recall, in Wagga Wagga, Gin Gin and Woy Woy (although not in Woop Woop or in The Never Never), before moving internationally to exotic locations such as Pago Pago, Xai Xai and Baden Baden.  Unmemorable, Mr Tatz?  Enhancing no one?  Really?  To be clear, what is memorable here is not the sporting events but the sporting commentary.   And all I’ve mentioned so far is sporting talk, not the great writers on baseball, on golf, on cricket, on swimming . . .
But as well as misunderstanding what talk about sport is about and why it is meaningful, Tatz is wrong on another score.   He says:

But why so much natter and clatter about sport? Eco’s answer is that sport “is the maximum aberration of ‘phatic’ speech”, which is really a negation of speech.
Phatic speech is meaningless speech, as in “G’day, how’s it going?” or “have a nice day” or “catch you later” — small talk phrases intended to produce a sense of sociability, sometimes uttered in the hope that it will lead to further and more real intercourse, but human enough even if the converse goes no further.

Phatic communications are about establishing and maintaining relationships between people.  Such a purpose is the very essence of speech communication, not its negation.  Tatz, I fear, has fallen into the trap of so many computer scientists – to focus on the syntax of messages, and completely ignore their semantics and pragmatics.    The syntax of messages concerns their surface form, their logical structure, their obedience (or not) to rules which determine whether they are legal and well-formed statements in the language they purport to arise from.  The semantics of utterances concerns their truth or falsity, in so far as they describe real objects in some world (perhaps the one we all live in, or some past, future or imagined world), while their pragmatics concerns those aspects of their meaning unrelated to their truth status (for example, who has power to revoke or retract them).
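To put the distinction in programming terms (a toy sketch; the grammar and the message are invented for illustration): a parser can decide whether a message is syntactically well-formed, but nothing in its surface form settles its truth (semantics) or the social work it does (pragmatics).
```python
import re

# A toy grammar for greetings: syntax alone decides well-formedness.
GREETING_PATTERN = re.compile(r"^(g'day|hello|hi)(, how's it going\?)?$", re.IGNORECASE)

def is_well_formed(message: str) -> bool:
    """Syntax: does the message match the toy grammar of greetings?"""
    return bool(GREETING_PATTERN.match(message.strip()))

message = "G'day, how's it going?"
print(is_well_formed(message))   # True: syntactically legal
# Semantics (is it true or false?) and pragmatics (what relationship does it
# establish between speaker and hearer?) are simply not properties the
# parser can read off the surface form.
```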
I have discussed this syntax-is-all-there-is mistake before.    I believe the root causes of this mistaken view are two-fold: the misguided focus of philosophers these last two centuries on propositions to the exclusion of other types of utterances and statements (of which profound error Terry Eagleton has shown himself guilty), and the misguided view that we now live in some form of Information Society, a view which wrongly focuses attention on the information transferred by utterances to the exclusion of any other functions that utterances may serve or any other things we agents (people and machines) may be doing and aiming to do when we talk.   If you don’t believe me about the potentially complex functionality of utterances, even when viewed as nothing more than the communication of factual propositions, then read this simple example.
If communications were only about the transfer of explicit information, then life would be immensely less interesting.  It would also not be human life, for we would be no more intelligent than desktop computers passing HTTP requests and responses to one another.

As we once thought


The Internet, the World-Wide-Web and hypertext were all forecast by Vannevar Bush, in a July 1945 article for The Atlantic, entitled  As We May Think.  Perhaps this is not completely surprising since Bush had a strong influence on WW II and post-war military-industrial technology policy, as Director of the US Government Office of Scientific Research and Development.  Because of his influence, his forecasts may to some extent have been self-fulfilling.
However, his article also predicted automated machine reasoning using both logic programming, the computational use of formal logic, and computational argumentation, the formal representation and manipulation of arguments.  These areas are both now important domains of AI and computer science which developed first in Europe and which are still much stronger there than in the USA.   An excerpt:

The scientist, however, is not the only person who manipulates data and examines the world about him by the use of logical processes, although he sometimes preserves this appearance by adopting into the fold anyone who becomes logical, much in the manner in which a British labor leader is elevated to knighthood. Whenever logical processes of thought are employed—that is, whenever thought for a time runs along an accepted groove—there is an opportunity for the machine. Formal logic used to be a keen instrument in the hands of the teacher in his trying of students’ souls. It is readily possible to construct a machine which will manipulate premises in accordance with formal logic, simply by the clever use of relay circuits. Put a set of premises into such a device and turn the crank, and it will readily pass out conclusion after conclusion, all in accordance with logical law, and with no more slips than would be expected of a keyboard adding machine.
Logic can become enormously difficult, and it would undoubtedly be well to produce more assurance in its use. The machines for higher analysis have usually been equation solvers. Ideas are beginning to appear for equation transformers, which will rearrange the relationship expressed by an equation in accordance with strict and rather advanced logic. Progress is inhibited by the exceedingly crude way in which mathematicians express their relationships. They employ a symbolism which grew like Topsy and has little consistency; a strange fact in that most logical field.
A new symbolism, probably positional, must apparently precede the reduction of mathematical transformations to machine processes. Then, on beyond the strict logic of the mathematician, lies the application of logic in everyday affairs. We may some day click off arguments on a machine with the same assurance that we now enter sales on a cash register. But the machine of logic will not look like a cash register, even of the streamlined model.

The Edinburgh sociologist Donald MacKenzie wrote a nice history and sociology of logic programming and the use of logic in computer science, Mechanizing Proof: Computing, Risk, and Trust.  The only flaw of this fascinating book is an apparent misunderstanding throughout that theorem-proving by machines refers only to proving (or not) of theorems in mathematics.    Rather, theorem-proving in AI refers to proving claims in any domain of knowledge represented by a formal, logical language.    Medical expert systems, for example, may use theorem-proving techniques to infer the presence of a particular disease in a patient; the claims being proved (or not) are theorems of the formal language representing the domain, not necessarily mathematical theorems.
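As a toy illustration of this point (the rules and facts below are invented, not drawn from any real medical system), a few lines of forward chaining are enough to "prove" a claim about a patient from domain rules expressed in a formal language:
```python
# A minimal forward-chaining "theorem prover" over a toy medical knowledge
# base. The claims proved are theorems of this formal language, not theorems
# of mathematics. All rules and facts are invented for illustration.
rules = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_lumbar_puncture"),
]
facts = {"fever", "stiff_neck"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # the conclusion is now a proved claim
            changed = True

print("order_lumbar_puncture" in facts)   # True: the claim is provable
```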
References:
Donald MacKenzie [2001]:  Mechanizing Proof: Computing, Risk, and Trust.  Cambridge, MA, USA:  MIT Press.
Vannevar Bush [1945]:  As We May Think.  The Atlantic, July 1945.

Crowd-sourcing for scientific research

Computers are much better than most humans at some tasks (eg, remembering large amounts of information, tedious and routine processing of large amounts of data), but worse than many humans at others (eg, generating new ideas, spatial pattern matching, strategic thinking). Progress may come from combining both types of machine (humans, computers) in ways which make use of their specific skills.  The journal Nature yesterday carried a report of a good example of this:  video-game players are able to assist computer programs tasked with predicting protein structures.  The abstract:

People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully ‘crowd-sourced’ through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.
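The computation that the players are guiding can be caricatured as a search for low values of a computed energy.  The toy sketch below (a one-dimensional invented energy function, nothing like Rosetta's real energy model) shows how purely greedy search gets trapped in local minima, which is exactly the kind of refinement problem where the players' large rearrangements help.
```python
import math
import random

def toy_energy(x: float) -> float:
    """A toy 'energy landscape' with many local minima; not Rosetta's model."""
    return x * x + 10 * math.sin(3 * x)

def local_search(x: float, steps: int = 10_000, step_size: float = 0.05) -> float:
    """Greedy downhill search: accepts a move only if energy decreases,
    so it can get trapped in a local minimum; this is the situation where
    large human-guided rearrangements help."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if toy_energy(candidate) < toy_energy(x):
            x = candidate
    return x

random.seed(1)
x = local_search(4.0)
print(round(x, 2), round(toy_energy(x), 2))  # a local, not the global, minimum
```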

References:
Seth Cooper et al. [2010]:  Predicting protein structures with a multiplayer online game.  Nature, 466: 756–760.  Published 2010-08-05.
Eric Hand [2010]:  Citizen science: people power.  Nature, 466: 685–687.  Published 2010-08-04.
The Foldit game is here.

Railtrack and the Joint-Action Society


For some time, I have been writing on these pages that the currently-fashionable paradigm of the Information Society is inadequate to describe what most of us do at work and play, or to describe how computing technologies support those activities (see, for example, recently here, with a collection of posts here).   Most work for most people in the developed world is about coordinating their actions with those of others – colleagues, partners, underlings, bosses, customers, distributors, suppliers, publicists, regulators, and so on.   Information collection and transfer, while often important and sometimes essential to the co-ordination of actions, is not usually itself the main game.
Given the extent to which computing technologies already support and enable human activities (landing our large aircraft automatically when there is fog, for example), the InfoSoc paradigm, although it may describe well the transmission of zeros and ones between machines, is of little value in understanding what these transmissions mean.  Indeed, the ur-text of the Information Society, Shannon’s mathematical theory of communications (Shannon 1948), explicitly ignores the semantics of messages!  In place of the InfoSoc metaphor, we need another new paradigm, a new way to construe what we are all doing.  For now, let me call it the Joint-Action Society, although this does not quite capture all that is intended.
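The point is visible in the central quantity of Shannon's theory: the entropy of a source is a function only of the probabilities with which its symbols occur, so two sources with identical statistics have identical entropy whatever their messages mean.
```latex
% Entropy of a discrete source: a function of the symbol probabilities
% p_i alone; the meanings of the symbols never enter the formula.
H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i \quad \text{(bits per symbol)}
```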
I am pleased to learn that I am not alone in my views about InfoSoc.   I recently came across an article by the late Peter Martin, journalist, editor and e-businessman, about the lessons of that great privatization disaster, Railtrack in the UK.  (In the 1980s and 90s, the French had grands projets while the British had great project management disasters.)  Here is Martin, writing in the FT in October 2001 (the article does not seem to be available online):

Railtrack had about a dozen prime contractors, which in turn farmed out the work to about 2,000 subcontractors.  Getting this web of relationships to work was a daunting task.  Gaps in communication, and the consequent “blame culture” are thought to be important causes of the track problems that led to the Hatfield crash which undermined Railtrack’s credibility.
.  .  .
These practical advantages of wholesale outsourcing rely, however, on unexamined assumptions.  It is these that the Railtrack episode comprehensively demolishes.
The first belief holds that properly specified contracts can replicate the operations of an integrated business.  Indeed, on this view, they may be better than integration because everyone understands what their responsibilities are, and their  incentives are clear and tightly defined.
This approach had a particular appeal to governments, as they attempted to step back from the minutiae of delivering public services.  British Conservative governments used the approach to break up monolithic nationalised industries into individual entities, such as power generators and distributors.
They put this approach into effect at the top level of the railway system by splitting the task of running the track and the signalling (Railtrack’s job) from the role of operating the trains.  It is not surprising that Railtrack, born into this environment, carried the approach to its logical conclusion in its internal operation.
.  .  .
In 1937, the Nobel prize-winning economist Ronald Coase had explained that companies perform internally those tasks for which the transactional costs of outsourcing are too high.
What fuelled the outsourcing boom of the 1990s was the second unexamined assumption – that the cost of negotiating, monitoring and maintaining contractual relationships with outsourcing partners had dropped sharply, thanks to the revolution in electronic communications.  The management of a much bigger web of contractors – indeed, the creation of a “virtual company” – became feasible.
In practice, of course, the real costs of establishing and maintaining contracts are not those of information exchange but of establishing trust, alignment of interests and a common purpose.  Speedy and cheap electronic communications have only a minor role to play in this process, as Coase himself pointed out in 1997.
.   .   .
And perhaps that is the most useful lesson from the Railtrack story: it is essential to decide what tasks are vital to your corporate purpose and to devote serious resources to achieving them.   Maintaining thousands of miles of steel tracks and stone chippings may be a dull, 19th century kind of task.   But as Railtrack found, if you can’t keep the railway running safely, you haven’t got a business.

Reference:
Peter Martin [2001]: Lessons from Railtrack.  The collapse has demolished some untested assumptions about outsourcing.  Financial Times, 2001-10-09, page 21.
Claude E. Shannon [1948/1963]: A mathematical theory of communication. Bell System Technical Journal, October and November 1948.  Reprinted in:  C. E. Shannon and W. Weaver [1963]: The Mathematical Theory of Communication. pp. 29-125. Urbana, IL, USA: University of Illinois Press.

By their voice ye shall know them

Effective strategies are often counter-intuitive.  If you are speaking to a large group, some of whom are speaking to each other, your natural tendency will be to try to speak over them, to speak more loudly.  But doing this just encourages the talkers in the audience to increase their levels of speech, and so an arms race results.   Better for you to speak more softly, which means that audience talkers can hear themselves more clearly over you, and so typically, and unthinkingly, drop the levels of their own speech.
A recent issue of ACM Transactions on Computer Systems (ACM TOCS) carries a paper with a wonderful example of this principle.  Faced with a denial-of-service attack, the authors propose that a server ask all its clients to increase the volume of messages they send to the server.  Most likely, attackers among the clients are already transmitting at their local full capacity, and so are unable to do this, which means that messages from attackers will form a decreasing proportion of all messages received by the server.   The paper abstract is:

This article presents the design, implementation, analysis, and experimental evaluation of speak-up, a defense against application-level distributed denial-of-service (DDoS), in which attackers cripple a server by sending legitimate-looking requests that consume computational resources (e.g., CPU cycles, disk). With speak-up, a victimized server encourages all clients, resources permitting, to automatically send higher volumes of traffic. We suppose that attackers are already using most of their upload bandwidth so cannot react to the encouragement. Good clients, however, have spare upload bandwidth so can react to the encouragement with drastically higher volumes of traffic. The intended outcome of this traffic inflation is that the good clients crowd out the bad ones, thereby capturing a much larger fraction of the server’s resources than before. We experiment under various conditions and find that speak-up causes the server to spend resources on a group of clients in rough proportion to their aggregate upload bandwidths, which is the intended result.
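A back-of-the-envelope model of the mechanism (all bandwidth figures invented for illustration): if the server allocates its capacity in proportion to bytes received, good clients that can inflate their traffic capture a much larger share of it.
```python
# Toy model of "speak-up": the server allocates its capacity in proportion
# to bytes received from each client. Bandwidth figures are invented.
good_clients = 90      # each has 10 units of upload bandwidth, normally uses 1
bad_clients = 10       # each already sends at its full 10 units

def server_share(good_rate: float) -> float:
    """Fraction of server resources captured by good clients."""
    good_total = good_clients * good_rate
    bad_total = bad_clients * 10
    return good_total / (good_total + bad_total)

print(f"before speak-up: {server_share(1):.2f}")   # 0.47
print(f"after speak-up:  {server_share(10):.2f}")  # 0.90
```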

Reference:
Michael Walfish, Mythili Vutukuru, Hari Balakrishnan, David Karger and Scott Shenker [2010]:  DDoS defense by offense.  ACM Transactions on Computer Systems, 28 (1), article 3.

This Much I Know (about CS and AI)

Inspired by The Guardian column of the same name, I decided to list here my key learnings of the last several years regarding Computer Science and Artificial Intelligence (AI). Few of these are my own insights, and I welcome comments and responses. From arguments I have had, I know that some of these statements are controversial; this fact surprises me, since most of them seem obvious to me. Statements are listed, approximately, from the more general to the more specific.

Vale: Robin Milner

The death has just occurred of Robin Milner (1934-2010), one of the founders of theoretical computer science.   Milner was an ACM Turing Award winner and his main contributions were a formal theory of concurrent communicating processes and, more recently, a category-theoretic account of hyperlinks and embeddings, his so-called theory of bigraphs.   As we move into an era where the dominant metaphor for computation is computing-as-interaction, the idea of concurrency has become increasingly important; however, understanding, modeling and managing it have proven to be among the most difficult conceptual problems in modern computer science.  Alan Turing gave the world a simple mathematical model of computation as the sequential writing or erasing of characters on a linear tape under a read/write head, like a single strip of movie film passing back and forth through a projector.  Despite the prevalence of the Internet and of ambient, ever-on, and ubiquitous computing, we still await a similar mathematical model of interaction and interacting processes.  Milner’s work is a major contribution to developing such a model. In his bigraphs model, for example, one graph represents the links between entities while the other represents geographic proximity or organizational hierarchy.
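For a flavour of the message-passing view of computation that Milner formalised, here is a minimal sketch using Python coroutines, with a queue standing in for the channel; it is an informal illustration only, not Milner's CCS or pi-calculus.  Two processes compute by interacting, not by running sequentially on a shared tape.
```python
import asyncio

# Two communicating processes: computation happens through their interaction
# over a channel, not on a single sequential tape.
async def producer(channel: asyncio.Queue) -> None:
    for n in range(3):
        await channel.put(n)        # send
    await channel.put(None)         # end-of-stream marker

async def consumer(channel: asyncio.Queue) -> None:
    while True:
        n = await channel.get()     # receive
        if n is None:
            break
        print(f"received {n}")

async def main() -> None:
    channel = asyncio.Queue()
    await asyncio.gather(producer(channel), consumer(channel))

asyncio.run(main())
```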

Robin was an incredibly warm, generous and unassuming man. About seven years ago, without knowing him at all, I wrote to him inviting him to give an academic seminar; even though famous and retired, he responded positively, and was soon giving a very entertaining talk on bigraphs (a representation of which is on the blackboard behind him in the photo). He joined us for drinks in the pub afterwards, buying his round like everyone else and chatting amicably with all, talking about the war in Iraq and, with a visitor from Penn State, about the problems of mathematical models based on pre-categories. He always responded immediately to any of my occasional emails subsequently.

The London Times has an obituary here, and the Guardian here (from which the photo is borrowed).

References:
Robin Milner [1989]: Communication and Concurrency. Prentice Hall.
Robin Milner [1999]: Communicating and Mobile Systems: the Pi-Calculus. Cambridge University Press.
Robin Milner [2009]: The Space and Motion of Communicating Agents. Cambridge University Press.

Research funding myopia

The British Government, through its higher education funding council, is currently considering the use of socio-economic impact factors when deciding the relative rankings of university departments in the Research Assessment Exercise (RAE), its assessment of research quality held about every five years.   These impact factors are intended to measure the social or economic impact of research activities in the period of the RAE (ie, within 5 years). Since the RAE is used to allocate funds for research infrastructure to British universities, these impact factors, if implemented, will indirectly decide which research groups and which research will be funded.    Some academic reactions to these proposals are here and here.
From the perspective of the national economy and technological progress, these proposals are extremely misguided, and should be opposed by us all.    They demonstrate a profound ignorance of where important ideas come from, of when and where and how they are applied, and of where they end up.  In particular, they demonstrate great ignorance of the multi-disciplinary nature of most socio-economically-impactful research.
One example will demonstrate this vividly.  As more human activities move online, more tasks can be automated or semi-automated.    To enable this, autonomous computers and other machines need to be able to communicate with one another using shared languages and protocols, and thus much research effort in Computer Science and Artificial Intelligence these last three decades has focused on designing languages and protocols for computer-to-computer communications.  These protocols are used in various computer systems already and are likely to be used in future-generation mobile communications and e-commerce systems.
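To make this concrete, here is a sketch of the shape such a protocol message can take, loosely in the style of speech-act-based agent communication languages; the field names and content syntax are illustrative, not those of any particular standard.
```python
from dataclasses import dataclass

# An illustrative agent-to-agent message built around a performative (a
# speech act such as "request" or "inform"). Field names are illustrative only.
@dataclass
class AgentMessage:
    performative: str   # the speech act being performed
    sender: str
    receiver: str
    content: str        # a claim or request in some agreed content language

request = AgentMessage(
    performative="request",
    sender="buyer-agent-17",
    receiver="supplier-agent-3",
    content="deliver(item=widget, quantity=100, by=2010-12-01)",
)
reply = AgentMessage("agree", request.receiver, request.sender, request.content)
print(reply)
```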
Despite its deep technological nature, research in this area draws fundamentally on past research and ideas from the Humanities, including:

  • Speech Act Theory in the Philosophy of Language (ideas due originally to Adolf Reinach 1913, John Austin 1955, John Searle 1969 and Jurgen Habermas 1981, among others)
  • Formal Logic (George Boole 1854, Clarence Lewis 1910, Ludwig Wittgenstein 1922, Alfred Tarski 1933, Saul Kripke 1959, Jaakko Hintikka 1962, etc), and
  • Argumentation Theory (Aristotle c. 350 BC, Stephen Toulmin 1958, Charles Hamblin 1970, etc).

Assessment of the impacts of research over five years is laughable when Aristotle’s work on rhetoric has taken 2300 years to find technological application.   Even Boole’s algebra took 84 years from its creation to its application in the design of electronic circuits (by Claude Shannon in 1938).  None of the humanities scholars responsible were doing their research to promote technologies for computer interaction or to support e-commerce, and most would not have even understood what these terms mean.  Of the people I have listed, only John Searle (who contributed to the theory of AI) and Charles Hamblin (who created one of the first computer languages, GEORGE, and who made major contributions to the architecture of early computers, including invention of the memory stack) had any direct connection to computing.   Only Hamblin was afforded an obituary by a computer journal (Allen 1985).
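The Boole-to-Shannon step can be made concrete in a few lines: Shannon's 1938 observation was that Boole's algebra of truth values models switching circuits, so a half-adder, for instance, is just two Boolean expressions (a minimal sketch, with gates modelled as Python operators).
```python
# Boole's algebra realised as switching logic (the connection Shannon drew
# in 1938): a half-adder built from an XOR gate and an AND gate.
def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    total = a != b        # XOR gate: sum bit
    carry = a and b       # AND gate: carry bit
    return total, carry

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", tuple(int(x) for x in half_adder(a, b)))
```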
None of the applications of these ideas to computer science were predicted, or even predictable.  If we do not fund pure research across all academic disciplines without regard to its potential socio-economic impacts, we risk destroying the very source of the ideas upon which our modern society and our technological progress depend.
Reference:
M. W. Allen [1985]: “Charles Hamblin (1922-1985)”. The Australian Computer Journal, 17(4): 194-195.

Vale: Stephen Toulmin

The Anglo-American philosopher Stephen Toulmin has just died, aged 87.   One of the areas to which he made major contributions was argumentation, the theory of argument, and his work found, and still finds, application not only in philosophy but also in computer science.
For instance, under the direction of John Fox, the Advanced Computation Laboratory at Europe’s largest medical research charity, Cancer Research UK (formerly the Imperial Cancer Research Fund), applied Toulmin’s model of argument in computer systems built and deployed in the 1990s to handle conflicting arguments in a given domain.  An example was a system for advising medical practitioners on the arguments for and against prescribing a particular drug to a patient with a particular medical history and disease presentation.  One company commercializing these ideas in medicine is Infermed.    Other applications include the automated prediction of chemical properties such as toxicity (see, for example, the work of Lhasa Ltd), and dynamic optimization of extraction processes in mining.
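A minimal sketch of how such a system can weigh the arguments for and against a prescribing decision follows; the drug, the rules and the weights here are invented for illustration, and the deployed systems were far richer.
```python
# Toy argumentation-style decision support: arguments for and against a
# claim are collected and weighed. Drug, rules and weights are invented.
patient = {"age": 78, "renal_impairment": True, "infection": "bacterial"}

arguments = []  # (direction, reason, weight)
if patient["infection"] == "bacterial":
    arguments.append(("for", "bacterial infection responds to drug X", 2))
if patient["renal_impairment"]:
    arguments.append(("against", "drug X is renally excreted", 3))
if patient["age"] >= 75:
    arguments.append(("against", "higher adverse-event risk in the elderly", 1))

score = sum(w if d == "for" else -w for d, _, w in arguments)
print("recommendation:", "prescribe" if score > 0 else "do not prescribe")
for d, reason, w in arguments:
    print(f"  {d:7s} ({w}): {reason}")
```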
For me, Toulmin’s most influential work was his book Cosmopolis, which identified and deconstructed the main biases evident in contemporary western culture since the work of Descartes:

  • A bias for the written over the oral
  • A bias for the universal over the local
  • A bias for the general over the particular
  • A bias for the timeless over the timely.

Formal logic as a theory of human reasoning can be seen as an example of these biases at work. In contrast, argumentation theory attempts to reclaim the theory of reasoning from formal logic with an approach able to deal with conflicts and gaps, and with special cases, and less subject to such biases.    Norm’s dispute with Larry Teabag is a recent example of resistance to the puritanical, Descartian desire to impose abstract formalisms onto practical reasoning quite contrary to local and particular sense.
Another instance of Descartian autism is the widespread deletion of economic history from graduate programs in economics and the associated privileging of deductive reasoning in abstract mathematical models over other forms of argument (eg, narrative accounts, laboratory and field experiments, field samples and surveys, computer simulation, etc) in economic theory.  One consequence of this autism is the Great Moral Failure of Macroeconomics in the Great World Recession of 2008-onwards.
References:
S. E. Toulmin [1958]:  The Uses of Argument.  Cambridge, UK: Cambridge University Press.
S. E. Toulmin [1990]: Cosmopolis:  The Hidden Agenda of Modernity.  Chicago, IL, USA: University of Chicago Press.

Computing-as-interaction

In its brief history, computer science has enjoyed several different metaphors for the notion of computation.  From the time of Charles Babbage in the nineteenth century until the mid-1960s, most people thought of computation as calculation, or the manipulation of numbers.  Indeed, the English word “computer” was originally used to describe a person undertaking arithmetical calculations.  With widespread digital storage and processing of non-numerical information from the 1960s onwards, computation was re-conceptualized more generally as information processing, or the manipulation of numerical-, text-, audio- or video-data.  This metaphor is probably still the prevailing view among people who are not computer scientists.  From the late 1970s, with the development of various forms of machine intelligence, such as expert systems, a yet more general metaphor of computation as cognition, or the manipulation of ideas, became widespread, at least among computer scientists.  The fruits of this metaphor have been realized, for example, in the advanced artificial intelligence technologies which have now been a standard part of desktop computer operating systems since the mid-1990s.  Windows 95, for example, included a Bayesian network for automated diagnosis of printer faults.
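As a caricature of that style of embedded intelligence (the probabilities below are invented for illustration, not taken from the actual troubleshooter), Bayes' rule updates a fault hypothesis in the light of an observed symptom:
```python
# Bayes' rule for a toy printer-fault diagnosis. All probabilities are
# invented for illustration; they are not from any real troubleshooter.
p_fault = 0.05                      # prior: driver misconfigured
p_symptom_given_fault = 0.90        # blank pages if driver misconfigured
p_symptom_given_no_fault = 0.10     # blank pages from other causes

p_symptom = (p_symptom_given_fault * p_fault
             + p_symptom_given_no_fault * (1 - p_fault))
p_fault_given_symptom = p_symptom_given_fault * p_fault / p_symptom
print(f"P(driver fault | blank pages) = {p_fault_given_symptom:.2f}")  # about 0.32
```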
With the growth of the Internet and the Web over the last two decades, we have reached a position where a new metaphor for computation is required:  computation as interaction, or the joint manipulation of ideas and actions. In this metaphor, computation is something which happens by and through the communications which computational entities have with one another.  Cognition and intelligent behaviour is not something which a computer does on its own, or not merely that, but is something which arises through its interactions with other intelligent computers to which it is connected.  The network is the computer, in Sun Microsystems’ famous phrase.  This viewpoint is a radical reconceptualization of the notion of computation.
In this new metaphor, computation is an activity which is inherently social, rather than solitary, and this view leads to new ways of conceiving, designing, developing and managing computational systems.  One example of the influence of this viewpoint is the model of software as a service, for example in Service Oriented Architectures.  In this model, applications are no longer “compiled together” in order to function on one machine (single-user applications), or distributed applications managed by a single organization (such as most of today’s Intranet applications), but instead are societies of components:

  • These components are viewed as providing services to one another rather than being compiled together.  They may not all have been designed together or even by the same software development team; they may be created, operate and de-commissioned according to different timescales; they may enter and leave different societies at different times and for different reasons; and they may form coalitions or virtual organizations with one another to achieve particular temporary objectives.  Examples are automated procurement systems comprising all the companies connected along a supply chain, or service creation and service delivery platforms for dynamic provision of value-added telecommunications services.
  • The components and their services may be owned and managed by different organizations, and thus have access to different information sources, have different objectives, have conflicting preferences, and be subject to different policies or regulations regarding information collection, storage and dissemination.  Health care management systems spanning multiple hospitals or automated resource allocation systems, such as Grid systems, are examples here.
  • The components are not necessarily activated by human users but may also carry out actions in an automated and co-ordinated manner when certain conditions hold true.  These pre-conditions may themselves be distributed across components, so that action by one component requires prior co-ordination and agreement with other components.  Simple multi-party database commit protocols are examples of this, but significantly more complex co-ordination and negotiation protocols have been studied and deployed, for example in utility computing systems and in ad hoc wireless networks.
  • Intelligent, automated components may even undertake self-assembly of software and systems, to enable adaptation or response to changing external or internal circumstances.  An example is the creation of on-the-fly coalitions in automated supply-chain systems in order to exploit dynamic commercial opportunities.  Such systems resemble those of the natural world and human societies much more than they do the example arithmetical calculation programs typically taught in Fortran classes, and so ideas from biology, ecology, statistical physics, sociology, and economics play an increasingly important role in computer science.

How should we exploit this new metaphor of computation as a social activity, as interaction between intelligent and independent entities, adapting and co-evolving with one another?  The answer, many people believe, lies with agent technologies.  An agent is a computer programme capable of flexible and autonomous action in a dynamic environment, usually an environment containing other agents.  In this abstraction, we have software entities called agents, encapsulated, autonomous and intelligent, and we have demarcated the society in which they operate, a multi-agent system.  Agent-based computing concerns the theoretical and practical working through of the details of this simple two-level abstraction.
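A minimal sketch of that two-level abstraction follows (the names, messages and behaviours are invented for illustration): agents receive messages, decide autonomously, and act, while the multi-agent system is simply the society through which their interactions pass.
```python
# A minimal two-level abstraction: Agent (autonomous, message-driven) and
# MultiAgentSystem (the society in which agents interact). Names and
# behaviours are invented for illustration.
class Agent:
    def __init__(self, name: str):
        self.name = name
        self.inbox: list[str] = []

    def act(self, society: "MultiAgentSystem") -> None:
        for message in self.inbox:
            # a trivially simple policy: accept every proposal received
            if message.startswith("propose:"):
                society.send(self.name, "all", "accept:" + message[len("propose:"):])
        self.inbox.clear()

class MultiAgentSystem:
    def __init__(self, agents: list[Agent]):
        self.agents = {a.name: a for a in agents}

    def send(self, sender: str, receiver: str, message: str) -> None:
        targets = self.agents.values() if receiver == "all" else [self.agents[receiver]]
        for agent in targets:
            if agent.name != sender:
                print(f"{sender} -> {agent.name}: {message}")
                agent.inbox.append(message)

    def step(self) -> None:
        for agent in list(self.agents.values()):
            agent.act(self)

society = MultiAgentSystem([Agent("buyer"), Agent("seller")])
society.send("buyer", "seller", "propose:price=100")
society.step()   # the seller responds autonomously to what it has received
```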
Reference:
Text edited slightly from the Executive Summary of:
M. Luck, P. McBurney, S. Willmott and O. Shehory [2005]: The AgentLink III Agent Technology Roadmap. AgentLink III, the European Co-ordination Action for Agent-Based Computing, Southampton, UK.