Research funding myopia

The British Government, through its higher education funding council, is currently considering the use of socio-economic impact factors in deciding the relative rankings of university departments by research quality in the Research Assessment Exercise (RAE), held about every five years. These impact factors are intended to measure the social or economic impact of research activities within the period of the RAE (ie, within 5 years). Since the RAE is used to allocate funds for research infrastructure to British universities, these impact factors, if implemented, will thus indirectly decide which research groups and which research will be funded.
From the perspective of the national economy and technological progress, these proposals are extremely misguided, and should be opposed by us all.    They demonstrate a profound ignorance of where important ideas come from, of when and where and how they are applied, and of where they end up.  In particular, they demonstrate great ignorance of the multi-disciplinary nature of most socio-economically-impactful research.
One example will demonstrate this vividly. As more human activities move online, more tasks can be automated or semi-automated. To enable this, autonomous computers and other machines need to be able to communicate with one another using shared languages and protocols, and thus much research effort in Computer Science and Artificial Intelligence these last three decades has focused on designing languages and protocols for computer-to-computer communications. These protocols are already used in various computer systems and are likely to be used in future-generation mobile communications and e-commerce systems.
Despite its deep technological nature, research in this area draws fundamentally on past research and ideas from the Humanities, including:

  • Speech Act Theory in the Philosophy of Language (ideas due originally to Adolf Reinach 1913, John Austin 1955, John Searle 1969 and Jürgen Habermas 1981, among others)
  • Formal Logic (George Boole 1854, Clarence Lewis 1910, Ludwig Wittgenstein 1922, Alfred Tarski 1933, Saul Kripke 1959, Jaakko Hintikka 1962, etc), and
  • Argumentation Theory (Aristotle c. 350 BC, Stephen Toulmin 1958, Charles Hamblin 1970, etc).

Assessment of the impacts of research over five years is laughable when Aristotle's work on rhetoric has taken 2300 years to find technological application. Even Boole's algebra took 84 years from its creation to its application in the design of electronic circuits (by Claude Shannon in 1938). None of the humanities scholars responsible were doing their research to promote technologies for computer interaction or to support e-commerce, and most would not even have understood what these terms mean. Of the people I have listed, only John Searle (who contributed to the theory of AI) and Charles Hamblin (who created one of the first computer languages, GEORGE, and who made major contributions to the architecture of early computers, including invention of the memory stack) had any direct connection to computing. Only Hamblin was afforded an obituary by a computer journal (Allen 1985).
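To see how directly these ideas surface in the technology, here is a minimal sketch (in Python; the field names and dispatch logic are illustrative, loosely in the spirit of agent communication languages such as FIPA-ACL, not any real implementation) of a speech-act message exchanged between two machines:

```python
# An illustrative sketch of speech-act-based machine communication.
# Field names are invented; real agent communication languages such as
# FIPA-ACL define a richer set of performatives and parameters.
from dataclasses import dataclass

@dataclass
class Message:
    performative: str   # the speech act performed: "inform", "request", ...
    sender: str
    receiver: str
    content: str

def handle(msg: Message) -> None:
    # The receiver dispatches on the act performed, not merely on the data
    # carried, which is the core idea borrowed from Austin and Searle.
    if msg.performative == "request":
        print(f"{msg.receiver} considers doing: {msg.content}")
    elif msg.performative == "inform":
        print(f"{msg.receiver} now believes: {msg.content}")

handle(Message("request", "buyer-agent", "seller-agent", "quote a price for X"))
handle(Message("inform", "seller-agent", "buyer-agent", "the price of X is 42"))
```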
None of the applications of these ideas to computer science were predicted, or even predictable.  If we do not fund pure research across all academic disciplines without regard to its potential socio-economic impacts, we risk destroying the very source of the ideas upon which our modern society and our technological progress depend.
Reference:
M. W. Allen [1985]: “Charles Hamblin (1922-1985)”. The Australian Computer Journal, 17(4): 194-195.

Vale: Stephen Toulmin

The Anglo-American philosopher Stephen Toulmin has just died, aged 87. One of the areas to which he made major contributions was argumentation, the theory of argument, and his work found and finds application not only in philosophy but also in computer science.
For instance, under the direction of John Fox, the Advanced Computation Laboratory at Europe's largest medical research charity, Cancer Research UK (formerly the Imperial Cancer Research Fund), applied Toulmin's model of argument in computer systems they built and deployed in the 1990s to handle conflicting arguments in a given domain. An example was a system that presented medical practitioners with the arguments for and against prescribing a particular drug to a patient with a particular medical history and disease presentation. One company commercializing these ideas in medicine is Infermed. Other applications include the automated prediction of chemical properties such as toxicity (see, for example, the work of Lhasa Ltd), and dynamic optimization of extraction processes in mining.
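By way of illustration, here is a minimal sketch (in Python; the clinical details are invented and this is not Fox's actual system) of how Toulmin's schema of claim, grounds, warrant, qualifier and rebuttal can be encoded and used to weigh arguments for and against prescribing a drug:

```python
# A hypothetical encoding of Toulmin's argument schema.
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    claim: str            # the conclusion argued for or against
    grounds: str          # the evidence offered in support
    warrant: str          # why the grounds support the claim
    qualifier: str = "presumably"                  # strength of inference
    rebuttals: list = field(default_factory=list)  # known exceptions

def undefeated(argument: ToulminArgument, patient_facts: set) -> bool:
    # An argument stands unless one of its rebuttal conditions holds.
    return not any(r in patient_facts for r in argument.rebuttals)

pro = ToulminArgument(
    claim="prescribe drug X",
    grounds="trials show X shrinks this tumour type",
    warrant="trial populations match this presentation",
    rebuttals=["renal impairment"])

con = ToulminArgument(
    claim="do not prescribe drug X",
    grounds="X interacts badly with anticoagulants",
    warrant="interaction studies",
    rebuttals=["patient not on anticoagulants"])

facts = {"patient not on anticoagulants"}
for arg in (pro, con):
    print(arg.claim, "->", "stands" if undefeated(arg, facts) else "defeated")
```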
For me, Toulmin's most influential work was his book Cosmopolis, which identified and deconstructed the main biases evident in contemporary western culture since the work of Descartes:

  • A bias for the written over the oral
  • A bias for the universal over the local
  • A bias for the general over the particular
  • A bias for the timeless over the timely.

Formal logic as a theory of human reasoning can be seen as an example of these biases at work. In contrast, argumentation theory attempts to reclaim the theory of reasoning from formal logic with an approach able to deal with conflicts and gaps, and with special cases, and less subject to such biases. Norm's dispute with Larry Teabag is a recent example of resistance to the puritanical, Descartian desire to impose abstract formalisms onto practical reasoning quite contrary to local and particular sense.
Another instance of Descartian autism is the widespread deletion of economic history from graduate programs in economics and the associated privileging of deductive reasoning in abstract mathematical models over other forms of argument (eg, narrative accounts, laboratory and field experiments, field samples and surveys, computer simulation, etc) in economic theory.  One consequence of this autism is the Great Moral Failure of Macroeconomics in the Great World Recession of 2008-onwards.
References:
S. E. Toulmin [1958]:  The Uses of Argument.  Cambridge, UK: Cambridge University Press.
S. E. Toulmin [1990]: Cosmopolis:  The Hidden Agenda of Modernity.  Chicago, IL, USA: University of Chicago Press.

The websearch-industrial complex

I think it is now well-known that the creation of the Internet was sponsored by the US Government, through its military research funding agency ARPA (later DARPA). It is perhaps less well-known that Google arose from a $4.5 million research project also sponsored by the US Government, through the National Science Foundation. Let no one say that the USA has an economic system involving "free" enterprise.

In the primordial ooze of Internet content several hundred million seconds ago (1993), fewer than 100 Web sites inhabited the planet. Early clans of information seekers hunted for data among the far larger populations of text-only Gopher sites and FTP file-sharing servers. This was the world in the years before Google.

Straitjackets of Standards

This week I was invited to participate as an expert in a Delphi study of The Future Internet, being undertaken by an EC-funded research project. One of the aims of the project is to identify multiple plausible future scenarios for the socio-economic role(s) of the Internet and related technologies, after which the project aims to reach a consensus on a small number of these scenarios. Although the documents I saw were unclear as to exactly which population this consensus was to be reached among, I presume it was intended to be a consensus of the participants in the Delphi study.
I have a profound philosophical disagreement with this objective, and indeed with most of the EC's many efforts in standardization. Tim Berners-Lee invented the Hyper-Text Transfer Protocol (HTTP), for example, in order to enable physicists to publish their research documents to one another in a manner which enabled author control of document appearance. Like most new technologies, HTTP was not invented for the many other uses to which it has since been put; indeed, many of these other applications have required hacks or fudges to HTTP in order to work. For example, because HTTP does not keep track of the state of a request, fudges such as cookies are needed. If we had all been in consensual agreement with The Greatest Living Briton about the purposes of HTTP, we would have no e-commerce, no blogging, no social networking, no easy remote access to databases, no large-scale distributed collaborations, no easy action-at-a-distance, in short no transformation of our society and life these last two decades, just the broadcast publishing of text documents.
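To illustrate the statelessness point, here is a minimal sketch using only Python's standard library (the session scheme is invented for illustration): the protocol itself forgets each client between requests, so the server issues a cookie token which the client returns on every subsequent request.

```python
# A toy demonstration of the cookie "fudge": HTTP is stateless, so the
# server bolts state on by issuing a token and keeping memory server-side.
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie
import uuid

class StatefulHandler(BaseHTTPRequestHandler):
    sessions = {}  # server-side memory keyed by cookie: the state HTTP lacks

    def do_GET(self):
        cookie = SimpleCookie(self.headers.get("Cookie", ""))
        session_id = cookie["session"].value if "session" in cookie else None
        if session_id not in self.sessions:
            # First visit: HTTP alone cannot recognise a returning client,
            # so we mint a token the client will echo back on each request.
            session_id = uuid.uuid4().hex
            self.sessions[session_id] = {"visits": 0}
        self.sessions[session_id]["visits"] += 1
        body = f"Visit number {self.sessions[session_id]['visits']}\n".encode()
        self.send_response(200)
        self.send_header("Set-Cookie", f"session={session_id}")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), StatefulHandler).serve_forever()
```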
Let us put aside this childish, warm-and-fuzzy, touchy-feely seeking after consensus. Our society benefits most from a diversity of opinions and strong disagreements, a hundred flowers blooming, a cacophony of voices, in the words of Oliver Wendell Holmes. This is particularly true of opinions regarding the uses and applications of innovations. Yet the EC persists, in some recalcitrant chasing after illusive certainty, in trying to force us all into straitjackets of standards and uniform practice. These efforts are misguided and wrong-headed, and deserve to fail.

Myopic utilitarianism

What are the odds, eh? On the same day that the Guardian publishes an obituary of theoretical computer scientist Peter Landin (1930-2009), pioneer of the use of Alonzo Church's lambda calculus as a formal semantics for computer programs, they also report that the Government is planning to fund only research which has relevance to the real world. This is GREAT NEWS for philosophers and pure mathematicians!
What might have seemed, for example,  mere pointless musings on the correct way to undertake reasoning – by Aristotle, by Islamic and Roman Catholic medieval theologians, by numerous English, Irish and American abstract mathematicians in the 19th century, by an entire generation of Polish logicians before World War II, and by those real-world men-of-action Gottlob Frege, Bertrand Russell, Ludwig Wittgenstein and Alonzo Church – turned out to be EXTREMELY USEFUL for the design and engineering of electronic computers.   Despite Russell’s Zen-influenced personal motto – “Just do!  Don’t think!” (later adopted by IBM) – his work turned out to be useful after all.   I can see the British research funding agencies right now, using their sophisticated and proven prognostication procedures to calculate the society-wide economic and social benefits we should expect to see from our current research efforts over the next 2300 years  – ie, the length of time that Aristotle’s research on logic took to be implemented in technology.   Thank goodness our politicians have shown no myopic utilitarianism this last couple of centuries, eh what?!
All while this man apparently received no direct state or commercial research funding for his efforts as a computer pioneer, playing with “pointless” abstractions like the lambda calculus.
And Normblog also comments.
POSTSCRIPT (2014-02-16): And along comes The Cloud and ruins everything! Because the lower layers of the Cloud – the physical infrastructure, operating system, even low-level application software – are fungible and dynamically so, the Cloud is effectively "dark" to its users beneath some level. Specifying and designing applications that will run over it, or systems that will access it, thus requires specification and design to be undertaken at high levels of abstraction. If all you can say about your new system is that in 10 years' time it will grab some data from the NYSE, and nothing (yet) about the format of that data, then you need to speak in abstract generalities, not in specifics. It turns out the lambda calculus is just right for this task, and so London's big banks have been recruiting logicians and formal methods people to spec & design their next-gen systems. You can blame those action men, Church and Russell.
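As a toy illustration of that style of design (all names here are invented, and this is of course a cartoon of what the banks actually do), one can specify a system purely as a composition of functions whose concrete details remain open until the data format is finally known:

```python
# Lambda-calculus-style abstraction: specify a system against an
# interface we cannot yet pin down. All names below are illustrative.
from typing import Callable, TypeVar

Raw = TypeVar("Raw")        # the not-yet-specified wire format
Parsed = TypeVar("Parsed")  # the not-yet-specified parsed form

def pipeline(fetch: Callable[[], Raw],
             parse: Callable[[Raw], Parsed],
             analyse: Callable[[Parsed], float]) -> Callable[[], float]:
    # Nothing here commits to what the data looks like; the whole design
    # lives at the level of function types.
    return lambda: analyse(parse(fetch()))

# Years later, when the format is finally known, plug in the specifics:
system = pipeline(fetch=lambda: "price=42.0",
                  parse=lambda raw: float(raw.split("=")[1]),
                  analyse=lambda price: price * 2)
print(system())  # 84.0
```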

Guerrilla logic: a salute to Mervyn Pragnell

When a detailed history of computer science in Britain comes to be written, one name that should not be forgotten is Mervyn O. Pragnell. As far as I am aware, Mervyn Pragnell never held any academic post and he published no research papers. However, he introduced several of the key players in British computer science to one another and, as importantly, to the lambda calculus of Alonzo Church (Hodges 2001). At a time (the 1950s and 1960s) when logic was not held in much favour in either philosophy or pure mathematics, and before it came to be regarded highly in computer science, he studied the discipline not as a salaried academic in a university, but in a private reading-circle of his own creation, almost as a guerrilla activity.

Pragnell recruited people for his logic reading-circle by haunting London bookshops, approaching people he saw buying logic texts (Bornat 2009).  Among those he recruited to the circle were later-famous computer pioneers such as Rod Burstall, Peter Landin (1930-2009) and Christopher Strachey (1916-1975).  The meetings were held after hours, usually in Birkbeck College, University of London, without the knowledge or permission of the college authorities (Burstall 2000).  Some were held or continued in the neighbouring pub, The Duke of Marlborough.  It seems that Pragnell was employed for a time in the 1960s as a private research assistant for Strachey, working from Strachey’s house (Burstall 2000).   By the 1980s, he was apparently a regular attendee at the seminars on logic programming held at the Department of Computing in Imperial College, London, then (and still) one of the great research centres for the application of formal logic in computer science.

Pragnell’s key role in early theoretical computer science is sadly under-recognized.   Donald MacKenzie’s fascinating history and sociology of automated theorem proving, for example, mentions Pragnell in the text (MacKenzie 2001, p. 273), but manages to omit his name from the index.  Other than this, the only references I can find to his contributions are in the obituaries and personal recollections of other people.  I welcome any other information anyone can provide.

UPDATE (2009-09-23): Today’s issue of The Guardian newspaper has an obituary for theoretical computer scientist Peter Landin (1930-2009), which mentions Mervyn Pragnell.

UPDATE (2012-01-30):  MOP appears also to have been part of a production of the play The Way Out at The Little Theatre, Bristol in 1945-46, according to this web-chive of theatrical info.

UPDATE (2013-02-11):  In this 2001 lecture by Peter Landin at the Science Museum, Landin mentions first meeting Mervyn Pragnell in a cafe in Sheffield, and then talks about his participation in Pragnell’s London reading group (from about minute 21:50).

UPDATE (2019-07-05): I have learnt some further information from a cousin of Mervyn Pragnell, Ms Susan Miles.  From her, I understand that MOP’s mother died in the Influenza Pandemic around 1918, when he was very young, and he was subsequently raised in Cardiff in the large family of a cousin of his mother’s, the Miles family.  MOP’s father’s family had a specialist paint manufacturing business in Bristol, Oliver Pragnell & Company Limited, which operated from 25-27 Broadmead.  This establishment suffered serious bomb damage during WW II.   MOP was married to Margaret and although they themselves had no children, they kept in close contact with their relatives.  Both are remembered fondly by their family.   (I am most grateful to Susan Miles, daughter of Mervyn Miles whose parents raised MOP, for sharing this information.)

References:

Richard Bornat [2009]:  Peter Landin:  a computer scientist who inspired a generation, 5th June 1930 – 3rd June 2009.  Formal Aspects of Computing, 21(5):  393-395.

Rod Burstall [2000]:  Christopher Strachey – understanding programming languages.  Higher-Order and Symbolic Computation, 13:  51-55.

Wilfrid Hodges [2001]:  A history of British logic.  Unpublished slide presentation.  Available from his website.

Peter Landin [2002]:  Rod Burstall:  a personal note. Formal Aspects of Computing, 13:  195.

Donald MacKenzie [2001]:  Mechanizing Proof:  Computing, Risk, and Trust.  Cambridge, MA, USA:  MIT Press.

Computer science, love-child: Part 2

This post is a continuation of the story which began here.
Life for the teenager Computer Science was not entirely lonely, since he had several half-brothers, half-nephews, and lots of cousins, although he was the only one still living at home.   In fact, his family would have required a William Faulkner or a Patrick White to do it justice.
The oldest of Mathematics' children was Geometry, whom CS did not know well because he did not visit very often. When he did visit, G would always bring a sketchpad and make drawings, while the others talked around him. What the boy had heard was that G had been very successful early in his life, with a high-powered job to do with astronomy at someplace like NASA and with lots of people working for him, and with business trips to Egypt and Greece and China and places. But then he'd had an illness or a nervous breakdown, and thought he was traveling through the fourth dimension. CS had once overheard Maths telling someone that G had an "identity crisis", and could not see the point of life anymore, and that he had become an alcoholic. He didn't speak much to the rest of the family, except for Algebra, although all of them still seemed very fond of him, perhaps because he was the oldest brother.

Computer Science, love-child

With the history and pioneers of computing in the British news this week, I've been thinking about a common misconception: many people regard computer science as very closely related to Mathematics, perhaps even a sub-branch of Mathematics. Mathematicians and physical scientists, who often know little about modern computer science and software engineering, and that little often outdated, are among the worst offenders here. For some reason, they often think that computer science consists of Fortran programming and the study of algorithms, which has been a long way from the truth for, oh, the last few decades. (I have past personal experience of the online vitriol which ignorant pure mathematicians can unleash on those who dare to suggest that computer science might involve the application of ideas from philosophy, economics, sociology or ecology.)
So here's my story: Computer Science is the love-child of Pure Mathematics and Philosophy.

Computing-as-interaction

In its brief history, computer science has enjoyed several different metaphors for the notion of computation. From the time of Charles Babbage in the nineteenth century until the mid-1960s, most people thought of computation as calculation, or the manipulation of numbers. Indeed, the English word "computer" was originally used to describe a person undertaking arithmetical calculations. With widespread digital storage and processing of non-numerical information from the 1960s onwards, computation was re-conceptualized more generally as information processing, or the manipulation of numerical, text, audio or video data. This metaphor is probably still the prevailing view among people who are not computer scientists. From the late 1970s, with the development of various forms of machine intelligence, such as expert systems, a yet more general metaphor of computation as cognition, or the manipulation of ideas, became widespread, at least among computer scientists. The fruits of this metaphor have been realized, for example, in the advanced artificial intelligence technologies which have now been a standard part of desktop computer operating systems since the mid-1990s. Windows 95, for example, included a Bayesian network for automated diagnosis of printer faults.
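For a flavour of what such diagnostic computation involves, here is a toy Bayesian update (the probabilities are invented for illustration, not Microsoft's actual model): given that a print job failed, revise our belief that the printer driver is misconfigured.

```python
# Bayes' rule applied to a toy printer-fault diagnosis.
p_bad_driver = 0.1            # prior P(bad driver)
p_fail_given_bad = 0.9        # P(print fails | bad driver)
p_fail_given_ok = 0.05        # P(print fails | driver ok)

p_fail = (p_fail_given_bad * p_bad_driver
          + p_fail_given_ok * (1 - p_bad_driver))
posterior = p_fail_given_bad * p_bad_driver / p_fail
print(f"P(bad driver | print failed) = {posterior:.2f}")  # approx. 0.67
```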
With the growth of the Internet and the Web over the last two decades, we have reached a position where a new metaphor for computation is required: computation as interaction, or the joint manipulation of ideas and actions. In this metaphor, computation is something which happens by and through the communications which computational entities have with one another. Cognition and intelligent behaviour are not something which a computer does on its own, or not merely that, but something which arises through its interactions with the other intelligent computers to which it is connected. The network is the computer, in Sun's famous phrase. This viewpoint is a radical reconceptualization of the notion of computation.
In this new metaphor, computation is an activity which is inherently social, rather than solitary, and this view leads to new ways of conceiving, designing, developing and managing computational systems. One example of the influence of this viewpoint is the model of software as a service, for example in Service-Oriented Architectures. In this model, applications are no longer "compiled together" in order to function on one machine (single-user applications), or distributed applications managed by a single organization (such as most of today's Intranet applications), but instead are societies of components:

  • These components are viewed as providing services to one another rather than being compiled together. They may not all have been designed together or even by the same software development team; they may be created, operated and de-commissioned according to different timescales; they may enter and leave different societies at different times and for different reasons; and they may form coalitions or virtual organizations with one another to achieve particular temporary objectives. Examples are automated procurement systems comprising all the companies connected along a supply chain, or service creation and service delivery platforms for dynamic provision of value-added telecommunications services.
  • The components and their services may be owned and managed by different organizations, and thus have access to different information sources, have different objectives, have conflicting preferences, and be subject to different policies or regulations regarding information collection, storage and dissemination.  Health care management systems spanning multiple hospitals or automated resource allocation systems, such as Grid systems, are examples here.
  • The components are not necessarily activated by human users but may also carry out actions in an automated and co-ordinated manner when certain conditions hold true. These pre-conditions may themselves be distributed across components, so that action by one component requires prior co-ordination and agreement with other components. Simple multi-party database commit protocols are examples of this (see the sketch after this list), but significantly more complex co-ordination and negotiation protocols have been studied and deployed, for example in utility computing systems and in ad hoc wireless networks.
  • Intelligent, automated components may even undertake self-assembly of software and systems, to enable adaptation or response to changing external or internal circumstances. An example is the creation of on-the-fly coalitions in automated supply-chain systems in order to exploit dynamic commercial opportunities. Such systems resemble those of the natural world and human societies much more than they do the example arithmetical calculation programs typically taught in Fortran classes, and so ideas from biology, ecology, statistical physics, sociology, and economics play an increasingly important role in computer science.
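The multi-party commit protocols mentioned in the list above can be sketched very simply. Here is a minimal, illustrative two-phase commit in Python (class names invented; a real protocol adds durable logging, timeouts and recovery): the co-ordinator commits only if every participant first votes to proceed.

```python
# A toy two-phase commit: distributed pre-conditions for a joint action.
class Participant:
    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit

    def prepare(self) -> bool:
        # Phase 1: vote; a real participant would durably log its vote here.
        return self.will_commit

    def commit(self):
        print(f"{self.name}: committed")

    def abort(self):
        print(f"{self.name}: rolled back")

def two_phase_commit(participants) -> bool:
    # Phase 1: solicit votes from every party.
    if all(p.prepare() for p in participants):
        for p in participants:   # Phase 2: unanimous yes, so all commit.
            p.commit()
        return True
    for p in participants:       # Any "no" vote aborts the transaction.
        p.abort()
    return False

two_phase_commit([Participant("warehouse-db"), Participant("billing-db")])
```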

How should we exploit this new metaphor of computation as a social activity, as interaction between intelligent and independent entities, adapting and co-evolving with one another?  The answer, many people believe, lies with agent technologies.  An agent is a computer programme capable of flexible and autonomous action in a dynamic environment, usually an environment containing other agents.  In this abstraction, we have software entities called agents, encapsulated, autonomous and intelligent, and we have demarcated the society in which they operate, a multi-agent system.  Agent-based computing concerns the theoretical and practical working through of the details of this simple two-level abstraction.
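As a toy sketch of this two-level abstraction (in Python; invented names, not any real agent platform), consider autonomous agents that each choose their own actions, interacting by messages within a demarcated multi-agent system:

```python
# A toy agent abstraction: agents act autonomously; the system's
# computation arises from their interactions with one another.
import random

class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def act(self, society):
        # Autonomy: the agent decides its own action each round, here by
        # (randomly) electing to message some other agent in the society.
        others = [a for a in society.agents if a is not self]
        if others and random.random() < 0.5:
            random.choice(others).inbox.append(f"hello from {self.name}")

class MultiAgentSystem:
    def __init__(self, agents):
        self.agents = agents

    def run(self, rounds):
        for _ in range(rounds):
            for agent in self.agents:   # interaction: computation happens
                agent.act(self)         # through messages between agents

mas = MultiAgentSystem([Agent("a"), Agent("b"), Agent("c")])
mas.run(rounds=5)
for agent in mas.agents:
    print(agent.name, agent.inbox)
```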
Reference:
Text edited slightly from the Executive Summary of:
M. Luck, P. McBurney, S. Willmott and O. Shehory [2005]: The AgentLink III Agent Technology Roadmap. AgentLink III, the European Co-ordination Action for Agent-Based Computing, Southampton, UK.