Computing-as-interaction

In its brief history, computer science has enjoyed several different metaphors for the notion of computation.  From the time of Charles Babbage in the nineteenth century until the mid-1960s, most people thought of computation as calculation, or the manipulation of numbers.  Indeed, the English word “computer” was originally used to describe a person undertaking arithmetical calculations.  With widespread digital storage and processing of non-numerical information from the 1960s onwards, computation was re-conceptualized more generally as information processing, or the manipulation of numerical, text, audio or video data.  This metaphor is probably still the prevailing view among people who are not computer scientists.  From the late 1970s, with the development of various forms of machine intelligence, such as expert systems, a yet more general metaphor of computation as cognition, or the manipulation of ideas, became widespread, at least among computer scientists.  The fruits of this metaphor have been realized, for example, in the advanced artificial intelligence technologies which have been a standard part of desktop computer operating systems since the mid-1990s.  Windows 95, for example, included a Bayesian network for automated diagnosis of printer faults.
With the growth of the Internet and the Web over the last two decades, we have reached a position where a new metaphor for computation is required:  computation as interaction, or the joint manipulation of ideas and actions.  In this metaphor, computation is something which happens by and through the communications which computational entities have with one another.  Cognition and intelligent behaviour are not something which a computer does on its own, or not merely that, but something which arises through its interactions with the other intelligent computers to which it is connected.  The network is the computer, in Sun’s famous phrase.  This viewpoint is a radical reconceptualization of the notion of computation.
In this new metaphor, computation is an activity which is inherently social, rather than solitary, and this view leads to new ways of conceiving, designing, developing and managing computational systems.  One example of the influence of this viewpoint is the model of software as a service, for example in Service-Oriented Architectures.  In this model, applications are no longer “compiled together” in order to function on one machine (single-user applications), nor distributed applications managed by a single organization (such as most of today’s Intranet applications), but are instead societies of components:

  • These components are viewed as providing services to one another rather than being compiled together.  They may not all have been designed together or even by the same software development team; they may be created, operate and de-commissioned according to different timescales; they may enter and leave different societies at different times and for different reasons; and they may form coalitions or virtual organizations with one another to achieve particular temporary objectives.  Examples are automated procurement systems comprising all the companies connected along a supply chain, or service creation and service delivery platforms for dynamic provision of value-added telecommunications services.
  • The components and their services may be owned and managed by different organizations, and thus have access to different information sources, have different objectives, have conflicting preferences, and be subject to different policies or regulations regarding information collection, storage and dissemination.  Health care management systems spanning multiple hospitals or automated resource allocation systems, such as Grid systems, are examples here.
  • The components are not necessarily activated by human users but may also carry out actions in an automated and co-ordinated manner when certain conditions hold true.  These pre-conditions may themselves be distributed across components, so that action by one component requires prior co-ordination and agreement with other components.  Simple multi-party database commit protocols are examples of this (a minimal sketch of such a protocol follows this list), but significantly more complex co-ordination and negotiation protocols have been studied and deployed, for example in utility computing systems and in ad hoc wireless networks.
  • Intelligent, automated components may even undertake self-assembly of software and systems, to enable adaptation or response to changing external or internal circumstances.  An example is the creation of on-the-fly coalitions in automated supply-chain systems in order to exploit dynamic commercial opportunities.  Such systems resemble those of the natural world and human societies much more than they do the arithmetical calculation programs typically taught in introductory Fortran classes, and so ideas from biology, ecology, statistical physics, sociology, and economics play an increasingly important role in computer science.
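
To make the distributed pre-condition point concrete, here is a minimal sketch of a multi-party commit protocol in Python (the class and function names are mine, purely illustrative): the commit action is taken only if every participant first agrees, so the pre-condition for action is genuinely spread across the components.

```python
# A minimal sketch of two-phase commit: the action (commit) happens only
# if every participant agrees in the "prepare" phase first.

class Participant:
    def __init__(self, name, can_commit):
        self.name = name
        self.can_commit = can_commit  # this participant's local pre-condition

    def prepare(self):
        # Phase 1: vote on whether this participant can commit.
        return self.can_commit

    def commit(self):
        print(f"{self.name}: committed")

    def abort(self):
        print(f"{self.name}: aborted")


def two_phase_commit(participants):
    # Phase 1: collect votes from all participants.
    if all(p.prepare() for p in participants):
        # Phase 2: everyone voted yes, so all commit.
        for p in participants:
            p.commit()
        return True
    # At least one participant refused: all abort.
    for p in participants:
        p.abort()
    return False


two_phase_commit([Participant("db1", True), Participant("db2", True)])   # commits
two_phase_commit([Participant("db1", True), Participant("db2", False)])  # aborts
```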

How should we exploit this new metaphor of computation as a social activity, as interaction between intelligent and independent entities, adapting and co-evolving with one another?  The answer, many people believe, lies with agent technologies.  An agent is a computer programme capable of flexible and autonomous action in a dynamic environment, usually an environment containing other agents.  In this abstraction, we have software entities called agents, encapsulated, autonomous and intelligent, and we have demarcated the society in which they operate, a multi-agent system.  Agent-based computing concerns the theoretical and practical working through of the details of this simple two-level abstraction.
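
As a minimal sketch of this two-level abstraction (the names and structure are mine, not any standard agent platform), each agent runs its own perceive-decide-act loop, and the multi-agent system is simply the society of agents sharing an environment:

```python
# A minimal sketch of the agent / multi-agent-system abstraction.
# Each agent perceives its environment, decides autonomously, and acts;
# the "system" is just the society of interacting agents.

class Agent:
    def __init__(self, name):
        self.name = name

    def perceive(self, environment):
        # Sense only what is addressed to this agent.
        return environment.get("messages", {}).get(self.name, [])

    def decide(self, percepts):
        # Flexible, autonomous choice; here just a stub.
        return f"reply-to-{len(percepts)}-messages"

    def act(self, action, environment):
        environment.setdefault("log", []).append((self.name, action))


class MultiAgentSystem:
    def __init__(self, agents):
        self.agents = agents
        self.environment = {"messages": {}, "log": []}

    def step(self):
        # One round: every agent senses, decides and acts in turn.
        for agent in self.agents:
            percepts = agent.perceive(self.environment)
            action = agent.decide(percepts)
            agent.act(action, self.environment)


mas = MultiAgentSystem([Agent("a1"), Agent("a2")])
mas.step()
print(mas.environment["log"])
```
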
Reference:
Text edited slightly from the Executive Summary of:
M. Luck, P. McBurney, S. Willmott and O. Shehory [2005]: The AgentLink III Agent Technology Roadmap. AgentLink III, the European Co-ordination Action for Agent-Based Computing, Southampton, UK.

Social forecasting: Doppio Software

Five years ago, back in the antediluvian era of Web 2.0 (the web as enabler and facilitator of social networks), we had the idea of social-network forecasting.  We developed a product to enable a group of people to share and aggregate their forecasts of something, via the web.  Because reducing greenhouse gases was also becoming the flavour du jour, we applied these ideas to social forecasts of the price of the European Union’s carbon emission permits, in a nifty product we called Prophets-360.  Sadly, due mainly to poor regulatory design of the European carbon emission market, supply greatly outstripped demand for emissions permits, and the price of permits fell quickly and has mostly stayed fallen.  A flat curve is not difficult to predict, and certainly there was little value in comparing one person’s forecast with another’s.  Our venture was also felled.
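
Prophets-360’s actual aggregation rule is not described here, but the basic idea of social forecasting can be sketched in a few lines: a group forecast as a (possibly weighted) average of members’ individual forecasts, with the weights, for instance, reflecting past accuracy.  All names and numbers below are invented for illustration.

```python
# Toy illustration of social-network forecasting: aggregate a group's
# individual forecasts into one group forecast via a weighted average.
# The weighting rule is an assumption, not a description of Prophets-360.

def aggregate(forecasts, weights=None):
    if weights is None:
        weights = [1.0] * len(forecasts)
    total = sum(weights)
    return sum(f * w for f, w in zip(forecasts, weights)) / total

# Three members forecast the permit price; the third has been more
# accurate in the past, so her forecast counts double.
print(aggregate([14.0, 15.5, 13.0], weights=[1.0, 1.0, 2.0]))
```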

But now the second generation of social networking forecasting tools has arrived.  I see that a French start-up, Doppio Software, has recently launched publicly.   They appear to have a product which has several advantages over ours:

  • Doppio Software is focused on forecasting demand along a supply chain.  This means the forecasting objective is very tactical, not the long-term strategic forecasting that CO2 emission permit prices became.  In the present economic climate, short-term tactical success is certainly more compelling to business customers than forecasts looking five years hence.
  • The relevant social network for a supply chain is a much stronger community of interest than the amorphous groups we had in mind for Prophets-360.  Firstly, this community already exists (for each chain), and does not need to be created.  Secondly, the members of the community by definition have differential access to information, on the basis of their different positions up and down the chain.  Thirdly, although the interests of the partners in a supply chain are not identical, these interests are mutually-reinforcing:  everyone in the chain benefits if the chain itself is more successful at forecasting throughput.
  • In addition, Team Doppio (the Doppiogangers?) appear to have included a very compelling value-add: their own automated modeling of causal relationships between the target demand variables of each client and general macro-economic variables, using semantic-web data and qualitative modeling technologies from AI (a toy illustration of the underlying idea follows this list).  Only the largest manufacturing companies can afford their own econometricians, and such people will normally only be able to hand-craft models for the most important variables.  There are few companies IMO who would not benefit from Doppio’s offer here.
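
Doppio’s qualitative and semantic-web methods are not public, so the following is only a crude statistical stand-in for the idea in the last bullet: relating a client’s demand series to macro-economic indicator series, here by ordinary least squares.  The data are invented.

```python
import numpy as np

# Purely illustrative stand-in for linking a client's demand to
# macro-economic variables; not Doppio's actual method.

# Hypothetical data: 6 periods of demand and two macro indicators
# (e.g. GDP growth and change in consumer confidence).
demand = np.array([100.0, 104.0, 101.0, 110.0, 115.0, 112.0])
macro = np.array([
    [2.1, 0.5], [2.3, 0.6], [2.0, 0.4],
    [2.8, 0.9], [3.0, 1.1], [2.9, 1.0],
])

# Fit demand ~ intercept + macro variables by least squares.
X = np.hstack([np.ones((len(macro), 1)), macro])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
print("intercept and macro coefficients:", coef)
```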

Of course, I’ve not seen the Doppio interface and a lot will hinge on its ease-of-use (as with all software aimed at business users).  But this offer appears to be very sophisticated, well-crafted and compelling, combining social network forecasting, intelligent causal modeling and semantic web technologies.

Well done, Team Doppio!  I wish you every success with this product!

PS:  I have just learnt that “doppio” means “double”, which makes it a very apposite name for this application – forecasts considered by many people, across their human network.  Neat!  (2009-09-16)

Article in The Observer (UK) about Doppio 2009-09-06 here. And here is an AFP TV news story (2009-09-15) about Doppio co-founder, Edouard d’Archimbaud.  Another co-founder is Benjamin Haycraft.

Action-at-a-distance

For at least 22 years, I have heard business presentations (ie, not just technical presentations) given by IT companies which mention client-server architectures.  For the last 17 of those years, this has not been surprising, since both the Hyper-Text Transfer Protocol (HTTP) and the World-Wide Web (WWW) use this architecture.  In a client-server architecture, one machine (the client) requests that some action be taken by another machine (the server), which responds to the request.  For HTTP, the standard request by the client is for the server to send to the client some electronic file, such as a web-page.  The response by the server is not necessarily to undertake the action requested.  Indeed, the specifications of HTTP define 41 responses (so-called status codes), including outright refusal by the server (Client Error 403 “Forbidden”), and allow for hundreds more to be defined.  Typically, one server will be configured to respond to many simultaneous or near-simultaneous client requests.  The functions of client and server are conceptually quite distinct, although of course one machine may undertake both functions, and a server may even have to make a request as a client to another server in order to respond to an earlier request from its clients.  As an analogy, consider a library which acts like a server of books to its readers, who are its clients; a library may have to request a book via inter-library loan from another library in order to satisfy a reader’s request.
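
The exchange is easy to observe with Python’s standard library; a minimal sketch (the host and path are placeholders):

```python
import http.client

# Minimal client-server exchange over HTTP: the client requests a file,
# and the server replies with a status code that may or may not grant it.
conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/index.html")
response = conn.getresponse()

print(response.status, response.reason)  # e.g. "200 OK" or "403 Forbidden"
if response.status == 403:
    # The server has refused outright; the requested action is not taken.
    print("Server refused the request")
body = response.read()
conn.close()
```
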
Since the rise of file sharing, particularly illegal file sharing, over a decade ago, it has also been common to hear talk of Peer-to-Peer (P2P) architectures.  Conceptually, in these architectures all machines are viewed equally, and none are especially distinguished as servers.  Here, there is no central library of books; rather, each reader owns some books and is willing to lend them to any other reader as and when needed.  Originally, peer-to-peer architectures were invented to circumvent laws on copyright, but they turn out (as do most technical innovations) to have other, more legal, uses – such as the distributed storage and sharing of electronic documents in large organizations (eg, x-ray images in networks of medical clinics).
Both client-server and P2P architectures involve attempts at remote control.  A client or a peer-machine makes a request of another machine (a server or another peer, respectively), to undertake some action(s) at the location of the second machine.   The second machine receiving the request from the first may or may not execute the request.   This has led me to think about models of such action-at-a-distance.
Imagine we have two agents (human or software), named A and B, at different locations, and a resource, named X, at the same location as B.  For example, X could be an electron microscope, B the local technician at the site of the microscope, and A a remote user of the microscope.  Suppose further that agent B can take actions directly to control resource X.  Agent A may or may not have permissions or powers to act on X.
Then, we have the following five possible situations:

1.  Agent A controls X directly, without agent B’s involvement (ie, A has remote access to and remote control over resource X).
2.  Agent A commands agent B to control X (ie, A and B have a master-slave relationship; some client-server relationships would fall into this category).
3.  Agent A requests agent B to control X (ie, both A and B are autonomous agents; P2P would be in this category, as well as many client-server interactions).
4.  Both agent A and agent B need to take actions jointly to control X (eg, the double-key system for launching nuclear missiles in most nuclear-armed forces; coalitions of agents would be in this category).
5.  Agent A has no powers, direct or indirect, to control resource X.

As far as I can tell, these five situations exhaust the possible relationships between agents A and B acting on resource X, at least for those cases where potential actions on X are initiated by agent A.  From this outline, we can see the relevance of much that is now being studied in computer science:

  • Action co-ordination (Cases 1-5)
  • Command dialogs (Case 2)
  • Persuasion dialogs (Case 3)
  • Negotiation dialogs (dialogs to divide a scarce resource) (Case 4)
  • Deliberation dialogs (dialogs over what actions to take) (Cases 1-4)
  • Coalitions (Case 4).

To the best of my knowledge, there is as yet no formal theory which encompasses these five cases.   (I welcome any suggestions or comments to the contrary.)  Such a formal theory is needed as we move beyond Web 2.0 (the web as means to create and sustain social networks) to reification of the idea of computing-as-interaction (the web as a means to co-ordinate joint actions).
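
As a trivial first step, the taxonomy itself can at least be made explicit in code.  A sketch (the names are mine) encoding the five cases and the dialog types relevant to each, per the lists above:

```python
from enum import Enum

# The five control relationships between agent A, agent B and resource X,
# with the dialog types relevant to each (per the lists above).

class ControlCase(Enum):
    DIRECT_REMOTE_CONTROL = 1   # A controls X directly
    COMMAND = 2                 # A commands B to control X
    REQUEST = 3                 # A requests B to control X
    JOINT_ACTION = 4            # A and B must act jointly on X
    NO_POWER = 5                # A cannot control X at all

RELEVANT_DIALOGS = {
    ControlCase.DIRECT_REMOTE_CONTROL: {"coordination", "deliberation"},
    ControlCase.COMMAND: {"coordination", "deliberation", "command"},
    ControlCase.REQUEST: {"coordination", "deliberation", "persuasion"},
    ControlCase.JOINT_ACTION: {"coordination", "deliberation",
                               "negotiation", "coalition"},
    ControlCase.NO_POWER: {"coordination"},
}

print(RELEVANT_DIALOGS[ControlCase.REQUEST])
```
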
Reference:
Network Working Group [1999]: Hypertext Transfer Protocol – HTTP/1.1. Technical Report RFC 2616.  Internet Engineering Task Force.

Commuting in the age of email

If you believe, as the prevailing social metaphor would have it, that this is the Age of Information, then you could easily imagine that the main purpose of human interactions is to request and provide information.  That seems to be the implicit assumption underlying Lane Wallace’s discussion of commuting and working-from-home here.   Wallace is surprised that anyone still travels to work, when information can be transferred so much more readily by phone, email and the web.
But the primary purpose of most workplace interactions is not information transfer, or this is so only incidentally.  Rather, workplace interactions are about the co-ordination of actions — identifying and assessing alternatives for future action, planning and co-ordinating future actions, and reporting on past actions undertaken or current actions being executed.    To engage in such interactions about action of course involves requests for and transfers of information.    To the extent that this is the case, such interactions can be and indeed are undertaken with participants separated in space and time.   But co-ordination of actions requires very different speech acts to those (relatively simple) locutions seeking and providing information:  speech acts such as proposals, promises, requests, entreaties, and commands.  These speech acts have two distinct and characteristic features — they usually require uptake (the intended hearer or actor must agree to the action before the action is undertaken), and the person with the power of retraction or revocation is not necessarily the initial speaker.   An accepted promise can only be revoked by the person to whom the promise is made, for instance, not by the person who made the promise. So, by their very nature these locutions are dialogical acts, not monolectical.   You can’t meaningfully give commands to yourself, for example, and what value is a promise made in a forest?  Neither of these two features apply to speech acts involving requests for information or responses to requests for information.
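
These two features can be made precise in a small sketch (the class is illustrative, not any standard API): a promise does not bind until the addressee gives uptake, and once accepted it can be revoked only by the addressee, not by the promiser.

```python
# Sketch of the two features of action speech-acts described above:
# a promise needs uptake by the addressee before it binds, and once
# accepted it can be revoked only by the addressee, not the promiser.

class Promise:
    def __init__(self, speaker, addressee, action):
        self.speaker = speaker
        self.addressee = addressee
        self.action = action
        self.accepted = False
        self.revoked = False

    def accept(self, by):
        # Uptake: only the addressee can accept the promise.
        if by != self.addressee:
            raise ValueError("only the addressee can give uptake")
        self.accepted = True

    def revoke(self, by):
        # Once accepted, only the addressee may release the promiser.
        if not self.accepted:
            raise ValueError("no accepted promise to revoke")
        if by != self.addressee:
            raise ValueError("only the promisee can revoke an accepted promise")
        self.revoked = True


p = Promise("alice", "bob", "deliver report")
p.accept(by="bob")   # uptake by the hearer
p.revoke(by="bob")   # bob releases alice; alice herself could not do this
```
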
In addition, inherent in speech acts over actions is the notion of intentionality.  If I promise to you to do action X, then I am expressing an intention to do X.  If your goals require that action X be commenced or done, then you need to assess how sincere and how feasible my promise is.  Part of your assessment may be based on your past experience with me, and/or the word of others you trust about me (my reputation).  Thus it is perfectly possible for you to assess my capability and my sincerity without ever meeting me.  International transactions across all sorts of industries have taken place for centuries between parties who never met; the need to assess sincerity and capability is surely a key reason for the dominance of families (eg, the Rothschilds in the 18th and 19th centuries) and close-knit ethnic groups (eg, the Chinese diaspora) in international trade networks.  But, if you don’t know me already, it is generally much easier and more reliable for you to assess my sincerity and capability by looking me in the eye as I make my promise to you.
Bloggers and writers and professors, who rarely need to co-ordinate actions with anyone to achieve their work goals, seem not to understand these issues very well.  But these issues are known to anyone who actually does anything in the world, whether in politics, in public administration or in business.  One defining feature of modern North American corporate culture, in my experience, is that most people find it preferable to make promises of actions even when they do not yet have, and when they know that they do not yet have, the capabilities or resources required to undertake the actions promised.  They do this, rather than not make the promise or make the promise conditional on obtaining the necessary resources, in order to appear “positive” to their bosses.  This is the famous “Can Do” attitude at work, and I have discussed it tangentially before in connection with the failure of the Bay of Pigs; its contribution to the failures of modern American business needs a separate post.

At the hot gates: a salute to Nate Fick

After viewing The Wire, certainly the best television series I have ever seen (and perhaps the best ever made), I naturally sought out Generation Kill, from the same writing team – David Simon and Ed Burns.  Also gripping and intelligent viewing, although (unlike The Wire) we only see one side’s view of the conflict.  The series follows a US Marine platoon, Second Platoon of Bravo Company of the 1st Reconnaissance Battalion, 1st Marine Division, as they invade Iraq in March-April 2003.  Like Band of Brothers, we come to know the platoon and its members very well, feeling joy at their wins and sorrow at their losses.  The TV series is based on the eponymous 2004 book by a journalist, Evan Wright, who was embedded with the platoon in this campaign.
The TV series led me, however,  to read another book about this platoon, written by its commanding officer Lt. Nathaniel Fick (played in the series by actor Stark Sands).    The book is superb!    Fick writes extremely well, intelligently and evocatively, of his training and his battle experiences.  His prose style is direct and uncluttered, without being a parody of itself (as is, say, Hemingway’s).  His writing is remarkably smooth, gliding along, and this aspect reminded me of Doris Lessing, on one of her good days.   Fick clearly has a firm moral centre (perhaps an outcome of his Jesuit high school education), evident from his initial decision to apply to the military while still an undergraduate classics major at Dartmouth.     Having felt a similarly-strong desire as an undergraduate to experience life at the hot gates, I empathized immensely with his description of himself at that time.   Fick’s moral grounding is shown throughout the book, not only in the decisions he takes in battle, and his reflections on these decisions, but also in the way he refrains from naming those of his commanding officers whom he does not respect.    He also shows enormous loyalty to the men he commanded.
And Fick’s experiences demonstrate again that no organization, not even a military force, can succeed for very long when commands are only obeyed mindlessly.  Successful execution of commands requires intelligent dialogue between commanders and recipients, in a process of argumentation, to ensure that uttered commands are actionable, appropriate, feasible, effective, consistent, ethical and advisable.  Consequently, the most interesting features of the book for me were the descriptions of decision-making, descriptions often implicit.  Officers and non-officers, it seems, are drilled, through hours of rote learning, in the checklists and guiding principles necessary for low-level, tactical decision-making, so that these decisions can be automatic.  Only after these mindless drills are second nature are trainee officers led to reflect on the wider (strategic and ethical) aspects of decisions, of decision-making and of actions.  I wonder to what extent such an approach would work in business, where most decision-making, even the most ordinary and tactical, is acquired through direct experience and not usually taught as drills.  Mainly this is because we lack codification of low-level decision-making, although strong FMCG companies such as Mars or Unilever come closest to codifying tactical decision-making.
Fick’s frequent frustrations with the commands issued to him seem to arise because these commands often ignore basic tactical constraints (such as the area of impact of weapons or the direction of firing of weapons), and because they often seem to be driven by a concern for appearances over substantive outcomes.  In contrast to this frustration, one of Fick’s heroes among his commanders is Major Richard Whitmer, whose unorthodox managerial style and keen intelligence are well described.  A military force able to accommodate such a style is to be admired, so I hope it is not a reflection on the USMC that Whitmer appears to have spent the years since the Iraq invasion running a marine recruitment office.  Next time I’m CEO of a Fortune 500 company, I’ll actively try to recruit Whitmer and Fick, since they are both clearly superb managers.
I was also struck by how little the troops on the ground in Iraq knew of the larger, strategic picture.  Fick’s team relied on broadcasts from the BBC World Service, heard on a personal, non-military-issue transistor radio, to learn what was happening as they invaded Iraq.  We who were not involved in the war also relied on the BBC, particularly Mark Urban’s fascinating daily strategic analyses on BBC TV’s Newsnight.  Were we remote viewers better informed than those on the ground in Iraq?  Quite possibly.
Nathaniel Fick now works for a defence think tank, the Center for a New American Security.  A 2006 speech he gave at the Pritzker Military Library in Chicago can be seen here.  A seminar talk to Johns Hopkins University’s series on Rethinking the Future Nature of Competition and Conflict can be found here (scroll down to 2006-01-25).  And here is Fick’s take on recent war poetry.
References:
K. Atkinson et al. [2008]: Command dialogues. In: I. Rahwan and P. Moraitis (Editors): Proceedings of the Fifth International Workshop on Argumentation in Multi-Agent Systems (ArgMAS 2008), AAMAS 2008, Lisbon, Portugal.
Nathaniel Fick [2005]:  One Bullet Away:  The Making of a Marine Officer.  London, UK:  Phoenix.
Evan Wright [2004]:  Generation Kill. Putnam.

Organizational Cognition

Over at Unrepentant Generalist, Eric Nehrlich is asking interesting questions about organizational cognition.   His post brings to mind the studies of decision-making by traders in financial markets undertaken in recent years by Donald MacKenzie at Edinburgh University, who argues that the locus of decision-making is usually not an individual trader, nor a group of traders working for a single company, nor even a group of traders working for a single company together with their computer models, but a group of traders working for a single company with their computer models and with their competitors. Information about market events, trends and opportunities is passed from traders at one company to traders from another through informal contacts and personal networks, and this information then informs decision-making by all of them.
It is possible, of course, for traders to pass false or self-serving information to competitors, but in an environment of repeated interactions and of job movements, the negative consequences of such actions will eventually be felt by the perpetrators themselves.  As evolutionary game theory would predict, everyone thus has long-term incentives to behave honourably in the short-term.  Of course, different market participants may evaluate this long-term/short-term tradeoff differently, and so we may still see the creation and diffusion of false rumours, something which financial markets regulators have tried to prevent.
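
The game-theoretic intuition can be seen in a toy simulation, using the textbook iterated prisoner’s dilemma payoffs (this is my illustration, not MacKenzie’s analysis): read “cooperate” as passing honest information and “defect” as passing false information.  A defector profits once against a reciprocating partner, and is then punished on every subsequent round.

```python
# Toy iterated prisoner's dilemma, with textbook payoffs: "C" = pass
# honest information to a competitor, "D" = pass false information.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first; thereafter copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=20):
    moves_a, moves_b = [], []  # each side's own past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a += pay_a
        score_b += pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (60, 60): honesty pays over time
print(play(tit_for_tat, always_defect))  # (19, 24): one-off gain, then punished
```
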
Reference:
Donald MacKenzie [2009]: Material Markets: How Economic Agents are Constructed.  Oxford, UK:  Oxford University Press.

Presidential planning

Gordon Goldstein has some advice for President-elect Obama in managing his advisors.  Goldstein prefaces his remarks by a potted history of John F. Kennedy’s experience with the CIA-planned Bay of Pigs action, an attempted covert invasion of Cuba.    Although Goldstein’s general advice to Obama may be wise, he profoundly mis-characterizes the Bay of Pigs episode, and thus the management lessons it provides.   As we have remarked before, one aspect of that episode was that although the action was planned and managed by CIA, staff in the White House – including JFK himself! – unilaterally revised the plans right up until the moment of the invasion.   Indeed, the specific site in Cuba of the invasion was changed – at JFK’s order, and despite CIA’s reluctance – just 4 days before the scheduled date.  This left insufficient time to revise the plans adequately, and all but guaranteed failure.  The CIA man in charge, Dick Bissell, in his memoirs, regretted that he had not opposed the White House revisions more forcefully.
Anyone who has worked for a US multi-national will be familiar with this problem – bosses flying in, making profound, last-minute changes to detailed plans without proper analysis and apparently on whim, and then leaving middle management to fix everything.  Middle management are also assigned the role of taking the blame.  This has happened so often in my experience that I have come to see it as a specific trope of contemporary American culture — the supermanager, able to change detailed plans at a moment’s notice!  Even Scott Adams has recorded the phenomenon.  It is to JFK’s credit that he took the public blame for the Bay of Pigs fiasco (although he also ensured that senior CIA people were made to resign for his error).  But so indeed he should have, since so much of the real blame rests squarely with the President himself and his White House national security staff.

The Bay of Pigs action had another, more existential, problem.  CIA wished to scare the junta running Cuba into resigning from office, by making them think the island was being invaded by a vastly superior force.  It was therefore essential to the success of the venture that the Cuban government think the force was backed by the USA, the only regional power with such a capability and intent.  It was also essential to the USA’s international reputation that the USA could plausibly deny any involvement in the action, in order for the venture not to escalate (via the Cold War with the USSR) into a larger military conflict.  Thus, Kennedy ruled out the use of USAF planes to provide cover for the invading troops, and he continually insisted that the plans minimize “the noise level” of the invasion.  These two objectives were essentially contradictory, since reducing the noise level decreased the likelihood of the invasion scaring Castro from office.
The Bay of Pigs fiasco provides many lessons for management, both to US Presidents and to corporate executives.   One of these, seemingly forgotten in Vietnam and again in Iraq, is that plans do matter.   Success is rarely something reached by accident, or by a series of on-the-fly, ad hoc, decisions, each undertaken without careful analysis, reflection and independent assessment.

Social networking v1.0

Believers in the potential of Web 2.0, such as we at Vukutu, think it will change many things — our personal interactions, our way of being in the world, our social lives, our economic lives, even our sciences and technologies.  The basis of this belief is partly a comparison with what happened the first time social networking became fashionable in western society.  This occurred with the rise of the Coffee House in western Europe from the middle of the 17th century.
Coffee, first cultivated and drunk in the areas near the Red Sea, spread through the Ottoman empire during the 16th century.  In Western Europe, it became popular from the early 17th century, initially in Venice, becoming known to educated Europeans roughly simultaneously with marijuana and opium.  (An interesting question for marketers is why coffee became a popular consumer product in Europe and the others did not.)  Because of the presence there of scholars of the orient and scientists with an experimentalist ethos, coffee first arrived in the British Isles in Oxford, where it was consumed privately from at least 1637; the first public coffee house in the British Isles opened in Oxford in 1650, called the Angel and operated by a Mr Jacob.  The first London coffeehouse was opened in 1652 by Pasqua Rosee; the same mid-century period saw the rise of public coffee houses in the cities of France and the Netherlands.  For non-marketers reading this, it is worth realizing that opening a coffee house meant first having access to a regular source of coffee beans, no mean feat when coffee then grew only in the Yemen and north-east Africa.

Facing competition, coffee houses soon segmented their market and specialised in particular activities, types of conversation, or political positions (sound familiar, bloggers?), and provided services such as libraries, reading rooms, public lectures, scientific demonstrations and auctions.  Educated people and businessmen would often visit several coffee houses each day on their rounds, to collect and trade information, to meet friends and colleagues, to commune with the like-minded, and to transact business.  The coffee houses were centres for learning and debate, just as blogs are today, as well as places of economic exchange.
What were the consequences of this new mode of human interaction?  Well, coffee houses enabled the launch of at least three new industries — insurance, fine-art auctions, and newspapers — and were the physical basis for modern stock exchanges.  For instance, the English insurer Lloyd’s of London began in Edward Lloyd’s coffee house in 1688.  And these industries themselves enabled or facilitated others.  The development of an insurance industry, for example, both supported and grew alongside the trans-continental exploration undertaken by Dutch, English and Iberian merchant shipping fleets: deciding whether to invest in perilous oceanic voyages required some rigour in assessing the likely costs and benefits, and the ability to partition, bundle, re-bundle and on-sell risks to others, if one wished to make a long-term living from it.
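
The rigour involved is, at bottom, expected-value arithmetic.  A worked toy example, with invented numbers:

```python
# Invented numbers, purely to illustrate the expected-value reasoning a
# voyage investor or insurer needed: weigh the profit of a safe arrival
# against the probability and cost of losing ship and cargo.

p_loss = 0.15            # chance the ship is lost
cargo_profit = 10_000    # profit if the voyage succeeds
cargo_value = 20_000     # capital lost if it does not

expected_profit = (1 - p_loss) * cargo_profit - p_loss * cargo_value
print(expected_profit)   # 8500 - 3000 = 5500: worth doing, on average

# An insurer charging a premium above the expected loss makes a single
# voyage survivable for the merchant, and profitable (on average) for itself.
fair_premium = p_loss * cargo_value   # 3000
quoted_premium = fair_premium * 1.2   # loading for risk and costs
print(quoted_premium)
```
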
And coffee-houses even supported the development of a new science.  In the decade around 1665, the modern idea of mathematical probability arose, seemingly independently across western Europe, in what is now Britain, France, Italy, the Netherlands and Switzerland.   There is still some mystery as to why the mathematical representation of uncertainty became of interest to so many different people at around the same time, especially since their particular domains of application were diverse (shipping accidents, actuarial events, medical diagnosis, legal decisions, gambling games).  I wonder if sporadic outbreaks of the plague across Europe provoked a turn to randomness.  But there is no mystery as to where the topic of probability was discussed and how the ideas spread between different groups so quickly: coffee houses, and the inter-city and inter-national information networks they supported, were the medium.
What then will be the new industries and new sciences enabled by Web 2.0?
POSTSCRIPT: Several quotes from Cowan, for interest:

“No coffeehouse worth its name could refuse to supply its customers with a selection of newspapers. . . . The growing diversity of the press in the late seventeenth and early eighteenth centuries meant that there was great pressure for a coffeehouse to take in a number of journals.  Indeed, many felt the need to accept nearly anything Grub Street could put to press. . . . Not all coffeehouses could afford to take in every paper published, of course, but many also supplied their customers with news published abroad.  Papers from Paris, Amsterdam, Leiden, Rotterdam, and Harlem were commonly delivered to many coffeehouses in early eighteenth-century London.  The Scotch Coffeehouse in Bartholomew Lane boasted regular updates from Flanders on the course of the war in the 1690s.  Along with newspapers, coffeehouses regularly purchased pamphlets and cheap prints for the use of their customers.” (pp. 173-174).

“Different coffeehouses also arose to cater to the socialization and business needs of various professional and economic groups in the metropolis [London].  By the early decades of the eighteenth century, a number of separated coffeehouses around the Exchange had taken to catering to the business needs of merchants specializing in distinct trades, such as the New England, the Virginia, the Carolina, the Jamaica, and the East India coffeehouses.  Child’s Coffeehouse, located conveniently near the College of Physicians, was much favoured by physicians and clergymen.  Because such affiliations were well known, entry into one of these specialized coffeehouses offered an introduction into the professional society found therein.” (pp. 169-170).

“The numerous coffeehouses of the metropolis were greater than the sum of their parts; they formed an interactive system in which information was socialized and made sense of by the various constituencies of the city.   Although a rudimentary form of this sort of communication circuit existed in early modern England (and especially London) well before coffeehouses were introduced in places such as St. Paul’s walk or the booksellers’ shops of St. Paul’s churchyard, the new coffeehouses quickly established themselves at the heart of the metropolitan circuitry by merging news reading, text circulation, and oral communication all into one institution.  The coffeehouse was first and foremost the product of an increasingly complex urban and commercial society that required a means by which the flow of information might be properly channeled.” (p. 171)

References and Acknowledgments:
My thanks to Fernando E. Vega of the USDA for pointing me to the book by Cowan.
Brian Cowan [2005]:  The Social Life of Coffee:  The Emergence of the British Coffeehouse.  New Haven, CT, USA:  Yale University Press.
Ian Hacking [1975]:  The Emergence of Probability: a Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference. London, UK: Cambridge University Press.
Fernando E. Vega [2008]: The rise of coffee.  American Scientist, 96 (2): 138-145, March-April 2008.

A salute to Dick Bissell

For those who know his name, Richard Bissell (1909-1994) probably has a mostly negative reputation, as the chief planner of the failed attempted invasion of Cuba at the “Bay of Pigs” in April 1961.  Put aside the fact that last-minute changes to the invasion plans (including a change of location) were forced on Bissell and CIA by the Kennedy Administration; after all, as Bissell himself argued in his memoirs, he and CIA could have and should have done more to resist these changes.  (There is another post to be written on the lessons of this episode for the making of complex decisions, a topic on which surprisingly little seems to have been published.)  Bissell ended his career as VP for Marketing and Economic Planning at United Aircraft Corporation, a post he held for a decade, although he found it unfulfilling after the excitement of his Government service.

Earlier in his career, Bissell was several times an administrative and organizational hero, a man who got things done.  During World War II, Bissell, working for the US Government’s Shipping Adjustment Board, established a comprehensive card index of every ship in the US merchant marine, to the point where he could predict, within an error of 5 percent, which ships would be at which ports unloading their cargoes when, and thus be available for reloading.  He did this well before multi-agent systems or even Microsoft Excel.  After WW II, he was the person who successfully implemented the Marshall Plan for the Economic Recovery of Europe.  And then, after joining CIA in 1954, he successfully created and led the project to design, build, equip and deploy a high-altitude spy-plane to observe America’s enemies: the U-2.  Bissell also led the design, development and deployment of CIA’s Corona reconnaissance satellites, and appears to have played a key role in the development of America’s national space policy before that.
Whatever one thinks of the overall mission of CIA before 1989 (and I think there is a fairly compelling argument that CIA and KGB successfully and jointly kept the cold war from becoming a hot one), one can only admire Bissell’s managerial competence, his ability to inspire others, his courage, and his verve.  Not only was the U-2 a completely new plane (designed and built by a team led by Kelly Johnson of Lockheed, using engines from Pratt & Whitney), flying at altitudes above any ever flown before, and using a new type of fuel (developed by Shell), but the plane also had to be equipped with sophisticated camera equipment, also newly invented and manufactured (by a team led by Edwin Land of Polaroid), producing developed film in industrial quantities.  All of these components, and the pilot, needed to operate under extreme conditions (eg, high altitudes, long-duration flights, very sensitive flying parameters, vulnerability to enemy attack).  And the overall process, from weather prediction, through deployment of the plane and pilot to their launch site, all the way to the human analyses of the resulting acres of film, had to be designed, organized, integrated and managed.
All this was done in great secrecy and very rapidly, with multiple public-sector and private-sector stakeholders involved.   Bissell achieved all this while retaining the utmost loyalty and respect from those who worked for him and with him.  I can only respond with enormous admiration for the project management and expectations management abilities, and the political, negotiation, socialization, and consensus-forging skills, that Dick Bissell must have had.  Despite what many in academia believe, these abilities are rare and intellectually-demanding, and far too few people in any organization have them.
References:
Richard M. Bissell [1996]:  Reflections of a Cold Warrior:  From Yalta to the Bay of Pigs. New Haven, CT, USA:  Yale University Press.
Norman Polmar [2001]:  Spyplane:  The U-2 History Declassified.  Osceola, WI, USA: MBI Publishing.
Evan Thomas [1995]:  The Very Best Men.  Four Who Dared:  The Early Years of the CIA.  New York City, NY, USA:  Touchstone.

Extreme teams

Eric Nehrlich, over at Unrepentant Generalist, has reminded me of the book “The Wisdom of Teams”, by Jon Katzenbach and Douglas Smith, which I first read when it appeared in the early 1990s.  At the time, several of us here were managing applications for major foreign telecommunications licences for our clients – the fifth P (“Permission”) in telecoms marketing.
Before Governments around the world realized what enormous sums of money they could make from auctioning telecoms licences, they typically ran what was called a “beauty contest” to decide the winner.     In these contests, bidders needed to prepare an application document to persuade the Government that they (the bidder) were the best company to be awarded the licence.  What counted as compelling arguments differed from one country to another, and from one licence application to another.   The most common assessment criteria used by Governments were:  corporate reputation and size, technical preparedness and innovation, quality of business plans, market size and market growth, and the prospects for local employment and economic development.
As I’m sure you see immediately, these criteria are multi-disciplinary.  Licence applications were (and still are, even when conducted as auctions) always a multi-disciplinary effort, with folks from marketing, finance, engineering, operations, legal and regulatory, folks from different consortium partners, and people from different nationalities, all assigned to the one project team.  In the largest application we managed, the team comprised an average of about 100 people at any one time (people came and went all the time), and it ran for some 8 months.   In that case, the Government tender documents required us to prepare about 7,000 original pages of text in response (including detailed business plans and blue-prints of each mobile base station), multiplied by some 20 copies.    You don’t win these licences handing in coffee-stained photocopies or roneoed sheets.  Each of the 20 volumes was printed on glossy paper, hard-bound, and the lot assembled in a carved tea chest.
Work on these team projects was extremely challenging, not least because of the stakes involved.  If you missed the application submission deadline even by 5 minutes, you were out of the running.  That would mean throwing away the $10-20 million you had spent preparing the application, and upsetting your consortium partners more than somewhat.  If you submitted on time and won the licence, you might see your company’s share-market value rise by several hundred million dollars overnight, simply on the news that you had won a major overseas mobile licence.  A $300 million share-value gain less $20 million in preparation costs leaves a lot of gain.  In one case, our client’s share-market value even rose dramatically on news that they had LOST the licence!  We never discovered if this was because the shareholders were pleased that the company (not previously in telecoms) had lost and was sticking to its knitting, or were pleased that the company had tried to move into a hi-tech arena.
With high stakes, an unmovable deadline, and with different disciplines and companies involved, tempers were often loose.   One of the major differences between our experiences and those described in the Katzenbach and Smith book is that we never got to choose the team members.  In almost all cases, Governments required consortia to comprise a mix of local and international companies, so each consortium partner would choose its own representatives in the team.  Sometimes, the people assigned knew about the telecoms business and had experience in doing licence applications; more frequently, they knew little and had no relevant experience.  In addition, within each consortium partner company, internally powerful people in the different disciplines would select which folks to send.   One could sometimes gauge the opinion of the senior managers of our chances by the calibre of the people they chose to allocate to the team.
So — our teams comprised people having different languages, national cultures and corporate cultures, from different disciplines and having different skillsets and levels of ability, and sent to us sometimes for very different purposes. (Not everyone, even within the same company, wanted to win each licence application.)  Did I mention we normally had no line authority over anyone since they worked for different divisions of different companies?  Our task was to organize the planning work of these folks in a systematic and coherent way to produce a document that looked like it was written by a single mind, with a single, coherent narrative thread and compelling pitch to the Government evaluators.
Let us see how these characteristics stack up against the guidelines of Katzenbach and Smith, which Eric summarized:

  • Small size  – Not usually the case.  Indeed, many of the major licence applications could not physically or skill-wise have been undertaken by just a small team.  These projects demanded very diverse skills, under impossibly-short deadlines.  The teams, therefore, had to be large.
  • Complementary skills – Lots of different skills were needed, as I mention above.  Not all of these are complementary, though.  I am not sure how much lawyers and engineers complement each other; more often, their different styles of thinking and communicating (words vs. diagrams, respectively) and their different objectives would have them in disagreement.
  • Common purpose – In public, everyone had the same goal — to win the licence.  In private, as in any human organization, team members and their employers may have had other goals.  I have seen cases where people want to lose, to prove a point to other partners, or because they do not feel their company would be able to deal with too many simultaneous wins.   I have seen other cases where people do not want to win (not the same as wanting to lose) — they may be participating in order to demonstrate, for example, that they know how to do these applications.
  • Performance goals – Fine in theory, but very hard in practice when the team leaders do not have line responsibility (even temporarily) over the team members.
  • Common approach – Almost never was this the case.  Each consortium partner, and sometimes each functional discipline within each consortium partner had their own approach.  There was rarely time or resources to develop something mutually acceptable.  In any case, outputs usually mattered more than approach.
  • Mutual accountability – Again, almost never the case, partly due to the diversity of real objectives of team members, divisions and partners.

Despite not matching these guidelines, some of the licence application teams were very successful, both in undertaking effective high-quality collaborative work and in winning licences.  I therefore came away from reading “The Wisdom of Teams” 15 years ago with the feeling that the authors had missed something essential about team projects, because they had not described my experiences in licence applications.  (I even wrote the authors a long letter about my experiences at the time, but they did not deign to reply.)  I still feel that the book misses much.