Hard choices

Adam Gopnik in the latest New Yorker magazine, writing of his former teacher, McGill University psychologist Albert Bregman:

he also gave me some of the best advice I’ve ever received. Trying to decide whether to major in psychology or art history, I had gone to his office to see what he thought. He squinted and lowered his head. “Is this a hard choice for you?” he demanded. Yes! I cried. “Oh,” he said, springing back up cheerfully. “In that case, it doesn’t matter. If it’s a hard decision, then there’s always lots to be said on both sides, so either choice is likely to be good in its way. Hard choices are always unimportant.” (page 35, italics in original)

I don’t agree that hard choices are always unimportant, since different options may have very different consequences, with very different footprints (who is impacted, in what ways, and to what extents). Perhaps what Bregman meant is that whatever option is selected in such cases will prove feasible to some extent or other, and we will usually survive the consequences that result. Why would this be? I think it is because, as Bregman says, each decision-option in such cases has multiple pros and cons, and so no one option uniformly dominates the others. No option is obviously or uniformly better: there is no “slam-dunk” or “no-brainer” decision-option.
In such cases, whatever we choose will potentially have negative consequences which we may have to live with. Usually, however, we don’t seek to live with these consequences. Instead, we try to eliminate them, ameliorate them, mitigate them, divert them, undermine them, or even ignore them. Only when all else fails do we live in full awareness with the negative consequences of our decisions. Indeed, attempting to pre-emptively anticipate and eliminate, divert, undermine, ameliorate or mitigate negative consequences is a key part of human decision-making for complex decisions, something I have called (following Harald Wohlrapp) retroflexive decision-making. We try to diminish the negative effects of an option and enhance the positive effects as part of the process of making our decision.
As a second-year undergraduate at university, I was, like Gopnik, faced with a choice of majors; for me it was either Pure Mathematics or English. Now, with more experience of life, I would simply refuse to make this choice, and seek to do both together. Then, as a sophomore, I was intimidated by the arguments presented to me by the university administration seeking, for reasons surely only of bureaucratic order, to force me to choose: this combination is not permitted (to which I would respond now with: And why not?); there are many timetable clashes (I can work around those); no one else has ever asked to do both (Why is that relevant to my decision?); and, the skills required are too different (Well, I’ve been accepted onto the Honours track in both subjects, so I must have the required skills).
As an aside: In making this decision, I asked the advice of poet Alec Hope, whom I knew a little. He too as an undergraduate had studied both Mathematics and English, and had opted eventually for English. He told me he chose English because he could understand on his own the poetry and fiction he read, but understanding Mathematics, he said, for him, required the help of others. Although I thought I could learn and understand mathematical subjects well enough from books on my own, it was, for me, precisely the social nature of Mathematics that attracted me: One wasn’t merely creating some subjective personal interpretations or imaginings as one read, but participating in the joint creation of an objective shared mathematical space, albeit a space located in the collective heads of mathematicians. What could be more exciting than that!?
More posts on complex decisions here and here.
Reference:
Adam Gopnik [2013]: Music to your ears: The quest for 3D recording and other mysteries of sound.  The New Yorker, 28 January 2013, pp. 32-39.

You have to make the case!

Former conservative Australian Prime Minister Malcolm Fraser (who, by the by, has long been an admirer of Ayn Rand), together with former Secretary of the Australian Commonwealth Department of Defence, Paul Barratt, and former Chief of the Australian Defence Force, General Peter Gration, has called for a public inquiry into the decision to invade Iraq in 2003. As I noted at the time, what was truly remarkable was the complete unwillingness of any of the principal decision-makers – Bush, Cheney, Rumsfeld, Blair or Howard – to publicly justify their decision, a decision taken before August 2002, until very late in the day. So severe was this reticence on John Howard’s part that the Australian Senate – for the first and so far only time in its history – passed a censure motion against the Prime Minister for his refusal to explain or justify his decision. It seems that Fraser, Barratt, Gration, et al., are still waiting for those particular dogs to bark.
As I said at the time, there could be good and compelling reasons for a Government not to justify a military decision publicly. If so, one would have expected the principals at least to explain the reasons for their reticence to other friendly Governments, even if only in private. It is noteworthy then to recall Joschka Fischer’s public berating of Donald Rumsfeld: “You have to make the case!” (video here). Even the German Foreign Minister, it seems, could not be trusted by the decision-makers with either an explanation of the invasion decision or an explanation as to why no explanation could be given. After all this time of dogs still quiet, one is led increasingly to the conclusion that the real reason for the decision was something that shamed or discredited the decision-makers.

August 1991 Putsch

Last August was the 20th anniversary of the short-lived revanchist coup in the USSR, which led directly to the break-up of the Soviet Empire. That the coup was ultimately unsuccessful was due in large part to the bravery of Boris Yeltsin and the citizens of Moscow who protested publicly against the coup. Their bravery was shared by sections of the Soviet military, particularly the Air Force, who also informed the plotters of their disapproval. I understand that the main reason the plotters did not bombard the White House (the Russian Parliament building, which Yeltsin and his supporters had occupied), as they had threatened to, was that the Air Force had promised to retaliate with an attack on the Kremlin.

A fact reported at the time in the IHT, but little known since, was that the leadership of the Soviet ballistic missile command signaled to the USA their disapproval of the coup. They did this by moving their mobile ICBMs into their storage hangars, thereby preventing their use. Only the USA, with its satellite surveillance, could see all these movements; the CIA and George Bush, aided perhaps by telephone taps, were clever enough to draw the intended inference: that the leadership of the Soviet Missile Command was opposed to the coup.

Here is a report that week in the Chicago Tribune (1991-08-28):

WASHINGTON — During last week’s failed coup in the Soviet Union, U.S. intelligence overheard the general commanding all strategic nuclear missiles on Soviet land give a highly unusual order. Gen. Yuri Maksimov, commander-in-chief of the Soviets’ Strategic Rocket Forces, ordered his SS-25 mobile nuclear missile forces back to their bases from their battle-ready positions in the field, said Bruce Blair, a former Strategic Air Command nuclear triggerman who studies the Soviet command system at the Brookings Institution.

“He was defying the coup. By bringing the SS-25s out of the field and off alert, he reduced their combat readiness and severed their links to the coup leaders,”  said Blair.
That firm hand on the nuclear safety catch showed that political chaos in the Soviet Union actually may have reduced the threat posed to the world by the Soviets’ 30,000 nuclear warheads, said several longtime U.S. nuclear war analysts. The Soviet nuclear arsenal, the world’s largest, has the world’s strictest controls, far stricter than those in the U.S., they said. Those controls remained in place, and in some cases tightened, during last week’s failed coup – even when the coup plotters briefly stole a briefcase containing codes and communications equipment for launching nuclear weapons from Soviet President Mikhail Gorbachev.

And here is R. W. Johnson, in a book review in the London Review of Books (2011-04-28):

One of the unheralded heroes of the end of the Cold War was General Y.P. Maksimov, the commander in chief of the Soviet strategic rocket forces during the hardliners’ coup against Gorbachev in August 1991. He made a pact with the heads of the navy and air force to disobey any order by the coup plotters to launch nuclear weapons. There was extreme concern in the West that the coup leader, Gennady Yanayev, had stolen Gorbachev’s Cheget (the case containing the nuclear button) and the launch codes, and that the coup leaders might initiate a nuclear exchange. Maksimov ordered his mobile SS-25 ICBMs to be withdrawn from their forest emplacements and shut up in their sheds – knowing that American satellites would relay this information immediately to Washington. In the event, the NSA let President Bush know in real time that the rockets were being stored away.

References:
R. W. Johnson [2011]:  Living on the Edge. London Review of Books, 33 (9):  32-33 (2011-04-28).

Command Dialogs

Three years ago, in a post about Generation Kill and Nate Fick, I remarked that military commands often need dialog between commander and commandee(s) before they may be rationally accepted, and/or executed.   Sadly, a very good demonstration of the failure to adequately discuss commands (or purported commands) in a complex (police) action is shown by a report on the UC-Davis Pepper Spray incident.  
Management textbooks of a certain vintage used to define management as the doing of things through others. The Pepper Spray example clearly shows the difficulties and challenges involved in actually achieving such vicarious doing in dynamic and ambiguous situations. And the poverty of Philosophy is nowhere better shown than by the fact that the speech act of commanding has barely been studied at all by philosophers, obsessed these last 2,350 years with understanding assertions of facts. (Chellas, Hamblin, Girle and Parsons are exceptions.)

XX Foxiness: Counter-espionage

I have just read Ben MacIntyre’s superb “Double Cross: The True Story of the D-Day Spies” (Bloomsbury, London, 2012), which describes the successful counter-espionage operation conducted by the British against the Nazis in Britain during WW II. Every Nazi foreign agent in Britain was captured and either tried and executed, or turned, being run by the so-called Twenty (“XX”) Committee. This network of double agents, many of whom created fictional sub-agents, became a secret weapon of considerable power, able to mislead and misdirect Nazi war efforts through their messages back to their German controllers (in France, Portugal, Spain and Germany).
The success of these misdirections was known precisely, since Britain was able to read most German encrypted communications, through the work of Bletchley Park (the Enigma project). Indeed, since the various German intelligence controllers often simply passed on the messages they received from their believed-agents in Britain verbatim (ie, without any summarization or editing), these messages helped the decoders decipher each German daily cypher code: the decoders had both the original message sent from Britain and its encrypted version communicated between German intelligence offices in (say) Lisbon and Berlin.
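The logic of such a known-plaintext (“crib”) attack can be sketched in a toy example. This is emphatically not the Enigma procedure: the cipher here is a simple repeating-key XOR, and the key and messages are invented for illustration. The point is only structural: possessing one message in both plain and encrypted form exposes the day’s key, which then unlocks every other message sent under it.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Encrypt or decrypt with a repeating-key XOR (the operation is symmetric)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def recover_key(plaintext: bytes, ciphertext: bytes, key_len: int) -> bytes:
    """Given a crib (matched plaintext and ciphertext), XOR them to expose the key."""
    stream = bytes(p ^ c for p, c in zip(plaintext, ciphertext))
    return stream[:key_len]

# The day's secret key, and two messages enciphered under it (all invented).
day_key = b"LISBON"
crib_plain = b"AGENT GARBO REPORTS TROOPS MASSING IN KENT"
msg1 = xor_cipher(crib_plain, day_key)                       # intercepted; plaintext known
msg2 = xor_cipher(b"INVASION EXPECTED AT CALAIS", day_key)   # intercepted; plaintext unknown

# The codebreakers hold both crib_plain and msg1, so the key falls out:
key = recover_key(crib_plain, msg1, len(day_key))
assert key == day_key
print(xor_cipher(msg2, key).decode())  # prints: INVASION EXPECTED AT CALAIS
```

In the historical case the verbatim relaying of agents’ messages between German offices played exactly the role of `crib_plain` here: a message known at both ends of the encryption.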
This secret weapon was used most famously to deflect Nazi attention from the true site of the D-Day landings in France. So successful was this, with entire fictional armies created and reported on in South-East England and in Scotland (for purported attacks on Calais in France and on Norway), that even after the war’s end, former Nazi military leaders talked about the non-use by the Allies of these vast forces, still not realizing the fiction.
One interesting question is the extent to which parts of German intelligence were witting or even complicit in this deception.  The Abwehr, the German military intelligence organization, under its leader Admiral Wilhelm Canaris (who led it 1935-1944), was notoriously anti-Nazi.  Indeed, many of its members were arrested for plotting against Hitler.  Certainly, if not witting or complicit, many of its staff were financially corrupt, and happy to take a percentage of payments made to agents they knew or suspected to be fictional.
Another fascinating issue is when it may not be good to know something. One Abwehr officer, Johnny Jebsen, remained with the organization while secretly talking to the British about defecting. The British could not, of course, know where his true loyalties lay while he remained with the Abwehr. Despite their best efforts to stop him, he told them of all the German secret agents then working in Britain. The British tried to stop him because, once he had told them, he would know that they knew whom the Germans believed their agents to be. Their subsequent reaction to having this knowledge – arresting each agent or leaving the agent in place – would thus tell him which agents were really working for the Nazis and which were in fact double agents.
Jebsen was drugged and forcibly returned to Germany by the Abwehr (apparently, to pre-empt him being arrested by the SS and thus creating an excuse for the closure of the Abwehr), and then was tortured, sent to a concentration camp, and probably murdered by the Nazis.  It seems he did not reveal anything of what he knew about the British deceptions, and withstood the torture very bravely.  MacIntyre rightly admires him as one of the unsung heroes of this story.
Had Jebsen been able to defect to Britain, as others did, the British would have faced the same quandary that later confronted both the CIA and the KGB with each defecting espionage agent during the Cold War: Is this person a genuine defector or a plant by the other side? I have talked before about some of the issues of what to believe, what to pretend to believe, and what to do in the case of KGB defector (and, IMHO, likely plant) Yuri Nosenko, here and here.
 

Roughshod Riders

One annoying feature of the verbal commentariat is their general lack of real-world business experience.  A fine example has just been provided by political blogger Marbury, who derides Gordon Brown for not asserting himself when Prime Minister over his Cabinet Secretary on the matter of an enquiry into voicemail hacking at certain newspapers.
Well, to be fair to Gordon Brown, Marbury has clearly never led an organization and tried to force the people below him to do something they adamantly oppose doing.  No doubt, Brown when PM could have ordered the Cabinet Secretary to implement a public enquiry, but every single person in the chain of command could then have: (a) leaked the CabSec’s advice opposing the instruction, and/or (b) exercised their pocket veto to delay or prevent the enquiry happening, and/or (c) implemented it in a way which backfired upon Brown and the Cabinet. No rational manager tries to execute a policy his own staff vehemently oppose, even when, as appears to be the case here, he knows he has morality, the law, good governance, and the public interest all on his side.

Markets as feedback mechanisms

I just posted after hearing a talk by economic journalist Tim Harford at LSE.  At the end of that post, I linked to a critical review of Harford’s latest book,  Adapt – Why Success Always Starts with Failure, by Whimsley.  This review quotes Harford talking about markets as feedback mechanisms:

To identify successful strategies, Harford argues that “we should not try to design a better world. We should make better feedback loops” (140) so that failures can be identified and successes capitalized on. Harford just asserts that “a market provides a short, strong feedback loop” (141), because “If one cafe is ordering a better combination of service, range of food, prices, decor, coffee blend, and so on, then more customers will congregate there than at the cafe next door”, but everyday small-scale examples like this have little to do with markets for credit default swaps or with any other large-scale operation.

Yes, indeed. For global satellite communications networks, the lead-time between undertaking initial business planning in order to raise early capital investments and the launch of services to the public is of the order of 10 years (since satellites, satellite networks and user devices need to be designed, manufactured, approved by regulators, deployed, and connected before they can provide service). The time between initial business planning and the final decommissioning of an international gas or oil pipeline is about 50 years. The time between initial business planning and the final decommissioning of an international undersea telecommunications cable may be as long as 100 years. As I remarked once previously, the design of Transmission Control Protocol (TCP) packets, the primary engine of communication in the 21st-century Internet, is closely modeled on the design of telegrams first sent in the middle of the 19th century. Some markets, if they work at all, only work over the long run; but as Keynes famously said, in the long run we are all dead.
I have experience of trying to design telecoms services for satellite networks (among others), knowing that any accurate feedback for design decisions may come late or not at all, and when it comes may be vague and ambiguous, or even misleading.   Moreover, the success or failure of the selected marketing strategy may not ever be clear, since its success may depend on the quality of execution of the strategy, so that it may be impossible to determine what precisely led to the outcome.   I have talked about this issue before, both regarding military strategies and regarding complex decisions in general.  If the quality of execution also influences success (as it does), then just who or what is the market giving feedback to?
In other words, these coffees are not always short and strong (in Harford’s words), but may be cold, weak, very, very slow in arriving, and even their very nature contested. I’ve not yet read Harford’s book, but if he thinks all business is as simple as providing fast-moving consumer services, his book is not worth reading.
Once again, an economist argues by anecdote and example.  And once again, I wonder at the world:  That economists have a reputation for talking about reality, when most of them evidently know so little about it, or reduce its messy complexities to homilies based on the operation of suburban coffee shops.

Tim Harford at LSE: Dirigisme in action

This week I heard economic journalist Tim Harford talk at the London School of Economics (LSE), on a whirlwind tour (7 talks, I think he told us, this week) to promote his new book.   Each talk is on one topic covered in the book, and at LSE he talked about the GFC and his suggestions for preventing its recurrence.

Harford’s talk itself was chatty, anecdotal, and witty. Economics is still in deep thrall to its 19th-century fascination with physical machines, and this talk was no exception. The anecdotes mostly concerned Great Engineering Disasters of our time, with Harford emphasizing the risks that arise from the tight coupling of components in systems and, ironically, from frequent misguided attempts to improve their safety which only worsen it.

Anecdotal descriptions of failed engineering artefacts may have relevance to preventing a repeat of the GFC, but Harford did not make any case that they do. He just gave examples from engineering and from financial markets, and asserted that these were examples of the same conceptual phenomena. However, as metaphors for economies, machines and mechanical systems are worse than useless, since they emphasize in people’s minds, especially in the minds of regulators and participants, mechanical and stand-alone aspects of systems which are completely inappropriate here.

Economies and marketplaces are NOT like machines, with inanimate parts whose relationships are static and that move when levers are pulled, or effects which can be known or predicted when causes are instantiated, or components designed centrally to achieve some global objectives.  Autonomous, intelligent components having dynamic relationships describes few machines or mechanical systems, and certainly none from the 19th century.   

A better category of failure metaphors would be ecological and biological. We introduce cane toads to North Queensland to prey upon a sugar-cane pest, and the cane toads, having no predators themselves, take over the country. Unintended and unforeseen consequences of actions arise not merely because the system is complex or its parts tightly coupled, but because the system comprises multiple autonomous and goal-directed actors with different beliefs, histories and motivations, whose relationships with one another change as a result of their interactions.

Where, I wanted to shout to Harford, were the ecological metaphors?  Why, I wanted to ask, does this 19th-century fascination with deterministic, centralized machines and mechanisms persist in economics, despite its obvious irrelevance and failings? Who, if not rich FT journalists with time to write books, I wanted to know, will think differently about these problems?

Finally, only economists strongly in favour of allowing market forces to operate unfettered would have used the dirigiste methods that the LSE did to allocate people to seats for this lecture. We were forced to sit in rows in our order of arrival in the auditorium. Why was this? When I asked an usher for the reason, the answer I was given made no sense: Because we expect a full hall. Why were the organizers so afraid of allowing people to exercise their own preferences as to where to sit? We don’t all have the same hearing and sight capabilities, we don’t all have the same preferences as to side of the hall, or side of the aisle, etc. We don’t all arrive in parties of the same size. We don’t all want to sit behind a tall person or near a noisy group.

The hall was not full, as it happened, so we were crammed into place in part of the hall like passive objects in a consumer-choice model of voting, instead of as free, active citizens in a democracy occupying whatever position we most preferred of those still available. But even if the hall had been full, there are less-centralized and less-unfriendly methods of matching people to seats. The 20 or so LSE student ushers on hand, for instance, could have been scattered about the hall to direct latecomers to empty seats, rather than lining the aisles like red-shirted troops to prevent people sitting where they wanted to.

What hope is there that our economic problems will be solved when the London School of Economics, of all places, uses central planning to sit people in public lectures?

Update: There is an interesting critical review of Harford’s latest book, here.

What use are models?

What are models for? Most developers and users of models, in my experience, seem to assume the answer to this question is obvious, and thus never raise it. In fact, modeling has many potential purposes, and some of these conflict with one another. Some of the criticisms made of particular models arise from misunderstandings or misperceptions of the purposes of those models, and of the modeling activities which led to them.
Liking cladistics as I do, I thought it useful to list all the potential purposes of models and modeling. The only discussion of this topic that I know of is a brief one by game theorist Ariel Rubinstein, in an appendix to a book on modeling rational behaviour (Rubinstein 1998). Rubinstein considers several alternative purposes for economic modeling, but ignores many others. My list is as follows (to be expanded and annotated in due course):

  • 1. To better understand some real phenomena or existing system.   This is perhaps the most commonly perceived purpose of modeling, in the sciences and the social sciences.
  • 2. To predict (some properties of) some real phenomena or existing system.  A model aiming to predict some domain may be successful without aiding our understanding of the domain at all. Isaac Newton’s model of the motion of planets, for example, was predictive but not explanatory. I understand that physicist David Deutsch argues that predictive ability is not an end of scientific modeling but a means, since it is how we assess and compare alternative models of the same phenomena. This is wrong on both counts: prediction IS an end of much modeling activity (especially in business strategy and public policy domains), and it is not the only means we use to assess models. Indeed, for many modeling activities, calibration and prediction are problematic, and so predictive capability may not even be possible as a form of model assessment.
  • 3. To manage or control (some properties of) some real phenomena or existing system.
  • 4. To better understand a model of some real phenomena or existing system.  Arguably, most economic theorizing and modeling falls into this category, and Rubinstein’s preferred purpose is of this type. Macro-economic models, if they are calibrated at all, are calibrated against artificial, human-defined variables such as employment, GDP and inflation, variables which may themselves bear a tenuous and dynamic relationship to any underlying economic reality. Micro-economic models, if they are calibrated at all, are often calibrated with stylized facts, abstractions and simplifications of reality which economists have come to regard as representative of the domain in question. In other words, economic models are not usually calibrated against reality directly, but against other models of reality. Similarly, large parts of contemporary mathematical physics (such as string theory and brane theory) have no access to any physical phenomena other than via the mathematical model itself: our only means of apprehension of vibrating strings in inaccessible dimensions beyond the four we live in, for instance, is through the mathematics of string theory. In this light, it seems nonsense to talk about the effectiveness, reasonable or otherwise, of mathematics in modeling reality, since how could we tell?
  • 5. To predict (some properties of) a model of some real phenomena or existing system.
  • 6. To better understand, predict or manage some intended (not-yet-existing) artificial system, so to guide its design and development.   Understanding a system that does  not yet exist is qualitatively different to understanding an existing domain or system, because the possibility of calibration is often absent and because the model may act to define the limits and possibilities of subsequent design actions on the artificial system.  The use of speech act theory (a model of natural human language) for the design of artificial machine-to-machine languages, or the use of economic game theory (a mathematical model of a stylized conceptual model of particular micro-economic realities) for the design of online auction sites are examples here.   The modeling activity can even be performative, helping to create the reality it may purport to describe, as in the case of the Black-Scholes model of options pricing.
  • 7. To provide a locus for discussion between relevant stakeholders in some business or public policy domain.  Most large-scale business planning models have this purpose within companies, particularly when multiple partners are involved.  Likewise, models of major public policy issues, such as epidemics, have this function.  In many complex domains, such as those in public health, models provide a means to tame and domesticate the complexity of the domain.  This helps stakeholders to jointly consider concepts, data, dynamics, policy options, and assessment of potential consequences of policy options,  all of which may need to be socially constructed. 
  • 8. To provide a means for identification, articulation and potentially resolution of trade-offs and their consequences in some business or public policy domain.   This is the case, for example, with models of public health risk assessment of chemicals or new products by environmental protection agencies, and models of epidemics deployed by government health authorities.
  • 9. To enable rigorous and justified thinking about the assumptions and their relationships to one another in modeling some domain.   Business planning models usually serve this purpose.   They may be used to inform actions, both to eliminate or mitigate negative consequences and to enhance positive consequences, as in retroflexive decision making.
  • 10. To enable a means of assessment of managerial competencies of the people undertaking the modeling activity. Investors in start-ups know that the business plans of the company founders are likely to be out of date very quickly.  The function of such business plans is not to model reality accurately, but to force rigorous thinking about the domain, and to provide a means by which potential investors can challenge the assumptions and thinking of management as way of probing the managerial competence of those managers.    Business planning can thus be seen to be a form of epideictic argument, where arguments are assessed on their form rather than their content, as I have argued here.
  • 11. As a means of play, to enable the exercise of human intelligence, ingenuity and creativity, in developing and exploring the properties of models themselves.  This purpose is true of that human activity known as doing pure mathematics, and perhaps of most of that academic activity known as doing mathematical economics.   As I have argued before, mathematical economics is closer to theology than to the modeling undertaken in the natural sciences. I see nothing wrong with this being a purpose of modeling, although it would be nice if academic economists were honest enough to admit that their use of public funds was primarily in pursuit of private pleasures, and any wider social benefits from their modeling activities were incidental.

POSTSCRIPT (Added 2011-06-17):  I have just seen Joshua Epstein’s 2008 discussion of the purposes of modeling in science and social science.   Epstein lists 17 reasons to build explicit models (in his words, although I have added the label “0” to his first reason):

0. Prediction
1. Explain (very different from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound (bracket) outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time. [Presumably, Epstein means “crisis-response options” here.]
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialog
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple).

These are at a lower level than my list, and I believe some of his items are the consequences of purposes rather than purposes themselves, at least for honest modelers (eg, #11, #12, #16).
References:
Joshua M Epstein [2008]: Why model? Keynote address to the Second World Congress on Social Simulation, George Mason University, USA.  Available here (PDF).
Robert E Marks [2007]:  Validating simulation models: a general framework and four applied examples. Computational Economics, 30 (3): 265-290.
David F Midgley, Robert E Marks and D Kunchamwar [2007]:  The building and assurance of agent-based models: an example and challenge to the field. Journal of Business Research, 60 (8): 884-893.
Robert Rosen [1985]: Anticipatory Systems. Pergamon Press.
Ariel Rubinstein [1998]: Modeling Bounded Rationality. Cambridge, MA, USA: MIT Press.  Zeuthen Lecture Book Series.
Ariel Rubinstein [2006]: Dilemmas of an economic theorist. Econometrica, 74 (4): 865-883.

Dialogs over actions

In the post below, I mentioned the challenge for knowledge engineers of representing know-how, a task which may require explicit representation of actions, and sometimes also of utterances over actions.  The know-how involved in steering a large sailing ship with its diverse crew surely includes the knowledge of who to ask (or to command) to do what, when, and how to respond when these requests (or commands) are ignored, or fail to be executed successfully or timeously.
One might imagine epistemology – the philosophy of knowledge – would be of help here.  Philosophers, however, have been seduced since Aristotle by propositions (factual statements about the world having truth values), largely ignoring actions and their representation.   Philosophers of language have also mostly focused on speech acts – utterances which act to change the world – rather than on utterances about actions themselves.  Even among speech act theorists the obsession with propositions is strong, with attempts to analyze utterances which are demonstrably not propositions (eg, commands) by means of implicit assertive statements – propositions asserting something about the world, where “the world” is extended to include internal mental states and intangible social relations between people – which these utterances allegedly imply.  With only a few exceptions (Thomas Reid 1788, Adolf Reinach 1913, Juergen Habermas 1981, Charles Hamblin 1987), philosophers of language have mostly ignored utterances about actions.
Consider the following two statements:

I promise you to wash the car.
I command you to wash the car.

The two statements have almost identical English syntax.   Yet their meanings, and the intentions of their speakers, are very distinct.  For a start, the action of washing the car would be done by different people – the speaker and the hearer, respectively (assuming for the moment that the command is validly issued, and accepted).  Similarly, the power to retract or revoke the action of washing the car rests with different people – with the hearer (as the recipient of the promise) and the speaker (as the commander), respectively.
Linguists generally use “semantics” to refer to the real-world referents of syntactically-correct expressions, while “pragmatics” refers to those aspects of the meaning and use of an expression not arising from its relationship (or lack of one) to things in the world, such as the speaker’s intentions.  For neither of these two expressions does it make sense to speak of their truth value:  a promise may be questioned as to its sincerity, or its feasibility, or its appropriateness, etc, but not its truth or falsity;  likewise, a command may be questioned as to its legal validity, or its feasibility, or its morality, etc, but also not its truth or falsity.
For utterances about actions, such as promises, requests, entreaties and commands, truth-value semantics makes no sense.  Instead, we generally need to consider two pragmatic aspects.  The first is uptake, the acceptance of the utterance by the hearer (an aspect first identified by Reid and by Reinach), an acceptance which generally creates a social commitment to execute the action described in the utterance by one or other party to the conversation (speaker or hearer).    Once uptaken, a second pragmatic aspect comes into play:  the power to revoke or retract the social commitment to execute the action.  This revocation power does not necessarily lie with the original speaker; only the recipient of a promise may cancel it, for example, and not the original promiser.  The revocation power also does not necessarily lie with the uptaker, as commands readily indicate.
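The two pragmatic aspects just described can be made concrete in code.  The following is a minimal sketch – my own illustration, not any standard agent-communication API – of how an intelligent software agent might represent an utterance over an action, with uptake creating the social commitment and the power of revocation assigned differently for promises and commands, as in the car-washing examples above:

```python
# Sketch of the pragmatics of utterances over actions:
# uptake creates a social commitment; the power to revoke it
# lies with the promise's recipient, or the command's issuer.
from dataclasses import dataclass


@dataclass
class ActionUtterance:
    kind: str       # "promise" or "command"
    speaker: str
    hearer: str
    action: str
    uptaken: bool = False
    revoked: bool = False

    @property
    def actor(self) -> str:
        # A promised action is done by the speaker; a commanded one by the hearer.
        return self.speaker if self.kind == "promise" else self.hearer

    @property
    def revoker(self) -> str:
        # Only the recipient may cancel a promise; only the commander may retract a command.
        return self.hearer if self.kind == "promise" else self.speaker

    def accept(self) -> None:
        # Uptake by the hearer creates the commitment to execute the action.
        self.uptaken = True

    def revoke(self, by: str) -> None:
        if not self.uptaken:
            raise ValueError("no commitment exists before uptake")
        if by != self.revoker:
            raise PermissionError(f"{by} lacks the power to revoke this {self.kind}")
        self.revoked = True


promise = ActionUtterance("promise", speaker="I", hearer="you", action="wash the car")
command = ActionUtterance("command", speaker="I", hearer="you", action="wash the car")
promise.accept()
command.accept()
```

Note that truth values appear nowhere in this representation: the states that matter are uptake and revocation, which is exactly why a truth-value semantics is the wrong tool for such utterances.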
Why would a computer scientist be interested in such humanistic arcana?  The more tasks we delegate to intelligent machines, the more they need to co-ordinate actions with others of like kind.  Such co-ordination requires conversations comprising utterances over actions, and, for success, these require agreed syntax, semantics and pragmatics.  To give just one example:  the use of intelligent devices by soldiers has made the modern battlefield a place of overwhelming information collection, analysis and communication.  Much of this communication can be done by intelligent software agents, which is why the US military, inter alia, sponsors research applying the philosophy of language and the philosophy of argumentation to machine communications.
Meanwhile, the philistine British Government intends to cease funding tertiary education in the arts and the humanities.   Even utilitarians should object to this.
References:
Juergen Habermas [1984/1981]:   The Theory of Communicative Action:  Volume 1:  Reason and the Rationalization of Society.  London, UK:  Heinemann.   (Translation by T. McCarthy of:  Theorie des Kommunikativen Handelns, Band I,  Handlungsrationalität und gesellschaftliche Rationalisierung. Suhrkamp, Frankfurt, Germany, 1981.)
Charles  L. Hamblin [1987]:  Imperatives. Oxford, UK:  Basil Blackwell.
P. McBurney and S. Parsons [2007]: Retraction and revocation in agent deliberation dialogs. Argumentation, 21 (3): 269-289.

Adolf Reinach [1913]:  Die apriorischen Grundlagen des bürgerlichen Rechtes.  Jahrbuch für Philosophie und phänomenologische Forschung, 1: 685-847.