August 1991 Putsch

Last August was the 20th anniversary of the short-lived revanchist coup in the USSR, which led directly to the break-up of the Soviet Empire. That the coup was ultimately unsuccessful was due in large part to the bravery of Boris Yeltsin and the citizens of Moscow who protested publicly against the coup. Their bravery was shared by sections of the Soviet military, particularly the Air Force, who also informed the plotters of their disapproval. I understand that the main reason the plotters did not bombard the White House (the Russian Parliament building, which Yeltsin and his supporters had occupied), as they had threatened to do, was that the Air Force had promised to retaliate with an attack on the Kremlin.

A fact reported at the time in the International Herald Tribune, but little known since, was that the leadership of the Soviet ballistic missile command signaled to the USA its disapproval of the coup. It did this by moving its mobile ICBMs into their storage hangars, thereby preventing their use. Only the USA, with its satellite surveillance, could see all these movements; the CIA and President George H. W. Bush, aided perhaps by telephone taps, were clever enough to draw the intended inference: that the leadership of the Soviet Missile Command was opposed to the coup.

Here is a report from that week in the Chicago Tribune (1991-08-28):

WASHINGTON — During last week's failed coup in the Soviet Union, U.S. intelligence overheard the general commanding all strategic nuclear missiles on Soviet land give a highly unusual order. Gen. Yuri Maksimov, commander-in-chief of the Soviets' Strategic Rocket Forces, ordered his SS-25 mobile nuclear missile forces back to their bases from their battle-ready positions in the field, said Bruce Blair, a former Strategic Air Command nuclear triggerman who studies the Soviet command system at the Brookings Institution.

“He was defying the coup. By bringing the SS-25s out of the field and off alert, he reduced their combat readiness and severed their links to the coup leaders,”  said Blair.
That firm hand on the nuclear safety catch showed that political chaos in the Soviet Union actually may have reduced the threat posed to the world by the Soviets' 30,000 nuclear warheads, said several longtime U.S. nuclear war analysts. The Soviet nuclear arsenal, the world's largest, has the world's strictest controls, far stricter than those in the U.S., they said. Those controls remained in place, and in some cases tightened, during last week's failed coup – even when the coup plotters briefly stole a briefcase containing codes and communications equipment for launching nuclear weapons from Soviet President Mikhail Gorbachev.

And here is R. W. Johnson, in a book review in the London Review of Books (2011-04-28):

One of the unheralded heroes of the end of the Cold War was General Y.P. Maksimov, the commander in chief of the Soviet strategic rocket forces during the hardliners' coup against Gorbachev in August 1991. He made a pact with the heads of the navy and air force to disobey any order by the coup plotters to launch nuclear weapons. There was extreme concern in the West that the coup leader, Gennady Yanayev, had stolen Gorbachev's Cheget (the case containing the nuclear button) and the launch codes, and that the coup leaders might initiate a nuclear exchange. Maksimov ordered his mobile SS-25 ICBMs to be withdrawn from their forest emplacements and shut up in their sheds – knowing that American satellites would relay this information immediately to Washington. In the event, the NSA let President Bush know that the rockets were being stored away in real time.

References:
R. W. Johnson [2011]:  Living on the Edge. London Review of Books, 33 (9):  32-33 (2011-04-28).

XX Foxiness: Counter-espionage

I have just read Ben Macintyre's superb "Double Cross: The True Story of the D-Day Spies" (Bloomsbury, London, 2012), which describes the successful counter-espionage operation conducted by the British against the Nazis in Britain during WW II. Every Nazi foreign agent in Britain was captured and either tried and executed, or turned, being run as a double agent by the so-called Twenty ("XX") Committee. This network of double agents, many of whom created fictional sub-agents, became a secret weapon of considerable power, able to mislead and misdirect Nazi war efforts through their messages back to their German controllers (in France, Portugal, Spain and Germany).
The success of these misdirections could be gauged precisely, since Britain was able to read most German encrypted communications through the work of Bletchley Park (the Enigma decryption project). Indeed, since the various German intelligence controllers often simply passed on the messages they received from their supposed agents in Britain verbatim (ie, without any summarization or editing), these messages helped the decoders break each German daily cypher: the decoders had both the original message sent from Britain and its encrypted version communicated between German intelligence offices in (say) Lisbon and Berlin.
This secret weapon was used most famously to deflect Nazi attention from the true site of the D-Day landings in France. So successful was this deception, with entire fictional armies created and reported on in South-East England and in Scotland (for purported attacks on Calais and on Norway), that even after the war's end, former Nazi military leaders talked about the Allies' non-use of these vast forces, still not realizing the fiction.
One interesting question is the extent to which parts of German intelligence were witting of, or even complicit in, this deception. The Abwehr, the German military intelligence organization, under Admiral Wilhelm Canaris (its head from 1935 to 1944), was notoriously anti-Nazi; indeed, many of its members were arrested for plotting against Hitler. Certainly, if not witting or complicit, many of its staff were financially corrupt, and happy to take a percentage of payments made to agents they knew or suspected to be fictional.
Another fascinating issue is that sometimes it may be better not to know something. One Abwehr officer, Johnny Jebsen, remained inside the organization while secretly talking to the British about defecting. The British could not, of course, be sure where his true loyalties lay while he remained with the Abwehr. Despite their best efforts to stop him, he told them of all the German secret agents then working in Britain. They had tried to stop him because, once he had told them, he would know that they knew whom the Germans believed their agents to be. The British reaction to this knowledge (arrest each agent, or leave the agent in place) would thus tell Jebsen which agents were really working for the Nazis and which were in fact double agents.
Jebsen was drugged and forcibly returned to Germany by the Abwehr (apparently to pre-empt his arrest by the SS, which would have provided an excuse for the closure of the Abwehr), and was then tortured, sent to a concentration camp, and probably murdered by the Nazis. It seems he revealed nothing of what he knew about the British deceptions, withstanding the torture very bravely. Macintyre rightly admires him as one of the unsung heroes of this story.
Had Jebsen been able to defect to Britain, as others did, the British would have faced the same quandary that later confronted both the CIA and the KGB with each defecting espionage agent during the Cold War: is this person a genuine defector or a plant by the other side? I have talked before about some of the issues of what to believe, what to pretend to believe, and what to do in the case of KGB defector (and, IMHO, likely plant) Yuri Nosenko, here and here.
 

Strategy vs. Tactics

What is the difference between strategy and tactics? In my experience, many people cannot tell the difference, or at least speak as if they conflate the two. Personally, I have never had difficulty telling them apart.
The 18th-century British naval definition was that tactics are for when you can see the enemy's ships, and strategy is for when you cannot. When you can see the enemy's ships there are still important unknown variables, but you should know how many ships there are, where they are located, and (within some degree of accuracy) what hostile actions they are capable of. If you are close enough to identify the particular ships you can see, you may also know the identities of their captains. With knowledge of past engagements, you may thus be able to estimate the intentions, likely behaviors, and fighting will of the ships' crews. None of these variables is known when the ships lie beyond the horizon.
Thus, tactics describe your possible actions when you know who the other stakeholders in your situation are, and you have accurate (although not necessarily precise) information about their capabilities, goals, preferences, and intentions. The extent to which such knowledge is missing is the extent to which reasoning about potential actions becomes strategic rather than tactical. These distinctions are usually quite clear in marketing contexts. For instance, licking envelopes for a client's direct marketing campaign is not strategic consultancy, nor is finding, cleaning, verifying, and compiling the addresses needed by the client to put on the envelopes. (This is not to say that either task can be done well without expertise and experience.) Advising a client to embark on a direct marketing campaign rather than (say) a television ad campaign is closer to strategic consultancy, although in some contexts it may be mere tactics. Determining ahead of time which segments of the potential customer population should be targeted with an advertising campaign is definitely strategic, as is deciding whether or not to enter (or stay in) the market.
The key difference between the two is that articulating a strategy requires taking a view on the values of significant uncertain variables, whereas articulating a tactic generally does not.
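One way to make this difference concrete is in code. Below is a minimal sketch (the actions, payoff figures and scenario probabilities are all illustrative assumptions of mine, not data): the tactical chooser knows the value of the significant variable and simply optimizes, while the strategic chooser must first take a view on it.

```python
# Payoff of each marketing action under each demand scenario (made-up figures).
payoffs = {
    "direct_mail": {"demand_high": 120, "demand_low": 40},
    "tv_campaign": {"demand_high": 300, "demand_low": -80},
}

def tactical_choice(known_scenario):
    """Tactics: the uncertain variable is known, so just optimize."""
    return max(payoffs, key=lambda action: payoffs[action][known_scenario])

def strategic_choice(scenario_view):
    """Strategy: commit to a view (probabilities over scenarios) first,
    then maximize expected payoff under that view."""
    def expected_payoff(action):
        return sum(p * payoffs[action][s] for s, p in scenario_view.items())
    return max(payoffs, key=expected_payoff)

print(tactical_choice("demand_high"))                             # tv_campaign
print(strategic_choice({"demand_high": 0.3, "demand_low": 0.7}))  # direct_mail
```

The point of the sketch is that `strategic_choice` cannot even be called until a view on the uncertain variable has been articulated; change the view, and the chosen action may change with it.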

Complex Decisions

Most real-world business decisions are considerably more complex than the examples presented by academics in decision theory and game theory.  What makes some decisions more complex than others? Here I list some features, not all of which are present in all decision situations.

  • The problems are not posed in a form amenable to classical decision theory.

    Decision theory requires the decision-maker to know what his or her action-options are, what the consequences of these are, what uncertain events may influence these consequences, and what the probabilities of these uncertain events are (and to know all these matters in advance of the decision). Yet, for many real-world decisions, this knowledge is either absent, or known only in some vague, intuitive way. The drug thalidomide, for example, was tested thoroughly before it was sold commercially – on male and female human subjects, adults and children. The only group not to be tested were pregnant women, who were, unfortunately, the main group for which the drug had serious side effects. These side effects were consequences which had not been imagined before the decision to launch was made. Decision theory does not tell us how to identify the possible consequences of a decision, so what use is it in real decision-making?

  • There are fundamental domain uncertainties.

    None of us knows the future. Even with considerable investment in market research, future demand for new products may not be known because potential customers themselves do not know with any certainty what their future demand will be. Moreover, in many cases, we don’t know the past either. I have had many experiences where participants in a business venture have disagreed profoundly about the causes of failure, or even success, and so have taken very different lessons from the experience.

  • Decisions may be unique (non-repeated).

    It is hard to draw on past experience when something is being done for the first time. This does not stop people trying, and so decision-making by metaphor or by anecdote is an important feature of real-world decision-making, even though mostly ignored by decision theorists.

  • There may be multiple stakeholders and participants to the decision.

    In developing a business plan for a global satellite network, for example, a decision-maker would need to take account of the views of a handful of competitors, tens of major investors, scores of minor investors, approximately two hundred national and international telecommunications regulators, a similar number of national company-law authorities, scores of upstream suppliers (eg, equipment manufacturers), hundreds of employees, hundreds of downstream service wholesalers, thousands of downstream retailers, thousands or millions of shareholders (if listed publicly), and millions of potential customers. To ignore or oppose the views of any of these stakeholders could doom the business to failure. As it happens, game theory isn't much use with this number and complexity of participants. Moreover, despite the view commonly held in academia, most large Western corporations operate with a form of democracy. (If the opinions of intelligent, capable staff are regularly overridden, those staff will simply leave, so competition ensures democracy. In addition, good managers know that decisions unsupported by their staff will often be executed poorly, so the success of a decision may depend on the extent to which staff believe it has been reached fairly.) Accordingly, all major decisions are decided by groups or teams, not at the sole discretion of an individual. Decision theorists, it seems to me, have paid insufficient attention to group decisions: we hear lots about Bayesian decision theory, but where, for example, is the Bayesian theory of combining subjective probability assessments? (A sketch of two candidate pooling rules follows this list.)

  • Domain knowledge may be incomplete and distributed across these stakeholders.
  • Beliefs, goals and preferences of the stakeholders may be diverse and conflicting.
  • Beliefs, goals and preferences of stakeholders, the probabilities of events and the consequences of decisions, may be determined endogenously, as part of the decision process itself.

    For instance, economists use the term network good to refer to a good where one person's utility depends on the utility of others. A fax machine is an example, since being the sole owner of a fax machine is of little value to a consumer. Thus, a rational consumer would determine his or her preferences for such a good only AFTER learning the preferences of others. In other words, rational preferences are determined only in the course of the decision process, not beforehand. Having considerable experience in marketing, I contend that ALL goods and services have a network-good component. Even so-called commodities, such as natural resources or telecommunications bandwidth, have demand which is subject to fashion and peer pressure. "You can't get fired for buying IBM," as the old saying had it. And an important function of advertising is to allow potential consumers to infer the likely preferences of other consumers, so that they can then determine their own preferences. If an advertisement appeals to people like me, or people I aspire to be like, then I can infer that those others are likely to prefer the product being advertised, and thus I can determine my own preferences for it. Similarly, if the advertisement appeals to people I don't aspire to be like, then I can infer that I won't be subject to peer pressure or fashion trends, and can determine my preferences accordingly.
    This is common sense to marketers, even if heretical to many economists.

  • The decision-maker may not fully understand what actions are possible until he or she begins to execute.
  • Some actions may change the decision-making landscape, particularly in domains where there are many interacting participants.

    A bold announcement by a company to launch a new product, for example, may induce competitors to follow and so increase (or decrease) the chances of success. For many goods, an ecosystem of critical size may be required for success, and bold initiatives may act to create (or destroy) such ecosystems.

  • Measures of success may be absent, conflicting or vague.
  • The consequences of actions, including their success or failure, may depend on the quality of execution, which in turn may depend on attitudes and actions of people not making the decision.

    Most business strategies are executed by people other than those who developed or decided the strategy. If the people undertaking the execution are not fully committed to the strategy, they generally have many ways to undermine or subvert it. In military domains, the so-called Powell Doctrine, named after General Colin Powell (later US Secretary of State), holds that foreign military actions undertaken by a democracy may only be successful if those actions have majority public support. (I have written on this topic before.)

  • As a corollary of the previous feature, success of an action may require extensive and continuing dialog with relevant stakeholders, before, during and after its execution.

    This is not news to anyone in business.

  • Success may require pre-commitments before a decision is finally taken.

    In the 1990s, many telecommunications companies bid for national telecoms licences in foreign countries. Often, an important criterion used by the governments awarding these licences was how quickly each potential operator could launch commercial service. To ensure that they could launch service quickly, some bidders resorted to making purchase commitments with suppliers, and even to installing equipment, ahead of knowing the outcome of a bid, and, in at least one case I know of, even ahead of deciding whether or not to bid.

  • The consequences of decisions may be slow to realize.

    Satellite mobile communications networks have typically taken ten years from serious inception to launch of service. The oil industry usually works on 50+ year cycles for major investment projects. BP is currently suffering the consequences, in the Gulf of Mexico, of what appears to be a decades-long culture which de-emphasized safety and adequate contingency planning.

  • Decision-makers may influence the consequences of decisions and/or the measures of success.
  • Intelligent participants may model each other in reaching a decision, what I term reflexivity.

    As a consequence, participants are not only reacting to events in their environment, they are anticipating events and the reactions and anticipations of other participants, and acting proactively to these anticipated events and reactions. Traditional decision theory ignores this. Following Nash, traditional game theory has modeled the outcomes of one such reasoning process, but not the processes themselves. Evolutionary game theory may prove useful for modeling these reasoning processes, although assuming a sequence of identical, repeated interactions does not strike me as an immediate way to model a process of reflexivity.  This problem still awaits its Nash.
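As promised above, here is a minimal sketch of the two textbook candidate rules for combining the subjective probability assessments of several stakeholders: the linear and logarithmic opinion pools. The assessments and weights are illustrative assumptions of mine, and neither rule is "the" Bayesian answer; the absence of a single compelling answer is precisely the gap noted in the list.

```python
import math

def linear_pool(probs, weights):
    """Weighted arithmetic mean of the individual probabilities."""
    return sum(w * p for p, w in zip(probs, weights))

def log_pool(probs, weights):
    """Weighted geometric mean, renormalized over the event and its complement."""
    yes = math.prod(p ** w for p, w in zip(probs, weights))
    no = math.prod((1 - p) ** w for p, w in zip(probs, weights))
    return yes / (yes + no)

# Three stakeholders' subjective probabilities that a proposed venture succeeds,
# with weights reflecting (say) trust in each assessor -- all numbers made up.
assessments = [0.9, 0.6, 0.2]
weights = [0.5, 0.3, 0.2]

print(linear_pool(assessments, weights))   # 0.67
print(log_pool(assessments, weights))      # roughly 0.72
```

Note that the two rules disagree, and each violates some plausible axiom of group rationality (the linear pool is not externally Bayesian; the logarithmic pool does not commute with marginalization), which is one reason the question remains open.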

In my experience, classical decision theory and game theory do not handle these features very well; in some cases, indeed, not at all.  I contend that a new theory of complex decisions is necessary to cope with decision domains having these features.

Metrosexual competition

Writing about the macho world of pure mathematics (at least, in my experience, in analysis and group theory, less so in category theory and number theory) led me to think that some academic disciplines seem hyper-competitive: physics, philosophy, and mainstream economics come to mind. A problem for economics is that the domain of the discipline includes the study of competition, and the macho, hyper-competitive nature of academic economists has, I believe, led them astray in their thinking about the marketplace competition they claim to be studying. They have assumed that their own nasty, bullying, dog-eat-dog world is a good model for the world of business.

If business were truly the self-interested, take-no-prisoners world of competition described in economics textbooks and assumed in mainstream economics, our lives would all be very different. Fortunately, our world is mostly not like this. One example is telecommunications, where companies compete and collaborate with each other at the same time, and often through the same business units. For instance, British Telecommunications and Vodafone are competitors (both directly, in the same product categories, and indirectly, through partial substitutes such as fixed and mobile services) and collaborators, through the legally-required and commercially-sensible inter-connection of their respective networks. Indeed, for many years each company was the other's largest customer, since the inter-connection of their networks meant that each completed calls originating on the other's network, and thus each received payments from the other.

Do you seek to drive your main competitor out of business when that competitor is also your largest customer? Would you do this, stupid as it seems, knowing that your competitor could retaliate (perhaps pre-emptively!) by disconnecting your network or degrading the quality of the calls that interconnect with it? No rational business manager would do this, although perhaps an economist might.

Nor would you destroy your competitors when you and they share physical infrastructure – co-locating switches in each other's buildings, for example, or sharing rural cellular base stations, both of which are common in telecommunications. And, to complicate matters, large corporate customers of telecommunications companies increasingly want direct access to the telco's own switches, leading to very porous boundaries between companies and their suppliers. Cold-War models such as mutually-assured destruction or the iterated prisoners' dilemma are, in my opinion, better models for this marketplace than the mainstream one-shot utility-maximizing models.
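Here is a minimal sketch of the iterated prisoners' dilemma point, using the textbook payoff values (the numbers are the standard illustrative ones, not anything drawn from the telecoms examples above). A firm playing tit-for-tat against another tit-for-tat player sustains co-operation indefinitely, while unconditional defection gains almost nothing against it:

```python
# (my_move, their_move) -> my payoff; "C" = co-operate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Co-operate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each side sees the other's past moves
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained co-operation
print(play(always_defect, tit_for_tat))  # (104, 99): a one-round gain, then stalemate
```

The repeated game, unlike the one-shot version, makes co-operation between rivals individually rational, which is exactly the pattern the interconnection examples above display.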

You might protest that telecommunications is a special case, since the product is a network good – that is, one where a customer's utility from a service may depend on the number of other customers also using it. However, even for non-networked goods, the fact that business usually involves repeated interactions with the same group of people (and is decidedly not a one-shot interaction) leads to more co-operation than is found in an economist's philosophy.

The empirical studies of hedge funds undertaken by the sociologist Donald MacKenzie, for example, showed the great extent to which hedge fund managers rely, in their investment decisions, on information they receive from their competitors. Because everyone hopes to come to work tomorrow and the day after, as well as today, there are strong incentives not to misuse these networks by, for instance, disseminating false or explicitly self-serving information.

It’s a dog-help-dog world out there!

Reference:
Iain Hardie and Donald MacKenzie [2007]:  Assembling an economic actor: the agencement of a hedge fund. The Sociological Review, 55 (1): 57-80.

Argumentation in public health policy

While on the subject of public health policy-making under conditions of ignorance: the linguist Louise Cummings has recently published an interesting article about the logical fallacies used in the UK debate about possible human variants of mad-cow disease just over a decade ago (Cummings 2009). Two fallacies were common in the scientific and public debates of the time (italics in original):
An Argument from Ignorance:

FROM: There is no evidence that BSE in cattle causes CJD in humans.
CONCLUDE:  BSE in cattle does not cause CJD in humans.

An Argument from Analogy:

FROM:  BSE is similar to scrapie in certain respects.
AND: Scrapie has not transmitted to humans.
CONCLUDE:   BSE will not transmit to humans.

Cummings argues that such arguments were justified for science policy, since the two presumptive conclusions adopted acted to guide the direction and prioritisation of subsequent scientific research efforts.  These presumptive conclusions did so despite both being defeasible, and despite, in fact, both being subsequently defeated by the scientific research they invoked.   This is a very interesting viewpoint, with much to commend it as a way to construe (and to reconstrue) the dynamics of scientific epistemology using argumentation.  It would be nice to combine such an approach with Marcello Pera’s 3-person model of scientific progress (Pera 1994), the persons being:  the Investigator, the Scientific Community, and Nature.
Some might be tempted also to believe that these arguments were justified in public-health policy terms – for example, in calming a nervous public over fears regarding possible BSE in humans. However, because British public policy-makers did in fact do just this, and because the presumptive conclusions were subsequently defeated (ie, shown to be false), the long-term effect has been to make the great British public extremely suspicious of any similar official pronouncements. The rise in parents refusing the triple MMR vaccine for their children is a direct consequence of the false assurances we were given by British health ministers about the safety of eating beef. An argumentation-based theory of dynamic epistemology in public policy would therefore need to include some game theory. There is also a close connection to be made to the analysis of the effects of propaganda and counter-propaganda (as in George 1959), and of intelligence and counter-intelligence.
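To make the defeasibility point concrete, here is a toy sketch (my own simplistic representation, not Cummings' formalism) of how a presumptive conclusion from an argument from ignorance can be adopted and later defeated, when the research it prompted delivers contrary evidence:

```python
def argue_from_ignorance(evidence):
    """Presume 'BSE does not cause vCJD in humans' unless defeating evidence arrives."""
    if "BSE causes vCJD" in evidence:
        return "Presumption defeated: BSE can cause vCJD in humans."
    return "Presumptive conclusion (defeasible): BSE does not cause vCJD in humans."

print(argue_from_ignorance(set()))                # pre-1996: no link yet observed
print(argue_from_ignorance({"BSE causes vCJD"}))  # post-1996: new-variant CJD cases
```

The interesting feature, on Cummings' account, is that the presumption was epistemically useful while it stood: it directed the very research programme that eventually defeated it.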
References:
Louise Cummings [2009]: Emerging infectious diseases: coping with uncertainty. Argumentation, 23 (2): 171-188.
Alexander L. George [1959]: Propaganda Analysis:  A Study of Inferences Made from Nazi Propaganda in World War II.  (Evanston, IL, USA: Row, Peterson and Company).
Marcello Pera [1994]: The Discourses of Science. (Chicago, IL, USA: University of Chicago Press).

The network is the consumer

Economists use the term network good to refer to a product or service where one user's utility depends, at least partly, on the utility received by other users. A fax machine is an example, since being the sole owner of a fax machine is of little value to anyone; only when others in your business network also own fax machines does owning one provide value to you. Thus, a rational consumer would determine his or her preferences for such a good only AFTER learning the preferences of others. This runs counter to the standard model of decisions in economic decision theory, in which consumers come to a purchase decision with their preferences pre-installed; for network goods, the preferences of rational consumers are instead formed in the course of the decision process itself, not determined beforehand. Preferences are emergent phenomena, in the jargon of complex systems.
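Here is a minimal sketch of preferences as emergent phenomena, using a standard threshold model of adoption (the consumers, thresholds and seed adopter are all illustrative assumptions of mine). Each consumer wants the good only once enough others have it, so no one's preference is settled before the process runs:

```python
def adoption_cascade(thresholds, initial_adopters):
    """Iterate until no further consumer's adoption threshold is met."""
    adopters = set(initial_adopters)
    changed = True
    while changed:
        changed = False
        for person, threshold in thresholds.items():
            if person not in adopters and len(adopters) >= threshold:
                adopters.add(person)   # this preference forms endogenously
                changed = True
    return adopters

# Five consumers, each of whom will want a fax machine only after seeing
# at least N other owners:
thresholds = {"ann": 0, "bob": 1, "cat": 2, "dan": 3, "eve": 10}
print(sorted(adoption_cascade(thresholds, {"ann"})))  # ['ann', 'bob', 'cat', 'dan']
```

Note that "eve" never adopts: her final preference is as much a product of the cascade (which fails to reach her threshold) as of anything intrinsic to the good itself.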

What I find interesting as a marketer is that ALL products and services have a network-good component. Even so-called commodities, such as natural resources or telecommunications bandwidth, can be subject to fashion and peer-group pressure in their demand. "You can't get fired for buying IBM," as the old saying had it. Sellers of commodities such as coal or bauxite know that buyers make their decisions, at least in part, on the basis of what other large buyers are deciding. Lest any mainstream economist reading this disparage such consumer behaviour, note that in an environment of great uncertainty or instability it can be perfectly rational to follow the crowd when making purchase decisions, since a group may have access to information that no single buyer does. If you are buying coal from Australia for your steel plant in Japan, and you learn that your competitors are switching to buying coal from Brazil, there could be good reasons for this; as they are your competitors, it may be difficult for you to discover what those good reasons are, and so imitation may be your most rational strategic response.

For any product or service with a network component, even the humblest, there are deep implications for marketing strategy and tactics. For example, advertising may not merely provide information to potential consumers about the product and its features. It can also help potential consumers to infer the likely preferences of other consumers, and so to determine their own preferences. If an advertisement appeals to people like me, or people I aspire to be like, then I can infer that those other people are likely to prefer the product being advertised, and thus I can determine my own preferences for it. Similarly, if the advertisement appeals to people I don't aspire to be like, then I can infer that I won't be subject to peer pressure or fashion trends, and can determine my preferences accordingly.

For several decades, the prevailing social paradigm for describing modern Western society has been that of The Information Society, and so, for example, advertising has been seen by many people primarily as a form of information transmission. But, in my opinion, we in the West are entering an era where a different paradigm is appropriate, perhaps best called The Joint-Action Society; advertising then is also assisting consumers to co-ordinate their preferences and their decisions. I'll talk more about the Joint-Action Society in a future post.