Good decisions

Which decisions are good decisions?
Since 1945, mainstream economists have arrogated the word “rational” to describe a mode of decision-making which they consider to be best.   This method, called maximum-expected-utility (MEU) decision-making, assumes that the decision-maker has only a finite set of possible action-options and knows what these are; that she knows the possible consequences of each of these actions and can quantify (or at least estimate) these consequences on a single, common, numerical scale of value (the payoffs); that she knows a finite and complete collection of the uncertain events which may impact the consequences and their values; and that she knows (or at least can estimate) the probabilities of these uncertain events, again on a common numerical scale of uncertainty.  The MEU decision procedure is then to quantify the consequences of each action-option, weighting each consequence by the probability of the uncertain events which would give rise to it.
The decision-maker then selects that action-option which has the maximum expected consequential value, ie the consequential value weighted by the probabilities of the uncertain events. Such decision-making, in an abuse of language that cries out for criminal charges, is then called rational by economists.   Bayesian statistician Dennis Lindley even wrote a book about MEU which included the stunningly arrogant sentence, “The main conclusion [of this book] is that there is essentially only one way to reach a decision sensibly.”
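For concreteness, here is what the MEU procedure amounts to in a toy setting. This is a minimal sketch in Python; the two action-options, the three uncertain events and all of the numbers are invented purely for illustration, not taken from any real decision.

```python
# Maximum-expected-utility (MEU) choice in a toy decision problem (invented numbers).

# Probabilities of the uncertain events, assumed known, exhaustive and mutually exclusive.
probs = {"boom": 0.3, "flat": 0.5, "bust": 0.2}

# Payoff of each action-option under each uncertain event,
# all forced onto a single numerical scale of value.
payoffs = {
    "launch product": {"boom": 100, "flat": 20, "bust": -40},
    "do nothing":     {"boom": 10,  "flat": 10, "bust": 10},
}

def expected_utility(action):
    """Weight each consequence by the probability of the event that produces it."""
    return sum(probs[event] * payoffs[action][event] for event in probs)

for action in payoffs:
    print(action, expected_utility(action))   # launch product: 32.0, do nothing: 10.0

# The MEU rule: select the action-option with the largest expected value.
print("MEU choice:", max(payoffs, key=expected_utility))
```

Note how much the procedure presupposes: the options, the events, the payoffs and the probabilities must all be enumerated and quantified before the calculation can even begin.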

Rational?  This method is not even feasible, let alone sensible or good!
First, where do all these numbers come from?  With the explicit assumptions that I have listed, economists are assuming that the decision-maker has some form of perfect knowledge.  Well, no one making any real-world decisions has that much knowledge.  Of course, economists often respond that estimates can be used when the knowledge is missing.  But whose estimates?   Sourced from where?   Updated when? Anyone with any corporate or public policy experience knows straight away that consensus on such numbers for any half-way important problem will be hard to find.  Worse than that, any consensus achieved should immediately be suspected and interrogated, since it may be evidence of groupthink.    There simply is no certainty about the future, and if a group of people all do agree on what it holds, down to quantified probabilities and payoffs, they deserve the comeuppance they are likely to get!
Second, the MEU principle simply averages across uncertain events.   What of action-options with potentially catastrophic outcomes?   Their small likelihood of occurrence may mean they disappear in the averaging process, but no real-world decision-maker – at least, none with any experience or common sense – would risk a catastrophic outcome, however low its estimated probability.   Wall Street trading firms have off-site (and often out-of-city) backup IT systems, and sometimes even entire backup trading floors, ready for those rare events.
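To see how the averaging can bury a catastrophe, consider a contrived two-option example (all figures invented): an option carrying a one-in-ten-thousand chance of a ruinous loss can still come out ahead on expected value, so the MEU rule happily recommends it.

```python
# How the MEU average can hide a catastrophic outcome (invented numbers).
p_disaster = 1e-4                 # estimated probability of the catastrophe

safe  = 100                                              # certain payoff of the cautious option
risky = (1 - p_disaster) * 110 + p_disaster * (-50_000)  # usual small gain, rare ruin

print(safe, risky)                # 100 vs 104.989: MEU prefers the option that can ruin you
```

The ruinous outcome contributes only -5 to the average, so the arithmetic says take the risk; judgment says otherwise.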
Third, look at all the assumptions not made explicit in this framework.  There is no mention of the time allowed for the decision, so apparently the decision-maker has infinities of time available.  No mention is made of the processing or memory resources available for making the decision, so she has infinities of world also.   That makes a change from most real-world decisions:  what a pleasant utopia this MEU-land must be.  Nothing is said – at least nothing explicit – about taking into account the historical or other contexts of the decision, such as past decisions by this or related decision-makers, technology standards, legacy systems, organization policies and constraints, legal, regulatory or ethical constraints, or the strategies of the company or the society in which the decision-maker sits.   How could a decision procedure which ignores such issues be considered, even for a moment, rational?   I think only an academic could ignore context in this way; no business person I know would do so, since certain unemployment would be the result.  And how could members of an academic discipline purporting to be a social science accept and disseminate a decision-making framework which ignores such social, contextual features?
And do the selected action-options just execute themselves?  Nothing is said in this framework about consultation with stakeholders during the decision-process, so presumably the decision-maker has no one to report to, no board members or stockholders or division presidents or ward chairmen or electors to manage or inform or liaise with or mollify or reward or appease or seek re-election from, no technical departments to seek feasibility approval from, no implementation staff to motivate or inspire, no regulators or ethicists or corporate counsel to seek legal approval from, no funders or investors to raise finance from, no suppliers to convince to accept orders, no distribution channels to persuade to schedule throughput,  no competitors to second-guess or outwit, and no actual, self-immolating protesters outside one’s office window to avert one’s eyes from and feel guilt about for years afterward.*
For many complex decisions, the ultimate success or failure of the decision can depend significantly on the degree to which those having to execute the decision also support it.  Consequently, the choice of a specific action-option (and the logical reasoning process used to select it) may be far less important for the success of the decision than that key stakeholders feel they have been consulted appropriately during the reasoning process.  In other words, the quality of the decision may depend much more on how and with whom the decision-maker reasons than on the particular conclusion she reaches.   Arguably this is true of almost all significant corporate strategy decisions and major public policy decisions:  There is ultimately no point sending your military to prop up an anti-communist regime in South-East Asia, for example, if your own soldiers come to feel they should not be there (as I discuss here, regarding another decision to go to war).
Mainstream economists have a long way to go before they will have a theory of good decision-making.   In the meantime, it would behoove them to show some humility when criticizing the decision-making processes of human beings.**
Notes and Bibliography:
Oskar Lange [1945-46]:  The scope and method of economics.  The Review of Economic Studies, 13 (1): 19-32.
Dennis Lindley [1985]:  Making Decisions.  Second Edition. London, UK: John Wiley and Sons.
Leonard J Savage [1954]:  The Foundations of Statistics.  New York, NY, USA:  Wiley.
* I’m sure Robert McNamara, statistician and decision-theory whizz kid, never considered the reactions of self-immolating protesters when making decisions early in his career, but having seen one outside his office window late in his time as Secretary of Defense he seems to have done so subsequently.
** Three-toed sloth comments dialogically and amusingly on MEU theory here.

Herbert Hoover, zombie

I posted last week on Robert Skidelsky’s criticisms of the current British Government’s deflationary economic policy for lacking any rational theoretical underpinning.   Two Nobelistas have now joined the fray.  Here is Joe Stiglitz, writing about the apparent belief in a Confidence Fairy:

There is a shortage of aggregate demand – the demand for goods and services that generates jobs. Cutbacks in government spending will mean lower output and higher unemployment, unless something else fills the gap. Monetary policy won’t. Short-term interest rates can’t go any lower, and quantitative easing is not likely to substantially reduce the long-term interest rates government pays – and is even less likely to lead to substantial increases either in consumption or investment. If only one country does it, it might hope to gain an advantage through the weakening of its currency; but if anything the US is more likely to succeed in weakening its currency against sterling through its aggressive quantitative easing, worsening Britain’s trade position.
Of course if Britain succeeds in getting the world to believe that its economic policies are among the worst – an admittedly fierce contest at the moment – its currency may decline, but this is hardly the road to a recovery. Besides, in the malaise into which the global economy is sinking, the challenge will be to maintain exports; they can’t be relied on as a substitute for domestic demand. The few instances where small countries managed to grow in the face of austerity were those where their trading partners were experiencing a boom.
. . . .
Britain is embarking on a highly risky experiment. More likely than not, it will add one more data point to the well-established result that austerity in the midst of a downturn lowers GDP and increases unemployment, and excessive austerity can have long-lasting effects.
If Britain were wealthier, or if the prospects of success were greater, it might be a risk worth taking. But it is a gamble with almost no potential upside. Austerity is a gamble which Britain can ill afford.

And here is Paul Krugman, accusing the  British Government of being dedicated followers of fashion:

In the spring of 2010, fiscal austerity became fashionable. I use the term advisedly: the sudden consensus among Very Serious People that everyone must balance budgets now now now wasn’t based on any kind of careful analysis. It was more like a fad, something everyone professed to believe because that was what the in-crowd was saying.
. . . .
But trendy fashion, almost by definition, isn’t sensible — and the British government seems determined to ignore the lessons of history.
Both the new British budget announced on Wednesday and the rhetoric that accompanied the announcement might have come straight from the desk of Andrew Mellon, the Treasury secretary who told President Herbert Hoover to fight the Depression by liquidating the farmers, liquidating the workers, and driving down wages. Or if you prefer more British precedents, it echoes the Snowden budget of 1931, which tried to restore confidence but ended up deepening the economic crisis.
The British government’s plan is bold, say the pundits — and so it is. But it boldly goes in exactly the wrong direction. It would cut government employment by 490,000 workers — the equivalent of almost three million layoffs in the United States — at a time when the private sector is in no position to provide alternative employment. It would slash spending at a time when private demand isn’t at all ready to take up the slack.
Why is the British government doing this? The real reason has a lot to do with ideology: the Tories are using the deficit as an excuse to downsize the welfare state. But the official rationale is that there is no alternative.
Indeed, there has been a noticeable change in the rhetoric of the government of Prime Minister David Cameron over the past few weeks — a shift from hope to fear. In his speech announcing the budget plan, George Osborne, the chancellor of the Exchequer, seemed to have given up on the confidence fairy — that is, on claims that the plan would have positive effects on employment and growth.
Instead, it was all about the apocalypse looming if Britain failed to go down this route. Never mind that British debt as a percentage of national income is actually below its historical average; never mind that British interest rates stayed low even as the nation’s budget deficit soared, reflecting the belief of investors that the country can and will get its finances under control. Britain, declared Mr. Osborne, was on the “brink of bankruptcy.”
What happens now? Maybe Britain will get lucky, and something will come along to rescue the economy. But the best guess is that Britain in 2011 will look like Britain in 1931, or the United States in 1937, or Japan in 1997. That is, premature fiscal austerity will lead to a renewed economic slump. As always, those who refuse to learn from the past are doomed to repeat it.

A pity for all of us here that there is no there there in current UK economic policy.

Cutting morality

When, in 2 or 5 or 25 years,  we look back on this strange, phony-war period of wasted economic opportunity, we will wonder why the lessons of the Great Depression – lessons that we know, and that we know that we know – are not being applied by those with the power to decide levels of Government spending:  Congress in the USA, the ConDem coalition in the UK, austeritarians everywhere.    The reason for the drive to austerity cannot be ignorance, for we know full well that this policy is inappropriate in the present circumstances.   Here is Robert Skidelsky, economist and Keynes’ biographer, writing in the Financial Times this week (2010-10-13), showing the wrong-headedness of a policy of cutting spending in a recession:

David Cameron, Mr Osborne, and Nick Clegg appear to believe in something called “crowding out”. This is the view that for every extra pound the government spends, the private sector spends one pound less.  Jobs created by stimulus spending are jobs lost by the decline of private spending. Any stimulus to revive the economy is doubly damned: not only does it fail to stimulate, but, because government spending is less efficient than private, it reduces the economy’s longer term recovery potential.
Applied to the deficit, the “crowding out” thesis takes two forms. The first is “Ricardian equivalence”. Government borrowing is simply deferred taxation, because it produces no revenue to pay for it. Households save more to pay the higher taxes they expect. This means that any extra income created by the deficit will be saved, not spent. Net stimulus: zero.
The other leg of the “crowding out” argument is that government borrowing causes interest rates to rise. There is a fixed lump of saving. The more the government borrows, the more private borrowers will have to pay for their loans.
A refinement of this argument is “psychological crowding out”. In this version it is not a shortage of saving, but a shortage of confidence in the government’s creditworthiness – due to a fear of default – which causes interest rates to rise. Either way the deficit “crowds out” private investment. Net stimulus: zero.
The supposed implication of this type of argument is that in the short-run the deficit can do no good; and that in the slightly longer term it harms the potential for recovery. What the cutters have to believe is that every pound of deficit reduction will be matched by an extra pound of private sector spending.  That is, if the government weren’t spending this money, the private sector would be, and making much better use of it. Mr Osborne’s programme is a beautiful cure for recession, provided there’s no recession to cure!
Keynesians do not deny the possibility of “psychological crowding out”: markets are subject to all kinds of irrational hopes and fears. But what the cutters mean by “crowding out” can normally only happen at full employment.  At full employment, extra public spending obviously subtracts from private spending. But this is not the position we are in today.
What Keynesians say is that when resources are unemployed, government borrowing is not deferred taxation: it brings resources into use that would otherwise be idle, and thus increases the government’s revenues without having to raise taxes. When the government borrows money for which there is no current business use, this increases people’s incomes and therefore the saving needed to finance the borrowing, without interest rates having to rise. And though confidence problems may occur even in an under-employed economy, the probability of the UK government defaulting on its debt is, if not zero, extremely low.
In short, the “crowding out” argument is false.  The problem is not the expansion of the deficit but the shrinkage of the economy. The deficit is the stimulant the economy needs to start growing again: its withdrawal guarantees stagnation or worse.

With such knowledge, what forgiveness?  Ignorance of the appropriate macro-economic policy thus cannot be the reason for our political leaders adopting a policy of drastic cuts.  The reason for cutting now can only be a desire to reduce the total levels of Government spending to further some ideological agenda, regardless of the deleterious economic and social consequences of the policy.
In Britain, the Conservative and Unionist Party has prepared for this ideological moment for some time, despite appearances to the contrary.  In the period leading up to the May 2010 election, the Conservative party was awash with funds.  The party paid to place enormous campaign posters in central Liverpool, in the constituency of Liverpool Riverside, a constituency that has been held by the Labour Party since the constituency’s creation in 1983.  Liverpool Riverside was formed from constituencies which had been held by Labour since 1964 (Liverpool Toxteth, although for 2 years its MP was a Social Democrat), 1945 (Liverpool Exchange), and 1929 (Liverpool Scotland, before which it was held from 1885 by the prominent Irish Nationalist, TP (aka “Tay Pay”) O’Connor).  In the election of May 2010, the Labour MP, Louise Ellman, actually increased her share of the vote to 59%, and the Conservatives placed third, with a mere 11% of the vote.   In other words, parts of Liverpool Riverside have not voted for the Conservative Party for more than 125 years, almost back to the time when the Party actively prevented Jewish emancipation.
Why would the Conservative and Unionist party waste large sums of money on campaign posters in a constituency it would never win?  The answer is in the content of the posters:  The posters were billboard size and showed a picture of Gordon Brown’s head with a slogan blaming him for increasing the national debt massively.   What they did not do was thank Brown for steering the economy successfully through the worst recession for 80 years, nor for saving millions from unemployment, nor for leading the G20 nations in policies to ensure the world did not suffer worse, nor for leading global efforts to re-regulate the financial sector to prevent a repeat of the events leading to the crash.   With such posters, the ground was being prepared for a push for austerity, even months before the election, and despite the warm and fuzzy noises of the Conservative leadership during the campaign itself.
These posters were the tendentious work of ideologues, intent on reducing the size of the state, regardless of any economic or social consequences, and undertaken with forethought.  Given the consequences of a policy of large cuts at the present time, and our knowledge of them, adopting such a deleterious policy is malicious and immoral, and shames all those who have promoted it.

Precision as the enemy of knowledge

I have posted previously about the different ways in which knowledge may be represented.  A key lesson from the discipline of Artificial Intelligence in its short life thus far is that not all representations are equal.    Indeed, more precise representations may provide less information, as in this example from cartography (from a profile of economist Paul Krugman):

Again, as in his [Krugman’s] trade theory, it was not so much his idea [that regional economic specializations were essentially due to historical accidents] that was significant as the translation of the idea into a mathematical language.  “I explained this basic idea” – of economic geography – “to a non-economist friend,” Krugman wrote, “who replied in some dismay, ‘Isn’t that pretty obvious?’  And of course it is.”  Yet, because it had not been well modelled, the idea had been disregarded by economists for years.  Krugman began to realize that in the previous few decades economic knowledge that had not been translated into [tractable analytical mathematical] models had been effectively lost, because economists didn’t know what to do with it.  His friend Craig Murphy, a political scientist at Wellesley, had a collection of antique maps of Africa, and he told Krugman that a similar thing had happened in cartography.  Sixteenth century maps of Africa were misleading in all kinds of ways, but they contained quite a bit of information about the continent’s interior – the River Niger, Timbuktu.  Two centuries later, mapmaking had become more accurate, but the interior of Africa had become a blank.  As standards for what counted as a mappable fact rose, knowledge that didn’t meet those standards – secondhand travellers’ reports, guesses hazarded without compasses or sextants – was discarded and lost.  Eventually, the higher standards paid off – by the nineteenth century the maps were filled in again – but for a while the sharpening of technique caused loss as well as gain. (page 45)

Reference:
Larissa MacFarquhar [2010]:  The deflationist:  How Paul Krugman found politics.  The New Yorker, 2010-03-01, pp. 38-49.

Concat 1: The GEC

A post to concatenate interesting material on the GFC and the GEC:

Mass customization of economic laws

Belatedly, I have just seen a column by John Kay in the FT of 13 April 2010 (subscribers only), entitled:  “Economics may be dismal, but it is not a science.” His column reminded me of Stephen Toulmin’s arguments in his book Cosmopolis about the universalizing tendencies of modern western culture these last four centuries, which I discussed here.
An excerpt from Kay’s column:

Both the efficient market hypothesis and DSGE [Dynamic Stochastic General Equilibrium models] are associated with the idea of rational expectations – which might be described as the idea that households and companies make economic decisions as if they had available to them all the information about the world that might be available. If you wonder why such an implausible notion has won wide acceptance, part of the explanation lies in its conservative implications. Under rational expectations, not only do firms and households know already as much as policymakers, but they also anticipate what the government itself will do, so the best thing government can do is to remain predictable. Most economic policy is futile.
So is most interference in free markets. There is no room for the notion that people bought subprime mortgages or securitised products based on them because they knew less than the people who sold them. When the men and women of Goldman Sachs perform “God’s work”, the profits they make come not from information advantages, but from the value of their services. The economic role of government is to keep markets working.
These theories have appeal beyond the ranks of the rich and conservative for a deeper reason. If there were a simple, single, universal theory of economic behaviour, then the suite of arguments comprising rational expectations, efficient markets and DSGE would be that theory. Any other way of describing the world would have to recognise that what people do depends on their fallible beliefs and perceptions, would have to acknowledge uncertainty, and would accommodate the dependence of actions on changing social and cultural norms. Models could not then be universal: they would have to be specific to contexts.
The standard approach has the appearance of science in its ability to generate clear predictions from a small number of axioms. But only the appearance, since these predictions are mostly false. The environment actually faced by investors and economic policymakers is one in which actions do depend on beliefs and perceptions, must deal with uncertainty and are the product of a social context. There is no universal economic theory, and new economic thinking must necessarily be eclectic. That insight is Keynes’s greatest legacy.

The glass bead game of mathematical economics

Over at the economics blog, A Fine Theorem, there is a post about economic modelling.
My first comment is that the poster misunderstands the axiomatic method in pure mathematics.  It is not the case that “axioms are by assumption true”.  Truth is a two-place relationship between some language or symbolic expression and the world.  Pure mathematicians using axiomatic methods make no assumptions about the relationship between their symbolic expressions of interest and the world.   Rather, they deduce consequences from the axioms, as if those axioms were true, but without assuming that they are.    How do I know they do not assume their axioms to be true?  Because mathematicians often work with competing, mutually-inconsistent, sets of axioms, for example when they consider both Euclidean and non-Euclidean geometries, or when looking at systems which assume the Axiom of Choice and systems which do not.   Indeed, one could view parts of the meta-mathematical theory called Model Theory as being the formal and deductive exploration of multiple, competing sets of axioms.
On the question of economic modeling, the blogger presents the views of Gerard Debreu on why the abstract mathematicization of economics is something to be desired.   One should also point out the very great dangers of this research program, some of which we are suffering now.  The first is that people — both academic researchers and others — can become so intoxicated with the pleasures of mathematical modeling that they mistake the axioms and the models for reality itself.  Arguably the widespread adoption of financial models assuming independent and normally-distributed errors was the main cause of the Global Financial Crisis of 2008, where the errors of complex derivative trades (such as credit default swaps) were neither independent nor as thin-tailed as Normal distributions are.  The GFC led, inexorably, to the Great Recession we are all in now.
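The thin-tail point can be made concrete. The sketch below compares the chance of a five-standard-deviation loss under a Normal model and under a Student-t distribution with three degrees of freedom; the Student-t is just one illustrative choice of fat-tailed distribution, not a claim about how any particular derivative was actually modelled.

```python
# Chance of a five-standard-deviation move: thin-tailed vs fat-tailed model.
import numpy as np
from scipy.stats import norm, t

df = 3                                  # degrees of freedom of the fat-tailed model
scale = np.sqrt(df / (df - 2))          # a Student-t with df=3 has standard deviation sqrt(3)

p_normal = norm.sf(5)                   # P(X > 5 sigma) under a standard Normal
p_fat = t.sf(5 * scale, df)             # same 5-sigma threshold under the Student-t

print(f"Normal:    {p_normal:.1e}")     # ~2.9e-07: roughly once in 3.5 million trading days
print(f"Student-t: {p_fat:.1e}")        # ~1.7e-03: roughly once in 600 trading days
```

A model that treats the second world as if it were the first will price rare, correlated losses as if they were practically impossible.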
Secondly, considered only as a research program, this approach has serious flaws.  If you were planning to construct a realistic model of human economic behaviour in all its diversity and splendour, it would be very odd to start by modeling only that one very particular, and indeed pathological, type of behaviour exemplified by homo economicus, so-called rational economic man.   Acting with infinite mental processing resources and time, with perfect knowledge of the external world, with perfect knowledge of his own capabilities, his own goals, own preferences, and indeed own internal knowledge, with perfect foresight or, if not, then with perfect knowledge of a measure of uncertainty overlaid on a pre-specified sigma-algebra of events, and completely unencumbered with any concern for others, with any knowledge of history, or with any emotions, homo economicus is nowhere to be found on any omnibus to Clapham.  Starting economic theory with such a creature of fiction would be like building a general theory of human personality from a study only of convicted serial killers awaiting execution, or like articulating a general theory of evolution using only a handbook of British birds.   Homo economicus is not where any reasonable researcher interested in modeling the real world would start from in creating a theory of economic man.
And, even if this starting point were not on its very face ridiculous, the fact that economic systems are complex adaptive systems should give economists great pause.   Such systems are, typically, acutely sensitive to their initial conditions and parameters, meaning that a small change in input values can result in a large change in output values.   In other words, you could have a model of economic man which was arbitrarily close to, but not identical with, homo economicus, and yet see wildly different behaviours between the two.  Simply removing the assumption of infinite mental processing resources creates a very different economic actor from the assumed one, and consequently very different properties at the level of economic systems.  Faced with such overwhelming sensitivity (and non-linearity), a naive person might expect economists to be humble about making predictions or giving advice to anyone living outside their models.   Instead, we get an entire profession labeling those human behaviours which their models cannot explain as “irrational”.
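The logistic map is the standard toy illustration of this kind of sensitivity; it is not an economic model, just the simplest nonlinear system in which two inputs differing in the seventh decimal place soon yield entirely different trajectories.

```python
# Sensitivity to initial conditions in the logistic map x -> r * x * (1 - x).
r = 4.0                          # parameter value in the chaotic regime
x, y = 0.2000000, 0.2000001      # two starting points differing by one part in ten million

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        # The gap grows roughly geometrically until the two trajectories are unrelated.
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")
```

If so crude a one-parameter recursion behaves like this, humility about predictions from far richer economic models would seem to be in order.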
My anger at The Great Wen of mathematical economics arises because of the immorality this discipline evinces:   such significant and rare mathematical skills deployed, not to help alleviate suffering or to make the world a better place (as those outside Economics might expect the discipline to aspire to), but to explore the deductive consequences of abstract formal systems, systems neither descriptive of any reality, nor even always implementable in a virtual world.

Complex Decisions

Most real-world business decisions are considerably more complex than the examples presented by academics in decision theory and game theory.  What makes some decisions more complex than others? Here I list some features, not all of which are present in all decision situations.

  • The problems are not posed in a form amenable to classical decision theory.

    Decision theory requires the decision-maker to know what are his or her action-options, what are the consequences of these, what are the uncertain events which may influence these consequences, and what are the probabilities of these uncertain events (and to know all these matters in advance of the decision). Yet, for many real-world decisions, this knowledge is either absent, or may only be known in some vague, intuitive, way. The drug thalidomide, for example, was tested thoroughly before it was sold commercially – on male and female human subjects, adults and children. The only group not to be tested was pregnant women, who were, unfortunately, the main group for which the drug had serious side effects. These side effects were consequences which had not been imagined before the decision to launch was made. Decision theory does not tell us how to identify the possible consequences of some decision, so what use is it in real decision-making?

  • There are fundamental domain uncertainties.

    None of us knows the future. Even with considerable investment in market research, future demand for new products may not be known because potential customers themselves do not know with any certainty what their future demand will be. Moreover, in many cases, we don’t know the past either. I have had many experiences where participants in a business venture have disagreed profoundly about the causes of failure, or even success, and so have taken very different lessons from the experience.

  • Decisions may be unique (non-repeated).

    It is hard to draw on past experience when something is being done for the first time. This does not stop people trying, and so decision-making by metaphor or by anecdote is an important feature of real-world decision-making, even though mostly ignored by decision theorists.

  • There may be multiple stakeholders and participants to the decision.

    In developing a business plan for a global satellite network, for example, a decision-maker would need to take account of the views of a handful of competitors, tens of major investors, scores of minor investors, approximately two hundred national and international telecommunications regulators, a similar number of national company law authorities, scores of upstream suppliers (eg equipment manufacturers), hundreds of employees, hundreds of downstream service wholesalers, thousands of downstream retailers, thousands or millions of shareholders (if listed publicly), and millions of potential customers. To ignore or oppose the views of any of these stakeholders could doom the business to failure. As it happens, Game Theory isn’t much use with this number and complexity of participants. Moreover, despite the view commonly held in academia, most large Western corporations operate with a form of democracy. (If opinions of intelligent, capable staff are regularly over-ridden, these staff will simply leave, so competition ensures democracy. In addition, good managers know that decisions unsupported by their staff will often be executed poorly, so success of a decision may depend on the extent to which staff believe it has been reached fairly.) Accordingly, all major decisions are decided by groups or teams, not at the sole discretion of an individual. Decision theorists, it seems to me, have paid insufficient attention to group decisions: We hear lots about Bayesian decision theory, but where, for example, is the Bayesian theory of combining subjective probability assessments? (A simple, non-Bayesian pooling device is sketched after this list.)

  • Domain knowledge may be incomplete and distributed across these stakeholders.
  • Beliefs, goals and preferences of the stakeholders may be diverse and conflicting.
  • Beliefs, goals and preferences of stakeholders, the probabilities of events and the consequences of decisions, may be determined endogenously, as part of the decision process itself.

    For instance, economists use the term network good to refer to a good for which one person’s utility depends on how many other people also use the good. A fax machine is an example, since being the sole owner of a fax machine is of little value to a consumer. Thus, a rational consumer would determine his or her preferences for such a good only AFTER learning the preferences of others. In other words, rational preferences are determined only in the course of the decision process, not beforehand.  Having considerable experience in marketing, I contend that ALL goods and services have a network-good component. Even so-called commodities, such as natural resources or telecommunications bandwidth, have demand which is subject to fashion and peer pressure. You can’t get fired for buying IBM, was the old saying. And an important function of advertising is to allow potential consumers to infer the likely preferences of other consumers, so that they can then determine their own preferences. If the advertisement appeals to people like me, or to people I aspire to be like, then I can infer that those others are likely to prefer the product being advertised, and thus I can determine my own preferences for it. Similarly, if the advertisement appeals to people I don’t aspire to be like, then I can infer that I won’t be subject to peer pressure or fashion trends, and can determine my preferences accordingly.
    This is commonsense to marketers, even if heretical to many economists.

  • The decision-maker may not fully understand what actions are possible until he or she begins to execute.
  • Some actions may change the decision-making landscape, particularly in domains where there are many interacting participants.

    A bold announcement by a company to launch a new product, for example, may induce competitors to follow and so increase (or decrease) the chances of success. For many goods, an ecosystem of critical size may be required for success, and bold initiatives may act to create (or destroy) such ecosystems.

  • Measures of success may be absent, conflicting or vague.
  • The consequences of actions, including their success or failure, may depend on the quality of execution, which in turn may depend on attitudes and actions of people not making the decision.

    Most business strategies are executed by people other than those who developed or decided the strategy. If the people undertaking the execution are not fully committed to the strategy, they generally have many ways to undermine or subvert it. In military domains, the so-called Powell Doctrine, named after former US Secretary of State Colin Powell, says that foreign military actions undertaken by a democracy may only be successful if these actions have majority public support. (I have written on this topic before.)

  • As a corollary of the previous feature, success of an action may require extensive and continuing dialog with relevant stakeholders, before, during and after its execution.

    This is not news to anyone in business.

  • Success may require pre-commitments before a decision is finally taken.

    In the 1990s, many telecommunications companies bid for national telecoms licences in foreign countries. Often, an important criterion used by the Governments awarding these licences was how quickly each potential operator could launch commercial service. To ensure that they could launch service quickly, some bidders resorted to making purchase commitments with suppliers and even installing equipment ahead of knowing the outcome of a bid, and even ahead, in at least one case I know, of deciding whether or not to bid.

  • The consequences of decisions may be slow to realize.

    Satellite mobile communications networks have typically taken ten years from serious inception to launch of service.  The oil industry usually works on 50+ year cycles for major investment projects.  BP is currently suffering the consequences in the Gulf of Mexico of what appears to be a decades-long culture which de-emphasized safety and adequate contingency planning.

  • Decision-makers may influence the consequences of decisions and/or the measures of success.
  • Intelligent participants may model each other in reaching a decision, what I term reflexivity.

    As a consequence, participants are not only reacting to events in their environment, they are anticipating events and the reactions and anticipations of other participants, and acting proactively to these anticipated events and reactions. Traditional decision theory ignores this. Following Nash, traditional game theory has modeled the outcomes of one such reasoning process, but not the processes themselves. Evolutionary game theory may prove useful for modeling these reasoning processes, although assuming a sequence of identical, repeated interactions does not strike me as an immediate way to model a process of reflexivity.  This problem still awaits its Nash.
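
On the gap mentioned above, combining the subjective probability assessments of several stakeholders, one simple device that does exist is the linear opinion pool: a weighted average of the individual assessments. It is not the Bayesian theory asked for, and the choice of weights is itself a judgment the method does not make for you. A minimal sketch, with three invented assessors and invented numbers:

```python
# Linear opinion pool: combine several subjective probability assessments of the
# same event by taking a weighted average (weights chosen by judgment, summing to 1).
assessments = {"marketing": 0.70, "engineering": 0.40, "finance": 0.55}
weights     = {"marketing": 0.50, "engineering": 0.30, "finance": 0.20}

pooled = sum(weights[k] * assessments[k] for k in assessments)
print(f"pooled probability: {pooled:.2f}")   # 0.35 + 0.12 + 0.11 = 0.58
```

Whether such a pooled number means anything when the assessors disagree for good, but different, reasons is exactly the kind of question classical decision theory leaves unanswered.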

In my experience, classical decision theory and game theory do not handle these features very well; in some cases, indeed, not at all.  I contend that a new theory of complex decisions is necessary to cope with decision domains having these features.

Hey, Economics! Meet Politics!

Economists are fond of simplistic generalizations, which they refer to as “laws” (in imitation of Physics, itself showing its links to Theology), or as stylized facts.   Most such are, at best, default conclusions, since there are always exceptions.  Here are several generalizations, linked in a chain of inferences:

  • A successful single European currency requires a single European monetary policy.
  • A successful single European monetary policy requires a single European fiscal policy.
  • A successful single European fiscal policy requires fiscal transfers from one part of the European Union to another.
  • Fiscal transfers from one part of the European Union to another can only be undertaken over the long term by European institutions having democratic legitimacy.
  • To achieve democratic legitimacy for European institutions, the nations of Europe will require full political union.

This is not a new argument.  I first heard it put by Zambian economist Chiselebwe Ng’andwe in a paper read to a meeting of the African Association of Political Science in Salisbury (later Harare), Zimbabwe, in May 1981, talking about regional economic unions in Africa.   Dr Ng’andwe was subsequently a Board Member of the Zambian Central Bank and is currently Chairman of the state-owned National Savings and Credit Bank of Zambia. In today’s Guardian, Simon Jenkins refers back to a book about European integration by Larry Siedentop, published in 2000, which apparently makes a similar case about Europe.  Here is Ng’andwe in 1981:

Central banks play a pivotal role in the harmonization of fiscal, monetary and general economic policies.  Hence, separate central banks make it difficult to harmonize even those policy areas where joint arrangements exist such as a common tariff.
The Central bank is such an important institution for economic policy control that a joint central bank [in an economic union of states] needs total political harmony to function.  The necessary political harmony is not possible without political union.  Hence, a joint central bank and its potential benefits are simply not possible in a grouping of political[ly] independent states.  If one state wants some specific monetary policy to deal with an internal problem, a joint central bank will [op]pose some problems [policies?] unless the desired action is completely consistent with the economic and (or) political mood of the other countries.  The loss of some territorial capacity for fiscal and monetary manoeuvre entailed by a joint central bank may involve a greater loss in territorial economic growth than the territorial gain from joint economic actions. This possibility of net economic loss does not augur well for a joint central bank.  But even more important to the territorial political leaders is the loss of control over the key instruments of economic policy.  This loss can create frustrations in the internal economic and political policies of individual countries.
. . .
Another significance of joint policy instruments lie[s] in the capacity of these instruments to reduce imbalances in the distribution of economic benefits.   .  .  .  Even in the U.S.A. where there is practically no government industrial and commercial activities, the availability of common fiscal and monetary policies enable[s] the central government to redistribute income throughout the federal states.
This redistribution may not be enough to remove inequalities completely, but it does remove the rough edges from any regional economic imbalances.  (pp. 13-14)

Why is this argument not, then, widely understood?  Is it that some ideas are too comprehensible – in other words, apparently lacking in complexity or subtlety – to be understood by intelligent people? Or is it that the political forces which benefit from the non-democratic European status quo are so strong as to prevent the adoption of democratic structures, and to muzzle the arguments for them?  As I recall, Ng’andwe’s talk was received very coldly by his audience, most of whom were keen on economic unions (between African countries), while maintaining national sovereignty in all other respects.
POSTSCRIPT (2014-12-07):  Another aspect of the failure of economic union without political union is revealed in George Packer’s profile of Angela Merkel, a bland woman seemingly arisen without trace:  her insistence on austerity policies for southern Eurozone countries in crisis is a play to her own, intensely financially conservative, voters.  Without an over-arching federal political structure no politician in Europe has an electoral incentive to consider the good governance of the global whole, rather than just their own, local or national part.  When historical accounts are eventually drawn up for responsibility for prolongation of the Great Global Recession of 2008-?, the small-minded, economically illiterate Mrs Merkel will be one of those most culpable.
 
References:

Chiselebwe Ng’andwe [1981]:  Problems of Economic Integration in Africa.  Paper presented to the Fourth Bi-Annual Meeting of the African Association of Political Science (AAPS 1981).  Salisbury, Zimbabwe:  23-27 May 1981.
George Packer [2014]:  The quiet German.  The New Yorker, 2014-12-01.
Larry Siedentop [2000]:  Democracy in Europe.  London, UK: Penguin.

Verligte Economics

Nobel laureate economist Paul Krugman has a blogpost summarizing his (and some of Brad DeLong’s) arguments against imposing fiscal austerity in the short-term.   I realize that the verkrampte wing of the economic commentariat seems to be in the majority at present, so unfortunately the wise good sense of Krugman and DeLong seems unlikely to prevail.  But I want to note their arguments for the record so that, 2 or 5 years from now, when we are again (or still) in recession, we can look back and weep.

So, one more time: here’s an attempt to put together some key arguments about why the rush to fiscal austerity is deeply misguided.
Let me start with the budget arithmetic, borrowing an approach from Brad DeLong. Consider the long-run budget implications for the United States of spending $1 trillion on stimulus at a time when the economy is suffering from severe unemployment.
That sounds like a lot of money. But the US Treasury can currently issue long-term inflation-protected securities at an interest rate of 1.75%. So the long-term cost of servicing an extra trillion dollars of borrowing is $17.5 billion, or around 0.13 percent of GDP.
And bear in mind that additional stimulus would lead to at least a somewhat stronger economy, and hence higher revenues. Almost surely, the true budget cost of $1 trillion in stimulus would be less than one-tenth of one percent of GDP – not much cost to pay for generating jobs when they’re badly needed and avoiding disastrous cuts in government services.
But we can’t afford it, say the advocates of austerity. Why? Because we must impose pain to appease the markets.
There are three problems with this claim.
First, it assumes that markets are irrational – that they will be spooked by stimulus spending and/or encouraged by austerity even though the long-run budget implications of such spending and/or austerity are trivial.
Second, we’re talking about punishing the real economy to satisfy demands that markets are not, in fact, making. It’s truly amazing to see so many people urging immediate infliction of pain when the US government remains able to borrow at remarkably low interest rates, simply because Very Serious People believe, in their wisdom, that the markets might change their mind any day now.
Third, all this presumes that if the markets were to lose faith in the US government, they would be reassured by short-term fiscal austerity. The available facts suggest otherwise: markets continue to treat Ireland, which has accepted savage austerity with little resistance, as being somewhat riskier than Spain, which has accepted austerity slowly and reluctantly.
In short: the demand for immediate austerity is based on the assertion that markets will demand such austerity in the future, even though they shouldn’t, and show no sign of making any such demand now; and that if markets do lose faith in us, self-flagellation would restore that faith, even though that hasn’t actually worked anywhere else.