O ignorance! O mores!

In the last few weeks, it was reported that mathematician Edward Nelson of Princeton had claimed to show that Peano Arithmetic, one of many possible axiomatic systems for the numbers, was internally inconsistent.   Within a short period, his claim and proof were subject to examination by other pure mathematicians, not least Terence Tao of UCLA, who thought Nelson’s argument had potential flaws.   Nelson initially defended himself and then, accepting the criticisms, retracted his claim.  More details can be found in a post by John Baez on The n-Category Café blog, which initiated a dialog in which both Tao and Nelson participated, and where Nelson announced his retraction.   A subsequent discussion of what happened in this dialog and the lessons for the philosophy of mathematics can be found on the blog of Catarina Dutilh Novaes, a discussion to which Tao again contributed, this time on his methods.
This example of fast proposal-criticism-retraction contrasts sharply with mainstream Economics, where an error in deductive reasoning may be pointed out, with neither retraction nor revision nor apparent learning from its adherents 70 years on.  Keynes’ criticisms of conventional austerity economics were first uttered in the 1930s, and yet they still have to be repeated.  Recalcitrant ignorance indeed.
One of the key insights of Keynesian economics is that a government is not like a household:  Governments can increase their income by increasing their spending, something most households cannot do.   Another key insight is that the effect of one person doing something may be very different if many people also do it.  To see better at a baseball stadium, for instance, you can stand up, but this only works if the people in front of you stay seated; if everyone stands, you will see no better than if everyone stayed seated.    Likewise, the economy-wide effects of individuals saving may be deleterious even when the effects are beneficial for an individual.   This is the paradox of thrift.  Instead of learning from such insights, we get a British Prime Minister telling us all in 2011 to save hard and reduce our personal debt, and treating the national budget as if he were running a household in Grantham.
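To make the fallacy-of-composition arithmetic concrete, here is a minimal sketch (in Python) of the textbook Keynesian-cross version of the paradox of thrift; the saving rates, autonomous consumption and investment figures are invented purely for illustration and are not drawn from any real economy.

```python
# Toy Keynesian-cross illustration of the paradox of thrift.
# All numbers are invented for illustration only.

def equilibrium(s, autonomous_c=20.0, investment=80.0):
    """Income Y solving Y = C + I, with C = autonomous_c + (1 - s) * Y."""
    y = (autonomous_c + investment) / s
    saving = y - (autonomous_c + (1 - s) * y)   # equals investment in equilibrium
    return y, saving

for s in (0.10, 0.20):
    y, saving = equilibrium(s)
    print(f"saving rate {s:.0%}: income {y:.0f}, aggregate saving {saving:.0f}")
```

Each household that raises its saving rate is individually better prepared for hard times, yet in aggregate income simply falls until total saving again equals the unchanged level of investment: doubling the saving rate in this toy economy halves income and leaves aggregate saving exactly where it was.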

Recalcitrant ignorance in economics

British business economist John Kay has written an essay for the Institute for New Economic Thinking on the failures of mainstream macro-economics.   Among many insightful comments, there is this:

What Lucas means when he asserts that deviations are ‘too small to matter’ is that attempts to construct general models of deviations from the efficient market hypothesis – by specifying mechanical trading rules or by writing equations to identify bubbles in asset prices – have not met with much success.  But this is to miss the point: the expert billiard player plays a nearly perfect game, but it is the imperfections of play between experts that determine the result.  There is a – trivial – sense in which the deviations from efficient markets are too small to matter – and a more important sense in which these deviations are the principal thing that matters.”

Mostly agreeing with Kay, Paul Krugman repeats a point he has made before about the freshwater economists — their failure to understand the deductive implications of their own models:

Here’s what we agree on: if consumers have perfect foresight, live forever, have perfect access to capital markets, etc., then they will take into account the expected future burden of taxes to pay for government spending. If the government introduces a new program that will spend $100 billion a year forever, then taxes must ultimately go up by the present-value equivalent of $100 billion forever. Assume that consumers want to reduce consumption by the same amount every year to offset this tax burden; then consumer spending will fall by $100 billion per year to compensate, wiping out any expansionary effect of the government spending.
But suppose that the increase in government spending is temporary, not permanent — that it will increase spending by $100 billion per year for only 1 or 2 years, not forever. This clearly implies a lower future tax burden than $100 billion a year forever, and therefore implies a fall in consumer spending of less than $100 billion per year. So the spending program IS expansionary in this case, EVEN IF you have full Ricardian equivalence.”

As Krugman says:

The fact that these guys don’t even get the implications of their own models right tells us that the problem runs deeper than believing too much in abstract math. At some level it has to be political: they want to declare government policy ineffectual so badly that for all their vaunted modeling mojo they can’t be bothered to think it through, or listen to other people who point out their error.”
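The arithmetic in the passage Krugman quotes is easy to make concrete. Here is a minimal sketch, assuming (my numbers, purely for illustration) a constant real interest rate of 5% and consumers who smooth the implied tax burden into a constant consumption cut lasting forever:

```python
# Illustrative Ricardian-equivalence arithmetic; the 5% real interest rate
# is an assumption made purely for this example.

def pv_of_spending(annual, years, r):
    """Present value of `annual` spending for `years` years (None = forever)."""
    if years is None:
        return annual / r                       # perpetuity
    return sum(annual / (1 + r) ** t for t in range(1, years + 1))

def annual_offset(annual, years, r):
    """Constant yearly consumption cut (lasting forever) whose present value
    equals the future tax burden of the spending programme."""
    return r * pv_of_spending(annual, years, r)

r = 0.05        # assumed real interest rate
g = 100.0       # spending, in billions of dollars per year

print(annual_offset(g, None, r))   # permanent programme: 100.0
print(annual_offset(g, 2, r))      # two-year programme: about 9.3
```

With a permanent programme the annual consumption offset equals the full $100 billion, wiping out the stimulus; with a two-year programme the offset is only about $9 billion a year, leaving roughly $91 billion of net stimulus in each year the money is actually spent, which is exactly Krugman’s point.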

The macroeconomic dark ages

Paul Krugman, writing about the failings of macro-economists before and after the Great Recession, notes the wide social consequences of the pro-abstraction, anti-history turn in the study of economics this last half-century.   Sadly, this turn has been another instance of the dominance of Descartian autism in western intellectual culture.

Early in 2009, when the Obama stimulus was under discussion, I was stunned to read statements from a number of well-regarded economists asserting not merely that the plan was a bad idea in practice — a defensible idea — but that debt-financed government spending could not, in principle, raise overall spending. Here’s John Cochrane:

Networks of Banks

The first plenary speaker at the 13th International Conference on E-Commerce (ICEC 2011) in Liverpool last week was Robert, Lord May, Professor of Ecology at Oxford University, former Chief UK Government Scientific Advisor, and former President of the Royal Society.  His talk was part of the special session on Robustness and Reliability of Electronic Marketplaces (RREM 2011), and it was insightful, provocative and amusing.
May began life as an applied mathematician and theoretical physicist (in the Sydney University Physics department of Harry Messel), then applied his models to food webs in ecology, and now finds the same types of network and lattice models useful for understanding inter-dependencies in networks of banks.  Although, as he said in his talk, these models are very simplified, to the point of being toy models, they still have the power to demonstrate unexpected outcomes:  For example, that actions which are individually rational may not be desirable from the perspective of a system containing those individuals.  (It is one of the profound differences between Computer Science and Economics, that such an outcome would be unlikely to be surprising to most computer scientists, yet seems to be so to mainstream Economists, imbued with a belief in metaphysical carpal entities.)
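For readers wanting a feel for what such toy models look like, here is a minimal interbank-contagion sketch in the spirit of the models May described; the balance-sheet ratios, the random network and the initial shock are all invented for illustration, and this is emphatically not the Haldane-May model itself.

```python
import random

# Toy interbank contagion model: every bank holds some of its assets as claims
# on randomly chosen counterparties; a bank fails if losses on claims against
# failed counterparties exceed its capital buffer. All parameters are invented.

def simulate_cascade(n_banks=50, p_link=0.2, capital_ratio=0.04,
                     interbank_share=0.2, seed=1):
    """Fail one bank at random and count how many banks end up failed."""
    rng = random.Random(seed)
    counterparties = {i: [j for j in range(n_banks) if j != i and rng.random() < p_link]
                      for i in range(n_banks)}
    assets = 1.0                                 # each bank's balance sheet, normalised
    failed = {rng.randrange(n_banks)}            # initial failure from an external shock
    changed = True
    while changed:
        changed = False
        for i in range(n_banks):
            if i in failed or not counterparties[i]:
                continue
            exposure_per_link = assets * interbank_share / len(counterparties[i])
            loss = exposure_per_link * sum(1 for j in counterparties[i] if j in failed)
            if loss > assets * capital_ratio:    # losses exceed the capital buffer
                failed.add(i)
                changed = True
    return len(failed)

for p in (0.05, 0.2, 0.5):
    print(f"link probability {p}: {simulate_cascade(p_link=p)} of 50 banks fail")
```

Even something this crude lets one explore the tension May pointed to: spreading exposures across more counterparties is individually prudent, yet it also multiplies the channels along which one failure can reach everyone else.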
From the final section of Haldane and May (2011):

The analytic model outlined earlier demonstrates that the topology of the financial sector’s balance sheet has fundamental implications for the state and dynamics of systemic risk. From a public policy perspective, two topological features are key.
First, diversity across the financial system. In the run-up to the crisis, and in the pursuit of diversification, banks’ balance sheets and risk management systems became increasingly homogenous. For example, banks became increasingly reliant on wholesale funding on the liabilities side of the balance sheet; in structured credit on the assets side of their balance sheet; and managed the resulting risks using the same value-at-risk models. This desire for diversification was individually rational from a risk perspective. But it came at the expense of lower diversity across the system as a whole, thereby increasing systemic risk. Homogeneity bred fragility (N. Beale and colleagues, manuscript in preparation).
In regulating the financial system, little effort has as yet been put into assessing the system-wide characteristics of the network, such as the diversity of its aggregate balance sheet and risk management models. Even less effort has been put into providing regulatory incentives to promote diversity of balance sheet structures, business models and risk management systems. In rebuilding and maintaining the financial system, this systemic diversity objective should probably be given much greater prominence by the regulatory community.
Second, modularity within the financial system. The structure of many non-financial networks is explicitly and intentionally modular.  This includes the design of personal computers and the world wide web and the management of forests and utility grids. Modular configurations prevent contagion infecting the whole network in the event of nodal failure. By limiting the potential for cascades, modularity protects the systemic resilience of both natural and constructed networks.
The same principles apply in banking. That is why there is an ongoing debate on the merits of splitting banks, either to limit their size (to curtail the strength of cascades following failure) or to limit their activities (to curtail the potential for cross-contamination within firms). The recently proposed Volcker rule in the United States, quarantining risky hedge fund, private equity and proprietary trading activity from other areas of banking business, is one example of modularity in practice. In the United Kingdom, the new government have recently set up a Royal Commission to investigate the case for encouraging modularity and diversity in banking ecosystems, as a means of buttressing systemic resilience.
It took a generation for ecological models to adapt. The same is likely to be true of banking and finance.”

It would be interesting to consider network models which are more realistic than these toy versions, for instance, with nodes representing banks with goals, preferences and beliefs.
 
References:
F. Caccioli, M. Marsili and P. Vivo [2009]: Eroding market stability by proliferation of financial instruments. The European Physical Journal B, 71: 467–479.
Andrew Haldane and Robert May [2011]: Systemic risk in banking ecosystems. Nature, 469:  351-355.
Robert May, Simon Levin and George Sugihara [2008]: Complex systems: ecology for bankers. Nature, 451, 893–895.
Also, the UK Government’s 2011 Foresight Programme on the Future of Computer Trading in Financial Markets has published its background and working papers, here.
 

Markets as feedback mechanisms

I just posted after hearing a talk by economic journalist Tim Harford at LSE.  At the end of that post, I linked to a critical review of Harford’s latest book,  Adapt – Why Success Always Starts with Failure, by Whimsley.  This review quotes Harford talking about markets as feedback mechanisms:

To identify successful strategies, Harford argues that “we should not try to design a better world. We should make better feedback loops” (140) so that failures can be identified and successes capitalized on. Harford just asserts that “a market provides a short, strong feedback loop” (141), because “If one cafe is offering a better combination of service, range of food, prices, decor, coffee blend, and so on, then more customers will congregate there than at the cafe next door“, but everyday small-scale examples like this have little to do with markets for credit default swaps or with any other large-scale operation.

Yes, indeed.  The lead-time between undertaking initial business planning in order to raise early capital investments and the launching of services to the  public for  global satellite communications networks is in the order of 10 years (since satellites, satellite networks and user devices need to be designed, manufactured, approved by regulators, deployed, and connected before they can provide service).  The time between initial business planning and the final decommissioning of an international gas or oil pipeline is about 50 years.  The time between initial business planning and the final decommissioning of an international undersea telecommunications cable may be as long as 100 years.   As I remarked once previously, the design of Transmission Control Protocol (TCP) packets, the primary engine of communication in the 21st century Internet, is closely modeled on the design of telegrams first sent in the middle of the 19th century.  Some markets, if they work at all, only work over the long run, but as Keynes famously said, in the long run we are all dead.
I have experience of trying to design telecoms services for satellite networks (among others), knowing that any accurate feedback for design decisions may come late or not at all, and when it comes may be vague and ambiguous, or even misleading.   Moreover, the success or failure of the selected marketing strategy may not ever be clear, since its success may depend on the quality of execution of the strategy, so that it may be impossible to determine what precisely led to the outcome.   I have talked about this issue before, both regarding military strategies and regarding complex decisions in general.  If the quality of execution also influences success (as it does), then just who or what is the market giving feedback to?
In other words, these coffees are not always short and strong (in Harford’s words), but may be cold, weak, very very slow in arriving, and even their very nature contested.   I’ve not yet read Harford’s book, but if he thinks all business is as simple as providing fmc (fast-moving consumer) services, his book is not worth reading.
Once again, an economist argues by anecdote and example.  And once again, I wonder at the world:  That economists have a reputation for talking about reality, when most of them evidently know so little about it, or reduce its messy complexities to homilies based on the operation of suburban coffee shops.

Tim Harford at LSE: Dirigisme in action

This week I heard economic journalist Tim Harford talk at the London School of Economics (LSE), on a whirlwind tour (7 talks, I think he told us, this week) to promote his new book.   Each talk is on one topic covered in the book, and at LSE he talked about the GFC and his suggestions for preventing its recurrence.

Harford’s talk itself was chatty, anecdotal, and witty.    Economics is still in deep thrall to its 19th century fascination with physical machines, and this talk was no exception.   The anecdotes mostly concerned Great Engineering Disasters of our time, with Harford emphasizing the risks that arise from tight coupling of components in systems and, ironically, from frequent misguided attempts to improve their safety which only worsen it.

Anecdotal descriptions of failed engineering artefacts may have relevance to preventing a repeat of the GFC, but Harford did not make any case that they do.  He just gave examples from engineering and from financial markets, and asserted that these were examples of the same conceptual phenomena.    However, as metaphors for economies, machines and mechanical systems are worse than useless, since they emphasize in people’s minds, especially in the minds of regulators and participants, mechanical and stand-alone aspects of systems which are completely inappropriate here.

Economies and marketplaces are NOT like machines, with inanimate parts whose relationships are static and that move when levers are pulled, or effects which can be known or predicted when causes are instantiated, or components designed centrally to achieve some global objectives.  Autonomous, intelligent components having dynamic relationships describes few machines or mechanical systems, and certainly none from the 19th century.   

A better category of failure metaphors would be ecological and biological.   We introduce cane toads to North Queensland to prey upon a sugar cane pest, and the cane toads, having no predators themselves, take over the country.    Unintended and unforeseen consequences of actions arise not merely because the system is complex or its parts tightly coupled, but because the system comprises multiple autonomous and goal-directed actors with different beliefs, histories and motivations, whose relationships with one another change as a result of their interactions.

Where, I wanted to shout to Harford, were the ecological metaphors?  Why, I wanted to ask, does this 19th-century fascination with deterministic, centralized machines and mechanisms persist in economics, despite its obvious irrelevance and failings? Who, if not rich FT journalists with time to write books, I wanted to know, will think differently about these problems?

Finally, only economists strongly in favour of allowing market forces to operate unfettered would have used the dirigiste methods that the LSE did to allocate people to seats for this lecture.  We were forced to sit in rows in our order of arrival in the auditorium. Why was this?  When I asked an usher for the reason, the answer I was given made no sense:   Because we expect a full hall.    Why were the organizers so afraid of allowing people to exercise their own preferences as to where to sit?  We don’t all have the same hearing and sight capabilities, we don’t all have the same preferences as to side of the hall, or side of the aisle, etc. We don’t all arrive in parties of the same size.  We don’t all want to sit behind a tall person or near a noisy group.

The hall was not full, as it happened, so we were crammed into place in part of the hall like passive objects in a consumer choice model of voting, instead of as free, active citizens in a democracy occupying whatever position we most preferred of those still available.  But even if the hall had been full, there are less-centralized and less-unfriendly methods of matching people to seats.  The 20 or so LSE student ushers on hand, for instance, could have been scattered about the hall to direct latecomers to empty seats, rather than lining the aisles like red-shirted troops to prevent people sitting where they wanted to.

What hope is there that our economic problems will be solved when the London School of Economics, of all places, uses central planning to sit people in public lectures?

Update: There is an interesting critical review of Harford’s latest book, here.

What use are models?

What are models for?   Most developers and users of models, in my experience, seem to assume the answer to this question is obvious and thus never raise it.   In fact, modeling has many potential purposes, and some of these conflict with one another.   Some of the criticisms made of particular models arise from mis-understandings or mis-perceptions of the purposes of those models, and the modeling activities which led to them.
Liking cladistics as I do, I thought it useful to list all the potential purposes of models and modeling.   The only discussion of this topic that I know of is a brief one by game theorist Ariel Rubinstein in an appendix to a book on modeling rational behaviour (Rubinstein 1998).  Rubinstein considers several alternative purposes for economic modeling, but ignores many others.   My list is as follows (to be expanded and annotated in due course):

  • 1. To better understand some real phenomena or existing system.   This is perhaps the most commonly perceived purpose of modeling, in the sciences and the social sciences.
  • 2. To predict (some properties of) some real phenomena or existing system.  A model aiming to predict some domain may be successful without aiding our understanding of the domain at all.  Isaac Newton’s model of the motion of planets, for example, was predictive but not explanatory.   I understand that physicist David Deutsch argues that predictive ability is not an end of scientific modeling but a means, since it is how we assess and compare alternative models of the same phenomena.    This is wrong on both counts:  prediction IS an end of much modeling activity (especially in business strategy and public policy domains), and it is not the only means we use to assess models.  Indeed, for many modeling activities, calibration and prediction are problematic, and so predictive capability may not even be possible as a form of model assessment.
  • 3. To manage or control (some properties of) some real phenomena or existing system.
  • 4. To better understand a model of some real phenomena or existing system.  Arguably, most of economic theorizing and modeling falls into this category, and Rubinstein’s preferred purpose is this type.   Macro-economic models, if they are calibrated at all, are calibrated against artificial, human-defined, variables such as employment, GDP and inflation, variables which may themselves bear a tenuous and dynamic relationship to any underlying economic reality.   Micro-economic models, if they are calibrated at all, are often calibrated with stylized facts, abstractions and simplifications of reality which economists have come to regard as representative of the domain in question.    In other words, economic models are not usually calibrated against reality directly, but against other models of reality.  Similarly, large parts of contemporary mathematical physics (such as string theory and brane theory) have no access to any physical phenomena other than via the mathematical model itself:  our only means of apprehension of vibrating strings in inaccessible dimensions beyond the four we live in, for instance, is through the mathematics of string theory.    In this light, it seems nonsense to talk about the effectiveness, reasonable or otherwise, of mathematics in modeling reality, since how could we tell?
  • 5. To predict (some properties of) a model of some real phenomena or existing system.
  • 6. To better understand, predict or manage some intended (not-yet-existing) artificial system, so to guide its design and development.   Understanding a system that does  not yet exist is qualitatively different to understanding an existing domain or system, because the possibility of calibration is often absent and because the model may act to define the limits and possibilities of subsequent design actions on the artificial system.  The use of speech act theory (a model of natural human language) for the design of artificial machine-to-machine languages, or the use of economic game theory (a mathematical model of a stylized conceptual model of particular micro-economic realities) for the design of online auction sites are examples here.   The modeling activity can even be performative, helping to create the reality it may purport to describe, as in the case of the Black-Scholes model of options pricing.
  • 7. To provide a locus for discussion between relevant stakeholders in some business or public policy domain.  Most large-scale business planning models have this purpose within companies, particularly when multiple partners are involved.  Likewise, models of major public policy issues, such as epidemics, have this function.  In many complex domains, such as those in public health, models provide a means to tame and domesticate the complexity of the domain.  This helps stakeholders to jointly consider concepts, data, dynamics, policy options, and assessment of potential consequences of policy options,  all of which may need to be socially constructed. 
  • 8. To provide a means for identification, articulation and potentially resolution of trade-offs and their consequences in some business or public policy domain.   This is the case, for example, with models of public health risk assessment of chemicals or new products by environmental protection agencies, and models of epidemics deployed by government health authorities.
  • 9. To enable rigorous and justified thinking about the assumptions and their relationships to one another in modeling some domain.   Business planning models usually serve this purpose.   They may be used to inform actions, both to eliminate or mitigate negative consequences and to enhance positive consequences, as in retroflexive decision making.
  • 10. To enable a means of assessment of managerial competencies of the people undertaking the modeling activity. Investors in start-ups know that the business plans of the company founders are likely to be out of date very quickly.  The function of such business plans is not to model reality accurately, but to force rigorous thinking about the domain, and to provide a means by which potential investors can challenge the assumptions and thinking of management as a way of probing the managerial competence of those managers.    Business planning can thus be seen to be a form of epideictic argument, where arguments are assessed on their form rather than their content, as I have argued here.
  • 11. As a means of play, to enable the exercise of human intelligence, ingenuity and creativity, in developing and exploring the properties of models themselves.  This purpose is true of that human activity known as doing pure mathematics, and perhaps of most of that academic activity known as doing mathematical economics.   As I have argued before, mathematical economics is closer to theology than to the modeling undertaken in the natural sciences. I see nothing wrong with this being a purpose of modeling, although it would be nice if academic economists were honest enough to admit that their use of public funds was primarily in pursuit of private pleasures, and any wider social benefits from their modeling activities were incidental.

POSTSCRIPT (Added 2011-06-17):  I have just seen Joshua Epstein’s 2008 discussion of the purposes of modeling in science and social science.   Epstein lists 17 reasons to build explicit models (in his words, although I have added the label “0” to his first reason):

0. Prediction
1. Explain (very different from predict)
2. Guide data collection
3. Illuminate core dynamics
4. Suggest dynamical analogies
5. Discover new questions
6. Promote a scientific habit of mind
7. Bound (bracket) outcomes to plausible ranges
8. Illuminate core uncertainties
9. Offer crisis options in near-real time. [Presumably, Epstein means “crisis-response options” here.]
10. Demonstrate tradeoffs / suggest efficiencies
11. Challenge the robustness of prevailing theory through perturbations
12. Expose prevailing wisdom as incompatible with available data
13. Train practitioners
14. Discipline the policy dialog
15. Educate the general public
16. Reveal the apparently simple (complex) to be complex (simple).

These are at a lower level than my list, and I believe some of his items are the consequences of purposes rather than purposes themselves, at least for honest modelers (eg, #11, #12, #16).
References:
Joshua M Epstein [2008]: Why model? Keynote address to the Second World Congress on Social Simulation, George Mason University, USA.  Available here (PDF).
Robert E Marks [2007]:  Validating simulation models: a general framework and four applied examples. Computational Economics, 30 (3): 265-290.
David F Midgley, Robert E Marks and D Kunchamwar [2007]:  The building and assurance of agent-based models: an example and challenge to the field. Journal of Business Research, 60 (8): 884-893.
Robert Rosen [1985]: Anticipatory Systems. Pergamon Press.
Ariel Rubinstein [1998]: Modeling Bounded Rationality. Cambridge, MA, USA: MIT Press.  Zeuthen Lecture Book Series.
Ariel Rubinstein [2006]: Dilemmas of an economic theorist. Econometrica, 74 (4): 865-883.

Concat: The crisis in macroeconomic policy execution

During the Great Depression, as the Bank of England and British banks were attempting to renegotiate the terms of their loans from the USA, the British sent Sir Otto Niemeyer to Australia to prevent Australia doing the same for its loans from Britain.    The injustice and unabashed hypocrisy of this – where you stood on the issue of debt repayment clearly depending on where you sat – always angered me.    Had I been around in 1932, I would have supported New South Wales Premier Jack Lang’s refusal to hand over moneys from the NSW State Government owed to the Australian Commonwealth Government for its payment of interest on NSW foreign debts.
We seem to be in for more hypocrisy and hard times, as the share-owning class, having received bailouts from western taxpayers for their investments in failed and paralyzed banks, now raise a wacka wacka huna kuna against public sector debt.    The plain people of Ireland, for example, will now be paying for the malfeasance and incompetence of their richer compatriots.
Two illuminating posts from Brad DeLong and Paul Krugman on our failed western political system, which seems unable to fix our failed economy, despite us knowing what should be done:

And here is Barry Eichengreen on the Irish bailout:

Some older articles on the crisis:

 
 
 

Coupling preferences and decision-processes

I have expressed my strong and long-held criticisms of classical decision theory – that based on maximum expected utility (MEU) –  before and again before that.  I want to expand here on one of my criticisms.
One feature of MEU theory is that the preferences of a decision-maker are decoupled from the decision-making process itself.  The MEU process works independently of the preferences of the decision-maker, which are assumed to be independent inputs to the decision-making process.    This may be fine for some decisions, and for some decision-makers, but there are many, many real-world decisions where this decoupling is infeasible or undesirable, or both.
For example, I have talked before about network goods, goods for which the utility received by one consumer depends on the utility received by other consumers.   A fax machine, in the paradigm example, provides no benefits at all to someone whose network of contacts or colleagues includes no one else with a fax machine.   A rational consumer (rational in the narrow sense of MEU theory, as well as rational in the prior sense of being reason-based) would wait to see whether other consumers in her network decide to purchase such a good (or are likely to decide to purchase it) before deciding to do so herself.   In this case, her preferences are endogenous to the decision-process, and it makes no sense to model preferences as logically or chronologically prior to the process.   Like most people in marketing, I have yet to encounter a good or service which is not a network good:  even so-called commodities, like coal, are subject to fashion, to peer-group pressures, and to imitative purchase behaviors.  (In so far as something looks like a commodity in the real world, some marketing manager is not doing his or her job.)
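A toy expected-utility calculation of the fax-machine example makes the point; the price, per-contact value and adoption probabilities below are invented purely for illustration.

```python
# Toy calculation of the expected net benefit of buying a fax machine,
# when each contact independently owns one with probability p_adopt.
# All numbers are invented for illustration only.

def expected_utility_of_buying(n_contacts, p_adopt, value_per_contact, price):
    return n_contacts * p_adopt * value_per_contact - price

price, value_per_contact, n_contacts = 200.0, 10.0, 50
for p_adopt in (0.1, 0.3, 0.5):
    eu = expected_utility_of_buying(n_contacts, p_adopt, value_per_contact, price)
    print(f"adoption probability {p_adopt:.0%}: expected utility {eu:+.0f}")
```

The purchase is only worthwhile above some threshold level of adoption among her contacts, so the consumer’s preference for the good cannot be stated independently of her beliefs about what other consumers will decide.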
A second class of decisions also requires us to consider preferences and decision-processes as strongly coupled.  These are situations where there are multiple decision-makers or stakeholders.     A truly self-interested agent (such as those assumed by mainstream micro-economics) cares not a jot for the interests of other stakeholders, but for those of us out here in the real world, this is almost never the case.  In any multiple-stakeholder decision – ie, any decision where the consequences accrue to more than one party – a non-selfish decision-maker would first seek to learn of the consequences of the different decision-options to other stakeholders as well as to herself, and of the preferences of those other stakeholders over these consequences.  Thus, any sensible decision-making process needs to allow for the elicitation and sharing of consequences and preferences between stakeholders.  In any reasonably complex decision – such as deciding whether to restrict use of some chemical on public health grounds, or deciding on a new distribution strategy for a commercial product – these consequences will be dispersed and non-uniform in their effects.   This is why democratic government regulatory agencies, such as environmental agencies, conduct public hearings, enquiries and consultation exercises prior to making determinations.  And this is why even the most self-interested of corporate decision-makers invariably consider the views of shareholders, of regulators, of funders, of customers, of supply chain partners (both upstream and downstream), and of those internal staff who will be carrying out the decision, when they want the selected decision-option to be executed successfully.    No CEO is an island.
The fact that the consequences of major regulatory and corporate decisions are usually non-uniform in their impacts on stakeholders  – each decision-option advantaging some people or groups, while disadvantaging others – makes the application of any standard, context-independent decision-rule nonsensical.   Applying standard statistical tests as decision rules falls into this nonsensical category, something statisticians have known all along, but others seem not to. (See the references below for more on this.)
Any rational, feasible decision-process intended for the sorts of decisions we citizens, consumers and businesses face every day needs to allow preferences to emerge as part of the decision-making process, with preferences and the decision-process strongly coupled together.  Once again, as on so many other aspects, MEU theory fails.   Remind me again why it stays in Economics text books and MBA curricula.
References:
L. Atkins and D. Jarrett [1979]:  The significance of “significance tests”.  In:  J. Irvine, I. Miles and J. Evans (Editors): Demystifying Social Statistics. London, UK: Pluto Press.
D. J. Fiorino [1989]:  Environmental risk and democratic process:  a critical review.  Columbia Journal of Environmental Law,  14: 501-547.  (This paper presents reasons why deliberative democratic processes are necessary in environmental regulation.)
T. Page [1978]:  A generic view of toxic chemicals and similar risks.  Ecology Law Quarterly.  7 (2): 207-244.