The epistemology of intelligence

I have in the past discussed some of the epistemological challenges facing an intelligence agency – here and here. I now see that I am not the only person to think about these matters: academic philosophers have started to write articles for learned journals on the topic, e.g., Herbert (2006) and Dreisbach (2011).
In essence, Herbert makes a standard argument from the philosophy of knowledge: that knowledge (by someone of some proposition p) comprises three necessary elements: belief by that someone in p, p being true, and a justification by that someone for his or her belief in p. The first, very obvious criticism of this approach, particularly in intelligence work, is that answering the question, Is p true?, is surely the objective of any analysis, not its starting point. A person (or an organization) may hold numerous beliefs without being able to say whether the propositions in question are true. Any justification is an attempt to generate a judgement about whether the propositions should be believed, so saying that one can only know something when it is also true points everything in exactly the wrong direction, putting the cart before the horse. It defines knowledge as something almost impossible to verify, and is akin to the conflict between constructivist and non-constructivist mathematicians. How else can we know something is true except by some adequate process of justification? Our only knowledge surely comprises justified belief, rather than justified true belief. I think the essential problem here is that all knowledge, except perhaps some conclusions drawn by deduction, is uncertain, and this standard philosophical approach simply ignores uncertainty.
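To make the contrast explicit, here is a rough formalization in epistemic-logic notation (my own notation and my own graded variant, not anything appearing in Herbert's paper): the standard justified-true-belief account, the weaker justified-belief account argued for above, and a graded version that replaces outright belief with a credence above some threshold θ:

    \text{(JTB)}\qquad K_a(p) \;\iff\; B_a(p) \,\wedge\, p \,\wedge\, J_a(p)
    \text{(JB)}\qquad K_a(p) \;\iff\; B_a(p) \,\wedge\, J_a(p)
    \text{(graded JB)}\qquad K_a(p) \;\iff\; \Pr\nolimits_a(p) \ge \theta \;\wedge\; J_a(p)

The first line is the account both papers discuss; the second drops the truth condition for the reasons just given; the third is one crude way of letting uncertainty back in.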
Dreisbach presents other criticisms (also long-standing) of the justified-true-belief model of knowledge, but both authors ignore a more fundamental problem with this approach: much of intelligence activity aims to identify the intentions of other actors, be they states (such as the USSR or Iraq) or groups and individuals (such as potential terrorists). Intentions, as any marketing researcher can tell you, are very slippery things. Even a person who has an intention, or is believed by others to have one, may not realize they have it, may not understand themselves well enough to realize they have it, or may not be able to express it to others even when they do realize they have it. Moreover, intentions about the non-immediate future are particularly slippery: you can ask potential purchasers of some new gizmo all you want before the gizmo is for sale, and still learn nothing accurate about how those very same purchasers will actually react when they are finally able to purchase it. In short, there is no fact of the matter with intentions, and thus it makes no sense to represent them as propositions. Accordingly, we cannot evaluate whether or not p is true, and the justified-true-belief model collapses. It would be better to ask, as good marketing researchers do: Does the person in question have a strong tendency to act in a certain way in future, and if so, what factors will likely encourage, inhibit, or preclude them from acting that way?
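As a toy illustration of that reframing (my own sketch, with invented names and numbers, not anything proposed by either author), an intention can be modelled not as a true-or-false proposition but as a tendency to act, modulated by situational factors:

    from dataclasses import dataclass, field

    @dataclass
    class Intention:
        """An intention as a disposition, not a proposition."""
        action: str                  # e.g. "buy the new gizmo at launch"
        base_tendency: float         # 0.0-1.0, strength of the disposition
        factors: dict[str, float] = field(default_factory=dict)  # name -> multiplier

        def effective_tendency(self) -> float:
            """Tendency after applying situational factors, clipped to [0, 1]."""
            t = self.base_tendency
            for multiplier in self.factors.values():
                t *= multiplier
            return max(0.0, min(1.0, t))

    purchase = Intention(
        action="buy the new gizmo at launch",
        base_tendency=0.7,               # what the survey answer suggests
        factors={"price_shock": 0.5,     # launch price higher than expected
                 "peer_adoption": 1.3},  # friends already own one
    )
    print(f"{purchase.action}: {purchase.effective_tendency():.2f}")

There is nothing here to call true or false; there is only a disposition, and a list of factors that strengthen or weaken it.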
However, a larger problem looms with both these papers, since both are written as if the respective author believes the primary purpose of intelligence analysis is to garner knowledge in a vacuum. Knowledge is an intermediate objective of intelligence activity, but it is surely subordinate to the wider diplomatic, military, or political objectives of the government or society the intelligence activity is part of. The CIA was not collecting information about the USSR, for example, out of a disinterested, ivory-tower-ish concern with the progress of socialism in one country, but because the USA and the USSR were engaged in a global conflict. Accordingly, there are no neutral actions – every action, every policy, every statement, even every belief of each side may have consequences for the larger strategic interaction in which the two sides are engaged. A rational and effective intelligence agency should not just be asking:
Is p true?
but also:

  • What are the consequences of us believing p to be true?
  • What are the consequences of us believing p not to be true?
  • What are the consequences of the other side believing that we believe p to be true?
  • What are the consequences of the other side believing that we do not believe p to be true?
  • What are the consequences of the other side believing that we are conflicted internally about the truth of p?
  • What are the consequences of the other side initially believing that we believe p to be true and then coming to believe that we do not believe p?
  • What are the consequences of the other side initially believing that we do not believe p to be true and then coming to believe that we do in fact believe p?
  • What are the consequences of the other side being conflicted about whether or not they should believe p?
  • What are the consequences of the other side being conflicted about whether or not we believe p?

and so on. I give an example of the possible strategic interplay between a protagonist’s beliefs and his or her antagonist’s intentions here.
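To see how an analysis of such questions might be organized, here is a deliberately toy sketch (all stances, payoffs, and probabilities below are invented for illustration; none of this is drawn from either paper or from any agency's practice) in which the stance adopted toward p is treated as a decision whose expected value depends on how the other side is likely to perceive that stance:

    # A toy model: choosing a belief stance toward proposition p as a
    # strategic decision.  All names and numbers here are invented.
    STANCES = ("believe_p", "disbelieve_p", "conflicted")

    # payoff[(our_stance, their_perception_of_our_stance)] -> utility to us
    payoff = {
        ("believe_p", "believe_p"): 2,       # they read us correctly
        ("believe_p", "disbelieve_p"): 5,    # they misread us: deception value
        ("believe_p", "conflicted"): 3,
        ("disbelieve_p", "believe_p"): 4,
        ("disbelieve_p", "disbelieve_p"): 1,
        ("disbelieve_p", "conflicted"): 2,
        ("conflicted", "believe_p"): 0,
        ("conflicted", "disbelieve_p"): 0,
        ("conflicted", "conflicted"): 1,
    }

    # perception_prob[our_stance][their_perception]: our estimate of how
    # the other side will read each stance we might adopt.
    perception_prob = {
        "believe_p":    {"believe_p": 0.6, "disbelieve_p": 0.2, "conflicted": 0.2},
        "disbelieve_p": {"believe_p": 0.3, "disbelieve_p": 0.5, "conflicted": 0.2},
        "conflicted":   {"believe_p": 0.3, "disbelieve_p": 0.3, "conflicted": 0.4},
    }

    def expected_value(stance: str) -> float:
        """Expected utility of a stance, averaged over likely perceptions."""
        return sum(prob * payoff[(stance, perceived)]
                   for perceived, prob in perception_prob[stance].items())

    for stance in STANCES:
        print(f"{stance}: {expected_value(stance):.2f}")
    print("strategically preferred stance:", max(STANCES, key=expected_value))

The point of the toy is only that the quantity being optimized is not the answer to Is p true? but the consequences of each possible stance toward p.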
A decision to believe or not believe p may then become a strategic one, taken after analysis of these various consequences and their implications. An effective intelligence agency, of course, will need to keep separate accounts of what it really believes and of what it wants others to believe it believes. This can result in all sorts of organizational schizophrenia, hidden agendas, and paranoia (Holzman 2008), with consequent challenges for those writing histories of espionage. Call these mind-games if you wish, but such analyses helped the British manipulate and eventually control Nazi Germany’s espionage efforts in British and other Allied territory during World War II (through the famous Double-Cross, or XX, System).
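For concreteness, the "separate accounts" idea might be caricatured like this (an invented sketch with made-up numbers; no claim that any agency structures its records this way):

    from dataclasses import dataclass

    @dataclass
    class BeliefLedger:
        """Two sets of books about a proposition: what we actually assess,
        and what we want the other side to think we assess."""
        internal: float   # our private credence in p, 0.0-1.0
        projected: float  # the credence in p we try to display publicly

    p_ledger = BeliefLedger(internal=0.85, projected=0.10)

    # The gap between the two books is the size of the deception being run,
    # and a measure of the organizational schizophrenia it invites.
    print(f"deception gap for p: {p_ledger.internal - p_ledger.projected:.2f}")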
Likewise, many later intelligence efforts by all the major participants in the Cold War were attempts – some successful, some not – to manipulate the beliefs of opponents. The Nosenko case (Bagley 2007) is perhaps the most famous of these, but there were many. In the context of the XX System, it is worth mentioning that the USA landed scores of teams of spies and saboteurs in the Democratic Republic of Vietnam (North Vietnam) during the Second Indochina War, only for every single team to be either captured and executed, or captured and turned; only the use of secret duress codes by some landed agents communicating back enabled the USA to infer that these agents were being played by their DRV captors.
Intelligence activities are about the larger strategic interaction between the relevant stakeholders as much as (or more than) they are about the truth of propositions. Neither Herbert nor Dreisbach seems to grasp this, which makes their analyses disappointingly impoverished.
References:
Tennent H. Bagley [2007]: Spy Wars. New Haven, CT, USA: Yale University Press.
Christopher Dreisbach [2011]: The challenges facing an IC epistemologist-in-residence. International Journal of Intelligence and CounterIntelligence, 24: 757-792.
Matthew Herbert [2006]: The intelligence analyst as epistemologist. International Journal of Intelligence and CounterIntelligence, 19: 666-684.
Michael Holzman [2008]: James Jesus Angleton, the CIA, and the Craft of Counterintelligence. Amherst, MA, USA: University of Massachusetts Press.
