Hearing is (not necessarily) believing

Someone (let’s call her Alice) tells you that something is true, say the proposition P.  What can you validly infer from Alice’s utterance?  Not that P is true, since Alice may be mistaken.  You cannot even infer that Alice believes that P is true, since she may be aiming to mislead you.
Can you then infer that Alice wants you to believe that P is true?  Well, not always.  The two of you may have the sort of history of interactions which leads you mostly to distrust what she says, and she may know this about you, so she may be counting on you to believe that P is not true precisely because she told you that it is true.  But you, in turn, may know this about Alice (that she is counting on you not to believe her regarding the truth of P), and she may know that you know, so she may actually be expecting you not to disbelieve her on P, but instead to form no opinion on P, or even to believe that P is true.
So let us try to summarize what you could infer from Alice telling you that P is true:

  • That P is true.
  • That Alice believes that P is true.
  • That Alice desires you to believe that P is true.
  • That Alice desires that you believe that Alice desires you to believe that P is true.
  • That Alice desires you to not believe that P is true.
  • That Alice desires that you believe that Alice desires you to not believe that P is true.
  • That Alice desires you to believe that P is not true.
  • That Alice desires that you believe that Alice desires you to believe that P is not true.
  • And so on, ad infinitum.
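
To make this regress concrete, here is a minimal sketch, in Python, of one way such nested belief and desire statements might be represented as data; the type names and examples are merely illustrative.

    # A minimal sketch of nested modal statements: atomic propositions wrapped
    # in "believes", "desires" and "not" operators, nested to any depth.
    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Prop:
        name: str                    # an atomic proposition, eg P

    @dataclass(frozen=True)
    class Not:
        inner: "Statement"           # "it is not the case that <inner>"

    @dataclass(frozen=True)
    class Believes:
        agent: str                   # eg "Alice" or "you"
        inner: "Statement"           # "<agent> believes that <inner>"

    @dataclass(frozen=True)
    class Desires:
        agent: str
        inner: "Statement"           # "<agent> desires that <inner>"

    Statement = Union[Prop, Not, Believes, Desires]

    P = Prop("P")

    # Some of the candidate inferences listed above:
    candidates = [
        P,                                            # P is true
        Believes("Alice", P),                         # Alice believes that P
        Desires("Alice", Believes("you", P)),         # Alice wants you to believe P
        Desires("Alice", Believes("you",
            Desires("Alice", Believes("you", P)))),   # ... and so on, ad infinitum
        Desires("Alice", Not(Believes("you", P))),    # Alice wants you not to believe P
        Desires("Alice", Believes("you", Not(P))),    # Alice wants you to believe not-P
    ]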

Apart from life, the universe and everything, you may be wondering where such ideas would find application.   Well, one place is in Intelligence.   Tennent H. Bagley, in his very thorough book on the Nosenko affair, for example, discusses the ructions in CIA caused by doubts about the veracity of the supposed KGB defector, Yuri Nosenko.    Was he a real defector?  Or was he sent by KGB as a fake defector, in order to lead CIA astray with false or misleading information?  If he was a fake defector, should CIA admit this publicly, or should they try to convince KGB that they believe Nosenko and his stories?  Does KGB actually want CIA to conclude that Nosenko is a fake defector, for instance so that CIA will then believe information from an earlier defector which it might otherwise doubt?  In which case, should CIA pretend to be taken in by Nosenko (to make KGB think their plot was successful), or let KGB know that they were not taken in (in order to make KGB believe that CIA does not believe that earlier information)?  And so on, ad infinitum.
I have seen similar (although far less dramatic) ructions in companies when they learn of some exciting or important piece of competitor intelligence.   Quite often, the recipient company simply assumes the information is true and launches into vast efforts to execute new plans.  Before doing this, companies should explicitly ask, “Is this information true?”, and also pay great attention to the separate question, “Who would benefit if we (the recipients) were to believe it?”
Another application of these ideas is in the design of computer communications systems.   Machines send messages to each other all the time (for example, via the various Internet protocols, whenever a web page is browsed or an email is sent), and most of these messages are believed completely by the recipient machine.   To the extent that this is so, the recipient machines can hardly be called intelligent.   Designing intelligent communications between machines requires machines that are able and willing, when appropriate, to query and challenge the information they receive, and then to reach an informed conclusion about which of that information to believe.
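As a rough illustration (the trust scores, thresholds and names below are entirely hypothetical), a recipient machine of this kind might weigh its trust in the sender of a message and decide whether to accept, reject or challenge a claim before believing it:

    # An illustrative sketch of a recipient that does not automatically believe
    # what it is told, but weighs the source and may challenge the claim.
    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        sender: str
        proposition: str

    @dataclass
    class SkepticalAgent:
        trust: dict = field(default_factory=dict)    # sender -> trust score in [0, 1]
        beliefs: set = field(default_factory=set)    # propositions currently believed
        pending: list = field(default_factory=list)  # claims awaiting justification

        def receive(self, claim: Claim) -> str:
            t = self.trust.get(claim.sender, 0.5)    # unknown senders get a neutral score
            if t >= 0.8:
                self.beliefs.add(claim.proposition)  # believe highly trusted senders
                return "accepted"
            if t <= 0.2:
                return "rejected"                    # discount known unreliable senders
            self.pending.append(claim)               # otherwise ask for justification
            return "challenged"

    agent = SkepticalAgent(trust={"Alice": 0.5, "Bob": 0.9})
    print(agent.receive(Claim("Bob", "P")))    # accepted
    print(agent.receive(Claim("Alice", "P")))  # challenged

The interesting part, of course, is what happens to the challenged claims, and that is precisely where an agreed semantics for querying and justifying information would be needed.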
Many computer scientists believe that a key component of such intelligent communications is an agreed semantics for the communication interactions between machines, so that the symbols exchanged are understood by all the machines in the same way.   The most thoroughly developed machine semantics to date is the Semantic Language SL of the Agent Communication Language ACL of the IEEE Foundation for Intelligent Physical Agents (IEEE FIPA), which has been formalized in a mix of epistemic and doxastic logics (ie, logics of knowledge and belief).   Unfortunately, the semantics of FIPA ACL requires the sender of information (ie, Alice) to believe that information herself.  This feature precludes the language from being used for any interactions involving negotiation or scenario exploration.  The semantics of FIPA ACL also requires that Alice not believe that the recipient already has a view, one way or the other, about the information being communicated (eg, the proposition P).  Presumably this is to prevent Alice wasting the recipient’s time.  But this feature precludes the language from being used for one of the most common interactions in computer communications: the citing of a password by someone (human or machine) seeking access to some resource, since the citer of the password assumes that the resource-controller already knows the password.
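To see why, consider a toy model of the two conditions just described (a sketch of those conditions only, not an implementation of the FIPA specification): a sender may inform a receiver of a proposition only if the sender believes it and does not believe the receiver already holds a view on it.  The password case then fails the second condition.

    # A toy model of the two "inform" preconditions described above.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        beliefs: set = field(default_factory=set)  # propositions this agent believes
        beliefs_about_others: dict = field(default_factory=dict)
        # other agent's name -> propositions this agent believes the other has already settled

        def can_inform(self, receiver: "Agent", proposition: str) -> bool:
            believes_it = proposition in self.beliefs
            thinks_receiver_undecided = (
                proposition not in self.beliefs_about_others.get(receiver.name, set())
            )
            return believes_it and thinks_receiver_undecided

    # The password example: the client believes the server already knows the
    # password, so under these conditions it may not "inform" the server of it.
    client = Agent("client", beliefs={"password=XYZ"},
                   beliefs_about_others={"server": {"password=XYZ"}})
    server = Agent("server")
    print(client.can_inform(server, "password=XYZ"))   # False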
More work clearly needs doing on the semantics of machine communications.  As the example above demonstrates, communication has many subtleties and complexities.
Reference:
Tennent H. Bagley [2007]: Spy Wars: Moles, Mysteries, and Deadly Games.  New Haven, CT: Yale University Press.
