In the post below, I mentioned the challenge for knowledge engineers of representing know-how, a task which may require explicit representation of actions, and sometimes also of utterances over actions. The know-how involved in steering a large sailing ship with its diverse crew surely includes the knowledge of whom to ask (or to command) to do what and when, and of how to respond when these requests (or commands) are ignored, or fail to be executed successfully or timeously.
One might imagine epistemology – the philosophy of knowledge – would be of help here. Philosophers, however, have been seduced since Aristotle by propositions (factual statements about the world having truth values), largely ignoring actions and their representation. Philosophers of language have likewise focused mostly on speech acts – utterances which act to change the world – rather than on utterances about actions themselves. Even among speech act theorists the obsession with propositions is strong: there have been attempts to analyze utterances which are demonstrably not propositions (e.g., commands) by means of the implicit assertive statements they allegedly imply – propositions asserting something about the world, where “the world” is extended to include internal mental states and intangible social relations between people. With only a few exceptions (Thomas Reid 1788, Adolf Reinach 1913, Juergen Habermas 1981, Charles Hamblin 1987), philosophers of language have mostly ignored utterances about actions.
Consider the following two statements:
I promise you to wash the car.
I command you to wash the car.
The two statements have almost identical English syntax. Yet their meanings, and the intentions of their speakers, are quite distinct. For a start, the action of washing the car would be performed by different people – the speaker and the hearer, respectively (assuming for the moment that the command is validly issued, and accepted). Similarly, the power to retract or revoke the commitment to wash the car rests with different people – with the hearer (as the recipient of the promise) and with the speaker (as the commander), respectively.
Linguists generally use “semantics” to refer to the real-world referents of syntactically-correct expressions, while “pragmatics” covers those aspects of the meaning and use of an expression which do not depend on its relationship (or lack of one) to things in the world, such as the speaker’s intentions. For neither of these two expressions does it make sense to speak of a truth value: a promise may be questioned as to its sincerity, its feasibility, or its appropriateness, but not its truth or falsity; likewise, a command may be questioned as to its legal validity, its feasibility, or its morality, but again not its truth or falsity.
For utterances about actions, such as promises, requests, entreaties and commands, truth-value semantics makes no sense. Instead, we generally need to consider two pragmatic aspects. The first is uptake: the acceptance of the utterance by the hearer (an aspect first identified by Reid and by Reinach), an acceptance which generally creates a social commitment, by one or other party to the conversation (speaker or hearer), to execute the action described in the utterance. Once uptake has occurred, a second pragmatic aspect comes into play: the power to revoke or retract the social commitment to execute the action. This revocation power does not necessarily lie with the original speaker; only the recipient of a promise may cancel it, for example, not the original promiser. Nor does the revocation power necessarily lie with the uptaker, as commands readily indicate.
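To make these two pragmatic aspects concrete, here is a minimal sketch in Python of the lifecycle just described: an utterance over an action creates a social commitment upon uptake, and the power to revoke that commitment tracks the type of utterance rather than the identity of the uptaker. Every class and method name here is my own invention for illustration, not any standard API.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Kind(Enum):
    PROMISE = auto()   # the speaker commits to act; the recipient may release
    COMMAND = auto()   # the hearer commits to act; the commander may revoke

@dataclass
class ActionUtterance:
    kind: Kind
    speaker: str
    hearer: str
    action: str
    uptaken: bool = False
    revoked: bool = False

    def uptake(self) -> None:
        # The hearer accepts the utterance, creating the social commitment.
        self.uptaken = True

    @property
    def performer(self) -> str:
        # Who is committed to execute the action.
        return self.speaker if self.kind is Kind.PROMISE else self.hearer

    @property
    def revoker(self) -> str:
        # Who may cancel the commitment: the recipient of a promise,
        # the issuer of a command -- not necessarily the uptaker.
        return self.hearer if self.kind is Kind.PROMISE else self.speaker

    def revoke(self, by: str) -> None:
        if not self.uptaken:
            raise ValueError("no commitment exists before uptake")
        if by != self.revoker:
            raise PermissionError(f"only {self.revoker} may revoke this {self.kind.name.lower()}")
        self.revoked = True

# The two example sentences above, with identical syntax but different pragmatics:
promise = ActionUtterance(Kind.PROMISE, "speaker", "hearer", "wash the car")
command = ActionUtterance(Kind.COMMAND, "speaker", "hearer", "wash the car")
for u in (promise, command):
    u.uptake()
    print(u.kind.name, "- performer:", u.performer, "- revoker:", u.revoker)
```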
Why would a computer scientist be interested in such humanistic arcana? The more tasks we delegate to intelligent machines, the more those machines need to co-ordinate actions with others of like kind. Such co-ordination requires conversations comprising utterances over actions, and, for success, these require agreed syntax, semantics and pragmatics. To give just one example: the use of intelligent devices by soldiers has made the modern battlefield a place of overwhelming information collection, analysis and communication. Much of this communication can be done by intelligent software agents, which is why the US military, inter alia, sponsors research applying the philosophy of language and the philosophy of argumentation to machine communications.
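For a flavour of what an agreed syntax for such machine conversations looks like, here is a hedged sketch loosely modelled on the message structure of the FIPA Agent Communication Language, whose performatives (request, agree, refuse, cancel, and so on) are precisely utterances over actions. The Python rendering and the trimmed selection of fields are mine; FIPA specifies the message format, not this code.

```python
from dataclasses import dataclass

@dataclass
class ACLMessage:
    # Field names follow the FIPA ACL message structure; this Python
    # rendering is illustrative only.
    performative: str       # e.g. "request", "agree", "refuse", "cancel"
    sender: str
    receiver: str
    content: str            # the action at issue, in an agreed content language
    conversation_id: str    # threads the turns of one dialogue together
    in_reply_to: str = ""   # links a reply to the message it answers

# A two-turn exchange: a request over an action, and its uptake.
req = ACLMessage("request", "agent-A", "agent-B",
                 "wash(car-17)", conversation_id="c42")
ack = ACLMessage("agree", "agent-B", "agent-A",
                 "wash(car-17)", conversation_id="c42",
                 in_reply_to="request")
```

Note that in FIPA ACL the cancel performative is issued by the agent which made the original request, mirroring the point above that revocation power need not lie with the uptaker.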
Meanwhile, the philistine British Government intends to cease funding tertiary education in the arts and the humanities. Even utilitarians should object to this.
References:
Juergen Habermas [1984/1981]: The Theory of Communicative Action: Volume 1: Reason and the Rationalization of Society. London, UK: Heinemann. (Translation by T. McCarthy of: Theorie des kommunikativen Handelns, Band I: Handlungsrationalität und gesellschaftliche Rationalisierung. Frankfurt, Germany: Suhrkamp, 1981.)
Charles L. Hamblin [1987]: Imperatives. Oxford, UK: Basil Blackwell.
Peter McBurney and Simon Parsons [2007]: Retraction and revocation in agent deliberation dialogs. Argumentation, 21 (3): 269-289.
Adolf Reinach [1913]: Die apriorischen Grundlagen des bürgerlichen Rechtes. Jahrbuch für Philosophie und phänomenologische Forschung, 1: 685-847.