For at least 22 years, I have heard business presentations (ie, not just technical presentations) given by IT companies which mention client-server architectures. For the last 17 of those years, this is not surprising, since both the Hypertext Transfer Protocol (HTTP) and the World Wide Web (WWW) use this architecture. In a client-server architecture, one machine (the client) requests that some action be taken by another machine (the server), which responds to the request. For HTTP, the standard request by the client is for the server to send to the client some electronic file, such as a web page. The response by the server is not necessarily to undertake the action requested. Indeed, the specifications of HTTP define 41 responses (so-called status codes), including outright refusal by the server (Client Error 403 “Forbidden”), and allow for hundreds more to be defined. Typically, one server will be configured to respond to many simultaneous or near-simultaneous client requests. The functions of client and server are conceptually quite distinct, although of course one machine may undertake both functions, and a server may even have to make a request as a client to another server in order to respond to an earlier request from its own clients. As an analogy, consider a library, which acts like a server of books to its readers, who are its clients; a library may have to request a book via inter-library loan from another library in order to satisfy a reader’s request.
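To make the exchange concrete, here is a minimal sketch of a client-server interaction using Python’s standard http.client module; the host name is purely illustrative:

```python
import http.client

# A client-server exchange under HTTP: the client asks the server
# to send it a file, and the server answers with a status code.
# "example.com" is an illustrative host, not a real target.
conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/index.html")
response = conn.getresponse()

# The server need not comply: 403 ("Forbidden") is an outright refusal.
print(response.status, response.reason)  # eg, 200 OK, or 403 Forbidden
body = response.read()                   # the requested file, if sent
conn.close()
```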
Since the rise of file sharing, particularly illegal file sharing, over a decade ago, it has also been common to hear talk about Peer-to-Peer (P2P) architectures. Conceptually, in these architectures all machines are viewed equally, and none is especially distinguished as a server. Here, there is no central library of books; rather, each reader owns some books and is willing to lend them to any other reader as and when needed. Originally, peer-to-peer architectures were invented to circumvent copyright law, but they turn out (as do most technical innovations) to have other, more legal, uses – such as the distributed storage and sharing of electronic documents in large organizations (eg, X-ray images in networks of medical clinics).
Both client-server and P2P architectures involve attempts at remote control. A client or a peer machine makes a request of another machine (a server or another peer, respectively) to undertake some action or actions at the location of the second machine. The machine receiving the request may or may not execute it. This has led me to think about models of such action-at-a-distance.
Imagine we have two agents (human or software), named A and B, at different locations, and a resource, named X, at the same location as B. For example, X could be an electron microscope, B the local technician at the site of the microscope, and A a remote user of the microscope. Suppose further that agent B can take actions directly to control resource X. Agent A may or may not have permissions or powers to act on X.
Then, we have the following five possible situations (a sketch in code follows the list):
1. Agent A controls X directly, without agent B’s involvement (ie, A has remote access to and remote control over resource X).
2. Agent A commands agent B to control X (ie, A and B have a master-slave relationship; some client-server relationships would fall into this category).
3. Agent A requests agent B to control X (ie, both A and B are autonomous agents; P2P would be in this category, as well as many client-server interactions).
4. Both agent A and agent B need to take actions jointly to control X (eg, the double-key system for launching nuclear missiles in most nuclear-armed forces; coalitions of agents would be in this category).
5. Agent A has no powers, direct or indirect, to control resource X.
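To fix ideas, the taxonomy can be rendered in code. What follows is a minimal sketch in Python; the names ControlRelation and outcome are my own illustration, not drawn from any existing formal theory:

```python
from enum import Enum, auto

class ControlRelation(Enum):
    """The five possible relationships between a remote agent A,
    a local agent B, and the resource X co-located with B."""
    DIRECT  = auto()   # 1. A controls X directly, without B
    COMMAND = auto()   # 2. A commands B to control X (master-slave)
    REQUEST = auto()   # 3. A requests B to control X (autonomous agents)
    JOINT   = auto()   # 4. A and B must act jointly on X (double-key)
    NONE    = auto()   # 5. A has no power, direct or indirect, over X

def outcome(relation: ControlRelation, b_consents: bool = True) -> str:
    """Toy dispatcher: what happens to X when A initiates an action?"""
    if relation is ControlRelation.DIRECT:
        return "A acts on X directly"
    if relation is ControlRelation.COMMAND:
        return "B acts on X as ordered"          # B has no discretion
    if relation is ControlRelation.REQUEST:      # B may refuse, cf. HTTP 403
        return "B acts on X" if b_consents else "B refuses"
    if relation is ControlRelation.JOINT:
        return "X is acted on" if b_consents else "nothing happens"
    return "A cannot affect X"                   # Case 5

print(outcome(ControlRelation.REQUEST, b_consents=False))  # "B refuses"
```

Note that only Cases 3 and 4 leave any discretion with agent B, which is precisely the freedom that HTTP’s status codes, such as 403 “Forbidden”, make explicit.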
As far as I can tell, these five situations exhaust the possible relationships between agents A and B acting on resource X, at least for those cases where potential actions on X are initiated by agent A. From this outline, we can see the relevance of much that is now being studied in computer science:
- Action co-ordination (Cases 1-5)
- Command dialogs (Case 2)
- Persuasion dialogs (Case 3)
- Negotiation dialogs (dialogs to divide a scarce resource) (Case 4)
- Deliberation dialogs (dialogs over what actions to take) (Cases 1-4)
- Coalitions (Case 4).
To the best of my knowledge, there is as yet no formal theory which encompasses these five cases. (I welcome any suggestions or comments to the contrary.) Such a formal theory is needed as we move beyond Web 2.0 (the web as means to create and sustain social networks) to reification of the idea of computing-as-interaction (the web as a means to co-ordinate joint actions).
Reference:
Network Working Group [1999]: Hypertext Transfer Protocol – HTTP/1.1. Technical Report RFC 2616. Internet Engineering Task Force.