John Bennett RIP

John Bennett AO (1921-2010), first professor of computing in Australia and founder of Sydney University’s Basser Department of Computer Science, died last month.  The SMH obit, from which the lines below are taken, is here.

Emeritus Professor John Bennett AO was an internationally recognised Australian computing pioneer. Known variously as “the Prof”, “JMB” or “Rusty”, he was a man with a voracious appetite for ideas, renowned for his eclectic interests, intellectual generosity, cosmopolitan hospitality and prodigious general knowledge.
As Australia’s first professor of computer science and foundation president of the Australian Computer Society, Bennett was an innovator, educator and mentor but at the end of his life he wished most to be remembered for his contribution to the construction of one of the world’s first computers, the Electronic Delay Storage Automatic Calculator (EDSAC). In 1947 at Cambridge University, as Maurice Wilkes’s first research student, he was responsible for the design, construction and testing of the main control unit and bootstrap facility for EDSAC and carried out the first structural engineering calculations on a computer as part of his PhD. Bennett’s work was critical to the success of EDSAC and was achieved with soldering irons and war-surplus valves in the old Cambridge anatomy dissecting rooms, still reeking of formalin.
. . .
The importance of his work on EDSAC was recognised by many who followed. In Cambridge he also pioneered the use of digital computers for X-ray crystallography in collaboration with John Kendrew (later a Nobel Prize winner), one of many productive collaborations.
He was recruited from Cambridge by Ferranti Manchester in 1950 to work on the Mark 1*. Colleague G. E. “Tommy” Thomas recalls that when Ferranti’s promise to provide a computer for the 1951 Festival of Britain could not be fulfilled, “John suggested … a machine to play the game of Nim against all comers … [It] was a great success. The machine was named Nimrod and is the precursor of the vast electronic games industry we know today.”
In 1952, Bennett married Mary Elkington, a London School of Economics and Political Science graduate in economics, who was working in another section at Ferranti. Moving to Ferranti’s London Computer Laboratory in 1953, Bennett worked in a team led by Bill Elliott, alongside Charles Owen, whose plug-in components enabled design of complete computers by non-engineers. Owen went on to design the IBM 360/30. Bennett remembered of the time, “Whatever we touched was new; it gives you a great lift. We weren’t fully aware of what we were pioneering. We knew we had the best way but we weren’t doing it to convert people – we were doing it because it was a new tool which should get used. We knew we were ploughing new ground.”
Bennett was proud of being Australian and strongly felt the debt he owed for his education. When Harry Messel’s School of Physics group asked him in 1956 to head operations on SILLIAC (the Sydney version of ILLIAC, the University of Illinois Automatic Computer – faster than any machine then commercially available), he declined a more lucrative offer he had accepted from IBM and moved his family to Sydney.
The University of Sydney acknowledged computer science as a discipline by creating a chair for Bennett, the professor of physics (electronic computing) in 1961. Later the title became professor of computer science and head of the Basser department of computer science. Fostering industry relationships and ensuring a flow of graduates was a cornerstone of his tenure.
Bennett was determined that Australia should be part of the world computing scene and devoted much time and effort to international professional organisations. This was sometimes a trial for his staff. Arthur Sale recalls, “I quickly learnt that John going away was the precursor to him returning with a big new idea. After a period when we could catch up with our individual work, John would tell us about the new thing that we just had to work on. Once it was the ARPAnet [Advanced Research Projects Agency Network] and nothing would suffice until we started to try to communicate with the Aloha satellite over Hawaii that had run out of gas to establish a link to Los Angeles and ARPAnet and lo and behold, the internet had come to Australia in the 1970s.”
In 1983, Bennett was appointed as an officer of the Order of Australia. After his retirement in 1986, he remained active, attending PhD seminars and lectures to “stay up to date and offer a little advice” while continuing to earn recognition for his contributions to computer science for more than half a century.

Dialogs over actions

In the post below, I mentioned the challenge for knowledge engineers of representing know-how, a task which may require explicit representation of actions, and sometimes also of utterances over actions.  The know-how involved in steering a large sailing ship with its diverse crew surely includes the knowledge of who to ask (or to command) to do what, when, and how to respond when these requests (or commands) are ignored, or fail to be executed successfully or timeously.
One might imagine epistemology – the philosophy of knowledge – would be of help here.  Philosophers, however, have been seduced, since Aristotle, by propositions (factual statements about the world having truth values), largely ignoring actions and their representation.  Philosophers of language have also mostly focused on speech acts – utterances which act to change the world – rather than on utterances about actions themselves.  Even among speech act theorists the obsession with propositions is strong, with attempts to analyze utterances which are demonstrably not propositions (eg, commands) by means of the implicit assertive statements – propositions asserting something about the world, where “the world” is extended to include internal mental states and intangible social relations between people – which these utterances allegedly imply.  With only a few exceptions (Thomas Reid 1788, Adolf Reinach 1913, Juergen Habermas 1981, Charles Hamblin 1987), philosophers of language have mostly ignored utterances about actions.
Consider the following two statements:

I promise you to wash the car.
I command you to wash the car.

The two statements have almost identical English syntax.   Yet their meanings, and the intentions of their speakers, are very distinct.  For a start, the action of washing the car would be done by different people – the speaker and the hearer, respectively (assuming for the moment that the command is validly issued, and accepted).  Similarly, the power to retract or revoke the action of washing the car rests with different people – with the hearer (as the recipient of the promise) and the speaker (as the commander), respectively.
Linguists generally use “semantics” to refer to the real-world referents of syntactically-correct expressions, while “pragmatics” refers to those aspects of the meaning and use of an expression other than its relationship (or lack of one) to things in the world, such as the speaker’s intentions.  For neither of these two expressions does it make sense to speak of their truth value:  a promise may be questioned as to its sincerity, or its feasibility, or its appropriateness, etc, but not its truth or falsity;  likewise, a command may be questioned as to its legal validity, or its feasibility, or its morality, etc, but also not its truth or falsity.
For utterances about actions, such as promises, requests, entreaties and commands, truth-value semantics makes no sense.  Instead, we generally need to consider two pragmatic aspects.  The first is uptake, the acceptance of the utterance by the hearer (an aspect first identified by Reid and by Reinach), an acceptance which generally creates a social commitment to execute the action described in the utterance by one or other party to the conversation (speaker or hearer).    Once uptaken, a second pragmatic aspect comes into play:  the power to revoke or retract the social commitment to execute the action.  This revocation power does not necessarily lie with the original speaker; only the recipient of a promise may cancel it, for example, and not the original promiser.  The revocation power also does not necessarily lie with the uptaker, as commands readily indicate.
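To make this concrete in computational terms, here is a minimal Python sketch of these pragmatic aspects – my own illustration, not any standard formalism, with all class and method names invented for the purpose. It records an utterance about an action, the uptake which creates the social commitment, and the role-dependent power of revocation:

```python
# A minimal sketch (an invented illustration, not a standard library or
# formalism) of utterances about actions: uptake creates a social commitment,
# and the power to revoke that commitment depends on the type of utterance.

class ActionUtterance:
    def __init__(self, kind, speaker, hearer, action):
        assert kind in ("promise", "command")
        self.kind, self.speaker, self.hearer, self.action = kind, speaker, hearer, action
        self.uptaken = False   # no commitment exists until the hearer accepts
        self.revoked = False

    def uptake(self):
        """The hearer's acceptance creates the social commitment."""
        self.uptaken = True

    def committed_party(self):
        # A promise commits the speaker to act; an accepted command commits the hearer.
        return self.speaker if self.kind == "promise" else self.hearer

    def may_revoke(self, agent):
        # Revocation power lies with the recipient of a promise, but with the
        # issuer of a command -- in neither case simply with the uptaker.
        if not self.uptaken or self.revoked:
            return False
        revoker = self.hearer if self.kind == "promise" else self.speaker
        return agent == revoker

    def revoke(self, agent):
        if not self.may_revoke(agent):
            raise PermissionError(f"{agent} has no power to revoke this {self.kind}")
        self.revoked = True

promise = ActionUtterance("promise", "me", "you", "wash the car")
promise.uptake()                   # you accept my promise
print(promise.committed_party())   # 'me': the promiser must do the washing
print(promise.may_revoke("me"))    # False: the promiser cannot cancel
print(promise.may_revoke("you"))   # True: only the recipient may release me
```

Note that asking for the truth value of the promise object would be a category error, just as argued above: the object has attributes recording uptake and revocation, but nothing corresponding to truth or falsity.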
Why would a computer scientist be interested in such humanistic arcana?  The more tasks we delegate to intelligent machines, the more they need to co-ordinate actions with others of like kind.  Such co-ordination requires conversations comprising utterances over actions, and, for success, these require agreed syntax, semantics and pragmatics.  To give just one example:  the use of intelligent devices by soldiers has made the modern battlefield a place of overwhelming information collection, analysis and communication.  Much of this communication can be done by intelligent software agents, which is why the US military, inter alia, sponsors research applying the philosophy of language and the philosophy of argumentation to machine communications.
Meanwhile, the philistine British Government intends to cease funding tertiary education in the arts and the humanities.   Even utilitarians should object to this.
References:
Juergen Habermas [1984/1981]:  The Theory of Communicative Action. Volume 1: Reason and the Rationalization of Society.  London, UK:  Heinemann.  (Translation by T. McCarthy of:  Theorie des Kommunikativen Handelns, Band I: Handlungsrationalität und gesellschaftliche Rationalisierung.  Frankfurt, Germany:  Suhrkamp, 1981.)
Charles  L. Hamblin [1987]:  Imperatives. Oxford, UK:  Basil Blackwell.
P. McBurney and S. Parsons [2007]: Retraction and revocation in agent deliberation dialogs. Argumentation, 21 (3): 269-289.

Adolf Reinach [1913]:  Die apriorischen Grundlagen des bürgerlichen Rechtes.  Jahrbuch für Philosophie und phänomenologische Forschung, 1: 685-847.

Antikythera

An orrery is a machine for predicting the movements of heavenly bodies.  The oldest known orrery is the Antikythera Mechanism, created in Greece around 2100 years ago, and rediscovered in 1901 in a shipwreck near the island of Antikythera (hence its name).  The high quality and precision of its components indicate that this device was not unique, since making high-quality mechanical components is not trivial, and is rarely achieved at the first attempt (as Charles Babbage discovered, to the immense delay of his development of computing machinery).
It took until 2006 and the development of X-ray tomography for a plausible theory of the purpose and operations of the Antikythera Mechanism to be proposed (Freeth et al. 2006).  The machine was said to be a physical exemplification of late Greek theories of cosmology, in particular the idea that the motion of a heavenly body could be modeled by an epicycle – ie, a body traveling around a circle, which is itself moving around some second circle.  This model provided an explanation for the fact that many heavenly bodies appear to move at different speeds at different times of the year, and sometimes even (appear to) move backwards.
There have been two recent developments.  One is the re-creation of the machine (or, rather, an interpretation of it) using Lego components.
The second has arisen from a more careful examination of the details of the mechanism.  According to Marchant (2010), some people now believe that the mechanism exemplifies Babylonian, rather than Greek, cosmology.  Babylonian astronomers modeled the movements of heavenly bodies by assuming each body traveled along just one circle, but at two different speeds:  movement in one period of the year being faster than during the other part of the year.
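To see the difference concretely, here is a toy Python sketch (my own illustration, with arbitrary made-up parameters, not a reconstruction of the mechanism’s actual gear ratios) contrasting the two models of apparent motion: the Greek epicycle, in which the body rides a small circle whose centre itself travels around a larger circle, and the Babylonian single circle traversed at two different speeds:

```python
# A toy contrast of the two cosmologies; all parameters are arbitrary
# illustrations, not values recovered from the Antikythera Mechanism.
import math

def epicycle_longitude(t, R=1.0, r=0.3, deferent_rate=1.0, epicycle_rate=12.0):
    """Greek model: position on a small circle (epicycle) whose centre
    moves around a larger circle (the deferent); t in radians over a year."""
    x = R * math.cos(deferent_rate * t) + r * math.cos(epicycle_rate * t)
    y = R * math.sin(deferent_rate * t) + r * math.sin(epicycle_rate * t)
    return math.atan2(y, x)   # apparent angular position

def babylonian_longitude(t, fast=1.2, slow=0.8):
    """Babylonian model: one circle, traversed fast for half the year and
    slowly for the other half (fast and slow average to one circuit a year)."""
    t = t % (2 * math.pi)
    half = math.pi
    if t < half:
        return (fast * t) % (2 * math.pi)
    return (fast * half + slow * (t - half)) % (2 * math.pi)

for month in range(12):
    t = 2 * math.pi * month / 12
    print(f"month {month:2d}: epicycle {epicycle_longitude(t):+.2f} rad, "
          f"two-speed {babylonian_longitude(t):.2f} rad")
```

With these parameters the epicycle model even exhibits retrograde motion (the apparent longitude sometimes decreases), which the simple two-speed model cannot produce.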
If this second interpretation of the Antikythera Mechanism is correct, then perhaps it was the mechanism itself (or others like it) which gave late Greek astronomers the idea for an epicycle model.   In support of this view is the fact that, apparently, gearing mechanisms and the epicycle model both appeared around the same time, with gears perhaps a little earlier.   So late Greek cosmology (and perhaps late geometry) may have arisen in response to, or at least alongside, practical developments and physical models.   New ideas in computing typically follow the same trajectory – first they exist in real, human-engineered, systems; then, we develop a formal, mathematical theory of them.   Programmable machines, for instance, were invented in the textile industry in the first decade of the 19th century (eg, the Jacquard Loom), but a mathematical theory of programming did not appear until the 1960s.   Likewise, we have had a fully-functioning, scalable, global network enabling multiple, asynchronous, parallel, sequential and interleaved interactions since Arpanet four decades ago, but we still lack a thorough mathematical theory of interaction.
And what have the Babylonians ever done for us?  Apart from giving us our units for measuring time (divided into 60s) and angles (into 360 degrees)?
References:
T Freeth, Y Bitsakis, X Moussas, JH Seiradakis, A Tselikas, H Mangou, M Zafeiropoulou, R Hadland, D Bate, A Ramsey, M Allen, A Crawley, P Hockley, T Malzbender, D Gelb, W Ambrisco and MG Edmunds [2006]:  Decoding the ancient Greek astronomical calculator known as the Antikythera Mechanism.  Nature, 444:  587-591.  30 November 2006.
J. Marchant [2010]:  Mechanical inspiration.  Nature, 468:  496-498.  25 November 2010.

Syntax Attacks

Thanks to the ever-watchful Normblog, I encounter an article by Colin Tatz inveighing against talk about sport.  Norm is right to call Tatz to account for writing nonsense – talk about sport is just as meaningful as talk about politics, history, religion, nuclear deterrence, genocide, or any other real-world human activity.  Tatz says:

Sport is international phatic but also a crucial Australian (male) vehicle. It enables not just short, passing greetings but allows for what may seem like deep, passionate and meaningful conversations but which in the end are unmemorable, empty, producing nothing and enhancing no one.

Unmemorable?! Really?   What Australian could forget Norman May’s shouted “Gold! Gold for Australia! Gold!” commentary at the end of the men’s 4 x 100 metre medley relay at the 1980 Olympics in Moscow?  Only a churlish gradgrind could fail to be enhanced by hearing this.   And what Australian of a certain age could forget the inimitable footie commentary of Rex Mossop, including, for example, such statements as, “That’s the second consecutive time he’s done that in a row one straight after the other.”  Mossop’s heat-of-the-moment sporting talk was commemorated with his many winning places in playwright Alex Buzo’s Australian Indoor Tautology Pennant, an annual competition held, as I recall, in Wagga Wagga, Gin Gin and Woy Woy (although not in Woop Woop or in The Never Never), before moving internationally to exotic locations such as Pago Pago, Xai Xai and Baden Baden.  Unmemorable, Mr Tatz?  Enhancing no one?  Really?  To be clear, these are not memorable sporting events, but memorable sporting commentary.   And all I’ve mentioned so far is sporting talk, not the great writers on baseball, on golf, on cricket, on swimming . . .
But as well as misunderstanding what talk about sport is about and why it is meaningful, Tatz is wrong on another score.   He says:

But why so much natter and clatter about sport? Eco’s answer is that sport “is the maximum aberration of ‘phatic’ speech”, which is really a negation of speech.
Phatic speech is meaningless speech, as in “G’day, how’s it going?” or “have a nice day” or “catch you later” — small talk phrases intended to produce a sense of sociability, sometimes uttered in the hope that it will lead to further and more real intercourse, but human enough even if the converse goes no further.

Phatic communications are about establishing and maintaining relationships between people.  Such a purpose is the very essence of speech communication, not its negation.  Tatz, I fear, has fallen into the trap of so many computer scientists – to focus on the syntax of messages, and completely ignore their semantics and pragmatics.  The syntax of messages concerns their surface form, their logical structure, their obedience (or not) to rules which determine whether they are legal and well-formed statements (or not) in the language they purport to arise from.  The semantics of utterances concerns their truth or falsity, in so far as they describe real objects in some world (perhaps the one we all live in, or some past, future or imagined world), while their pragmatics concerns those aspects of their meaning unrelated to their truth status (for example, who has power to revoke or retract them).
I have discussed this syntax-is-all-there-is mistake before.    I believe the root causes of this mistaken view are two-fold: the mis-guided focus of philosophers these last two centuries on propositions to the exclusion of other types of utterances and statements (of which profound error Terry Eagleton has shown himself guilty), and the mis-guided view that we now live in some form of Information Society, a view which wrongly focuses attention on the information  transferred by utterances to the exclusion of any other functions that utterances may serve or any other things we agents (people and machines) may be doing and aiming to do when we talk.   If you don’t believe me about the potentially complex functionality of utterances, even when viewed as nothing more than the communication of factual propositions, then read this simple example.
If communications were only about the transfer of explicit information, then life would be immensely less interesting.  It would also not be human life, for we would be no more intelligent than desktop computers passing HTTP requests and responses to one another.

On birds and frogs

I have posted before about the two cultures of pure mathematicians – the theory-builders and the problem-solvers.  Thanks to string theorist and SF author Hannu Rajaniemi, I have just seen a fascinating paper by Freeman Dyson, which draws a similar distinction – between the birds (who survey the broad landscape, making links between disparate branches of mathematics) and the frogs (who burrow down in the mud, solving particular problems in specific branches of the discipline).   This distinction is analogous to that between a focus on breadth and a focus on depth, respectively, as strategies  in search.   As Dyson says, pure mathematics as a discipline needs both personality-types if it is to make progress.   Yet, a tension often exists between these types:  in my experience, frogs are often disdainful of birds for lacking deep technical expertise.   I have less often encountered disdain from birds, perhaps because that is where my own sympathies are.
A similar tension exists in computing – a subject which needs both deep technical expertise AND a rich awareness of the breadth of applications to which computing may be put.  This need arises because the history of the subject shows an intricate interplay of theory and applications, led almost always by the application.    Turing’s abstract cineprojector model of computing arrived a century after Babbage’s calculating machines, for example, and we’ve had programmable devices since at least Jacquard’s loom in 1804, yet only had a mathematical theory of programming since the 1960s.  In fact, since computer science is almost entirely a theory of human artefacts (apart from that part – still small – which looks at natural computing), it would be strange indeed were the theory to divorce itself from the artefacts which are its scope of study.
A story which exemplifies this division in computing is here.
Reference:
Freeman Dyson [2009]:  Birds and frogs.  Notices of the American Mathematical Society, 56 (2): 212-223, February 2009.   Available here.

A computer pioneer

I have posted before about how the history of commercial computing is intimately linked with the British tea-shop, via LEO, a successful line of commercial computers developed by the Lyons tea-shop chain.  The first business application run on a Lyons computer was almost 60 years ago, in 1951.  Today’s Grauniad carries an obituary for John Aris (1934-2010), who had worked for LEO on the first stage of an illustrious career in commercial IT.  His career included a period as Chief Systems Engineer with British computer firm ICL (later part of Fujitsu).  Aris’ university education was in Classics, and he provides another example to show that the matherati represent a cast of mind, and not merely a collection of people educated in mathematics.

John’s career in computing began in 1958 when he was recruited to the Leo (Lyons Electronic Office) computer team by J Lyons, then the major food business in the UK, and initiators of the notion that the future of computers lay in their use as a business tool. At the time, the prevailing view was that work with computers required a trained mathematician. The Leo management thought otherwise and recruited using an aptitude test. John, an Oxford classics graduate, passed with flying colours, noting that “the great advantage of studying classics is that it does not fit you for anything specific”.

Of course, LEO was not the first time that cafes had led to new information industries, as we noted here in a post about the intellectual and commercial consequences of the rise of coffee houses in Europe from the mid-17th century.  The new industries the first time round were newspapers, insurance, and fine art auctions (and through them, painting as a commercial activity aimed at non-aristocrat collectors); the new intellectual discipline was the formal modeling of uncertainty (aka probability theory).

UPDATE (2012-05-22):  The Telegraph of 2011-11-10 ran an article about the Lyons Tea Shop computer business, here, to celebrate the 60th anniversary of the LEO (1951-11-17).

The Matherati

Howard Gardner’s theory of multiple intelligences includes an intelligence he called Logical-Mathematical Intelligence, the ability to reason about numbers, shapes and structure, to think logically and abstractly.   In truth, there are several different capabilities in this broad category of intelligence – being good at pure mathematics does not necessarily make you good at abstraction, and vice versa, and so the set of great mathematicians and the set of great computer programmers, for example, are not identical.
But there is definitely a cast of mind we might call mathmind.   As well as the usual suspects, such as Euclid, Newton and Einstein, there are many others with this cast of mind.  For example, Thomas Harriott (c. 1560-1621), inventor of the less-than symbol and the first person to draw the moon as seen through a telescope, was one.   Newton’s friend, Nicolas Fatio de Duillier (1664-1753), was another.   In the talented 18th-century family of Charles Burney, whose relatives and children included musicians, dancers, artists, and writers (and an admiral), Charles’ grandson, Alexander d’Arblay (1794-1837), the son of writer Fanny Burney, was 10th wrangler in the Mathematics Tripos at Cambridge in 1818, and played chess to a high standard.  He was friends with Charles Babbage, also a student at Cambridge at the time, and a member of the Analytical Society which Babbage had co-founded; this was an attempt to modernize the teaching of pure mathematics in Britain by importing the rigor and notation of continental analysis, which d’Arblay had already encountered as a school student in France.
And there are people with mathmind right up to the present day.   The Guardian a year ago carried an obituary, written by a family member, of Joan Burchardt, who was described as follows:

My aunt, Joan Burchardt, who has died aged 91, had a full and interesting life as an aircraft engineer, a teacher of physics and maths, an amateur astronomer, goat farmer and volunteer for Oxfam. If you had heard her talking over the gate of her smallholding near Sherborne, Dorset, you might have thought she was a figure from the past. In fact, if she represented anything, it was the modern, independent-minded energy and intelligence of England. In her 80s she mastered the latest computer software coding.

Since language and text have dominated modern Western culture these last few centuries, our culture’s histories are mostly written in words.   These histories favor the literate, who naturally tend to write about each other.    Clive James’ book of a lifetime’s reading and thinking, Cultural Amnesia (2007), for instance, lists just one musician and one film-maker among its 126 profiles, and includes not a single mathematician or scientist.     It is testimony to text’s continuing dominance in our culture, despite our society’s deep-seated, long-standing reliance on sophisticated technology and engineering, that we do not celebrate the matherati more.
On this page you will find an index to Vukutu posts about the Matherati.
FOOTNOTE: The image above shows the equivalence classes of directed homotopy (or, dihomotopy) paths in 2-dimensional spaces with two holes (shown as the black and white boxes). The two diagrams model situations where there are two alternative courses of action (eg, two possible directions) represented respectively by the horizontal and vertical axes.  The paths on each diagram correspond to different choices of interleaving of these two types of actions.  The word directed is used because actions happen in sequence, represented by movement from the lower left of each diagram to the upper right.  The word homotopy refers to paths which can be smoothly deformed into one another without crossing one of the holes.  The upper diagram shows there are just two classes of dihomotopically-equivalent paths from lower-left to upper-right, while the lower diagram (where the holes are positioned differently) has three such dihomotopic equivalence classes.  Of course, depending on the precise definitions of action combinations, the upper diagram may in fact reveal four equivalence classes, if paths that first skirt above the black hole and then beneath the white one (or vice versa) are permitted.  Applications of these ideas occur in concurrency theory in computer science and in theoretical physics.

AI's first millennium: prepare to celebrate

A search algorithm is a computational procedure (an algorithm) for finding a particular object or objects in a larger collection of objects.    Typically, these algorithms search for objects with desired properties whose identities are otherwise not yet known.   Search algorithms (and search generally) have been an integral part of artificial intelligence and computer science this last half-century, since the first working AI program, designed to play checkers, was written in 1951-2 by Christopher Strachey.    At each round, that program evaluated the alternative board positions that resulted from potential next moves, thereby searching for the “best” next move for that round.
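In outline, that evaluate-and-select loop looks like the following sketch; the functions legal_moves, apply_move and evaluate here are assumed stand-ins for the game-specific details, not Strachey’s actual code:

```python
# A one-ply search sketch: score the position reached by each legal move and
# choose the best. legal_moves, apply_move and evaluate are assumed
# game-specific callables supplied by the caller.

def best_next_move(position, legal_moves, apply_move, evaluate):
    best_move, best_score = None, float("-inf")
    for move in legal_moves(position):
        score = evaluate(apply_move(position, move))
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```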
The first search algorithm in modern times apparently dates from 1895:  a depth-first search algorithm to solve a maze, due to amateur French mathematician Gaston Tarry (1843-1913).  Now, in a recent paper by logician Wilfrid Hodges, the date for the first search algorithm has been pushed back much further:  to the third decade of the second millennium, the 1020s.  Hodges translates and analyzes a logic text of Persian Islamic philosopher and mathematician, Ibn Sina (aka Avicenna, c. 980 – 1037) on methods for finding a proof of a syllogistic claim when some premises of the syllogism are missing.   Representation of domain knowledge using formal logic and automated reasoning over these logical representations (ie, logic programming) has become a key way in which intelligence is inserted into modern machines;  searching for proofs of claims (“potential theorems”) is how such intelligent machines determine what they know or can deduce.  It is nice to think that automated theorem-proving is almost 990 years old.
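For illustration, here is a minimal depth-first maze solver in the spirit of Tarry’s algorithm – a modern recursive rendering of the idea, not his original 1895 formulation:

```python
# Depth-first search through a maze: explore each passage until blocked,
# then backtrack. A modern sketch of the idea, not Tarry's own procedure.

def solve_maze(maze, pos, goal, path=None, visited=None):
    """maze: set of open cells; pos, goal: (row, col) pairs.
    Returns a path from pos to goal as a list of cells, or None."""
    path = (path or []) + [pos]
    visited = visited if visited is not None else set()
    if pos == goal:
        return path
    visited.add(pos)
    r, c = pos
    for nxt in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]:
        if nxt in maze and nxt not in visited:
            found = solve_maze(maze, nxt, goal, path, visited)
            if found:
                return found
    return None   # dead end: backtrack

# A 3x3 grid with a single wall at (1, 1):
open_cells = {(r, c) for r in range(3) for c in range(3)} - {(1, 1)}
print(solve_maze(open_cells, (0, 0), (2, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```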
References:
B. Jack Copeland [2000]:  What is Artificial Intelligence?
Wilfrid Hodges [2010]: Ibn Sina on analysis: 1. Proof search. or: abstract state machines as a tool for history of logic.  pp. 354-404, in: A. Blass, N. Dershowitz and W. Reisig (Editors):  Fields of Logic and Computation. Lecture Notes in Computer Science, volume 6300.  Berlin, Germany:  Springer.   A version of the paper is available from Hodges’ website, here.
Gaston Tarry [1895]: Le problème des labyrinthes. Nouvelles Annales de Mathématiques, 14: 187-190.

In defence of futures thinking

Norm at Normblog has a post defending theology as a legitimate area of academic inquiry, after an attack on theology by Oliver Kamm.  (Since OK’s post is behind a paywall, I have not read it, so my comments here may be awry with respect to that post.)  Norm argues, very correctly, that it is legitimate for theology, considered as a branch of philosophy to, inter alia, reflect on the properties of entities whose existence has not yet been proven.  In strong support of Norm, let me add:  Not just in philosophy!
In business strategy, good decision-making requires consideration of the consequences of potential actions, which in turn requires the consideration of the potential actions of other actors and stakeholders in response to the first set of actions.  These actors may include entities whose existence is not yet known or even suspected, for example, future competitors to a product whose launch creates a new product category.   Why, there’s even a whole branch of strategy analysis devoted to scenario planning, a discipline that began in the military analysis of alternative post-nuclear worlds, and whose very essence involves the creation of imagined futures (for forecasting and prognosis) and/or imagined pasts (for diagnosis and analysis).   Every good air-crash investigation, medical diagnosis, and police homicide investigation, for instance, involves the creation of imagined alternative pasts, and often the creation of imaginary entities in those imagined pasts, whose fictional attributes we may explore at length.   Arguably, in one widespread view of the philosophy of mathematics, pure mathematicians do nothing but explore the attributes of entities without material existence.
And not just in business, medicine, the military, and the professions.   In computer software engineering, no new software system development is complete without due and rigorous consideration of the likely actions of users or other actors with and on the system, for example.   Users and actors here include those who are the intended target users of the system, as well as malevolent or whimsical or poorly-behaved or bug-ridden others, both human and virtual, not all of whom may even exist when the system is first developed or put into production.      If creative articulation and manipulation of imaginary futures (possible or impossible) is to be outlawed, not only would we have no literary fiction or much poetry, we’d also have few working software systems either.

The long after-life of design decisions

Reading Natasha Vargas-Cooper’s lively romp through the 1960s culture referenced in the TV series Mad Men, I came across Tim Siedell’s discussion of a witty, early 1960s advert by Doyle Dane Bernbach for Western Union telegrams, displayed here.

Seeing a telegram for the first time in about, oh, 35 years*, I looked at the structure.   Note the header, with information about the company, as well as meta-information about the message.   That structure immediately brought to mind the structure of a TCP packet.

The Transmission Control Protocol (TCP) is the work-horse protocol of the Internet, and was developed by Vint Cerf and Bob Kahn in 1974.   Their division of the packet contents into a header-part (the control information) and a data part (the payload) no doubt derived from earlier work on the design of packets for packet-switched networks.   Later packets (eg, for IP, the Internet Protocol) were simpler, but still retained this two-part structure.  This two-part division is also found in voice telecommunications of the time, for example in Common Channel Signalling Systems, which separated message content from information about the message (control information).   Such systems were adopted internationally by the ITU for voice communications from Signalling System #6 (SS6) in 1975 onwards.  In case the packet design seems obvious, it is worth considering some alternatives:  the meta-information could be in a footer rather than in a header, or enmeshed in the data itself (as, for example, HTML tags are enmeshed in the content they modify).  Or, the meta-data could be sent in a separate packet, perhaps ahead of the data packet, as happens with control information in Signalling System #7 (SS7), adopted from 1980.  There are technical reasons why some of these design possibilities are not feasible or not elegant, and perhaps the same reasons apply to the transmission of telegrams (which is, after all, a communications medium using packets).
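To see the two-part design in miniature, here is a toy Python illustration (emphatically not the real TCP packet format, whose header has many more fields; the three header fields chosen are my own) of control information up front and payload behind:

```python
# A toy header-plus-payload packet: fixed-width control fields first, data
# after. The three header fields are illustrative, not the real TCP header.
import struct

HEADER_FMT = "!HHI"   # source port, destination port, sequence number (big-endian)

def make_packet(src_port, dst_port, seq, payload: bytes) -> bytes:
    return struct.pack(HEADER_FMT, src_port, dst_port, seq) + payload

def parse_packet(packet: bytes):
    header_len = struct.calcsize(HEADER_FMT)
    src, dst, seq = struct.unpack(HEADER_FMT, packet[:header_len])
    return (src, dst, seq), packet[header_len:]

pkt = make_packet(5000, 80, 1, b"DONT WORRY MOM STOP SENDING MONEY STOP")
print(parse_packet(pkt))
# ((5000, 80, 1), b'DONT WORRY MOM STOP SENDING MONEY STOP')
```

Because the header is fixed-width and leads the packet, a receiver can parse the control information before (or without) touching the payload – the same property a telegraph clerk exploited when routing a telegram by its header alone.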
The first commercial electrical telegraph networks date from 1837, and the Western Union company itself dates from 1855 (although created from the merger of earlier companies).  I don’t know when the two-part structure for telegrams was adopted, but it was certainly long before Vannevar Bush predicted the Internet in 1945, and long before packet-switched communications networks were first conceived in the early 1960s.   It is interesting that the two-part structure of the telegram lives on in the structure of internet packets.
* Footnote: As I recall, I sent my first email in 1979.
Reference:
Tim Siedell [2010]: “Western Union:  What makes a great ad?” pp. 15-17 of:  Natasha Vargas-Cooper [2010]:  Mad Men Unbuttoned. New York, NY:  HarperCollins.