Computing in Cottonopolis

A 1951 article about the Manchester computer, reprinted in The Guardian today.

To think of two twelve-figure numbers and write them down and then to multiply them together would involve considerable mental effort for many people, and could scarcely be done in much under a quarter of an hour. A machine will be officially “opened” at Manchester University on Monday which does this sort of calculation 320 times a second. Provisionally named “Madam” – from the initials of Manchester Automatic Digital Machine and because of certain unpredictable tendencies – it is a high-speed electronic computer built for the University Mathematics Department, and paid for by a Government grant. It is an improved version of a prototype developed by Professor F. C. Williams and Dr. T. Kilburn of the Electrical Engineering Department, and Professor M. A. Newman and Mr. A. Turing, of the Mathematics Department.
The practical applications of the machine are great and varied, and it is, of course, of greatest use where long, repetitive calculations are involved, some of which would probably be impossible without its aid.  There are also commercial possibilities as yet unexplored relating to accountancy and wage departments. It is significant that one of the largest catering firms in the country has recently installed a similar machine, which may replace the work of hundreds of clerks. Will it perhaps solve the problems of redundancy it may create? Large-scale private or national statistics can be prepared in a far more up-to-date form, in some cases in a matter of weeks rather than years. Finally, of course, there are such sidelines as teaching the machine to play chess or bridge.
There are two features that might be mentioned: the magnetic drum for storing permanent information and the cathode-ray tubes for storing information produced in the course of a calculation. These have added immensely to the “memory” of such machines. The magnetic drum will hold 650,000 binary digits and each of the eight cathode-tubes sixty-four twenty-digit numbers. It will add up 500 numbers before you could say “addition”, and it could work out in half a day the logarithmic tables which took Napier and Briggs almost a lifetime.
It is an alarming machine, in fact. A tool like a plough is friendly and intelligible, but this reduction to absurdity of mental arithmetic is another matter. Those associated with the machine stress that what it can do depends on the “programme” fed to it. Nobody knows what Manchester’s machine will be able to do, and Mr. Turing said to-day that, although it will be used on problems of pure mathematics, the main idea is to investigate the possibilities and theory of such machines.  In an article in “Mind” six months ago, Mr. Turing seemed to come to the conclusion that eventually digital computers would be able to do something akin to “thinking” and also discussed the possibilities of educating a “child-machine.”  One feels that whatever “Madam” can do she will do it for Mr. Turing.

The government grant mentioned in paragraph 1 was awarded to the pure mathematician Max Newman because of his secret cryptographic work at Bletchley Park during WW II.  Because of that work, he knew Turing and his capabilities very well, and recruited him to Manchester to work on the project.  It is interesting that machines playing chess were mentioned even in a newspaper article published in 1951.
An earlier post on long-lived memories of Alan Turing is here.  Some information about Turing’s death is here, including his mother’s theory that his death by poison was accidental, occurring while he attempted to silver-plate a spoon.
 

The mechanical judiciary

In the tradition of Montaigne and Orwell, Rory Stewart MP has an extremely important blog post about the need for judicial decisions to be made case-by-case, using humane wisdom, intuition, and discretion, and not by deterministic or mechanical algorithms. The same applies to most important decisions in our lives and our society. Sadly, his view runs counter to the thrust of modern western culture these last four centuries, as Stephen Toulmin observed.  Our obsessive desire for consistency in decision-making sweeps all before it, from oral examinations in mathematics to eurozone economic policy.

Stewart’s post is worth quoting at length:

What is the point of a parliamentary debate? It isn’t about changing MPs’ minds or their votes. It wasn’t, even in the mid-nineteenth century. In the 1860s Trollope describes how MPs almost always voted on party lines. But they and he still felt that parliamentary debate mattered, because it set the terms of the public discussion, and clarified the great national questions. The press and public galleries were often filled. Churchill, even as a young backbencher, could expect an entire speech, lasting almost an hour, to be reprinted verbatim in the Morning Post. MPs put enormous effort into their speeches. But in the five-hour debate today on the judicial sentencing council, the press gallery was empty, and for most of the time there was only one single person on the Labour benches – a shadow Minister who had no choice. And on our side, a few former judges, and barristers. For whom, and about what, were we speaking?

Digital dumbing down

Despite being all the rage, touchscreens have never impressed me.   I did not put my finger (metaphor chosen deliberately) on the reasons why until reading Edward Tufte’s criticism:  they have no hand!  They lack tactility, and of all the many possible diverse, sophisticated, subtle, and complex motions that our hands and fingers are capable of, touchscreens seem designed to accommodate just two very simple motions:  tapping and sliding.   Not something to write home about when you wake up each morning eager to digitally percuss, or have hands able  to think.    Bret Victor has a nice graphically-supported argument about the lack of embodiment of touchscreens in the world of those of us with opposable thumbs, here.  As Victor says:

Are we really going to accept an Interface Of The Future that is less expressive than a sandwich?

When are agent models or systems appropriate?


In July 2005, inspired by a talk by Sandor Veres on formation flying by unmanned aircraft at the Liverpool Agents in Space Symposium, I wrote down some rules of thumb I have been using informally for determining whether an agent-based modeling (ABM) approach is appropriate for a particular application domain.  Appropriateness is assessed by answering the following questions:

1. Are there multiple entities in the domain, or can the domain be represented as if there are?
2. Do the entities have access to potentially different information sources or do they have potentially different beliefs? For example, differences may be due to geographic, temporal, legal, resource or conceptual constraints on the information available to the entities.
3. Do the entities have potentially different goals or objectives? This will typically be the case if the entities are owned or instructed by different people or organizations.
4. Do the entities have potentially different preferences (or utilities) over their goals or objectives?
5. Are the relationships between the entities likely to change over time?
6. Does a system representing the domain have multiple threads of control?

If the answers are YES to Question 1 and also YES to any other question, then an agent-based approach is appropriate. If the answer to Question 1 is NO, or if the answers are YES to Question 1 but NO to all other questions, then a traditional object-based approach is more appropriate.
Traditional object-oriented systems involve static relationships between non-autonomous entities which share the same beliefs, preferences and goals, within a system having a single thread of control.
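As a rough illustration of this decision rule (my own sketch, not part of the original rules of thumb), the six questions and the YES/NO test can be expressed in a few lines of Python; the field names and data structure are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DomainAssessment:
    multiple_entities: bool            # Q1: multiple entities (or representable as such)?
    different_information: bool        # Q2: different information sources or beliefs?
    different_goals: bool              # Q3: different goals or objectives?
    different_preferences: bool        # Q4: different preferences (utilities) over goals?
    changing_relationships: bool       # Q5: relationships likely to change over time?
    multiple_threads_of_control: bool  # Q6: multiple threads of control?

def recommended_approach(a: DomainAssessment) -> str:
    """Agent-based if Q1 is YES and at least one of Q2-Q6 is YES; otherwise object-based."""
    any_other_yes = any([a.different_information, a.different_goals,
                         a.different_preferences, a.changing_relationships,
                         a.multiple_threads_of_control])
    return "agent-based" if a.multiple_entities and any_other_yes else "object-based"

# Example: several entities with different goals, but static relationships
print(recommended_approach(DomainAssessment(
    multiple_entities=True, different_information=False, different_goals=True,
    different_preferences=False, changing_relationships=False,
    multiple_threads_of_control=False)))   # -> agent-based
```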

Resilient capitalism

Yesterday began with a meeting at an investment bank in Paternoster Square, London, which turned out to be inaccessible to visitors and the public.   The owners of the Square had asked the police to close public access to prevent its occupation by the anti-capitalism (OWS) protesters, encamped between the Square and St Paul’s Cathedral.  So our meeting took place in a cafe beside the square.

The day ended with a debate at the Royal Society, organized by The Foundation for Science and Technology, on developing adaptation policy in response to climate change.  The speakers were Dr Rupert Lewis of DEFRA, Sir Graham Wynne of the Sub-Committee on Adaptation, UK Committee on Climate Change, and Tom Bolt, Director of Performance Management at Lloyd’s of London.  (Their presentations will eventually be posted here.) As Bolt remarked, insurance companies have to imagine potential global futures in which climate change has wreaked social and economic havoc, and so are major consumers of scientific prognoses.  One commentator from the audience suggested that insurers, particularly, may have a vested short-term financial interest in us all being pessimistic about the long-term future, although this inference was not obvious to me: one human reaction to a belief in a certainly-ruinous future is not to save or insure for it, but rather to spend today.
A very interesting issue raised by some audience members is just how we engineer and build infrastructure for adaptability.  What would a well-adapted society look like?  One imagines that the floating houses built in the Netherlands to survive floods would fit any such description.  Computer scientists have some experience in designing and managing robust, resilient and adaptive systems, and so it may be useful to examine that experience for lessons for design and engineering efforts for other infrastructure.

Digital aspen forests

Brian Arthur has an article about automated and intelligent machine-to-machine communications creating a second digital economy underlying the first physical one, in the latest issue of The McKinsey Quarterly here.

I want to argue that something deep is going on with information technology, something that goes well beyond the use of computers, social media, and commerce on the Internet. Business processes that once took place among human beings are now being executed electronically. They are taking place in an unseen domain that is strictly digital. On the surface, this shift doesn’t seem particularly consequential—it’s almost something we take for granted. But I believe it is causing a revolution no less important and dramatic than that of the railroads. It is quietly creating a second economy, a digital one.
. . . .
We do have sophisticated machines, but in the place of personal automation (robots) we have a collective automation. Underneath the physical economy, with its physical people and physical tasks, lies a second economy that is automatic and neurally intelligent, with no upper limit to its buildout. The prosperity we enjoy and the difficulties with jobs would not have surprised Keynes, but the means of achieving that prosperity would have.
This second economy that is silently forming—vast, interconnected, and extraordinarily productive—is creating for us a new economic world. How we will fare in this world, how we will adapt to it, how we will profit from it and share its benefits, is very much up to us.

Reference:
W. Brian Arthur [2011]: The Second Economy. The McKinsey Quarterly, October 2011.

Vale Dennis Ritchie (1941-2011)

A post to note the passing of Dennis Ritchie (1941-2011), co-developer of the C programming language and of the Unix operating system.  The Guardian’s obituary is here, a brief note from Wired Magazine here, and John Naughton’s tribute in the Observer here.  So much of modern technology we owe to just a few people, and Ritchie was one of them.
An index to posts about the Matherati is here.

Antikythera

An orrery is a machine for predicting the movements of heavenly bodies.  The oldest known orrery is the Antikythera Mechanism, created in Greece around 2100 years ago, and rediscovered in 1901 in a shipwreck near the island of Antikythera (hence its name).  The high quality and precision of its components indicate that this device was not unique, since the making of high-quality mechanical components is not trivial, and is not usually achieved with just one attempt (something Charles Babbage found, and which delayed his development of computing machinery immensely).
It took until 2006 and the development of x-ray tomography for a plausible theory of the purpose and operations of the Antikythera Mechanism to be proposed (Freeth et al. 2006).  The machine was said to be a physical exemplification of late Greek theories of cosmology, in particular the idea that the motion of a heavenly body could be modeled by an epicycle – ie, a body traveling around a circle, which is itself moving around some second circle.  This model provided an explanation for the fact that many heavenly bodies appear to move at different speeds at different times of the year, and sometimes even (appear to) move backwards.
There have been two recent developments:  One is the re-creation of the machine (or, rather, an interpretation of it)  using lego components.
The second has arisen from a more careful examination of the details of the mechanism.  According to Marchant (2010), some people now believe that the mechanism exemplifies Babylonian, rather than Greek, cosmology.  Babylonian astronomers modeled the movements of heavenly bodies by assuming each body traveled along just one circle, but at two different speeds: movement during one part of the year being faster than during the other part.
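To make the contrast concrete, here is a small illustrative sketch (my own, not taken from the post or the cited papers) of the two models of apparent motion described above; all parameter values are invented for illustration.

```python
import math

def epicycle_position(t, R=1.0, omega_deferent=1.0, r=0.3, omega_epicycle=5.0):
    """Greek epicycle model: the body rides a small circle (the epicycle) whose
    centre itself moves around a larger circle (the deferent)."""
    x = R * math.cos(omega_deferent * t) + r * math.cos(omega_epicycle * t)
    y = R * math.sin(omega_deferent * t) + r * math.sin(omega_epicycle * t)
    return x, y

def babylonian_angle(t, period=12.0, fast=0.7, slow=0.3):
    """Babylonian-style model: a single circle traversed at two different
    speeds during the two halves of each period."""
    phase = t % period
    if phase < period / 2:
        return fast * phase
    return fast * (period / 2) + slow * (phase - period / 2)

# Sample the two models at a few times
for t in range(5):
    print(epicycle_position(t), babylonian_angle(t))
```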
If this second interpretation of the Antikythera Mechanism is correct, then perhaps it was the mechanism itself (or others like it) which gave late Greek astronomers the idea for an epicycle model.   In support of this view is the fact that, apparently, gearing mechanisms and the epicycle model both appeared around the same time, with gears perhaps a little earlier.   So late Greek cosmology (and perhaps late geometry) may have arisen in response to, or at least alongside, practical developments and physical models.   New ideas in computing typically follow the same trajectory – first they exist in real, human-engineered, systems; then, we develop a formal, mathematical theory of them.   Programmable machines, for instance, were invented in the textile industry in the first decade of the 19th century (eg, the Jacquard Loom), but a mathematical theory of programming did not appear until the 1960s.   Likewise, we have had a fully-functioning, scalable, global network enabling multiple, asynchronous, parallel, sequential and interleaved interactions since Arpanet four decades ago, but we still lack a thorough mathematical theory of interaction.
And what have the Babylonians ever done for us?  Apart from giving us our units for measuring time (divided into units of 60) and angles (divided into 360 degrees)?
References:
T. Freeth, Y. Bitsakis, X. Moussas, J. H. Seiradaki, A. Tselikas, H. Mangou, M. Zafeiropoulou, R. Hadland, D. Bate, A. Ramsey, M. Allen, A. Crawley, P. Hockley, T. Malzbender, D. Gelb, W. Ambrisco and M. G. Edmunds [2006]: Decoding the ancient Greek astronomical calculator known as the Antikythera Mechanism. Nature, 444 (30): 587-591. 30 November 2006.
J. Marchant [2010]:  Mechanical inspiration.  Nature, 468:  496-498.  25 November 2010.

The writing on the wall

Over at Normblog, Norm tells us that he wants his books and not merely the words they contain.  We’ve discussed this human passion before: books, unlike e-readers, are postcards from our past-self to our future-self, tangible souvenirs of the emotions we had when we first read them.  For that very reason – that they transport us through time – books aren’t going anywhere.  It’s a very rare technology indeed that completely eliminates all its predecessors, since every technology provides something unique to some users or other.  We could ask, for example, why we still carve words onto stone and engrave names onto rings and pewter mugs for special occasions, when the invention of printing should have done away with those earlier text-delivery platforms, which are more expensive and less portable than books and paper.

The long after-life of design decisions

Reading Natasha Vargas-Cooper’s lively romp through the 1960s culture referenced in the TV series Mad Men, I came across Tim Siedell’s discussion of a witty, early 1960s advert by Doyle Dane Bernbach for Western Union telegrams, displayed here.

Seeing a telegram for the first time in about, oh, 35 years*, I looked at the structure.   Note the header, with information about the company, as well as meta-information about the message.   That structure immediately brought to mind the structure of a TCP packet.

The Transmission Control Protocol (TCP) is the work-horse protocol of the Internet, and was developed by Vint Cerf and Bob Kahn in 1974.  Their division of the packet contents into a header part (the control information) and a data part (the payload) no doubt derived from earlier work on the design of packets for packet-switched networks.  Later packets (eg, for IP, the Internet Protocol) were simpler, but still retained this two-part structure.  This two-part division was also found in the voice telecommunications of the time, for example in Common Channel Signalling Systems, which separated message content from information about the message (control information).  Such systems were adopted internationally by the ITU for voice communications from Signalling System #6 (SS6) in 1975 onwards.  In case the packet design seems obvious, it is worth considering some alternatives: the meta-information could be in a footer rather than in a header, or enmeshed in the data itself (as, for example, HTML tags are enmeshed in the content they modify).  Or the meta-data could be sent in a separate packet, perhaps ahead of the data packet, as happens with control information in Signalling System #7 (SS7), adopted from 1980.  There are technical reasons why some of these design possibilities are not feasible or not elegant, and perhaps the same reasons apply to the transmission of telegrams (which is, after all, a communications medium using packets).
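As a toy illustration of this header-plus-payload design (my own sketch; the field layout is invented and far simpler than a real TCP header), a packet can be built by prepending a fixed-format block of control information to the data:

```python
import struct

# Invented header layout: source port, destination port, sequence number, payload length
HEADER_FORMAT = "!HHIH"  # network byte order

def make_packet(src_port: int, dst_port: int, seq: int, payload: bytes) -> bytes:
    """Prepend a fixed-size header (control information) to the payload (data)."""
    header = struct.pack(HEADER_FORMAT, src_port, dst_port, seq, len(payload))
    return header + payload

def parse_packet(packet: bytes):
    """Split a packet back into its header fields and its payload."""
    header_size = struct.calcsize(HEADER_FORMAT)
    src, dst, seq, length = struct.unpack(HEADER_FORMAT, packet[:header_size])
    payload = packet[header_size:header_size + length]
    return (src, dst, seq), payload

pkt = make_packet(80, 54321, 1, b"MEET ME AT NOON STOP")
print(parse_packet(pkt))
```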
The first commercial electrical telegraph networks date from 1837, and the Western Union company itself dates from 1855 (although created from the merger of earlier companies).  I don’t know when the two-part structure for telegrams was adopted, but it was certainly long before Vannevar Bush predicted the Internet in 1945, and long before packet-switched communications networks were first conceived in the early 1960s.  It is interesting that the two-part structure of the telegram lives on in the structure of internet packets.
* Footnote: As I recall, I sent my first email in 1979.
Reference:
Tim Siedell [2010]: “Western Union:  What makes a great ad?” pp. 15-17 of:  Natasha Vargas-Cooper [2010]:  Mad Men Unbuttoned. New York, NY:  HarperCollins.