Chess and Computing

 

[Image: John McCarthy, Stanford University, c. 1967]

The long-awaited (by me, at least) film Computer Chess is now out.  Directed by Andrew Bujalski, this is one of the few full-length fictional cinematic accounts of computer programmers.  In this case, these are the programmers who developed artificial intelligence programs to compete in chess tournaments.

I have written extensively about the role of these chess tournaments in my paper entitled “Is Chess the Drosophila of AI? A Social History of an Algorithm,” published in early 2012 in the journal Social Studies of Science.  This was one of my favorite, and I think most original, publications in the history of software.  You can read a draft version of the paper here.

As an additional coincidence, during the year that I spent on the faculty of the School of Information at the University of Texas in Austin, I accidentally followed Bujalski around.  Every time I would talk to someone about my work — which I thought was incredibly novel — they would nod knowingly and say, “yeah, there was this guy filming a movie about that….”

I have not yet seen the film, but I am looking forward to it.  A review to follow sometime soon.

UPDATE:  I recently found out that “Is Chess the Drosophila of AI? A Social History of an Algorithm” (Social Studies of Science, 2012) was awarded the 2013 Maurice Daumas Prize by the International Committee for the History of Technology (ICOHTEC).  For a historian of technology, this is a great honor.

 

On Douglas Engelbart and “Bootstrapping”

Douglas Engelbart passed away earlier this month.  For many years, Engelbart languished as one of the many forgotten heroes of the computer revolution.  More recently, he has been rediscovered as the inventor of the computer mouse.  But Engelbart’s idea that the computer could be a tool for “augmenting” the human intellect is too often confused with the modern notion of “user-friendliness.”  In many important respects, the mouse that Engelbart invented was not the mouse that we use today.  The two resemble one another in their fundamental technological architecture, but as part of a larger socio-technical system, Engelbart’s mouse was really quite different.

There has been a lot written about Engelbart in the wake of his death.  The very best book on Engelbart, Thierry Bardini’s Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal Computing (Stanford University Press, 2000) is, alas, too little read or referenced.  What follows is my review of Bardini’s book, first published in the Business History Review:

Review of  Thierry Bardini, Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal Computing (Stanford, CA: Stanford University Press, 2000)

Recent years have witnessed the rediscovery of Douglas Engelbart.  During the 1960s and 1970s Engelbart was a central figure in the development of several key user-interface technologies, including the electronic mouse, the windowed user-interface, and hypertext, that have since become fundamental paradigms of modern computing. In this substantial new history of Engelbart and his pioneering research on the “augmentation of human intellect,” Thierry Bardini provides a balanced and far-reaching account of Engelbart’s role in shaping the technical and social origins of the personal computer.

Douglas Engelbart’s interest in human-computer interfaces began in the early 1950s.  After serving a short stint as a radar technician in the Navy and working for three years as an electrical engineer at the Ames Research Laboratory, Engelbart found himself dissatisfied with his own personal and professional development.  In an epiphanic moment of self-realization, he embarked on a “crusade” aimed at “maximizing how much good I could do for mankind.”  He decided that the newly invented electronic computer was the ideal tool for addressing the growing “complexity/urgency ratio” of the problems facing modern technocratic society.  He enrolled in a graduate program at Berkeley, received his Ph.D. in 1956, and by 1959 had founded a program for the “augmentation of human intellect” at the newly established Stanford Research Institute (SRI).

Over the course of the next several decades, Engelbart and his “crusade” played an active role in shaping the emerging science of user-interface design.  Researchers at his Augmentation Research Center (ARC) pursued what Engelbart referred to as  a “bootstrapping” approach to research and development.  The “bootstrapping” concept borrowed heavily from the cybernetic notion of feedback: progress would be achieved by “the feeding back of positive research results to improve the means by which the researchers themselves can pursue their work.”  The result would be the iterative improvement of both the user and the computer.  The augmentation of human intelligence was dependent not only on the development of new technologies, but on the adaptation of humans to new modes and mechanisms of human-machine interaction.  Several of the key technologies invented at ARC, including the chord keyset (an efficient five-button keyboard that could be operated one-handed) and the electronic mouse, required significant behavioral readjustments on the part of inexperienced users.

Engelbart’s emphasis on the computer as an augmentation device brought him into conflict with then-dominant perspectives on user-interface research.  In 1960 the MIT psychologist J.C.R. Licklider published a highly influential paper on “Man-Computer Symbiosis” that conceived of human-computer interaction in terms of a conversation among equals.  In Licklider’s model, the computer was not merely a tool to be used but a legitimate and complementary form of intelligence: users would be encouraged “to think in interaction with a computer in the same way you think with a colleague whose competence supplements your own.”  As the first director of the Information Processing Techniques Office (IPTO) at the United States Department of Defense Advanced Research Projects Agency (ARPA), Licklider actively encouraged research in artificial intelligence.  Engelbart’s program, which was based on an entirely different assumption about the nature of the human-computer relationship, was thus consigned to the margins of the institutional networks that developed under the auspices of IPTO and ARPA.

In the course of their research, ARC researchers produced several important technological innovations: the chord keyset, the electronic mouse, and the oN-Line System (NLS) for storing, retrieving, and linking data.  Bardini describes the development of these technologies in considerable detail, but his emphasis is on a much more significant construction: the computer user.  Building on recent research in the sociology of technology, Bardini argues that “Technical innovators such as Douglas Engelbart also invent the kind of people they expect to use their innovations.”  In Engelbart’s case, this user was a skilled knowledge worker, typically an experienced programmer.  Embedded in the bootstrapping approach was the assumption that the user already knew how to operate the technology, and could therefore focus on the adaptation of his or her own practices in an optimal feedback loop with the computer.

The idealized computer user invented at ARC contrasted sharply with the virtual user invented at the nearby Xerox Corporation Palo Alto Research Center (PARC).  At PARC computer users were assumed to be inexperienced, non-technical, and child-like: the focus of PARC research was therefore on the development of “user-friendly” interfaces that required no significant learning or adaptation.  The graphical user interface developed at PARC was based on common, real-world metaphors: using a standard keyboard, users “typed” on electronic “paper” and manipulated objects on a virtual “desktop.”  The result was the WIMP (windows-icons-mouse-pointer) interface that has since become conventional for most personal computer operating systems. Although PARC researchers adopted several key technologies from ARC, including the mouse and the windowed user-interface, Engelbart considered their “dumbed-down” interface to be a betrayal of the real power of the computer.

For most of his career, Engelbart and his ARC researchers were consigned to the outskirts of the computer science community, overshadowed by the more visible research programs funded by IPTO and Xerox PARC.  Although many of Engelbart’s fundamental ideas and innovations have since been recognized and adopted by the discipline, his larger “crusade” has largely gone unrealized.  The strength of Bardini’s narrative is that it moves beyond the simplistic “misunderstood genius” genre to provide a rich account of the many social and political factors that determine how and why certain ideas and technologies get disseminated and adopted.  Overall the book is accessible and compelling, and although it is not primarily targeted at business historians, it provides valuable insights into the fundamental theories and innovations that underlie modern computing technology.


New Book: Hybrid Zone: Computers and Science at Argonne National Laboratory, 1946-1992

[Cover image: http://docentpress.com/media/2013/03/yood_cover_layout1.jpg]

A number of exciting new books in the history of computing have been published in the past year. The most recent is Charles Yood’s Hybrid Zone: Computers and Science at Argonne National Laboratory, 1946-1992 (Docent Press, 2013).

The Hybrid Zone arrived in the mail at a timely moment. As part of the recent dedication of Indiana University’s new Big Red II Supercomputer, the current director of Science at the Argonne Leadership Computing Facility of Argonne National Laboratory gave a public lecture on the history and future of computing in the sciences. And so it has been a week of supercomputing all around…

I plan on writing more about Yood’s book sometime in the near future, but for the time being, here is what I blurbed for the book cover:

This book provides an insightful, accessible, and nuanced history of the complex interactions between electronic computing, computer science, and the immensely influential scientific and intellectual practices known collectively as computational science. In Hybrid Zone, Yood has managed to combine deep historical scholarship with a broadly synthetic perspective that will be of interest to scholars, practitioners, and general audiences interested in understanding the relationship between technological innovation, scientific practices, and the social history of computing.

The announcement for the book can be found at Docent Press. Find it on Amazon here.

Fixing that which cannot be broken.

This semester I have been teaching a course on the social and organizational aspects of software development. This is not a history course, but a course aimed at students who are working towards becoming software professionals.

One of the more interesting discussions we had recently was about the significance of maintenance in the software development lifecycle. Software maintenance occupies the majority of the time and expense associated with software development — a fact that continues to surprise and perplex even those with long experience in the software industry. In theory, software should never need maintenance, or at least not maintenance in the conventional meaning of the word. After all, software does not break down or wear out. It has no parts to be tightened or lubricated. Once a software system is working properly, it should continue to work forever, assuming that nothing goes wrong with the underlying hardware. So why all the effort spent fixing something that can never be broken?

I have written about the history of software maintenance elsewhere.1 The short version of the story is that most software maintenance is not about fixing bugs, but about adapting software to a changing technological and organizational environment. As Richard Canning, one of the first industry analysts to identify and describe the hidden costs of software maintenance, put it, most maintenance was a reflection not of technological failures but of “changes in the business environment.”2 Because software systems are so inextricably tied to other elements of the socio-technical system, they must constantly evolve in response to changes in their surrounding environment. It is this interface with other systems that “breaks” and needs to be “maintained.” In this, as in many other cases, metaphors adopted from traditional manufacturing break down when applied to software development.
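
As a concrete (and entirely hypothetical) illustration of this kind of adaptive maintenance, consider a minimal sketch in Python: a routine that imports records from some external billing system. Nothing in the code below is ever “broken” in the conventional sense; it only requires maintenance on the day the upstream system changes its date format. The function and formats here are my own invention, chosen purely for illustration.

```python
from datetime import datetime

def parse_invoice_date(raw: str) -> datetime:
    """Parse a date string from a (hypothetical) external billing system.

    The upstream system originally sent dates as 'MM/DD/YYYY'. Nothing in
    this function ever wore out, yet it still had to be 'maintained' the
    day that system switched to ISO 8601 ('YYYY-MM-DD').
    """
    for fmt in ("%Y-%m-%d",   # format adopted after the upstream change
                "%m/%d/%Y"):  # original format, retained for older records
        try:
            return datetime.strptime(raw, fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized invoice date: {raw!r}")

# The same call that worked for years now has to accommodate both formats.
print(parse_invoice_date("03/15/1972"))
print(parse_invoice_date("1972-03-15"))
```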

In any case, it turned out that the literature on software maintenance provided my students with one of the most convincing demonstrations of what Frederick Brooks famously described as the “essential” complexity of software development. Brooks was drawing on the Aristotelian distinction between essence and accident to argue that software was difficult not in its implementation (in other words, because of the difficulty of avoiding bugs) but in terms of its fundamental essence. The complexity of software was unique in that it was never-ending; unlike, say, the complexity of physical or natural systems, the complexity of software was arbitrary, “forced without rhyme or reason by the many human institutions and systems to which [software] interfaces must conform.”3

This notion of essential complexity neatly tied together a series of conversations we have had over the course of the semester about the life-cycle of software development, from programming language choice to development methodologies to user-centered design philosophies to documentation and maintenance. I would be the last to argue that the goal of doing history is to learn lessons about the present, but in this case, the relevance of the history of computing to contemporary practice was particularly apparent.


  1. Nathan Ensmenger, “Software as History Embodied,” Annals of the History of Computing 31, no. 1 (2009).
  2. Richard Canning, “The Maintenance Iceberg,” EDP Analyzer 10, no. 10 (1972).
  3. Frederick Brooks, The Mythical Man-Month (Addison-Wesley, 1975).

The Computer Boys meet the Digital Humanities

In the most recent issue of American Quarterly, Lauren Frederica Klein, a professor of Literature, Communication, and Culture at Georgia Tech, has published an interesting review essay that covers The Computer Boys.  The full essay is behind a paywall, but if you have access it is well worth tracking down.  I have said before that a good reviewer can reveal things about a book that the author might not have seen or even explicitly intended.  In this case, Professor Klein situates The Computer Boys in the literature of the digital humanities.  This is not necessarily how I had thought of the book, but her reading of it in this context makes sense, and has given me much to think about.

Of the other books covered in the essay, I was familiar only with Wendy Hui Kyong Chun’s Programmed Visions: Software and Memory (in fact, my much-delayed copy arrived in the mail earlier this week) and Lisa Nakamura and Peter Chow-White’s edited volume Race after the Internet.  I assign Nakamura’s work all the time in my courses.  The fourth book, however, was new to me: Debates in the Digital Humanities, edited by Matthew Gold.  All of these are worth a closer look in their own right, but Klein’s essay inspires me to think about the connections between them in new ways. As academics, we find it all too easy to get lost in our own disciplines.

Here is one of the particularly nice things the review has to say about The Computer Boys:

This is important work for the history of computing, and for the digital humanities as a whole. For even if Ensmenger does not position his study as a prehistory of digital culture, accounts such as his are essential if we are to fully comprehend the historical and technical complexity of today’s digital world.

One of my goals in the book was to make the history of computing relevant to scholars in disciplines other than the history of science and technology.  I am pleased to see that my work can be useful in the context of American Studies and the digital humanities!

 

Computers, Programmers, and the Politics of Technical Expertise