The Kindle Edition of The Computer Boys Take Over is now available from Amazon!
In The Mythical Man-Month, his classic post-mortem account of the software development fiasco that was the IBM System/360 operating system, Frederick Brooks lamented the lack of “conceptual unity” in most software architecture. The very best software architecture, Brooks argued, was the work not of a team but of an individual, the reflection of the vision of a single master designer. Like the great medieval cathedral at Reims, software should be designed for coherence, unity, and beauty. As Steve Jobs would famously declare decades later, in computing as in all of life, Brooks believed that “Great designs come from great designers.”
At the core of The Mythical Man-Month was the idea of the “chief programmer team,” in which a single master software architect would direct the work of a support staff of programmers, code librarians, and other technical personnel. Like a surgeon in an operating theater, the chief programmer was master of his domain, responsible for overseeing the entire process from start to finish. Brooks used several names for the chief programmer; the one most often borrowed by actual software developers was “the super-programmer.”
In the above ad for National Computer Analysts, Inc., of Princeton, NJ, the idea of the super-programmer is taken much more literally.
“A book, then, or a computer, or a program comes into existence first as an ideal construct, built outside time and space, but complete in the mind of the author. It is realized in time and space, by pen, ink, and paper, or by wire, silicon, and ferrite. The creation is complete when someone reads the book, uses the computer, or runs the program, thereby interacting with the mind of the maker.”
- Frederick Brooks, The Mythical Man-Month: Essays on Software Engineering (Addison-Wesley, 1975)
The 1960s were characterized by a perpetual “crisis” in the supply of computer programmers. The computer industry was expanding rapidly; the significance of software was becoming ever more apparent; and good programmers were hard to find. The central assumption at the time was that programming ability was innate rather than learned, something to be identified rather than instilled. Good programming was believed to depend on uniquely qualified individuals, and what defined these uniquely qualified individuals was some indescribable, impalpable quality — a “twinkle in the eye,” an “indefinable enthusiasm,” or what one interviewer described as “the programming bug that meant … we’re going to take a chance on him despite his background.”
In order to identify the members of the special breed of people who might make for good programmers, many firms turned to aptitude testing. Many of these tests emphasized logical or mathematical puzzles: “Creativity is a major attribute of technically oriented people,” suggested one advocate of such testing. “Look for those who like intellectual challenge rather than interpersonal relations or managerial decision-making. Look for the chess player, the solver of mathematical puzzles.”
The most popular of these aptitude tests was the IBM Programmer Aptitude Test (PAT). By 1962 an estimated eighty percent of all businesses used some form of aptitude test when hiring programmers, and half of these used the IBM PAT.
Although the use of such tests was popular (see Chapter 3, Chess-players, Music-lovers, and Mathematicians), they were also widely criticized. The focus on mathematical trivia, logic puzzles, and word games, for example, left no room for more nuanced, meaningful, or context-specific forms of problem solving. By the late 1960s, the widespread use of such tests had become something of a joke, as this Datamation editorial cartoon illustrates.
So why did these puzzle tests continue to be used (even, in some cases, to this day)? In part because, despite their flaws, they were the best (only?) tool available for processing large pools of programmer candidates. In the absence of any shared understanding of what made a good programmer good, they offered at least some quantifiable measure of … something.
One of the big goals of The Computer Boys book was to help shift the center of gravity of the history of computing from hardware to software, from machines to people — and not just the usual people, the “great man” inventors who dominate most popular histories of computing, but the thousands of largely anonymous men and women who worked to construct the computerized systems that form the basic infrastructure of our modern, information-centric society.
It has been a source of great embarrassment to me, therefore, to have people ask me about the man pictured on the cover of my book and not to have any real information about who he was or what he did. I did not even know exactly which computer he was standing in front of. [For those of you not familiar with the publishing business, my ignorance is somewhat excusable: in most cases, authors have no input into the book design process, and I never communicated directly with the graphic designers who did the (excellent) cover design.]
Thanks to Richard Gillespie, the head of the History & Technology department at the Museum Victoria in Melbourne, Australia, I now know exactly who this person was. His name was Trevor Pearcey, and the machine he is standing in front of is the CSIR Mark 1, the fourth stored-program computer ever constructed. Pearcey was a physicist and mathematician by training who left England for Australia in 1945 to work at the Radiophysics Division of the Council for Scientific and Industrial Research (CSIR). The CSIR Mark 1, which he helped design, ran its first program in 1949 and was operational by 1951. The Museum Victoria has an excellent exhibit on this early and important computer.
Trevor Pearcey went on to become one of the great figures in Australian computing. The Pearcey Foundation and the Pearcey National Award were established in honor of his accomplishments. He was born in 1919, and died in 1998.
The “computer revolution” of the mid-20th century is widely considered to be one of the defining moments of contemporary history. And yet very little is known about its principal revolutionaries: the computer programmers, systems analysts, and other technical experts who made possible the computerization of modern society. The story of how the “computer boys” took over, how they constructed for themselves a professional identity, and how they were simultaneously admired and resented by their corporate peers and employers, reveals the complex interplay of technological innovation, organizational politics, and social disruption that continues to define the relationship between computers and society.