A couple of years ago I wrote an essay for the IEEE Annals of the History of Computing entitled “Software as History Embodied” in which I addressed the tongue-in-cheek question, first posed by the Princeton historian Michael Mahoney, of “what makes the history of software so hard?” Mahoney himself, of course, was playing on an even earlier question asked by numerous computer programmers, including the illustrious Donald Knuth. In my essay, I focused specifically on the challenges associated with software maintenance, a long-standing and perplexing problem within the software industry (one made all the more complicated by the fact that, in theory at least, software is a technology that should never be broken – at least in the traditional sense of wearing out or breaking down). My answer to Mahoney’s question was that the history of software was so hard because software itself was so complicated. Software systems are generally deeply embedded in a larger social, economic, and organizational context. Unlike hardware, which is almost by definition a tangible “thing” that can readily be isolated, identified, and evaluated, software is inextricably intertwined with the larger socio-technical system of computing that includes machines (computers and their associated peripherals), people (users, designers, and developers), and processes (the corporate payroll system, for example). Software, I argued, is not just an isolated artifact; software is “history, organization, and social relationships made tangible.”
In any case, I was tickled this past week to discover in my archives an early example of one of my “computer people” asking the question “what makes software so hard.” The article is from 1965, and was published in Datamation. The author is Frank L. Lambert, about whom I know very little, other than that he was the head of the software group for the Air Force. What I like most about this piece is the way in which Lambert adopts a broad understanding of software. “Software … is the total set of programs” used to extend the capabilities of the computer, and the “totality of [the] system” includes “men, equipment, and time.” Like so many of his contemporaries, Lambert saw software as a complex, heterogeneous system. “What makes software so hard?” Lambert asked rhetorically. His answer: “Everything.”
The notion that the computer programmer is not just an occupation but a personality type has a long history. In 1968 the systems consultant Dick Brandon gave a talk at the ACM National Conference on the “Problem in Perspective” (the problem here being the problem of programming labor, one of the more significant “software crises” of the 1950s and 1960s) in which he described the “Programmer Psyche”:
Although computer programming has not been around long enough for biological inbreeding to be considered a problem, the personality traits of the average programmer almost universally reflect certain negative characteristics.
The average programmer is excessively independent — sometimes to the point of mild paranoia. He is often egocentric, slightly neurotic, and he borders upon a limited schizophrenia. The incidence of beards, sandals, and other symptoms of rugged individualism or nonconformity is notably greater among this demographic group. Stories about programmers and their attitudes and peculiarities are legion, and do not bear repeating here.
Although Brandon’s characterizations are at least somewhat tongue in cheek, they were close enough to the reality of common perception to prompt a follow-up study by Dr. Theodore Willoughby of Pennsylvania State University. Willoughby concluded that programmers taken as a group were not, in fact, paranoid, schizophrenic, or otherwise psychologically deviant. Instead, he suggested that such characterizations were a reflection of the “generation gap” (or perhaps “professional identity gap”) between managers and programmers (or what I refer to in The Computer Boys as “organizational territory disputes”). In any case, it is good to know that, statistically speaking, although your IT Guy might be odd, he is probably not paranoid…
The pioneering computer scientist Daniel McCracken passed away yesterday. Among other things, McCracken wrote one of the first books on computer programming, and he continued to write extensively on the subject throughout the 1960s. I know him best through his 1962 Datamation article on “The Software Turmoil,” which was one of the first articulations of the general sense of dissatisfaction with software development that would emerge in the late 1960s as the “Software Crisis.”
Throughout his career, McCracken argued that the solution to the burgeoning crisis in software development was in part the pursuit of professionalism within programming. The following is from a 1961 essay on “The Human Side of Computing”:
The training of hordes of newcomers isn’t the whole story, of course. There are problems in the professional development for those already in the field. To take one instance, a lot of the present coders will have to become systems analysts in the next few years. The problem is, how are they supposed to go about learning the new skills required?[p.9]
The difficulty seems to be that systems work is not so much a body of factual knowledge, as an approach to problem solving – and no one knows how to teach the problem solving approach. All that we seem to be able to do is let the coder work with an experienced systems man, and hope that some of the skills get transferred by osmosis.[p.9-10]
This observer would like to suggest that the attainment of truly professional status for computer people as computer people is only partly a matter of demonstrating mastery of subject matter. It is also a matter of demonstrating a sense of responsibility and thereby gaining a certain dignity and stature in the public eye.
The historian of computing Arthur Norberg interviewed McCracken for an oral history for the ACM History Committee.
The 1960s were characterized by a perpetual “crisis” in the supply of computer programmers. The computer industry was expanding rapidly; the significance of software was becoming ever more apparent; and good programmers were hard to find. The central assumption at the time was that programming ability was an innate rather than a learned ability, something to be identified rather than instilled. Good programming was believed to be dependent on uniquely qualified individuals, and what defined these unique individuals was some indescribable, impalpable quality — a “twinkle in the eye,” an “indefinable enthusiasm,” or what one interviewer described as “the programming bug that meant … we’re going to take a chance on him despite his background.”
In order to identify the members of this special breed of people who might make for good programmers, many firms turned to aptitude testing. Many of these tests emphasized logical or mathematical puzzles: “Creativity is a major attribute of technically oriented people,” suggested one advocate of such testing. “Look for those who like intellectual challenge rather than interpersonal relations or managerial decision-making. Look for the chess player, the solver of mathematical puzzles.”
The most popular of these aptitude tests was the IBM Programmer Aptitude Test (PAT). By 1962 an estimated eighty percent of all businesses used some form of aptitude test when hiring programmers, and half of these used the IBM PAT.
Although the use of such tests was popular (see Chapter 3, Chess-players, Music-lovers, and Mathematicians), they were also widely criticized. Their focus on mathematical trivia, logic puzzles, and word games, for example, left little room for more nuanced, meaningful, or context-specific forms of problem solving. By the late 1960s, the widespread use of such tests had become something of a joke, as this Datamation editorial cartoon illustrates.
So why have these puzzle tests continued to be used, even to this day? In part because, despite their flaws, they were the best (only?) tool available for processing large pools of programmer candidates. In the absence of some shared understanding of what made a good programmer good, they were at least some quantifiable measure of … something.
In a recent talk that I gave at Stanford University, I discussed the changing role of women in the computing industry. The focus of the talk was a 1967 article in Cosmopolitan Magazine called “The Computer Girls” — an unusual source for a historian of computing, but one of my favorites, and among the most useful. My particular favorite: a quote from the celebrated computer pioneer Admiral Grace Hopper comparing computer programming to following a recipe: “You have to plan ahead and schedule everything so it’s ready when you need it. Programming requires patience and the ability to handle detail. Women are ‘naturals’ at computer programming.”