History of UNIVAC at ArsTechnica

The nerd news website ArsTechnica recently published an article by Mathew Lasar on the history of the UNIVAC I computer.  It’s a nice little piece that draws heavily on Kurt Beyer’s excellent recent biography of Grace Hopper and Paul Ceruzzi’s classic History of Modern Computing (one of the earliest of the books published as part of MIT Press’ History of Computing series, of which The Computer Boys Take Over is the latest entry).

Lasar highlights an issue relevant to the history of computer programming that I had previously not encountered (or at least noticed).  In discussing the female programmers whom Grace Hopper had cultivated at the Eckert-Mauchly Computer Corporation, he notes that after the sale of EMCC to Remington Rand, many of these women left to pursue other opportunities, largely because of the lack of respect they felt in their new big-corporation environment:

“On top of that, new management did not sympathize with EMCC’s female programmers, among them Grace Hopper, who by 1952 had written the UNIVAC’s first software compiler. ‘There were not the same opportunities for women in larger corporations like Remington Rand,’ she later reflected. ‘They were older companies, and the jobs had been stereotyped.'”

During the labor crisis in programming that emerged in the 1950s, these women had plenty of other opportunities, Lasar argues, and many departed for other, more enlightened employers.  Read the whole article.  A nice piece, and it is good to see this history get rediscovered for a contemporary audience (particularly in a venue as popular as ArsTechnica).

Who stole the Computer Girls?

In an article in today’s Washington Post, ombudsman Patrick Pexton addresses a recent Outlook essay that “borrows” heavily from my academic research.   In that piece, a freelance journalist named Anna Lewis repeats my discussion of The Computer Girls article in the April 1967 issue of Cosmopolitan Magazine.  This is work for which I am well known, having written about it in both The Computer Boys Take Over and in an essay published in a recent collection edited by Tom Misa called Gender Codes: Why Women are Leaving Computing (Wiley, 2010).  In the original blog post that prompted the Washington Post piece, this “journalist” does link to my book site, although she does not mention me by name.  She also presents her discovery of the Cosmo Girl article as entirely her own, and even copies images (again, without attribution) from this site.   In the Washington Post piece, there are no links, no mention of me or my work, and only a vague allusion to the Misa collection.  Lewis presents this material as being entirely her own, with no recognition of my contributions, or the work of other historians.

Whatever the Washington Post lawyers might argue, this is clearly a case of plagiarism.  No question, no ambiguity.  If a student had turned in a paper like this, I would have failed him or her.  The guidelines on plagiarism that we provide our students at the University of Texas, for example, make this clear: “Plagiarism is another serious violation of academic integrity. In simplest terms, this occurs if you represent as your own work any material that was obtained from another source, regardless how or where you acquired it.”  In this case, the plagiarism did not involve copying my material word-for-word (although the original blog post by Lewis comes pretty close), but rather the ideas.  Again, the standard definition of plagiarism makes it clear that plagiarism includes not just verbatim repetition but also the “use of another person’s research, phrasing, conclusions or unique descriptions without proper attribution.”

In this case, the key theft is the sources and interpretation.  The discovery of the long-lost Cosmo article, the identification of its significance to contemporary debates about gender and computing, and the situation of this material in the context of late 1960s developments in the computer industry, are mine alone, and are recognized by other professional historians as significant insights.  This material was not commonly known, was not just there to be found, and would have made little sense to anyone without the analysis I provide.

This kind of intellectual theft is increasingly common in the Internet era, but that is no excuse.  If another newspaper had summarized, without attribution, an article published in the Washington Post, I doubt that the Post would have had such a casual attitude.  In my several conversations with Patrick Pexton, he gave every impression that he regarded this as a serious breach of professional ethics on the part of Anna Lewis.   The first draft of his opinion piece, which he shared with me earlier this week, took a more principled stand.  The final version, however, seems to have been whitewashed by the Washington Post legal counsel.

I am generally thrilled when people benefit from my research.  I hope that they learn from it and extend it in new directions.   I would have been happy to write this incident off as a simple mistake, or a consequence of moving from one medium to another (in this case, from the web, which allows for hyperlinks, to print, which does not).  What upsets me is that neither the author nor the Post offered any apology or official acknowledgement of the problem.

In any case, as to the question of attribution, let the readers decide.  Unfortunately, my essay from the Misa collection is not available online (the publisher owns the rights), although you can read a draft version from the conference presentation here.  Does that first sentence look familiar?  It gets repeated almost verbatim on the original Lewis blog.  Do the larger ideas there and in the Washington Post article seem surprisingly similar to you?  Do you believe that they provided proper attribution — or indeed, any at all?  The answer seems pretty obvious, no matter what the Washington Post might choose to believe.

The great irony is that, had the author and the Washington Post simply apologized to me and corrected the online version by adding a link, the matter probably would have ended there.  I very deliberately kept this private, and have not sought to embarrass either the Post or the author.   In choosing to deny responsibility, they made this a public matter, and demanded a response from me.

[UPDATE] The media watchdog site stinkyjournalism.com has covered the incident.  This is not the first time that the Washington Post has struggled with “proper attribution.”  However, they seem to have higher standards when it happens to them.

Computing as Science and Practice

And the good books keep on coming…

In the current issue of Science, I review Histories of Computing, a collection of the late Princeton historian Michael Mahoney’s essays on the history of software, edited by Tom Haigh for Harvard University Press.  Mahoney was one of the intellectual giants of the history of computing.  I studied with him in graduate school, and he was a friend and colleague for many years afterward.

Cybernetic Revolutionaries

Always on the lookout for good books dealing with the history of computing, I was pleased to see this week the announcement of Eden Medina’s new book, Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile (MIT Press, 2011).  Not only does Eden happen to be a friend of mine, but her book is a welcome addition to the literature on the history of computing, which has generally been focused almost exclusively on the United States (with the occasional recognition of developments in Europe) and on technologies rather than users.  Medina’s “cybernetic revolutionaries” are a very different group from the “computer boys” that I focus on (although they share similar characteristics and agendas).  Medina’s subjects are the cyberneticians and government officials who imagined, in the context of Salvador Allende’s socialist Chile, a far-seeing and far-reaching computer system that would monitor and manage that nation’s entire economy.  Although the system, called Project Cybersyn, was never fully implemented, it represents in many ways the apotheosis of a particular vision of the electronic computer as the ultimate cybernetic system.

For those of you unfamiliar with the concept, cybernetics is possibly the most important science that you have never heard of.   The word itself has largely disappeared from public discourse (at least in the United States), but its influence remains strong in a range of scientific disciplines, from ecology to molecular biology to economics to psychology.

To give just a small sense of what Cybernetic Revolutionaries is trying to accomplish, take a look at a recent post on the MIT Press blog in which Medina draws a parallel between the socialist utopian vision of Project Cybersyn and the roughly contemporary television series Star Trek.  “Like ‘Star Trek,’” Medina argues, “Project Cybersyn brought together technology and politics to advance a utopian vision of a just society.”  It is clear that, for many computer revolutionaries, “computerization” was not just a technological project, but also an ideological one. For many American corporations in the 1950s and 1960s, for example, computerization was about centralization.  In Allende’s Chile, the ideological dimensions of computerization appear to have been much more complex.  More recently, computerization has been associated with democratization, or liberalization, or the perpetuation of free-market ideology.   The clear and close relationship between technology and ideology is only beginning to be explored by historians.

What makes software hard?

A couple of years ago I wrote an essay for the IEEE Annals of the History of Computing entitled “Software as History Embodied” in which I addressed the tongue-in-cheek question, first posed by the Princeton historian Michael Mahoney, of “what makes the history of software so hard?” Mahoney himself, of course, was playing on an even earlier question asked by numerous computer programmers, including the illustrious Donald Knuth. In my essay, I focused specifically on the challenges associated with software maintenance, a long-standing and perplexing problem within the software industry (one made all the more complicated by the fact that, in theory at least, software is a technology that should never be broken – at least in the traditional sense of wearing out or breaking down). My answer to Mahoney’s question was that the history of software was so hard because software itself was so complicated. Software systems are generally deeply embedded in a larger social, economic, and organizational context. Unlike hardware, which is almost by definition a tangible “thing” that can readily be isolated, identified, and evaluated, software is inextricably intertwined with the larger socio-technical system of computing that includes machines (computers and their associated peripherals), people (users, designers, and developers), and processes (the corporate payroll system, for example). Software, I argued, is not just an isolated artifact; software is “history, organization, and social relationships made tangible.”

In any case, I was tickled this past week to discover in my archives an early example of one of my “computer people” asking the question “what makes software so hard.” The article is from 1965, and was published in Datamation. The author is Frank L. Lambert, about whom I know very little, other than that he was the head of the software group for the Air Force. What I like most about this piece is the way in which Lambert adopts a broad understanding of software. “Software … is the total set of programs” used to extend the capabilities of the computer, and the “totality of [the] system” included “men, equipment, and time.” Like so many of his contemporaries, Lambert saw software as a complex, heterogeneous system. “What made software so hard?,” Lambert asked rhetorically: “Everything.”

Computers, Programmers, and the Politics of Technical Expertise