
Recent finds

I have not been very diligent about keeping up with this blog lately. In part this is because my online activities have been focused on building up a bibliography of scholarship on the history of technology and gender. Although it is not confined to information technology specifically, it does thoroughly cover the available literature on gender and computing. You can find the printable PDF version here, but I am hosting the source material as a shared GitHub repository in the hopes that this will make the resource more useful to other scholars.

There are a few recent works in the history of computer programming that are worth highlighting, however.

The first is a new book by Gerard Alberts called Computerpioniers: het begin van het computertijdperk in Nederland (University of Amsterdam Press, 2017). As you might infer from the title, it is written in Dutch. The title roughly translates into English as Computer Pioneers: The Beginning of the Computer Age in the Netherlands. I unfortunately do not read Dutch, but I know Gerard and his work well, and we have talked about this history on many occasions. This is an important and original contribution to our field. Buy a copy if only as an encouragement to the publisher to issue a translation!

The second is a very recent article by Ksenia Tatarchenko entitled “‘The Computer Does Not Believe in Tears’: Soviet Programming, Professionalization, and the Gendering of Authority,” published in Kritika: Explorations in Russian and Eurasian History. Alas, for those of you who are not professional academics, this will likely be hidden behind a paywall. But pay attention to Tatarchenko: her work on the history of Soviet computing is stellar, and part of an exciting reinvigoration of that field.

In fact, speaking of Soviet computing, Ben Peters's 2016 book How Not to Network a Nation: The Uneasy History of the Soviet Internet was just awarded the 2017 Vucinich Book Prize for “the most important contribution to Russian, Eurasian, and East European studies in any discipline of the humanities or social sciences.” This is a well-deserved award for an excellent book, and it is particularly nice to see research in the history of computing being recognized by the broader historical community!

Finally, Jeffrey Yost has published a book that is so fresh that my copy has not yet been delivered.  It is called Making IT Work: A History of the Computer Services Industry (MIT Press, 2017).  I am so excited about this book, which takes an even broader view of the history of computer work than The Computer Boys, and encompasses consulting services, data processing, programming, and systems integration, among other topics.  My understanding is that it covers a longer time period as well, from the 1950s to the present.


The Multiple Meanings of a Flowchart

Although I have not been posting much to this site recently, I have been busy working on my research.  Most of my attention has been on a new book project tentatively entitled Dirty Bits: An Environmental History of Computing.  This is a project that explores the intersection of the digital economy and the material world, from the geopolitics of minerals (lithium, cobalt, etc.) to e-waste disposal to the energy and water requirements associated with the misleadingly named “Cloud.”

But I have been continuing to work on the history of computer programming as well. My most recent article is on the history of flowcharts, which were (and to a certain extent still are) an essential element of the process of programming. For most of the past century, learning to flowchart a problem was the first step in learning to program a computer. And yet flowcharts were rarely useful as the “blueprints” of software architecture that they were often claimed to be. Their function was much more complicated and ambiguous, though no less useful.

In the latest issue of the journal Information & Culture, I explore the “Multiple Meanings of the Flowchart”. For those of you without access to the Project Muse academic database, you can find an earlier draft version of the paper for free online here.

Here is a brief excerpt from the introduction:


In the September 1963 issue of the data processing journal *Datamation* there appeared a curious little four-page supplement entitled “The Programmer’s Coloring Book.” This rare but delightful bit of period computer industry whimsy is full of self-deprecating (and extremely “in”) cartoons about the working life of computer programmers. For example, “See the program bug. He is our friend!! Color him swell. He gives us job security.” Some of these jokes are a little dated, but most hold up surprisingly well.

One of the most insightful and revealing of “The Programmer’s Coloring Book” cartoons is also one of the most minimalistic. The drawing is of a simple program flowchart accompanied by a short and seemingly straightforward caption: “This is a flowchart. It is usually wrong.”

[Figure: flowchart cartoon from “The Programmer’s Coloring Book,” *Datamation*, September 1963: “This is a flowchart. It is usually wrong.”]

In case you don’t get the joke, here is some context: by the early 1960s, the flowchart was well established as an essential element of any large-scale software development project. Originally introduced into computing by John von Neumann in the mid-1940s, flowcharts were a schematic representation of the logical structure of a computer program. The idea was that an analyst would examine a problem, design an algorithmic solution, and outline that algorithm in the form of a flowchart diagram. A programmer (or “coder”) would then translate that flowchart into the machine language understood by the computer. The expectation was that the flowchart would serve as the design schematic for the program code (in the literature from this period flowcharts were widely referred to as the “programmer’s blueprint”), the assumption being that once this “blueprint” had been developed, “the actual coding of the computer program is rather routine.”
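
To see the division of labor that the flowchart was supposed to enforce, consider a deliberately anachronistic sketch (the problem, the names, and Python itself are my inventions, not drawn from the period sources): the comment block stands in for the analyst’s diagram, and the function below it is the coder’s supposedly “routine” translation.

```python
# The analyst's "flowchart," boxed out in comments: find the largest
# value in a list of numbers.
#
#   [Start] -> [best = first item] -> <more items?> --no--> [report best] -> [Stop]
#                      ^                   | yes
#                      |                   v
#                      +------ <item > best?> --yes--> [best = item]

def largest(items):
    best = items[0]              # [best = first item]
    for item in items[1:]:       # <more items?>
        if item > best:          # <item > best?>
            best = item          # [best = item]
    return best                  # [report best] -> [Stop]

print(largest([3, 1, 4, 1, 5, 9, 2, 6]))   # prints 9
```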

For contemporary audiences, the centrality of the flowchart to software development would have been self-evident. Every programmer in this period would have learned how to flowchart. In the same year that “The Programmer’s Coloring Book” was published, the American Standards Association approved a standardized flowchart symbol vocabulary. Shortly thereafter, the Association for Computing Machinery’s influential Curriculum ’68 guidelines mandated the inclusion of flowcharting instruction in introductory programming courses. A 1969 IBM introduction to data processing referred to flowcharts as “an all-purpose tool” for software development and noted that “the programmer uses flowcharting in and through every part of his task.” By the early 1970s, the conventional wisdom was that “developing a program flowchart is a necessary first step in the preparation of a computer program.”

But every programmer in this period also knew that although drawing and maintaining an accurate flowchart was what programmers were *supposed* to do, this is rarely what happened in actual practice. Most programmers preferred not to bother with a flowchart, or produced their flowcharts only after they were done writing code. Many flowcharts were only superficial sketches to begin with, and were rarely updated to reflect the changing reality of a rapidly evolving software system.[@Yohe1974] Many programmers loathed and resented having to draw (and redraw) flowcharts, and most simply did not. Frederick Brooks, in his classic text on software engineering, dismissed the flowchart as an “obsolete nuisance,” “a curse,” and a “space hogging exercise in drafting.” Wayne LeBlanc lamented that despite the best efforts of programmers to “communicate the logic of routines in a more understandable form than computer language by writing flowcharts,” many flowcharts “more closely resemble confusing road maps than the easily understood pictorial representations they should be.” Donald Knuth argued that flowcharts were not only time-consuming to create and expensive to maintain, but also generally rendered obsolete almost immediately. In any active software development effort, he observed, “any resemblance between our flow charts and the present program is purely coincidental.”[@Knuth:1963fg]

All of these critiques are, of course, the basis of the humor in the *Datamation* cartoon: as every programmer knew well, although in theory the flowchart was meant to serve as a design document, in practice it often served only as after-the-fact justification. Frederick Brooks denied that he had ever known “an experienced programmer who routinely made detailed flow charts before beginning to write programs,” suggesting that “where organization standards require flow charts, these are almost invariably done after the fact.” And in fact, one of the first commercial software packages, Applied Data Research’s Autoflow, was designed specifically to reverse-engineer a flowchart “specification” from already-written program code. In other words, the implementation of many software systems actually preceded their own design! This is indeed a wonderful joke, or at the very least a paradox. As Marty Goetz, the inventor of Autoflow, recalled, “like most strong programmers, I never flowcharted; I just wrote the program.” For Goetz, among others, the flowchart was nothing more than a collective fiction: a requirement driven by the managerial need for control, having nothing to do with the actual design or construction of software. The construction of the flowchart could thus be safely left to the machine, since no one was really interested in reading it in the first place. Indeed, the expert consensus on flowcharts seemed to accord with the popular wisdom captured by “The Programmer’s Coloring Book”: there were such things as flowcharts, and they were generally wrong.
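
Autoflow itself operated on 1960s assembly language and later COBOL; as a loose modern analogue (everything here, from the sample function to the printed “boxes,” is my own invention), the sketch below walks already-written Python code with the standard ast module and emits one line per flowchart element, the same code-first, diagram-later direction that Autoflow automated.

```python
import ast
import textwrap

# Already-written code from which a "flowchart" is to be recovered.
SOURCE = textwrap.dedent("""
    def classify(n):
        if n < 0:
            return "negative"
        for d in range(2, n):
            if n % d == 0:
                return "composite"
        return "no small factors"
""")

def outline(node, depth=0):
    """Print one line per flowchart element: decision diamonds (if),
    loop boxes (for/while), and terminals (return). Indentation
    roughly tracks nesting in the syntax tree."""
    pad = "  " * depth
    if isinstance(node, ast.If):
        print(pad + f"<decision> at line {node.lineno}")
    elif isinstance(node, (ast.For, ast.While)):
        print(pad + f"[loop] at line {node.lineno}")
    elif isinstance(node, ast.Return):
        print(pad + f"[terminal: return] at line {node.lineno}")
    for child in ast.iter_child_nodes(node):
        outline(child, depth + 1)

outline(ast.parse(SOURCE))
```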


Fixing that which cannot be broken.

This semester I have been teaching a course on the social and organizational aspects of software development. This is not a history course, but a course aimed at students who are working towards becoming software professionals.

One of the more interesting discussions we had recently was about the significance of maintenance in the software development lifecycle. Software maintenance occupies the majority of the time and expense associated with software development — a fact that continues to surprise and perplex even those with long experience in the software industry. In theory, software should never need maintenance, or at least not maintenance in the conventional meaning of the word. After all, software does not break down or wear out. It has no parts to be tightened or lubricated. Once a software system is working properly, it should continue to work forever, assuming that nothing goes wrong with the underlying hardware. So why all the effort spent fixing something that can never be broken?

I have written about the history of software maintenance elsewhere.[1] The short version of the story is that most software maintenance is not about fixing bugs, but about adapting software to a changing technological and organizational environment. As Richard Canning, one of the first industry analysts to identify and describe the hidden costs of software maintenance, put it, most maintenance reflected not technological failure but “changes in the business environment.”[2] Because software systems were so inextricably tied to other elements of the socio-technical system, they had to evolve constantly in response to changes in their surrounding environment. It is this interface with other systems that “breaks” and needs to be “maintained.” In this, as in many other cases, metaphors borrowed from traditional manufacturing break down when applied to software development.
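
A trivial, entirely hypothetical illustration of the point (the function, the partner system, and the format change are all invented for the example): nothing in the code below will ever wear out, yet the moment the system it talks to changes its export format, the interface “breaks” and demands maintenance.

```python
from datetime import datetime

def parse_invoice_date(field: str) -> datetime:
    # Written against the partner system's original export format,
    # "MM/DD/YY". On its own terms this function is correct and
    # bug-free, and no part of it can wear out.
    return datetime.strptime(field, "%m/%d/%y")

print(parse_invoice_date("09/14/17"))    # fine, against the old feed

# Then the upstream system switches to ISO 8601 dates ("2017-09-14").
# No code here changed, yet the interface now "breaks" and must be
# "maintained". Canning's point in miniature:
# parse_invoice_date("2017-09-14")       # would raise ValueError
```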

In any case, it turned out that the literature on software maintenance provided my students with one of the most convincing demonstrations of what Frederick Brooks famously described as the “essential” complexity of software development. Brooks was using the Aristotelian distinction between essence and accident to argue that software was difficult not in its implementation (in other words, because of the difficulty of avoiding bugs) but in terms of its fundamental essence. The complexity of software was unique in that it was never-ending; unlike, say, the complexity of physical or natural systems, the complexity of software was arbitrary, “forced without rhyme or reason by the many human institutions and systems to which [software] interfaces must conform.”[3]

This notion of essential complexity neatly tied together a series of conversations we have had over the course of the semester about the life-cycle of software development, from programming language choice to development methodologies to user-centered design philosophies to documentation and maintenance. I would be the last to argue that the goal of doing history is to learn lessons about the present, but in this case, the relevance of the history of computing to contemporary practice was particularly apparent.

1. Nathan Ensmenger, “Software as History Embodied,” Annals of the History of Computing 31, no. 1 (2009).
2. Richard Canning, “The Maintenance Iceberg,” EDP Analyzer 10, no. 10 (1972).
3. Frederick Brooks, *The Mythical Man-Month* (Addison-Wesley, 1975).

New dissertation on the history of programming

A young scholar whose work I have been keeping an eye on for many years has just finished her dissertation on the history of programming. The scholar in question is Joline Zepcevski, and the dissertation is entitled “Complexity & Verification: The History of Programming as Problem Solving.” For those of you with access to the ProQuest Dissertations database, do check it out. Joline did her PhD at the University of Minnesota, and her advisor was the renowned historian of computing Arthur Norberg.