Tag Archives: programming

The Multiple Meanings of a Flowchart

Although I have not been posting much to this site recently, I have been busy working on my research. Most of my attention has been on a new book project, tentatively entitled *Dirty Bits: An Environmental History of Computing*. This is a project that explores the intersection of the digital economy and the material world, from the geopolitics of minerals (lithium, cobalt, etc.) to e-waste disposal to the energy and water requirements associated with the misleadingly named “Cloud.”

But I have been continuing to work on the history of computer programming as well. My most recent article is on the history of flowcharts, which were (and to a certain extent, still are) an essential element of the process of programming. For most of the past century, learning to flowchart a problem was the first step in learning to program a computer. And yet flowcharts were rarely useful as the “blueprints” of software architecture that they were often claimed to be. Their function was much more complicated and ambiguous, though no less useful.

In the latest issue of the journal *Information & Culture*, I explore the “Multiple Meanings of the Flowchart”. For those of you without access to the Project Muse academic database, you can find an earlier draft version of the paper for free online here.

Here is a brief excerpt from the introduction:


In the September 1963 issue of the data processing journal *Datamation* there appeared a curious little four-page supplement entitled “The Programmer’s Coloring Book.” This rare but delightful bit of period computer industry whimsy is full of self-deprecating (and extremely “in”) cartoons about the working life of computer programmers. For example: “See the program bug. He is our friend!! Color him swell. He gives us job security.” Some of these jokes are a little dated, but most hold up surprisingly well.

One of the most insightful and revealing of “The Programmer’s Coloring Book” cartoons is also one of the most minimalistic. The drawing is of a simple program flowchart accompanied by a short and seemingly straightforward caption: “This is a flowchart. It is usually wrong.”

[Figure: “This is a flowchart. It is usually wrong.” Cartoon from the *Datamation* “Programmer’s Coloring Book.”]

In case you don’t get the joke, here is some context: by the early 1960s, the flowchart was well-established as an essential element of any large-scale software development project. Originally introduced into computing by John von Neumann in the mid-1940s, flowcharts were a schematic representation of the logical structure of a computer program. The idea was that an analyst would examine a problem, design an algorithmic solution, and outline that algorithm in the form of a flowchart diagram. A programmer (or “coder”) would then translate that flowchart into the machine language understood by the computer. The expectation was that the flowchart would serve as the design schematic for the program code (in the literature from this period, flowcharts were widely referred to as the “programmer’s blueprint”), with the assumption that once this “blueprint” had been developed, “the actual coding of the computer program is rather routine.”

For contemporary audiences, the centrality of the flowchart to software development would have been self-evident. Every programmer in this period would have learned how to flowchart. In the same year that the “Programmer’s Coloring Book” was published, the American Standards Association approved a standardized flowchart symbol vocabulary. Shortly thereafter, the Association for Computing Machinery’s influential Curriculum ’68 guidelines mandated the inclusion of flowcharting instruction in introductory programming courses. A 1969 IBM introduction to data processing referred to flowcharts as “an all-purpose tool” for software development and noted that “the programmer uses flowcharting in and through every part of his task.” By the early 1970s, the conventional wisdom was that “developing a program flowchart is a necessary first step in the preparation of a computer program.”

But every programmer in this period also knew that although drawing and maintaining an accurate flowchart was what programmers were *supposed* to do, this is rarely what happened in actual practice. Most programmers preferred not to bother with a flowchart, or produced their flowcharts only after they were done writing code. Many flowcharts were only superficial sketches to begin with, and were rarely updated to reflect the changing reality of a rapidly evolving software system (Yohe 1974). Many programmers loathed and resented having to draw (and redraw) flowcharts, and most simply avoided doing so. Frederick Brooks, in his classic text on software engineering, dismissed the flowchart as an “obsolete nuisance,” “a curse,” and a “space hogging exercise in drafting.” Wayne LeBlanc lamented that despite the best efforts of programmers to “communicate the logic of routines in a more understandable form than computer language by writing flowcharts,” many flowcharts “more closely resemble confusing road maps than the easily understood pictorial representations they should be.” Donald Knuth argued that not only were flowcharts time-consuming to create and expensive to maintain, but that they were generally rendered obsolete almost immediately. In any active software development effort, he argued, “any resemblance between our flow charts and the present program is purely coincidental” (Knuth 1963).

All of these critiques are, of course, the basis of the humor in the *Datamation* cartoon: as every programmer knew well, although in theory the flowchart was meant to serve as a design document, in practice it often served only as post-facto justification. Frederick Brooks denied that he had ever known “an experienced programmer who routinely made detailed flow charts before beginning to write programs,” suggesting that “where organization standards require flow charts, these are almost invariably done after the fact.” And in fact, one of the first commercial software packages, Applied Data Research’s Autoflow, was designed specifically to reverse-engineer a flowchart “specification” from already-written program code. In other words, the implementation of many software systems actually preceded their own design! This indeed is a wonderful joke, or at the very least, a paradox. As Marty Goetz, the inventor of Autoflow, recalled: “like most strong programmers, I never flowcharted; I just wrote the program.” For Goetz, among others, the flowchart was nothing more than a collective fiction: a requirement driven by the managerial need for control, having nothing to do with the actual design or construction of software. The construction of flowcharts could thus be safely left to the machine, since no one was really interested in reading them in the first place. Indeed, the expert consensus on flowcharts seemed to accord with the popular wisdom captured by the “Programmer’s Coloring Book”: there were such things as flowcharts, and they were generally wrong.


Fixing that which cannot be broken

This semester I have been teaching a course on the social and organizational aspects of software development. This is not a history course, but a course aimed at students who are working towards becoming software professionals.

One of the more interesting discussions we had recently was about the significance of maintenance in the software development lifecycle. Software maintenance occupies the majority of the time and expense associated with software development — a fact that continues to surprise and perplex even those with long experience in the software industry. In theory, software should never need maintenance, or at least not maintenance in the conventional meaning of the word. After all, software does not break down or wear out. It has no parts to be tightened or lubricated. Once a software system is working properly, it should continue to work forever, assuming that nothing goes wrong with the underlying hardware. So why all the effort spent fixing something that can never be broken?

I have written about the history of software maintenance elsewhere.1 The short version of the story is that most software maintenance is not about fixing bugs, but about adapting software to a changing technological and organizational environment. As Richard Canning, one of the first industry analysts to identify and describe the hidden costs of software maintenance, put it, most maintenance was a reflection not of technological failures but of “changes in the business environment.”2 Because software systems are so inextricably tied to other elements of the socio-technical system, they must constantly evolve in response to changes in their surrounding environment. It is this interface with other systems that “breaks” and needs to be “maintained.” In this, as in many other cases, metaphors adopted from traditional manufacturing break down when applied to software development.

In any case, it turned out the literature on software maintenance provided my students with one of the most convincing demonstrations of what Frederick Brooks famously described as the “essential” complexity of software development. Brooks was using the Aristotelian distinction between essence and accident to argue that software was difficult not in its implementation (in other words, because of the difficulty in avoiding bugs) but in terms of its fundamental essence. The complexity of software was unique in that it was never-ending; unlike, say, the complexity of physical or natural systems, the complexity of software was arbitrary, “forced without rhyme or reason by the many human institutions and systems to which [software] interfaces must conform.”3

This notion of essential complexity neatly tied together a series of conversations we have had over the course of the semester about the life-cycle of software development, from programming language choice to development methodologies to user-centered design philosophies to documentation and maintenance. I would be the last to argue that the goal of doing history is to learn lessons about the present, but in this case, the relevance of the history of computing to contemporary practice was particularly apparent.


  1. Nathan Ensmenger, “Software as History Embodied,” *IEEE Annals of the History of Computing* 31, no. 1 (2009).
  2. Richard Canning, “The Maintenance Iceberg,” *EDP Analyzer* 10, no. 10 (1972).
  3. Frederick Brooks, *The Mythical Man-Month* (Addison-Wesley, 1975).

New dissertation on the history of programming

A young scholar whose work I have been keeping an eye on for many years has just finished her dissertation on the history of programming. The scholar in question is named Joline Zepcevski, and the dissertation is entitled “Complexity & Verification: The History of Programming as Problem Solving.” For those of you with access to the ProQuest dissertation database, do check it out. Joline did her PhD at the University of Minnesota, and her advisor was the renowned historian of computing Arthur Norberg.

What makes software hard?

A couple of years ago I wrote an essay for the IEEE Annals of the History of Computing entitled “Software as History Embodied” in which I addressed the tongue-in-cheek question, first posed by the Princeton historian Michael Mahoney, of “what makes the history of software so hard?” Mahoney himself, of course, was playing on an even earlier question asked by numerous computer programmers, including the illustrious Donald Knuth. In my essay, I focused specifically on the challenges associated with software maintenance, a long-standing and perplexing problem within the software industry (one made all the more complicated by the fact that, in theory at least, software is a technology that should never be broken, at least in the traditional sense of wearing out or breaking down). My answer to Mahoney’s question was that the history of software was so hard because software itself was so complicated. Software systems are generally deeply embedded in a larger social, economic, and organizational context. Unlike hardware, which is almost by definition a tangible “thing” that can readily be isolated, identified, and evaluated, software is inextricably intertwined with the larger socio-technical system of computing that includes machines (computers and their associated peripherals), people (users, designers, and developers), and processes (the corporate payroll system, for example). Software, I argued, is not just an isolated artifact; software is “history, organization, and social relationships made tangible.”

In any case, I was tickled this past week to discover in my archives an early example of one of my “computer people” asking the question “what makes software so hard?” The article is from 1965, and was published in *Datamation*. The author is Frank L. Lambert, about whom I know very little, other than that he was the head of the software group for the Air Force. What I like most about this piece is the way in which Lambert adopts a broad understanding of software. “Software … is the total set of programs” used to extend the capabilities of the computer, and the “totality of [the] system” included “men, equipment, and time.” Like so many of his contemporaries, Lambert saw software as a complex, heterogeneous system. “What made software so hard?” Lambert asked rhetorically. His answer: “Everything.”