Dangerous S-Curves Ahead!

The much-hyped and controversial website Vox.com (“Its mission is simple: Explain the news”) has been taken to task for a recent article claiming that the pace of adoption of new technologies has been accelerating. The site has been criticized not only for its uncritical and misleading use of data, but also for the way in which it handled corrections to the original article.

The actual claim made in the Vox.com piece about the supposed accelerating pace of technology is not at all original. In many ways, it would be hard to find a more conventional piece of “wisdom.” It is true that their reliance on YouTube videos allegedly showing that children today cannot figure out what a Sony Walkman is for is particularly anecdotal (“kids say the darndest things!”), but ask the proverbial man or woman on the street about technology and this is pretty much exactly what they would say.

The Vox.com article is a prime example of what I teach my students about the dangers of the “s-curve.” The s-curve has become something of a cliche in pop-economics writing about the history of technology.

Here is the basic form (and premise) of the s-curve graph (adapted from a recent lecture in my Information Society course):

[Figure: the basic s-curve of technology adoption]

The idea is that the adoption of a novel technology often starts slowly, then accelerates rapidly as the technology is perfected, and then tails off as the technology becomes mainstream. The “fact” that these s-curves are getting shorter and shorter is the premise behind many a “technology is driving history” argument — including the recent one made by Vox.com.

[Figure: successively shorter s-curves for successive technologies]

The problem, of course, is that by fiddling with dates, scale, or detail, you can fit any technological phenomenon into a convenient s-curve.
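To make the point concrete, here is a minimal sketch in Python. It uses the standard logistic function, and the midpoint and growth-rate parameters (and the dates) are entirely made up for illustration; they are not historical data. The point is only that the same s-shaped form can be stretched or compressed to “fit” an adoption period of any length:

```python
import math

def s_curve(t, midpoint, rate, ceiling=100.0):
    """Logistic adoption curve: percent of 'ceiling' adopted at time t."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# The same functional form "fits" a slow early-20th-century diffusion
# and a rapid 21st-century one; only the free parameters differ.
slow_tech = [s_curve(year, midpoint=1925, rate=0.15) for year in range(1900, 1951)]
fast_tech = [s_curve(year, midpoint=2005, rate=1.5) for year in range(2001, 2011)]

def rise_time(rate):
    """Years to go from 10% to 90% adoption. For a logistic curve this
    interval is ln(81)/rate, independent of where the midpoint sits."""
    return math.log(81) / rate

print(round(rise_time(0.15), 1))  # 29.3 years under the "slow" parameters
print(round(rise_time(1.5), 1))   # 2.9 years under the "fast" parameters
```

Because both the midpoint and the rate are free parameters chosen after the fact, virtually any adoption history can be made to produce a tidy s-curve; the shape alone tells you nothing about why adoption sped up or slowed down.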

Consider for example this illustration of the adoption of electric-car technology:

[Figure: an s-curve fitted to the adoption of electric-car technology]

On the surface, this seems to capture the s-curve phenomenon neatly.

But a more historically accurate graph of electric-car ownership in the 20th century actually looks more like this:

[Figure: electric-car ownership over the 20th century]

The really interesting phenomenon here is how popular electric cars were in the early decades of the 20th century. Here is a closer view of that period. Notice that ownership peaks around 1920.

[Figure: electric-car ownership in the early 20th century, peaking around 1920]

Here is a picture of an Edison-manufactured electric car from this period:

[Photo: an Edison-manufactured electric car]

and, just for fun, a picture of Colonel Sanders standing next to his electric car:

[Photo: Colonel Sanders with his electric car]

The point is that these vehicles were popular. It was only through a combination of social changes (electric cars were gendered female — they became known as “chick cars,” so to speak), technological innovations, political changes (cheap oil!), etc., that gasoline-powered vehicles became the standard. [For more on this history, see David Kirsch’s The Electric Vehicle and the Burden of History.]



If we overlay a graph of the adoption of hybrid gas-electric vehicles onto this history, we can begin to explain the resurgence of electric vehicles in the early 21st century.  (Again, the explanation is social, political, and technological).

[Figure: hybrid gas-electric adoption overlaid on the history of the electric car]


Finally, we could overlay all of this with an s-curve that seems to capture the phenomenon neatly. But this would give an entirely false picture of the history of the electric car, and would suggest a coherence in the s-curve model of technological adoption that is simply inconsistent with reality. The story of the electric car is not one of gradual (albeit slow) transformation from niche to mainstream technology. The s-curve does not fit.


[Figure: an s-curve overlaid on the full history of the electric car]

So how does this relate to the Vox.com article?

Depending on how you choose your endpoints, you can always construct a picture of accelerating adoption. Take, for example, the curves associated with various music-reproduction technologies:

[Figure: adoption curves for various music-reproduction technologies]

This seems to support the idea that the adoption period of the iPod was much shorter than that of the phonograph. This is probably true. But what does it tell us? If we think more broadly about the underlying phenomenon, which is music reproduction, we get a different story.

[Figure: the same technologies reframed as a single history of music reproduction]

Yes, consumers took to the iPod much more quickly than the Sony Walkman. But this is because the Sony Walkman had already done all the work — social, economic, etc. — that accustomed people (and music publishers) to the idea of portable, mass-reproduced popular music.  And the Sony Walkman in turn was building on decades of work — again, social, economic, and legal — that had already changed the way in which Americans consumed music.  What do you think was the more revolutionary moment of technological change?  When you upgraded your Walkman to an iPod?  Or when your grandparents first heard recorded music on a phonograph?   It seems pretty clear to me that the latter was the much more significant (and disruptive) experience.

How does this relate to the history of computing? My lecture on “dangerous s-curves” was created specifically to talk about the adoption of electronic digital computing technology, and in particular the personal computer. It is easy to interpret the history of the PC as part of a “changing pace of technology” argument — but only if you ignore the many decades of technological and social work that went into creating the tools, user base, and technologies (particularly software) that made it easy to adopt this “new” technology.


[Figure: the s-curve for “computing,” subsuming mainframe, minicomputer, and PC adoption]


The s-curve for “computing” subsumes a number of related but quite different curves (and technologies) that cover the rise — and fall — of the mainframe and the minicomputer. The PC was not just the latest in a series of developments in computing. In many ways, it came out of a very different technological trajectory.

The point is, again, that you can force this history into an s-curve that supports whatever argument you want about the course and pace of technological development. But to do so conceals much more than it reveals.




Review of the Computer Boys in Digital Humanities Quarterly

Somehow I missed this review of The Computer Boys Take Over in the Digital Humanities Quarterly.  Here is the money quote praising the book:

Nathan Ensmenger’s book is an impressive and engrossing historical work. He brings the crises and the peopling of computer programming alive. His historical artifacts become characters and a full picture of what this heterogeneous history looks like emerges. He adeptly weaves together competing voices in history with competing personalities to leave us with this haunting and antagonizing last line, echoed from the rhetoric of programming as created by the history: “almost thirty years after the NATO Conference on Software Engineering many programmers are still concluding that ‘excellent developers, like excellent musicians and artists, are born, not made'” [243].

But as tempting as it is to highlight only the positive from this review, I was also intrigued by the author Trisha Campbell’s biggest critique:

And therein lies my only problem with the book, and it is my same problem with many academic books; the thing is socially constructed and the pages of the text helped us get there, but how do we build from this heterogeneity. Is there a method? Perhaps this will be Ensmenger’s second book.

Alas, I suspect that it will take me longer than that (and perhaps a couple more books) to figure this one out, but I increasingly share Campbell’s call to make the theory of social constructivism more practically applicable.   One of the welcome benefits of having moved from a  history of science program to a School of Informatics and Computing is that I interact more regularly with scientists and technologists.  When I am talking with fellow humanists and historians it is (too) easy to wave away challenging questions by invoking theory; with practitioners, I have to work more to make myself clear, to figure out what I actually mean, and to be relevant.  This is a good thing.






Computer Dating in the 1960s


Slate.com recently published an interesting article by an Indiana University graduate who in 1966 created Project Flame, an early “computer” dating service.  Students would fill out a punch-card questionnaire,  but were not actually matched using a computer.  Instead, he and his friends randomly shuffled the cards together to provide the illusion of computerized expertise.


Although Project Flame might have been a fraud, 1966 was a formative year for computer dating. The article referenced above from Look magazine describes Operation Match, a computerized dating system developed by two Harvard undergraduates, Jeff Tarr and Vaughn Morrill.1 Within a few months, Operation Match received 8,000 applicants from nearby universities and colleges, 52% of whom were women. (Like early Facebook, the target audience was the Ivy League and associated schools: Harvard, Yale, Vassar, Amherst, Williams, Mount Holyoke.) Within nine months, Tarr and Morrill had attracted 90,000 applicants and grossed $270,000, all using rented computer time. An alternative system called Contact was started at MIT by David DeWan. His system drew 11,000 applicants.

Interestingly, the Look article ends by talking about the need for mutual deception within the computer dating environment:

Boys have discovered that there is more to getting the girl of their dreams than ordering a blonde, intelligent, wealthy, sexually experienced wench. They must also try to guess what kind of boy such a girl would request, and describe themselves to conform to her data. The future suggests itself: the boy answers artfully. A girl does too. The computer whirs. They receive each other’s name. Breathlessly, they make a date. They meet. They stop short. There they are: Plain Jane and So-So-Sol. Two liars. But they are, after all, exactly alike, and they have been matched. It is the computer’s moment of triumph.

After a flurry of media coverage of these and similar systems, the computer dating fad of the mid-1960s seems to have quieted down quickly.2


By the early 1970s, the focus had turned to the dark side of computer dating, including fraud, misrepresentation, and violations of privacy.3



Perhaps the most bizarre twist from this period was a Times Square bookseller that used its customer data to start its own computerized dating service — without its customers’ knowledge. It set up a dial-a-date service advertising “Girls Galore.” Women who found themselves besieged by calls from strangers experienced “anxiety and fear.” The first example of computer-related stalking, perhaps?4



UPDATE: an obvious question to ask about computer dating in this period is “why 1966?”

The short answer is that the late 1960s was the heyday of the computer utility. These were services that allowed users to rent time on a shared mainframe computer, generally via a remote terminal. The practical upshot was that entrepreneurs who wanted to provide computer-based services, but who did not have the resources (or desire) to own their own computer, could rent time from a computer utility such as Tymshare, University Computing, GE Information Systems, or the Service Bureau Corporation.

The era of the computer utility was short-lived, as the difficulties associated with writing time-sharing software (see my recent post on Why Software is Hard) and competition from low-cost minicomputers demolished the revenue models of the computer utility. But these computer utilities, however transitory, represented an important moment in the democratization of computing. As one professor in the MIT School of Management described it in 1964, the vision of “an on-line interactive computer service, provided commercially by an information utility … as commonplace … as the telephone service is today” was a compelling one, and would be recreated, via the Internet, in later decades.5

In terms of the history of computer dating, the existence of the computer utilities dramatically reduced the barriers to entry into computer-based services. The sudden rise of multiple dating services in 1966 is anything but a coincidence.



YouTube video on a London-based computer dating service (also called Operation Match):

Hat tip: Alex Bochannek

  1. Gene Shalit, “New Dating Craze Sweeps Campus … Boy … Girl … Computer,” Look, February 22, 1966
  2. Russell Baker, “Automation Comes to Love: Computerized Mating,” New York Times, Feb 10, 1966
  3. Steven Roberts, “Often, Computers Spoil Cupid’s Aim,” New York Times, Dec 25, 1970
  4. “Stores Sell Names Of Women Using Dating Computers,” New York Times, Jul 30, 1968
  5. Martin Greenberger, quoted in Campbell-Kelly, Aspray, Ensmenger, and Yost, Computer: A History of the Information Machine (Westview Press, 2013)

Why software is hard

In my Information Society course this week we talked about the rise to dominance of IBM in the late 1950s and early 1960s, and in particular the role of the IBM System/360 as a strategy for encouraging what we would today call vendor lock-in. The rapid pace of development in computer hardware in this period meant that customers were frequently upgrading their equipment, and every such moment of decision was an opportunity (or risk, depending on your perspective) for a customer to consider an IBM competitor. By providing an entire line of software-compatible System/360 computers, IBM leveraged the socio-technical complexity of a computer installation to its own advantage. That is to say, the fact that a computer installation in this period included not only the central computer unit, but also an increasingly expensive ecosystem of peripherals, application programs, data formats, user interfaces, and human operators and programmers, meant that it was increasingly difficult to change any one component of the system without affecting (or at least considering) its interactions with every other component. By allowing users to reuse their investment in these “softer” elements of the system, which were now compatible across central computer units, IBM ensured that customers would keep buying IBM hardware.

Thinking about computers in socio-technical terms helps make sense of one of the persistent questions facing both developers and historians of software, namely “why is software so hard?” In theory, computer code, although it might not be easy to write, should be easy to get right. It costs nothing (or at least, next to nothing) to “build” and distribute a software product. If you find a bug, you can fix it, and almost immediately deploy that fix to all of your users. There is no material object or artifact to repair, replace, or dispose of. As an abstraction or an isolated set of computer code, software is infinitely protean, almost ephemeral. As an element in a larger socio-technical system, software is extraordinarily durable.

One of the remarkable implications of all of this is that the software industry, which many consider to be one of the fastest-moving and most innovative industries in the world, is perhaps the industry most constrained by its own history. As one observer recently noted, today there are still more than 240 million lines of computer code written in the programming language COBOL, which was first introduced in 1959.1 All of this COBOL code needs to be actively maintained, modified, and expanded. I was just reading an article in the most recent issue of the School of Informatics and Computing newsletter highlighting the fabulous new job that one of our 2013 graduates had landed. What was he doing? Programming in COBOL.

In my notes for my lecture about the emerging “software crisis” of the late 1960s, which was essentially a reflection of the realization by the computer industry that software was indeed a complex socio-technical system that could not be isolated from its larger environment (and therefore also not easily rationalized, routinized, or automated), I borrowed heavily from a short piece I wrote for the Annals of the History of Computing on software maintenance. The whole concept of software maintenance represents something of a paradox, at least according to the traditional understanding of software as mere computer code. After all, software does not wear out or break down in the traditional sense. Once a software-based system is working, it should work forever — or at least until the underlying hardware breaks down (which makes it somebody else’s problem). Any latent “bugs” that are subsequently revealed in software are considered flaws in the original design or implementation, not the result of the wear-and-tear of daily use, and in theory could be completely eliminated by rigorous development and testing methods. To quote from the article:

But despite the fact that software in theory never breaks down, in most large software projects maintenance represents the single most time-consuming and expensive phase of development. Since the early 1960s, software maintenance has been a continuous source of tension between computer programmers and users. In a 1972 article the influential industry analyst Richard Canning argued that the rising cost of software maintenance, which by that time already devoured as much as half or two-thirds of programming resources, was just the “tip of the maintenance iceberg.” 2 Today it is estimated that software maintenance represents between 50% and 70% of total expenditures on software.3

So if software is an artifact that, in theory, can never be broken, what is software maintenance? Although software does not wear out or break down in any traditional sense, what does “break” over time is the larger context of use. To borrow a concept from John Law and Donald MacKenzie, software is a heterogeneous technology. Unlike computer hardware, which is by definition a tangible “thing” that can readily be isolated, identified, and evaluated (and whose maintenance can be anticipated and accounted for), computer software is inextricably linked to a larger socio-technical system that includes machines (computers and their associated peripherals), people (users, designers, and developers), and processes (the corporate payroll system, for example). Software maintenance is therefore as much a social as a technological endeavor. Most often what needs to be “fixed” is the ongoing negotiation between the expectations of users, the larger context of use and operation, and the features of the software system in question.

If we consider software not as an end-product, or a finished good, but as a heterogeneous system, with both technological and social components, we can understand why the problem of software maintenance was (is) so complex. To begin with, it raises a fundamental question – one that has plagued software developers since the earliest days of electronic computing – namely, what does it mean for software to work properly? The most obvious answer is that it performs as expected, that the behavior of the system conforms to its original design or specification. But only a small percentage of software maintenance is devoted to fixing such bugs in implementation.4

The majority of software maintenance involves what are vaguely referred to in the literature as “enhancements.” These enhancements sometimes involved strictly technical measures – such as implementing performance optimizations – but more often they were what Richard Canning termed “responses to changes in the business environment.” This included the introduction of new functionality, as dictated by market, organizational, or legislative developments, but also changes in the larger technological or organizational system in which the software was inextricably bound. Software maintenance also incorporated such apparently non-technical tasks as documentation, training, support, and management.5 In the technical literature that emerged in the 1980s, this “adaptive” dimension so dominated the larger problem of maintenance that some observers pushed for the abandonment of the term maintenance altogether. The process of adapting software to change would better be described as “software support,” “software evolution,” or (my personal favorite) “continuation engineering.”6

My conclusion in the Annals essay was that we need to re-evaluate the assumption that the history of software is only the history of computer code (or coders). The idea that the computer programmer, as Frederick Brooks famously described it in The Mythical Man-Month, like the poet, “works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination,” is both accurate and misleading. To a degree, Brooks’s fanciful metaphor is entirely accurate – at least when the programmer is constructing a new system. But when charged with maintaining a so-called “legacy” system, the programmer is working not with a blank slate but a palimpsest. Computer code is indeed a kind of writing, and software development a form of literary production.

But the ease with which computer code can be written, modified, and deleted belies the durability of the underlying document. Because software is a tangible record, not only of the intentions of the original designer, but of the social, technological, and organizational context in which it was developed, it cannot be easily modified. “We never have a clean slate,” argued Bjarne Stroustrup, the creator of the widely used C++ programming language. “Whatever new we do must make it possible for people to make a transition from old tools and ideas to new.”7 In this sense, software is less like a poem and more like a contract, a constitution, or a covenant. Despite the fact that the material costs associated with building software are low (in comparison with traditional, physical systems), the degree to which software is embedded in larger, heterogeneous systems makes starting from scratch almost impossible. Software is history, organization, and social relationships made tangible.



  1. Michael Swaine, “Is Your Next Language COBOL?” Dr. Dobb’s Journal (2008)
  2. Richard Canning, “The Maintenance ‘Iceberg’,” EDP Analyzer 10(10), 1972, pp. 1-14
  3. Girish Parikh, “Maintenance: Penny Wise, Program Foolish,” SIGSOFT Softw. Eng. Notes 10(5), 1985
  4. David C. Rine, “A Short Overview of a History of Software Maintenance: As It Pertains to Reuse,” SIGSOFT Softw. Eng. Notes 16(4), 1991, pp. 60-63
  5. E. Burton Swanson, “The Dimensions of Maintenance,” in ICSE ’76: Proceedings of the 2nd International Conference on Software Engineering, IEEE Computer Society Press, 1976, pp. 492-497
  6. Girish Parikh, “What Is Software Maintenance Really? What Is in a Name?,” SIGSOFT Softw. Eng. Notes 9(2), 1984, pp. 114-116
  7. Bjarne Stroustrup, “A History of C++,” in History of Programming Languages, T.M. Bergin and R.G. Gibson, eds., ACM Press, 1996

Working the Desk Set


In my Information Society course this morning we talked about Desk Set, the 1957 romantic comedy starring Katharine Hepburn, Spencer Tracy, and the fictional computer EMERAC. As I write in the “Cosa-Nostra of the Data Processing” chapter of The Computer Boys Take Over:

What is less widely remembered about Desk Set is that it was sponsored in part by the IBM Corporation. The film opens with a wide-angle view of an IBM showroom, which then closes to a tight shot of a single machine bearing the IBM logo. The equipment on the set was provided by IBM, and the credits at the end of the film—in which an acknowledgment of IBM’s involvement and assistance features prominently—appear as if printed on an IBM machine. IBM also supplied equipment operators and training.

Read the entire discussion of the Desk Set and its relationship to the history of computing here.

Computers, Programmers, and the Politics of Technical Expertise