Greetings. And welcome back to “Working on It.”


Everyone who writes is familiar with the process of revision. Less familiar is the process of revising a visualization. The goal is to make the visualization display as much meaning as the data will support, and no more. One also has a responsibility to do the best one can to avoid having the visualization misinterpreted. There is a way of talking about visualizations—a mistake, in my view—that focuses primarily on their distortions and deceptions, as if people never lie with words. Nevertheless, most people are less sophisticated at reading visualizations than prose, and so authors bear more of a burden to do right by the reader.

As an example of trying to make an honest visualization, here is a series of screenshots showing revisions I made to America’s Public Bible. These visualizations are part of the expanded version of the site and aren’t available yet; only the prototype is up. The goal of this particular visualization is to let the reader pick a Bible verse, and then show the trend in the rate of quotations of that verse over time in American newspapers.

Making a visualization like this entails innumerable small choices along the way: here are a few of the salient ones. Let’s start with what the dataset looks like.

Table of predictions

The prediction model I trained starts with newspaper pages. After identifying potential quotations, it makes a prediction: is this a quotation from the Bible or not? The result is a table like the one above. Each row indicates that a particular verse appears on a particular newspaper page, along with the probability that it is actually a quotation and not just a false positive. That kind of machine learning, called supervised classification, has some well-established procedures to ensure that the predictions are honest and not the result of self-deception. Important as they are, those procedures need not detain us now.
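
To make that concrete, here is a minimal sketch in R of what such a table might look like; the column names and values are purely illustrative, not the project’s actual schema.

```r
library(tibble)

# Illustrative sketch of the predictions table. Each row pairs a verse with
# a newspaper page and the model's probability that the match is a genuine
# quotation. The column names and values here are hypothetical.
predictions <- tribble(
  ~verse,          ~page,        ~probability,
  "John 3:16",     "page-00017", 0.97,
  "Matthew 19:14", "page-00542", 0.74,
  "Luke 15:4",     "page-00542", 0.31
)
```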

Analyzing this dataset honestly involves a few specific considerations. One is the correct threshold for determining whether something is a genuine quotation or not. Obviously it has to be more than 50%, but how much more? In the prototype I set that threshold at 90% because, frankly, I didn’t want to be embarrassed by people noticing entries that weren’t quotations. But that’s not quite honest either: it leaves out many, many quotations which are genuine but simply have a lower probability. (The tradeoff here is similar to that of a medical diagnostic test, albeit with rather different stakes.) The revised version of the site will include quotations above 58%. Another consideration is that the OCR for Chronicling America can be quite dreadful in places; obviously it is not possible to find a quotation in a bunch of gibberish. In calculating the rate of quotations, the number of quotations is the numerator, but I have also had to figure out how to exclude junk OCR so that the denominator, the number of readable words, is right.
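
Here is a hedged sketch of those two decisions in R, building on the illustrative predictions table above. The `pages` table, its `ocr_quality` measure, and the 0.5 legibility cutoff are hypothetical stand-ins; the 0.58 probability threshold is the one discussed above.

```r
library(dplyr)
library(tibble)

# Hypothetical per-page word counts with a rough OCR-quality measure.
pages <- tribble(
  ~page,        ~year, ~word_count, ~ocr_quality,
  "page-00017", 1860,  5200,        0.92,
  "page-00542", 1861,  4800,        0.15
)

# Numerator: keep predicted quotations above the revised probability threshold.
quotations <- filter(predictions, probability >= 0.58)

# Denominator: count only words on pages legible enough to have yielded a
# quotation at all; 0.5 is an arbitrary placeholder cutoff.
readable_pages <- filter(pages, ocr_quality >= 0.5)
```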

Once we’ve got a dataset of quotations, it is straightforward to aggregate them into the rate of quotations per word per year. Visualizing the rate, instead of the number, of quotations is a key decision. In this case it is an obvious choice. The number of quotations almost always goes up over time, simply because many more newspapers from the end of the nineteenth century have been digitized than from the beginning. The trend is what we really care about.
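
Continuing the same sketch, the aggregation is a pair of grouped summaries and a join; the column names are still the illustrative ones from above.

```r
library(dplyr)

# Rate of quotations per word per year: quotations on readable pages in the
# numerator, words on readable pages in the denominator.
# (A fuller version would also fill in years with zero quotations.)
rate_by_year <- quotations %>%
  inner_join(readable_pages, by = "page") %>%
  count(year, name = "n_quotations") %>%
  left_join(
    readable_pages %>% group_by(year) %>% summarize(words = sum(word_count)),
    by = "year"
  ) %>%
  mutate(rate = n_quotations / words)
```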

Comparison of trend lines

Drawing a trend line is not as simple as it might seem. The contrived screenshot above shows two different options for how to visualize the trend. The gray line is the simplest option, but not very good; the red line is a better option. The gray line is very spiky because it shows each individual year, and because each pair of points is joined by a straight line segment. Year to year the data is noisy, and we can only expect to find meaning in the long-term trends. I have seen people try to divine meaning from every little spike, when the spikes are just noise. So the red line is a better, if not entirely satisfactory, approach. For that line, each data point is shown as a five-year rolling average (e.g., the data point for 1860 is the average of the years 1858–1862). And the data points are joined with a smooth curve. The red trend line is the best I can come up with to represent the nature of the actual trend: in most instances quite gradual, with occasional genuine spikes.
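
For the rolling average itself, here is a minimal sketch in base R using the `rate_by_year` table from above; `zoo::rollmean()` or `slider::slide_dbl()` would do the same job.

```r
# Five-year centered rolling average, so the point for 1860 averages
# 1858 through 1862. This is stats::filter(), base R's moving-average
# filter, not dplyr's filter(); the first and last two years come out NA.
rate_by_year$rate_smoothed <- as.numeric(
  stats::filter(rate_by_year$rate, rep(1 / 5, 5), sides = 2)
)
```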

The interface around the visualization is unfinished, but there are a few other decisions that had to be made. One is which verses to include in the list the user can select. This is somewhat subjective, but in many instances there are verses which have a very high rate of false positives. For example, the phrase “went into the city” frequently appears both in the Bible and (who would have guessed?) in newspapers. Then too, the Bible frequently repeats itself: the synoptic Gospels and quotations of the Old Testament in the New Testament are just the most obvious examples. In this case, the verse “Suffer little children to come unto me” was one of the most popular verses in the nineteenth century, but there is no reliable way to computationally distinguish between a quotation of Matthew 19:14, Mark 10:14, and Luke 18:16. In such instances, I collapse all the quotations into a single reference which stands for all three.
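
One way to express that collapsing, sketched with a hypothetical lookup table of parallel passages:

```r
library(dplyr)
library(tibble)

# Hypothetical lookup mapping parallel passages to a single reference that
# stands for all of them.
parallels <- tribble(
  ~verse,          ~combined_reference,
  "Matthew 19:14", "Matthew 19:14 (and parallels)",
  "Mark 10:14",    "Matthew 19:14 (and parallels)",
  "Luke 18:16",    "Matthew 19:14 (and parallels)"
)

# Replace any verse that has a combined reference; leave the rest alone.
quotations <- quotations %>%
  left_join(parallels, by = "verse") %>%
  mutate(verse = coalesce(combined_reference, verse)) %>%
  select(-combined_reference)
```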

Cutoff date

The next consideration is which cutoff dates to pick for the visualization. The screenshot above shows the trend to the maximum chronological extent of the Chronicling America corpus, into the 1960s, whereas the earlier screenshots extend only to 1922. There appears to be a huge increase in the rate of quotations in the mid-twentieth century, but displaying that trend would be misleading. There are next to no newspapers in Chronicling America after 1922. (Unreasonably long copyright terms rear their ugly heads again.) Thus we face what we might call the batting average problem: if you only have a few plate appearances and get lucky, you could end up with an abnormally good batting average. So it’s more honest to draw the trend line only where there is a substantial underlying corpus. For Chronicling America, the correct minimum and maximum years are easily determined.
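
In the terms of the earlier sketches, the fix is simply to drop years whose underlying corpus falls below some minimum before drawing the line; the one-million-word threshold here is an arbitrary placeholder.

```r
library(dplyr)

# Only draw the trend line for years with a substantial underlying corpus.
plottable_years <- filter(rate_by_year, words >= 1e6)
```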

Completed visualization

This last screenshot is what the mostly complete visualization looks like. This version adds another feature to help readers accurately interpret the trends. In addition to showing the trend line for Chronicling America, it also shows the trend line for a separate corpus, Gale’s Nineteenth-century Newspapers. Since the corpora don’t completely overlap, one would not expect the trend lines to be identical. But one would also expect them to be close, as they are here.

Many difficult decisions go into making a visualization as honest as possible, and then there is more work to be done interpreting it honestly. More on the problem of interpretation some other time, but I will leave you with my parlor trick for giving talks about this project.


Brief book note

Two well-written, recently published works take up the themes of masculinity and femininity within American evangelicalism or megachurch Christianity.

Kristin Kobes Du Mez’s Jesus and John Wayne: How White Evangelicals Corrupted a Faith and Fractured a Nation chronicles the stream of masculinity that has pervaded American evangelicalism. No mere monograph, this history is quite sweeping in its interpretation of the movement in terms of its advocacy for (or perhaps, obsession with?) a particular configuration of family and gender roles. The next time that I teach twentieth-century evangelicalism, I will likely start with Du Mez’s book to provide the framework. (Don’t miss this tongue-in-cheek review.)

Kate Bowler’s The Preacher’s Wife: The Precarious Power of Evangelical Women Celebrities is a fascinating study of celebrity Christian women, often the wives of megachurch pastors. The main dynamic she explores is that evangelical women operate under much stricter theological constraints governing gender roles than their liberal or mainline counterparts, but they have much greater access to the marketplace by which celebrity power is created. It’s a fitting sequel to Bowler’s Blessed, on the history of the prosperity gospel.


Random screenshot

Too many screenshots above, so here is a photo of my tomato plants. No fig tree, but I’m thankful that I can sit in peace and safety under my own vines.

Tomato plants

Updates

Reading: Already finished it, but next time I’ll write briefly about Tara Isabella Burton’s Strange Rites. Currently reading Diana Butler Bass’s Standing Against the Whirlwind: Evangelical Episcopalians in Nineteenth-Century America.

Working: Getting ready to teach minor field readings for PhD students in American religion, the DH Practicum for incoming students at RRCHNM, and a course on “Capitalism and American Religion.” Here’s the syllabus for that last one.

Watching: The Expanse. I promised my colleagues at RRCHNM that I would finish it before we reopen, but I’m not going to meet my goal.

Playing: Some PGA Tour game on the Xbox, which is infinitely more stressful than any first-person shooter.



Greetings. And welcome back to “Working on It.”


This week the American Religious Ecologies team released our interactive map of Roman Catholic dioceses in North America from 1511 to the present, with an accompanying essay. Check it out.

Screenshot of a map of Catholic dioceses

This is the first visualization that we’ve shipped, but we are working on more. It’s taken us some time because we’ve also been building our infrastructure for creating, analyzing, and visualizing historical datasets. In later issues, I hope to explain some of the considerations that go into making visualizations like this one.

But if that sort of thing interests you, you could also watch this video. This week I was supposed to speak about our ongoing work in Luxembourg, but for obvious reasons the host asked me to make a video instead. So I talked through the work we are doing to digitize the 1926 Census of Religious Bodies, the way we are creating datasets and visualizations, and what historical questions we are seeking to answer.

That’s the first video I’ve ever made, and it was a lot of work. Learning even just the rudiments of making videos at the same time that I was rushing to write a talk was not ideal. But I am glad to have learned enough that I could make another video if an idea or audience warranted it. And now that I’ve mentioned the video in this newsletter, I’m sure that its views will break into double digits. That’s enough to call myself an influencer, right?

While I was putting up the video, I also found a recording of a panel on “The Bible in Public Life” that I participated in. I’m linking to it now because, to be quite honest, speaking at the Library of Congress on the history of the Bible at the invitation of Mark Noll was everything senior-in-college Lincoln could have hoped for in life. I know the academy is full of people who make life miserable for others, but I’ve been fortunate to have encountered a long list of generous people, and Noll is at the top of that list. And the chapter in my co-panelist Valerie Cooper’s book Word, Like Fire, in which she wrote history as a kind of commentary on Maria Stewart’s “Religion and the Pure Principles of Morality,” has been essential as I think through revising the form of America’s Public Bible.

That’s all I’ve got from YouTube. I promise never to do that to you again.


Random screenshot

A random screenshot

Brief book review

I came across a reference to Vaclav Smil’s book Energy and Civilization: A History (first edition 1994; revised edition 2017) while reading about religion and capitalism. The book seeks to explain that “the course of history can be seen as the quest for controlling greater stores and flows of more concentrated and more versatile forms of energy and converting them, in more affordable ways at lower costs and with higher efficiencies, into heat, light, and motion.” I learned a lot about how humans have captured and used energy throughout history. In particular, the basic problems and solutions of agriculture and industry in the nineteenth-century United States are much clearer to me now. Despite the language about “the course of history,” the book manages to avoid the reductionist notion that all of culture, society, and other forms of human endeavor can be reduced to energy flows, while also advancing what I found to be a compelling case that thinking about energy helps us understand global history over a long period of time.

The other reason I find the book interesting is its frequent use of sidebars and other texts within the text. This is quite different from most historical writing outside of textbooks. The asides helped, rather than distracted from, the flow of argument in the book. In that regard, they are like the books of one of my graduate school professors, David Hackett Fischer. I asked Fischer why he included so many appendices, often a dozen or more. He explained that they gave different readers different points of entry to the book. There is a certain kind of reader who only cares about muskets (or uniform buttons!), and the appendices gave those people a reason to start reading and then stay for the more significant parts of the history. In this book, the sidebars also gave Smil a way to explain basic concepts for a non-specialist reader. At one point, I was reading a sidebar on types of levers the same day that levers were an assignment in my daughter’s science curriculum. It’s an intriguing model of writing on multiple levels.


Updates

Reading: Peter Gay, Freud: A Life for Our Time.

Listening: The Celeste soundtrack encourages a frenetic yet deliberate style of working, if that’s what you need.

Playing: PUBG, again.

Watching: The Game Maker’s Toolkit channel on YouTube has amazing video essays explaining how games work.


For our work on the American Religious Ecologies project, we have to deal with the way that the federal Census Bureau cataloged religious groups by denomination. At a meeting this week, I conflated the denomination called “The Church of God in Christ Jesus” (an Adventist denomination) with “The Church of God in Christ” (the much larger and predominantly African American Pentecostal-Holiness denomination). Though my sharper colleagues were not confused, perhaps my mistake was understandable. Of the 214 denominations that were enumerated in the 1926 Census of Religious Bodies, here are the ones that have the phrase “Church of God” in their name:

  • Church of God (Adventist)
  • Church of God in Christ Jesus (Adventist)
  • Apostolic Overcoming Holy Church of God
  • Church of God (New Dunkers)
  • Church of God (Headquarters, Anderson, Ind.)
  • Church of God and Saints of Christ
  • Church of God in Christ
  • Church of God as Organized by Christ
  • Church of God (Apostolic)
  • Free Church of God in Christ
  • Church of God in Christ (Mennonite)
  • The (Original) Church of God
  • Church of God [or, Church of God (General Assembly)]

My favorite of these is The Church of God as Organized by Christ, though The (Original) Church of God gives them a run for their money.

I’d be willing to wager that very few historians of American religion could distinguish among these groups. And why should they be able to? The minutiae of denominational distinctives are not the stuff of historical understanding. And yet, I find that the Religious Bodies Censuses are a useful tool for focusing my attention on groups that I would otherwise pass by. Of the groups listed above, the only one that I can recall reading about in a work of academic history is Anthea Butler’s Women in the Church of God in Christ, unless you lump all the Adventists together. What are the stories of the others? As I tried to explain in an earlier post about a congregation in the Armenian Apostolic Church, you can learn a great deal from these census schedules about groups you might otherwise never pay attention to.


Supposedly this newsletter promises that I am working on it, and at the moment the “it” is Pandemic Religion.

Pandemic Religion screenshot

Pandemic Religion is a digital archive that aims to collect and preserve the experiences of individuals and religious communities during the COVID-19 pandemic. John Turner, our colleagues at RRCHNM, and I started it a little over a month ago. The site allows people to contribute memories, documents, photos, websites, and the like. We are following the playbook that RRCHNM came up with to respond to September 11th and Hurricane Katrina, a playbook best described in this article by Sheila Brennan and Mills Kelly. Collecting historical materials online has taken on special urgency during this pandemic since so much experience has moved online.

To date we have received about one hundred contributions and have some reasonable hope that contributions are increasing, so do not despise the day of small things. One of my worries at the beginning of the project was that we would simply get a thousand screenshots of Zoom services. Instead, I’ve been surprised at the willingness of many contributors—who are entirely unknown to us—to reflect deeply on their religious experiences. I’d encourage you to browse what we’ve collected so far: many items show a great deal of creativity in responding to this moment.

We have a lot of questions about how this pandemic is affecting American religious practice. I’ll mention just one. This pandemic may be global, but it has shrunk my actual horizons to practically nothing. I now read the local news with more urgency than the national news. I rely on the government of Virginia more than the federal government—an awkward position for someone with my first name. I’ve driven to work just once since March, a journey of only eight miles with no traffic that felt impossibly distant.

Churches like to think of themselves as local institutions. Undoubtedly the pandemic and resulting mass unemployment have led many churches to renew their efforts to serve local communities. But many churches, I suspect, are not actually local institutions. Many draw on a membership which travels some non-trivial distance to hear a favored preacher, or to attend some preferred parish.

The pandemic confined (most) churchgoers to their homes, but it also moved services online, where distance is meaningless. How many people, I wonder, are attending the services of a church they do not attend in person? Certainly not all churches have had the resources or know-how to move their services online. One of the clergy at my church mentioned that they knew of “visitors” to our online services. And what about people whose family is distant? Services online may afford the opportunity to “attend” the same service as a child or parent in a way that could never be done in person.

None of this is new, of course. Radio broadcasts, televangelism, and more recently the phenomenon of churches with satellite campuses where services are broadcast from the main church have tended to allow churchgoers to collapse physical distance in the pursuit of preferred affiliations. But I do wonder how even basic practices like church attendance are changing during the pandemic, and whether the pandemic will become an inflection point. For now, the answer is that I don’t know, and we are hopeful that the site will help uncover some evidence for this and other such questions.


Perils of the Overworld

I heartily commend to you Robin Sloan’s newsletter, “Perils of the Overworld.” Sloan is writing about how he is writing and developing a video game, a work in progress. If you’ve read his novels, Mr. Penumbra’s 24-Hour Bookstore and Sourdough, you’ll know why you want to read the newsletter. But even if you haven’t, I know you will like his newsletter if you’re subscribed to this one. His newsletter is exactly what I was hoping this one would be if I, uh, did interesting work and was a good writer. 😆


Random screenshot

Random screenshot

Updates

Reading: David King, God’s Internationalists: World Vision and the Age of Evangelical Humanitarianism.

Listening: The Spotify playlists for the characters from Halt and Catch Fire.

Playing: Outer Wilds.

Working on: Historical map of Catholic dioceses in North America, which I hope will be done soon.

Planning: Next week is a week of re-thinking and renewal. All obligations that are not contractual or sacramental will have to justify their continued existence.


Greetings. And welcome back to “Working on It.”


On the last day of classes before spring break, I determined that my administrative work was basically done for the semester and that I could turn my attention almost exclusively to an ongoing research project.

😂

Needless to say, the ensuing weeks have brought some fragmentation of attention and more than a few administrative tasks. This is a time when I am thankful for my minor administrative role because it has afforded me some small opportunity to help co-workers and students who are going through a difficult transition. And there have been many other occasions in this time for counting my blessings.

Work continues from my basement office and, when the sun is out, from my deck.

View from my basement office and my deck.

Lately I have been recovering my ability to use D3.js to make interactive graphics for the web. If you haven’t heard of D3, it is a JavaScript library with which you can make incredible interactive graphics, as this gallery shows. You’ve almost certainly seen visualizations made with it, since its creator, Mike Bostock, used to be a part of the New York Times visualization team.

A few years ago, I made a few interactive maps with D3 (such as this and this). But mostly I’ve done my visualization work in R. In the interim a lot has changed with D3 and, indeed, with JavaScript and the web itself. I’ve been acquainting myself with the recent developments.

For this kind of computational work, I am drawn to tools where you feel like you are working close to the material itself. That previous sentence is absurd since all of this work involves using a stack of technological abstractions to manipulate another stack of historical abstractions. But still, you can tell the difference between when your tool is shaping the raw materials and when you’re operating a robotic arm at a distance. It’s not for nothing that a common metaphor for data analysis is carpentry.

D3 fits that bill in two senses. First, D3 lets you deal with the native abstractions of the web: the HTML document-object model and scalable vector graphics. In practical terms, that means you can do almost anything with D3 that a browser can do … and browsers can do a lot.

Second, it is more like drawing visualizations than describing them. In some of the most popular libraries for visualization, you use a grammar of graphics to describe what the visualization should be: this variable is on the x-axis, that variable is on the y-axis. The program then draws the correct visualization for you. Using D3 is more like drawing, where you specify exactly which lines should appear where, with what width, and so forth. It’s not so distant from drawing with the turtle in the Logo programming language, which was my first exposure to programming in elementary school. (Here is a modern manifestation of turtle drawing.) It takes much more code and much more thought to draw rather than to describe, which is why higher-level graphics libraries are so useful. But one gains much more control and flexibility when working at the lower level that D3 affords.

An example of drawing with the Logo programming language
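
To make the contrast concrete in R, the language I usually analyze data in, here is the “describing” approach with ggplot2 on a placeholder data frame; the D3 equivalent would mean building the x and y scales yourself, generating the path’s coordinates, and appending SVG elements one by one.

```r
library(ggplot2)

# A grammar-of-graphics description: name which variable goes on which axis
# and what kind of mark to draw, and the library does the drawing for you.
placeholder <- data.frame(year = 1850:1900, rate = runif(51))
ggplot(placeholder, aes(x = year, y = rate)) +
  geom_line()
```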

Getting reacquainted with D3 has also meant learning modern JavaScript. I’m no developer, as the real developers I work with might be the first to point out. That means that I am mostly free to skip the chaos that is the modern front-end web and use only the good parts. The good parts have gotten a lot better, especially usable classes and modules in JavaScript. And I’ll admit it, I’m weak: I like functions with default parameters.

One of the other nice features of the modern web is the rise of interactive notebooks. Observable, another creation from the author of D3, is great for sketching out ideas. You can see the environmental historian Jason Heppler’s notebooks for some cool examples. I like owning my own turf too much to put finished work there, I think, but Observable is great for prototyping.

I feel close to having something to show for this, both for collaborative projects at RRCHNM, and for some things I am working on on my own. Hopefully I’ll have something to share in the coming weeks.


Brief book review

They Knew They Were Pilgrims: Plymouth Colony and the Contest for American Liberty is out this week from my friend and collaborator, John Turner. The book marks the four hundredth anniversary of the founding of Plymouth Colony.

Cover for They Knew They Were Pilgrims

John is both a gifted writer of narrative and a historian’s historian. On the one hand, he leaves no stone unturned in his research. (I can attest to this from experience since in our collaborations he has frequently found information that I was embarrassed to have missed.) The history of Plymouth Colony has often been neglected in favor of the history of the Massachusetts Bay Colony, or historians have settled for using the same sources as their predecessors. This book, though, is based on new research that makes it the definitive history of the colony. But John also writes narrative history that weaves together the details of life in the colony with his broader interpretative themes. John is the best practitioner I know of the art of writing books that are both genuine contributions to the scholarship and genuinely readable by anyone interested in the subject.

As I have been reading the book again this past week, two of those themes stood out to me. One is the “liberty” mentioned in the title. The book navigates between the Scylla and Charybdis of two interpretations of Plymouth: that it was the origin of everything good about America, from Thanksgiving to liberty of conscience, and that it was obviously complicit in every American evil, not least of which was genocide. Turner is not heavy-handed, yet he is also unsparing in his critiques of the Pilgrims. See the chapter on “Hope,” as an example, or his description of the treatment of Quakers. But he also offers an explanation—all the more satisfying because it is not anachronistic—of what kind of liberty the people of Plymouth Colony thought they were pursuing.

The other theme that stood out to me is the covenanted congregation. The Separatists set themselves against the practices and ecclesiology of the reformed English church which I now count as my own. (I’m not saying William Laud should have come out looking better in this narrative.) But I’ve also been a part of gathered churches where I took the church covenant, and took it seriously. Those of you whose own churches bear a distant family resemblance to those of Leiden or New Plymouth will enjoy Turner’s portrait of all the hopes and challenges of the covenanted church.


Random screenshot

A random screenshot

Updates

Reading: Dissertation prospectuses.

Listening: Johnny Cash, My Mother’s Hymn Book.

Watching: Parks and Rec.

Playing: Super Mario Odyssey.

Sketching: Network visualization of how federal judges have moved between courts.

Anticipating: The Triduum.


This coming fall I will be teaching a graduate seminar on “Religion and Capitalism in the United States.” I taught the course for the first time in spring 2015. I think eight significant books on the topic were published the same semester I was teaching it, and more since, so it will likely be a completely new syllabus.

My course description was due today. It’s a task that I wasn’t looking forward to, since these descriptions always seem contrived. But this one … this one I’m pretty happy with.

Religion and Capitalism in the United States

The relationship between religion and capitalism has long occupied historians of the United States, and before them it concerned the people whom historians study. In this class, you will meet many people whose religion led them to interact with capitalism in unexpected and unusual ways. You will meet the Puritans whose work ethic supposedly created capitalism, but who insisted on resting on the Sabbath; Moravian missionaries who made converts and money; slaves, slaveowners, and abolitionists who all claimed the Bible when reckoning with the capitalist system of slavery; a Protestant writer who insisted that Jesus was a businessman, and Catholics who believed Jesus called them to a kind of socialism; African American preachers who marketed their recorded sermons; Jews who mass-manufactured matzah and created Yiddish socialism; an industrialist who wrote The Gospel of Wealth, and laborers who created churches for the working class; nineteenth-century consumers who turned gift-giving into a ritual, and a twenty-first-century television personality who turned consumption into therapy; Christians whose faith turned them into environmentalists, and Christians who drilled for crude oil; converts who thought religion required poverty, and Prosperity Gospelers who thought it promised wealth. You will read primary sources from American history, secondary works in both religious history and the new history of capitalism, and excerpts from theorists of religion and capitalism. Through these readings and your own research project, you are invited to make sense of this perpetual historical puzzle.


Greetings. And welcome back to “Working on It.”


This semester I will be teaching my class on “Computational History” for the sixth time. The official title in the catalog is something else, but I’ve gone rogue since my second year at GMU, and even the unofficial title has changed over time.

This is the course I was hired by GMU to teach. Or, more precisely, my department vaguely had the idea that they wanted a course on programming for historians, and I vaguely had an idea of how it could be taught. I shudder a bit looking back at the first iteration of the course. The bones of the course are similar, but my approach to the subject, including the wildly unrealistic idea that I should teach JavaScript as an introduction to R, now seems fundamentally mistaken. I was fortunate to have a great group of students who made the most of that first course as I learned the material a week ahead of them.

Now that I understand—or hope I understand—how to teach the course, perhaps I can explain some of the reasons the syllabus has changed over time.

The most important consideration is that the course has been required for Ph.D. students in history at George Mason. The chance to offer the course every year has let me improve it regularly. But it has also meant that the course had to be designed to meet the department’s goals, which I have done by trying to give students a clear sense of how they can progress from this course both to work on their dissertations and to work at RRCHNM. Former students have taken both of those routes. The fact that the course is required has also meant that at least some students feel like it is not within their interests, and so I’ve worked hard with individual students to figure out connections to what they care about.

Besides the course’s local context, it has also been influenced by developments in the historical profession. The place of computational history is, I think, needlessly polarized. After all, even one of the leading practitioners of computational history has argued that “computational history is dead for good.” At issue is whether computational history has promised a revolution in historical understanding, possibly a revolution in which all historians must participate willingly or unwillingly, and whether its interpretative results have been underwhelming.

The way that I present the course to students is hardly radical. My approach to historical methods is to “multiply your ways of knowing”—a principle I learned from an advisor in grad school. While historians revise and discard interpretations, as a profession they tend to accumulate methods, adding the new but retaining the old. The usefulness of computational history is not in provoking a revolution of all historical understanding, but in creating a new angle of vision on sources that we can also read with existing historical methods. This also happens to be the way, I have found, that students approach computational history: a tiny handful want to upend the discipline, but a great many find it a useful set of techniques to weave into their other forms of historical research.

Another concern is the distinction between programming and methodology. In an article about a similar course in literary studies, Andrew Goldstone argues that “cultivating technical facility with computer tools—including programming languages—should receive less attention than methodologies for analyzing quantitative or aggregative evidence. Despite the widespread DH interest in the former, it has little scholarly use without the latter.” Certainly, my course (originally titled “Programming in History/New Media”) came into the curriculum at the same moment that some digital humanists were claiming everyone needed to learn how to code and resources like The Programming Historian were being created. That’s a moment that the field has moved past for some good reasons: the idea that all humanists would learn to program was not reasonable, feasible, or desirable. It is also a moment we have moved past for dubious reasons, including the proliferation of “tools” for humanities data analysis, most of which are of poor quality and have little methodological sophistication, and which enable analysis without understanding.

Over time my syllabus has shifted from emphasizing programming towards emphasizing methodology. The recasting of the course as computational history rather than programming for historians reflects that change. (I have also taught a course on web programming, a topic that is highly specific to the culture of the Mason/RRCHNM Ph.D. program.) In the data analysis course I tend to teach programming concepts—say, iteration—in only a cursory way because they are not critical to understanding and using the methods being studied. And methods, rather than programming, are what produce an evidentiary base for historical interpretations.

That said, it is still important that this course is the first place where most of the students encounter some kind of programming. I emphasize to them that, although they are learning R, the same concepts and methods can be expressed in other languages such as Python and SQL. Students with whom I continue working after the class is over not infrequently have a moment when they realize that some difficult problem in a different domain is conceptually the same as what they have already learned. But that kind of abstraction takes time. In a sense, if a programming language is already an abstraction, understanding the shared concepts is an abstraction of an abstraction, and beginners find it extremely difficult to grasp both. In the meantime, learning a specific programming language gives them, well, a language to conceptualize and reason about historical data and the inferences that can be made from it.

Many of the changes to the course have improved the sequencing of what I teach. Like nineteenth-century theories of evolution in which an embryo recapitulates the entire process of evolution, one bad pattern is teaching students in the same way that you happened to learn. That was a trap I sometimes fell into in my early semesters of teaching. Like almost everyone teaching R these days, I focus on the tidyverse as a set of interoperable packages and principles. The course is structured around understanding quantitative data analysis and visualization in generic terms, then applying them to the peculiarities of spatial, network, and textual data. Most of the changes I’ve made are not obvious on the syllabus itself, but by revising the course regularly I am much more confident that the sequence of materials comes to the students in a useful way, even if they still feel it comes at them much too quickly.

Finally, there is the consideration of what kind of work the students should produce. In the earliest iterations, I was content to let students create exploratory graphics. I thought of these methods primarily as tools of exploration and, frankly, hadn’t made the leap from exploration to argumentation myself. But now I try hard to help students move from a dataset through methods to a historical argument. Tutorials and textbooks are generally of little help in this regard, as they use contrived examples where, as Goldstone puts it, “the effort is out of all proportion to the result.” His suggestion, which I am increasingly adopting, is to have students follow and even reproduce existing scholarship so that they can see how an argument is put together.

In its current version, the course is not so dissimilar from other courses in data analysis for history or literary studies. That convergence reflects what I take to be a growing consensus on how best to teach these methods. But there is one thing that distinguishes this iteration of the course from other offerings: this syllabus assigns work by several former students in the class.


Random screenshot

A random screenshot

Around the web


Updates

Reading: N.T. Wright’s biography of Paul; Neal Stephenson’s Seveneves.

Watching: The Expanse.

Working: Thinking about the infrastructure for presenting computational history on the web.


When I arrived at GMU in 2014, I was issued a fifteen-inch MacBook Pro. Some people think that model is the best laptop ever made. I don’t know about that, but it was certainly the all-around best computer I’ve ever used. Not to get too sentimental, but a lot of my life happened on that computer. I wrote my first book on it. In creating this computational project, I strained its capabilities much harder than could reasonably be expected of a laptop assigned to a history department. I used it to help raise over 1 million dollars in grants. It took far too many flights to conferences and speaking gigs. And I distinctly remember using it to send out an announcement of the birth of my son from the hospital. After delaying the inevitable, I had to return the computer to the university at the end of last semester.

My 2014 MacBook Pro



Since this is a new newsletter with next-to-no readers, this first issue is less of a missive to subscribers and more of a love letter to the IndieWeb.


Sometime in 2005 I bought a domain name and web hosting from Quality Host Online. While the level of service never lived up to the name, the price was right: $11.40 per year for premium hosting. “Premium” meant that you could use a MySQL database and consequently that you could install WordPress for yourself. After some futzing around with PHP and CSS—two acronyms I would soon be Googling a lot—I had made some questionable design choices and published some even more questionable posts, all of which would be lost to history were it not for the Wayback Machine.

It isn’t an exaggeration to say that without having learned to mess around with websites I would not have much of a career as a historian. I certainly wouldn’t have had the blind dumb luck (a.k.a. providence) to get hired while ABD, and so I wouldn’t have avoided the cruelties of the academic job market that have tormented most of my generation. Nor would I have the good fortune to work at a place where I can collaborate with colleagues to build scholarly websites for a living.

That domain name is also personal. The 14 years that I have continuously owned lincolnmullen.com is longer than I was in grad school (5 years) or have held the same job (also 5 years), lived in the same house (3 years), been a parent (8 years), or been married (11 years and counting). Measured against being a father or husband, a domain name is a light thing. But still. Other than being a son, brother, or Christian, I’d be hard-pressed to name anything else that has been a part of my identity for longer.

The term “IndieWeb” has been gaining traction of late, and thanks to Cal Newport it has even made it into the New Yorker. The proprietors of indieweb.org define it as a “people-focused alternative to the ‘corporate web’” and defend it in somewhat overblown terms as a matter of “owning your content, your identity, and your self.” Cal Newport calls it “a loose collective of developers and techno-utopians.” A combination of well-founded anti-corporate suspicion and unrealistic hopes that the web can return to an era before Facebook and Twitter certainly seems like an apt characterization of the IndieWeb to me.

Exhaustion with social media is what led me to leave Twitter behind for good and to take up with the IndieWeb crowd on Micro.blog and now Buttondown. I can scarcely claim to have been an independent thinker. No doubt I would not have left Twitter if Alan Jacobs hadn’t set the example first, and the closest thing that I have to a social media strategy is following Dan Cohen around to various web platforms. (I’m not sure why you are reading this newsletter: go subscribe to Cohen and Jacobs instead.) The comparison of social media platforms to the seven deadly sins seems less a joke and more a bitter reality. But after about a year of being pretty much absent from the internet, my hope is that this newsletter might provide an outlet for once again writing about history and history-making.

But for me, the appeal of the IndieWeb primarily lies in building. Almost everything I know how to do, technologically speaking, has its roots in learning to do things on my website. I was simply lucky that others had already blazed the path from web servers to scholarship. Alan Jacobs has listed things students ought to know to be full participants in the web, and they are the same things I unintentionally started learning when I signed up with Quality Host Online. I’m not optimistic that the IndieWeb can replace big corporate social media, because tinkering with the tech is an element of its appeal. (See also Bethany Nowviskie’s meditation on “resistance in the materials.”)

People who know me probably have realized that exhaustion with social media is symptomatic of broader exhaustion. The title of this newsletter—“working on it”—is mostly because people seem to like reading about how history and especially digital history is made. But it is in part because I’ve found myself having to write back to so many people saying, “yes, I’m still working on it,” about one project or another. In looking for a path back to doing meaningful work, the best route that I have found is returning to my IndieWeb roots. For a long time, I felt stuck—mired, really. But over the past couple months, I’ve gone back to tinkering with the basic building blocks of the web and figuring out what to do with the blank canvas of an open <html> tag.

All that to say: I love you, IndieWeb.


Random screenshot

A random screenshot

Brief book review

Joy Lisi Rankin’s A People’s History of Computing in the United States (Harvard, 2018) locates the early history of computing not in Defense Department initiatives like ARPANET or the creations of Silicon Valley entrepreneurs. Instead, she takes readers on a tour of other sites, such as Dartmouth College and public schools in Minnesota, where users turned computer networks into networked culture. This is a history from the bottom up focused on what users made of technology instead of how technology made them. It’s a story of interest to anyone who wants to know about the roots of the modern internet—especially those who want to get back to them.


Updates

Reading: Robert Alter’s translation of the Pentateuch for myself, and Genesis in the Authorized Version to my daughter.

Watching: Rewatching Halt and Catch Fire.

Working: It’s Thanksgiving weekend, so I’m not working on anything except this newsletter. Grant application due Monday morning, though.

Recently released: The American Religious Ecologies team has released our website, which explains what we are doing with the 1926 Census of Religious Bodies.