My essay in the Oxford Research Encyclopedia of Religion on “Bibles and Tracts in Print Culture in America” was recently published. When it was first released, that collection was freely available, though it has since gone behind a paywall. I was under the impression, which must have been mistaken, that it was going to remain freely available. Nevertheless, the Religion in America section of that encyclopedia has a fantastic set of essays.
The main news about the book is an interview I did with The Atlantic about the history of conversion. There should be several more interviews about the book at various history blogs coming out in the next few weeks. And at least two people have let me know that they are assigning the book in class this semester.
It’s always nice to see people mention on Twitter that they’ve received their copies.
Probably the best writing tip anyone has given me is that the last sentence of a paragraph I write is often really the topic sentence of the next paragraph. Once this was pointed out to me, I noticed it in almost all of my writing, and often in student writing as well. This error probably happens because it is natural to build up a paragraph until it connects to the next point to be made. But how we write is not how we read. Often I can go through a first draft and move the last sentence of many paragraphs to the beginning of the next.
Bonus tip: When a colleague read a draft of my book manuscript, he said that I should try to break up the paragraphs, because they would appear longer on the printed page. Sure enough, when the proofs arrived, he was right.
In the second half of the nineteenth century, the majority of U.S. states adopted a novel code of legal practice for their civil courts. Legal scholars have long recognized the influence of the New York lawyer David Dudley Field on American legal codification, but tracing the influence of Field’s code of civil procedure with precision across some 30,000 pages of statutes is a daunting task. By adapting methods of digital text analysis to observe text reuse in legal sources, this article provides a methodological guide to show how the evolution of law can be studied at a macro level—across many codes and jurisdictions—and at a micro level—regulation by regulation. Applying these techniques to the Field Code and its emulators, we show that, through its combination of creditors’ remedies, the code exchanged the rhythms of agriculture for those of merchant capitalism. Archival research confirmed that the spread of the Field Code united the American South and American West in one Greater Reconstruction. Instead of just a national political development centered in Washington, we show that Reconstruction was also a state-level legal development centered on a procedure code from the Empire State of finance capitalism.
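The kind of text-reuse detection the abstract describes can be illustrated with a minimal sketch. This is not the actual pipeline used for the article; it is a toy example of one common technique, comparing overlapping word n-grams (“shingles”) between two passages with a Jaccard similarity score. The sample provisions are invented for illustration.

```r
# Break a text into overlapping word n-grams (shingles).
shingle <- function(text, n = 5) {
  words <- tolower(unlist(strsplit(text, "\\s+")))
  if (length(words) < n) return(paste(words, collapse = " "))
  sapply(seq_len(length(words) - n + 1), function(i) {
    paste(words[i:(i + n - 1)], collapse = " ")
  })
}

# Jaccard similarity: shared shingles over all shingles in either text.
jaccard <- function(a, b) {
  length(intersect(a, b)) / length(union(a, b))
}

# Two hypothetical code provisions, identical except for one word.
ny <- "the complaint shall contain a plain and concise statement of the facts"
ca <- "the complaint must contain a plain and concise statement of the facts"

similarity <- jaccard(shingle(ny), shingle(ca))
```

A high score flags a pair of provisions as a likely borrowing; run across every pair of sections in a corpus of codes, scores like this are what make it possible to trace a code’s migration across jurisdictions.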
The authors’ original manuscript (or preprint) is available at SSRN. This is the version that we submitted for peer review in July 2016. The final version will be different, in part because of our revisions in response to the helpful peer reviews, and in part because we have expanded our original corpus by some 40% and plan to expand it further before publication. While we think these revisions greatly strengthen the essay, we don’t think that they invalidate this earlier version. So we are making the authors’ original manuscript available now following Oxford University Press’s policy.
I’ve recently published version 0.3.0 of my USAboundaries R package to CRAN. USAboundaries provides access to spatial data for U.S. counties, states, cities, congressional districts, and zip codes. Of course you can easily get contemporary boundaries from lots of places, but this package lets you specify dates and get the locations for historical county and state boundaries as well as city locations.
This version of the package has a number of new features. It has jumped on the Simple Features bandwagon, so now all boundary data is returned as an sf object. This version also includes updated shapefiles from the U.S. Census for contemporary data, as well as new centroids for ZIP Code Tabulation Areas and historical city populations courtesy of Erik Steiner’s project from CESTA.
I’m especially glad that the package has added a new author: Jordan Bratt, a PhD student at George Mason and a collaborator on Mapping Early American Elections. Jordan added functionality to the package that lets users get projections from the State Plane Coordinate System, so that they can make locally accurate maps at the level of the state or below.
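Here is a brief sketch of the package features described above, assuming USAboundaries and its companion data package are installed from CRAN, along with the sf package. The sample date and state are arbitrary.

```r
# Assumes USAboundaries (and USAboundariesData, for historical
# boundaries) plus sf are installed from CRAN.
library(USAboundaries)
library(sf)

# Contemporary state boundaries, returned as an sf object
states_now <- us_states()

# State and county boundaries as they existed on a given date
states_1844   <- us_states(map_date = "1844-05-01")
counties_1844 <- us_counties(map_date = "1844-05-01")

# Historical city locations
cities <- us_cities()

# Look up a State Plane Coordinate System projection for a state,
# then reproject to make a locally accurate map
va_proj <- state_plane(state = "VA")
va <- st_transform(us_states(states = "Virginia"), va_proj)
```

The State Plane lookup is the piece Jordan contributed: rather than hunting down an appropriate projection by hand, you can ask for the one surveyors already use for that state.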
In his elegantly written account, Kyle Roberts takes his readers on a tour of Evangelical Gotham. The book has a strong chronological through line, explaining how evangelicals went through three distinct periods in bringing their message of conversion and reform to New York City (10–11). While the spatial organization of the book is less obvious from its table of contents, Evangelical Gotham is a book that is fundamentally organized around place. This may seem like an obvious point to make about a book that focuses on a single city, but my aim is to show how Roberts uses spatial concepts.
Evangelical Gotham is explicit in its debt to the concept of “crossing and dwelling” articulated by Thomas Tweed. Roberts makes this clear in his first chapter, where he writes about spiritual autobiographies at the end of the eighteenth and beginning of the nineteenth centuries. He takes a fresh approach to this topic by giving conversion narratives a meaning both in geographic and spiritual space. Evangelicals crossed religious boundaries by converting, but many of them did so at the same time that they were crossing the ocean or moving to the city. And once they arrived in New York, these newly converted evangelicals had to dwell not just in the city but also had to find a church or “community of faith” (27).
Today Stephen Robertson and I are announcing a new conference and peer-reviewed proceedings titled Current Research in Digital History, hosted (and funded) by RRCHNM and George Mason University’s Department of History and Art History. You can read the announcement at the RRCHNM website, and here is our brief description from the conference website:
Hosted by the Roy Rosenzweig Center for History and New Media, Current Research in Digital History is an annual one-day conference that publishes online, peer-reviewed proceedings. Its primary aim is to encourage and publish scholarship in digital history that offers discipline-specific arguments and interpretations. A format of short presentations provides an opportunity to make an argument on the basis of ongoing research in a larger project.
As a number of people have pointed out, most notably Cameron Blevins, digital history has a problem in that it rarely makes arguments or interpretations that advance conversations in historical fields. We intend for this conference and proceedings to be one part of an effort to encourage those kinds of arguments.
CRDH is also intended to be a publication venue for what we might call preliminary results. Let me give you a specific example. Kellen Funk and I have been working on tracking the migration of law in nineteenth-century U.S. codes of civil procedure for some time. While we are getting close to a final publication about those results and methods, we have had the basic argument down for quite a while. A venue like CRDH would let us not just present but also publish a mix of preliminary conclusions and method on the way to our larger argument. While that’s not the only kind of paper we anticipate digital historians might want to bring to CRDH, we do think preliminary results is one significant category that would be served by this kind of short-form publication.
We’ve tried to think through very carefully what this conference and publication should look like, soliciting advice from a number of different people in the field. We’ve written up a fuller explanation of CRDH (PDF here). We hope you’ll take a look and then send us a paper for consideration.
At his blog, Andrew Goldstone has posted a pre-print of his essay on “Teaching Quantitative Methods: What Makes It Hard (in Literary Studies)” for the forthcoming Debates in DH 2018. It’s a “lessons learned” essay from one of his courses that is well worth reading if you’re teaching or taking that kind of a course in a humanities discipline. This semester I’m teaching my fourth course that fits into that category (fifth, if you count DHSI), and I can co-sign nearly everything that Goldstone writes, having committed many of the same mistakes and learned some of the same lessons. (Except over time I’ve relaxed my *nix-based fundamentalism and repealed my ban on Windows.) Here is a response to Goldstone’s main points.
What’s the relationship between the Secretary of Education’s views and the religious denomination in which she was educated? In a Religion and Politics post, Abram Van Engen takes aim at simplistic news stories which draw a straight line between the Christian Reformed Church (Calvinism! predestination! capitalism!) and Betsy DeVos. It’s a good introduction to why you need to know more than the bullet points about a religious group to explain how it has shaped someone:
That’s the thing about religious traditions: They can be highly formative without yielding predictable results.
The Programming Historian has sent out a call for contributors to write several proposed new lessons. If you have expertise in one of these areas, consider writing one of these tutorials. The Programming Historian has an excellent collection of widely used tutorials, with a well-thought-out open peer-review process.
I hadn’t quite realized until my colleague Stephen Robertson pointed it out to me that what unites these proposed lessons is a call for historical argumentation. The Programming Historian is exactly right to think that there is a big gap between data analysis methods and making historical arguments, and that what computational historians need to do is hammer out what that process of historical thinking looks like.
But gathering data isn’t research in its own right. We need analysis. And that’s where we believe The Programming Historian needs to go next. We’re looking to move beyond the gathering stage, because you know how to get the data (thanks to our authors), and you’ve cleaned it to a brilliant shine (thanks again to our authors!). But what do you do with it next? How do you perform the types of analyses that lead to publishable historical research articles and monographs? How do you do digital research?