In this week’s issue of The Economist, the Lexington columnist wrote an article titled “The Perils of Constitution-Worship,” with the summary, “One of the guiding principles of the tea-party movement is based on a myth.” After I recommended the article, a colleague at Brandeis asked, what’s so wrong with myths anyway? This is a question well-worth asking, and answering.

Myths are probably inescapable. We all believe myths, in the sense that we all use stories about the past that are more believed than proved as a means of explaining and arguing about the present. Society could scarcely function if there were not some myths held in common.

Though myths may be inescapable, it does not follow that all myths are equally true, nor that they are all equally useful. Some myths align more closely with what can be known about the past and present than others, and to that extent, the more truthful a myth is, the more useful it is. For myths are useful because they make prominent certain features of the past, but in so doing they blur other features. In short, myths are vices that must be confronted, not virtues to be lauded. The task of the historian is to reshape myths to conform as nearly as possible to the truth—to be a translator speaking for the past.

To take the case of the Tea Party, since that is the particular issue at hand, it seems clear to me that the myths the Tea Party tells are particularly pernicious, because they are particularly (and demonstrably) false. One might spend a great deal of time explicating exactly why they are false in the light of several generations of historical research, but I think the Economist article, among others, does a good job of explaining this succinctly.

But there is another problem with myths, beyond their deviance from truth. The problem is that the past can never speak entirely for the present. The past can, and should, be made to speak to the issues of the present. But each generation must confront new situations. As Leonard Levy (formerly a professor at Brandeis University) observed, the framers of the Constitution spoke plainly to the issues they anticipated, but were silent on issues that they could not anticipate, including the most troubling issues of today. Or, as a statesman, though not a Founding Father, pronounced in a time far worse than our own:

The dogmas of the quiet past are inadequate to the stormy present. The occasion is piled high with difficulty, and we must rise with the occasion. As our case is new, so we must think anew and act anew. We must disenthrall ourselves, and then we shall save our country.

Yesterday afternoon, I tried to buy a Qur’an. I used to own two copies of the Qur’an (or, to be precise, a translation of the meaning of the Qur’an). One I bought as a textbook, and read, and one was mailed to me by Muslims trying to proselytize, but a search of my bookcases and attic revealed that neither copy had survived recent relocations. And so, I walked to the center of town to buy another copy.

I’m not the only person in the market for a Qur’an this week. According to Amazon, the Oxford World’s Classics edition is currently at 72 and climbing in their bestsellers list, meaning that Amazon is selling a lot of Qur’ans. Demand is being spurred by two related events: plans to build an Islamic community center in New York City, and plans to burn Qur’ans in Gainesville, Florida.

The two controversies are mirror images of each other—not exactly so, but near enough. In the one, Muslims plan to build a community center that an overwhelming majority of evangelical Christians oppose. Many defend Muslims’ First Amendment right to the free exercise of their religion and the free use of their private property, while questioning the wisdom of trying to build near Ground Zero. In the other, evangelical Christians plan to burn Qur’ans on September 11 (also the last day of Ramadan), an action opposed by all Muslims and, one hopes, a majority of Americans. It’s pretty clear that the Qur’an burning is protected speech under the First Amendment, and in any case it cannot be prevented in advance, but many have decried the insult to Muslims and the potential danger to U.S. troops overseas. Both cases are linked by the problem of the sacred and the problem of pluralism.

The sacred is enlisted in both controversies. The chief argument advanced against building the Islamic community center is that Ground Zero is sacred, a claim some 56% of Americans believe. Ground Zero’s alleged sacredness is not the sacredness of a church, but something between the sacredness of a roadside shrine for a car accident and that of a battlefield. Like the shrine, Ground Zero is sacred to the cult of the victims, and it has an out-and-about, everyday kind of sacredness. But the more apt comparison is to a battlefield. Officially, according to Presidents Bush and Obama, the war pits the American people against terrorism. It would be naive, though, to miss that many Americans are fighting or wish to fight a different war: one that pits a bastard mix of Christianity and American civil religion against global Islam.

Terry Jones, pastor of Dove World Outreach Center in Gainesville, is just one of those Americans. Had he not been the one to receive the media’s attention, doubtless someone else would have gotten it. Calling his threat to burn the Qur’an on September 11 a publicity stunt misses the point, though. Jones is doubtless after publicity, but his beliefs and his hate are sincere; the media amplified them, but it did not create them. Jones’s plan is to desecrate the Qur’an by burning it, a double insult to Islam, on the one hand, and to rational secularists who regard books as sacred, on the other. (Librarians and intellectuals regularly protest book burning.) Yet the act of burning a Qur’an is, in a way, an inadvertent acknowledgment that it is sacred. Before an object can be desecrated, it must first be sacred. It is the boundary that gives meaning to the transgression, and the transgression is the most visible acknowledgment that a boundary exists.

These controversies over a sacred space and a sacred book raise the question: How does an open society like the United States deal with the problem of multiple religions?

The answer, in a word, is pluralism. Pluralism is not the vague notion that all religions are the same. That idea, propounded by both the religious and the irreligious, is an insult to all religions, for it ignores the actual meaning of their beliefs and practices and reduces them to a set of platitudes. Pluralism, rather, acknowledges that religions are different and irreconcilable, that their practitioners will frequently be at odds, and that they have every right to attempt to proselytize one another. Yet pluralism makes room within society for multiple religions. Making room is, in one sense, dependent on law. The state provides ordered liberty: the protection of rights and the maintenance of order. It is for this reason that the Apostle Paul wrote that rulers are “the ministers of God” who are “not a terror to good conduct, but to bad” and commanded Christians to pray for rulers “that we may lead a peaceful and quiet life, godly and dignified in every way.” But making room is, in another sense, dependent not on legal restrictions but on voluntary restraint. It may be legal to burn a Qur’an, but it is wrong by the code of an open society and by the imperatives of the gospel. There’s a saying that your right to swing your arms ends where my nose begins. If that’s the case, then it’s also true that my right to profane ends where your sense of the sacred begins.

Pluralism of this type works when allegiance to the state does not conflict with allegiance to religion. In other words, it works when I can be both Christian and American, and you can be both Muslim and American, without violating our religious consciences to maintain our political loyalty. The problem in both cases is that religious loyalties and political loyalty are confused because of American civil religion, a syncretistic, quasi-Judeo-Christian political religion intended to promote loyalty to the state. Protestant-Catholic-Jew, to borrow Will Herberg’s phrase, may have equal right to claim Americanness, but Muslims have not yet been welcomed into the cult of civil religion. In both the case of the community center and the Qur’an-burning, the conflict is not primarily between Islam and Christianity, but between Islam and American civil religion. This is apparent from the choice of Ground Zero and September 11 as holy space and holy day, respectively, and in the patterns of opposition to the community center as desecration, strongest among Christians but claiming a broader following. A genuine pluralism cannot exist where a civil religion superimposes itself on religious and political loyalty.

That is why I tried to buy a Qur’an yesterday. Reading about other religions is hardly a panacea, but it’s a better step than burning them. I had to resort to Amazon, though, because my attempt to buy a Qur’an at a bookstore failed. The religion section of one used bookstore held a dozen Bibles and even a Tanakh, but no Qur’an. The other bookstore was closed, ironically enough, for Rosh Hashanah.

What is identity? What is a self? How has selfhood changed over time?

Those are the questions that Charles Taylor, a philosopher with a historical method, sets out to answer in Sources of the Self: The Making of the Modern Identity (Harvard University Press, 1989). His book is an investigation of how the modern sense of what it means to be a person came into being through the influences of philosophers and popular thought. To that end, he first lays a philosophical foundation, then offers a history of selfhood that is somewhere between straightforward intellectual history and a history of mentalités.

Taylor’s basic argument is that the concept of the self is linked to morality. Morality means not simply a set of claims about what one ought to do or not do to be moral; rather, it means what one ought to be or not be. Morality is related to the self by what Taylor calls a framework. How one thinks about oneself depends (1) on what one considers to be the Good and (2) on how one relates to that Good. If this all sounds very philosophical, it is. But the insight is rather simple, though profound: you can only think of yourself in relation to what you consider most important.

Taylor’s insight that selfhood is dependent on the Good and on one’s relation to it permits him to ask how those conceptions of self have changed over time, as people held to different goods and related to them differently. The bulk of the book is spent on a sweeping exposition of the changes in the self, running from Plato to Augustine to Descartes to Locke to Montaigne to Protestant Christianity to the Victorians to the present. In brief, the transition Taylor describes is from an external sense of the self to an interior sense of the self. It is also a transition from a self that finds meaning in extraordinary deeds to one that finds meaning in everyday actions.

I am fairly averse to using theory in the practice of history, but I find Taylor’s work to be extraordinarily useful. At the minimum, it provides a set of questions of interest to the historian of religion. How did people conceive of who they were, and how did their religion influence their conceptions? His work also provides a way of thinking about the question of religion and the self. Taylor argues that the self is undefinable apart from the Good, so it follows that religion, which defines the Good and how to relate to it for many people, is a powerful key to understanding people’s sense of self. This is a way to use religion as a lens to another topic, yet without treating religion as something merely epiphenomenal. Then too, Taylor provides a fairly compelling narrative of the differences in the self over time. I suppose that for myself, it is also appealing that Charles Taylor is himself a Catholic, and so is working to some degree within the Christian philosophical tradition.

I hope in time to make all this philosophizing here a bit more concrete in my own work on conversion. Conversion, after all, is a fundamental change in the self, usually taking the form of a new relationship to God, the world, and the community. To take just two examples, what is the difference between a seventeenth-century Puritan in New England undergoing the anguish of conversion and a twentieth-century Cuban Catholic in Miami venerating Our Lady of Charity? The difference is one of ritual and creed and community, to be sure, but also of conceptions of the self.

For some time the news has been filled with debates over a proposed Islamic community center (not a mosque) to be built near Ground Zero in New York City. On the one hand, the usual suspects in the Republican Party and, more surprisingly, the Anti-Defamation League, have opposed the center as an insult to the victims of the September 11th terrorist attacks. On the other hand, American Muslims have defended the center as a perfectly legitimate outreach into the community, as the Islamic equivalent of the YMCA.

The debate, however, has taken a far nastier turn. Opponents of the community center have attempted to use the power of the state to prevent its construction. One attempt has tried to declare the existing building on the site a landmark, to prevent the property from being developed. Another attempt, only slightly less invidious, has offered state funding if the center would be built further from Ground Zero. (It is worth noting in passing the hypocrisy of politicians and voters who rail against state interference in the economy and society yet have no qualms about using state power to interfere in this case.)

The question at issue is not whether one would wish for an Islamic community center to be built at the proposed location. The question is this: Should the state have the power to prevent the free use of private property for a religious purpose?

The answer is no. If that answer is not obvious, then I recommend Mayor Michael Bloomberg’s August 3 speech and Melissa Rogers’s essay for the Washington Post. That answer is required by the basic principles of our republic and, I believe, by the implications of the gospel. To their shame, it is an answer too few Christians, and in particular too few evangelicals, have been willing to give.

Grigg, John A. The Lives of David Brainerd: The Making of an American Evangelical Icon. New York: Oxford University Press, 2009. 276 pages. ISBN: 978-0-19-537237-3.

As John Grigg observes, David Brainerd is second only to Jonathan Edwards in evangelicals’ memory of the Great Awakening. His often-republished diary has been a staple of evangelical devotional literature since Edwards published his Life of Brainerd in 1749. Academic historians take note of Brainerd too, both for his role in the controversies surrounding the Awakening and for his missionary efforts among the Delaware Indians. Grigg’s The Lives of David Brainerd is a history of both Brainerds. The book’s first section contains a careful reconstruction of Brainerd’s life, while the second section examines the memory of Brainerd since his death.

To write his life of Brainerd, Grigg has recovered the few fragments and leaves of Brainerd’s writings that are extant. He fills in the details with accounts of Brainerd’s hometown, of Yale, and of other missionary efforts to the Indians. Grigg’s argument is that Brainerd stood uneasily on the boundary between the radical and the moderate supporters of the Great Awakening. Brainerd’s expulsion from Yale was not precipitated solely by his intemperate outbursts against Yale leaders, but was a consequence of Brainerd’s attempt to minister to New Haven’s separatist congregation while trying to receive the imprimatur of a Yale degree. Nor was Brainerd forced into a mission to the Indians because he could not get a ministerial position. Rather, Brainerd turned down two offers of a position to continue his mission. Brainerd intentionally based his missions work on a mix of the radical and moderate Awakening. By the time of his death, Brainerd had mostly learned to shed the racist assumptions of his day and to think of people in terms of religion and not race, identifying himself with “godly Indians” rather than “white heathens.”

Grigg’s history of the memory of Brainerd runs from his death to the late twentieth century. He demonstrates that Jonathan Edwards used his Life of Brainerd as an argument in several debates, presenting Brainerd as an opponent of the enthusiastic excesses of the Awakening, as a denouncer of Arminianism, and as a model of life after conversion for his congregants. John Wesley, on the other hand, did his own editing of Brainerd’s journals to provide a model to itinerant Methodists of a minister who was unmarried, ascetic, a proper steward of money, and inured to hardship. Early nineteenth-century evangelicals, notably William Carey in Britain and Adoniram Judson in the United States, also claimed Brainerd, adding a mythical betrothal between Brainerd and Jerusha Edwards in support of their belief that missionaries should be married. The student missionary movement at the turn of the twentieth century, led by men such as E. M. Bounds and A. J. Gordon, held up Brainerd as a model of prayer. In the second half of the twentieth century, Brainerd inspired missionaries like Jim Elliot, and also stood as a prototype of the campus radical and the civil rights leader. Grigg thus uses Brainerd’s memory to reveal significant changes in American evangelicalism and missions.

This book is a revised dissertation, and readers unaccustomed to the genre will be jarred by the historiographical debates, not all of which have been excised from the main text. Because Grigg is obligated by the scarcity of Brainerd’s writings to turn to other sources, the text occasionally wanders from its topic, as in the needlessly long summary of Edwards’s writings. One wishes that the author had not been so generous to those who made up versions of Brainerd out of whole cloth as to insist that there runs a “thread of truth beneath the surface” (190) of their fabrications.

Quibbles aside, Grigg’s book is precisely the critical study of Brainerd that has been needed by both historians and evangelicals. As such, it is likely to become the standard work on Brainerd.

Bendroth, Margaret Lamberts. Fundamentalists in the City: Conflict and Division in Boston’s Churches, 1885-1950. New York: Oxford University Press, 2005.

Carpenter, Joel A. Revive Us Again: The Reawakening of American Fundamentalism. New York: Oxford University Press, 1997.

Larson, Edward J. Summer for the Gods: The Scopes Trial and America’s Continuing Debate Over Science and Religion. New York: Basic Books, 1997.

Marsden, George M. Fundamentalism and American Culture. 2nd ed. New York: Oxford University Press, 2006.

Watt, David Harrington. A Transforming Faith: Explorations of Twentieth-Century American Evangelicalism. New Brunswick, NJ: Rutgers University Press, 1991.

In 1980 George Marsden published Fundamentalism and American Culture, a history of the first decades of American fundamentalism. The book quickly rose to prominence in the historical profession, provoking new studies of American fundamentalism and contributing to a renewal of interest in American religious history. The book’s timing was fortunate, for it was published as a resurgent fundamentalism was becoming active in politics and society. The rise of the Christian right provoked the question: where did the movement come from?[1]

The historical interpretation of fundamentalism that was then current could not provide an adequate answer. In the standard narrative, fundamentalism was a reaction by late nineteenth- and early twentieth-century evangelical Christians against modernizations in American society, such as industrialization, Darwin’s theory of evolution, and changes in popular mores. Fundamentalists resented modernization because it clashed with their out-of-date worldview and literal faith in the Bible and Christian doctrine. Within the American denominations, fundamentalists fought modernists in losing battles over doctrines such as the inspiration of the Bible, the creation of the world, and the virgin birth of Jesus, but fundamentalists were eventually driven from their denominations in defeat. Fundamentalists also mounted a bid to retain control of American society, most notably through laws prohibiting the teaching of evolution in public schools. Their attempt was soundly defeated and ridiculed at the 1925 Scopes trial. After the trial, fundamentalists were demoralized and in retreat, sufficiently marginalized that they could never again make a serious effort to control the nation. By defining fundamentalism as a reaction against modernism, the standard narrative implicitly predicted that fundamentalism would disappear as the United States completed modernizing.

When fundamentalism reappeared in the 1970s, the flaws in that interpretation were revealed. In its place, a new body of historical work, including Marsden’s book, redefined fundamentalism not as evangelicalism reacting against modernism, but as evangelicalism adopting modernism. The first historian to make this argument was Ernest R. Sandeen in The Roots of Fundamentalism. Sandeen saw fundamentalism as a movement descended from American and British evangelicalism with the additions of dispensationalist eschatology and an explicit definition of the verbal inspiration of the Bible. George Marsden expanded on Sandeen’s definition by unpacking the significance of those additions. Dispensationalism divided history and biblical prophecy into a series of eras, or dispensations—a type of scientific classification. By defining biblical inspiration as extending to the very words of Scripture, fundamentalists created a new hermeneutic which treated the Bible as a source of data to be mined and scientifically analyzed. Marsden further observed that fundamentalism added borrowings from the Holiness movement and from Scottish commonsense realism.[2]

The implications of Marsden’s redefinition were radical. He revealed that fundamentalism was not rural, Southern, and pre-modernist, but rather urban, often Northern, and aggressively modern. Its relationship to modernism led to a paradox in fundamentalists’ identity. On the one hand, fundamentalists identified as heirs to the Protestant establishment of the nineteenth century. On the other hand, they saw themselves as displaced from power by a new modernism, though partaking of what they found desirable in it. Marsden’s explication of this paradox had great power to explain fundamentalists’ struggle to control the United States at the same time that they felt alienated from it. Even though Marsden ended his book in the 1930s, his thesis could explain how fundamentalism, moribund after the Scopes trial, could rise again in the 1970s.[3]

Joel Carpenter extended the history of fundamentalism beyond the 1930s in his book Revive Us Again. Carpenter agreed with Marsden that fundamentalism was not merely a reaction against modernism. Carpenter’s insight was that fundamentalists’ defeat at the Scopes trial did not necessarily mean that fundamentalism retreated after the 1930s. Rather, Carpenter looked at how fundamentalists created their own network of extra-denominational institutions, most notably Bible colleges that turned out thousands of pastors, evangelists, and missionaries. Also important in linking fundamentalists together were publishing houses, radio shows, and Bible and prophecy conferences. In one sense the creation of these networks was a retreat, because fundamentalists increasingly withdrew from “the world” and from liberal denominations, thus forming their own subculture. Still, because most fundamentalists tended to form para-church institutions rather than denominations, one could identify as a fundamentalist and contribute to fundamentalist organizations even while remaining in mainline denominations. Fundamentalists’ withdrawal was driven far more by their theology of separation from the world than by any marginalization at the Scopes trial.[4]

Even in the period that Carpenter studies, fundamentalists refused to give up their claim to cultural dominance and instead planned for a revival. What was surprising about 1970s fundamentalism, then, was not its strength or its claims to cultural primacy, but the decision of leaders like Falwell to give up withdrawal in favor of political activism. Even political activism, though, was a part of fundamentalists’ heritage. They were heirs to the evangelical reform movements of the nineteenth century, such as temperance, abolition, and benevolence. Fundamentalism was also a way to be modern while critiquing the reformers of the Progressive era. Anti-evolution crusades were, for example, an attempt to defend the doctrine of creation, but they were also critiques of efforts to reform society scientifically, such as eugenics.[5]

Legal historian Edward J. Larson took up the study of fundamentalism and anti-evolution in his Pulitzer prize–winning book Summer for the Gods. The book is a history of the Scopes trial in Dayton, Tennessee, in 1925, covering both the trial and its aftermath. Larson pointed out that Dayton was not particularly fundamentalist, but that boosters drummed up the trial as a publicity stunt to put their town on the map. The trial did not turn into a religious showdown until Clarence Darrow, a famous trial attorney who was a public agnostic, and William Jennings Bryan, a politician and leader of the anti-evolution movement, took the case as lead counsel for the defense and the prosecution, respectively. Darrow and Bryan, along with reporters like H. L. Mencken, turned the trial into a cause célèbre that tested the validity of fundamentalist Christianity, climaxing in Darrow’s questioning of Bryan on the witness stand as an expert on the Bible.[6]

Larson proves that the Scopes trial was not the defeat for fundamentalists that historians have portrayed it as. Indeed, fundamentalists won the trial and took it as encouragement in their crusade. On appeal, the Tennessee Supreme Court used a technicality to avoid fining John Scopes but also to avoid striking down the anti-evolution law, which remained on the books for decades. The rewriting of the history of the Scopes trial into a victory for modernism did not occur for decades, most notably in the writings of Charles Beard and in the Broadway play and film Inherit the Wind, produced in the 1950s as a fictionalized critique of McCarthyism.[7]

Larson’s book makes it possible to write a history of fundamentalism that could escape the undue influence of the Scopes trial. For too long, historians have relied on the trial as a milestone marking the periodization of religious history. Because it is extraordinary, the trial is a useful lens for studying American religion, but because it is extraordinary, the trial cannot be taken as typifying the course of fundamentalism. What is needed is a history of fundamentalism that takes the trial into account, yet which refuses to periodize the history of fundamentalism around the mistaken notion that it was a turning point. By doing so, historians can move beyond the narratives of declension and revival into which religious history too often falls.[8]

Marsden’s and Carpenter’s cultural histories provide one way of situating fundamentalism, whether in decline or revival, within American culture. In A Transforming Faith, David Harrington Watt provides a complementary approach. Where Marsden and Carpenter explicate fundamentalists’ distinctive subculture, Watt examines how American culture shapes and controls the culture of fundamentalism. His approach depends on the same definition of fundamentalism as modern, yet it recasts the inquiry in a profitable new way.[9]

Watt examines how a subculture can maintain its identity within a dominant culture, a hegemonic relationship he terms “asymmetrical power.” Watt argues that American fundamentalists since the 1950s, for all their withdrawal from and critiques of American culture, bought into the major characteristics of the dominant culture. Watt begins with an essay on Bill Bright’s evangelistic tract “Have You Heard of the Four Spiritual Laws?” pointing out how the text markets Christianity as a commodity. He extends similar analysis to other parts of evangelicalism. Evangelical teaching on marriage and the family was often indebted to feminism, while evangelical counseling owed as much to psychology as to the Bible. Evangelical politics bought uncritically into conservative, free market ideas. Watt’s title points to evangelicalism not as a faith that transforms culture, but as a faith that was transforming under culture’s influence.[10]

Marsden’s, Carpenter’s, and Watt’s books are cultural histories that attempt to examine fundamentalism as a whole, to come to grips with its essential characteristics while remaining within the particulars of history. A local history that points in a promising direction for new research is Margaret Bendroth’s Fundamentalists in the City. Bendroth’s book is a fine-grained study of fundamentalist congregations, leaders, and events in Boston from the 1880s to the 1950s. Her chapters on Tremont Temple and Park Street Church in particular make good use of demographic data and show a fine sensitivity to the local motivations and methods peculiar to each congregation. Defining fundamentalism as “oppositional” evangelicalism, Bendroth finds that fundamentalists in Boston did not fight primarily against theological liberals, many of whom called Boston and Cambridge home, but rather against Catholics. Fundamentalists’ battles were inextricably linked to local politics, which in Boston were defined by a statehouse controlled by Protestants and a city hall controlled by Catholics. This kind of insight, which could not be deduced from a national history, is precisely the promise of local histories of fundamentalism. Bendroth’s study also does valuable work in confirming the conclusions of broader studies, for example, by illustrating how Gordon College was a crucial nexus for Boston fundamentalists, and by showing how fundamentalism flourished even in Boston in the periods when it was supposed to have been in decline.[11]

Bendroth’s history might well be taken as a model by future historians of fundamentalism, who must fill up the deficit of local histories of fundamentalism. To be sure, there have been many highly particular books on recent fundamentalism. Some of these are exposés, whether as journalism or as memoir. Of more scholarly use are David Watt’s brief ethnographic studies of three Philadelphia congregations in the 1990s, and James M. Ault’s sociological study of a Baptist congregation in 1980s Worcester, Massachusetts. These studies are all recent, though, and they are not histories. What is needed are local studies of fundamentalist congregations or institutions, researched in the tradition of ethnographic history and focusing on the congregants rather than the leaders. If the sources are extant, numerous congregations present themselves as options: J. Frank Norris’s First Baptist Church in Fort Worth; William Bell Riley’s First Baptist Church in Minneapolis; John R. Rice’s Sword of the Lord conferences; A. C. Dixon’s Moody Church in Chicago or Metropolitan Tabernacle in London; and D. Martyn Lloyd-Jones’s Westminster Chapel in London.[12]

If the history of fundamentalism could benefit from going local, it could also benefit from going transatlantic. Some of the British connections of fundamentalism are well known, such as the tours in Britain by evangelists from D. L. Moody to Billy Graham. Other known connections include how American fundamentalism imported dispensationalism and the literal interpretation of biblical prophecy from John Nelson Darby and the Plymouth Brethren, and later imported apologetics and fiction from C. S. Lewis. Some pastors, such as A. C. Dixon, held pulpits in both Britain and the United States. Less well known, though, is how British and American Christians interacted on a regular basis, and how fundamentalism in America and conservative evangelicalism in Britain functioned in their different political and cultural circumstances. Some excellent work has been done in tracing evangelicalism in the Anglophone world, most notably the series A History of Evangelicalism, edited by Mark Noll and David Bebbington. A transatlantic study along those lines could free the study of American fundamentalists from what may be invalid assumptions about its peculiar Americanness. Such a transatlantic history would be a return to Ernest Sandeen’s insight that dispensationalist theology could be understood only by linking British and American history.[13]

Historians of fundamentalism have made many advances since the 1980s. They have dispelled mistaken interpretations of fundamentalism and contributed a great deal of knowledge about the movement’s culture. These gains might be consolidated in a history told finally without dependence on the Scopes trial. And they might be advanced by pursuing further studies of fundamentalism in both its local and its transatlantic contexts.

  • George M. Marsden, Fundamentalism and American Culture, 2nd ed. (New York: Oxford University Press, 2006).↩

  • Ernest R. Sandeen, The Roots of Fundamentalism: British and American Millenarianism, 1800-1930 (Chicago: University of Chicago Press, 1970); Marsden, Fundamentalism and American Culture, 16-18, 43-71, 80-100, 102-22. Marsden's helpful definitions of the terms fundamentalism and evangelicalism and their varying usage over time are on pages 234-35. Fundamentalists themselves have put much effort into defining their movement, for example, David O. Beale, In Pursuit of Purity: American Fundamentalism Since 1850 (Greenville, SC: Unusual Publications, 1986), 3-12. These definitions tend to be normative rather than descriptive.↩

  • Marsden, Fundamentalism and American Culture, 6-8.↩

  • Joel A. Carpenter, Revive Us Again: The Reawakening of American Fundamentalism (New York: Oxford University Press, 1997), 13-32, 57-75, 124-40. For a study of fundamentalists' appropriation of modern mass culture, see Douglas Carl Abrams, Selling the Old-Time Religion: American Fundamentalists and Mass Culture, 1920-1940 (Athens: University of Georgia Press, 2001).↩

  • Carpenter, Revive Us Again, 32, 54, 110-23, 187-232.↩

  • Edward J. Larson, Summer for the Gods: The Scopes Trial and America's Continuing Debate Over Science and Religion (New York: Basic Books, 1997), 92-93, 101-5, 116-21, 198.↩

  • Larson, Summer for the Gods, 225-66.↩

  • An example of a work which purports to displace the Scopes trial as "antievolution's defining moment" is Michael Lienesch, In the Beginning: Fundamentalism, the Scopes Trial, and the Making of the Antievolution Movement (Chapel Hill: University of North Carolina Press, 2007), which nevertheless spends only two chapters tracing the history of anti-evolution movements after Scopes.↩

  • David Harrington Watt, A Transforming Faith: Explorations of Twentieth-Century American Evangelicalism (New Brunswick, NJ: Rutgers University Press, 1991). Most histories of fundamentalism, like those of Marsden and Carpenter, have tried to explicate fundamentalism's subculture. Another fine work in this mode is Randall Herbert Balmer, Mine Eyes Have Seen the Glory: A Journey into the Evangelical Subculture in America (New York: Oxford University Press, 1989). A recent book that, like Watt, is more concerned to show how American culture has influenced religious subcultures is Matthew Avery Sutton, Aimee Semple McPherson and the Resurrection of Christian America (Cambridge, Mass: Harvard University Press, 2007).↩

  • Watt, A Transforming Faith, 4-7, 15-32, 49-154.↩

  • Margaret Lamberts Bendroth, Fundamentalists in the City: Conflict and Division in Boston's Churches, 1885-1950 (New York: Oxford University Press, 2005), 3-9, 99, 101-24, 155-76.↩

  • For expose as journalism, see Kevin Roose, The Unlikely Disciple: A Sinner's Semester at America's Holiest University (New York: Grand Central Publishing, 2009). For expose as memoir, see Frank Schaeffer, Crazy for God: How I Grew up as One of the Elect, Helped Found the Religious Right, and Lived to Take All (or Almost All) of It Back (New York: Carroll & Graf, 2007). Schaeffer's book is notable only for being the most shameless of the ex-fundamentalist memoirs. For a far more sensitive and sympathetic memoir, used as a means of introduction to the history of fundamentalism, see Brett Grainger, In the World But Not of It: One Family's Militant Faith and the History of Fundamentalism in America (New York: Walker, 2008). David Harrington Watt, Bible-Carrying Christians: Conservative Protestants and Social Power (New York: Oxford University Press, 2002); James M. Ault, Spirit and Flesh: Life in a Fundamentalist Baptist Church (New York: Alfred A. Knopf, 2004).↩

  • Three volumes of A History of Evangelicalism, published by Inter-Varsity Press, have appeared. The two projected volumes, The Disruption of Evangelicalism: The Age of John R. Mott, J. Gresham Machen and Aimee Semple McPherson, to be written by Geoff Treloar, and The Global Diffusion of Evangelicalism: The Age of Billy Graham and John Stott, to be written by Brian Stanley, will cover the period of American fundamentalism.↩

  • Moreton, Bethany. To Serve God and Wal-Mart: The Making of Christian Free Enterprise. Cambridge, MA: Harvard University Press, 2009.

    In To Serve God and Wal-Mart, Bethany Moreton looks at a series of big questions using the world’s biggest corporation as a lens. Her book is a cultural, not a business, history of Wal-Mart. Rather than chart Wal-Mart’s rise through its innovations in technology, logistics, and business practices, Moreton explains how Wal-Mart adopted and modified the culture in which it was founded. This approach permits analysis of a range of subjects, including gender in the workplace, the rise of a service economy, Christianity and free enterprise, business training in colleges and universities, and business promotion of free enterprise in the United States and abroad. This broad inquiry is motivated by two central questions: How did a discount retailer from the Ozarks become the world’s largest corporation, and what motivates the workers employed by Wal-Mart?

    The answer to these questions, according to Moreton, lies in the distinctive culture of the Ozarks. Both Wal-Mart’s customers and its employees went from a subsistence-based agrarian economy to a consumer-oriented service economy while skipping a production-oriented industrial economy. Wal-Mart thus incorporated elements of an agrarian economy into its business and labor practices. The retailer had to convince customers who had long valued frugality to become consumers. Wal-Mart reconciled the competing ideas of consumption and thrift by selling consumer goods at the lowest prices in sparsely decorated stores that let the customers serve themselves, in contrast to the ornateness of full-service city department stores. Wal-Mart also overcame the prejudice of its Ozarks constituents, not many generations removed from the Populists, against corporations. It did so by adopting the corporate structure, which the Populists had themselves adopted, while avoiding the taint of “foreign” capital by raising funds first from the Walton family and then from other Ozark businessmen.

    Like its customers, Wal-Mart’s employees carried over patterns from the agrarian economy. They regarded employment as a way to subsistence, rather than a way to wealth. Many employees, especially women, took jobs at Wal-Mart as a means of supporting a family farm, or of supplementing the family income. Women’s labor at Wal-Mart was undertaken as a “second job” in conjunction with their labor as homemakers and childcare providers. The types of jobs that appealed to women were therefore part-time service positions, rather than managerial careers. Work at Wal-Mart was thus highly gendered: women worked as clerks, while men worked as managers. Wal-Mart consciously maintained these distinctions by requiring that managers frequently relocate, and it took advantage of the general undervaluing of women’s labor in order to pay them subsistence wages.

    These labor practices, however, were not resented by employees. Rather, they appealed to workers because they reproduced familiar patterns of labor from families. They also appealed to Christian concepts of “servant leadership.” This idea was that through service to others, one became a leader and fulfilled his or her duty to God. The idea was in the first instance applied to personal and church relationships, but was also explicitly applied by Christians to business. Service to customers and co-workers thus became a way of turning work for Wal-Mart into work for God.

    Moreton’s cultural reading of Wal-Mart is perceptive and nuanced. Nevertheless, it suffers from several problems of interpretation and evidence. First, separating the business history of Wal-Mart from its cultural history leaves the reader unable to evaluate the relative weight to be assigned to cultural and economic causes. Put bluntly, did Wal-Mart really prosper because of its cultural adaptation, or because its goods were plentiful and its prices cheap? A simple test is the observation that Wal-Mart has spread far beyond its rural, Christian roots in the Ozarks; it cannot have profited solely from the customer base described in this book. It might also be pointed out that Wal-Mart, while in part a service industry, is also a mover of industrial goods, and so does not fit so neatly in the category to which Moreton assigns it. Then too, Moreton has a powerful explanation for the appeal of Wal-Mart to workers, but one suspects that economic necessity is at least as powerful a motivation.

    Second, the book lacks chronological and geographical specificity. This is not to suggest that this topically arranged book ought to have been cast in a narrative, chronological structure. Rather, within each chapter Moreton cites evidence from many years, without explaining how the highly anecdotal evidence is or is not typical. There is little sense of how Wal-Mart developed over time. At the same time, Moreton cites evidence from stores without placing them in their culture outside of the Ozarks. The effect of this lack of specificity is that the book often casts Wal-Mart as the actor, rather than making it plain who the human actors were behind the corporation.

    Third, it is not always plain what the connection is between Wal-Mart and the other institutions whose histories Moreton sketches. These institutions are intended to illuminate the culture around Wal-Mart. A case in point is the sketch of the Fellowship Bible Church in Arkansas. The corporatist, entertainment-centered ministries of the church supposedly demonstrate the connection between Wal-Mart and evangelicalism. But the connection is so loosely drawn (some Wal-Mart executives have attended the church) that any other mega-church might have been substituted. For example, Willow Creek Community Church outside of Chicago and Saddleback Church in California (pastored by Rick Warren, whose Purpose-Driven Life is sold in Wal-Marts) have much in common with Fellowship Bible Church, yet are in no way typical of the Ozarks. Other than the concept of servant leadership, the connections between Wal-Mart and evangelicalism are not well drawn.

    Any book dealing with the combination of conservative Southerners, Wal-Mart shoppers, and evangelical Christians is ripe for scholarly disdain. Moreton has successfully avoided that potential pitfall, instead analyzing her subjects with insight and sympathy. Still, by knitting the three groups together so loosely, Moreton may have unintentionally perpetuated these stereotypical connections without adequate evidence.

    Digital humanities is a spectrum. To put it another way, all humanities scholars use digital practices and concepts to one degree or another, even those who do not identify as digital humanists. Working as a digital humanist is not one side of a binary, the other side of which is working as a traditional scholar.

    Consider a few examples: one historian keeps notes and transcribed documents in MS Word documents so that they can be searched. A literary scholar uses a print-on-demand machine to get a physical copy of a book or newspaper scanned by Google. A medievalist uses a library or archive website to read a document that would otherwise require a trip to Europe. A professor making assignments for a class posts readings to Blackboard. A graduate student in a hurry uses Amazon’s “Look Inside” feature to verify a footnote. A history department circulates papers for a workshop via e-mail.

    Scholars do all of these things every day. The examples are unremarkable: using these methods does not imply that the scholar works in the digital humanities. They are unremarkable, though, precisely because they are ubiquitous.

    Moving from these practices to the digital humanities is a difference of degree, not of kind. It’s only one step from searching Word documents to using Zotero, and from there it is only a few more steps to text mining. A scholar who uses online digital collections is that much closer to curating an online collection. A professor who can post readings to Blackboard can create a course website using WordPress. Circulating papers for comment via e-mail might be a second cousin to posting your manuscript online for comment, but the two types of review are related.

    My argument that all scholars now use digital practices to some degree does not deny that the digital humanities fundamentally transform scholarship. I’m simply arguing that we’re already being transformed—all of us.

    Defining digital humanities as a spectrum might help resolve one of the contradictions I see in discussions of the digital humanities. On the one hand, these discussions often lament the barriers between digital scholars and traditional scholars, with worries about how new scholarship can be recognized as valid and how digital humanists can be tenured. I do not want to minimize these concerns at all; they are some of the pressing problems of the digital humanities. But the solution to these problems is not a rhetoric of binaries. I think the answer will come from what is, on the other hand, digital humanities’ ethos of inclusion. It’s the ethos that says, I’m a coder and you’re not, so let me teach you, or let me build the tools you need. It’s the ethos that says texts and tools should be available for all and that publicly funded research and instruction should be publicly accessible.

    This concept of a spectrum can turn the ethos of inclusion into a tool of persuasion. Does someone question whether digital humanities work counts as scholarship? Demonstrate how the work advances or refines techniques implicit in more traditional scholarship. Does a scholar doubt the value of identifying as a digital humanist? Point out how that scholar is already using digital methods and concepts.

    In other words: we’re all digital humanists now. Persuading other scholars of that is a way to spread what is best in the ethos of digital humanities.

    Have you heard the saying “What would Jesus do?” Who hasn’t? In the 1990s the phrase became a fad among evangelical Christians, who printed the abbreviation WWJD? on bracelets, t-shirts, and posters, spawning in turn a host of mocking pop culture imitations. WWJD can provide a useful lens for looking at evangelical consumer culture of the late twentieth century. But the phrase can also serve as a parable about contemporary copyright law.

    The phrase “What would Jesus do?” originated in a novel titled In His Steps: What Would Jesus Do?, published in 1897 by Charles M. Sheldon. Sheldon was a Congregational minister in Topeka, Kansas, and a Progressive concerned with Christianity’s relationship to the social politics of his day. His book is a parable about the Reverend Henry Maxwell, a minister to a wealthy middle-class congregation. Maxwell comes into contact with a poor man who owes his plight to industrialization, and is thereby jarred from his complacency about social issues. He challenges his congregation to seriously consider the question “What would Jesus do?” in all their actions. As his congregation takes up the challenge, their views on politics, class, race, charity, and corporations fall in line with the social gospel.

    The long reach of Sheldon’s In His Steps was due in large part to its message, which appealed to both mainline and evangelical Christians, and to its simple though not particularly literary prose. But its long reach was also due to a simple mistake by Sheldon’s publisher: the book was never properly copyrighted.

    In the 1890s, copyright was opt-in, not opt-out. In order for a book to be copyrighted, the publisher had to register it with the federal government. Chicago Advance, Sheldon’s publisher, incorrectly registered the copyright, and so the book fell into the public domain. Many publishers issued their own editions of In His Steps, which sold widely. About the sales of the book, Paul Boyer writes:

    Owing to a defect in the copyright, sixteen different publishers soon had editions of the book in the market, and by the summer of 1897, 100,000 copies had been sold. And this was merely the beginning. While Sheldon’s own later estimate of 30,000,000 sales is overdrawn, Frank L. Mott, tabulator and chronicler of American best-sellers, suggests that a figure of 6,000,000 for total world sales would probably not be far amiss, with perhaps 2,000,000 of these in the United States.

    To be sure, In His Steps would not have been a bestseller simply because of a mistake in the copyright registration. Sixteen publishers would not have issued the book unless it appealed to the reading public. Nor can the resurgence of WWJD? in the 1990s be attributed to the book’s being in the public domain—by that time the book would have long been out of copyright no matter what. And it is worth noting that the book was a best-_seller_: the failure of copyright registration did not mean that the book was available for free, just that Sheldon made hardly any royalties from it. Even today, the book is available for purchase in many editions, though it is available free from several sources.

    But would the explosive sales and long-standing popularity of Sheldon’s In His Steps have been possible if the book had been subject to, say, the Copyright Act of 1976, or the Sonny Bono Copyright Term Extension Act, or, worse, the Digital Millennium Copyright Act? If the point of copyright is “to promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries,” perhaps there is something to be learned from the case of Charles Sheldon and his novel—a fitting parable for a Progressive reformer, indeed.

    Murphy, Andrew R. Prodigal Nation: Moral Decline and Divine Punishment from New England to 9/11. Oxford: Oxford University Press, 2009. 232 pages. ISBN: 978-0-19-532128-9.

    Most Americans do not know the word jeremiad, but it is a familiar term to scholars of early American religion. To them the term indicates a type of sermon preached in seventeenth-century New England. These sermons lamented that New England had broken the covenant with God made by its founders. If New England continued its decline, God’s judgment loomed, but if New England repented, then it would receive God’s blessing. But even if most Americans do not know the term jeremiad, they are probably familiar with the genre. In sermons or political speeches, they have heard the idea that America is a Christian nation that has disobeyed God and so faces divine judgment. The old genre of the jeremiad is still very much a part of American discourse.

    In his recent book, Prodigal Nation, Andrew Murphy has done much to advance our understanding of the American jeremiad. In the first part of the book, he gives the history of three jeremiads: the Puritan jeremiad of the seventeenth century, the jeremiads before and during the Civil War, and the jeremiads of the Christian Right from the 1970s to the present. Murphy’s book is the first work (to my knowledge) to study the jeremiad over the entire scope of American history. Murphy has not written the whole history of the jeremiad—he leaves out revolutionary America, the early republic and the War of 1812, most of the jeremiads of the South, and the entire century between the Civil War and the 1960s—but by considering the jeremiad over the long term, Murphy has given us a better understanding of the genre than can be gained from examining it in only one period.

    In the second part of the book, Murphy analyzes the American jeremiad with the tools of a political scientist. He cogently argues that there are two American jeremiads, which he terms the traditionalist jeremiad and the progressive jeremiad. The traditionalist jeremiad, which is typically religious, calls for a return to the literal past through repentance and renewed obedience. This type of rhetoric, which could be stereotyped as a sermon preached from 2 Chronicles 7:14 in November or July, most obviously fits the genre of the jeremiad. But Murphy also identifies a progressive jeremiad. That jeremiad, which is typically secular, calls not for a literal return to the past but for a renewal of America’s past ideals. For example, Martin Luther King Jr.’s “I Have a Dream” speech called for a return to the ideals of racial equality implicit in the Declaration of Independence, the “text” for his sermon.

    Identifying a particular jeremiad as traditional or progressive can be difficult, given the constant realignments of conservatism and liberalism in American political history. But by pointing out that two competing rhetorical traditions share the same genre, and thus some of the same basic assumptions, Murphy has provided a key insight into American politics and religion, both present and historical. Perhaps that insight can contribute to refuting the false assumptions of the jeremiad tradition, and to bridging the increasing gap between conservatives and liberals in political discourse.