Advance Access: Method as Tautology in the Digital Humanities

My article, “Method as tautology in the digital humanities,” has gone up on Literary and Linguistic Computing’s advance online publication area. If you have an institutional affiliation that gives you access to Oxford Journals, it can be found here:

[It appears Oxford are now offering the article free: download PDF here]

In the article I develop a concept of method in computer-assisted literary criticism, using some of my recent work with the OED as a case study. So it’s about modelling allusion and influence, but also—and more centrally—it’s a reflection on the intellectual activity of modelling and method-making, and on the relationship of models and methods to the humanistic research questions that give rise to them. I develop a series of metaphors for thinking about the relationships among researcher, research question, research object, and digital method, and about their relation to the disciplinary concerns of literary criticism. Along the way I discuss the “discipline” of DH, its inter-disciplinarity, and how it might relate to other disciplines and relate them to one another. For me the article is a kind of first step in establishing guiding principles (for myself, though I hope others will also find them of interest) for the modelling of figurative language, and for the application of digital methods to literary criticism more generally.


  • Gord Higginson wrote:

    Fascinating paper. When you say the “grammar of the critic…[r]ich ambiguity—simultaneous, undecidable multiple meaning—is the literary feature that stands as the challenge par excellence to computer modelling in literary criticism,” is this a kind of “Turing test” for the digital method?

  • I suppose one could think of it that way, but it’s not the way I think of it, since I’m less interested in computer intelligence, or apparent intelligence, and more interested in how human intelligence can work with computer models and their results. What I mean by the sentence you quote is that it is going to be very hard to get a computer to detect the kind of ambiguity I mean with anything like the success of even those two rudimentary programs discussed in the article. It may come down to, or at least be figured in, the computer’s “digitality” in the etymological sense: the computer can be in only one of two states at a time (i.e. 1 or 0), whereas an ambiguity is in two states, and in neither of those two states, at the same time.