By Invitation
Reflections on the Digital Humanities: A Conversation with Mark Algee-Hewitt

The Digital Humanities has existed as an institutionalized field of research at Stanford for more than a decade now, drawing undergraduates, graduate students, and researchers from around the globe. This series, a collaboration between Arcade and Stanford’s Center for Spatial and Textual Analysis, spotlights leading research in the digital humanities at Stanford, and asks key contributors to reflect on the expansion of the field, its culture, and the major misconceptions that remain.

Why has this field sparked so much public engagement in its projects and debates? How has the digital humanities changed what it means to be a more “traditional” humanist? And how is the field engaging new developments in technology like artificial intelligence?

In this interview, Interventions editor Charlotte Lindemann speaks with Mark Algee-Hewitt, Associate Professor of English and Director of the Stanford Literary Lab.


CHARLOTTE LINDEMANN: What initially drew you to the digital humanities?

MARK ALGEE-HEWITT: I've realized lately that saying that something drew me to the digital humanities would imply that I knew of the existence of the digital humanities when I was first drawn to it. I actually wound up getting degrees in both computer science and in English—computer science being the practical one, although I've always enjoyed coding, and English being the thing that I actually wanted to do—although, certainly, during my university undergrad, there was no cross-pollination between the two at all. There were people doing encoding or tagging, annotation, that sort of thing; that work was happening in the 90s, but not exactly the kind of work that I do.

And then, during my master's degree at the University of Western Ontario, now called Western University, and then during my PhD at NYU, I found more and more opportunities to begin blending these two interests. I took a course at my master's institution called “The Book Page and the Computer Screen” that was about electronic literature and the ways in which the dissemination of literature via the internet and computers was affecting the kinds of things that could be read and written. And then I became involved in “The Canterbury Tales Project” at NYU; our branch of it was dedicated to transcribing “The Clerk's Tale” in a proprietary markup language based on XML, with the idea that we would release, on CD-ROM, a version of “The Clerk's Tale” that would have all of the variations across all of the manuscript copies, all packed into one, and people could look at the one they wanted. And again, none of these are exactly where I wound up, but they were hints that such things were at least in the air and possible.

For me, the real revelation came because I had proposed a dissertation project on 18th century aesthetics, particularly the sublime. I really wanted to take a different approach, something still very compatible with my way of thinking today. Most work on aesthetic concepts from the 18th century, particularly the sublime, began by going to the greatest hits, the famous statements from your Burkes, your Kants, etc. I was really interested in how the word was being used on the ground, as it were. When an author chooses to describe something as “sublime” but isn't very invested in the philosophical theory behind that, why are they choosing that word? Why did that word become so prominent at that time? For my initial, very naïve, dissertation proposal, I suggested that I was going to read everything that used the word sublime in the 18th century that I could find, and draw conclusions based on that. Around this time, this would have been 2003 when I submitted that proposal, we suddenly had access to these new databases of collections of historical texts, including Eighteenth Century Collections Online, published by Gale, which contained an, at the time, almost unheard-of 156,000 18th century books. If you search those for the term “sublime,” you learn that a third of them contain that word. All of a sudden, my great desire to read everything became entirely impossible. But the coincidence of the availability of these texts in machine-readable format made me realize that the kinds of things I had learned as an undergraduate in computer science, that is, ingesting the file, taking it apart, finding individual text strings, could be put to use for exactly the thing that I wanted to do.

Originally, my method was very simple. I thought, I'm just going to write a script that will parse through all these texts, find every single time the word “sublime” is used, and see where that's happening and what the author is saying. Over time, it’s become more sophisticated as I learned and developed new techniques. But it was that revelation, and the fact that I actually did wind up finding interesting, new and surprising patterns of language that I wouldn't have been able to find otherwise, that really led me down this path. At the time, I still saw myself very much as a literature person who dabbled in quantitative and computational things.
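The first pass described here, scanning each text for the word “sublime” and recording what surrounds it, is essentially a keyword-in-context (KWIC) concordance. A minimal sketch in Python of that general idea (the file names, window size, and punctuation handling are illustrative assumptions, not Algee-Hewitt's actual script):

```python
import re

def kwic(text, keyword, window=8):
    """Return every occurrence of `keyword` with `window` words of context on each side."""
    words = text.split()
    hits = []
    for i, w in enumerate(words):
        # Strip surrounding punctuation so "sublime," still counts as a hit.
        if re.fullmatch(keyword, w.strip('.,;:!?"()'), flags=re.IGNORECASE):
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append((left, w, right))
    return hits

# Typical use: loop over a folder of plain-text files and print each hit, e.g.
# for path in pathlib.Path("texts").glob("*.txt"):
#     for left, hit, right in kwic(path.read_text(encoding="utf-8"), "sublime"):
#         print(f"{path.name}: ...{left} [{hit}] {right}...")
```

Even this crude token-splitting approach is enough to surface where and how a word is used across tens of thousands of documents, which is the pattern-finding work described above.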

As I moved on from my dissertation, I graduated in 2008 and spent a year at Rutgers teaching 18th century and Romantic period classes, and then wound up in a postdoc at McGill University with the “Interacting with Print” group, which included Andrew Piper and Tom Mole, both of whom were also early actors in the digital humanities space. This would have been around 2010, when that term became not just very popular but very prominent through a series of colloquia and presentations at conferences like the MLA, which I was lucky enough to participate in. So all of these sort of coincided, to where I realized this is actually what I care about, what I'm passionate about, and the rest is history.

I don't know if I've ever heard the fullest form of that story before, but it all makes sense in hindsight.

It's a great story. You know, when I tell it, it all seems very teleological, like this was destined to happen as opposed to a series of random coincidences that worked out.

We could say the same thing about the phrase “digital humanities.” Historically, there has been some controversy around that term and what to call this field. Would you mind giving me your definition of the digital humanities, and describing the field to an outside audience in broad strokes?

I would define the digital humanities as any scholarly activity that either uses computational or otherwise digital methods to investigate questions of humanities interest, or, conversely, uses humanities methods to investigate questions surrounding digital culture, art, and design. The thing with that, of course, is that it's a very, very broad definition that encompasses a whole host of things. It was actually Stanford's own Glen Worthey, around the 2011 Alliance of Digital Humanities Organizations conference, who coined the idea of DH as a “big tent” that included all of these things. But of course, that tent is getting more and more packed as more and more avenues of investigation open, which is why we see various subfields, and now conferences dedicated to those subfields, cropping up. The corner of the digital humanities universe that I inhabit, which is very much the former definition that I gave you, that is, using computational and quantitative methods to investigate humanities questions, now goes under various labels like “cultural analytics” or “computational literary studies” (CLS) or “quantitative text analysis.” I think what we're seeing is an evolution of the field where “digital humanities,” that very, very broad term, is still a useful shorthand for people outside, but it's being supplanted, at least at a local level, with the various identifiers for subfields that do a better job at representing what's going on in its various corners.

So, take us to your corner, what's top of mind for you in the field? What are you currently working on?

Those are two different questions. What's top of mind in the field, certainly in my corner of it, is all the advances in large language models that go under the name of AI in the popular imagination. Those are objects, algorithms, models, whatever you want to call them, that have a strong relationship to the kinds of methods that people in my area of the digital humanities have been using for years. They're based on, very naively, word embedding models that emerged in the early twenty-tens and became much more powerful towards the end of the decade with the advancement of transformer-based models and bidirectional encoder representations from transformers (BERT), and now with large language models (LLMs) they've become much more powerful still. That's a real topic of interest in my area, both investigating how they're working, because honestly, we have no idea, but also using them to simplify or make more accessible the kind of work that we've done. As it turns out, they are uniquely bad, in some ways, at doing the work that I do.

So that's just very broadly what I’m thinking about in terms of the field. In terms of myself, personally, there's a couple of projects that I'm working on. One of the things that the digital humanities has given me is the ability to really be invested in methodological interventions, as opposed to various period-specific ones. I've become what I've been calling period agnostic. I still work in the 18th century to some degree. For instance, I’m doing a study of failed concepts in the long 18th century using a series of computational models. Unlike “human rights” or “political economy,” which were successful compound or abstract concepts, I’m looking for the ones that look like they should have come together, but in fact fell apart for various reasons. And then at the other end of things, I've been working on a project on contemporary climate fiction, that is, novels and other entertainment media that take the environment, and particularly climate change, as their primary subject or at least a subject of concern, and looking at how they communicate information about the climate within the fictional worlds that they create and why that may or may not be effective for various kinds of public education.

I’ve heard you present work from both; they’re really exciting. I also know these are two of many irons in the fire for you currently. Could you tell me a little bit about the role collaboration plays in your research?

I honestly think one of the best things to come out of what's been called the digital turn in humanities studies is a renewed emphasis on collaboration. One of the weaknesses of traditional humanities research is that it has been so resolutely uncollaborative, as opposed to work in the sciences or the social sciences or other forms of STEM. We’re missing that understanding and appreciation of collaborative work: that various experts who represent various domains can come together and bring their expertise to bear on the same problem, generating kinds of analyses and results and understandings that we just can't do on our own. And I mean, quite frankly, the humanities has been so invested in that great mind theory of the lone scholar, traditionally male and white, laboring alone in a room for years before producing their magnum opus, that I think we've missed a lot of research opportunities. I think there's still a space for that. I still do work on my own, and I'm lucky enough to be able to do that, in that I have enough of a range of skills that I can do projects by myself, and sometimes I enjoy that. But certainly, I think the things that I am probably proudest of, or feel are the most important, are the projects that I've done from a collaborative standpoint.

The Literary Lab plays a huge role in how I think about collaboration, the Literary Lab being first and foremost a collaborative research group around computational literary studies that involves me as the director, ideally one or two postdocs, and a number of graduate students, and we collaborate among ourselves and produce projects that have between two and ten people on them, depending on the project. It also gives us a platform to collaborate with other research groups, both within the US and internationally, as well as with individual scholars who come to us with questions. One tricky thing within the digital humanities is that there still is a lack of understanding about collaborative research models in the humanities themselves. This is a problem both in terms of when we think about evaluating work, but also in creating collaborations. I frequently get people coming to the lab who don't approach us in terms of collaboration, but on a sort of service model: they have a thing that they want us to do for them. And that's not how collaboration works. And so the lab has given me an opportunity to help foster a more collaborative spirit and an understanding of what it means to collaborate, even among scholars who don't do so traditionally.

So, how do students fit in here?  

Students, whether graduate or undergraduate, come to the lab for a couple of things. Sometimes they want to just participate in projects, but some of them also want to pick up skills so that they can do their own work. The Lab provides a platform for allowing both things to happen. We have organized more traditional, pedagogical instruction under the rubric of the lab. We've run workshops. We've run intensive courses on digital humanities methods. I teach a track within the English department on literary text mining, and the lab has played a role in that. It also gives us a space for a kind of hands-on mentoring where students can come to the lab without necessarily a lot of experience, and then start working on projects that they care about, which gives them a platform to actually gain that experience as they work alongside people who already know how to do the kinds of things that they want to do. That creates a way for them to learn what they need to learn so that they can go off and do their own cool digital humanities projects.

You also mentioned that the lab collaborates with other researchers and research institutions outside of Stanford. Can you tell me a bit about what you have brewing in that regard?

I would say, actually, the digital humanities world has always been very, very international, partly because we were early adopters of technologies which allowed for a certain ease of international collaboration. For instance, we just uncovered, as we’ve been cleaning up our lab archives on our storage facility, a cache of meetings that we recorded from 2012 to 2014. We could record them because we were using a collaborative video platform (back then it was When-to-Meet), such that all of our meetings could have international participants. There's also something about the kind of collaborative research that we do that lends itself to international collaborators. The ethos of at least computational literary studies or cultural analytics has been one of scale. That is, using these methods, we can investigate phenomena at scales that are traditionally unavailable to historical research practices in the humanities. For example, Andrew Piper is fond of saying lately that if you can show something in English, that's all very well and good, but you haven't exactly shown that it's a real phenomenon until you show it across all of the different languages where it happens. And while multilingual analysis has always been a difficult thing within my field, I think right from the get-go we have been really interested in seeing these kinds of cross-cultural or transnational phenomena. So we do have a number of collaborations, both nationally and internationally.

We have had collaborations with a group at the University of Crete, in Rethymno, about newspaper short stories in the early 20th century in English and Greek. We have an ongoing collaboration with a group in Romania; they've been looking at creating a Romanian corpus that in some ways echoes the kinds of novel corpora that we have. We have an ongoing collaboration with a group at the École normale supérieure in France run by Thierry Poibeau, who have taken inspiration from our work on canonical literature and are running similar experiments. We've collaborated with Dominique Pestre, who's a French economist working on a project about the World Bank. We have an ongoing collaboration with a group in Aarhus, who are working on their own digital humanities infrastructure. We've worked with groups in the UK. I've worked with the Cambridge Concept Lab as well as Cambridge Digital Humanities. We have an ongoing collaboration with the Turing Institute in London. And I just returned from Darmstadt, where we have an official collaboration, which is actually our second German collaboration. We have some ongoing work with Fotis Jannidis in Würzburg. And our current collaboration, for which we have an international seed grant, is with the forTEXT lab run by Evelyn Gius at the Technical University of Darmstadt. We've discovered we have some very similar interests there; for example, they’re working on a project on scene detection and event detection, which very much meshes with our project on domestic space in the 19th century. And they're working on concepts of category theory, which actually meets our project on literary theory. Through a series of exchanges and co-run conferences, we've been working out how we can bring these projects together and do something bigger than we could do on our own. Otherwise, I'm sure I've forgotten a ton of different European collaborators.
There was a point at which you could sort of point to a European country, and we had a collaboration going on there. But those are sort of the big ones. Oh, we had a number with Poland as well. There's a group there that does stylometry historically that we've worked a lot with.

It occurred to me that one thing that might facilitate your ability to have so many collaborations running simultaneously is the sharing of methods and data sets, and the kind of open access ethos of this field. I know that there have been strong ideological reasons for this open access model in the digital humanities, and then there's also been some pushback.

Yeah. Of course. I'm a big believer in radical transparency and open sharing of resources. I think that's really, really important. One of the things that the digital humanities has allowed is a kind of transparency of method. When your method is, at least in some part, in code, that is something that we can make available, and we have made available, to other researchers. We actually had a project on literary suspense at the Lab, and we shared our code with a group working at the Higher School of Economics in Moscow, who actually duplicated our methods for a corpus of Russian literature. And that was really, really successful. Obviously, in a perfect world, we'd also be able to share the data resources, the full-text data resources. But that's been much trickier because we have to negotiate copyright laws not just within the US, but also more broadly. Certainly, a lot of work happens on pre-copyright-era material. I am myself interested in historical literary texts, so that's been a good thing for me. But a lot of people are working with 20th century and 21st century textual and other media material, and I've certainly been doing some of that as well. There have been recent advances in copyright law, like the exemption to the Digital Millennium Copyright Act specifically for text and data mining, which allows us to at least extract and work with copyrighted texts for the purposes of text mining. It's still a very complicated field, and there are a lot of legal negotiations to be worked out and a lot of limits that are placed in the way, but I think the digital humanities are leading the way towards a certain kind of radical sharing and radical transparency of research across institutions and across universities that will not only enable really important research to happen, but also do good work in terms of equity and inclusion, particularly for groups that have less access to resources than we're privileged to have, say, for example, at Stanford.

Yeah, that's a great point. I’ve also noticed, anecdotally, that digital humanities scholarship seems to find public audiences more readily than I would expect from most specialized academic fields. How do you understand this? Do you see any promise in the digital humanities as a way to engage the public in humanities research, or any natural points of intersection between digital and public humanities efforts in the university?

The phrase “public humanities” feels new as a concept because it’s not something that we're used to based on the 20th century university structure. But if you hop back in time 200 years, what we now consider humanities scholars and public intellectuals were then playing a really prominent role in the cultural conversation. The “public humanities” is a longstanding, historical phenomenon that we've sort of lost touch with. Certain kinds of specialization, the changing role of the university, and the public's orientation to education and knowledge have all played a role in that. And I think you're right that the digital humanities is in touch with a certain kind of public engagement in ways that traditional humanities has sometimes struggled with. I think there's a couple of reasons for that. For one, there is, for better or worse, a sort of appetite in the public for data, in that, currently, the public understands data as being the thing that makes arguments. The traditional methods of the humanities are much more wrapped up in things like anecdote or close reading or critical analysis. Those are powerful tools, and we certainly need to hang on to them and leverage them, but there's less understanding of how they work in the general public than of, for example, a graph that shows a very clear trend. The digital humanities can bridge that gap by bringing data analyses to bear on problems that concern the public, like our project on climate change, for example. Ideally, more traditional humanities methods will become more digestible when they're accompanied by these kinds of data-driven explanations. I think the combination is a uniquely powerful one. And you see it more and more. For example, The New York Times has a whole organization that's dedicated to data design and the creation and publishing of data. At the same time, digital humanities is able to deal with issues and ideas that can be more prevalent and topical for the public at large.
Digital humanities just moves faster in many ways than the traditional humanities. As someone with a foot in both, I really feel that. And so we can deal with things that are a little more topical.

I wonder if there's also something in how the more traditional humanities methods can offer a kind of critical apparatus through which we can look at data more skeptically than we might in statistics departments or other departments that use data as evidence. I know that the relationship between the digital humanities and the more traditional humanities has at times been quite fraught, especially in the early days of the field. More traditional humanists could be skeptical of the way graphs can lend an objectivity to the presentation of evidence. Maybe digital humanists have that same advantage over data scientists, of being always implicitly critical of their own methods, in a way that gives this scholarship urgency or traction in a world where data seems to be taking over everything.

I think that's exactly right. As humanists, we are trained in the methods of critique, and critique is the dominant method within traditional humanities discourse, or at least, certainly in literary studies. And yeah, that does mean that we've always been skeptical of the results that we produce. We sort of serve as a bridge between the kind of veneer of objectivity that STEM lends and the kind of critical engagement that we see in the humanities. We do work to bring those two things together.

Are there still misconceptions about the field or about the methods you use that you could speak to?

There are some still, although as the digital humanities has grown, it's become, by and large, more accepted within academic circles. Certainly, in the early years, there was a great degree of skepticism around the work that we did. And that vein has continued. And it's important that it remains; it's good to be skeptical. There are some avenues of criticism that I think are less helpful. I don't think, for example, the digital humanities is a neoliberal shill, gathering up all of the university resources in order to transfer them to our tech overlords, as some have argued. Another misconception that remains is the idea that what we're after is objectivity, like we're out to prove things about literature, art and culture. That goes hand in hand with a misunderstanding of the way we treat abstraction and aesthetics, at least within my branch of the field. Coming from a humanities background, I'm deeply, innately skeptical of any sort of attempt to prove anything. Rather, the methods that we employ create new objects that we can then turn a critical eye on for analysis, so we can see a pattern that exists that we wouldn't be able to access through traditional humanities methods. Once we have that pattern, then we can start thinking critically about it.

I'm teaching a class right now on digital humanities methods, and I always tell my students that there are three parts to a digital analysis. First, there's the initial formulation of the question, the gathering of the corpus, and the selection of method. Then, there's the computational work to produce results. And finally, there's the work of analyzing the results. From the outside, I think people tend to emphasize that middle step as the hard one, and the different one. But that's actually the easy one, right? It requires a certain kind of specialized knowledge, but once you have that, it's pretty routine. It's actually the beginning and the end, which fall very much more in line with traditional humanities forms of thought, that are the difficult parts. The middle step is just a different way to access information, so that we can include a broader perspective in some of the analyses that we do.

That's a great way to put it. Of course, the humanities has always been about asking questions, on the one hand, and interpretation, on the other. And so is the digital humanities.


Is there anything you've observed, about how this field, over the last however many decades it’s existed, has changed the humanities? In other words, how the humanities, the traditional humanities as a sort of construct, might have responded in any way to the advent of the digital humanities, or how being a “traditional humanist” might have become a category in a way that it hadn't before been one?

It may still be too early to tell. This is a kind of inverse of the question, where do you see this field in another five years or ten years or whatever? And I've always said that there are two possibilities, right? Either what we do fissions off and becomes more of its own thing, so that we see the rise of departments of digital humanities or some branch thereof, and there are some of those that exist. Or, what we see is the collapse of the digital humanities as a distinctive entity as it gets folded back into humanities departments, but becomes a core set of skills and methods of those departments. And I still think we're too early to figure out which one's going to happen. But I think we're finally at the point where there's a general understanding that the field is not going away. Whether or not our colleagues in the more traditional humanities fields, or who employ more traditional methods, are going to start doing large-scale text analysis is another question, although that’s probably not going to happen. But then not everyone's a Marxist, so that's fine. But they are being forced to reckon with the idea that we can make broader-based claims in more responsible ways than we have been able to before. Scholars who aren't necessarily hardcore digital humanists now have ways of accessing information that's important to their own research. For example, we have a number of people who come to the lab who work on traditional humanities projects, but they realize that for some of the questions they have, instead of just sort of speculating wildly and overgeneralizing, they can ask questions that quantification might be able to help them answer. And we're seeing more and more of that. What we're seeing is, essentially, the penetration of digital methods into the traditional humanities.

I can see that. Personally, as you know, my scholarship sticks more to the traditional humanities track, diving into computational analysis now and then when I see fit. But my engagement with the digital humanities has made me think a lot about the status of the example in traditional humanities criticism. Even in the dissertation I’m writing, I can make a claim, and then I provide two examples, three examples, and then I move on and make the next claim. But once I know that there's this huge corpus of texts available to me that I could query, why is three examples enough? Or what about five? Or what about 25 or 100, or more? It's changed the way I look at the structure of an argument.

I think that's exactly right. I've never been one of those evangelists that says everything must become distant reading, because I don't think it should; I think there's still an enormous amount of value to humanities methods. But, interestingly enough, I would see myself very much along the same vein as you just described yourself: as a traditional humanist that dips into the digital when they want to show something. It's just that I tend to do that earlier and faster than most people. But at my most polemic, I would say that what I would love the digital humanities to do, honestly, is disallow certain kinds of unsupported, wildly speculative humanities arguments that pretend to have the status of fact. For example, the old school Victorianist article that will give two examples from Trollope and say, you know, based on these, we can understand urbanization in the 19th century. And no, we can't, we never could. But if we gather together enough examples of literature—and this is the other thing, and you were saying this earlier: computation, interestingly, teaches us to be skeptical in new ways—so, if I gather together all sorts of examples of literature, do an analysis for city terms and that sort of thing, maybe I can't say something incredibly expansive about urbanization in general, but I can say something much more responsible about how urbanization was represented in the Victorian novel. Right? It forces me to be responsible to the scale of the claim I'm making, even while it lets me make that claim in a much more rigorous way than I could before.

Right, and thoughtful about its parameters. Maybe we're no longer talking about the Victorian novel; we're talking about 900 novels published in England between 1800 and 1900. I'm convinced. So, can I ask you, looking forward, what are you most excited about in terms of how the discipline is evolving, or changes in your own work?

Yeah. Of course, like it or not, AI is a subject that the public is interested in, and that we're all being forced to confront. For the field of the digital humanities, it presents a different set of opportunities. Like I said, it's, interestingly, uniquely bad at the kinds of things that the digital humanities are really good at. So, for example, I was just doing an experiment when I was in Germany, and I passed one of the AI engines, like a really good one, the text of Edgar Allan Poe's “The Black Cat.” And I said, how many nouns are there? And it gave me an answer, and that answer was wrong. And I asked how many words were italicized, and it told me none of them. And then I pointed out, you know, how you would recognize an italic word, and I asked it again, and it pointed out way too many of them.

CHARLOTTE LINDEMANN: I did the same thing with counting words inside quotation marks. It has no idea, no matter how hard you try to train it.

MARK ALGEE-HEWITT: Not a clue. And that's fascinating. That's fascinating to me because, on the one hand, what AI is teaching us is that Wittgenstein was right: that meaning does come through context and not through an abstract series of rules. AI shows that language is the basis for kinds of understanding beyond what we might have thought, but, on the other hand, that is also interestingly insufficient. There are a lot of research opportunities around this. Whether you're interested in the public interface between AI and the computational world in general, in questions about language and knowledge and understanding, or in questions of epistemology, this is a really interesting area to be both excited about and deeply skeptical of, and I would count myself very much among the AI skeptics.
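The deterministic counting that trips up the language models here is exactly the kind of task rule-based text processing handles reliably. A minimal illustrative sketch in Python's standard library (the example sentence is invented, and this is not code from the Literary Lab):

```python
import re

text = 'He cried "Let me out!" and then whispered "please" to the wall.'

# Find the spans enclosed in double quotation marks, then count
# the word tokens inside them (naive tokenization on letters/apostrophes).
quoted_spans = re.findall(r'"([^"]*)"', text)
quoted_words = [word
                for span in quoted_spans
                for word in re.findall(r"[A-Za-z']+", span)]

print(len(quoted_words))  # 4  ("Let", "me", "out", "please")
```

The rules are brittle (curly quotes or nested quotation would need handling), but they are exact and auditable, which is the contrast being drawn with the statistical models.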

At the same time, changes in the rules around copyright and the expansion of international collaborations mean that a number of our European collaborators are finally getting access to very sizable volumes of text, which is beginning to loosen the Anglophone grip on large-scale digital analysis. This also means that we're able to study cross-cultural and transcultural phenomena in ways that we just couldn't before. That, to me, is really exciting. Finally, the rise of various internet cultures is, as you are well aware, something I've long been interested in: take, for example, fan fiction. Most reading that happens today is happening on fan fiction websites. That is a whole new area of research; we can study phenomena in the consumption and publishing of literature in ways that we just never could before. And all of those different things are places where this field is opening up new avenues.

In terms of very specific things, you can see this to some degree with the Sawyer seminar that we're running through the Center for Spatial and Textual Analysis, to which we've invited a number of prominent scholars representing a number of different fields to talk about both the promise and the dangers of data. I think that's really interesting, and something that we're having great conversations around. We also have under review a contract for a book coming from the Literary Lab that will both show off some of the work that we've been doing and provide some scaffolding in terms of how that work was produced. It both gives research results and thinks critically about how those results were produced. That's actually really exciting: the kind of meta-reflection that this field lends itself to.

At the field’s inception, there was this understanding—Stephen Ramsay, in 2011, very famously said at an MLA panel that I happened to be at that you had to code to be a digital humanist—that if you couldn't code, you weren't in the field. That's gone away, by and large. I think you still need to be able to understand what's being done, but you don't necessarily need to know the nuts and bolts. That said, I do believe in the concept of buy-in. You don't need to know how to code, but you need to know enough that you can interpret the results. The most important thing for me is that you understand the decisions that were made in the analysis, because there are so many and they can be so arbitrary. I see the digital humanities very much as a method, as a means to an end: it lets me get at stuff that I've always wanted to know about and just couldn't through sitting and reading by myself. But I am also equally methodologically invested. I'm fascinated by the methods. I'm fascinated by what they teach us about what it means to read and write and think.
