I did my PhD in Comparative Literature at Stanford. There is likely no university in the US with a culture more antithetical to the humanities: Stanford embodies the libertarian, technocratic values of Silicon Valley, where disruptive innovation has crystallized into a platitude* and engineers are the new priestly caste. Stanford had massive electrical engineering and computer science graduate cohorts; there were five students in my cohort in comparative literature (all women, of diverse backgrounds, and quite a large group in contrast to the two- or three-student cohorts in Italian, German, and French). I had been accepted into several graduate programs across the country, but felt a responsibility to study at a university where the humanities were threatened. I didn't want the ivory tower, the prestigious rare book collection, the ability to misuse words like isomorphism and polymorphic because they sounded scientific (I was a math undergrad), the stultified comfort that Wordsworth and Shelley were on the minds of strangers on the street. I wanted to learn what it would mean to defend a discipline undervalued by society, in an age where universities were becoming private businesses catering to undergraduate student consumers and the rising costs of education made it borderline irresponsible not to pursue vocational training that would land a decent job coding for a startup.
Stanford's very libertarianism also enabled me to craft an interdisciplinary methodology—crossing literature, history of science and mathematics, analytic philosophy, and classics—that more conservative departments would never entertain. This was wonderful during my coursework, but became my Achilles heel when I had to write a dissertation and build a professional identity those same conservative departments could recognize. I went insane, but mustered the strength and resilience required to complete my dissertation (in retrospect, I’m very grateful I did, as having a PhD has enabled me to teach as adjunct faculty alongside my primary job). After graduation, I left academia for the greener, freer pastures of the private sector.
The 2008-2009 financial crisis took place in the midst of my graduate studies. Ever tighter departmental budgets exacerbated the identity crisis the humanities were already facing. Universities had to cut costs, and French departments or Film Studies departments or German departments were the first to go. This shrank the already minuscule demand for humanities faculty, and exponentially increased the level of anxiety my fellow PhDs and I experienced regarding our future livelihood. In keeping with the futurism of the Valley, Stanford (or at least a few professors at Stanford) was at the vanguard of promoting alternative career paths for humanities PhDs: professors discussed shortening the time to degree, providing students with more vocational communications training so they could land jobs as social media marketers, and extolling academic administration as a career path equal to that of a researcher. Others resisted vehemently. There was also a wave of activity defending the utility of the humanities to cultivate empathy and other social skills. I've spent a good portion of my life reading fiction, but must say it was never as rich a moral training ground as actual life experience. I've learned more about regulating my emotions and empathizing with others' points of view in my four years in the private sector than I had in the 28 years of life before I embraced work as a career (rather than a job). Some people are really hard to deal with, and you have to face these challenges head on to grow.
All this is context for my opinions defending the utility of the humanities in our contemporary society and economy. To be clear, in proposing these economic arguments, I’m not abandoning claims for the importance of the humanities in individual personal and intellectual development. On the contrary, I strongly believe that a balanced liberal arts education is critical in fostering the development of personal autonomy and civic judgment, to preserve and potentially resurrect our early republican (as political experiment, not party) goals that education cultivate critical citizens, not compliant economic agents. I was miserable as a graduate student, but don’t regret my path for a minute. And I think there is a case to be made that the humanities will be as important as STEM to our national interests—if not more so—in the near future. Here’s why:
Technology and White-Collar Professions — In The Future of the Professions, Richard and Daniel Susskind demonstrate how technology is changing professions like medicine, law, investment management, accounting, and architecture. Their key insight is to structurally define white-collar professionals by the information asymmetry that exists between professional and client. Professionals know things it is hard for laymen to know: the tax code is complex and arcane, and it would take too much time for the Everyman (gender intentional) to understand it well enough to make judgments in her (gender intentional) favor. Same goes for diagnosing and treating an illness or managing the finances of a large corporation. The internet, however, and perhaps more importantly the new machine learning technologies that enable us to use it to answer hard, formerly professional questions, are leveling this information asymmetry. Suddenly, tools can do what trained professionals used to do, and at much lower cost (contrast the billed hours of a good lawyer with the economies of scale of Google). As such, the skills and activities professionals need are changing and will continue to change.
Working in machine learning, I can say from experience that we are nowhere near an age where machines are going to flat out replace people, creating a utopian world with universal basic income and bored Baudelaires assuaging ennui with opiates, sex, and poetry (laced with healthy doses of Catholic guilt). What is happening is that the day-to-day work of professionals is changing and will continue to change. Machines are ready and able to execute many of the repetitive tasks done by many professionals (think young associates reviewing documents to find relevant information for a lawsuit—in 2015, the Second Circuit tried to define what it means to practice law by contrasting tasks humans can do with tasks computers can do). As machines creep ever further into work that requires thinking and judgment, critical thinking, creativity, interpretation, emotions, and reasoning will become increasingly important. STEM may just lead to its own obsolescence (AI software is now making its own AI software), and in doing so is increasing the value of professionals trained in the humanities. This value lies in the design methodologies required to transform what were once thought processes into statistical techniques, to crystallize probabilistic outputs into intuitive features for non-technical users. It lies in creating the training data required to make a friendly chat bot. Most importantly, it lies in the empathy and problem-solving skills that will be the essence of professional work in the future.
Autonomy and Mores in the Gig Economy — In October 2015, I spoke at a Financial Times conference about corporate sustainability. The audience was filled with executives from organizations like the Hudson's Bay Company (they started by selling beaver pelts and now own department stores like Saks Fifth Avenue) that had stayed in business over literally hundreds of years by gradually evolving and adding new business lines. The silver-haired rich men on the panel with me kept extolling the importance of "company values" as the key to keeping incumbents relevant in today's society. And my challenge to them was to ask how modern, global organizations, in particular those with large, temporary 1099 workforces managed by impersonal algorithms, could cultivate mores and values like the small, local companies of the past. Indeed, I spent a few years helping international law firms build centralized risk and compliance operations, and in doing so came to appreciate that the Cravath model (an apprenticeship culture in which skills, corporate culture, and mores are passed down from generation to generation, sustained by very low mobility between firms) simply does not scale to our mobile, changing, global workforce. As such, inculcating values takes a very different form and structure than it did in the past. We read a lot about how today's careers are more like jungle gyms than ladders, where there is a need to constantly revamp and acquire new skills to keep up with changing technologies and demand, but this often overlooks the fact that companies—like clubs and societies—used to also shape our moral characters. You may say that user reviews (the five stars you can get as an Uber rider or AirBnB lodger) take the place of what was formerly the subjective judgment of colleagues and peers. But these cold metrics are a far cry from the suffering and satisfaction we experience when we break from or align with a community's mores.
This merits much more commentary than the brief suggestions I'll make here, but I believe our globalized gig economy requires a self-reliant morality and autonomy that must be cultivated apart from the workplace. And the seat of that cultivation would be training in philosophy, ethics, and the humanities. Otherwise, corporate values will be reduced to the cold rationality of an algorithm measuring OKRs and KPIs.
Ethics and Emerging Technologies — Just this morning, Guru Banavar, IBM's Chief Science Officer for Cognitive Computing, published a blog post admonishing technologists building AI products that they "now shoulder the added burden of ensuring these technologies are developed, deployed and adopted in responsible, ethical and enduring ways." Banavar's post is a very brief advertisement for the Partnership on AI created by IBM, Google, Microsoft, Amazon, Facebook, and Apple to formalize attention around the ethical implications of the technologies they are building. Elon Musk co-founded OpenAI with a similar mission to research AI technologies with an eye towards ethics and safety. Again, there is much to say about the different ethical issues new technologies present (I surveyed a few a year ago in a Fast Forward Labs newsletter). The point here is that ethics is moving from a niche interest of progressive technologists to a core component of large corporate technology strategy. And the ethical issues new technologies pose are not trivial. It's very easy to fall into Chicken Little logic traps (where scholars like Nick Bostrom speculate on worst-case scenarios just because they are feasible for us to imagine) that grab headlines instead of sticking with the discipline required to recognize how data technologies can amplify existing social biases. As Ted Underwood recently tweeted, doing this well requires both people who are motivated by critical thinking and people who are actually interested in machine learning technologies. But the "and" is critical, else technologists will waste a lot of time reinventing methods philosophers and ethicists have already honed. And even if the auditing of algorithms is carried out by technologists, humanists can help voice and articulate what they find. Finally, it goes without saying that we all need to sharpen our critical reading skills to protect our democracy in the age of Trump, filter bubbles, and fake news.
This is just a start. Each of these points can be developed, and there are many more to make. My purpose here is to shift the dialogue on the value of the humanities from utility in cultivating empathy and emotional character to real economic and social impact. The humanities are worth fighting for.
*For those unaware, Clayton Christensen coined the term disruptive innovation in The Innovator's Dilemma. He contrasted it with sustaining innovation, the gradual technical improvements companies make to a product to meet market and customer demands. Inspired by Thomas Kuhn's Structure of Scientific Revolutions, Christensen artfully demonstrates how great companies miss out on opportunities for disruptive innovation precisely because they are well run: disruptive innovations seize upon new markets with an unserved need, and only catch up to incumbents because technology can change faster than market preferences and demand. As disruption has crystallized into ideology, people often overlook that most products are sustaining innovations, incremental improvements upon an existing product or market need. It's admittedly much more exciting to carry out a Copernican revolution, but if we consider that Trump may well be a disruptive innovator, who identified a latent market whose needs were underserved only to topple the establishment, we might sit back, pause, and reconsider our ideological assumptions.
Originally appeared on February 20, 2017 on quamproxime.com.