Recently, both Prof. Arvind Narayanan (CS) and Prof. Kevin Munger (Political Science) called on the academic community to rethink its research agenda in the post-social media digital age: Narayanan writes that studying social media’s effect on Y (e.g. mental health, disinformation, radicalization, echo chambers) is akin to asking whether cars made roads more dangerous. The relevant research question, according to Narayanan, is whether mandating seatbelt-equivalents in the digital sphere would save lives.1 In a similar vein, Munger encourages us to think about the opportunity cost of pursuing any line of research. He doesn’t propose specific alternatives in his post, but he does write that the research on echo chambers, algorithmic radicalization, and misinformation tends to confuse rather than inform.2
I’m not an expert in those areas, but I wanted to take the opportunity to reflect on my personal priorities: What research and research products do I want to build or see in the world? I hope by throwing some ideas out there I can meet more like-minded people and even build with them!
Political Scientists as Democracy Plumbers
I don’t know where the web3 crowd will end up, but I appreciate some of the products they have built to facilitate the exchange of ideas and the funding of public goods: see this simple quadratic voting website and GitCoin. Another open-source website, Pol.is, lets users submit their opinions on any given issue in plain English and performs standard dimension-reduction operations to generate clusters of user opinion.
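Pol.is’s actual pipeline is more involved, but the core idea can be sketched in a few lines: represent each user’s agree/disagree/pass votes as a vector, project the vectors onto the leading principal component (computed here with naive power iteration), and split users by the sign of their projection. The data and function names below are illustrative, not Pol.is’s code.

```python
# Sketch of a Pol.is-style opinion-clustering pipeline.
# Each user votes agree (+1), disagree (-1), or pass (0) on statements;
# we mean-center the vote matrix, find the top principal direction via
# power iteration, and split users by the sign of their projection.

def mean_center(matrix):
    n = len(matrix)
    means = [sum(col) / n for col in zip(*matrix)]
    return [[x - m for x, m in zip(row, means)] for row in matrix]

def leading_component(matrix, iters=100):
    # Power iteration on X^T X to approximate the top principal direction.
    d = len(matrix[0])
    v = [1.0] * d
    for _ in range(iters):
        xv = [sum(row[j] * v[j] for j in range(d)) for row in matrix]
        w = [sum(matrix[i][j] * xv[i] for i in range(len(matrix))) for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return v

def cluster_users(votes):
    centered = mean_center(votes)
    pc = leading_component(centered)
    scores = [sum(row[j] * pc[j] for j in range(len(pc))) for row in centered]
    return ["A" if s >= 0 else "B" for s in scores]

# Four users voting on three statements: two broad opinion groups.
votes = [
    [+1, +1, -1],
    [+1, +1,  0],
    [-1, -1, +1],
    [-1,  0, +1],
]
print(cluster_users(votes))
```

Real deployments use more than one component and a proper clustering step (Pol.is visualizes users in a 2-D map), but even this one-dimensional version recovers the two opinion blocs in the toy data.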
I’d love to see “democracy plumber”-style research because:
- Information technology (and soon a general-purpose AI?) has transformed the way we interact with others, yet the high-level designs that would facilitate the safer, more productive use of these new technologies haven’t come into being. To borrow Narayanan’s cars-on-roads metaphor, we need seatbelts, road signs, and driving schools for social media/AI. And the research community is the best group to develop them, because private firms have a limited range of things they can experiment on and will likely under-provide such public goods.
- Given what is at stake (democracy, social cohesion, world peace!), the opportunity cost of researching anything else seems way too high.
- Methodologically, democracy plumbing isn’t that different from what political scientists are already doing. Most researchers work with data, and some routinely handle large datasets and use state-of-the-art NLP methods.
- The lines between disciplines are blurring: computational social science and human-computer interaction in CS, quantitative political science, computational communication, all fields of the Information School… There should be ample opportunities to collaborate since people speak the same language.
A very concrete way to start would be to build different Twitter ranking algorithms based on past academic research. Lots of academics and policymakers have argued that a potential solution to all of our “algorithm problems” is to start a marketplace of algorithms. Given that Twitter data is open to academics, perhaps we could build several ranking algorithms,3 recruit volunteers to use them for several months, and follow up on the many personal and social outcomes we care about.
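In such a marketplace, a ranking algorithm is at bottom just a scoring rule over candidate posts, which is what would make swapping them in an experiment feasible. A hypothetical sketch of two interchangeable rules follows; all field names (timestamp, likes, reshares, toxicity) are invented for illustration, not Twitter’s actual API.

```python
# Hypothetical sketch of entries in a "marketplace of ranking algorithms":
# each algorithm is a scoring rule over posts, so researchers can swap
# rules for different volunteer groups and compare downstream outcomes.

def reverse_chronological(post):
    # The classic timeline: newest first.
    return post["timestamp"]

def engagement_weighted(post):
    # Reward engagement but penalize posts a classifier flags as toxic --
    # one design researchers have proposed testing against the status quo.
    return (post["likes"] + 2 * post["reshares"]) * (1 - post["toxicity"])

def rank(posts, scoring_rule):
    return sorted(posts, key=scoring_rule, reverse=True)

posts = [
    {"id": 1, "timestamp": 300, "likes": 5,  "reshares": 1, "toxicity": 0.1},
    {"id": 2, "timestamp": 200, "likes": 50, "reshares": 9, "toxicity": 0.9},
    {"id": 3, "timestamp": 100, "likes": 20, "reshares": 4, "toxicity": 0.0},
]
print([p["id"] for p in rank(posts, reverse_chronological)])  # [1, 2, 3]
print([p["id"] for p in rank(posts, engagement_weighted)])    # [3, 2, 1]
```

The same three posts come out in opposite orders under the two rules, which is exactly the kind of contrast a volunteer study could measure outcomes against.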
Why hasn’t this style of research taken off already? I’d say that 1) the engineering overhead of these projects is quite large, not at all realistic for the typical social science research team of one or two; and 2) social science traditionally values theory-building (about the past), not engineering (for the future). And as Munger points out in his blog post, the field evolves very slowly through faculty hiring and peer review. That pace might have been fine in another century or decade, but I worry that technology has transformed our society so rapidly in this century (and will continue to do so in the coming decades) that a social scientist’s highest-impact career now lies in “democracy plumbing” rather than (grand) theory building with amazingly detailed data.
What makes me optimistic about this “democracy plumbing” future is that lots of social scientists currently use online experiments to test the effectiveness of “fact-check nudges” and publish well. I think we should channel this energy into building more ambitious platform prototypes, tools, and high-level designs that go beyond the simple “labels” or “alerts” that float above social media posts. As 21st-century social scientists, we can’t afford to study only “technology X’s impact on Y.” With the skills we already have, we can embrace the engineering mindset and build products that make democracy work in the age of technology X.
Political Scientists as Empirics-Informed Philosophers
Another field I hope more social scientists could enter is empirics-informed philosophy, particularly as it pertains to artificial intelligence.
Prediction markets say that the first artificial general intelligence (AGI) will be tested around 2050. Even if the predictions are off by 50 or 100 years, the scale of the impact this technology would have on society, as well as the speed with which that impact will be felt, justifies a large push for social scientists to enter the field. Plus, the existing bottleneck to AGI development seems to be mostly hardware (chip manufacturing, compute efficiency), so I wouldn’t bet against the 2050 timeline.
Underlying this “AI technology” are large language models: from a chatbot that literally speaks to you in English to a robot that follows human instructions, from protein prediction to image and video generation, every product that has come out of this “AI magic” is powered by large language models.
AI Safety and Alignment
Recognizing the massive upside and downside of such powerful technology, AI researchers are working on “alignment” and “safety” problems – giving humans more levers to control model outputs and aligning those outputs with human values.
As a starting point, some teams recruited MTurk workers (mostly in the Philippines and Bangladesh) to label “harmful” or “immoral” content and used the labels as training data to prevent models from generating such content without warning.
But just as we’ve learned that social media content moderation is much more than assigning each post a “truthiness score” and shadow-banning those that fall below a threshold – recent news shows that moderating speech is harder than sending people to Mars – collecting the “ground truth” of “morality” or “human values,” assuming this is the correct approach at all, is much more complicated than collecting the ground truth of dog and cat pictures.
Social scientists who enter the field of AI alignment research should bring in their fields’ decades of empirical research on human values, which we know are highly varied and context-dependent.
Most importantly, I think social scientists bring frameworks that supplement those used by the many analytical philosophers currently active in AI research.
Philosophers (moral philosophers, lawyers, political theorists) generally think in terms of edge cases. They advance arguments through thought experiments and analogies. Social scientists, on the other hand, think in terms of aggregate welfare, trade-offs, counterfactuals, and general-equilibrium effects. While both perspectives are important, I think the latter is largely missing from current AI alignment research, likely due to a lack of social scientists entering the field.
Another way social scientists will be able to contribute is through their understanding of empirics. 1) They can synthesize the large empirical literatures on human values in different societies and morality in different contexts, as they relate to the domains an “AI” enters; 2) They can design online and field experiments that test the individual- and the social-level reactions to different versions of an “AI.”
Overall, I hope that more social scientists recognize the societal/civilizational impact of a powerful AI or AGI and appreciate our ability to shape this future. I also hope that the CS community recognizes the skills of modern social scientists (data collection in labs or online, hypothesis testing, model fitting) and their potential contributions to alignment research (empirics, trade-offs, general-equilibrium effects).
Similarly, I think social scientists also have lots to offer in AI governance research. Currently, many researchers with backgrounds in the history of technology, geopolitics, and specific countries (e.g. China) are thinking and writing about AI governance.
But I think the field is missing the perspective of social scientists fluent in the decades of empirical research on systems of government (e.g. semi-presidentialism vs. presidentialism), sources of conflict (when and which countries go to war), theories of the firm (e.g. shareholder vs. stakeholder value, different corporate governance structures).
We already know a lot about “governance”: the issues with presidential systems, the pluses and minuses of ranked choice voting, the optimal ways to draw district maps, the value of independent board directors, the ways ethnic and linguistic diversity interacts with governance structures, reasons why nuclear deterrence may fail…
If we apply what social science has learned about human and corporate history, we can:
- Make better predictions of how AI will change politics, society, and the world order;
- Design better systems to govern AI research organizations and coordinate between national governments (if states are predicted to exist).
There’s no need to re-invent the wheel! If humans maintain control over AI development, the fundamental forces driving technology-induced social change will be human nature. The social science disciplines have accumulated many data points on individual behavior and group dynamics in the past, and surely they can inform our armchair theorizing about the future.
Distributive Justice and Other Areas of Moral Philosophy
Besides AI alignment and governance research, distributive justice is another area where I think empirics-minded social scientists could add value.
For example, meta-analyses of genetics research show that genetic associations explain more than 30% of the variation in children’s outcomes. In another line of research, IRS tax data shows that intergenerational mobility varies widely by neighborhood, and we also have credible estimates of income and wealth inequality around the world at different points in history.
Of the moral philosophers I’ve read, everyone from Rawls to Nozick, from Aristotle to Confucius, bases their theory on some conception of desert. But their ideas are almost all theoretical – they didn’t have the digital and statistical tools we now have to look into snapshots of human society and grasp the magnitude of variation in individual outcomes, as well as the extent to which different factors are at work.
But social scientists do now, often at a granular level. I think exciting research projects would incorporate empirical findings into existing theories of desert and of justice more broadly. Social scientists often shy away from making normative claims because an empirical paper’s goal is to make credible estimates of a specific quantity, often the strength of a causal relationship. But I feel that the purpose of describing our lives or the lives of our ancestors is not only to understand how different societies came about but also to inform us about what kind of society we want to live in. I’d love to see philosophers incorporate social science’s empirical findings into their research, and more social scientists cross over to become empirics-informed moral philosophers.
Political Scientists as Knowledge Product Managers
The abundance of data and cheap compute has not only made AI a powerful technology for social scientists to study and, dare I say, influence; it has also made a new generation of research tools possible. I hope more researchers will become product managers of knowledge tools.
For example, the universe of bibliometric data, the metadata of all academic papers including abstracts and citation networks, is now publicly available. Thus, we are seeing an explosion of NLP-based and network-based literature review tools: Elicit, Research Rabbit, Connected Papers, and Scite. On these platforms, you no longer have to put in the exact words in a paper’s title or check a paper’s bibliography to find the most relevant studies for your literature review.
So far, adoption of these new tools has been slow, likely because 1) the academic user base is extremely fragmented; and 2) compared to industry, academics have weaker financial incentives and fewer financial resources to increase productivity,4 so organizations (labs, academic departments, universities) are less willing to purchase new tools and take on the overhead of training.
I’m not sure how this adoption problem can be solved. In my experience, top academic departments codify a much lower percentage of their knowledge than top firms do. Answers to questions such as “which OCR tool works best for scans of historical Japanese text,” “how to use Julia on the university’s computing cluster,” and “how might the phrasing of a survey question bias experimental results” are passed around verbally between advisors and students, not written down anywhere.
The adoption problem aside, there are ample opportunities for new knowledge tools to come on the market and increase research productivity. Some ideas include:
- A search engine that conducts semantic search on local files: I have lots of personal notes and drafts but often fail to locate them because I don’t remember the exact string of words I used.
- A Q&A AI that assists humans in reading papers and nonfiction books: For example, when skimming a paper, I often want to quickly find out which dataset(s) and model(s) it uses. These pieces of information are often not in the abstract, and searching “data” or “model” returns many false positives. When it comes to books, I’m simply overwhelmed by the number of nonfiction books one could read. If an AI could 1) summarize a book’s key arguments and key sources of evidence; 2) direct me to the relevant paragraphs when I ask for more information; and 3) tell me how the book differs from or builds on similar books I’ve read in the past or related knowledge nodes I already have in memory, my productivity would massively increase. I’d also end up reading more books because the AI, having read the universe of all books, would pick out those I can skim and direct me to focus on the ones most relevant to me.
- An AI-assisted Wikipedia (or Stanford Encyclopedia of Philosophy, Our World in Data) for all fields of human knowledge: What are the effects of immigration on local wages? What are the effects of divorce on child outcomes? What do we know about the effects of different forms of diet and exercise? What were the modes of production in different historical societies around the world? Fields such as psychology, public health, economics, sociology, political science, evolutionary biology, and digital humanities have accumulated so much knowledge in the past few decades. Lots of academics have talked about starting a Wikipedia-style effort to synthesize the insights from individual studies on the same topic. I think a powerful AI with literature review and Q&A/semantic search capabilities could meaningfully assist human editors and accelerate this encyclopedia-building process. As researchers often say, don’t look at individual studies – only a series of papers with varied methods and contexts reveals generalizable knowledge. A large language model could certainly help!
- A dataset search engine with variables and levels of aggregation labeled: Data is the key input to most quantitative social science and digital humanities research. Researchers find unique datasets, create datasets from scratch (by scanning archival material or conducting experiments with humans), and build new models to understand the data. Thus, making data more widely accessible and discoverable is key to unlocking more research opportunities. From the perspective of a social science researcher, the state of dataset search is quite sad: Google Dataset Search is unusable because it contains few domain-relevant datasets and doesn’t have the right search filters; repositories such as the Harvard Dataverse and Humanities Commons also have poor search capabilities; and lots of journals (e.g. all the American Economic Association journals) and authors post data on their own websites, often with no HTML description of the dataset, making it impossible for a Google search to discover it. I venture to say that the vast majority of an “applied” PhD student’s time is spent downloading datasets, reading them, and figuring out which variables represent what. My dream is a search engine built for datasets, with variables and their levels of aggregation clearly labeled for ease of discovery. This initiative may have required too many human hours in the past, but with an AI automatically reading in variable names and inferring their meanings, we could quickly build such a search engine. The end scenario is for a common dataset-sharing standard to emerge: everyone from all disciplines uploads their data (or, for sensitive data, the metadata of their datasets) to a central repository (which also houses the search engine).
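The “semantic search on local files” idea above can be sketched in miniature. A real tool would compare neural embeddings so that paraphrases also match; as a stand-in, the sketch below scores notes by word-overlap cosine similarity, which already frees you from exact-phrase matching. All file names and note contents are invented for illustration.

```python
# Sketch of searching local notes without needing the exact phrase.
# vectorize() is a bag-of-words stand-in; swapping it for a neural
# embedding model would turn this into true semantic search.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query, documents):
    q = vectorize(query)
    scored = [(cosine(q, vectorize(text)), name) for name, text in documents.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

notes = {
    "survey_design.md": "phrasing of survey questions can bias experimental results",
    "ocr_tools.md": "OCR tools for scans of historical Japanese text",
    "cluster_setup.md": "how to use Julia on the university computing cluster",
}
print(search("survey question phrasing bias", notes))
```

The query shares no exact phrase with any note, yet the right note surfaces; an embedding model would go further and match notes that share no words at all.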
Once data flows freely across individuals and disciplines, we can 1) conduct meta-analyses more efficiently and comprehensively; 2) nearly eliminate duplicated effort; and 3) discover new opportunities for research and collaboration (e.g. a dataset or variable someone in a distant field has collected).
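To make the dataset-search-engine idea concrete, here is a hypothetical sketch of the metadata record such a repository could index – each dataset lists its variables and level of aggregation, so both are filterable. Every entry and field name below is invented for illustration; no existing repository uses this exact schema.

```python
# Hypothetical metadata records for a dataset search engine:
# each record labels its variables and its level of aggregation,
# so researchers can filter on both dimensions at once.

datasets = [
    {"name": "county_mobility", "aggregation": "county",
     "variables": ["median_income", "upward_mobility"]},
    {"name": "world_inequality", "aggregation": "country-year",
     "variables": ["top1_income_share", "gini"]},
    {"name": "survey_panel", "aggregation": "individual",
     "variables": ["ideology", "media_diet"]},
]

def find_datasets(variable=None, aggregation=None):
    hits = []
    for d in datasets:
        if variable and variable not in d["variables"]:
            continue
        if aggregation and d["aggregation"] != aggregation:
            continue
        hits.append(d["name"])
    return hits

print(find_datasets(variable="gini"))          # datasets containing a gini variable
print(find_datasets(aggregation="county"))     # datasets aggregated at the county level
```

With a shared standard, the AI’s job reduces to populating these records automatically from raw variable names, and the search engine’s job reduces to filtering them.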
In sum, I’m excited about building new tools for democracy, for AI alignment and AI governance, for theories of justice, and for knowledge generation, synthesis, and dissemination.
Social scientists since Weber and Marx have always explored both the positive and the normative. The digital revolution of the past few decades has allowed social scientists to answer positive questions very precisely: what present and past societies look like, what led to what. But for me, the end goal of assembling more data and making more credible inferences about the present and the past is to guide the future.
The world is simply changing too rapidly for us not to act, and social scientists do indeed possess valuable knowledge that can inform system design at scale.
I hope people from all backgrounds – CS, neuroscience, philosophy, the social sciences, you name it – can join forces and build together! Yes, there are no value-free designs, but an important problem with no perfect solution deserves more thinking and more perspectives. Shying away from participation will only result in particular interests capturing our future.
Prof. Jonathan Haidt’s policy recommendations are real-name registration for all and a ban on social media for everyone under 16. Interestingly, these policies are very similar to the status quo in China. ↩︎
My sense from reading the literature is that what researchers find depends highly on the counterfactual and the population studied: Are you comparing algorithmically recommended timelines with reverse-chronological ones, or with no use of social media at all? Are you studying the Very Online population, light users, everyone with an account, or all Americans? Another neglected aspect of this research is whether the content is passively fed to you or you have to actively search a keyword to find it. ↩︎
A machine learning researcher friend’s dream plan is to have personal AI assistants learn our individual preferences and interact with the Internet platforms. I doubt that most users explicitly want to be outraged, similar to how most people wouldn’t want to eat junk food all day. ↩︎
R&D tools in biotech (industry) seem to have taken off quite rapidly. Benchling, for example, helps researchers share notes and record lab data. ↩︎