In a time of deep social and political division, many are mystified that fellow citizens can see the world in such different ways. Dr. Francesca Tripodi used her training as a sociologist and media scholar to explore this divide. Working through an ethnographic lens, she went into the field to observe how conservatives seek news and information, attending meetings, barbecues, fundraisers, and election night parties, and learning how media literacy practices are shaped by scriptural study and Internet searches. She learned that deep epistemological frames shape how we search, and that the keywords people use, often manipulated by political actors, influence what they believe is factually true.

Francesca isn’t just interested in how people seek information; she has also studied places where information has been deleted, sites where participants decide which topics aren’t of value. These erasures tell us much about how information systems are shaped by social factors and power relationships. Her knowledge of the technical workings of search engines and social media, and of how social trends influence information behavior, has led her to share her expertise with journalists and with members of the U.S. Senate.

We caught up with Francesca in August 2021 to ask about her research into the media literacy practices of conservative Americans, what she has learned by paying attention to information deleted from the web, and her forthcoming book on how propagandists work the system. (Interview posted: September 1, 2021).

PIL: We were struck by the insights into the reading practices of Christian conservatives you described in Searching for Alternative Facts: Analyzing Scriptural Inference in Conservative News Practices, research you conducted while you were a Fellow at Data & Society in 2018. What do you mean by “scriptural inference”? Do you think understanding this practice could inform the ways we typically teach students to seek and evaluate “authoritative” information, especially when working with conservative students?

Francesca: Scriptural inference is a “compare and contrast” method of closely reading documents deemed sacred and applying the lessons from those texts to one’s own life. While the practice of returning to an authoritative text and leveraging a personal dissection of it is fundamentally bound to Protestantism in the U.S., I argue that the practice transcends the boundaries of church, creating a form of media literacy that guides conservative sense-making. By inverting traditional assumptions that truth is only curated at the top, scriptural inference lets everyday people consider themselves to be subject matter experts. Such a practice allows a community to come to a collective consensus in a way that honors conservative values of individualism. The most important part of this concept is understanding that what counts as authoritative varies by audience and is layered with historical understanding of truth and trust. As students go out to “do their own research” (often via Google), we can help guide them with algorithmic literacy tools. Few of us understand that our search results (i.e., returns) are so closely tied to our keywords, and that those starting points are coded with bias before we even begin our search.

PIL: In two recent publications, you delve into the ways gender intersects with our contemporary information landscape. In Media Ready Feminism and Everyday Sexism, you and your co-author, Andrea L. Press, explore the ways interaction with sexist content online shapes the popular understanding of feminism. Your recent article published in New Media & Society, “Ms. Categorized: Gender, notability, and inequality on Wikipedia,” examines how biographies about women on English Wikipedia are frequently considered “non-notable” and disproportionately targeted for deletion. What led you to this research topic, how did you collect evidence of this deletion practice, and how might teachers and librarians address this form of discrimination?

Francesca: The idea of analyzing “deleted data” was part of my dissertation and started while studying the anonymous social media app YikYak. YikYak’s programmers created an algorithm to automatically delete content with a cumulative score of -5. The goal of this program was to allow community members to immediately erase hateful or defamatory comments without having to flag them or wait for a review. However, in my ethnographic study of how a campus used the app, I found that students were just as likely to downvote and remove statements like “Black Lives Matter” as they were hate speech. In this way, those who already felt marginalized from the community because of race, sexual orientation, or socioeconomic status felt further isolated when their sentiments were erased from the conversation. By looking more closely at erasures, or attempted deletions, we can better understand the values of a society. As I started conducting ethnographic observations at edit-a-thons, I saw a similar pattern. Women who met Wikipedia’s threshold for inclusion were repeatedly being flagged as “non-notable” and nominated for deletion.

When I tried to publish this data, journals did not see ethnographic/qualitative findings as valid, so I sought out a computer scientist to scrape deleted data from Wikipedia for statistical analysis. I wanted to do the same for YikYak, but their API prevented scraping and the company would not make their deleted datasets available to researchers. Because of these constraints, most studies of deleted data must be done ethnographically. I do think that paying attention to what doesn’t trend, and in some cases focusing on what is deleted, is a valuable tool for contextualizing discriminatory practices that would otherwise remain invisible.

PIL: As a Senior Faculty Researcher for the Center for Information, Technology, and Public Life, you recently put together an episode in the Does Not Compute podcast that delved into how Google’s search algorithm works, how it can be gamed, and how our own prejudices and backgrounds might influence results. Are there basic things about Google search that many of its users have wrong, or things about using Google that you wish they knew?

Francesca: I think the biggest thing we don’t realize is just how Google works. Search engines use complex algorithms to generate a ranked list of results that “best match” our queries, what information scientists refer to as relevance. Relevance depends on a variety of factors (e.g., geolocation, surrounding text, click-through data of other users) but is largely driven by users’ keywords. The point of view from which an individual sees the world shapes the kinds of keywords they choose when searching. These ideological fissures create multiple internets fueled by confirmation bias.

Many are very concerned about personalization and what corporations might be hiding from (or selling to) us. This is important and valid, but few of us pay attention to how our keywords also drive our returns. Many studies, including my own research, have shown that people trust the first returns as more important, more relevant, and/or more accurate. Despite Google’s claims that results are ranked by credibility or popularity, top returns are deeply affected by those willing to pay and to take advantage of search engine optimization. However, SEO is rooted in matching keywords and our own click-through history, so in some ways we also teach Google what we want to see and what we consider to be a credible source of information.

If you seek out information about “illegal aliens voter fraud,” you’re going to get very different returns than if you seek more information about “immigrant voting rights.” And, I can’t stress this enough, siloed returns driven by keywords are not exclusive to Google. DuckDuckGo might not sell your data, but it still runs on relevance and will reinforce ideologically opposed keywords.
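The dynamic described here, that different keyword choices surface different slices of the same information landscape, can be sketched with a toy ranker. This is an illustration only, not how Google or DuckDuckGo actually score results; the corpus, queries, and scoring rule are invented for the example:

```python
# Toy illustration of keyword-driven "relevance" (NOT a real search
# engine's algorithm): documents are ranked purely by how many of the
# searcher's keywords they contain, so the query's phrasing, not the
# corpus, decides what surfaces.

def tokenize(text):
    """Split text into a set of lowercase word tokens."""
    return set(text.lower().split())

def rank(query, docs):
    """Return docs sorted by keyword overlap with the query,
    dropping documents that share no keywords at all."""
    q = tokenize(query)
    scored = [(len(q & tokenize(d)), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

docs = [
    "report on voter fraud claims about illegal immigration",
    "guide to immigrant voting rights and naturalization",
    "local election results and turnout data",
]

# Two ideologically distinct phrasings of a similar question pull
# different documents out of the identical corpus.
print(rank("illegal voter fraud", docs))
print(rank("immigrant voting rights", docs))
```

Even in this crude sketch, the two queries retrieve disjoint result sets from the same three documents, which is the mechanism behind the siloed returns described above.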

The most important thing I wish everyone knew about Google is that it is tailored to sell your data and is invested in returning content it thinks you want. It’s not a place to learn about new ideas as much as it is a platform designed to easily verify our existing biases.

PIL: In 2019 you testified twice before the U.S. Senate Judiciary Committee. Can you tell us about that experience? What insights from your research were especially relevant to their concerns about free speech and the workings of dominant tech platforms? Are there particular areas in which scholars in LIS (Library and Information Science) can contribute more to civic life and public policy?

Francesca: The crux of my testimony was debunking the idea that conservatism is being silenced by Big Tech. Contrary to anecdotal arguments, my data demonstrate that conservatism thrives online. Research by New York Times tech reporter Kevin Roose and The Markup’s Corin Faife, along with a new report I have coming out with Define American this fall, further supports these claims. Conservative content creators (e.g., The Daily Wire, PragerU, Tucker Carlson) have a sophisticated understanding of how information flows, and their content often outperforms mainstream news outlets online.

In my testimony I also argue that while people like to think of places like Facebook, Twitter, and YouTube as the public square, they are not. They are privately held corporations with their own sets of participatory norms and standards. Being banned by a private organization for violating its community standards is not protected by the First Amendment. I think all scholars should contribute more to civic life by publishing their papers open-access, working with journalists as a source, and writing op-eds whenever possible. I was approached by a Senator’s office to serve as an expert witness because of an op-ed I had written.

PIL: You have a book in the works, The Propagandists’ Playbook: How Conservative Elites Manipulate Search and Threaten Democracy, under contract with Yale University Press. Can you give us a brief preview about what you will be tackling in this book?

Francesca: In The Propagandists’ Playbook I pull together much of what we’ve already discussed in this interview. I begin the book by explaining the connection between epistemology and media literacy, arguing that conservatism is both a way of seeing the world and a set of media practices. Then I consider how conservative elites exploit these practices to advance their political candidates and causes. I draw on ethnographic observations, media immersion, content analysis, and web-scraped metadata to demonstrate how pundits and politicians wield the power of search to influence the democratic process by connecting to their audience’s core values and engaging in likeminded media literacy strategies.

The Propagandists’ Playbook explains and examines this seven-step strategy of manipulating information systems: (1) know your audience; (2) build a network; (3) engage in their form of media literacy; (4) understand how information flows; (5) set the traps; (6) make old ideas seem new; (7) close the feedback loop. At the same time, the book considers the multidirectional nature of propaganda, explaining how information seekers can reinforce partisan silos, the historical legacy of misinformation campaigns, and their ties to promoting white supremacy. Understanding the interactive mechanisms of search is important for unveiling how active audiences engage with mis-, dis-, and malinformation and how those processes are exploited. The problem is that creators of propaganda are encouraging their audiences to “go and search for themselves,” knowing they have seeded the internet with problematic content. I call this process the IKEA effect of misinformation. It borrows from a concept created by business scholars who explain that people value low-quality products more when they build them on their own. My study explains how a similar phenomenon is happening in information-seeking and complicates our existing framework of “information disorder.” It’s not just that audiences are being fed misleading ideas; they are made to feel like they are drawing their own conclusions rather than being told what to think.


Francesca Tripodi is a sociologist and media scholar whose research examines the relationship between technological platforms, political partisanship, and democratic participation. She taught at James Madison University before becoming an assistant professor at the University of North Carolina’s School of Information and Library Science (SILS). She serves as a senior faculty researcher with the Center for Information, Technology, and Public Life (CITAP) at UNC at Chapel Hill, and is an affiliate at the Data & Society Research Institute.

Her research has been covered in The Washington Post, The New York Times, The New Yorker, Wired, and other publications, and she has been called to testify before the U.S. Senate’s Judiciary Committee on the subject of social media, perceptions of bias against conservatives, and public discourse. Learn more about her research at https://ftripodi.com/

Smart Talks are informal conversations with leading thinkers about new media, information-seeking behavior, and the use of technology for teaching and learning in the digital age. The interviews are an occasional series produced by Project Information Literacy (PIL). PIL is an ongoing national research study about how students find, use, and create information for academic courses and for solving information problems in their everyday lives as lifelong learners. Smart Talk interviews are open access and licensed by Creative Commons.


Suggested citation format: “Francesca Tripodi: Ideological Fissures, Multiple Internets” (email interview) by Barbara Fister, Project Information Literacy, Smart Talk Interview, no. 35 (1 September 2021). This interview is licensed under a Creative Commons Attribution-Non-commercial 3.0 Unported License.