Information Literacy for Mortals
In the academic imagination, depth and attention are the highest of virtues. But in pushing students to apply high-attention strategies to all incoming information, we risk creating a new and dangerous shallowness.
Epistemology · December 14, 2021

In this essay, I will talk (eventually) about information literacy, misinformation, and the fallibility of human reason. But, if you’ll indulge me just a bit, I want to talk about real-world decision-making first. In looking at the structure of decisions we can better illuminate how the information-seeking process works when up against constraints — and it’s an information literacy that is honest about those constraints that I argue we must embrace.
So let’s start with this: I am currently in the middle of a job transition. My wife and I have begun the process of looking for an affordable neighborhood. But what is affordable? Our current house was bought with different salary assumptions, but the housing market has shifted. For the first time in our lives, we actually have accrued some value in our current house that allows for a substantial down payment. On the less positive side, we’re moving into an expensive market where even entry-level housing is going to put a bigger hole in our budget than we are used to. The house prices are, frankly, a bit shocking. We’re going to have to spend more, but are unsure how far we can push our housing budget.
What do we do to figure out the maximum amount we can spend?
One approach might be to list out all of our income and all of our expenses. We could then decide which expenses we could cut, where we could consolidate items, and plot likely future earnings and required savings. Get the spreadsheet out. We could really throw ourselves into it — what is the difference in grocery costs in our new neighborhood? How much is the house likely to appreciate in value?
Or we could do what most people do: Start with the common U.S. rule of thumb that one should spend no more than 28% of gross monthly income on housing costs. From there, if there is anything particularly unusual about the house (requires extensive repairs) or us (two kids in college at the moment), we can nudge that percentage down a bit. We’d start not with a complex spreadsheet, but with a simple rule.
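To see just how little the rule asks of us, here is a minimal sketch in Python. The income figure and the adjustable cap are hypothetical, for illustration only:

```python
# A sketch of the 28% rule of thumb, not a real financial calculator.
# The income figure and the 0.28 cap are illustrative assumptions.

def max_monthly_housing(gross_annual_income: float, cap: float = 0.28) -> float:
    """Rule-of-thumb ceiling for monthly housing costs."""
    return gross_annual_income / 12 * cap

income = 90_000  # hypothetical gross annual income
print(f"Housing ceiling: ${max_monthly_housing(income):,.0f}/month")
# -> Housing ceiling: $2,100/month
```

Everything the spreadsheet approach agonizes over (future expenses, appreciation, our thumb on the scale) is deliberately absent from that one line of arithmetic.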
In the Face of Complexity, Less Can Be More
As you’re probably aware, financial experts recommend the second course of action. Many people assume that recommendation is based on the lack of skill of the average homebuyer — that is, that the recommendation is for those who don’t know how to work a spreadsheet, or how to research likely home appreciation rates. The rule is fine for rookies, people think, but a really good decision requires several orders of magnitude more research, background knowledge, and study.
But this is not the case. The rule is there because the complexity of the decision makes a farce of pursuing detail. Yes, I can plot out my current expenses to the last penny, and perhaps I would find I can spend much more than anticipated on a house. But what about unexpected future expenses? Likewise, I can calculate our current spending, but spending levels track disposable income: buy a more expensive house and the rest of the budget will quietly adjust in ways no spreadsheet predicts.
Add to this what we’ve all observed in ourselves or others when we’re just dying to make a purchase that is out of our budget: The more calculations in our “can I afford this?” calculus, the more opportunities there are for us to unwittingly put a thumb on the scale. We often don’t know we’re doing it: The house or apartment is a bit closer to work, so surely that will save us $40 a month (rounded up to $50), and with that kitchen we’re going to eat in more often, which is practically money in the bank, right? Never mind that every calculation is an opportunity for error.
Because we’re fallible beings, we start with a rule that accepts that we don’t know the future and tend to distort the present. The psychologist Gerd Gigerenzer has long tracked how such simple rules improve decision-making in a variety of professional contexts, from estimating the amount of fuel needed for a flight to improving emergency room triage. He has termed this approach “rationality for mortals,” and he has demonstrated the wide variety of situations where more information, more calculation, and more precision make us substantially worse at bottom-line decision-making than simpler processes and rules of thumb do.
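One of Gigerenzer’s recurring illustrations is the “fast-and-frugal tree”: a short sequence of yes/no questions, each of which either decides immediately or hands off to the next. Here is a hypothetical sketch in Python, loosely in the spirit of the coronary-care triage example he describes (the questions and their order are invented for illustration, not a clinical protocol):

```python
# A hypothetical fast-and-frugal tree, loosely in the spirit of the
# triage example Gigerenzer describes. Questions and their order are
# invented for illustration; this is not a clinical protocol.

def triage(st_segment_changed: bool,
           chest_pain_is_chief_complaint: bool,
           other_risk_factor_present: bool) -> str:
    # Each question either decides on the spot or defers to the next;
    # nothing is weighted, summed, or averaged.
    if st_segment_changed:
        return "coronary care unit"
    if not chest_pain_is_chief_complaint:
        return "regular bed"
    if other_risk_factor_present:
        return "coronary care unit"
    return "regular bed"

print(triage(False, True, False))  # -> regular bed
```

Note what is missing: no weights, no probabilities, no statistics at the bedside. The tree ignores most of the available information on purpose, which is exactly what makes it fast and robust under pressure.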
Why do we continue to assume that more is more, then? In his 2008 book, Rationality for Mortals: Risk and Rules of Thumb, Gigerenzer discusses the ways in which the Enlightenment modeled the rational decision maker as a sort of “secularized God.” In such a scheme the ideal decision maker benefits from:
- Omniscience: They can know or find all variables that would impact a decision
- Omnipotence: They have unlimited time, mental resources, and intellectual capability to come to the best decision
- Determinism: Given all relevant variables, the world is deterministic enough that the future is foreseeable
If you take this view, it’s true: Each new fact, statistic, or spreadsheet cell gets us closer to omniscience. Each minute we spend produces an incrementally better decision. And none of that will be undone by things unforeseen (or influenced by the bias of the decision-maker).
I’m willing to stipulate that for a researcher, working in their field and investigating a narrow issue, such an ideal might be useful. Given more time and more information, you might see better results.
But what about the rest of us? What happens to specialists when they are outside their domain of expertise? And how might such non-expert needs inform how we teach?
Information Literacy for Mortals
A few years ago, I developed the SIFT method, which favors quick checks over deeper analysis. Its fundamental insight is that when we rush to deeper analysis before assessing the basics, we end up making poor judgments about information. Just as we might start assessing the house we can afford by looking at its monthly cost as a percentage of gross income (rather than computing an all-encompassing budget), we encourage students to start with a few simple moves. Before diving into an article and asking the 26 questions and subquestions of CRAAP, we encourage them to check out the source’s page on Wikipedia. Does anything on that page surprise them? Is the source what they expected it to be? If not, their assumptions about what they’re looking at may need adjusting before they proceed.
Before addressing the complex question of whether they think a news story is plausible, they are encouraged to ask a simpler question: Are other reliable outlets reporting it? Again, if it’s the sort of story that would be reported widely and there aren’t other outlets jumping on it, maybe the story isn’t what they thought it was. The approach has proven quite effective, with a recent study showing dramatic increases in student capability to reach competent judgments about sources and claims after completing a short module.
I’ve talked extensively about SIFT (and the lateral reading research that informs it) over the past several years. But I seldom get to talk in detail about how weird classroom activities are when you take a closer look. In classroom information literacy sessions over the past few years, I noticed that given a bit of time on a question — “Did this thing happen?” “Is this source reliable?” — students do well. They apply the SIFT method we’ve taught them and make decent judgments about claims and sources.
Given a lot of time, however, students often do worse. And, as you watch the individual groups, you begin to understand why. Early on in their research, the students discover the fundamentals: Source X is from an oil industry lobby group, source Y from a respectable trade organization. But students keep going, engaging in a process I call search result overfitting.
What does this look like?
As the search progresses past broad strokes and easy answers, the students find more information with smaller-scale impact. They discover the oil industry group did produce a well-cited report a few years ago. They find that the reliable publication was called out for an inaccurate story several years back; it also was founded in 1903 as a Republican paper, which seems interesting. Probing a respected health site, they find that someone who used to sit on its board is also on the board of a company recently charged with financial malfeasance.
In statistics and machine learning, “overfitting” is what happens when a model incorporates too many predictors of varying usefulness. Because the model does not distinguish between “signal” and “noise,” it ends up fitting the available data better than alternative models and yet turns out to be less useful for prediction than a model built from just a few well-chosen inputs. And to some extent this is what is happening here. The students believe that every fact discovered will play some part in their analysis, and, in practice, the later a fact is discovered in a search, the more weight they seem to give it. This, of course, is the opposite of how facts tend to present themselves, with the most important facts emerging early in a search and the least relevant emerging later.
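The statistical version can be shown in a few lines. Here is a toy demonstration, with data, model degrees, and a seen/unseen split I have invented purely to make the effect visible:

```python
# Overfitting in miniature: a flexible model beats a simple one on the
# data it has seen, then loses on the data it has not.
import numpy as np

rng = np.random.default_rng(0)

# The underlying "signal" is a simple line; the rest is noise.
x = np.linspace(0, 1, 16)
y = 2 * x + rng.normal(scale=0.2, size=x.size)

seen, unseen = slice(0, None, 2), slice(1, None, 2)  # interleaved split

for degree in (1, 7):  # a straight line vs. a wiggly degree-7 polynomial
    coeffs = np.polyfit(x[seen], y[seen], degree)
    seen_err = np.mean((np.polyval(coeffs, x[seen]) - y[seen]) ** 2)
    unseen_err = np.mean((np.polyval(coeffs, x[unseen]) - y[unseen]) ** 2)
    print(f"degree {degree}: seen {seen_err:.4f}, unseen {unseen_err:.4f}")

# The degree-7 polynomial threads every seen point (near-zero error)
# but typically does far worse on the unseen points: it learned noise.
```

The student version swaps data points for facts: every detail found is dutifully incorporated, producing a judgment that fits everything they discovered while predicting little about the source’s actual reliability.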
We see the same features in our post-tests. Our recent research has shown dramatic increases in student capability when they learn basic lateral reading techniques. A student using them might note that a Wikipedia check of a specific cancer therapy revealed that it has long been debunked and that its inventor lost their license for practicing it. But they’ll also point out that the site is running a lot of advertisements, which seems suspicious. This is the equivalent of noting that you would not recommend taking a ride from Jeffrey Dahmer as he is both a serial killer and chronically late. It’s a good decision, but there’s clearly still confusion about the relative importance of criteria.
And, mind you, these are the students who do well! For many others, the result is a feeling of being increasingly overwhelmed, as each link on each page provides yet another fact to slot somewhere in working memory while they try to accurately weight the importance of each thing they discover. The biggest struggle students have with SIFT is not with the skills themselves; it is with their underlying assumption that the quick answer is a bad answer. It seems a matter of honor more than anything else: Sure, you can get an accurate answer quickly with the right skills — but how worthy of admiration is that?
Something is seriously wrong here. At the very least, it’s clear that the strategies students are using in our classes are not well matched to the task environment.
Our students, after all, come to the web as people with limited time, limited domain expertise, and access to a near-infinite amount of detail. And I’d argue that they apply to that environment the behaviors we lionize in academia and encourage them to perform: the application of deep attention, the discovery of endless tiny details, and a publish-or-perish-style obsession with having not just a useful perspective, but a unique one.
This is a bad match. As Herbert Simon noted a half-century ago, you cannot solve the problem of a scarcity of attention by requiring more attention. Attention is, after all, the scarcity. To the extent that attention is scarce, you need to develop strategies and techniques that use less of it, not more. Encouraging students with a little expertise and less time to engage deeply is a recipe for disaster.
Why, then, do we do it?
What is the Task Environment of Civic Information Literacy?
I’d argue that there is a persistent belief in higher education that all thinking is just a degraded form of research. In this view, for people to reap the benefits of science they should become scientists. To think more effectively, people can learn philosophy. What is conveniently ignored is that research as practiced in higher education is not at the top of a pyramid of cognitive virtue. Instead, research reflects a set of practices grounded in a specific endeavor, namely the discovery of new knowledge and the testing of old knowledge by people with substantial expertise and access to a specialized discourse community.
There are many ways in which the task environment of the current citizen is not only different, but orthogonal to such concerns. For instance, the environments in which most student citizens currently consume information are social feeds. In general, these are high-volume, low-attention environments used to sort through many different options for applying one’s deeper attention. They are not libraries as much as attention markets. In such an environment, the question for a citizen is never whether something is good enough to cite — they are often asking whether something is good enough to read, worth continued viewing, or perhaps just “plausible enough to worry about.”
Additionally, when engaged in information-seeking (or, more often, information-encountering), the citizen is often not looking for academic precision, but to make good decisions under conditions of uncertainty. The question a person researching a vaccine must ask is not what the full inventory of vaccine side-effects might be, but whether the risks of taking it outweigh the risks of declining it, both on a personal and community level.
When looking at a recent event, the citizen often wants to know the likely proximate cause, as they build a rough model of what to worry about and what to celebrate. Are ballots found in a ditch an indication of election fraud or something less sinister? Most of the time, the question they have is even simpler: Can I ignore this for now? Or do I need to understand it? And if I do need to understand it, who should I trust to tell me more?
Reputation Heuristics and Building Out From the Student
Contrary to the belief that this process is a form of lightweight research, it bears far more connection to non-academic capabilities our students already bring into the classroom. After all, in their offline social interactions, students have at least some techniques to sort fact from fiction and hype from straight talk. They understand concepts like conflict of interest when it comes to rumors in their friend group or when it comes to whether to take a used car dealer’s word on the condition of a car. They know that it’s important to assess the reputation of the speaker when assessing a claim. They might not know a phrase like “epistemic trespassing,” but they would find the idea that they should trust an electrician’s opinion on their house’s wiring more than a plumber’s unremarkable.
Humans are social animals, and have developed over time complex and often accurate rules of thumb about when people can be trusted and when claims merit attention. And these rules often require far less information than more complex investigations. It’s clear those intuitions do not map directly onto the larger information space of the web. But rather than trying to map academic understandings of information onto non-academic tasks, it seems to me much more interesting to start here, with what the students already know and build on that. How can a set of social intuitions and rules that work quite well in a physical world of small groups map onto this new virtual environment? What skills do the students already have that can be adapted to this larger world?
As an example, many people follow a simple rule in regards to resolving conflicting stories that circulate around their social circle: Start by finding people in a position to know the real story, then eliminate those who are untrustworthy or have a substantial conflict of interest. Listen to the most trustworthy person who remains. It’s not perfect, but it works, and it works by eliminating information in a way that would make a researcher wince.
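Written out, the rule is almost embarrassingly mechanical. Here is a sketch in Python, with a data model, scores, and sources I have invented for illustration:

```python
# A hypothetical sketch of the social elimination rule described above.
# The fields, scores, and sources are invented for illustration.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    position_to_know: bool
    conflict_of_interest: bool
    trustworthiness: int  # rough 1-5 reputation score

def whom_to_listen_to(sources: list[Source]) -> Source | None:
    # 1. Keep only people in a position to know the real story.
    candidates = [s for s in sources if s.position_to_know]
    # 2. Eliminate anyone with a substantial conflict of interest.
    candidates = [s for s in candidates if not s.conflict_of_interest]
    # 3. Listen to the most trustworthy person who remains.
    return max(candidates, key=lambda s: s.trustworthiness, default=None)

sources = [
    Source("eyewitness with a grudge", True, True, 5),
    Source("uninvolved bystander", True, False, 3),
    Source("friend who heard it secondhand", False, False, 4),
]
print(whom_to_listen_to(sources).name)  # -> uninvolved bystander
```

Notice that the grudge-holding eyewitness’s perfect trust score is never even consulted: once a source is eliminated, their information is simply discarded. That wasteful-looking step is what keeps the decision cheap.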
Now, on the web, a lot of these evaluation techniques fall apart, because the environment of the web is different from that of a small social group. In small social networks, for instance, people track who in their social network is reliable, who is not, who tries to play it straight, and who stirs up drama. Sources have histories; they earn their reputations over time. Likewise, if a fight breaks out and one is looking for a source, “position to know” can be as simple as “Who saw the fight?” But how do you assess who is in a position to know about climate change, or deficit spending? When thousands are in such a position, what do you make of disagreements between them?
One approach to dealing with multiple conflicting sources might be to bring the students into a research process, to assume that a complex environment requires more complexity, more depth, and more data. Yet, as we saw with our budgeting example, depth does not seem to help. And this holds true in information literacy as well — the foundational study on the technique of lateral reading showed how lost accomplished researchers get on the web when they apply their research skills to an unfamiliar topic.
A better way is to see that with some modifications the simple rules students use outside schoolwork can work. Yes, it’s true that we do not have the personal knowledge to make reputation judgments of novel sources online. But the web makes it easy for a reader to get a quick summary of the reputation of the source they are reading before they read it. So in SIFT, the methodology I teach, we highlight this problem of the web for the students (unknown reputation of sources) and show how simple moves like looking for a summary of an organization in Wikipedia can address that problem within seconds. Similarly, the common social heuristic that a story heard from multiple independent sources is more likely to be true can be turned upside down on the web, where repeated exposure to misinformation is common, and sources are often less independent than one realizes. Yet with some training, it can still be applied if approached intentionally: A quick search can reveal if the sort of people you’d expect to be reporting on a story are in fact reporting on it.
These are small but important modifications. The fundamental idea driving them is that the reputation of claims and sources is discoverable on the web, and discoverable relatively quickly. And what we find is that if students take steps to evaluate that reputation, even briefly, before engaging with content, the off-web skills they have developed serve them quite well. Their intuitions improve dramatically.
Add in the small connective piece of a few simple web techniques and they are once again able to tap into abilities and understandings they already have. And to the extent SIFT and other lateral reading approaches have been successful (and they have been quite successful), a big part of the secret is this: Rather than seeing the students’ non-academic rules of thumb as a thing to be eliminated and replaced by a more logical framework, we see students’ native understanding of the social heuristics of information as capability to be tapped. They just need to know how to map those extant understandings and heuristics to the web, where reputation is stored less in one’s head than it is in the network.
And as for us as teachers, as librarians, as decision makers? Our task is much more challenging. We have to understand that in the context of decision-making, simple can be good, less can be more, and the skills our students bring into the classroom may be a more valuable starting point than anything the traditional research process can provide. Some of that may come in adopting lateral reading approaches, such as SIFT, or the Stanford History Education Group’s COR framework. Some is in getting students to understand the social nature of knowledge and why quick judgments about the reputation of claims and sources can be more valuable than deep personal analysis of them.
But the first and biggest step may be a simple shift in attitude.
A few months ago, I was giving an introductory presentation on SIFT as a guest lecturer, and I walked through an example: A tweet about ankle monitor surveillance of unvaccinated students. It is a three-step example with a falsely contextualized claim that demonstrates SIFT quite well. Investigating the source reveals the publication has a history of spreading false information. Finding better coverage reveals that reliable publications told a very different story of the event. And clicking through to the source of the claim (“trace” in SIFT terminology) reveals even the linked article does not support the tweet. But I did something I have learned to do over time. After I revealed that the source of the tweet was associated with false stories in the past, I paused.
“Now,” I said, “You could actually stop here. You saw a tweet, got mad, hovered over the source and found out this wasn’t a great source. So maybe it’s not worth any more of your attention. Shut down the phone and see what’s on Netflix, or go outside and get some sun. You have no obligation to waste good attention on bad sources. But if you wanted to dig deeper…”
And then I continued on, showing all the other ways you could approach it. At the end of the class, a number of students came up and asked various questions. One student was hanging back, not saying anything.
“Was any of that useful?” I asked, self-effacingly, trying to draw her into the conversation.
“Yeah,” she said. “That part where you said we could stop there? The Netflix thing? No one ever says that. It feels almost wrong.”
“You think it’s wrong?” I asked.
“Oh, no,” she said, “I agree! I’ve just never heard it in a classroom.”
There’s a wealth of opportunities to pursue as we think about what “information literacy for mortals” might look like. But if we could start here, with that comment, we might make some progress.
Endnotes
- Julie Compton (September 7, 2018), "How much house can you afford? The 28/36 rule will help you decide," NBC News, https://www.nbcnews.com/better/pop-culture/how-much-house-can-you-afford-28-36-rule-will-ncna907491
- "Gerd Gigerenzer" (September 18, 2021), Wikipedia, https://en.wikipedia.org/wiki/Gerd_Gigerenzer
- Gerd Gigerenzer and Wolfgang Gaissmaier (January 2011), "Heuristic decision making," Annual Review of Psychology, 62(1), 451-82, https://doi.org/10.1146/annurev-psych-120709-145346; https://www.researchgate.net/publication/49653132_Heuristic_Decision_Making
- Gerd Gigerenzer (2008), Rationality for mortals: Risk and rules of thumb, Oxford University Press, https://www.worldcat.org/title/rationality-for-mortals-how-people-cope-with-uncertainty/oclc/1036237068&referer=brief_results
- Mike Caulfield (May 12, 2019), "Introducing SIFT, a four moves acronym," Hapgood, https://hapgood.us/2019/05/12/sift-and-a-check-please-preview/
- Dimitri Pavlounis, Jessica Johnston, Jessica E. Brodsky, and Patricia J. Brooks (November 2021), The digital media literacy gap: How to build widespread resilience to false and misleading information using evidence-based classroom tools, CIVIX Canada, https://ctrl-f.ca/en/wp-content/uploads/2021/11/The-Digital-Media-Literacy-Gap.pdf
- Sam Wineburg and Sarah McGrew (October 6, 2017), Lateral reading: Reading less and learning more when evaluating digital information, Stanford History Education Group Working Paper No. 2017-A1, https://dx.doi.org/10.2139/ssrn.3048994
- Overfitting (2021), Techopedia, https://www.techopedia.com/definition/32512/overfitting
- Hongbin Wang, Jiajie Zhang, and Todd R. Johnson (2000), "Human belief revision and the order effect," Proceedings of the Annual Meeting of the Cognitive Science Society, 22, https://escholarship.org/uc/item/3wb4r7kf
- Jessica E. Brodsky, Patricia J. Brooks, Donna Scimeca, Ralitsa Todorova, Peter Galati, Michael Batson, Robert Grosso, Michael Matthews, Victor Miller, and Michael Caulfield (2021), "Improving college students’ fact-checking strategies through lateral reading instruction in a general education civics course," Cognitive Research: Principles and Implications, 6(1), 1-18, https://link.springer.com/article/10.1186/s41235-021-00291-4; https://doi.org/10.1186/s41235-021-00291-4
- Sam Wineburg and Sarah McGrew (2019), "Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information," Teachers College Record, 121(11), 1-40, https://cor.stanford.edu/research/lateral-reading-and-the-nature-of-expertise/
- Herbert Simon (1971), “Designing organizations for an information-rich world,” In M. Greenberger (Ed.), Computers, communications, and the public interest, Johns Hopkins Press, https://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=33748
- Mike Caulfield (February 4, 2019), "Attention is the scarcity," Hapgood, https://hapgood.us/2019/02/04/attention-is-the-scarcity/
- Michael Strevens (2020), The knowledge machine: How irrationality created modern science, Liveright, https://www.worldcat.org/title/knowledge-machine-how-irrationality-created-modern-science/oclc/1233268650&referer=brief_results
- Mason Walker and Katerina Eva Matsa (September 20, 2021), “News consumption across social media in 2021,” Pew Research Center, https://www.pewresearch.org/journalism/2021/09/20/news-consumption-across-social-media-in-2021/
- Alison J. Head, John Wihbey, P. Takis Metaxas, Margy MacMillan, and Dan Cohen (October 16, 2018), How students engage with news: Five takeaways for educators, journalists, and librarians, Project Information Literacy Research Institute, https://projectinfolit.org/publications/news-study/
- Anastasia Kozyreva, Stephan Lewandowsky, and Ralph Hertwig (2020), "Citizens versus the Internet: Confronting digital challenges with cognitive tools," Psychological Science in the Public Interest, 21(3), 103-156, https://doi.org/10.1177/1529100620946707
- Homero Gil de Zúñiga, Brian Weeks, and Alberto Ardèvol-Abreu (2017), "Effects of the news-finds-me perception in communication: Social media use implications for news seeking and learning about politics," Journal of Computer-Mediated Communication, 22(3), 105-123, https://doi.org/10.1111/jcc4.12185
- Chandra Prabha, Lynn Silipigni Connaway, Lawrence Olszewski, and Lillie R. Jenkins (2007), "What is enough? Satisficing information needs," Journal of Documentation, 63(1), 74-89, https://doi.org/10.1108/00220410710723894; pre-print: https://www.webjunction.org/content/dam/research/publications/newsletters/prabha-satisficing.pdf
- Borchuluun Yadamsuren and Sanda Erdelez (2010), "Incidental exposure to online news," Proceedings of the American Society for Information Science and Technology, 47(1), 1-8, https://doi.org/10.2200/S00744ED1V01Y201611ICR054
- Joshua DiPaolo (2021), "What's wrong with epistemic trespassing?" Philosophical Studies, https://doi.org/10.1007/s11098-021-01657-6
- Jutta Haider and Olof Sundin (December 14, 2020), "Information literacy challenges in digital culture: Conflicting engagements of trust and doubt," Information, Communication & Society, 1-16, https://doi.org/10.1080/1369118X.2020.1851389
- Marc Meola (2004), "Chucking the checklist: A contextual approach to teaching undergraduates Web-site evaluation," portal: Libraries and the Academy, 4(3), 331-344, http://dx.doi.org/10.1353/pla.2004.0055
- Devon Greyson (2018), "Information triangulation: A complex and agentic everyday information practice," Journal of the Association for Information Science and Technology, 69(7), 869-878, https://doi.org/10.1002/asi.24012
- Josh Landy, Ken Taylor and Gloria Origgi (June 27, 2021), “Does reputation matter?” Philosophy Talk, https://www.philosophytalk.org/shows/does-reputation-matter
- Wineburg and McGrew (2017), Lateral reading, op. cit.
- Gordon Pennycook, Tyrone D. Cannon, and David G. Rand (2018), "Prior exposure increases perceived accuracy of fake news," Journal of Experimental Psychology: General, 147(12), 1865-1880, https://doi.org/10.1037/xge0000465; https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6279465/
- Priyanjana Bengani (August 4, 2020), “As election looms, a network of mysterious ‘pink slime’ local news outlets nearly triples in size,” Columbia Journalism Review, https://www.cjr.org/analysis/as-election-looms-a-network-of-mysterious-pink-slime-local-news-outlets-nearly-triples-in-size.php
- Elizabeth Kocevar-Weidinger, Emily Cox, Mark Lenker, Tatiana Pashkova-Balkenhol, and Virginia Kinman (2019), “On their own terms: First-year student interviews about everyday life research can help librarians flip the deficit script,” Reference Services Review, 47(2), 169-192, https://doi.org/10.1108/RSR-02-2019-0007; https://hcommons.org/deposits/view/hc:38932/CONTENT/on_their_own_terms.pdf
- Brodsky et al. (2021), "Improving college students’ fact-checking strategies," op. cit.
- Pavlounis et al. (2021), The digital media literacy gap, op. cit.
- Civic online reasoning, Stanford History Education Group, https://cor.stanford.edu/