In my work as an applied researcher, a few big truths have come up again and again. They aren’t the hardest concepts to understand, nor the most technical and nuanced problems in stats, nor the most tangled strategic insights. They’re simple, but they keep proving important, so I keep these reminders ready at hand when I think about any research project, strategic initiative, or real-world problem. Things change over time. Think about interaction effects. Examine operationalizations. Do the documentation before you think you need to.
And for my work, this is a big one: be mindful of deficit thinking.
Deficit thinking is interpreting an observed difference as resulting from failing, lack, absence: that’s the deficit. The thinking part is when this calcifies into an interpretation that we apply to everything. It is individualistic, oblivious to systems, and it turns into bias when we apply it unevenly. We usually apply it unevenly. The “different” group is only allowed to exist as a lesser version of the default group.
It’s not a new term or a new concept. There’s a ton of work on this idea in education. Deficit thinking, and the interrelated pattern of using individualistic over-explanations for structural problems, have been explored deeply by people better at this than me. We are more likely to give environmental explanations for privileged groups and deficit explanations for marginalized groups. The consensus is that deficit thinking from people in power and from organizational cultures is damaging, inaccurate, and distortive.
Yet in tech, generally speaking, these years of conversation about deficit thinking have simply not made inroads. It’s not that tech isn’t interested in social science. In my experience (and contrary to a lot of pushback I’ve gotten over the years from well-meaning and disparaging academics), tech is very interested in social science. But the translation is difficult for many reasons, not least of which is that there aren’t enough of us in tech, and that we get siloed from each other. Another reason is that the translation happens haphazardly.
There are many frustrations inherent to watching what concepts do and don’t gain traction outside of academic social science research (“nudges” caught on like wildfire for some reason), but this is one of my biggest, most of all with any tech that enters the realm of education. Despite being enthusiastic about optimistic concepts like growth mindset and grit, despite many words spilled in tech celebrating the idea that people can overcome and past does not determine potential, tech tends to airlift in only the most individualistic versions of these ideas. Growth mindset is translated in tech as a concept only about individual perseverance, never about an organizational culture’s attitudes towards failure and mistakes. They have mindsets. We have “truth” and “data.”
I think a lot of people in tech want something different. We don’t always have words for it, but it’s there. It’s why it’s so grating when an organization puts up flyers saying “anyone can learn anything!” while they won’t even look at resumes from people who don’t have a degree from a tiny list of supposedly top schools. It’s why we wince when we see products claim that they are going to swoop into the world’s most complex issues, health or education or civic action or justice systems, to “educate” and “fix” and “save.” And it’s one of the reasons we mismeasure so much in this work, in my experience. Operating from a deficit lens, we assume that success can only be measured when people look and act like us. Operating from a deficit lens, we discard achievements that would contradict that narrative. Operating from a deficit lens, we already have a biased causal narrative in mind about exactly why we see something that we’ve already decided is failure.
It’s a lot of confidence in objectivity where none is warranted: failure isn’t even objective. I use this example a lot when I give talks about my work, and it always resonates with people. I’ll say it again here: a student struggling under incredibly difficult adverse circumstances, who gets a C grade in a math class, is giving us a fundamentally different measure than a student facing no adversity at all, who also gets a C grade in a math class. Often we’re comparing the wrong things altogether. Under a strengths-based lens, we might start to realize that the student who got a C grade in a math class still showed up to that math class instead of dropping out of school. We might begin to question how useful a grade is as a measure of the achievement of a student who is taking care of younger siblings while their single parent works. We might consider how enormous other definitions of success really are: showing up at all. Trying. But this also forces us to localize the problem outside of individuals, and often, onto ourselves.
It remains both interesting and deeply painful to me that tech has lifted half-concepts, or contextless concepts, from social science: the feel-good story of individual potential but not the critical importance of a sustainable architecture for that potential. When I say tech, I’m not saying “everyone” in this big, complicated industry. A lot of the people I’ve worked with in tech really do want to think about this, or have done their own thinking about this. But when I network in tech, when I talk to funders or VCs or heads of engineering, they often ask me about the mysteries of psychology. The vast majority only want to talk about some user, amorphous and distant. When they talk about doing social good with tech, they conceive of it as bestowing knowledge or skills onto the blank slate of an individual. They don’t think of it as inviting someone in, as a moment for us to learn what we might have been missing about a person all along, and giving that person the chance to show it.
Tech rarely wants to talk about communal, societal, and environmental psychology: the fact that we are all in it together, and that we need to learn things about our own thinking as much as scrutinize the psychology of people inside our products. Or more. And it’s a pity, because I think despite being an industry obsessed with impact, we are missing an absolutely enormous way that we could have impact.
e.g., Valencia, R. R. (Ed.). (2012). The evolution of deficit thinking: Educational thought and practice. Routledge.
e.g., Garcia, S. B., & Guerra, P. L. (2004). Deconstructing deficit thinking: Working with educators to create more equitable learning environments. Education and Urban Society, 36(2), 150–168.