Cat Hicks

Five Things I’ve Learned in Ten Years of being a Social Scientist in Tech

I’ve been doing work in tech for ten years. Ten years ago I didn't say much about leaving academia, even though it was a big choice that held a lot inside it. I’ve also recently committed to a new research agenda, which has been making me think a lot about all the reasons I do applied research. It is, incidentally, my birthday.


All this made me feel like there was a piece, a thought, a current underneath my surfboard this week, but it was difficult to figure out what. Assigning lists and numbers in order to reify difficult things felt appropriate for talking about social science. This post is therefore Five Things I’ve Learned in Ten Years of being a Social Scientist in Tech.


But this post might also be about:

  • Ships of Theseus

  • The eternal struggle of working on hard human things

  • Evidence science as a valuable thing to do if only they let you do it

  • Celebrating ten years of doing it anyway


Learning 1. We keep saying data when we mean measurement


I sometimes describe applied social science as doing measurement work on big human experiences. People like to call everything data. Data is a thing, but measurement is an action. I have learned it is important to describe what you do as much as what you aspire to accumulate.


Caring about how we measure people has been a driving focus all my life. It was not a privileged entrance. My first experiences with data collection systems were defined by absence: a single parent crying over a form that did not have the right box to check for our situation, a school admissions form that expected data I did not have, apartment applications written by people who thought they could be lie detectors and single parent detectors. I didn't have to learn that measurement was constructed because my first experiences with it were seeing it fail.


Doing the numbers on the measurement, though--the math of statistics, the jargons people use to believe evidence--that did not come naturally to me. Now, I feel the gift of this. It had a causal influence on why I do the work I do. I pull prediction apart more than I take it as a given.


Ten years into being an applied researcher, I have found this more useful than taking a lot of math classes. In a world where access to opportunity hinges on other people’s records about you more than your belief that you did something, a lived experience that samples this reality is better than a column in a toy spreadsheet. I spent a lot of time in ugly buildings arguing about the gap between what really happened and what was measured and recorded. That gap captured my interest, first as a form of survival, then as a form of protecting other people.


The funny thing is that ten years of applied research has actually taught me a lot of math, which is cool because you realize that math is not one thing; like they say about curing cancer, many situated solutions are required. Perhaps you can get a lot done by getting good at just a few of them. It has been a powerful and transformative realization. I wish that I had been told earlier that we were so often talking about measurement when we said data. I wish our fields would see this path.


Unfortunately I can also say with some certainty at this point that the majority of the tech industry doesn’t value working on the measurement of humans. What is paradoxical is that most people certainly want to use the data that results from the work. But once you’ve created good data, people question whether you did anything at all.


It has always been interesting to me that people doubt the impact of social science data and the power of collecting it, even though we all talk about it all the time. “You can really study learning? You can really study collaboration? You can really measure any of that?” Hey, you brought it up, I’m just trying to create some tangible help for the stuff we’re all going to talk about anyway!


But also, as long as we admit we're dealing with human experiences anyway, people should question hard things and push our work to be better. I don't mind that. If you want to criticize my methods, I'll agree, and then say great, I'm glad we agree research will take more time than you want. Being a researcher in any field means being at the edge of comfort and the edge of understanding. Being a social science researcher means the edge yells at you a lot, that's all.


Perhaps people have such strong reactions to social science precisely because it feels like it is ours more than other sciences. Social science topics are cocktail party topics, waiting for the bus topics, talking to people at the grocery counter topics. They are nebulous and messy but that makes them more real to me, not less real. They are personal even when they are niche. They are us. At this point in the work, I’ve learned you have to love the edge because it yells back.


Learning 2. Specificity is ok, qualitative is ok


Generalizability matters. Ten years into doing applied work, I bring the question “hey, so, do we believe this change will also happen in another context?” to every project I work on.


But, a thing about being an applied social scientist in tech is this: I have realized that it is profoundly helpful to do two things that traditional academic social science gets very angsty about: be specific and value qualitative insight. This also makes it much more probable you'll get interesting findings. Granted, they will be interesting to an audience with different constraints or caveats. But at this point I think about trading one kind of portability for another. In applied work, specificity matters.


Only caring about getting your story as far as you can throw it risks flattening a thing so badly it loses all validity. I’ve learned that doing applied research means you need to use the tools of academia without giving in to all of its anxieties about generalizability. Generalizability does matter to us too. Applied researchers need to be accessible and generalizable in some ways to have impact, and that often means using clear, broad language and focusing on portable findings, tractable effects that will reliably show up in your field of contexts.


But I think it is a mistake to assume that specific exploration can’t be portable. Or that qualitative insight can’t have transformative impact. For the last few years I have committed to talking more about qualitative methods. This has had strong results in helping people use research at all. This has also gotten me accused of ruining science by an academic here and there which is very funny given all the time I spend, being applied, on measuring the things that actually happened as opposed to things I imagined could’ve happened.


On specificity, another thing I’ve learned is we should all spend more time documenting the context of findings. Look, speaking from outside academia, the academics need to care about this the most. I try to build on methods from academic studies all the time and if I, a methodologist who reads research papers in five fields over weekend coffee, can't figure out exactly what you did, who can? I'd love to see more questions devoted to the particular situations of your subjects, and more interactions as the unit of analysis.


Academia or industry, everyone wants to pay for prediction when we need more observational work. One of the most remarkable experiences I have as a researcher is when I carve out enough space in the context that other people point out interesting findings I didn't even know how to see.


Learning 3. Ships of Theseus & Zombie Luggage


Discipline labels get shipped out into the world and used as passports and identity markers for various forms of work. But we don't have the luxury of all that "so what did you major in, I can't believe you didn't read X theorist" in applied research. I have learned that solving problems in the world with research means learning anywhere, from any field. Having worked in education long enough, I also know that these shibboleths of discipline membership are hardly uniform. Individual differences are larger than group differences in many cases--this is a thing interdisciplinary people know.


People ask sometimes why I say social scientist and not psychologist. 1) people don't know what psychology is, 2) I have taught myself so many things and am using so many tools & methods that I refuse to be bound by the baked-in electric fence of that one degree, and 3) I have started to see the entire concept of a research field as a politely maintained fiction, a ship of Theseus–constantly swapping out conventions and evidence over time and generations, using new jargon and old structures. Like measurement over data, I have learned in applied research it is helpful to focus on the thing you’re doing and not the name you’re being called.


Methods are also ships of Theseus. Take the humble survey. When you ask people how to measure things about humans, they typically say two things: “you can’t” and then “surveys” (this gives you some idea of how we feel about surveys).


When I was a research fellow in a design lab, I began to work with large-scale MOOC data from a course that my collaborators ran online. Large in this case referred to the number of individual students–far more than would fit in a lecture hall–but it also referred to the number of clickstream-style datapoints we could talk about. Student behavior over time is quite large even if you are only looking at one student. But we also had smallish data: survey responses from before and after the experience.


I wondered about the implementation of those surveys. They were embedded in a course, which meant they were, in a sense, a question being asked by a teacher. Is that different from a question asked by a researcher? Is a survey different when it is a website? Perhaps “survey” refers to “items.” Once again, I realized that one of the things I would work on in this job was defining our unit of analysis. This is true for methods, not just math; the thing I began distinguishing was not whether a survey question lived on a platform or on paper but whether we were asking people to self-report, and on what level of their experience.


But interestingly, people were also self-reporting on things that we didn’t call surveys. We called them metrics. For example, people could indicate whether they’d finished a certain piece of content. On another platform I saw, a piece of research HCI software, people could easily zoom a video forward and game the metric of “video completion.” Many students played videos on 10x speed, a form of “engagement” that one might recognize from one’s own “engagement” in required trainings.
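
To make that gaming concrete, here is a minimal sketch, mine and not from any platform mentioned above, of how a naive “video completion” metric counts a 10x speed-run as engagement while a version that also checks wall-clock watch time does not. The field names and the threshold are hypothetical.

```python
# Toy sketch: hypothetical playback records, used only to illustrate how a
# "completion" metric can be gamed by speed-watching. Field names are invented.
from dataclasses import dataclass

@dataclass
class PlaybackSession:
    video_length_sec: float   # length of the video content
    seconds_played: float     # wall-clock time the player actually ran
    reached_end: bool         # did playback hit the final frame?

def naive_completion(s: PlaybackSession) -> bool:
    # The metric as often defined: the video reached the end, full stop.
    return s.reached_end

def watch_time_adjusted(s: PlaybackSession, min_fraction: float = 0.5) -> bool:
    # A more skeptical version: the end was reached AND the viewer spent at
    # least some minimum fraction of the video's length actually watching.
    return s.reached_end and s.seconds_played >= min_fraction * s.video_length_sec

# A student plays a 10-minute video at 10x speed: about a minute of real time.
speed_run = PlaybackSession(video_length_sec=600, seconds_played=62, reached_end=True)

print(naive_completion(speed_run))      # True  -- counted as "engagement"
print(watch_time_adjusted(speed_run))   # False -- filtered out
```

Neither version measures learning, of course; the second just declines to confuse reaching the last frame with watching.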


I have learned that as soon as something is a metric, it becomes a positivistic proof of human behavior. Pay attention and you will see these zombies everywhere. A “survey” of people in a certain job was actually a scrape-and-count of profiles on a social media website. A “behavior” on a tool was actually a user deciding to check a box (isn’t that self-report?). People trust metrics even when they are bad; they doubt surveys even when they are good.


The point of all this is that concepts can be suitcases full of other concepts. Be careful about sticky disciplinary labels and what luggage you pick up because you might have to lug it everywhere.


Learning 4. Is Social Science any good?


When I was first doing research in tech there was a pretty big trend of calling everything “X….for good!” “Machine Learning for Good!” “Data for Good!” “HCI for Good!”


This always had big “a lot of questions already answered by my shirt” energy to me. I used to make an Amelia Bedelia kind of joke that people were saying the “for good” meant they were done with that thing.


Nevertheless I ask myself what good my work is a lot. Even though I don’t believe disciplines are real things, I think that every research discipline should ask itself what "it" is doing in the world. Ten years into being a social scientist in tech, I reflect on:

  • Once, a project I led gave rise to unexpected findings about how more support was needed for people in the middle of changing jobs and highlighted a miscalculation about how much time was being taken away from them. It was a very simple finding with a big impact on a lot of people. Giving people an extra hour matters.

  • Once upon many times, in multiple rooms, I have argued, to varying degrees of success, that human attributes matter, that they are complex to measure, that they need to be analyzed with intersectionality, and sometimes that they should not be analyzed at all. Being a social scientist has given me some measure of authority in saying we need to talk about experience if we are using individuals to sample probabilities of their experience. Something I might share after ten years of this is that witnessing people admit that this complexity exists, and that it is a category of technical work, feels as important as winning any particular project logistic at any particular time.

  • For the past three years I have organized a small “career brunch” for women I know, in science, in industry, high achieving, amazing people. In nearly every one of those brunches people have shared and strategized around how to deal with harassment and sexism. Social science does not solve this but it does give me tools to value creating this space, listening, and language for our shared experience. It has also helped me help people increase their salaries.

  • Sometimes I read research papers from social science and then I send them to a person who is suffering and I can say I don't know how to fix all this but what you're experiencing is real. Sometimes I am able to take data from those papers and put it where it will change an action that is hurting someone. In this, I feel a healing kind of deja vu. The imperfect measurement of human experience coming back into my life, but I have power over it instead of it having power over me.



Learning 5. Evidence Science


One of the most difficult parts of remaining a social scientist in tech over the past ten years–and actually, I am including the global pandemic in my scale of difficulty–has been dealing with people telling me to be something else, be anything else.


Everyone wants evidence but few want the science it takes to create evidence. As best I can tell, most places want to live in the murky middle space of convincing themselves they have evidence but also not knowing how to act on it.


I decided to start calling our work a form of evidence science and I find it very meaningful. Evidence science applied to human experience. I like it when people succeed and that includes organizations succeeding at using evidence. I do not think we should take this happening for granted. I like creating evidence, and I like evaluating whether evidence is fit for purpose given the constraints at hand, and I like working on the systems of human and environment interaction that define our most meaningful experiences. I like topics like how people learn and whether people feel safe talking about mistakes. To me this meant being an applied social scientist.


But it has not seemed to mean this to many people in tech. Hence the need to experiment with so many new words and labels. The last ten years have felt sometimes like a constant treadmill of updating the labels other people will believe. I often get questions like: what does your deliverable look like? What is useful about your study? What is the difference between social science and engineering? Can a survey really teach us anything? Can we measure learning? Can we measure happiness? Why haven’t you just become a data scientist? Why haven’t you just become a UX person? Why do you keep saying that these research skills are technical when we don’t have a job category for the skills you’re describing? For god’s sake, why do you keep doubting our metrics?


But here are some questions I ask in return: when has a story about humans changed the way you understood something? Was there a time that you had an experience and you knew that it was measured in a way that didn’t make sense? Do you do things in your own life, right now, to try to create happiness? Do you think we are measuring what matters?


I recently asked on twitter what people thought of when they saw the phrase “social science data.” (this is a leading question on a desperately constrained selected sample in an already highly constrained digital space likely meeting with systematically differing levels of people’s tendencies and capabilities to opt-in to answer subject to unknown surface currents in whether the content was surfaced to my followers--tangent, do you see how a thing can be unscientific and yet the mapping of that unscience can teach us about doing science? mutatis mutandis, being an applied researcher in the world). People answered in ways I expected. But there were some surprises. A stronger focus on surveys than I imagined. A general idea that demographic variables, identity questions, are what make social science data.


I am not insensitive to the idea that we need our outputs to be clearer, our deliverables to match with people's needs, that our lane is under construction. Ten years into this work, I consider it my job to build that lane.


But even among discordant responses, an idea usually emerges that I find matches my own worldview: it's about social science questions, not social science data.

Social science questions to me mean taking interactions and environments seriously. Social science questions let the unit of analysis be human experience. Social science is a loose umbrella term for disciplines that none of us really stay in forever, nor do we experience the same discipline over time, so social science questions are a ship of Theseus moving over an unmapped ocean of experience. Social science grappling with measurement taught me to think about it that way. Social science is perhaps a thing that you major in for a while or even do a whole PhD in and then go out into the world and try to recreate in your own team, your own project, your own moment with a participant.


There is a tendency for any and every discipline to fall into something I call World Saving Syndrome. This is when a bunch of people under a certain job label claim that their thing and their thing alone could fix everything. I try to avoid doing this. I don’t think the world is saved by social science alone. But I do think maybe each of us is looking for something to save our particular world. I am glad to find that social science is still saving mine.
