Assorted (unsorted) thoughts about trauma-informed research 

Cat Hicks

Whenever I start a research project, I pull from my fiction-writing toolkit. One of the initiating questions I use to interrogate any narrative idea is whether I have enough worldbuilding to really do it. Your mind is a gestaltic genius: your mind will happily fill in the details in a kind of ambient fuzz, misleading you until you step off the ground and into the blank hole where there should be ground. In fiction, worldbuilding to me means thinking through enough systems, details, and causal relationships that you can construct a “coherent world.” When you do this, the story doesn’t feel like cardboard (or an invisible ground). In research, I believe we have to do a similar thing. It's part of building projects that move us forward--seeing your project as a piece of a bigger system. So I ask myself: have I looked closely at how this phenomenon, mechanism, or experience seems to operate in the world? Have I listened to people who are closer to it than I am? Have I pulled from diverse sources of knowledge, such as both scholarly analysis and lived experience? Have I asked where people just like me decided to study something just like this, and did it badly?


Worldbuilding to me is about setting the preconditions, the base rates, the “given this, what should I ask about?”


Here are some specific examples of information that may or may not fit into your worldbuilding when you think about human experience, particularly when we define “X Experience” in that proper-noun, product-like way, as in “User Experience” or “Developer Experience.” This information does fit into my worldbuilding:

If there is a math to it, the math of adverse experience is more complicated than any single statistic. One example is that traumatic events compound for marginalized people. Individual rates of traumatic events must be understood as happening over a career pathway, learning journey, and lifetime. There are probabilities of vicious cycles which interact to create something greater and more damaging than each individual event alone. We are not neutral observers randomly sampling our environment. 
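
To make the compounding concrete with purely illustrative numbers (mine, not data from any study): if a single kind of adverse event has, say, a 5% chance of happening in any given year, then over a twenty-year career the chance of experiencing it at least once is already about 64%, even assuming each year is independent. Raise the yearly exposure to 10%, as marginalization often does, and that figure climbs to roughly 88%; let one event increase the odds of the next, the vicious cycle, and it climbs higher still. A single-year statistic will always understate the lifetime reality.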

Trauma is everywhere. Everywhere, it feels to me, except in our research thinking in tech. 


Once, when I was an undergraduate psychology major determined not to become a clinician, I sat in class, alarmed, as one of our psych faculty said, "Any project in psychology, no matter what, has the ability to go to the hardest experiences a person has had."


When I started leading applied social science projects, I was saddened by how little applied researchers drew from our colleagues in clinical work, educational equity, and healthcare. The hierarchies of knowledge here have become a fascination to me. Once, in a talk, I cited a study about the experiences of nurses navigating stress; I was told it was insulting to ask an audience to imagine themselves learning from nurses, and that it would go over better if I talked about doctors, or surgeons.


My graduate training included standards set up for a university environment: how to navigate an IRB, how to debrief a bored college student who has sat in your lab for an hour. Since I worked in developmental social cognition, we had layers of other, important ethics considerations: making sure we obtained parental and school consent, sharing research back with every institution that partnered with us, and rigorous practices around oral assent and checking in with participants. I also lived through several methodological shifts in which we were all taught to use certain forms of measurement (e.g., clumsy, overconfident statistics) that were later critiqued, and we have all had to update our methods greatly, which often feels painful and full of shame; I recognize now, from an interdisciplinary vantage point, that it can be wonderful protection even to have standards to reference in the first place. I took a great many important lessons from these structures in research psychology. However, something research graduate programs rarely emphasize is how to take care of yourself as a piece of that ethics; how could they, when they want you to work like they want you to work?


However important, and I do believe they are very important, the above research protocols are attempts to control and predict and define. A complexity about research, and especially community-based research, that is less frequently discussed is the inherent unpredictability of the research moment itself. Research brings things out of people. In one study I led from my research consultancy, I was conducting interviews with people who were discussing relatively boring financial decision-making and reacting to a product prototype. Yet the process of qualitative research necessitates (for me) an aspect of unconditional positive regard, an acute listening. This can create painful surprises. Even though we were testing buttons and navigation, the woman I was talking to began to share about her relationship to money, which was also her relationship within her community, which was also her relationship with traumatic memories involving resources, need, and dehumanization by other people. Such moments can become triage rather than research, as we need to remind people of the boundaries of our relationship and ensure they’re fully consenting to share what they share, and, in the same moment, navigate our own boundaries and be mindful of what we can and can't provide. There are strict limits to what I feel a research session can and should experience. In this case, we successfully triaged rather than ending the session, and this woman deeply wanted to share some elements of her experience. What do you do with those moments, though? Labeling this “User Experience” felt like it would itself be a moment of fraud, a violation of research ethics, a dereliction of the duty of care you have as a researcher to represent the world, a duty that supersedes your specific duty to a specific project. 


I have resolved many times over that my research can never be accurately categorized by these "____ Experience" labels. Only certain things are legible and allowed in that bucket of Experience. I am interested in experience, and that's far more ambitious.


I do have some principles when I think about trauma-informed research design:

  1. Respect the ubiquitous nature of traumatic experience. Do your worldbuilding. Even when you think you are doing research in an area that isn’t “risky,” sensitive, or difficult for people, you might be. All research, no matter how mundane, takes place in this human landscape. At Catharsis, recognizing this has been an immense strength in our projects.

  2. Safety. Trauma-informed practice recognizes that the safe/unsafe frame fundamentally shifts what kind of interaction we are having, what kind of information we are receiving, and our long-term wellbeing. It is the researcher's responsibility to monitor this. Yet researchers also have a need for safety, and human limitations. We are taught poor, half-finished rules about safety, and we have all learned to do our work in systems that sometimes get a lot out of our ignoring our own instincts about this. No one, I think, will teach you about this in full. You have to make it your business to learn about safety. Read the literature, learn from the people doing harder work than you (like nurses).

  3. Co-design. A core need for human wellbeing is agency, and self-efficacy, and often rehumanization. Since we live in broken systems, it behooves us to understand not just the faults of the brokenness but also to take responsibility for steps of repair and rehumanization. Across every research project I lead, I am constantly thinking of ways to invite research participants to be “co-designers” of research, from treating consent as a back-and-forth “dialogue” throughout a session instead of just a box checked on a form, to asking them for feedback on how they were measured, to allowing multiple exit points from research participation. I believe that we need to study difficult things like experiences of threat at work or distress and anxiety, but we will also be limited in how we can study these things, and co-design with communities is one of the best ways I know to thread the needle between creating the evidence we need and making sure that we are operating, as researchers, in service to a community.

  4. The practice of always engaging with your own positionality is itself a rehumanization of you as a scientist.

  5. Honoring, and not discounting, the findings of difficult experience. There are many reasons that we might be incentivized to discount, discredit, and otherwise make invisible the realities of traumatic and difficult experience. Frequently, researchers feel pressure to deliver “representative” data to stakeholders, and to hypothesize about “ideal” scenarios. It is important to think about what we want to learn from, and not every research project has the resources or capacity to ethically manage collecting data on adverse experience. But trauma-informed research also allows us to recognize that we should not act like difficult experience has nothing to teach, share, and clarify. Often, by inviting difficult experience in and truly seeing it, we obtain a much truer picture of the world. At Catharsis, I decided that we needed to always challenge ourselves to listen to and allow space for unexpected, meaningful, but difficult stories in research. This has taught me to be ever mindful of whether “data cleaning” is really systematically removing adverse experiences, and to stop and question when we feel compelled to do this. Better still, I build explicit protection for such challenges: in the initial vetting of projects, we agree that we might surface distressing and challenging findings. Steps toward open science can be incredibly helpful for this: by moving analytic plans and project proposals into the light before a project begins, and by ensuring projects won’t move forward without consensus on how they’ll be evaluated, we make these decisions known and visible.
