The New Developer:
AI Skill Threat, Identity Change & Developer Thriving in the Transition to AI-Assisted Software Development
Authors
I was the lead author; the supporting authors were Dr. Carol S. Lee (lead of the main statistical models) and Kristen Foster-Marks.
Summary
In this quantitative observational study, we share original empirical research with 3,000+ software engineers and developers across 12+ industries engaged in the transition to Generative AI-assisted software work. We bring a human-centered approach to pressing questions that engineering organizations are facing about the rapidly changing possibilities of AI-assisted coding. How are developers affected by changing demands on their roles? Where might there be emerging equity & opportunity gaps in who has access to these new development capabilities? What are the risks to the quality of technical work, and to the developer productivity, thriving, and motivation that drive that technical work?
Understanding failure to thrive: AI Skill Threat. From this work we present a new evidence-based framework to help developers, engineering managers, and leaders as they grapple with failure to thrive in the transition to AI-assisted work: AI Skill Threat. AI Skill Threat describes developers’ fear, anxiety, and worry that their current skills will quickly become obsolete as they adapt to AI-assisted coding. Our framework also predicts when and why AI Skill Threat emerges: engineers who hold strong beliefs in competition and in demonstrating “innate brilliance” are more likely to report it.
A path forward: Developer Thriving centers the human innovation of developers. Our framework also helps answer what engineering leaders, teams, and developers can do about AI Skill Threat. We show empirical evidence that software teams’ investment in key psychological affordances that support developers' thriving – Learning Culture and Belonging – strengthens both individual and team resilience.
Emerging risks & evidence for equity and opportunity in the transition to AI-assisted coding. Our research also documents important emerging group differences in developers’ experiences with and perceptions of AI-assisted coding tools. AI Skill Threat is higher for Racially Minoritized developers, who also rate the overall quality of AI-assisted coding tools significantly lower than developers overall. Both female developers and LGBTQ+ developers were significantly less likely to report plans to upskill in new AI-assisted workflows. These and other emerging differences point to a critical need to understand how organizations can ensure that new AI-assisted coding workflows are truly vetted for their effects on quality and teams, that AI-assisted coding tools are used in ways that are equitable and accessible, and that key insights are heard from developers with important perspectives on the risks of these tools.
A Generative-AI Adoption Toolkit. We accompany our novel empirical findings with a Generative-AI Adoption Toolkit – free, adaptable, research-backed resources to help practitioners increase learning and belonging within their own organization’s engineering rituals. In this resource, we provide facilitation guides and an assessment tool that offers a practical, abbreviated version of our newly validated scales, which software teams can use to measure their own AI Skill Threat, learning, and belonging, and to track pre- and post-changes in these critical measures.
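For teams that want to automate the pre/post tracking described above, the sketch below shows one way it might look in practice. It is a minimal illustration only, not the toolkit's actual instrument: the construct names, the 5-point Likert response format, the simple mean scoring, and all function and variable names are assumptions made for this example.

```python
# Minimal sketch: scoring a short pre/post team survey.
# Assumptions (illustrative, not the toolkit's scales or scoring rules):
# three constructs, 5-point Likert items (1 = strongly disagree ... 5 = strongly agree),
# and simple mean scoring per construct.

from statistics import mean

# Construct names are placeholders for illustration.
CONSTRUCTS = ("ai_skill_threat", "learning_culture", "belonging")

def score_response(response: dict[str, list[int]]) -> dict[str, float]:
    """Average the item ratings within each construct for one respondent."""
    return {c: mean(response[c]) for c in CONSTRUCTS}

def team_scores(responses: list[dict[str, list[int]]]) -> dict[str, float]:
    """Average construct scores across all respondents in one survey wave."""
    scored = [score_response(r) for r in responses]
    return {c: mean(s[c] for s in scored) for c in CONSTRUCTS}

def pre_post_delta(pre: list[dict[str, list[int]]],
                   post: list[dict[str, list[int]]]) -> dict[str, float]:
    """Change in each construct between two waves (post minus pre)."""
    pre_s, post_s = team_scores(pre), team_scores(post)
    return {c: round(post_s[c] - pre_s[c], 2) for c in CONSTRUCTS}

if __name__ == "__main__":
    # Toy data: two respondents per wave, three items per construct.
    pre = [
        {"ai_skill_threat": [4, 4, 5], "learning_culture": [3, 2, 3], "belonging": [3, 3, 4]},
        {"ai_skill_threat": [5, 4, 4], "learning_culture": [2, 3, 3], "belonging": [4, 3, 3]},
    ]
    post = [
        {"ai_skill_threat": [3, 3, 4], "learning_culture": [4, 4, 4], "belonging": [4, 4, 4]},
        {"ai_skill_threat": [3, 4, 3], "learning_culture": [4, 3, 4], "belonging": [4, 4, 5]},
    ]
    print(pre_post_delta(pre, post))
```

In practice, a team would substitute the toolkit's actual items and scoring guidance; the point is simply that the same short instrument, administered before and after a change in AI-assisted workflows, makes shifts in these measures visible.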