
The STAIR Method:
Socio-Technical AI Reflection
AI is not just a new digital tool in our work. It represents a deeper transformation that challenges how we structure knowledge, define roles, and create value together. It challenges who we are professionally, and it prompts us to continuously reflect on the role AI and technology play in our lives.
The STAIR Method is an open-source, research-based framework designed to help organizations approach AI. Rather than viewing AI as a one-time technical upgrade, the STAIR Method sees AI as an ongoing socio-technical transformation that requires continuous learning, participation, dialogue, and critical reflection across disciplines. As such, STAIR provides a structured yet flexible setup that includes reflection guides, facilitation tools, and shared principles.
In short, STAIR helps organizations, individuals, and teams move beyond a mindset of control and towards a culture of curiosity, experimentation, critical thinking, and ethical responsibility. It’s not just about implementing AI; it’s about living with AI.
“When the conversation is conducted in this way — using the STAIR Method — a completely different attitude and enthusiasm for technological change emerge.” – Rikke Schriver Nielsen, Strategy and Innovation Officer, Rigshospitalet’s Hearing and Balance Center.
The Core of STAIR: Reflection as a Practice
STAIR is built on the premise that continuous reflection is essential for responsible AI use. It is not a one-time assessment but an ongoing process that should be embedded in organizational decision-making. At its core, STAIR is guided by socio-technical principles that encourage professionals and organizations to ask critical questions about AI’s role in their work.
These principles can be adapted to fit different contexts and organizations. For example, the principles might emphasize:
- Value Creation – AI should demonstrably enhance work processes and contribute meaningfully to organizational goals.
- Ethical and Legal Alignment – Clear frameworks must be in place to guide responsible AI use.
- Experimentation and Learning – AI adoption should support continuous learning and adaptation.
- Competence Development – Employees must have the skills to engage effectively with AI tools.
- Autonomy and Accountability – AI should augment, not replace, human agency in decision-making.
- Social and Relational Considerations – AI should not erode workplace collaboration or professional identity.
- Enhancing Creativity and Expertise – AI should support rather than diminish professional skill and innovation.
- Ongoing Ethical Reflection – AI use should be continuously evaluated in relation to ethical norms and societal impact.
What is STAIR?
STAIR (Socio-Technical AI Reflection) is a methodology designed to help organizations navigate the complexity of AI adoption through structured reflection. Unlike traditional AI governance frameworks that focus primarily on compliance and risk management, STAIR can be used by all professions within the organization, as it provides a dynamic, proactive, and exploratory approach to integrating AI. Developed through years of socio-technical research and real-world case studies in Denmark, STAIR acknowledges that AI is a transformative force that reshapes workflows, decision-making, and professional roles.
Rather than attempting to stay ahead of AI’s rapid technological evolution, STAIR enables organizations to stay ahead in critical reflection, ethical considerations, and informed adaptation.
Why is STAIR Necessary?
Generative AI and other AI-driven technologies are increasingly embedded in workplaces, enabling new efficiencies but also raising complex challenges. These include:
- Blurred professional boundaries – AI can automate tasks traditionally requiring human expertise, challenging existing roles and responsibilities.
- Ethical, quality, and legitimacy concerns – AI outputs can be biased, misleading, or inconsistent with professional standards, requiring continuous oversight to ensure accuracy, trustworthiness, and alignment with organizational values.
- Shifting organizational dynamics – AI integration changes collaboration, decision-making, and workplace culture in unpredictable ways.
- Loss of human agency – Without structured reflection, AI can drive decisions without sufficient human oversight or accountability.
STAIR addresses these challenges by providing a structured yet flexible framework that ensures AI integration aligns with organizational values, professional expertise, and human autonomy.
Who Can Use STAIR?
- Leaders and decision-makers seeking to ensure AI aligns with strategic priorities and ethical guidelines.
- AI practitioners and developers who want to integrate human-centered perspectives into AI design and deployment.
- Regulators and compliance officers looking for a structured yet flexible approach to AI governance.
- Employees and knowledge workers using AI in daily tasks and seeking clarity on its role, limitations, and impact.

A New Way to Navigate AI
As AI continues to reshape work and decision-making, the ability to critically engage with technology is more important than ever. STAIR provides a structured yet flexible methodology that empowers organizations to reflect, adapt, and integrate AI responsibly, not through rigid control mechanisms but through continuous learning and thoughtful engagement. By embedding socio-technical reflection into AI adoption, STAIR helps ensure that AI serves people, professions, and organizations, rather than the other way around.
