Navigate AI
with reflection.

STAIR is an open-source framework that helps organizations integrate AI through continuous learning, participation, and critical reflection.

Socio-Technical AI Reflection
The Method

What is STAIR?

STAIR (Socio-Technical AI Reflection) is a research-based methodology for organizations navigating AI adoption. Unlike traditional governance frameworks focused on compliance and risk, STAIR engages all professions in structured reflection — ensuring AI serves people, not the other way around.

Developed through years of sociotechnical research and 18 months of real-world implementation in Danish public-sector organizations, STAIR acknowledges that AI is not just a technical upgrade. It is a human transformation.

See how it works in practice →
Why STAIR?

AI is not a project. It's a continuous transformation.

When new technology arrives, the instinct is familiar: define scope, roll it out, move on. But AI doesn't work that way. It evolves continuously — and so must your response to it. Classical change management and project-oriented digitization were designed for episodic change. AI demands continuous reflection, course correction, and joint optimization — where you don't just adapt people to the technology, but shape the technology around your values, your work, and your people.

STAIR exists because the old playbook doesn't fit the new reality.

Blurred Boundaries

AI enters the core of professional work — writing, analyzing, advising. It doesn't just change tasks. It challenges roles, identity, and expertise.

Ethical & Quality Risks

AI outputs can be biased, misleading, or misaligned with professional standards. Continuous human oversight isn't optional — it's essential.

Shifting Dynamics

AI changes collaboration, decision-making, and culture in ways no project plan can predict. Top-down rollouts miss what matters most.

Loss of Agency

Without structured reflection, AI quietly drives decisions. The question isn't whether you use AI — it's whether AI is using you.

Eight Socio-Technical Principles

The STAIR foundation principles

These eight research-based principles serve as a ready-made starting point for reflecting on AI. Use them directly — or let them inspire you to build your own principles tailored to your organization.

How It Works
01

Value Creation

AI should demonstrably enhance work — not automate for automation's sake.

Click to explore →

It is important to have a clear understanding of the value that Generative AI brings. This value can range from benefits for your recipients to benefits for you as a user, your colleagues, and your organization. Value might include increased productivity, higher quality, improved well-being, learning and development, professional expertise, and new competencies.

Reflection questions

  • Does AI create the desired value for your users, business, or task?
  • Is there a risk of losing value as well? Where? Are you willing to take that risk?
  • Is the value sustainable in relation to the effort?
  • What value does Generative AI create or remove for the employees who will work with it?
02

Ethical & Legal Alignment

Clear frameworks must guide responsible AI use at every level.

Click to explore →

Frameworks and guidelines serve as your ground rules for using Generative AI. What matters is that they exist, that you know them, and that you have access to them, as they contribute to confidence and security — for yourselves, your organization, and your recipients. They should be adjusted regularly to match evolving knowledge and experience.

Reflection questions

  • Are there guidelines in your organization for the use of Generative AI?
  • Do you know them and do you have access to them?
  • Is there a need to develop new guidelines specific to your professional area?
  • Are there situations in which you would never use Generative AI?
03

Experimentation & Learning

AI adoption requires continuous learning, testing, and adaptation.

Click to explore →

Generative AI is changing constantly and fast. It is therefore important to have ongoing opportunities to try out new possibilities, experiment, and learn — so that you can adapt along the way.

Reflection questions

  • How do you create a safe learning environment with the opportunity to share experiences?
  • How do you find the right technologies and resources to try them out?
  • Do you have a culture that allows experimentation with Generative AI?
  • Do you have the opportunity to drop a solution if it turns out to be inappropriate?
04

Competence Development

Employees must have the skills to engage critically with AI tools.

Click to explore →

Generative AI is a new technology that you have to learn how to use. This may mean setting aside time and resources for research, courses, subscriptions, and knowledge sharing. It can be difficult to know in advance which skills will be needed — introducing a technology can change both workflows and outputs.

Reflection questions

  • What competencies do you already have in relation to Generative AI?
  • How do you find resources to maintain, practice, or develop new skills?
  • How do you get a better understanding of the potentials and pitfalls?
  • What new competencies might a changed workflow require? Which skills may be less needed?
05

Autonomy & Accountability

AI should augment human agency in decision-making, never replace it.

Click to explore →

Generative AI can in many cases be used as a personal assistant — contributing considerations, analyses, and arguments. The challenge is that language models can hallucinate or carry certain biases, and that you may unwittingly delegate your decision-making authority to them. It is important to be aware of when Generative AI is the right tool for the specific task.

Reflection questions

  • Is Generative AI the right tool for the task at hand?
  • What do you do if someone feels pressured and reluctant to use Generative AI?
  • Who is responsible for solving the tasks? Have they been involved in the decision to use AI?
  • Should individuals have the option not to use the technology?
06

Social & Relational

AI must not erode workplace collaboration or professional identity.

Click to explore →

When we change workflows or solve tasks differently, it can affect relationships and social dynamics. Working with Generative AI can reduce or change knowledge sharing and the social aspects of task solving. It is important to be aware of these changes — you may need to compensate for them.

Reflection questions

  • Are there any special working methods or networks you want to preserve?
  • Do you get energy, ideas, or informal sparring from workflows that might be replaced?
  • Do you talk about which relationships are valuable in your work tasks?
  • How do you retain or strengthen them despite changes in task solving?
07

Creativity & Expertise

AI should support professional skill and innovation, not diminish it.

Click to explore →

Generative AI can summarize, structure, give feedback, or offer new angles on tasks. It can realize visual concepts and provide inspiration for content. It can often contribute to task solving and support people's professionalism and creativity. The key is to ensure that the technology acts as a contributor — strengthening professionalism and creativity rather than replacing them.

Reflection questions

  • What is your basic professionalism and core task?
  • Do you use Generative AI to support and strengthen that professionalism? Or does it replace it?
  • What responsibility do you not want to hand over to Generative AI?
  • Does Generative AI inhibit or contribute to creativity in task solving?
08

Ongoing Ethical Reflection

AI use must be continuously evaluated against ethical norms and societal impact.

Click to explore →

As a technology, Generative AI is in many ways opaque and difficult to control. It produces content based on probabilities, and the models have been trained on content that may have been created by other people — who have not given permission for their data to be used. Generative AI can create confusion and insecurity for users and recipients, especially if it is unclear why it is used and to what effect.

Reflection questions

  • Is it clear whether, and to what, Generative AI has contributed in an end product?
  • How do you avoid manipulating users or creating inequality in the user experience?
  • How do you ensure that users can distinguish AI-generated content from reality and stay in control?
  • Are there special ethical considerations for the citizens or beneficiaries involved?
Voices

What organizations are saying

Media & Podcasts

Listen & watch

Research

The research behind STAIR

2025

The Influence of Team Leaders on Well-being and Productivity During Unforeseen Digital Change

Louise Harder Fischer

Examines how team leaders influence employee well-being and productivity when digital change arrives unexpectedly.

Business & Information Systems Engineering (BISE) Read paper →
2025

Team Leaders' Influence on Well-Being during Unforeseen Digital Change in Knowledge Work – A Research Agenda

Louise Harder Fischer

A research agenda exploring the role of team leaders in maintaining well-being during rapid digital transformation.

ITU Copenhagen Read paper →
2025

Are We Ready for The New Reality of Scholarly Publishing? A Nordic IS Community Perspective on GenAI

Louise Harder Fischer et al.

Explores how the Nordic information systems community views the impact of generative AI on scholarly publishing.

Scandinavian Journal of Information Systems Read paper →
2025

Helping Leaders and Employees Navigate Generative AI Through Sociotechnical Reflection: The STAIR Method

Louise Harder Fischer, Sanna-Maria Marttila

Presents STAIR as a methodology for responsible, participatory AI integration. Based on Action Design Research within a Danish municipality.

ECIS 2025 TREOs Read paper →
2025

Tracing Human-AI Relations: A Participatory Approach to GenAI Integration in Creative Public Service Work

Sanna-Maria Marttila et al.

A participatory approach to understanding how generative AI integrates into creative public service work.

ServDes 2025 Read paper →
2024

Crafting Meaningful Generative AI-Enabled Knowledge Work

Louise Harder Fischer, Hanne Westh Nicolajsen, Sanna-Maria Marttila, Sunniva Sandbukt

Explores how GenAI can be integrated into knowledge work without diminishing professional meaning and job satisfaction.

ECIS 2024 Read paper →
2024

How Sociotechnical Reflection Influences Wellbeing and Productivity During GenAI Integration

Louise Harder Fischer

Examines how structured sociotechnical reflection affects well-being and productivity during AI integration.

CEUR Workshop Proceedings Read paper →
2024

Review for Future Research in Digital Leadership

Louise Harder Fischer et al.

An updated review charting the landscape and future directions of digital leadership research.

arXiv (open access) Read paper →
2023

Explaining Sociotechnical Change: An Unstable Equilibrium Perspective

Louise Harder Fischer

Proposes an unstable equilibrium perspective to explain how sociotechnical systems change under technological pressure.

ITU Copenhagen Read paper →
2023

Artificial Intelligence and Digital Work: The Sociotechnical Reversal

Louise Harder Fischer, Nico Wunderlich, Richard Baskerville

Examines how AI reshapes the social aspects of work, proposing a recalibrated sociotechnical approach.

HICSS 2023 Read paper →
Education

Courses & Training

Get Started

Download STAIR Guides

Team

The people behind STAIR

Louise Harder Fischer
LF

Louise Harder Fischer

PhD, Associate Professor

IT University of Copenhagen

Specializes in digital transformation, AI adoption, and organizational change through a sociotechnical lens.

LinkedIn →
Sanna Marttila
SM

Sanna Marttila

Associate Professor of Digital Innovation

IT University of Copenhagen

Two decades of experience in IT design, digitalization, and collaborative approaches to innovation and digital futures.

LinkedIn →
Martin Lassen-Vernal
ML

Martin Lassen-Vernal

Head of Communications

City of Copenhagen

Specializes in how AI transforms workplaces, leadership practices, and employee well-being.

LinkedIn →
Morten Christian Andersen
MA

Morten Christian Andersen

Web Consultant & AI STAIR-master

City of Copenhagen

14+ years in public digital communication. Facilitates AI learning and reflection through STAIR.

LinkedIn →
Contact

Get in touch

Interested in STAIR for your organization? Have questions about the method, the research, or upcoming courses?

contact@stairmethod.org