
Testimonials & Media
Read testimonials, opinion pieces, podcasts, and more about the socio-technical approach and the #STAIRmethod.
Blogs, Media & Mentions
Blog
A negotiated – not just automated – future
How do we ensure that AI is a driver of meaningful change that matters – to people and organisations? How do we turn technology into a conversation for everyone and avoid dividing into ayes and nays? If artificial intelligence is our common future, how do we negotiate this future?
This negotiation – this conversation – was at the center of this splendid episode of The Only Constant, in which I had the pleasure of hosting two brilliant guests. Martin and Louise have co-developed STAIR – Socio-Technical AI Reflections – a method for ensuring that AI drives meaning and well-being. This episode goes far and wide while retaining a laser focus on human value and honest opinions – all in good company!
As always, I am very happy that we once again managed to make an episode that explores more than it explains 💡
Listen to the episode here:
Apple: The Only Constant on Apple Podcasts
Spotify: The Only Constant | Podcast on Spotify
Spreaker: The Only Constant (spreaker.com)
What made this episode special to me?
Martin was one of the first people to reach out to me after having listened to some of the early episodes of The Only Constant. He was curious, explorative and wanted to engage in some of the discussions not found everywhere. We quickly formed a good understanding – and even a friendship – and have been discussing AI ever since. Then, during 2024, he developed STAIR together with Louise Harder Fischer from the IT University in Copenhagen, and we decided it would make a lot of sense to do a special episode with both of them to explore why STAIR and the socio-technical perspective were not just of value to the AI hype discussion, but critical for its success.
Perhaps it is not clear directly from the episode recording, but we simply had so much fun recording it. This was not only a great conversation, but also a meeting between people and minds that led to a great episode of The Only Constant. I am happy to say that this might be an episode as explorative as ever – but also one as good-natured, humorous and engaging as ever. Passion, energy, relevance – all at once.
My key post-recording deliberations are:
- How do we ensure that the sheer hunt for productivity does not compromise human goals and well-being in an age where systems apparently can do anything? Business cases suddenly have to include measures for human factors now that machines can recite poetry or regurgitate caring feelings…
- How can organizations not only train employees in AI usage but also ensure they develop deeper, transferable competences for a continually evolving tech landscape? It was never really about LLMs – but about technology, 10x innovation and rapid change.
- How can we integrate generative AI in ways that protect and enhance human creativity, autonomy, and shared purpose – not just replace them? Ensuring that people continue to find genuine meaning and connection in their work is as important as ever – which is why everyone will eventually need to take a good look at STAIR.
Martin and Louise deliver so many great insights in this splendid episode, and I hope everyone enjoys listening as much as we enjoyed creating it!
Opinion Piece
Implementing Generative AI Calls For A New Approach. Here’s How To Structure It
By Louise Harder Fischer, PhD and Associate Professor at the IT University of Copenhagen, and Martin Lassen-Vernal, Head of Communications at the City of Copenhagen and External Lecturer at the IT University of Copenhagen. Published in DIGITech/Ingeniøren, April 2025.
For years, digital transformation has relied on agile setups to modernize and streamline processes. It is no surprise that many organizations now treat artificial intelligence, especially generative AI, as just another step in the same digital journey.
We fall back on the familiar tools and expectations. AI may be new and complex, but in practice, we still see it as a digital challenge to be solved with standard methods.
This works to some extent. But experience and research suggest that generative AI is something more. It is not just another IT upgrade. It represents a deeper and more complex shift – a human, socio-technical transformation.
Traditional, goal-oriented models, often top-down and agile, assume that change can be managed through planning and iteration. But generative AI introduces high levels of unpredictability. Organizations are simply not used to this degree of complexity.
AI changes not only how we work but how we organize knowledge and collaborate. Unlike many past technologies that restrict, AI enables. It expands options. It gives users new powers to adapt solutions to their own needs. This creates new possibilities, but also new risks.
The benefit is that creative and efficient uses we cannot yet imagine will emerge. The risk is that many users are not equipped to assess whether their AI use is actually helpful or responsible.
That is why organizations must support what we call AI literacy. Users need more than access to tools. They need reflection, guidance and structured learning. This is not just a strategic idea. It is a necessity.
The STAIR method – Socio-Technical AI Reflection – is one way to support this shift. It builds a practical framework for learning, discussion, and critical thinking. Through reflection guides, local facilitation and visual tools, STAIR helps organizations integrate AI in a way that is meaningful and responsible.
It is based on principles such as:
- AI should add real value, not automate for automation’s sake
- Employees must retain autonomy and professional judgment
- Organizations should support experimentation and professional growth
- Reflection and ethics must remain part of the process
STAIR does not assume that AI is good or bad. It creates space to explore what works and what does not, depending on context. It can support learning, change management or risk assessment.
The key insight is this: AI is not a static tool. Implementation should not rely only on predefined plans. Instead, it should be flexible, reflective and user-centered.
To make that happen, organizations need to do three things:
1. Create meaning through shared frameworks
Guidelines and rules are still essential. But they must be developed together with the users. AI is used in highly specific professional contexts. What matters to a legal advisor is different from what matters to a communicator. Local standards, values and legitimacy must be considered.
2. Make learning a goal in itself
Generative AI creates open-ended possibilities. It requires organizations to support testing, failure and exploration. We need to move from control to curiosity. Governance models must evolve. Some responsibility must shift to the user, who needs better intuition and judgment. Otherwise, innovation will stall.
3. Focus on real value
Value is not static. It develops through use. Organizations must ask what kind of value AI should bring. Is it better professional quality? Greater productivity? More job satisfaction? Or all of the above?
The answer will change over time. That is why we must keep value creation, meaning and participation at the heart of AI integration.
The above is an abridged version of the original Danish opinion piece, translated using ChatGPT. Find the published opinion piece here.

Podcasts
Learn about the STAIR Method in this episode of the Tech & AI podcast “The Only Constant” with host Lasse Rindom. The podcast is in English.
A short introduction to the STAIR Method in this episode of the Tech & AI podcast “AI DENMARK”, with host Anders Høeg Nissen. The episode is in Danish.
Testimonials
Rigshospitalet: Department of Otorhinolaryngology, Head and Neck Surgery & Audiology (May 2025)
“We have shifted from thinking about technology — and speaking only about research-related AI — to also recognizing its value in operational optimization. This has a broader impact on our department, bringing into play the various professional disciplines across the entire organization.
Today, we think much more holistically. The socio-technical perspective has been an eye-opener: it ensures that different professional groups feel seen, heard, and involved. They now look to changes brought by new technology with anticipation and curiosity. We have learned that it’s not only about algorithms and data. It’s also about people, meaning, and value — all of which tie into our core health-related mission. For example, medical secretaries now experience an enhanced AI imagination — they think in terms of AI solutions and professional quality. We have moved from a narrow focus on quantitative gains (such as minutes saved) to an appreciation for qualitative improvements — how tasks are performed better, and how technology can support what truly matters.
The team feels inspired to innovate and united in driving these changes. When the conversation is conducted in this way — e.g., using the STAIR method — a completely different attitude and enthusiasm for technological change emerge.”
– Rikke Schriver Nielsen, Strategy and Innovation Officer, Rigshospitalet’s Hearing and Balance Center, Dept. of Otorhinolaryngology, Head and Neck Surgery & Audiology.
University College Copenhagen (Københavns Professionshøjskole, KP) (May 2025)
“It was exactly what we needed. It provided clarity, structure, and direction that allows us to move forward with virtually no resistance – partly because the socio-technical perspective aligns very well with our organization and the way we see ourselves and aspire to be in the world.”
– Thorsten Brønholt, Chief Consultant, Innovation & Service