
AI is changing how we work, reflect, and decide.
Let’s make sure trust doesn’t get lost in the process.
Responsible AI isn’t just about which tools we use; it’s about how those tools shape trust, understanding, and human agency.
This is for organisations adopting AI but unsure how it affects:
• Psychological safety
• Reflection and decision-making
• Power and accountability
• Human-to-human communication and insight
We help you ask better questions and avoid misplaced certainty.
-
AI Culture & Clarity Workshops
Format: 90-minute facilitated workshops (remote or in person)
Audience: Strategy, people, and leadership teams (6–15 participants)
Purpose: Practical, psychologically grounded workshops exploring the human impact of AI in your organisation.
Use cases: Leadership misalignment, ethical rollout, trust concerns, unclear cultural impact
Pricing: €1,250 per workshop (excluding VAT) or €4,500 for all four.
Custom workshops available — design and prep billed separately.
Workshops available:
• Psychological Safety in the Age of AI
• Reflection vs Automation
• Power, Bias & Responsibility
• Human Communication in AI Contexts
-
AI & Trust Diagnostic
Format: Insight sprint combining interviews, behavioural analysis, and strategic review
Audience: Organisations adopting AI tools in people-facing or decision-heavy areas
Purpose: To surface hidden cultural risks introduced by AI, including trust gaps, power shifts, decision confusion, and communication breakdowns
Use cases: HR tech rollouts, leadership AI use, feedback tooling, team disconnection
Includes:
• Stakeholder interviews
• Organisational risk mapping
• Leadership briefing with recommended actions
All diagnostics are scoped in collaboration with you, based on your context, questions, and what’s at stake.
-
Responsible AI Advisory Support
Format: Ongoing strategic partnership, delivered through 1:1 calls, team sessions, and behind-the-scenes framing support
Audience: Internal leads responsible for AI-related change (HR, Strategy, Ops, Learning, DEIB)
Purpose: To support internal decision-makers navigating the behavioural, cultural, and relational implications of AI adoption
Use cases: Messaging complexity, ethical framing, AI fatigue, internal misalignment
This work is grounded in behavioural insight, not hype. It’s designed to help you lead responsibly, communicate clearly, and stay human in systems that are changing fast.
Advisory work is scoped around your reality, whether that means one strategic sprint or an ongoing partnership.
-
Speaking: The Illusion of Insight
Format: 60–90 minute keynote, internal talk, or facilitated leadership session
Audience: Executive teams, strategy groups, learning communities, public events
Purpose: To explore how AI can mimic understanding, and what real reflection, care, and decision-making still require of us
Use cases: Leadership offsites, ethics roundtables, strategy resets, organisational learning events
This talk opens space for critical reflection at a moment when many are moving fast without asking the right questions.
Available as a keynote or interactive session, tailored to your team’s focus and level of familiarity with AI. Adapted formats are available for different audiences and settings.
-
How We Work
We don’t sell certainty.
We help you create the conditions for clarity, trust, and better questions, so that your people, processes, and culture don’t get lost in the noise of AI adoption.
Every offer on this page can be tailored to your organisational context.
If you’re navigating new technology and want to stay human as you do it, we should talk.