Responsibility & accountability

The AI agent recommends letting three employees go

Your HR agent has analysed performance data and proposes ending three employment contracts. The reasoning looks sound.

Time: 45 min

Level: Intermediate

Roles: 4

Framework: RACI for AI recommendations


Before you begin

3 minutes of prep for the facilitator

Materials

  • A projector or large screen for presentation mode
  • A notebook for each participant
  • A facilitator (can be someone in the group)
  • Water, coffee — and silenced phones

The room

  • Sit around a table, not theatre-style — the conversation should feel horizontal.
  • Close the door. This is not a meeting to drift in and out of.
  • Decide who will document the group's decisions and reasoning.

Say as an introduction

"There are no right answers in this scenario — only clearer and less clear reasoning. The value comes from where you actually disagree, not from reaching consensus."

Briefing

The situation

Six months ago the company introduced an agent-based HR assistant that continuously analyses delivery, sick leave and 360 feedback. This morning an automatically generated report landed in your inbox with a clear recommendation: three employees should be offered exit packages. The reasoning references 14 data sources and a confidence level of 0.87. As a leadership team, you need to decide how to handle this — both the specific case and the principle behind it.

Discussion

Questions to wrestle with

Who owns the decision?

  1. Who in the room carries formal accountability if the recommendation is followed — and if it is not?
  2. What does it take for us to be able to say 'we made the decision', not 'the AI made the decision'?
  3. What information needs to be documented so an outside party could review the process?

The quality of the underlying data

  1. What questions must we ask the model before we even discuss the case?
  2. How do we tell the difference between 'AI is right' and 'AI confirms what we already believed'?
  3. Where in the data would we need human context the model could not possibly have?

Framework · RACI for AI recommendations

To lean on

Responsible

Who carries out the action if the recommendation is followed?

Accountable

Who answers — internally and externally — for the outcome?

Consulted

Who must be heard before the decision (HR, legal, employee reps, the affected manager)?

Informed

Who should be informed about the decision and how the data was produced?

Decision

Possible paths

  A. Reject the recommendation entirely and pause the model's HR usage for now.
  B. Use the recommendation as one of several inputs in a human-led process.
  C. Follow the recommendation after a formal review by HR, legal and the affected manager.
  D. Pause the decision and invest in better data quality and transparency first.

Triggers

Drop in when the discussion stalls

  • One of the named employees has just returned from parental leave.
  • The model has not been trained on the most recent reorganisation.
  • The press has started asking questions about algorithmic decisions in HR.

For the facilitator

Tips to get more out of it

  • Have participants write down their own first impression for 2 minutes before discussing — it reduces group pressure.
  • Explicitly ask for counter-arguments to the dominant view in the room.
  • Use the 'triggers' as add-on cards when the discussion stalls — drop them in one at a time.

Reflection

To take with you

  • "What principle do we want our organisation to follow the next time an AI agent suggests a personnel action?"
  • "What would we have wanted to tell ourselves six months ago, before the agent was introduced?"