Briefing
The situation
Six months ago the company introduced an agent-based HR assistant that continuously analyses delivery performance, sick leave and 360° feedback. This morning an automatically generated report landed in your inbox with a clear recommendation: three employees should be offered exit packages. The reasoning references 14 data sources and reports a confidence level of 0.87. As a leadership team, you need to decide how to handle this — both the specific case and the principle behind it.
Discussion
Questions to wrestle with
Who owns the decision?
1. Who in the room carries formal accountability if the recommendation is followed — and if it is not?
2. What does it take for us to be able to say 'we made the decision', not 'the AI made the decision'?
3. What information needs to be documented so an outside party could review the process?
The quality of the underlying data
1. What questions must we ask the model before we even discuss the case?
2. How do we tell the difference between 'AI is right' and 'AI confirms what we already believed'?
3. Where in the data would we need human context the model could not possibly have?
Framework · RACI for AI recommendations
To lean on
- Responsible: Who carries out the action if the recommendation is followed?
- Accountable: Who answers — internally and externally — for the outcome?
- Consulted: Who must be heard before the decision (HR, legal, employee reps, the affected manager)?
- Informed: Who should be informed about the decision and how the data was produced?
Decision
Possible paths
- A. Reject the recommendation entirely and pause the model's HR usage for now.
- B. Use the recommendation as one of several inputs in a human-led process.
- C. Follow the recommendation after a formal review by HR, legal and the affected manager.
- D. Pause the decision and invest in better data quality and transparency first.
Triggers
Drop in when the discussion stalls
- One of the named employees has just returned from parental leave.
- The model has not been trained on the most recent reorganisation.
- The press has started asking questions about algorithmic decisions in HR.
For the facilitator
Tips to get more out of it
- Have participants write down their own first impressions for 2 minutes before discussing — it reduces group pressure.
- Explicitly ask for counter-arguments to the dominant view in the room.
- Use the 'triggers' as add-on cards when the discussion stalls — drop them in one at a time.
Reflection
To take with you
- "What principle do we want our organisation to follow the next time an AI agent suggests a personnel action?"
- "What would we have wanted to tell ourselves six months ago, before the agent was introduced?"