Briefing
The situation
Erik leads an analytics team. Half of them have embraced AI tools and now produce three times as much as before. The other half is struggling — not because they don't want to, but because they can't quite see how the tools fit into how they work. The result: a growing gap. The fast ones are getting frustrated with the slow ones. The slow ones feel increasingly insecure. Erik has to allocate work fairly, prevent the productive ones from being overloaded, and at the same time clarify what 'performance' even means now. As a leadership team, you need to back Erik — and decide which principle should apply across the whole organisation.
Discussion
Questions to wrestle with
What is performance now?
- 1. If one person delivers three times as much with AI and another delivers the same as last year — which of them has performed better?
- 2. Which capabilities become more valuable in an AI-accelerated team — and are they visible in our current evaluation?
- 3. How do we tell the difference between 'produces a lot' and 'produces something that's actually used'?
The gap in the team
- 1. Is Erik's job to close the gap, manage it, or let it grow — and what's the consequence of each choice?
- 2. What happens to the team's culture if the three fastest consistently get the most interesting assignments?
- 3. What would a reasonable six months of structured learning look like for the five slower ones — and who pays for the time?
Framework · AI performance redefined
To lean on
Output
Volume delivered — important but no longer enough; easy to inflate with AI-generated material.
Quality
How well the output holds up under scrutiny, decisions and time — where judgement shows.
Learning
How quickly the person improves — their own development, new methods, willingness to experiment.
Spread
The degree to which the person lifts others — sharing patterns, pairing up, writing down what works.
Decision
Possible paths
- A. Introduce 'AI mentorship' — each productive team member is paired with a colleague, and part of that time counts as delivery.
- B. Adjust the evaluation model so Spread and Learning weigh as much as Output, starting with the next salary review.
- C. Offer structured learning time (4 hours/week for three months) to those who've fallen behind, with explicit protection from delivery pressure.
- D. Accept the gap, redistribute tasks by capability, and recruit more AI-fluent profiles.
Triggers
Drop in when the discussion stalls
- ▸ One of the fast ones has started hinting that 'some people in the team aren't contributing at the same level anymore'.
- ▸ One of the slow ones has applied for a job at a competitor.
- ▸ Salary review is in two months, and the bonus model rewards delivered volume.
For the facilitator
Tips to get more out of it
- Ask participants to rank the four dimensions (Output/Quality/Learning/Spread) by how they're actually measured today — Output almost always wins by a mile. That's the insight.
- Surface that 'fair' and 'equal' are not the same thing — ask the group to articulate which of the two they actually want.
Reflection
To take with you
- "Does our current leadership development prepare managers to lead teams where people use AI to very different degrees? What's missing?"
- "What signal do we want our salary review to send about what's valued in an AI-accelerated organisation?"