Many of us work in roles where our performance cannot be represented by an objective measure that neatly sums us up, and yet many of our performance management systems demand we do just that.
A lot of what we do has a fuzzy impact, with ambiguous, barely detectable results – results that ripple over longer timelines than the yearly appraisal cycle. In some cases our impact might be negative in the short term because we want to make a lasting change in the long term: change curves go down before they go up.
This is especially true when learning new skills, because there are phases of incompetence you have to struggle through if you want to gain competence at something new.
Even without these complications it is difficult to judge the full range and quality of what our colleagues do. We might see someone dealing with a difficult customer so skillfully that they make it look easy, so we under-value the legwork they put in to reach that level of performance and mentally tot it up as routine. Then we move on, only catching glimpses of other people because we’re busy with so much else. Those glimpses are more memorable if they ping a bias (or heuristic) such as confirmation bias (it confirms what we thought anyway) or the affect heuristic (we have an emotional response to it) – but these superficial glimpses don’t stop us jumping to subjective judgements and thinking we’re seeing the objective truth about someone’s performance.
If that so-called “performance truth” is decided by people “thinking fast” (Daniel Kahneman), and confirmed by hierarchy rather than expertise, then we’re in trouble.
To quote Ryan Holiday (from The Daily Stoic):
because our senses are often wrong, our emotions overly alarmed, our projections overly optimistic, we’re better off not rushing into conclusions about anything
And this doesn’t change when you get promoted.
To quote John Dickson Carr (from The Emperor’s Snuff-Box):
‘Oh, no,’ said Toby, shaking his head like one who has ineffable knowledge above the gods …
I know a few Toby-leaders who confuse their seniority for an ability to directly perceive objective reality, believing they alone can see what’s true and what’s not. Senior leaders may have the skills and experience to make better decisions once they have the information in front of them (maybe, and if they do, it’s not because of rank!), but unless they’ve made a conscious effort to improve their “thinking slow” skills (Kahneman again), they’re just as susceptible to bias and wonky assumptions as everyone else.
This point is crucial.
We are all limited by our biases and our ability to process sensory data. We are all susceptible to believing our own perceptions are better than everyone else’s, and to being over-confident in our ability to perceive reality. Therefore we all need to be humble enough to admit that our perceptions and instincts are biased and incomplete, yet confident enough to build on them by thinking slowly and deeply to reach a better level of understanding.
This is not to ignore instinct entirely (what Kahneman calls “system one thinking” or “thinking fast”) but let’s demystify it: it’s not magical, it’s not the divine spirit speaking to you, it’s your brain reading the signs and jumping to risk-averse conclusions based on innate instincts and life experience.
This is helpful at a survival level. If you’re mucking about on the savanna and you spot a lion, it’s best to assume the worst and get out of the way. You may be missing the chance to pet a friendly lion, but it’s much more likely you’re saving your skin. Unfortunately the same life-saving mechanisms mean you’re likely to assume similarly negative intentions when you spot an unknown person who doesn’t appear to think the same way you do. Your amygdala might just as quickly flip you into fight-or-flight before you even know what’s going on. Again this makes sense if you’re trying to survive on the savanna where these behaviours evolved, but it works less well in the office.
But they’re not entirely wrong, and in a context where we have a lot of experience instincts are very useful. Driving a car is an example Kahneman uses a lot, and your instinct to brake before you’ve even consciously registered the danger is a great example of System 1 doing its job very well. Instinct is not just a load of biased background noise; it is information – but it’s incomplete, error-strewn information and should be treated as such.
The problem I’m talking about here is how we overuse it when judging the performance of others in the workplace.
How is performance-truth decided in your organisation?
Is it through fast (“System 1”) thinking by seniors who just know who’s rocking and who’s rubbish?
Do they jump to conclusions based on proxies like good communication skills, or irrelevant measures like charm? Do they really examine the quality of thinking behind your intervention, or is everything welcomed with a nod but not really engaged with? Is there a bias to get things done today – by COP* – or is there a sensible discussion about where the balance between “quick” and “good” sits for each task? Are you judged by your internal “visibility” rather than your impact on customers? Are you judged on short-term deliverables within the appraisal cycle that fit the SMART objective acronym, or is there a deeper systems-thinking approach that understands more subtle work?
If it is true that you get what you measure, and that recognition is a key part of motivation, then having highly-skilled humble “slow thinkers” who can understand systems and think long-term is essential to building a high-performing highly-motivated team.
Some references and notes
Link to a useful article on Kahneman and Tversky’s research on heuristics: Why Do We Take Mental Shortcuts: Heuristics Explained on The Decision Lab website
* COP, usually pronounced “cop” is a cricketing term (“close of play”) and is used in many British workplaces instead of the more usual COB (“close of business”).