Judging the accuracy of witness evidence

I’m taking a detour in my reading at present into the world of memory. There is a considerable amount of literature out there in relation to police investigations, but not so much for workplace investigations.

One book that I’ve found particularly user-friendly is Daniel Reisberg’s The Science of Perception and Memory: A Pragmatic Guide for the Justice System. (The link is not to Amazon, but to Better World Books, which is a “greener” option when buying books.)

The book has made me pause and think about the need for investigators to have a checklist to assess the potential for memory error in witness testimony. If such a checklist existed, it would contain the following:

  1. Whether neutral questions were asked by the interviewer. What may surprise you is that asking questions such as “what were you paying attention to during this event?” will actually increase memory error.
  2. The extent of the witness’s spontaneity in providing their memories. The fewer questions we ask as investigators, the less we will potentially contaminate their memories.
  3. The extent to which the event was clearly observed (e.g. viewing angle, amount of light, the complexity of the situation, the amount of attention that was being paid).
  4. Contamination (otherwise known as source confusion): our memories may have been influenced after talking to others about the event – particularly if it’s someone we trust (“intrusion error”); or we may have filled the “gaps” in our memory, unconsciously.
  5. The urgency with which the witness is asked to recall a memory: the more pressure we put on a witness, the more they will “fill the gaps” with erroneous memories. This has a technical term: imagination inflation.
  6. Motivation: the drivers behind the witness’s desire for their recollections to be heard (which are not always noble).
  7. Plausibility: whether the recollection is believable or not (think: being abducted by aliens).
  8. Retention and retrieval: it is harder to recall memories over time, and our memories fade quickly at first; we may also fail to recall a memory because we’ve not been given the correct “retrieval cues”.
  9. After-the-event memory: when someone recalls an event that they didn’t think was significant at the time, but they do now. The chances are that they didn’t lay down a complete memory, and therefore they will have unconsciously “filled the gaps” as they sought to retrieve the partial memory.
  10. Claiming a flash-bulb memory: when a witness claims that the memory is as clear as the day that it occurred… it’s not.
  11. The level of stress that the witness experienced at the time (or is experiencing now): stress has a negative impact on memory.

Lack of consistency in recalling the details of an event does not mean that a witness is not credible. Research shows that the other details a witness recalls may still be accurate. Equally, just because someone seems super-confident in their recollections, it doesn’t mean that their memories are more accurate than those of someone feeling less confident.

It should also be noted that many of the above (but not number 6) are “honest memory errors”. As such, witnesses should not be berated for their mistakes in recalling an event accurately: human beings are unable to spot what is a “real memory” and what part of the memory has been unconsciously contaminated. And witnesses won’t recall every part of an event – they will only recall what was relevant to them at the time.

I should quickly mention that Reisberg makes a disclaimer: despite the concerns around memory error, memories are more often accurate than inaccurate. So, maybe I’m over-thinking the need for a checklist?

Removing the personal element of investigations

Last week I wrote about how much I enjoy Malcolm Gladwell’s books. I’m currently half-way through “Talking to Strangers”, and frankly it’s making me think that I will never trust my judgement or anyone else’s again. I’m hoping that by the end Gladwell puts some positive spin on the topic or provides an uplifting, aspirational conclusion.

The book focuses on how meeting someone in person can influence your thoughts and perceptions about them, in a favourable way. For example, in 1938, Chamberlain met Hitler, after which he believed that Hitler was not intent on starting a second world war (no spoilers: Chamberlain was wrong). Gladwell details many fascinating (and at times complex) stories of how people have been deceived, because we like to see the good in others.

Gladwell refers back to a story he told in a previous book (I’m not sure which one… maybe Blink), about “blind” orchestra auditions reducing unconscious gender bias. He then talks about a piece of research in the US around “blind” arraignment* decisions which caught my attention. Normally, a judge will review the documentation in front of them – the details of the offence and the mitigation/evidence surrounding it – after which the judge will meet the alleged criminal. Most likely a few questions will be asked of the alleged criminal, from which the judge will make a decision as to whether the individual should be formally charged and what the bail conditions will be.

A computer program with a complex algorithm was developed to analyse the documentation and determine what the decision should be around formal charges and bail conditions (essentially the judge’s job). Computer-driven decisions were then compared with those made in “real life” by judges. It is important to note that the “success” of each decision was determined by subsequent outcomes – did the alleged criminal adhere to their bail conditions, etc.

The analysis demonstrated that the computer-driven decisions were considered to be more “accurate” than those made by judges – who had actually met the alleged criminal. This led researchers to believe that judges are unconsciously influenced when they meet and speak to alleged criminals, and this negatively impacts their ability to make sound decisions.

I think that this research is fascinating: as an investigator I frequently inform clients that I will not make a final conclusion on the evidence that has come to light until I have taken time to write it down and presented it in a logical manner. I have learnt from experience that my “gut” isn’t always right, and it’s only when I’ve spent time with the evidence collected that I feel able to draw balanced and unbiased conclusions.

It then led me to think about whether investigations should be managed in a different way, to reduce this bias further. My proposal is as follows: each case would need two investigators. One investigator would be responsible for liaising with and interviewing witnesses. The other investigator would be responsible for undertaking the desktop analysis of all the evidence presented. There would need to be collaboration between the two: to ensure that the right questions are asked (and in the right way, obviously), and that witnesses are followed up when gaps in the evidence have been identified. From an external perspective, I don’t believe that this methodology would be any more expensive than the current methods we use in my organisation.

However, a drawback might be that the investigator is unable to provide advice on the more subtle aspects of a case. For example, how easy is it to spot a lack of emotional regulation when this has not been directly witnessed (by investigator 2)? Informing a manager that one of their team would benefit from support around emotional regulation is, in my opinion, helpful advice. The advice doesn’t just address the issue in hand, but also helps to prevent any future incidents – which is where we, as external investigators, can add value.

I’m hoping that I come across more research about this in my reading which will shed more light on this phenomenon. I’ll keep you posted.

_____________________________________________________________________________________________________________________________

*an arraignment is when the court formally charges the alleged criminal and sets bail conditions.

Hit vs Smash. The impact of leading questions on memory recollection

Through my reading about memory in investigations, I’ve recently come across the same piece of research twice: undertaken by Loftus and Palmer in 1974, it shows how the phrasing of a question can affect a witness’s recollection.

In brief: witnesses were asked to watch a short film clip of two cars colliding. Afterwards, the witnesses were asked to recall their memories, in detail, of what they had watched. One group was asked “How fast were the cars going when they hit each other?” The other group was asked “How fast were the cars going when they smashed into each other?”

By changing just one word, the answers of the two groups differed: Group A (the “hit” team) suggested a slower speed than Group B (the “smash” team). The researchers concluded that the use of a subjective word, such as “smash”, had a demonstrable (and potentially erroneous) impact on witnesses’ recollections.

None of the witnesses were conscious that the way the question had been asked had influenced their answer. We are all triggered unconsciously by language. As investigators it is important that we are not just neutral in our mindset, but that we also ensure neutrality through our language when we interview witnesses.

Equally, as investigators we can also be triggered by language. Investigators should pick up on emotive or subjective words and question the witness further: what do they mean? How can they qualify that statement further? Sometimes, emotive language is useful witness testimony – and you wouldn’t expect a witness to be neutral throughout their interview.

From time to time, I suspect that we, as investigators, will make assumptions when we hear emotive language – that we know what a witness means because we have also observed the same. We should still stop and ask. Otherwise, we could be accused of overlaying the witness testimony with our own unconscious bias.

Meditation & decision making

I’m reading David Robson’s book “The Intelligence Trap”. It’s like the books that Malcolm Gladwell writes (and for the record, I enjoy Gladwell’s books) but with a bit more academic theory thrown in.

I’ve picked up loads of interesting points about decision making, leading me to think about how this influences the management of workplace investigations and the outcomes of disciplinaries or grievances. But the one that has struck me most is around emotional regulation.

Robson’s point is that you are more likely to make better decisions if you are skilled at recognising your own emotions and taking appropriate strategies to manage them. This is beyond emotional intelligence – it’s about sophisticated emotional regulation.

There are a number of ways that Robson suggests you build up your ability to self-regulate your emotions – but the key one is meditation. He advises spending 5–10 minutes a day in meditation (or mindfulness); within a matter of weeks you will find that you are able to step back when you feel triggered, or take a deep breath and think about your response instead of launching into an argument with a challenging colleague.

This leads me to think: if we built in a 5-minute mindfulness moment before making a decision as to whether someone should be subject to formal proceedings or not, would we make better decisions? The same goes for the Chair of a disciplinary or grievance hearing. If the proceedings started with 5 minutes of mindfulness, how impactful would this be? Would all parties listen to each other better? Would more insightful questions be asked? Would more appropriate decisions be made? The research seems to suggest that they would.

I’m a big fan of mindfulness. I practise it every day before I go to sleep. In an ideal world, I’d like to implement this in an organisation and see what happens. But whilst I’m trying to persuade one of my clients to do this, I’m going to try and ensure that I spend 5–10 minutes in mindfulness before I attend any challenging meetings and see what impact it has for me personally.

George Floyd: how far have we come?

This week the news has been dominated by the outcome of the George Floyd court case in America and I’ve been reading the debates around this topic (what does it mean? what does it matter? what happens now?). Without doubt, it is a significant moment in US history that the abusive behaviour of a white policeman towards a non-white individual has been examined and found wanting in a legal environment.

However, this phenomenon (of abusive white police officers) won’t disappear overnight due to the outcome of one court case. The issue is more complex than that, and there is still so much more work to do both over in the States and here at home in the UK.

What is more depressing is that it’s taken so long just to get to this point in history. We’ve been talking about micro-aggressions – a less obvious and more subtle form of discrimination – since the 1970s (or maybe earlier; I just haven’t found that reference yet!).

I came across an article by Mary P. Rowe, written in 1990: Barriers to Equality: The Power of Subtle Discrimination to Maintain Unequal Opportunity. In her article, Rowe has no academic studies to rely on to prove her point: all she has are her observations as a consultant working with organisations. The examples she describes are familiar, although I’d like to think that some of them no longer happen (e.g. showing porn during the evening segment of a conference).

However, Rowe’s observations have been validated through research (such as Sue’s – which I have written about before):

  • That the unconscious bias of a manager can lead to their behaviour being less than appropriate or professional. Or as Rowe says: it “predisposes a manager to even worse behavior”.
  • That “the senior person has little idea what the “invisible people” actually contribute”.
  • The “negative Pygmalion” effect: if managers don’t expect a particular employee to perform, it is likely that they will live up to that expectation – and not perform to the standard that they are truly capable of.
  • That the employee who is subject to unconscious bias will spend time and energy trying to understand and deal with the personal impact: it “takes a lot of energy to deal with an environment perceived as hostile, or it takes lots of energy to maintain one’s level of denial of difficulties”.
  • That it is a challenge to eradicate “micro-aggressions”, as the “slights are culturally so “normal” that they simply are not noticed” (by the aggressor).

Rowe talks about the “intermittent, unpredictable, “negative reinforcement”” of micro-aggressions, which leaves the receiver feeling powerless, as they cannot change their gender or race. This powerlessness, coupled with a sense of uncertainty, can lead to “misplaced” anger – which could reinforce the stereotypes held by the manager.

There is one observation within Rowe’s paper where it is clear that we have moved on as a society. In 1990, Rowe talks about how individuals often look for, but are unable to determine, the intent of the aggressor. We now know that the intent is not something that is always conscious. The phrase “unconscious bias” is what I would consider to be common parlance in today’s workplace.

We’ve also moved on in terms of our understanding of what constitutes power. Rowe comments that “it is generally the less powerful who have most difficulties in coping with inequities, since less powerful people by definition have less influence“. We now understand, through the work of Sue, that the less powerful are the marginalised group, and it is hard for the marginalised to challenge the power of the majority.

And more importantly, in 1990, Rowe discussed how those who feel aggrieved found it hard to seek support from others, even from their own background, because the responses they received would often perpetuate the micro-aggression. That has definitely changed. Today we see how powerful informal support networks are. Would we have seen the same result from the George Floyd trial without the Black Lives Matter movement?

Interestingly, Rowe talks about the disproportionate means by which those who feel aggrieved can take action – by means of raising grievances or making a legal complaint. Nowadays, we try to encourage informal resolution, looking at restorative justice, mediation, facilitated discussions etc. But those mechanisms – in the form of grievances and Employment Tribunals – still exist.

Rowe’s solution, back in 1990, was education: raising awareness about micro-aggressions and how to identify and respond to allegations of discrimination, encouraging employee networks, and developing mentorship schemes for all staff (not just those who are perceived to be talented). It is reassuring at least that many medium-to-large organisations have taken these suggestions and put them into practice. However, what is less reassuring is that three-fifths of employees work for small businesses, which do not have the same level of access to training, networks and mentoring arrangements.

Most importantly, Rowe suggests that we should talk to each other – to recognise and value our differences, especially with someone who isn’t like ourselves. We can all do this regardless of who we work for. We’ve started this journey, but there’s more work to be done…

Perspicacity

I’ve recently been reading the work of Derald Wing Sue – an exciting and prestigious American academic. Sue has a way of writing that brings research to life, putting it into context in layman’s terms. Today he has introduced a new word to me: perspicacity.

The definition, coined by Sternberg in 1990, is:

the power of accurate perception; allows the person to see beyond the obvious, to read between the lines, to not be easily fooled, and to intuitively understand the motives, intent, and meaning of others.

Many employees possess perspicacity. It occurs where there is a power imbalance – or, in layman’s terms, in an employee-manager relationship. You know you are demonstrating perspicacity when you can anticipate your boss’s decisions, or you can read their mind. You can do this because you’ve spent time, consciously or unconsciously, studying your boss. When you have perspicacity, you will consider your relationship with your manager to be working well, as you feel the synergy between your perceptions and reality.

In his research, Sue highlights that people of colour have a heightened sense of perspicacity, particularly when growing up and living in a white (read: Western) culture. With perspicacity, an individual is more attuned to unconscious biases. This can also lead to a greater sense of mistrust.

I think it’s interesting that there is a word for this phenomenon, as there doesn’t appear to be one for the opposite, also described by Sue: those in power “don’t need to understand disempowered groups to survive or do well“. As such, Sue asserts that the reality of the disempowered (or those lower down the ranks in employment terms) is often the most accurate.

In workplace investigations, how often do we listen to the employee, but prefer “the version of events” provided by the manager, because we consider them to be more knowledgeable (a perception earned by status – or is that power?)? We are taught to be objective in investigations, but when we are asked to determine matters “on the balance of probabilities”, after weighing the credibility of witness evidence, to what extent are we also considering our unconscious bias? Or the power imbalance, and how that affects perceptions?

In the NHS, there are significantly more people of colour in the lower pay bands than in the higher bands, and I don’t believe that there has been any significant work on recognising their perspicacity and its benefits. My suggestion is that NHS Trusts should ensure, through their policies and procedures (and the associated training), that their managers understand how the power balance can impact on decisions, the existence of perspicacity, and how to appropriately weight evidence. This could result in less discriminatory behaviour.

Unpeeling the Diversity Onion

I like a good graphic, and the Diversity Onion particularly appeals to me (although I haven’t been able to find out who to attribute it to – please let me know!).

I’ve seen different versions, with the outer layer “Cultural” being omitted. But the premise is that people are more comfortable with the outer layers, particularly when it comes to understanding others.

When undertaking workplace investigations, the organisational layer will be the internal investigator’s comfort zone. These elements (role, union affiliation, department etc) are all facts and cannot be disputed.


However, an investigator may start to feel uncomfortable if they start delving into the external and internal factors, and might question the relevance of asking questions around status or experience. But without unpeeling those layers of the onion, how can we, as investigators, feel assured that we have sought to understand the issue from the employee’s perspective? Just because the questions make us feel uncomfortable, it shouldn’t stop us from asking them.

It should be recognised that part of the “uncomfortableness” is an individual’s lack of confidence in asking questions that might be perceived as inappropriate in some way. In large organisations, investigators are chosen for their skill set, or the fact that they are considered to have good interpersonal skills and experience of managing diverse teams. But in smaller organisations, where conduct issues also arise, can the owner/manager feel so confident that the person they’re asking to investigate has the right skill set?

(You could ask does it matter? The answer is always, Yes).

From my perspective, the issue is complex because the diversity of workplace investigations is massive. As an expert workplace investigator, I am commissioned to undertake complex, challenging and sensitive investigations. I see one extreme. But I am aware, because this is how I learnt my trade, that there are many more investigations being undertaken each day which are simple in nature: a theft, an employee who has gone AWOL.

The research shows that unconscious bias occurs when there is a lack of understanding, and that could happen in any investigation, irrespective of its simplicity or complexity. Investigation principles are covered by ACAS – so should understanding the inner layers of the onion be a compulsory part of an investigation? I doubt that ACAS would ever go that far. I could be wrong.

Measuring unconscious bias – lessons from the Police

When considering unconscious bias in workplace investigations, I’m specifically looking at the measures used in the NHS. Every year, NHS Trusts have to provide data through the Workforce Race Equality Standard (WRES) on the number of staff who have been disciplined between 1st April and 31st March, broken down into two categories: White and BAME staff. This data shows that, proportionally, BAME staff are more likely to be disciplined than White staff, and there is an active drive by NHS England to remove this bias. It is assumed (and I think it’s untested) that these figures and the unequal treatment of BAME staff are due to unconscious bias.

Targets have been set by NHS England as to the “optimal” balance between White and BAME staff who are subject to disciplinary procedures. But my query is how was that target determined?

A similar debate has been had in policing, where racial profiling is a public concern, particularly in the States. However, with policing, there are more elements within the process of managing illegal acts/criminality than there are with workplace investigations. Two researchers (Greg Ridgeway and John MacDonald) have studied the methods used to measure racial bias within the police and have found that there is no single optimal method that provides a definitive conclusion.

One approach, which is attributed to the media, is external benchmarking. For example, people from a BAME background might represent 32% of those charged with an offence, but only 12% of the total population. These figures show a wide disparity in the percentages. Ridgeway and MacDonald have found that not only is this disparity common, but that “such statistics promote the conclusion that there is evidence of racial bias in police decision making”. In my view this is what the NHS has done when it has looked at its WRES data.

This is not to deny that there could be racial bias, but any academic researcher will tell you that there are other factors that need to be taken into consideration. For example, to what extent is a particular group exposed to scrutiny in the first place?

So when looking at external benchmarking, such simplistic conclusions need to be viewed with caution.
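To make the benchmarking point concrete, the comparison above can be expressed as a simple disparity ratio. The sketch below is purely illustrative: the 32%/12% figures come from the example above, while the “exposure-adjusted” benchmark of 25% is a hypothetical number I’ve invented to show how the apparent bias shrinks once you account for who is actually exposed to scrutiny.

```python
# Illustrative external-benchmarking calculation (hypothetical figures).

def disparity_ratio(share_of_outcome: float, share_of_population: float) -> float:
    """A ratio above 1 suggests over-representation relative to the benchmark."""
    return share_of_outcome / share_of_population

# Naive benchmark: a group is 32% of those charged but 12% of the population.
naive = disparity_ratio(0.32, 0.12)
print(f"Naive disparity ratio: {naive:.2f}")  # 2.67 – looks like strong bias

# Hypothetical adjustment: suppose the same group makes up 25% of the
# population actually exposed to scrutiny (stopped, reported, etc.).
adjusted = disparity_ratio(0.32, 0.25)
print(f"Exposure-adjusted ratio: {adjusted:.2f}")  # 1.28 – much smaller
```

The arithmetic is trivial, but that is the point: the conclusion depends entirely on which denominator you choose, which is why Ridgeway and MacDonald caution against reading bias straight off a population benchmark.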

In the NHS, employees are encouraged to raise what is referred to as a DATIX when an adverse incident has occurred – for example, if there has been a medication error, or a patient has lost a significant amount of blood during surgery, or a patient/relative has been aggressive or violent towards a member of staff. Each incident is subject to an informal fact-find, following which areas for improvement, or “lessons learnt”, are identified, accompanied by an action plan for implementation. Anyone can raise a DATIX (after receiving training), and it is positively encouraged as part of the NHS’s drive to promote a culture of candour. However, I’m not aware of any Trust undertaking an analysis of the workforce population to see if certain groups are more likely to raise DATIXs than others (and equally, where the DATIX is about another member of staff, what proportion of those staff are BAME?). The act of raising a DATIX isn’t currently subject to such scrutiny and could also, potentially, be subject to unconscious bias.

Other approaches for examining racial profiling in the police include direct observation (e.g. the infamous Lamberth research on the New Jersey Turnpike), or looking at a micro-level – where each police officer’s track record is compared to a colleague’s in a similar role to identify any variance: these are not applicable in workplace investigations. Using internal benchmarks is nonsense, as the data being benchmarked could itself be racially biased – and would therefore perpetuate the myth that “the statistics are valid”.

One option that Ridgeway and MacDonald put forward is for police officers to be “race-blind” – to make decisions without knowing the race of the “suspect”. However, they feel that this would not be possible in real life. In workplace investigations, though, and similar to the approach taken in recruitment, initial decisions around whether a case should be formally investigated could be made without any information that could reveal the employee’s race. Which brings us back, full circle, to the point I made a few weeks ago in my post on “name-ism”.

Microaggressions as a manifestation of unconscious bias

We regularly hear the phrase “microaggressions” in our everyday conversations, but do we always understand what it means?

According to Derald Wing Sue, “microaggressions target negativity to marginalised groups” (2007), and there are three types of microaggression:

  1. Microinsults: compliments which are loaded with unconscious bias
  2. Microinvalidations: dismissive comments which have discriminatory undertones
  3. Microassaults: overt (and often conscious) verbal comments that intend to harm/upset

What is more alarming is that Sue’s research shows that the most harmful forms of microaggression are made by those who are most unaware that they are demonstrating such behaviours. According to Sue’s findings, the more a person believes that they are a “good, moral and decent” person, the more likely they are to struggle to understand their own prejudices and how these manifest themselves through micro-aggressions. It is believed that the reason for this is that gaining such awareness would challenge the person’s belief system in terms of how they see themselves. As Sue succinctly comments, it “threatens their self-image”.

The other challenge that Sue has observed through his research is the invisibility of microaggressions to the majority group. It is those who are marginalised who are more likely to identify microaggressions, but they are also the most disempowered and therefore unable to effectively challenge the behaviours. Equally, those responsible for the microaggressions are in a position of power, but Sue believes that they frequently deny the existence of inappropriate behaviours, such as microaggressions, because ultimately this would be a challenge to their position, power and associated benefits.

In terms of the workplace, and in particular, workplace investigations, it is therefore of vital importance that marginalised groups are actively involved in the decision making process. Being able to hear their voice, to hear their perception of reality and using it to make appropriate decisions regarding an employee’s future is an important factor in reducing unconscious bias.

I was talking to an NHS Trust yesterday and I specifically asked them what steps they were taking to reduce any unconscious bias in their workplace investigations. The response was detailed, with a clear procedure designed to make sure that only the cases where it is felt “appropriate” are formally investigated. But I didn’t learn how marginalised groups were engaged in devising what is essentially a desktop exercise, or how they contribute to the (almost!) weekly decisions about who should be subject to formal disciplinary proceedings and who shouldn’t.

To be fair, they might be… I just didn’t hear it in our conversation yesterday, and I didn’t want to delve too deeply as challenging unconscious bias wasn’t the topic of the meeting. However, the day before, I had reviewed their progress in reducing unconscious bias in their disciplinary processes (see their WRES score), and they haven’t made as much progress as other Trusts. This may be the reason why. Maybe this is a conversation for another day.

If you want to read more about Sue’s research, check out his book: Microaggressions in Everyday Life.

What happens before the start?

Sometimes my brain doesn’t work as fast as it should. That is not an apology for what I’m about to say, or an excuse. Just an explanation.

It’s late at night, I’m in bed, and I’m thinking about Lencioni’s Trust Pyramid and its impact/influence on employee relations. We know that the trust between an employee and their employer is often broken during a formal HR investigation (where the employee is either the subject of disciplinary allegations or bullying and harassment allegations, or has raised a grievance).

More specifically, I was thinking about the trust in the relationship between the employee and their manager prior to any disciplinary allegations being made. Are allegations more likely to be made if there’s already a breakdown in trust?

And then I remembered my post the other day about Social Contagion – which was about what happens when people are “othered”. When someone is “othered” there will always be less trust between them and the person who has “othered” them (e.g. the manager), compared to the individuals who are part of the “in-group”.

As a manager, are you aware that you trust those you have “othered” less? Are you aware that the language you use about them changes because they are “othered”? And what is the impact on an intervention to stop unconscious bias in a disciplinary process, which relies on you, the manager, to detail why a particular employee (the one you have “othered”) should be subject to a disciplinary process? Do those who review the cases subject to disciplinary proceedings consider the “before”? Are they considering the potential unconscious bias of the manager presenting the case?

Therefore, to stop unconscious bias in the disciplinary process, we need to look at what happens before……

…….I told you at the start that I should have reached this conclusion sooner.

Perception of Fairness

I’ve learnt, after undertaking hundreds of investigations, that if I do the following two things, the employee is more likely to accept the outcome. (I say “more likely” as it’s not always the case!)

The first is allowing the employee to feel that they have been able to voice their concerns, fully.

The second is showing I have heard their concerns and these are reflected in the investigation report, in a balanced way.

So, I was very interested to read this weekend that there had been some research in the States that looked at procedural fairness.

In this research, by Jonathan Casper (and others), the authors looked at convicted defendants’ perceptions of the procedural fairness of their conviction. They learnt that the “felon” was more likely to accept the verdict if their lawyer had spent more time with them pre-trial. Acceptance did not correlate with the severity of the sentence in proportion to the crime. The perception was based on the lawyer’s behaviour towards them, in demonstrating that they wanted to fully listen to the felon’s perspective on the issue and their concerns.

Another conclusion that Casper and colleagues made was that procedural shortcuts were met with “unanticipated hostility”. I’m aware that the police use the “Pre-Charge Expedited” approach when a defendant has already pleaded guilty – introduced to save time in court and therefore taxpayers’ money. I have also seen a similar approach in some NHS Trusts’ disciplinary policies. I’ve always felt uncomfortable when I’ve seen this in a disciplinary procedure (regardless of whether the trade unions have agreed to its introduction), because it doesn’t follow the ACAS guidelines, and I have advised Trusts not to exercise this part of the policy for the case that I’m investigating.

Interestingly, in their book, Sway: the irresistible pull of irrational behaviour, Ori and Rom Brafman discuss how the perception of fairness varies in different countries, giving examples of Russia, France and the United States. I don’t know (yet) how British culture impacts on the perception of fairness, but I would be interested to know.

The Brafmans also discuss how the extent to which an individual is updated on a particular matter influences the perception of fairness (and, by extension, how much the other person trusts you). I always believe in regularly updating my client on how a case is progressing, but I won’t necessarily do that for the person at the centre of the investigation.

The take-home point here is that the behaviour of the investigator is of vital importance when undertaking workplace investigations, perhaps more than we realised. The onus is on the investigator not only to ensure that they are following the correct procedure and undertaking a robust, objective and balanced investigation, but also to take the time to listen to those involved and to communicate with them regularly with progress updates. And don’t take any short-cuts.

Eradicating “name-ism” in workplace investigations

It is well researched that “name-ism” (that is, judging someone based on what their name suggests about their background: gender, race, social class, country of origin and so on) has a negative impact during recruitment. Many (if not most!) organisations now have processes in place whereby an individual’s application is anonymised so that the recruiters cannot make judgements, conscious or unconscious, about them.

However, this concept has not yet been applied to workplace investigations; in the NHS it is known that BAME staff are more likely to be disciplined than white staff. NHS Trusts have put interventions in place to try and reduce this bias, but I’m not aware of any intervention that has chosen to anonymise the case – from the point when the decision is made to take disciplinary action, through the investigation (and therefore the analysis), to the disciplinary hearing.

In recruitment processes, the shortlisting element is a desk-top exercise, so it’s easy for organisations to withhold an applicant’s name at this stage of the process to reduce any unconscious bias. However, there is no equivalent in workplace investigations. Some NHS Trusts have implemented the pre-disciplinary checklist, which is an additional stage to the workplace investigation process. However, I haven’t yet seen any pre-disciplinary checklist that doesn’t contain the employee’s name.

The investigative process is front-loaded with face-to-face meetings, followed by a period of desk-time when interview notes are drafted and documents reviewed, leading to the analysis of the findings and the drafting of the report. Realistically, I don’t think that it’s feasible to undertake a workplace investigation without seeing the individual(s) concerned. The face-to-face contact (even if it is through MS Teams or Zoom) is an important part of the process. You observe the body language, and ask supplementary questions which are inspired by the non-verbal communication being “spoken” within the interview. The face-to-face contact is also important for establishing rapport, encouraging the interviewee to open up and provide useful evidence – more so than if the transaction occurred via email or in response to written Q&As. And even more importantly, during the face-to-face meeting, the investigator seeks to demonstrate the credibility of the process through the way that they manage the interview: if an employee believes that the investigator has undertaken an objective and empathetic investigation, they are less likely to find fault with it, or dispute the findings. I also can’t imagine a disciplinary hearing where you don’t have the individual in the room.

Name-ism is about race, origins, family background, gender, identity. The research shows that “easy-to-pronounce” names (and the bearers of those names) are viewed more favourably than “difficult-to-pronounce” names. (For the record, I have little patience for those who choose to shorten a colleague’s name because they find the original “too hard to say”). Shortening someone’s name so that it’s “easier” only serves to devalue that colleague; it’s disrespectful. But equally harmful is mispronouncing a name. By doing so, you’re letting this colleague know that their culture (and therefore they themselves) are valued less than you. So for all these reasons, we need to make sure that when we’re undertaking workplace investigations (or even deciding to commence formal proceedings), we are aware of the potential to be biased, based purely on the name of that individual.

Social Contagion: can you catch it?

I was reflecting on a disciplinary investigation that I undertook a few years ago. I recently heard that the employee had moved to a new organisation and was doing really well. I was pleased to hear this as I had a sense that he was being “othered” in his old job, which is why he ended up being subject to disciplinary allegations.

Let’s call this guy Thom. Thom had worked in a mental health ward for 15 years. But in those 15 years he didn’t make friends with any of the other staff on the ward. Thom came to work, worked hard on every shift, and kept to himself. There were no problems with his performance or his conduct. After 15 years he had a “clean sheet”.

I met Thom when he had been accused of mistreating a patient. As is usual practice in most hospitals, Thom was moved to another ward for the duration of the investigation. Whilst working on the new ward a second complaint came in, this time about his attitude towards another member of staff.

It turned out that there was insufficient evidence to prove that Thom had mistreated the patient. It could have been Thom, but it could also have been one of his three other colleagues on shift that day.

What I learnt during the investigation is that Thom had been “othered”. The Occupational Therapists told me that “they knew about Thom and kept away from him”. So when they learnt that a patient was mistreated, they assumed it was Thom who was responsible. The new ward staff knew that Thom had been moved to their ward because of an incident, and so they were also on the “look out” for any adverse incidents involving Thom – and then one occurred and they quickly reported it.

It was clear that this was a case of Social Contagion: this is when ideas, impressions, feelings and beliefs spread across a “population”. In this instance the population was those working in the hospital. Thom had not taken steps to ingratiate himself with his colleagues, choosing to keep his distance, and in this way he was part of the “out-group”. Social contagion is greatest when applied to those in “out-groups”. So when something went wrong, it was assumed that Thom was responsible, twice.

If Thom hadn’t been “othered” and if the social contagion hadn’t spread across professions and different clinical areas, then Thom most likely wouldn’t have faced any disciplinary action. The facts would have been established first, and Thom would not have been subject to weeks of anxiety, wondering if he would be able to keep his job.

Social contagion is directly linked to unconscious bias. We need to be mindful of this when determining whether or not a case merits a formal disciplinary investigation. Are we really looking at the facts, or are we listening to our gut and the informal intelligence that permeates our organisation?

Behaviour as a function of the person and the environment: Kurt Lewin

I first heard of Kurt Lewin when I was studying for my CIPD qualifications some 20 (*cough*) years ago. We were taught the “Change Management Model” of the three states of change (Unfreeze, Change, Refreeze). I have to admit that when I was young, enthusiastic and naive, I didn’t appreciate this model. I thought it was too simple, too obvious. I have a few more miles on the clock now, and I can see that its simplicity is its strength. I’m advising a client later today on what actions they can take to resolve the dysfunction that they’re currently facing in their team, and one of my recommendations is a practical version of this Kurt Lewin model. Using the model will help me explain exactly why they need to take certain actions and why they will be successful.

But I didn’t intend to talk about Kurt Lewin’s three states of change in this post. Instead, I wanted to discuss a different Kurt Lewin theory that I’ve recently come across.

B = f(P, E)

Principles of Topological Psychology

“P” refers to the Person. Every person brings their own set of experiences and that makes them unique.

“E” refers to the Environment. In the workplace, the “E” might be considered to be the constant in this equation, as everyone is working in the same environment.

“f” refers to the function: that is, behaviour is a function of the Person interacting with that Environment.

“B” refers to the behaviour displayed.

What this formula is saying is that because everyone is different, the behaviour displayed by individuals within the same environment will differ.

So, one person might grow up being taught that it’s important to follow the rules, and in doing so, they will be rewarded (with promotion, advancement, opportunities?)

Another person might be taught that the rules are a “guide” and that following the rules may not be the best way to get results.

With two different sets of “P”s, the behaviours displayed by each individual will be different. And because their “P” is different, they may not understand the other’s perspective.
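Lewin’s formula can be sketched as a toy function. To be clear, the “rule-follower” and “rule-bender” profiles below are invented purely for illustration (they are not from Lewin): the point is simply that holding E fixed and varying P changes B.

```python
# Toy illustration of Lewin's B = f(P, E): the same environment (E)
# produces different behaviour (B) for people with different make-ups (P).
def behaviour(person: dict, environment: dict) -> str:
    """Return a (very simplified) behaviour for a person in an environment."""
    if environment["rules_enforced"] and person["sees_rules_as"] == "binding":
        return "follows the rules"
    return "treats the rules as a guide"

office = {"rules_enforced": True}               # the shared environment, E
rule_follower = {"sees_rules_as": "binding"}    # one person's P
rule_bender = {"sees_rules_as": "flexible"}     # another person's P

print(behaviour(rule_follower, office))  # follows the rules
print(behaviour(rule_bender, office))    # treats the rules as a guide
```

Same office, same rules; different “P”, different “B”.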

Quite often when we are undertaking workplace investigations, we are looking at the rules: to what extent did the employee breach them? I also look to internally benchmark the behaviour: do people know the rules, do they generally follow them, and are individuals held to account (or supported with appropriate coaching and development) when rules are breached?

But perhaps we should also be considering the “P”? What is it about that individual, their background, their experiences, that led them to allegedly breaching the rules?

And of course, bringing this all back to unconscious bias, to what extent do we, as managers, make assumptions about the “P” based on stereotypes and/or our own implicit bias? And how do we prevent this?

I don’t have the answer to these questions today……

The impact of stereotypes on the self

Research shows that if an individual is aware that they are the subject of a stereotype, this has a negative impact on them as an individual.

Let’s call our individual Joe. He’s recently joined a team of administrators.

Joe is a young, black male and he is working within a team of women, ranging from 30 to 55 years old. Eight of the team are white, and the other person in the team (apart from Joe) is Asian.

Whether the other women demonstrate it or not, Joe will be acutely aware of the potential to be negatively stereotyped, whether because of his age, his race, or both.

The impact on Joe may not be obvious, but the research shows that if Joe thinks his colleagues might be applying a potentially negative stereotype to him, he will become anxious and stressed; he will start actively trying to prove that he can do the job, while suppressing any negative thoughts.

But Joe is at a disadvantage. Because he’s aware of being potentially stereotyped, his heart rate rises, and his blood pressure increases along with his cortisol levels (as he’s fighting off a perceived threat). This has a negative impact on Joe’s ability to perform, his ability to remember, and his ability to focus.

This leaves Joe in a vulnerable position. He’s trying so hard but his body’s response to fear fails him.

So when Joe makes a mistake, how do we respond as managers?

Are we aware of what is happening with Joe, or do we just see the “facts”: that Joe has made a mistake?

We can all be stereotyped, and so we might all find ourselves in “Joe’s” position at some point in our working lives. We may not be consciously aware that we perceive the stereotype we’re fighting as a threat. In turn, we may not realise how it’s impacting us and our performance. And if it happened to you, how would you like others to treat you?

Whatever the situation, I’m learning that all roads lead back to the one piece of advice that I regularly give: take your time to find out what’s going on. Don’t jump to conclusions. Speak to the person involved. Listen, and try to understand. Show empathy and compassion.

Three Theories about why we have implicit bias

Have you ever thought about why we have unconscious or implicit bias? It’s useful to think about as this could impact on your management approach – particularly in challenging staffing situations.

Evolutionary psychologists have developed three theories, which are as follows:

Firstly, Error Management Theory: this is where your brain, misled by your environment, leads you to think X. However, if you were in a different environment, you would think Y – which is the more accurate and appropriate conclusion in the circumstances. In this way, your brain draws conclusions which are not accurate, or could be described as “false positives”. At times these conclusions can be prejudicial or stigmatising, and they can lead people to “other” colleagues, ostracising or isolating them.

In the workplace, you will often see a group of people ostracising the team member who isn’t quite like them, who doesn’t follow the same set of unwritten rules. Is a manager more likely to discipline a team member whom they consider to be an “other” than someone they share an affinity with?

Secondly, Artefact Theory: implicit bias occurs when you find yourself working with an employee who doesn’t share the same values, attitudes or approach. This theory also covers situations where a set of workplace expectations is applied, but it is not aligned with the employee’s understanding of those expectations.

More simply put: a young person who has not had a job before may not realise the importance of arriving at work on time every day. However, it is an unwritten expectation of the workplace that employees rock up for work on time, ready to work the hours that they previously agreed with their manager. Our young person will soon find themselves subject to informal disciplinary proceedings for timekeeping. But is that the right action to take?

Lastly, Heuristic Theory: my way of explaining this theory, which is attributed to Daniel Kahneman, is to consider it as “System 1” thinking (also Kahneman’s term). Whilst it’s important to think on our feet, sometimes a lack of care in truly understanding a situation can lead to decisions being made which may not be appropriate.

In my mind, this theory is interlinked with the other two. By jumping to conclusions based on a lack of understanding of another’s perspective, and outside the right context, you have the potential to display unconscious bias. The NHS has tried to tackle this with the introduction of pre-disciplinary checklists. These move the decision-making from System 1 to System 2, so that, hopefully, more balanced decisions are made about whether an issue should be managed formally or not.

My first thought was that if unconscious bias is a by-product of our evolutionary development, why does it impact us in today’s society, in the workplace? Is it because subconsciously we’re all fighting for “survival” on a daily basis at work? But looking at the examples above, my thoughts have led me to ask: as managers, are we being clear about our expectations with our team? Are we seeking to understand those who are different from us, so we can try to see the world from their perspective? And how often are we leaping to conclusions instead of stopping, thinking and finding out more information before we reach them?

The three theories that inspired this blog post can be found in Pragya Agarwal’s excellent book, Sway: Unravelling Unconscious Bias. The book is written in the style of Malcolm Gladwell: easy to read whilst making some excellent points.

Definition of Intersectionality

About three months ago, my colleague and I were looking for a good definition of intersectionality. Clearly, we knew what it meant. We had read Kimberlé Crenshaw’s seminal essay (which is fascinating, but not helpful in providing a definition). But it took us a while to find a reputable, credible definition of what intersectionality means. This is what we finally settled on:

The United Nations definition of Intersectionality, following the Durban conference on racism in 2001

“An intersectional approach to analysing the disempowerment and marginalisation of women attempts to capture the consequences of the interaction between two or more forms of subordination. It addresses the manner in which racism, patriarchy, class oppression and other discriminatory systems create inequalities that structure the relative positions of women, races, ethnicities, class and the like……Racially subordinated women are often positioned in the space where racism or xenophobia, class and gender meet”.

Causes of Workplace Drama

I’ve recently come across Patti Perez’s book “The Drama-Free Workplace: How You Can Prevent Unconscious Bias, Sexual Harassment, Ethics Lapses, and Inspire a Healthy Culture”. It is perhaps one of the most interesting books I’ve read for a while. Patti takes a common-sense rather than an academic approach, and she makes some very interesting points. Whilst the book centres on sexual harassment, the issues that are explored could be applied to a range of misconduct situations, or to teams with a poor workplace culture.

Patti talks early on in her book about the causes of workplace drama, which can be summarised as:

  1. Inauthentic leadership
  2. Communication gaps
  3. Increased division (amongst individuals)
  4. Culture of complicity
  5. Lack of transparency
  6. Persistent confusions
  7. Problem solving deficit
  8. Blind Spots (think Johari Window)
  9. Unwillingness to admit wrong-doing
  10. Wrong Solution

I think that this is a great list and I’m still thinking about whether there is anything else I would add to it (currently, I have nothing!)

The key theme underlying most of these points is a skills gap: knowing how to problem-solve; knowing what you can and can’t say whilst creating an environment of transparency; how to give feedback and actively listen to your team; how to have the courage to tackle inappropriate behaviour in a way that has a positive outcome. I could go on.

My observation, as a workplace investigator, is that management do not often see the issues as part of the wider picture – that the environment they have created has either contributed to or prevented the “workplace drama”. Instead, the focus is on the actual incidents themselves. Although there is always an outcome to confirm “what most likely happened and why” for that particular issue, I’m not convinced that employers spend time considering what has been learnt, what needs to change, and applying that across the entire organisation. And if there is a period of reflection and learning, to what extent is it done in an inclusive way?

I could discuss each point in detail, but I’m thinking that I’d rather save that for future blog posts.

The limitations of memory in workplace investigations

Many investigations rely on participants’ recollections of events, and the information from an investigatory interview is a valid source of evidence. However, there are times when participants make memory errors. These are not conscious errors (that would be lying, or potentially vexatious behaviour) but occur unconsciously, due to a range of factors.

Memory errors can occur at two points: during the event itself, and at points after the event. These are explored further below.

The event

Memory errors can occur at the very moment that the memories are being made. The reasons for this could be:

  1. The event is similar to other events that the individual has witnessed. As such, when recalling the memory, they may confuse what happened at one event with what happened at another. A typical example: if you park your car in the same car park on a regular basis, there will be days when you forget where you parked it that day. This is technically called “source confusion”.
  2. The individual wasn’t paying attention at the time – either they were thinking of something else, or their attention was diverted to another event happening simultaneously. As such, the individual won’t lay down a memory of the event; it’s not possible to recall a memory that was never stored in the first place. But this doesn’t mean that their memory of peripheral events is any less accurate than another person’s memory.
  3. Similarly, an individual is more likely to remember something if it was the focus of their attention at the time. They are less likely to recall what was occurring on the periphery.
  4. Individuals are more likely to remember an event if it’s associated with a negative emotion. However, if the event was traumatic, the individual may suppress the memory as a coping mechanism.

After the event

  • The biggest challenge with memory recall is that memories fade over time (“memory decay”). After 24 hours, memory is only around 30% accurate; after a month, only around 10%.
  • Memory errors can occur if an individual discusses the event with a second person, who shares new or different information. Unconsciously, the individual may adopt this new information and store it in their memory as if they had seen it themselves, particularly if the new information is plausible or comes from someone they trust. When later asked to recall the memory, they will include the new information and truly believe that they witnessed it at the time of the event. This is called “intrusion error”, and it regularly occurs when witnesses discuss events between themselves prior to their investigatory interview. It should also be noted that if the new information is accurate, it actually aids the accuracy of the individual’s recall.
  • Our brain makes sense of the world by developing “schemas”. For example, when you go into a restaurant for a meal, you know what will happen and in what order. It’s not something we are consciously aware of, and we will have numerous schemas for different situations. When asked to recall a memory, an individual may “fill in the gaps” with information from their schema. Again, this is an unconscious act, but it occurs because the witness is thinking “this must have happened, because this is what always happens”. For example, an individual may not remember that a colleague was late to a particular meeting, but they know that the colleague is late to every meeting. Their schema will therefore lead them to believe that they recall the colleague arriving late to that particular meeting.
  • “Imagination inflation” is the term for recollections that are partially imagined, rather than an account of what actually happened.
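Memory decay of the kind described above is often modelled as a power-law forgetting curve, a common simplification of Ebbinghaus’s classic findings. The sketch below is my own illustration, fitted so that retention matches the rough figures of 30% after a day and 10% after a month; the functional form and constants are assumptions, not taken from the research itself:

```python
import math

# Power-law forgetting curve R(t) = a * t**(-b), fitted so that
# retention is ~30% after 1 day and ~10% after 30 days.
a = 0.30                         # retention after 1 day
b = math.log(3) / math.log(30)   # solves 0.30 * 30**(-b) == 0.10

def retention(days: float) -> float:
    """Estimated fraction of a memory retained after `days` days."""
    return a * days ** (-b)

print(round(retention(1), 2))    # 0.3
print(round(retention(30), 2))   # 0.1
print(round(retention(7), 2))    # roughly 0.16 after a week
```

The practical point for investigators is visible in the shape of the curve: most of the loss happens early, which is why statements taken promptly after an event are so much more valuable than those taken weeks later.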

Factors to consider

  • It is possible for witnesses to have different memories of the same event; this can be attributed to what each was focusing on at the time the memory was being made.
  • Just because an individual’s memory is limited, that does not make it any less accurate.
  • Equally, if someone has a vivid memory (a “flashbulb memory”), that does not mean it is any more accurate than another person’s memory of the same event. It just means that they remember more.
  • It is not possible for an individual to know which of their memories are false and which are genuine. If questioned, individuals will be adamant that their memories are accurate – unless proven otherwise by more tangible and credible evidence.
  • The level of an individual’s confidence does not correlate with the accuracy of that individual’s memory of an event.
  • Despite all of the above, individuals’ memories are more often accurate than not.

Therefore, in investigations, it is important that the evidence is triangulated, preferably with contemporaneous evidence such as audio recordings, CCTV, emails and diaries. Failing that, the balance of probabilities (the standard of proof applicable in workplace investigations) is used to determine what most likely happened.