The Behavioural Spectator

VAR, autonomous vehicles, and the rejection of innovation

Let’s stick to worse, but more cognitively compatible systems




Described as ‘dangerously flawed’ and ‘inefficient’ by sages Lineker and Murphy, VAR has continued to take a public battering in recent weeks. The hounding of referees is hardly a new phenomenon – but the discourse around VAR is particularly notable for its lack of objective foundations.


Yes, it does ‘take too long’ – but how long, and how does this compare to other needless stoppages? Yes, it ‘gets things wrong’ – but what is the error rate? ‘It is taking the emotion/fun out of the sport’ – oh, bore off.


So, away from the baseless exasperations spouted from the MOTD armchairs every week – how is VAR actually faring?


The PGMOL recently released some data on the metrics being used to evaluate the efficacy of VAR.

  • 12 errors have been identified in the 150 games played since the World Cup

  • Incorrect interventions have reduced to one every 37.5 games

  • Missed interventions have reduced to one every 21.4 games.
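As a quick sanity check, the headline rates can be reproduced from the raw figures. A minimal sketch – note the PGMOL did not publish per-category counts, so the incorrect/missed splits below are inferred by rounding their "one every N games" rates:

```python
# Sanity-check the PGMOL figures reported above.
# Assumption: 12 total errors across 150 games; per-category
# counts are inferred from the published per-game rates.

games = 150
total_errors = 12

games_per_error = games / total_errors
print(games_per_error)  # 12.5 -> "one error every 12.5 games"

# Implied category counts (rounded to whole incidents):
incorrect = round(games / 37.5)  # incorrect interventions
missed = round(games / 21.4)     # missed interventions
print(incorrect, missed)         # 4 7
```

The implied counts (4 + 7 = 11) don't quite sum to 12, presumably because the published per-category rates are themselves rounded.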

One error every 12.5 games! That seems almost too good to be true. Some fans take issue with how errors have been defined – arguing that the ‘clear and obvious error’ threshold at which VAR intervenes is set too conservatively – but that is a reflection of poor human, not virtual, refereeing.


So why does VAR still have such a bad reputation? The answer to that question may have implications for driverless cars, robot-assisted surgery, and the rollout of AI across industry.


Consider the following...

If a random word is taken from an English text, is it more likely that the word starts with a K, or that K is the third letter?

  • Word starts with K

  • K is the third letter

  • About the same


When making frequency estimations, we tend to rely on information that comes quickly and easily. When asked to compare the number of words beginning with K against the number of words with K as a third letter – obviously words beginning with K come more readily to mind – and this leads (most of) us to assume this means higher frequency. In fact, K appears far more often as the third letter of English words than as the first.


There is nothing irrational about this, and it’s a strategy that will point you in the right direction most of the time. But it can lead to mistakes – and sometimes with serious consequences. News stories sensationalising relatively rare events (e.g., plane crashes) can skew what comes easily to mind, making people wildly overestimate the chances of these events happening.


It has been estimated that an extra ~1,500 Americans died in car accidents due to increased road traffic in the year after 9/11, as people avoided flying. More teenagers may take drugs if they think everyone else is doing it. And we may think that VAR is shit because every time an error occurs – it’s front- (and back-) page news.


What this coverage means is that when we evaluate the efficacy of VAR – devoid of actual data – there are plenty of readily available examples of it messing up. Oh well then, it must be rubbish.



As well as our frequency estimations being biased by availability, our recollection of the past itself is distorted by emotional intensity. The peak-end rule explains how people recall experiences by emphasising peaks in emotional intensity and endings*.


The peaks – the heights of emotional intensity – are amplified by the nature of VAR. Fans are rightly enraged by the injustice of on-field refereeing errors – but this doesn’t come close to the fury provoked by VAR errors, and the reason for this may lie in VAR's faceless communication.


Our moral judgment is rooted in a cognitive template of two perceived minds – a moral dyad of intentional agent and suffering patient. When an on-field referee makes an error, I can assign moral responsibility, bringing an element of closure.


But despite being a collection of people – I perceive no mind in VAR. Screaming at Michael Oliver is cathartic; screaming at a video display is psychotic. With no moral agent to blame, the intense injustice of VAR errors festers away – forming lofty emotional peaks that distort any objective evaluation.



The way availability bias and the collapse of the moral dyad distort our judgement and recollection of VAR has implications for the rollout of other tech and AI decision systems. Driverless cars may improve road safety, robotic surgery may increase survival rates, and algorithms may improve diagnostic accuracy.


However, when a rare error occurs – with no moral agent to blame – the pain is intensified, and such is the media attention that we overestimate the error rate. Ultimately, we will reject safer AI-driven innovations in favour of our current more dangerous, but more cognitively acceptable, systems.



*How was this explored? Sadistic psychologists adapted colonoscopy procedures – in one demonstration, leaving the scope in for an additional three uncomfortable, but not painful, minutes. Patients subjected to the extra three minutes recalled their experience as less painful and were more likely to return for subsequent procedures than those receiving the typical treatment. When VAR is debated, it is invariably after a mistake – a peak-end which biases judgments of our overall experience of the system.

