Humans v. the Infernal Machine: Cross Examination in Virtual Hearings



Author: José María de la Jara*

Jurisdiction:
International
Topics:
Online Arbitration
Cross-Examination
Hearing

In 1906, John Philip Sousa traveled to the United States Capitol to talk about a new technology. This is what he had to say:

“When I was a boy, in front of every house in the summer evenings, you would find young people together singing the songs of the day, or the old songs. Today, you hear these infernal machines going night and day. We will not have a vocal cord left. The vocal cords will be eliminated by a process of evolution, as was the tail of man when he came from the ape.” [1]

He was talking about the phonograph.

More than 100 years have passed. Even so, Sousa’s fears resonate with the way that we, as an arbitration community, have reacted to the popularization of virtual hearings during the pandemic.

Some have mourned and claimed that virtual hearings have effectively ripped away their ability to “read” witnesses, while others have stressed that computer-to-computer examination would actually enhance their talent for detecting false statements. Like Sousa’s predictions about the elimination of our vocal cords, these views rely on intuition, fear of the unknown and pseudoscience.

What hides behind these views is the loss of physical interaction, something that even the most tech-enthusiastic among us are grieving. The aim of this post is to provide assurance by rebutting both positions and, at the same time, to argue that virtual hearings offer an opportunity to fast-track the development of the international arbitration community.

I. THE GLASS IS HALF EMPTY

Pessimistic practitioners think that they have lost the ability to “read” witnesses. The problem is that they never had that ability to begin with.

The notion that human beings are capable of correctly detecting whether witnesses are lying and distinguishing their emotions has been rejected by a myriad of psychological studies.

In 2019, Lisa Feldman Barrett conducted one such study along with four other psychologists.[2] After reviewing more than 1,000 papers on the topic, they reached a unanimous conclusion: the ability to detect emotional states based on facial movements lacks scientific grounding. In other words, there is no one-to-one correlation between facial configurations and emotions.[3] People smile, frown and scowl for reasons other than being happy, sad or angry. And they also express these emotions without showing those facial configurations.

Since the 1980s, psychologists have conducted numerous meta-analyses on the topic – i.e., studies using statistical methods to determine whether past research can be translated into a common scale. This vast aggregation of scientific knowledge arrives at the same conclusion.[4]

In fact, we are not good at judging emotions even when our job depends on it. For example, a meta-analysis conducted by Aamodt and Custer showed that federal law enforcement officers and judges trying to detect lies scored accuracy ratings of only 54.5% and 59%, respectively.[5]

In sum, the worry of pessimistic practitioners is misplaced. We cannot lose what was never ours to begin with. Human beings are awful lie-detectors, both offline and online.

II. THE GLASS IS HALF FULL

Optimistic practitioners believe that the fact that the screen is focused on the witness’ face alone allows them to “read” witnesses more accurately. In doing so, they claim that computer-to-computer examination improves their ability to extract relevant evidence from witnesses. This view, too, relies on intuition and bypasses the cognitive impact of virtual hearings.

Communication via video provides cross-examiners with less information about the witness’ mental and emotional state than face-to-face interaction.[6] Body language is out of the picture and most systems use a standardized bandwidth that filters high and low voice pitch.[7] This raises a barrier to the transmission of emotions. In the end, it seems like we are actually worse “readers” in computer-to-computer examinations.

Furthermore, digital cross-examiners lack an important asset in their toolbox, as the vast majority of computer screens are not designed to foster eye contact. Since webcams are placed on top of the screen, it is impossible to look into the witness’ eyes and into the camera at the same time. Without eye contact, the examiner loses a crucial tool to convey to the witness that he “will not allow deviations from the question and answer approach.”[8]

The virtual setting also affects arbitrators’ perceptions. In a study conducted by Landström, 122 mock jurors watched witnesses testify either face-to-face or via video.[9] While the face-to-face witnesses received an average credibility rating of 4.30, the digital witnesses received only 3.61.[10] Based on her own research, Goodman concluded that witnesses who gave evidence via video were perceived as less credible than those who did so in a regular setting, even when the former were more accurate.[11] This suggests that the medium of communication may matter more than the content of the witness statement. That is definitely something we should be paying attention to.

III. HUMAN LIMITATIONS AS AN INVITATION

Neither arbitrators nor counsel should trust their ability to “read” witnesses, whether in virtual or face-to-face hearings. However, I don’t think this necessarily leads to a dim scenario. Instead, the realization of our human limitations could act as an invitation to sharpen critical thinking. With the assessment of witnesses’ facial expressions out of the way, arbitration practitioners could instead focus on what really matters: reviewing the documents, asking deeper questions and contrasting the testimonies with the evidence.

Furthermore, our inability to “read” witnesses serves as a warning of what is to come, as other professions are already relying on artificial intelligence to assess their users. This could be dangerous. Affective computing currently relies on the same prototypical and simplistic view of emotions as humans do – e.g., assuming that a scowl necessarily shows that a person is angry. As Feldman Barrett explains, people scowl less than 30% of the time when they are angry.[12] Scowls are only one among many expressions of anger.[13] Do we want to use algorithms that are accurate only 30 percent of the time?

Finally, the recognition of our human limitations places a stronger accent on the need to consciously design what the arbitration of the future will look like. Even if affective computing worked correctly, is it something that we really want to apply to witnesses? Will that development really be centered around the concerns and needs of arbitration users, or will it be just an added perk for attorneys to brag about?

Whatever our future might be, we will be better off with a data-driven approach, discussing scientific research in a deliberative manner and leaving fear of the unknown aside. After all, the infernal machines are still here and we haven’t lost our vocal cords yet.

[1] Lawrence Lessig, TED Talk: Laws that choke creativity (March 2007) (emphasis added) (transcript available at https://www.ted.com/talks/lawrence_lessig_laws_that_choke_creativity/transcript).

[2] Lisa Feldman Barrett et al., Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements, 20 Psychol. Sci. in the Pub. Int. 1 (2019).

[3] Verónica Arroyo & Daniel Leufer, Facial recognition on trial: emotion and gender “detection” under scrutiny in a court case in Brazil, Access Now (June 29, 2020, 4:20 PM), https://www.accessnow.org/facial-recognition-on-trial-emotion-and-gender-detection-under-scrutiny-in-a-court-case-in-brazil/.

[4] Michael G. Aamodt & Heather Custer, Who can best catch a liar? A meta-analysis of individual differences in detecting deception, 15 The Forensic Examiner 6, 8 tbl. 2 (2006).

[5] Id. at 9.

[6] Sophie Nappert & Mihaela Apostol, Healthy Virtual Hearings, Kluwer Arb. Blog (July 17, 2020), http://arbitrationblog.kluwerarbitration.com/2020/07/17/healthy-virtual-hearings/?doing_wp_cron=1595013438.0641150474548339843750.

[7] Robin Davis et al., Research on Videoconferencing at Post-Arraignment Release Hearings Phase I Final Report (National Institute of Justice Contract No. GS-23F-8182H), https://www.ncjrs.gov/App/Publications/abstract.aspx?ID=271040.

[8] Susan Rutberg, Conversational Cross-Examination, 29 Am. J. of Trial Advoc. 353, 367 (2005).

[9] Sara Landström et al., Witnesses Appearing Live Versus on Video: Effects on Observers’ Perception, Veracity Assessments and Memory, 19 Applied Cognitive Psychol. 913 (2005).

[10] Id. at 922, tbl. 1.

[11] G S Goodman et al., Face-to-face confrontation: effects of closed-circuit technology on children’s eyewitness testimony and jurors’ decisions, 22 Law and Human Behavior 165, 169 (1998).

[12] James Vincent, AI ‘Emotion Recognition’ Can’t Be Trusted, The Verge (July 25, 2019), https://www.theverge.com/2019/7/25/8929793/emotion-recognition-analysis-ai-machine-learning-facial-expression-review.

[13] Id.

*Global Visiting Lawyer at Covington & Burling. The author would like to thank Sophie Nappert for her feedback and Hon. Judge Rakoff for the lessons he passed on at the “Science & Courts” seminar at Columbia Law School.
The views expressed in this article are my own and do not reflect the position of Covington & Burling or its clients. Furthermore, this article should not be construed as an assertion that virtual hearings or virtual cross-examination are inconsistent with due process or affect the integrity of the proceedings.