Author: Rana Sajjad*
The problem of AI control has been described as the dispassionate, almost inhumane drive to achieve the objective embedded in an algorithm. A self-driving car, for instance, does not merely need to get from point A to point B. Along the way, it needs to stop at red lights and exercise caution to ensure it does not hit other cars or pedestrians. If its algorithm does not include a set of instructions to ensure safety, it will run over anything in its path. In essence, nothing can be left to chance, and the machine’s discretion, if any, will only be exercised in a very machine-like manner – that is, devoid of caution, nuance, context, and, of course, empathy. A tremendous responsibility therefore lies with those building and programming these machines – the coders writing the algorithm, or those who hired them – to ensure not only that the algorithm’s objective is achieved but that it is achieved in a way that does not imperil other parts of the system within which the machine operates. By the same token, the use of AI by lawyers in general, and by the international arbitration community in particular, must focus not just on improving outcomes but also on how those outcomes are achieved. The means, therefore, are as important as the ends. As the adage goes, justice must not only be done; it must also be seen to be done. So, what can the arbitration community do not only to harness the power and promise of AI but also to mitigate its potential pitfalls, so that any AI-enabled arbitration system is perceived as fair and just?
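Before turning to that question, the control point above can be made concrete with a minimal sketch. The few lines of Python below are purely illustrative – the actions, numbers, and weights are invented for this example and drawn from no real self-driving system. They show only that an algorithm optimizes exactly what its objective encodes, so safety figures into its choices only if safety is explicitly written into that objective.

    # A minimal, hypothetical sketch: an algorithm does exactly what its
    # objective rewards, nothing more. All names and numbers are invented.

    def choose_action(actions, objective):
        """Pick whichever action scores best under the given objective."""
        return max(actions, key=objective)

    # Each candidate action: (label, seconds saved, safety risk it creates)
    actions = [
        ("run the red light", 30, 0.9),   # fast but dangerous
        ("stop, then proceed", 0, 0.0),   # slower but safe
    ]

    # Objective 1: reach point B as fast as possible; safety never appears.
    speed_only = lambda a: a[1]

    # Objective 2: the same goal, with safety explicitly encoded as a penalty.
    speed_and_safety = lambda a: a[1] - 1000 * a[2]

    print(choose_action(actions, speed_only)[0])        # "run the red light"
    print(choose_action(actions, speed_and_safety)[0])  # "stop, then proceed"

The two objectives differ by a single term, yet they produce opposite behavior, which is precisely why so much responsibility rests with whoever writes that term.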
Before we try to answer these questions, let us take a step back and consider why the world of arbitration needs AI in the first place. What is its value proposition? Is it because pretty much everyone under the sun is using it – perhaps a bit of FOMO (fear of missing out)? If so, what could we possibly miss out on if we do not ride this wave? Appearing as a staid, conservative community resistant to change? Or is AI really the panacea for some or all of the problems in the realm of arbitration? Or is it not just about fixing problems, but about doing things better – orders of magnitude better, quicker, and at a lower cost? If so, would this also entail fairer and more just decisions and procedures? After all, the fundamental feature of any dispute resolution method is how fair it is. On this basis, can AI help arbitration be seen as a system that dispenses justice in a fair and impartial manner? More than a perception, would it make the entire experience of arbitration feel fairer and more just for the parties to the dispute?
“Fair” and “just” are important and meaningful terms because any mechanism that lacks fairness cannot be considered a viable option for deciding disputes. You could have all the bells and whistles: a fine courtroom or conference room, a well-respected and accomplished Judge or Arbitrator, the most modern legal framework, and, of course, the most advanced technology. But if the entire process comes across as somewhat partial at best and completely one-sided at worst, it is an unworkable and unsustainable system. A fair and just resolution of a dispute is therefore of paramount importance.
Now let us turn back to the questions posed earlier about how AI can help make arbitration fairer and more just. First off, can or should we trust AI to settle our disputes? Would AI be completely neutral and impartial, free of the biases and prejudices that we humans are so often guilty of harboring? For one, AI-enabled systems would not have any emotions or feelings, which is ideal for a neutral and dispassionate analysis of the facts and evidence to arrive at a fair and just decision. Or not. Remember, AI is trained on data created by humans. With power comes responsibility; with AI come speed and accuracy, but no responsibility attributable to AI itself. The responsibility lies solely with those who wrote the code or those who hired them to write it. AI is then beholden to the algorithm whose instructions, encoded by humans, it diligently and tirelessly follows.

So, if we were to have an AI/bot Judge or AI/bot Arbitrator – a “Bot-rator,” for lack of a better term and for the cool ring to it – how impartial, fair, and just would the artificial judicial mind being applied be considered? How satisfied would parties to the dispute be, not just with the decision but with how the AI-enabled system arrived at it? Are dissatisfaction and disappointment with AI procedures or outcomes more likely, or would the absence of the human element reassure parties of the impartiality and technical soundness of the decision? More broadly, would parties to a dispute trade thoughtful human deliberation for AI’s time- and cost-efficiency? Would they trade human judgment and discretion grounded in empathy and a sense of justice for AI’s dispassionate and presumably impartial analysis? While these questions are being wrestled with, it is imperative that we take into account the preferences and concerns of the parties to a dispute – the users who are the ultimate beneficiaries and vital stakeholders of the dispute resolution system. Being empathetic to them while deploying AI technology would help us determine how much they value empathy and, in turn, the human elements that go into deciding their disputes. That will then help us design a truly fair and just dispute resolution system that is workable and sustainable, regardless of whether disputes are decided by an Arbitrator, a “Bot-rator,” or a combination of both.
*Rana Sajjad is a dual-qualified lawyer (licensed in New York and Pakistan). He is the Managing Partner of Triage Law, a Lahore-based commercial and arbitration law firm, and the Founder & President of the Center for International Investment and Commercial Arbitration (CIICA), Pakistan’s first international arbitration center.