A Closer Look at the New SVAMC Guidelines for AI in International Arbitration


Author: Juan Perla*

On April 30, 2024, the Silicon Valley Arbitration and Mediation Center (SVAMC) published the first edition of its “Guidelines on the Use of Artificial Intelligence in Arbitration” after months of public consultation.

The Guidelines recognize that the emergence of generative artificial intelligence (AI) presents opportunities and challenges for arbitration both domestically and internationally. Although AI promises to increase efficiency and precision by automating certain tasks such as research, document review, translation, and drafting, it also raises concerns about accuracy, transparency, bias, confidentiality, and due process.

The Guidelines are an important contribution to the conversation around these issues (see, e.g., here and here). Legal rules and ethical norms governing the use of AI in dispute resolution are still evolving across jurisdictions.

The European Union’s AI Act classifies certain AI systems as “high-risk” when used in the administration of justice, such as researching and interpreting the law and applying it to a concrete set of facts. The EU AI Act does not refer explicitly to arbitration, but it does refer to AI systems that are intended to be used by a “judicial authority” and “in alternative dispute resolution.”

Several courts have criticized or sanctioned attorneys for improperly relying on generative AI to prepare legal submissions. For example, U.S. courts have imposed monetary sanctions on attorneys for citing fake, AI-generated cases in court filings (see, e.g., here and here). In another case, a court of appeals referred an attorney to a grievance panel for potential disciplinary action. Similar incidents have occurred in Canada and the United Kingdom (see, e.g., here and here).

In response, some U.S. courts have issued standing orders or proposed local rules requiring certifications that generative AI has not been used in preparing court filings or that a human has verified any AI-generated content. Bar associations in California, New York, and Florida have also provided practical guidance to attorneys grappling with how to use AI ethically and responsibly.

Although there are still no publicly reported incidents involving the use of AI in arbitrations, the lack of public reporting may be due more to the confidential nature of arbitral proceedings than to any greater reluctance by arbitration practitioners to use AI in preparing their submissions. Indeed, scholars and practitioners have been debating the potential benefits and pitfalls of using AI in arbitration for some time without any clear guidance.[1]

Enter the SVAMC. The new Guidelines take into account unique issues that may arise in multi-jurisdictional proceedings in which different and potentially conflicting laws, ethical norms, and professional obligations may apply.

As used in the Guidelines, AI “refers to computer systems that perform tasks commonly associated with human cognition, such as understanding natural language, recognising complex semantic patterns, and generating human-like outputs.” Although this definition seems to focus primarily on generative AI, the Guidelines may also apply to other non-generative forms of AI such as “evaluative or discriminative AI,” which are designed primarily to provide recommendations or classify information.

The Guidelines aim to establish principles that can help practitioners harness the benefits of AI while mitigating any potential risks to the integrity and fairness of the arbitral proceedings. The principles are classified into three categories: (i) guidelines for all participants, (ii) guidelines for parties and their representatives, and (iii) guidelines for arbitrators. Each guideline is accompanied by additional commentary.

Guidelines for All Participants

Guideline 1 encourages all participants to make reasonable efforts to understand how specific AI tools work, what data they were trained on, their tendencies to invent information or “hallucinate,” and their potential for bias.

Awareness of these issues is crucial for determining when and how much to rely on various AI tools. For instance, the commentary to Guideline 1 explicitly calls out concerns about using AI blindly to select arbitrators, counsel, or experts, as doing so may inadvertently perpetuate historical biases against women and other underrepresented groups.

Guidelines 2 and 3 emphasize confidentiality and disclosure considerations. Participants are advised to exercise caution and verify data protection protocols when deciding whether to submit privileged or confidential information to third-party AI tools.

This principle echoes a similar rule adopted by a judge on the U.S. Court of International Trade. The rule requires attorneys to certify that any use of generative AI “has not resulted in the disclosure of any confidential or business proprietary information to any unauthorized party.”

Like the standing orders issued by some U.S. courts, the SVAMC Guidelines address whether and how counsel should disclose their use of AI in arbitral proceedings.

However, unlike existing court orders, which seem to apply only to attorneys and say nothing about a court’s obligation to disclose any use of AI by judges and their clerks, Guideline 3 goes a step further. It contemplates disclosure not only by counsel but also by all participants involved, including experts and arbitrators.

Guideline 3 suggests that disclosure requirements may vary based on the AI tool used and its potential impact on the arbitration. Still, by expressly noting that it is not intended to impose a default disclosure obligation or to create a presumption for or against disclosure, Guideline 3 departs from the mandatory disclosure requirements currently in force in some courts.

The discretionary nature of Guideline 3 may inadvertently open the door to asymmetrical disclosures and tactical disadvantages for some participants, especially in international arbitration, where some participants may be subject to stricter ethical and legal obligations in their home jurisdictions than their opponents. This approach may prove problematic over time and may need to be revisited.

Guidelines for Parties and Their Representatives

For parties and their representatives, Guideline 4 draws attention to applicable ethical rules and professional standards, such as the duty of competence and diligence. These principles mirror similar duties found in different codes of ethics and professional responsibility, such as the ABA Model Rules of Professional Conduct (see, e.g., here and here).

Guideline 4 and its commentary make clear that, despite AI’s human-like ability to synthesize information, humans bear ultimate responsibility for any errors or hallucinations in outputs generated by or with the assistance of AI. As with any work performed by human subordinates, attorneys must always exercise appropriate due diligence in reviewing AI outputs for accuracy before submitting them to the tribunal.

While it should go without saying, Guideline 5 emphasizes that parties may not use AI to falsify evidence or otherwise mislead the tribunal. The commentary also cautions parties and their counsel to be wary of AI’s ability to generate highly convincing fake data.

Not only is this a concern with respect to legal research and drafting, as demonstrated by the various incidents involving fake, AI-generated legal citations, but also with respect to the impact that “deep fakes” may have on how evidence is received in arbitral proceedings.

This is particularly concerning in international arbitration because the standards for admitting and relying on evidence in arbitral proceedings can often be less rigorous than in judicial proceedings.

Guidelines for Arbitrators

Arbitrators are subject to their own set of guidelines. Under Guideline 6, arbitrators may not delegate any part of their mandate or decision-making functions to an AI system, no matter how advanced.

While arbitrators may use AI to streamline tasks such as summarizing facts, analyzing arguments, and drafting the award, they must thoroughly vet the AI system’s outputs before adopting them as their own. Even if an AI tool cites sources, arbitrators are instructed to verify their veracity. This is good advice for all participants, including counsel and experts. Ultimately, arbitrators must always apply their independent judgment.

Guideline 7 prohibits arbitrators from relying on any “AI-generated outputs outside the record” without first disclosing those outputs to the parties and giving them an opportunity to comment.

These principles are consistent with existing ethical rules and standards that generally apply to arbitrators, such as independence, impartiality, competence, and the duty to render an enforceable award. Indeed, the improper use of AI by any participant, but in particular by arbitrators, could implicate grounds for challenging an award.

Consider that under French law, only a “natural person having full capacity to exercise their rights” may act as an arbitrator. Could this rule have anything to say about whether parties or arbitrators may delegate any decision-making functions to an AI model? In other contexts, courts have determined that AI models are not natural persons, such as for purposes of registering patents for inventions created by or with the assistance of AI models. Thus, if an arbitrator improperly delegates adjudicatory functions to an AI model, that could provide a basis for challenging the award.

The New York Convention, which applies in most jurisdictions, provides various grounds for defending against enforcement of an award. For instance, a court may refuse to enforce an award where “the arbitral procedure was not in accordance with the agreement of the parties, or . . . the law of the country where the arbitration took place.” It is plausible that an arbitrator’s use of AI without the parties’ consent or in violation of applicable law could give rise to a defense under this or other provisions.

Other Practical Guidance and Model Clause

The draft version of the Guidelines offered examples of compliant and non-compliant uses of AI that may shed some light on their application. Using AI for basic tasks such as legal research or detecting inconsistencies in arguments would likely be deemed compliant if a human verifies the accuracy of the outputs. But using AI to evaluate evidence or generate legal reasoning without human validation would violate the Guidelines.

The Guidelines propose a model clause that could be included in a procedural order or an arbitration agreement for purposes of adopting the Guidelines in specific disputes, provided that the parties and the tribunal all agree. Regardless, the Guidelines would not and could not supersede mandatory rules that may apply in different jurisdictions.

As different use cases emerge in international arbitration, more collaboration across jurisdictions and sectors will be needed to refine and adapt these Guidelines. One possible area to explore is the use of AI for expert evidence or even as a substitute for experts. This is not a far-fetched idea.

In New York, a law firm attempted to use ChatGPT as a “cross-check” to justify the reasonableness of its hourly rates in an application for attorneys’ fees. Although the court rejected the use of ChatGPT in that case, perhaps the outcome could have been different if the lawyers had used an AI tool specifically trained for that purpose. Similarly, in a case involving a patent for baseball bats, a Texas court rejected a litigant’s attempt to rely on ChatGPT as a substitute for “expert testimony” in support of its definition of the term “foam.”

Arbitration counsel and experts may wish to use AI in similar ways, for instance, to argue for or against legal fees in a cost submission or to offer objective views on technical aspects of a construction project in an expert report. Arbitrators may wish to submit competing expert reports to an AI model to identify areas of agreement and disagreement, or even evaluate the quality of the expert opinions as measured against industry standards.

Should those types of uses be categorically prohibited, or should it depend on the specific AI tool and how it was trained? These are questions that may require additional consideration and may benefit from the input of participants across disciplines.

In sum, while the new SVAMC Guidelines are still in their infancy, they represent a robust first step towards establishing a human-centric framework for using AI ethically and responsibly in international arbitration.

 


*Juan Perla is a partner in the New York office of Curtis, Mallet-Prevost, Colt & Mosle LLP, focusing on international disputes and appellate litigation. He is also a leader in the firm’s emerging AI practice.

 

[1] See, e.g., Derick H. Lindquist and Ylli Dautaj, AI in International Arbitration: Need for the Human Touch, J. Disp. Resol. (2021), available at https://scholarship.law.missouri.edu/jdr/vol2021/iss1/6/; Kathleen Paisley and Edna Sussman, Artificial Intelligence Challenges and Opportunities for International Arbitration, New York Dispute Resolution Lawyer (2018).