Faegre Drinker Biddle & Reath LLP, a Delaware limited liability partnership | This website contains attorney advertising.
May 16, 2025

Navigating AI in International Arbitration: Key Insights and Guidelines

Embracing AI With Caution

At a Glance

  • Parties must take care to understand how a given AI tool safeguards confidentiality and privacy, whether it has been properly vetted, and whether it operates within closed or open systems. 
  • As AI tools become more sophisticated and widely used, arbitrators and parties must carefully navigate emerging regulatory frameworks. The European Union’s AI Act, set to take full effect in 2026, is a notable development.
  • Transparency is a central theme in the Ciarb Guideline. Arbitrators are encouraged to consult with the parties before incorporating AI into the process. If there is any disagreement over the use of AI, arbitrators should refrain from using specific AI tools until consensus is reached.
  • The Ciarb Guideline also recommends that parties disclose their use of AI, particularly when it may affect evidence or procedural decisions. Failure to disclose the use of AI could lead to sanctions, including adverse inferences, as nondisclosure could undermine the integrity of the arbitration and the enforceability of the award.

The Growing Use of AI in International Arbitration

The use of artificial intelligence (AI) in international arbitration is growing rapidly, offering both significant opportunities and complex challenges. AI tools are now assisting in tasks such as legal research, case analysis, document review, and even drafting procedural orders, thereby improving the efficiency and accuracy of the arbitration process. However, these advancements also raise important concerns regarding transparency, fairness and ethics.

Benefits of AI Use in Arbitration

AI brings clear benefits to arbitration. It enhances efficiency by reducing the time spent on routine tasks like legal research and document review, allowing parties to focus on more substantive matters. Additionally, AI can support decision-making by analysing large datasets, detecting patterns and offering insights that might take human researchers significantly longer to uncover. While some of these advantages are still emerging in practice, AI also holds promise for promoting greater equality between parties. For example, financially constrained parties can now access advanced AI tools at reasonable cost, generating work products that until recently would have required many hours of associates' time, and thereby helping to level the playing field against more sophisticated opponents.

Risks and Challenges of AI Use in Arbitration

Despite these advantages, the use of AI in arbitration introduces several significant risks. Chief among them is confidentiality, a cornerstone of arbitration. Third-party AI tools may expose sensitive data to misuse or unauthorised access, particularly if parties input confidential material into platforms with unclear security measures and opaque data-handling practices. AI systems often involve multiple layers of applications and services, making it difficult to determine the provenance of the input data, how the AI model is trained, where data is stored, how it is used, whether it has adequate security, and whether it is capable of being audited. As such, parties must take care to understand how a given AI tool safeguards confidentiality and privacy, whether it has been properly vetted, and whether it operates within closed or open systems.

Another major concern is bias in its various forms. AI tools may be trained on skewed or incomplete data that does not fully represent the body of previous decisions (data bias). There are also risks of bias in the decision-making algorithms themselves (algorithmic bias), which may inadvertently favour certain groups or give undue weight to certain fact patterns. Over time, these biases can become self-reinforcing (feedback-loop bias). The "black box" nature of many AI systems can make it difficult to trace how their outputs are generated, which complicates arbitrators' efforts to assess the reliability of such tools and may undermine their obligation to justify decisions transparently. Closely linked to this is the risk of hallucinations, where AI generates seemingly authoritative but entirely fabricated content, such as nonexistent case citations or misquoted legal texts. Recent high-profile examples have shown how reliance on such output can lead to serious professional consequences.

These practical risks are intertwined with broader legal and ethical obligations. As AI tools become more sophisticated and widely used, arbitrators and parties must carefully navigate emerging regulatory frameworks. The European Union’s AI Act, set to take full effect in 2026, is a notable development. Although it does not apply within the United Kingdom post-Brexit, its extraterritorial scope means it may still affect UK-based practitioners. The Act applies to any organisation, regardless of location, that places AI systems on the EU market or whose systems’ outputs are used within the EU — potentially including AI tools employed in arbitration for tasks such as legal research, fact analysis or the application of law.

While the UK currently lacks equivalent horizontal AI legislation, it continues to explore sector-specific guidance and ethical frameworks. In this context, UK-based arbitrators and parties must remain alert to international developments, particularly in arbitrations with a nexus to the EU or involving EU-based parties.

Moreover, if AI plays a substantial role in the arbitral process, particularly in jurisdictions where regulation around AI is still evolving or restrictive, it may introduce uncertainty. Lawyers must ensure that they harness AI in line with their professional obligations, while parties must ensure that their use of AI tools aligns with mandatory rules, applicable laws and institutional requirements if they are to avoid jeopardising the validity or enforceability of the resulting award.

Ciarb Guideline on AI Use in Arbitration (2025)

Aware of the growing need to manage the risks posed by AI in dispute resolution, the Chartered Institute of Arbitrators (Ciarb) has recently published detailed guidance on its use in arbitration. The Ciarb Guideline provides a practical framework for the ethical and effective deployment of AI tools in arbitral proceedings, addressing core principles such as party autonomy, the responsibilities of arbitrators, transparency and confidentiality.

Ciarb’s approach can be compared to other recent guidance in the field. In April 2024, the Silicon Valley Arbitration and Mediation Center (SVAMC) released its Guidelines on the Use of Artificial Intelligence in Arbitration. These guidelines offer a principle-based framework for integrating AI tools into arbitration processes, emphasising responsible innovation and maintaining human oversight. Similarly, in October 2024, the Stockholm Chamber of Commerce (SCC) issued a Guide to the Use of Artificial Intelligence in Cases Administered Under the SCC Rules. This guide outlines best practices for incorporating AI into arbitration, focusing on maintaining confidentiality, ensuring quality and integrity, and preventing the delegation of decision-making to AI tools.

In March 2025, the American Arbitration Association’s International Centre for Dispute Resolution (ICDR) published guidance on arbitrators’ use of AI tools. This guidance encourages arbitrators to embrace AI technology while adhering to professional obligations, emphasising the importance of accuracy, fairness, independent decision-making and transparency. Arbitrators are advised to critically evaluate AI outputs, maintain control over decision-making and disclose the use of AI tools when it materially impacts the arbitration process. 

Collectively, these guidelines reflect a growing international consensus on the need for cautious and transparent integration of AI in arbitration, tailored to the unique contexts and user-bases of each institution.

Party Autonomy and Arbitrators’ Roles

The Ciarb Guideline underscores the importance of party autonomy in arbitration, affirming that parties are free to decide whether and how AI tools should be used in their proceedings. Arbitrators are encouraged to respect and uphold these choices, ensuring that any use of AI by the parties aligns with their agreement and complies with applicable laws, regulations and institutional rules. 

The Ciarb Guideline provides practical tools to help parties and arbitrators give effect to these principles. Appendix A includes a template “Agreement on the Use of AI in Arbitration,” which parties may choose to adopt or adapt. Appendix B offers a model “Procedural Order on the Use of AI in Arbitration,” which can be incorporated into the procedural framework of the case. These documents are intended to facilitate clarity and consensus from the outset, reducing the risk of later disputes over how AI tools have been employed.

While this section of the Ciarb Guideline focuses on the parties’ use of AI, it also highlights the arbitrator’s role in managing that use within the procedural framework of the arbitration. For example, arbitrators are advised to ascertain early on — typically upon receiving the request for arbitration — whether and (if so) how the parties have addressed AI in their arbitration agreement. Where the agreement is silent or unclear, arbitrators should invite the parties to express their views, usually during the first case-management conference.

Arbitrators remain ultimately responsible for maintaining the integrity and fairness of arbitral proceedings. While parties enjoy significant autonomy in shaping the process, this autonomy does not extend to the use of AI in ways that could undermine procedural fairness — for example, by replacing human judgement in decision-making.

Where arbitrators intend to rely on AI-generated information, particularly in evaluating party submissions or legal arguments, they are advised to independently verify that information and consult the parties beforehand. This approach not only upholds due process but also reinforces transparency and trust in the arbitral process. In doing so, arbitrators can appropriately balance party autonomy with their own duty to ensure a fair, legally sound outcome.

Transparency and Disclosure of AI Use

Transparency is a central theme in the Ciarb Guideline. Arbitrators are encouraged to disclose their use of AI and consult with the parties before incorporating AI into the process. If there is any disagreement over the use of AI, arbitrators should refrain from using specific AI tools until consensus is reached. The Ciarb Guideline also recommends that parties disclose their use of AI, particularly when it may affect evidence or procedural decisions. Failure to disclose the use of AI could lead to sanctions, including adverse inferences, as nondisclosure could undermine the integrity of the arbitration and the enforceability of the award.

Conclusion: Embracing AI With Caution

AI presents an exciting opportunity to enhance arbitration proceedings, but it must be managed responsibly. The Ciarb Guideline, alongside guidance from the SVAMC, the SCC and the ICDR, offers a common-sense framework for navigating the complexities of AI integration. These guidelines emphasise responsible innovation, maintaining human oversight, and ensuring the fairness, transparency and integrity of the arbitration process. Parties should think carefully about incorporating these guidelines into their arbitral procedures. By adopting them by reference and maintaining a clear framework for AI use, arbitrators and parties can ensure that AI contributes positively to the arbitration process, improving both efficiency and fairness without eroding the human element that is at the heart of arbitration.

The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.
