For mediators, arbitrators, and other third-party neutrals, the central question of 2026 is no longer whether artificial intelligence will appear in dispute resolution. It already has. The question is how to use it ethically — and how to recognize when a colleague, a party, or a platform has crossed a line.

The most useful guidance now in circulation is the joint document published in 2026 by the National Center for Technology and Dispute Resolution (NCTDR) and the International Council for Online Dispute Resolution (ICODR): Guidance for Third Parties Using Artificial Intelligence in Dispute Resolution. The Guidance applies the existing Online Dispute Resolution Standards — themselves adopted as ISO 32122 by the International Organization for Standardization in March 2025 — to the new reality of AI-assisted practice.

This post walks through the framework and points to where the immediate ethical pressure sits.

Why this Guidance matters

There are three reasons every certified mediator should read this Guidance now:

- The baseline is now international. The ODR Standards the Guidance applies were adopted as ISO 32122 in March 2025, so these are no longer aspirational professional norms.
- Responsibility stays human. The Guidance makes the practitioner answerable for the lawfulness and accuracy of anything built on AI output.
- It names where the pressure sits. Confidentiality and transparency are flagged as demanding immediate attention, and both are examined below.

The nine ODR Standards — and the AI test for each

The ODR Standards say that, to operate ethically, a process must be Accessible, Accountable, Competent, Confidential, Equal, Fair and Impartial, Legal, Secure, and Transparent. Each of those words now carries an AI-specific obligation.

Accessible

AI tools should accommodate users with disabilities, support the languages parties actually speak, work in the parties' jurisdiction, and adapt to cultural nuance. A mediator using a tool that fails on any of those dimensions is, by definition, restricting access.

Accountable

Human oversight is required — for administrative support, analysis, recommendations, and decisions. Practitioners must ensure the origin of AI-generated documents and the path to AI-generated outcomes are auditable, and must clarify the proportion of decision-making performed by humans vs. AI. Parties should have access to those audits. The Guidance also flags the ecological cost of heavy AI use as a legitimate factor in tool selection.
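
To make the auditability point concrete, here is a minimal sketch of what a record of AI involvement might look like in practice. The record structure, field names, and file format are illustrative assumptions of mine; the Guidance requires auditability and party access to audits, not this particular mechanism.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIUseRecord:
    """One auditable entry for a single AI-assisted step.
    Field names are illustrative, not taken from the Guidance."""
    timestamp: str
    tool: str                # which AI system was used
    purpose: str             # e.g. "draft a neutral summary of session two"
    inputs_described: str    # what was provided, described without party data
    output_disposition: str  # accepted as-is, edited, or discarded
    human_share: float       # rough proportion of the step done by a human, 0.0-1.0
    reviewer: str            # the neutral responsible for the step

def log_ai_use(path: str, record: AIUseRecord) -> None:
    """Append the record to a JSON-lines audit file parties may inspect."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_ai_use("matter-0042-ai-audit.jsonl", AIUseRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    tool="local summarization model",
    purpose="draft a neutral summary of session two",
    inputs_described="mediator's own notes; no verbatim party communications",
    output_disposition="edited by mediator before circulation",
    human_share=0.7,
    reviewer="J. Doe, mediator",
))
```

An append-only log of this kind also answers the proportion question directly: the human-versus-AI split is recorded per step rather than reconstructed from memory later.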

Competent

Competence now includes both dispute-resolution expertise and technical-and-ethical expertise in the AI being used. The mediator must be able to explain the basic elements of the AI to the parties — its benefits, its limitations, its biases, its cultural and linguistic implications, and its legal exposure. The human user retains responsibility for the lawfulness and accuracy of any decision based on AI output.

Confidential

This is the area that demands the most immediate attention. AI platforms must safeguard party data, articulate clear policies on who can view data, identify the purposes of any AI data use, and define how data is destroyed or modified. AI-related breaches must be promptly disclosed with remedial steps. Parties must be told what role they themselves play in AI usage and confidentiality. Inputting party communications into a public general-purpose model can be a confidentiality breach in itself.
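
One way to operationalize those policies is a pre-flight check that refuses to route party material to an AI tool outside the agreed terms. The policy fields and the guard function below are illustrative assumptions, not requirements from the Guidance; the point is that the who-can-view, approved-purpose, and retention questions should be answered before any text leaves the mediator's hands.

```python
# A minimal pre-flight check before any text is sent to an AI tool.
# The policy structure is an assumption for illustration; the Guidance
# requires the policies themselves, not this particular mechanism.

DATA_POLICY = {
    "viewers": ["mediator", "co-mediator"],               # who may see party data
    "ai_purposes": ["scheduling", "document formatting"], # approved AI uses
    "retention_days": 30,                                 # when data is destroyed
    "external_models_allowed": False,                     # no public general-purpose models
}

def may_send_to_ai(purpose: str, tool_is_external: bool, policy=DATA_POLICY) -> bool:
    """Return True only if the use is an approved purpose on an approved tool."""
    if tool_is_external and not policy["external_models_allowed"]:
        return False
    return purpose in policy["ai_purposes"]

# Party communications to a public model: blocked, consistent with the
# breach risk described above.
assert not may_send_to_ai("summarize party emails", tool_is_external=True)
assert may_send_to_ai("scheduling", tool_is_external=False)
```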

Equal

AI must not produce or amplify bias. Practitioners are responsible for human oversight that detects and eliminates bias in process and outcome, must understand the data sources and reliability of the AI tool, and must use that information when selecting tools. ODR systems should be evaluated periodically to ensure no participant is gaining a technological or informational advantage from the use of AI.

Fair and Impartial

Tool selection must include awareness of the rates at which the tool produces or reproduces bias, and of its rates of inaccuracy. AI-driven processes need human oversight to identify and eliminate bias against any party. Conflicts of interest arising from AI algorithms or their developers must be disclosed as soon as they are discovered.

Legal

AI systems must comply with all relevant data-privacy, AI-governance, and dispute-resolution laws. Practitioners must keep current as those laws evolve. AI data sets and outputs should be geographically contextualized to ensure analyses, resolutions, or recommendations are appropriate to the parties' jurisdictions — a recommendation calibrated to California law has no business landing in a Florida mediation.

Secure

AI platforms must maintain strong security to protect data integrity and privacy, identify parties securely, disclose any AI-related breach along with the corrective actions taken, and make a data-protection plan accessible to all parties.

Transparent

The standard the Guidance flags as demanding immediate attention. Mediators must disclose how and when AI influences the process, what data sources the AI used, the magnitude of AI's influence on option generation and decision-making, the level and type of human oversight, the reasons or motives behind AI-influenced decisions, and compliance with applicable AI legislation and standards. And the disclosure must be in accessible, plain language — not buried in a click-through.
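
As a sketch of what accessible, plain-language disclosure could look like, the function below assembles the required items into a short notice. The wording and parameter names are illustrative assumptions; the Guidance mandates the disclosures, not this format.

```python
def plain_language_disclosure(tool: str, uses: list[str], data_sources: str,
                              influence: str, oversight: str) -> str:
    """Render the Transparency disclosures as a short, plain-language notice.
    The structure and wording are illustrative, not mandated by the Guidance."""
    return (
        f"In this mediation I use {tool} for: {', '.join(uses)}.\n"
        f"The tool draws on: {data_sources}.\n"
        f"Its influence on options and decisions: {influence}.\n"
        f"Human oversight: {oversight}.\n"
        "You may ask me about any of this at any time."
    )

print(plain_language_disclosure(
    tool="a commercial drafting assistant",
    uses=["formatting settlement drafts", "checking cross-references"],
    data_sources="only documents the parties have already exchanged",
    influence="none on substantive options; I generate and evaluate all proposals",
    oversight="I review and edit every AI-produced line before anyone sees it",
))
```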

The mediator's AI checklist

A short pre-engagement checklist drawn from the Guidance:

- Can I explain this tool's benefits, limitations, biases, and legal exposure to the parties in plain language? (Competent)
- Do I know what data sources the tool relies on and how reliable they are? (Equal)
- Where does party data go, who can view it, for what purposes, and when is it destroyed? (Confidential)
- Does the tool work in the parties' languages and jurisdiction, and does it accommodate disability? (Accessible, Legal)
- Can I produce an auditable record of what the AI did and what I did? (Accountable)
- Have I disclosed, in accessible plain language, how and when AI will influence the process? (Transparent)
- Do I have a plan for promptly disclosing and remediating an AI-related breach? (Secure)
- Have I checked for conflicts of interest arising from the tool or its developer? (Fair and Impartial)

What this Guidance does not do

The Guidance does not tell mediators which AI platforms to use. It does not provide a safe-harbor list. It does not create a regulatory standard backed by an enforcement body. What it does is articulate the ethical framework — the same framework the profession has had for years — applied with the seriousness AI now requires.

The mediators who read this Guidance carefully and build it into their pre-engagement practice will be the mediators trusted with the most sensitive matters. The mediators who do not will, at some point, be answering an uncomfortable question.

The full Guidance is published jointly by NCTDR and ICODR and is available at icodr.org/standards and odr.info.


If you are a mediator, panel chair, or party considering AI integration into a dispute-resolution process and want a candid review of where the ethical risk is sitting, request a private introduction or call 877-862-7188.