The Journal of Things We Like (Lots)
Lee-ford Tritt, The Use of AI-Based Technologies in Arbitrating Trust Disputes, 58 Wake Forest L. Rev. 1203 (2023).

Would you rather have government decisions made by artificial intelligence or by a presidential administration that you loathe? The concept of the villainous AI overlord became part of the zeitgeist with the Terminator movie franchise, but the reality is that the greatest threat to the future of humanity may be humanity itself. AI decision-making has demonstrated remarkable reliability and efficiency, often outperforming human decision-making in various domains. The ability of AI to quickly process immense amounts of data, identify patterns, and make decisions based on objective analysis minimizes the impact of the biases and emotions that can cloud human judgment. As AI technology continues to progress, there is a growing possibility that AI may eventually displace humans in governing and decision-making positions. It is estimated that AI may soon replace 300 million jobs, or 9.1% of jobs worldwide. Jobs with a higher level of exposure to AI tend to be in higher-paying fields, where education and critical reasoning skills are required. Prof. Lee-ford Tritt’s article, The Use of AI-Based Technologies in Arbitrating Trust Disputes, considers whether it is appropriate or feasible to supplant or support human decision-making with AI technology in the context of trust litigation.

This is less science fiction and more science fact, as China has already started to use AI-based courts to resolve legal disputes. The central question undergirding Prof. Tritt’s examination is the degree to which the experience of being human should control or guide dispute resolution. AI has several possible applications to arbitration generally. It may assist arbitrators in the performance of their job, with tasks such as case management and fact gathering. AI may also assist with decision-making itself. One study demonstrated that artificial intelligence is able to predict the votes of individual Supreme Court justices with more than 70% accuracy, which far exceeds the reliability of human predictions. AI is less accurate with predictions involving factually similar cases, which may mean either that AI is less likely to identify legal nuances or that human factfinders are inconsistent in their application of the law. If the latter, we may find that AI decision-making is more equitable because of the precision with which the technology applies the law. We may also find that running our decisions through AI to check their fairness is a useful and supportive tool.

Professor Tritt’s article is the first to consider the role that AI may play in arbitration within a specific area of law. Trust disputes may be external (disputes between the trustee and creditors) or internal (disputes among settlors, beneficiaries, and trustees), with the latter arising far more frequently. Internal trust disputes involve either the administration or management of the trust, or the validity of the trust itself. Trust administration issues may involve questions of fiduciary breach (care, loyalty, or impartiality) or actions to modify or reform the trust. These disputes often involve “pre-trial motions, a cascade of witnesses, and a mountain of documentary evidence,” as well as separately represented parties. As with any dispute involving family and money, emotions run high. Prof. Tritt notes that AI may not be ideally suited to resolving these types of claims.

The article stands out by focusing on the compatibility of AI-based arbitration with trust disputes, offering insights that could inform future developments in this area of law. By highlighting the potential challenges and opportunities of integrating AI into trust dispute resolution, the article makes a unique and valuable contribution to the broader discourse on AI in legal decision-making. But perhaps the most important contribution made by the article is to frame a nuanced discussion of supportive, substitutive, and disruptive uses of AI. While the idea of AI governing society and assuming decision-making roles raises ethical concerns and challenges related to accountability and transparency, the potential benefits of leveraging AI merit careful consideration and exploration. Professor Tritt’s article is the start of an important conversation.

Cite as: Victoria J. Haneman, Artificial Intelligence as Arbitrator, JOTWELL (February 24, 2025) (reviewing Lee-ford Tritt, The Use of AI-Based Technologies in Arbitrating Trust Disputes, 58 Wake Forest L. Rev. 1203 (2023)), https://trustest.jotwell.com/artificial-intelligence-as-arbitrator/.