
How to Protect Yourself from AI Washing in Legal Tech

April 23, 2024

What is “AI washing,” and why should you be aware of it when you’re evaluating vendors?

Having AI capabilities is the hottest selling point in the tech world right now, and the legal tech segment is no exception. But it’s always good to be diligent about testing a software vendor’s AI claims before accepting them.

“AI washing” is a newly coined term describing the practice of companies mischaracterizing or exaggerating the AI capabilities of their products or services to capitalize on the intense AI hype that has gained force over the last few years. 

Providers in the CLM and legal tech space are, in our experience, scrupulous and straightforward about the capabilities of their products. Still, the existence of AI washing reminds us how important it is for a prospective tech adopter in any space to verify the capabilities of potential solutions before making an actual investment. 

The rapid growth of legal tech (and legal tech AI)

As this article from Law.com points out, there’s been an explosion of startups in legal tech. “We are seeing two to three times the amount of startups that we’ve seen,” the article quotes Zach Posner of The LegalTech Fund. “We’re seeing an influx of traditional technologists that are coming in to focus on legal that we’ve never seen before.”

In the CLM space, we’ve seen a rise in innovative generative AI use cases such as clause creation, surgical redlining, conversational AI, and agreement summarization. These use cases will reshape the way attorneys and their business counterparts handle contracts.

It’s an exciting time in the world of legal AI, but it’s important to be mindful of potential limitations of newcomers in our space and others.

What are AI washing abuses?

AI providers should bear in mind that regulators are paying attention to exaggerated claims about AI. The Federal Trade Commission, for instance, is sufficiently concerned about AI washing that it posted a warning about avoiding deceptive AI claims. Regulators have taken assertive action in some sectors already. One example is the $400,000 in combined penalties the SEC levied against two financial services firms charged with making misleading claims about their AI capabilities.

Here are some of the abuses that watchdogs and critics have observed from software providers making AI claims:

  • Fuzzy definitions: Companies tout “AI-powered” solutions but stay suspiciously vague about how the AI actually works, hoping to create the perception that they’re using cutting-edge technology. In reality, their tools often lean on simple rule-based systems rather than machine learning models (see the sketch after this list).
  • Exaggerated claims of predictive powers: Providers have been known to tout AI-driven predictive capabilities when their offerings actually rely on simple statistical analyses or predetermined rules.
  • Shallow conversations: Chatbots are sometimes sold as AI-driven conversational assistants when they lack the claimed ability to genuinely understand and respond to users.
  • New paint on old products: In some cases, old technology is rebranded as having been updated with AI when that’s simply not the case.
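
To make the “fuzzy definitions” problem concrete, here is a purely illustrative Python sketch (the rules and labels are invented) of the kind of logic that sometimes hides behind an “AI-powered” badge. Nothing in it learns from data; it is a fixed keyword lookup.

```python
import re

# Hand-written rule table: every "insight" is a hard-coded keyword match.
# There is no model, no training data, and nothing that learns or generalizes.
RISK_RULES = {
    r"\bindemnif(?:y|ication)\b": "indemnification clause",
    r"\bauto[- ]?renew(?:al|s)?\b": "auto-renewal clause",
    r"\bunlimited liability\b": "uncapped liability",
}

def flag_risks(contract_text: str) -> list[str]:
    """Return the labels whose patterns appear in the contract text."""
    text = contract_text.lower()
    return [label for pattern, label in RISK_RULES.items()
            if re.search(pattern, text)]

print(flag_risks("This Agreement shall auto-renew for successive one-year terms."))
# -> ['auto-renewal clause']  (useful, but pattern matching, not machine learning)
```

A vendor asked to explain its “AI” should be able to articulate how its approach differs from a lookup table like this one: what data the model was trained on, and how it handles language the rules never anticipated.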

The risks of not recognizing AI washing

As you might expect, there are multiple criticisms of AI washing: 

  • Misleading communication: It creates a false impression of ethical AI even though no concrete ethical theory, argument, or application is in place.
  • Trivialization of ethics: It’s argued that AI washing trivializes ethics, which may even result in “ethics bashing,” where ethics in AI is not taken seriously.
  • Exaggerated claims: Companies inflate the capabilities of their AI technology, which can mislead customers and investors.
  • Undermining trust: It can turn AI into a vacuous buzzword, injuring user and public trust in actual AI technology.
  • Impact on reputation: A customer who experiences some of the negative consequences of using an "AI-washed" product might see their business and reputation damaged.
  • Distraction from real value: As they focus on inflated claims, providers engaged in AI washing may fail to promote the pragmatic applications of AI that can actually solve specific issues.

Beyond these ethical or reputational concerns, there are real risks for organizations that adopt legal tech whose AI capabilities don’t deliver as promised: 

  • Inadequate or inaccurate insights: If an AI model outputs misinformation or inaccurate insights, this can misinform decision-makers and result in poor business outcomes or legal challenges.
  • Bias and discrimination: Poorly designed AI models can inject bias into decision-making processes, inadvertently discriminating against certain groups and potentially violating anti-discrimination statutes.
  • Poor reporting: A lesser AI legal tech solution may deliver inadequate reporting, hindering visibility into contract statuses and performance.
  • Data security dangers: A provider may not leverage enterprise-grade security to provide strong safeguards within its AI offering.
  • Hidden costs and labor: An AI software solution should save your team costs and labor, but an AI-washed one may create more of both as staffers find they need to spend a great deal of time reviewing and correcting its outputs.
  • Squandered investment: A legal tech solution may be no solution at all if its capabilities are marginal and it can’t be updated to improve its performance or customized to meet your specific needs. So it may have to be sunset far earlier than your CFO would like.

The CLM software market is expected to be worth over $8 billion by 2026.

How can you prevent AI washing?

It’s important to be cautious of potential AI washing in any software category, not just legal tech and CLM. Here are key measures you can take to protect yourself against it.

Evaluate your vendors’ talent

AI and machine learning have surged in popularity in the past few years, but the AI space is far from nascent. It’s critical to onboard a vendor with a team of subject matter experts who have deep experience in AI. A simple review of the provider’s team on LinkedIn can do wonders: look for extensive experience and relevant skills like algorithm development, machine learning, natural language processing, data engineering, neural networks and deep learning, and computer vision.

Ask the right questions

Choosing reliable, effective AI-powered solutions is how organizations mitigate these risks and ensure seamless, efficient, insight-driven contract management processes. Asking the right questions of would-be vendors during your software evaluation process is a subject we’ve tackled in this on-demand video.

What are just a few of the questions we cover?

  • Has the provider built a proprietary AI and large language model (LLM) specifically for contracts, or does it rely on third-party LLM providers like OpenAI?
  • Will its LLM let us build custom AI to extract specific contract data like dates, numbers, and text fields?
  • Does its AI require “human in the loop” review and validation during contract ingestion?
  • If the product analyzes contract data using AI, will the provider let us test its capabilities and fidelity by uploading thousands of contracts into a real environment to see firsthand how it works at scale?

Review your vendors’ Responsible AI policy

AI providers that are mindful of responsible AI practice should, by now, have a Responsible AI policy in place to inform smart and ethical development, with an emphasis on transparency, explainability, bias elimination, and sound privacy and data governance.

Be cautious of industry newcomers

It’s important to exercise caution when considering vendors who are new to the AI space. While new startups often bring exciting innovations, their products’ performance can vary, and there may be unforeseen vulnerabilities or limitations in how the AI trains on and processes data. Newcomers may also lack the time and resources to develop fully fleshed-out AI infrastructure, or adequate developer and support staffing to respond to urgent issues as they arise.

In particular, be sure to understand the benefits and limitations of “GPT wrappers”: providers that have built their GPT features and functionalities around licensed external LLMs. While this approach can provide significant value, generative AI tools that aren’t built on solid infrastructure in the vendor’s core platform are limited to discrete, standalone AI tasks.

Working with a “GPT wrapper” vendor also means that the vendor will often lack direct control over the safety and fidelity of generative AI outputs, as the simplified sketch below suggests.
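
For illustration only, here is roughly what the core of a “GPT wrapper” looks like: a thin layer that forwards your contract text to an external LLM provider. This sketch assumes the OpenAI Python SDK, and the prompt and function name are invented; the point is that the prompt is essentially the whole product, and safety and fidelity live on the other side of the API call.

```python
from openai import OpenAI  # the external LLM provider's SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_contract(contract_text: str) -> str:
    """Send the contract to an external LLM and return its summary.

    Note what is NOT here: no proprietary model, no contract-specific
    training, no retrieval over your own repository, and no control over
    how the upstream model behaves or changes between versions.
    """
    response = client.chat.completions.create(
        model="gpt-4o",  # capabilities are whatever the upstream model offers
        messages=[
            {"role": "system", "content": "You summarize legal contracts."},
            {"role": "user", "content": contract_text},
        ],
    )
    return response.choices[0].message.content
```

There is nothing inherently wrong with this architecture, but it helps explain the limitation: model updates, rate limits, and safety behavior are all controlled upstream, outside the vendor’s platform.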

Demand a “proof of concept”

Process automation, data validation, natural language processing (NLP), and predictive analysis are spotlighted features of AI CLM software solutions. But it’s crucial for would-be adopters to verify any claims about features and performance, ensuring the software really utilizes advanced AI technologies like machine learning (ML), NLP, and retrieval-augmented generation (RAG) rather than simple automation or rule-based systems.

If you’re evaluating contract management software, look for transparent information about the AI’s functionalities, demand demonstrations of the AI in action, and consider independent reviews or case studies that can validate capabilities. Demanding a real-time “proof of concept” demonstration from your would-be providers, using your organization’s contracts, is another way to verify an AI CLM software solution works as promised.
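
One way to keep a proof of concept honest is to score the tool against a sample of your own contracts whose key fields your team has already verified by hand. The sketch below is a minimal, hypothetical harness: `extract_fields` stands in for whatever extraction interface the vendor exposes, and the field names are invented.

```python
# Hypothetical POC scorecard: compare the tool's extractions against
# fields your team has already verified by hand ("ground truth").

def score_extraction(extract_fields, labeled_contracts):
    """Score a vendor's extraction function against hand-verified data.

    extract_fields: vendor-supplied function, contract text -> dict of fields.
    labeled_contracts: list of (contract_text, expected_fields) pairs.
    Returns per-field exact-match accuracy across the sample.
    """
    correct, total = {}, {}
    for text, expected in labeled_contracts:
        predicted = extract_fields(text)
        for field, truth in expected.items():
            total[field] = total.get(field, 0) + 1
            if predicted.get(field) == truth:
                correct[field] = correct.get(field, 0) + 1
    return {field: correct.get(field, 0) / total[field] for field in total}

# Invented field names, shown for shape only; in a real POC you would run
# this over thousands of your own contracts, not a vendor-curated demo set.
sample = [
    ("<full contract text>", {"effective_date": "2024-01-15",
                              "renewal_term": "12 months"}),
]
# accuracy = score_extraction(vendor_api.extract_fields, sample)
```

Exact-match scoring is deliberately strict, but it gives you a number you can compare across vendors and a way to spot fields where a tool quietly underperforms.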

This due diligence can help you avoid falling victim to AI washing as you choose a platform that genuinely meets your legal operations needs.

Find out how Evisort can help your team

Test Evisort on your own contracts to see how you can save time, reduce risk, and accelerate deals.

Related Resources

  • Guide: Customizable Contract AI
  • On-demand Webinar: Contract with Care: How Healthcare Organizations Are Using Contract AI for Compliance, Finance, and Procurement
  • On-demand Demo: Protect The Business: 13 Questions to Ask Your Legal AI Vendor
