I've worked in AI for 15 years. There are a few telltale signs of AI washing.
The chief technology officer of the document-management company Nitro highlighted some of the ways companies overhype AI. John Fitzpatrick/Nitro
  • Longtime technologist John Fitzpatrick said he has seen companies exaggerating their AI capabilities.

  • Some businesses have rebranded existing automation features as AI despite making few real changes.

  • Regulated industries need to be especially watchful for hallucinations, privacy lapses, and other issues.

This as-told-to essay is based on a conversation with John Fitzpatrick, the chief technology officer at the document-management company Nitro. It has been edited for length and clarity.

I have worked in AI for 15 years and am one of the original engineers behind Apple's Siri. I'm currently the chief technology officer at Nitro — a software company that helps businesses manage and secure documents more efficiently.

Over the last year, I've seen a lot of AI washing, especially after ChatGPT took off.

AI washing is when companies exaggerate or misrepresent what their AI can actually do, just so they can say they're using AI.

Suddenly, tons of apps popped up that were just a new skin slapped on top of ChatGPT. Businesses started rebranding their existing automation features as AI without making any real product enhancements.

I see this as similar to the "cloud" hype many years ago. Suddenly, every business became a cloud business. We're seeing that with AI today. If you listen to earnings calls, every company's talking about AI.

Recent AlphaSense data shows a 779% increase in mentions of terms like "agentic AI," "AI workforce," "digital labor," and "AI agents" during earnings calls in the past year.

Almost every single startup now has to have an AI angle to secure funding.

Telltale signs of AI washing

AI washing takes a few different forms.

One is a thin user-interface layer on top of ChatGPT, maybe with a small amount of prompt engineering. In some cases, that can be really valuable, but in many cases it doesn't add much.

Another challenge with AI washing is companies rushing AI features to market with these simple integrations without considering customer privacy or security.

In the worst cases, major players launch assistant features and update their terms and conditions so they can use customer data for training.

Then there's the problem of relying on third-party public APIs and services the vendor doesn't control. That can mean sensitive documents are sent to third parties, which is a major security risk.

In regulated industries, where companies often deal with extremely important documents, you want to be very careful about hallucinations and ensure you're getting things like confidence scores from the models.