Clarity raises $16M to fight deepfakes through detection

Fake porn of Taylor Swift. Photorealistic -- but fictionalized -- images of Gaza. The list of disconcerting deepfakes goes on, and -- as deepfake-creating tools grow easier and cheaper to use -- the waves of fakes are coming faster and fiercer.

According to a recent Pew Research Center poll, about two-thirds of Americans (66%) say they at least sometimes come across altered videos and images that are intended to mislead, with 15% encountering them often. In a separate survey of AI experts by Axios and Syracuse University, 62% said that misinformation will be the biggest challenge to maintaining the authenticity and credibility of news in an era of AI-generated content.

So what's the answer? Is there one?

If you talk with folks like Michael Matias, a cybersecurity specialist and the co-founder and CEO of Clarity, they'll tell you it's deepfake detectors. Matias started Clarity with Gil Avriel and Natalie Fridman in 2022, with the goal of developing technology to spot AI-manipulated media -- mainly video and audio.

Clarity is among the many vendors large and small racing to develop deepfake-spotting tools. Others include Reality Defender, which offers a platform to detect text, video and image deepfakes, and Sentinel, which focuses on deepfaked images and videos.

It's difficult, actually, to distinguish Clarity's offerings from the others out there -- at least for this writer. Like rival vendors, Clarity maintains a scanning tool, available via an app and API, that leverages several AI models trained to spot the telltale patterns of video, image and audio deepfake creation techniques. In addition, Clarity provides a form of watermarking that customers can use to indicate their content is legitimate.
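To make the shape of such a product concrete, here's a minimal sketch of what calling a media-scanning API of this kind might look like from a customer's side. The endpoint, authentication scheme, field names and response format are all invented for illustration; Clarity's actual API is not documented in this article.

```python
# Hypothetical sketch only: the URL, request fields and response shape below
# are placeholders, not Clarity's (or any vendor's) real API.
import requests

API_URL = "https://api.example-detector.com/v1/scan"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                              # placeholder credential


def scan_media(path: str) -> dict:
    """Upload a media file and return the service's authenticity verdict."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=60,
        )
    resp.raise_for_status()
    # Imagined response, e.g. {"verdict": "likely_manipulated", "confidence": 0.93}
    return resp.json()


if __name__ == "__main__":
    print(scan_media("interview_clip.mp4"))
```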

But Matias insists that the differentiators lie not above but beneath the surface: in Clarity's rapid response to new types of deepfakes.

"At its core, Clarity is leveraging AI but operating as a cybersecurity company," Matias said. "Clarity treats deepfakes as viruses, acting like pathogens that quickly fork and replicate. As such, its solution was also built to fork and replicate to maintain adaptivity and resiliency ... The team built infrastructure and AI models dedicated to accomplishing the ask."

Of course, precision in the deepfake detection realm is a moving target. Even with the best expertise and tech stack money can buy, it's an impossible game to win considering the rate at which GenAI-powered, deepfake-creating apps are improving. That's perhaps why some major players -- including Google, Microsoft and AWS -- are embracing more sophisticated watermarking and provenance metadata as alternative -- albeit imperfect -- deepfake-fighting measures.
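As a rough illustration of the provenance idea, the sketch below binds a signature to a piece of content when it's created so that later alterations can be detected. Real provenance systems rely on public-key signatures and signed manifests rather than the shared-secret HMAC used here for brevity; this is a conceptual stand-in, not any of the named companies' actual schemes.

```python
# Conceptual illustration of provenance metadata: sign a hash of the content
# at creation time, then check that the signature still matches later.
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret-key"  # placeholder; real systems use asymmetric keys


def sign_content(media_bytes: bytes) -> str:
    """Produce a provenance tag for the original media bytes."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, "sha256").hexdigest()


def verify_content(media_bytes: bytes, tag: str) -> bool:
    """Check whether the media still matches the tag issued at creation."""
    return hmac.compare_digest(sign_content(media_bytes), tag)


original = b"...raw video bytes..."
tag = sign_content(original)

print(verify_content(original, tag))            # True: content is untouched
print(verify_content(original + b"edit", tag))  # False: content was altered
```

The obvious limitation, and why the article calls these measures imperfect, is that a signature only vouches for content whose creator chose to sign it; a deepfake generated outside the system simply carries no tag at all.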