Exclusive: Reality Defender expands deepfake detection access to independent developers

Jul 31, 2025 - 13:26

New York-based cybersecurity company Reality Defender offers one of the top deepfake detection platforms for large enterprises. Now, the company is extending access to its platform to individual developers and small teams via an API, which includes a free tier offering 50 detections per month.

With the API, developers can integrate commercial-grade, real-time deepfake detection into their sites or applications using just two lines of code. This functionality can support use cases such as fraud detection, identity verification, and content moderation, among others.
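To illustrate what an integration of this kind typically looks like, here is a minimal sketch of a client that prepares a detection request. The endpoint URL, authentication header, and payload fields are hypothetical placeholders for illustration only, not Reality Defender's documented API.

```python
# Illustrative sketch of calling a deepfake-detection REST API.
# ENDPOINT, the payload fields, and the hex encoding are invented
# placeholders; consult the vendor's API docs for the real contract.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"                        # placeholder credential
ENDPOINT = "https://api.example.com/v1/detect"  # hypothetical URL

def build_detection_request(media_bytes: bytes, media_type: str) -> urllib.request.Request:
    """Build (but do not send) a POST request carrying one media file."""
    payload = json.dumps({
        "media": media_bytes.hex(),   # hypothetical transport encoding
        "content_type": media_type,
    }).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_detection_request(b"\x89PNG...", "image/png")
print(req.get_method())  # POST
```

In practice the vendor's SDK or documented endpoint would replace the placeholders above; the point is only that a single request-build call plus a send is all a site or app needs to hand media off for analysis.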

The Reality Defender platform features a suite of custom AI models, each designed to detect different types of deepfakes in various ways. These models are trained on extensive datasets of known deepfake images and audio made using many different types of generative tools.
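A suite of specialized detectors is commonly combined by pooling their scores into a single verdict. The sketch below shows that generic ensemble pattern; Reality Defender's actual architecture is not public, and the detector functions and scores here are invented placeholders.

```python
# Generic model-ensemble pattern for media classification.
# The two scoring functions are stand-ins for trained models and
# return fixed placeholder probabilities.

def score_artifacts(media: bytes) -> float:
    """Placeholder detector keyed to compression artifacts."""
    return 0.8

def score_frequency(media: bytes) -> float:
    """Placeholder detector keyed to frequency-domain statistics."""
    return 0.6

DETECTORS = [score_artifacts, score_frequency]

def ensemble_score(media: bytes) -> float:
    """Average per-detector scores into one manipulation probability."""
    return sum(d(media) for d in DETECTORS) / len(DETECTORS)

print(ensemble_score(b"sample-bytes"))  # 0.7
```

Averaging is just one pooling choice; a production system might weight detectors by modality or calibration, but the structure (many narrow models, one combined score) is the same.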

“What we’re doing now is saying you don’t need to be a big bank, you don’t need to have a bunch of developers,” Reality Defender cofounder and CEO Ben Colman tells Fast Company. “Anyone that’s building a social media platform, a video conferencing solution, a dating platform, professional networking, brand protection—all of them can now have deepfake and generative AI detection.” 

The new Deepfake Detection API currently supports audio and image detection, with coverage of additional modalities planned in the coming months. The detection system can identify visual deepfakes based not only on faces but also on other image features and the broader context in which the media appears.

Deepfakes are a form of synthetic media created using artificial intelligence to produce convincing video, image, audio, or text representations of events that never occurred. These can be used to put sham words in a public figure’s mouth or to trick someone into sending money by mimicking a relative’s voice.

Global losses from deepfake-enabled fraud surpassed $200 million in the first quarter of 2025, according to a report by AI voice generation company Resemble AI. The most damaging uses of deepfakes include nonconsensual explicit content (such as revenge porn), scams and fraud, political manipulation, and misinformation. As generative AI tools advance, deepfakes are becoming increasingly difficult to detect. An unidentified imposter recently used a deepfake of Secretary of State Marco Rubio’s voice to place calls to at least five senior government officials.

Colman says that as generative AI tools become more widespread and deepfakes more common, both consumers and businesses will likely start viewing protection against fake content much like they do protection against computer viruses or spam.

The key difference, he adds, is that the tools required to create deepfakes are far more accessible than those needed to produce viruses or spam. “There’s thousands of tools that are free, and there’s no regulation yet,” Colman says.

In other words, we’re likely just seeing the beginning of the deepfake era. “It just gets worse from there for companies, consumers, countries, elections,” Colman says. “The risks are endless.” 

Developers can access the new API and free tier starting today from the API page on the Reality Defender website.
