UF Ph.D. student Kevin Warren, left, and Professor Patrick Traynor led a recent study on audio deepfakes with 1,200 participants. Their team’s paper on the study earned a Distinguished Paper award from the Association for Computing Machinery. (Photo by Dave Schlenker)

Deepfake: The Future of Truth Starts Here.

As audio deepfakes blur the line between fact and fiction, our researchers are engineering solutions to protect trust, security and authenticity in an increasingly AI-driven world. 

The Growing Threats of Audio Deepfakes

Have you ever received a phone call from a supposed loved one in distress, only to find out later that it was a scam? With just a few seconds of publicly available audio — be it from a voicemail, a YouTube clip, or a public speech — bad actors can now clone voices and fabricate conversations, challenging the very foundation of trust in communication. From online banking to national security, from political discourse to personal safety, deepfake audio threatens critical aspects of modern society. And as technology continues to evolve, AI detection tools alone won’t be enough to stop the tide of misinformation and fraud.

Fortifying the Future of Cybersecurity 

Led by deepfake expert Patrick Traynor, Ph.D., and the Florida Institute for Cybersecurity Research, our interdisciplinary team of computer scientists and engineers is pioneering solutions to: 

  • Identify and reveal deepfake audio using advanced forensic models and AI-assisted analysis (see the illustrative sketch after this list) 
  • Analyze and authenticate real voices through advanced vocal tract analysis 
  • Fortify and safeguard identity verification using breakthrough technology on smartphones and other devices 
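To make the first item concrete, here is a minimal, hypothetical sketch of AI-assisted screening: it summarizes a clip with MFCC statistics and scores it with a simple classifier. The file names, feature choice, toy training set, and the librosa/scikit-learn tooling are illustrative assumptions, not the UF team's actual system.

```python
# Illustrative only: a toy deepfake-audio screener, not UF's detection pipeline.
# Assumes librosa and scikit-learn are installed and the .wav paths exist.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path, sr=16000, n_mfcc=20):
    """Summarize a clip as a fixed-length vector of MFCC means and variances."""
    audio, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])

# Labeled examples of genuine (0) and synthetic (1) speech (hypothetical files).
X = np.stack([clip_features(p) for p in ["real_clip.wav", "fake_clip.wav"]])
y = np.array([0, 1])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Probability that a new recording (e.g., an incoming call) is synthetic.
score = clf.predict_proba(clip_features("incoming_call.wav").reshape(1, -1))[0, 1]
print(f"deepfake probability: {score:.2f}")
```

In practice a detector of this kind would be trained on many thousands of labeled clips and far richer features; the point here is only the shape of the workflow: extract acoustic features, learn a decision boundary, score new audio.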
Hands holding a cellphone with an incoming call.

The Science of Staying Ahead 

UF’s latest HiPerGator supercomputer is revolutionizing the fight against deepfake deception. By analyzing the micro-features of human speech — breathing patterns, vocal tract resonance and speech turbulence — we’re identifying inconsistencies that even the most advanced AI struggles to replicate. Our research has also shown that combining studies with real human listeners and advanced machine learning yields a formidable defense against the most insidious digital deception. 
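As an illustration of the vocal tract analysis described above, the sketch below estimates rough vocal tract resonances (formants) from a single voiced frame using linear predictive coding. The input file, frame length and LPC order are assumptions made for illustration, not the team's published method.

```python
# Illustrative only: estimate rough formant frequencies (vocal tract resonances)
# for one short frame of speech via linear predictive coding (LPC).
import numpy as np
import librosa

def estimate_formants(frame, sr, lpc_order=12):
    """Return approximate resonance frequencies (Hz) for one voiced frame."""
    a = librosa.lpc(frame, order=lpc_order)        # all-pole vocal tract model
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = np.angle(roots) * sr / (2 * np.pi)     # pole angles -> frequencies
    return sorted(f for f in freqs if f > 90)      # drop near-DC artifacts

# Hypothetical input; assumes the opening 30 ms of the file is voiced speech.
audio, sr = librosa.load("sample.wav", sr=16000)
n = int(0.03 * sr)
frame = audio[:n] * np.hamming(n)
print(estimate_formants(frame, sr))
```

Tracking how such resonance estimates move across a clip is one way to look for the physically implausible patterns synthetic speech can produce, the kind of inconsistency the detection work described above targets.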

A woman works on a tablet before several screens showing national security data.

Defending the Digital Future: UF’s National Leadership 

Our commitment to cybersecurity doesn’t stop in the lab. Traynor has advised the White House, the Federal Communications Commission and the Federal Trade Commission on combating the risks posed by deepfake technology. With support from the National Science Foundation, the Office of Naval Research and leading industry partners, UF is setting the national agenda in deepfake defense.

Listen Carefully: The Call for Action 

Cybersecurity and deepfake detection must evolve to address both the limitations of machine learning and the biases of human listeners. People instinctively trust audio, while AI models are trained to be skeptical, and the two perspectives do not always complement each other. With deepfake audio growing tenfold year over year, stronger detection systems, better education and real-world deployment are critical. Future solutions must adapt to shifting biases, keep pace with rapidly evolving audio manipulation, and cultivate the expertise to detect and deter deepfakes and digital deception in our increasingly connected world.