Deepfake Audio Has a Tell – Researchers Use Fluid Dynamics to Spot Artificial Imposter Voices


Image: A man wearing a hooded jacket and a sculpted mask over his face speaks on a phone while looking at a laptop in a dark room.

By Patrick Traynor, Ph.D., Professor and John H. and Mary Lou Dasburg Preeminent Chair in Engineering in the Department of Computer and Information Science and Engineering (CISE), and Logan Blue, a Ph.D. student in CISE. This story originally appeared in The Conversation.

Imagine the following scenario. A phone rings. An office worker answers it and hears his boss, in a panic, tell him that she forgot to transfer money to the new contractor before she left for the day and needs him to do it. She gives him the wire transfer information, and with the money transferred, the crisis has been averted.

The worker sits back in his chair, takes a deep breath, and watches as his boss walks in the door. The voice on the other end of the call was not his boss. In fact, it wasn’t even human. The voice he heard was that of an audio deepfake, a machine-generated audio sample designed to sound exactly like his boss.

Attacks like this using recorded audio have already occurred, and conversational audio deepfakes might not be far off.

Deepfakes, both audio and video, have become possible only with the recent development of sophisticated machine learning technologies. They have brought with them a new level of uncertainty around digital media. To detect deepfakes, many researchers have turned to analyzing visual artifacts – minute glitches and inconsistencies – found in video deepfakes.

Read the full article at The Conversation.
