Be prepared for more fake news, cloned people and manipulated images

Generative AI examined at EAB and CiTER Biometrics Workshop
The growing accessibility and power of deepfakes and generative AI are causing headaches for fraud prevention professionals and forensic investigators, and the problem appears to be getting worse. Tencent Cloud is offering Deepfakes-as-a-Service, charging $145 to generate digital copies of an individual based on three minutes of video and one hundred spoken sentences, The Register reports.

The interactive fakes take only 24 hours to produce, and use timbre customization technology to avoid the flat intonation that can sometimes alert viewers to the presence of a virtual human.

The Cyberspace Administration of China has put rules in place for generative AI that seem to require the products of this service to be clearly marked as such.

Criminals are demonstrating the nefarious uses this kind of technology can be put to, with Arizona outlet Arizona’s Family reporting an incident in which fraudsters cloned a teenager’s voice to stage a fake kidnapping. The purported kidnappers phoned the teenager’s mother and demanded a million dollars in ransom, threatening to harm the girl if the mother did not comply.

The teen’s mother quickly confirmed that her daughter was safe without paying, but AI experts are warning people to be alert to the possibility of similar fraud attacks.

Journalists, too, are finding a ready audience for their tales of AI trickery, with the latest example coming from a Wall Street Journal columnist who managed to trick her bank’s voice biometric system and family members, at least temporarily. Senior Tech Columnist Joanna Stern cloned herself with help from a professional generative AI service and an extra layer of voice technology.

Research from Regula indicates that roughly a third of businesses have already suffered a deepfake fraud attack.

Generative AI threatens digital forensics

Deepfakes were one of the four topics in focus at the recent EAB & CiTER Biometrics Workshop.

Anderson Rocha, professor and researcher at the State University of Campinas and visiting professor at the Idiap Research Institute, presented a keynote on ‘Deepfakes and Synthetic Realities: How to Fight Back?’

“Deepfakes are just the tip of the iceberg,” Rocha says. Generative AI is overturning longstanding assumptions in forensics.

With the ability to create synthetic video, audio, text and other kinds of data, complete yet entirely fake narratives can now be fabricated end to end.

“The singularity” is a long way off, Rocha argues, but as Arthur C. Clarke noted, “any sufficiently advanced technology is indistinguishable from magic.”

AI is used in digital forensics to help identify, analyze and interpret digital evidence, in part by searching for the artifacts that are, at least in theory, left behind by every change made to a piece of evidence.

The problem of determining media provenance was first posed to Rocha’s team in 2009, in a real-world investigation into the legitimacy of photos of Brazil’s then-president published in news media. Rocha described the techniques used at the time and their evolution to include computer vision methods, up until the explosion of data and the advancement of neural networks changed the possibilities for manipulating photos and other evidence around 2018.

Now, combinations of detectors with machine learning are necessary to detect the more-subtle manipulations that have become possible with AI. The pace of AI advancement, however, poses a constant challenge to forensic investigators.
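Rocha’s talk did not go into implementation details, but combining multiple detectors is commonly realized as late score fusion: each detector emits a manipulation score, and the scores are merged into a single decision. A minimal sketch, with hypothetical detector names and an assumed weighted-average fusion rule:

```python
# Minimal sketch of late score fusion across several forgery detectors.
# The scores below are stand-ins; a real pipeline would run trained models
# (e.g., CNN-based artifact detectors) over the media being examined.

def fuse_scores(scores, weights=None):
    """Combine per-detector scores (0 = authentic, 1 = manipulated)
    into a single weighted-average score."""
    if weights is None:
        weights = [1.0] * len(scores)
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)

def is_manipulated(scores, weights=None, threshold=0.5):
    """Flag the media as manipulated when the fused score crosses a threshold."""
    return fuse_scores(scores, weights) >= threshold

# Hypothetical detectors (noise pattern, compression artifacts, face
# landmark consistency), each reporting its confidence the image is fake:
detector_scores = [0.82, 0.35, 0.61]
print(round(fuse_scores(detector_scores), 2))  # → 0.59
print(is_manipulated(detector_scores))         # → True
```

The fusion rule and threshold here are illustrative; the point is that a weak signal from one detector can be outweighed by stronger signals from others, which is what makes combined detectors more robust against subtle, single-trace manipulations.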

The true threat of generative AI, in Rocha’s view, is therefore not so much deepfakes themselves as manipulations that do not leave detectable artifacts.

The topic was further explored with presentations from Pindrop’s Nick Gaubitch on presentation attack detection (PAD) in echoey environments, Arun Ross of Michigan State University on iris deepfakes, and a quartet of presentations from academic researchers.
