Deepfake apps are here and we can’t let them run amok (2024)

Many deepfake apps address these concerns by being “on rails”, or restricted: users can only swap faces into a selection of scenes from pre-approved films or shows. But these restrictions are often the outcome of technological limitations rather than a deliberate security choice. In order to quickly generate high-quality face-swaps with one or a few user images, apps “pre-train” their generative models on a number of popular movie scenes, such as the twins from The Shining, or Sean Bean’s “one does not simply walk into Mordor” meme from The Lord of the Rings. As the technology becomes more powerful and pre-training less restrictive, developers might see a competitive advantage in opening up their apps to user-uploaded content in an “off-rails” approach.

Other technology companies offering potentially hazardous services such as lip synchronisation and voice synthesis have adopted policies to prevent their products from being misused – like individually vetting clients, and gaining permission from all parties whose likeness is being altered. Yet it’s difficult to imagine deepfake apps enforcing similar protocols, given their reliance on uptake by a large number of users eager for novel deepfake capabilities. As apps vie for users’ attention in a crowded market, it seems almost inevitable that they’ll “go off the rails.”

Sure, both Apple and Google have banned apps that create deceptive or malicious deepfakes from their app stores, and developers are working on security features to avoid falling foul of these policies. These include app-specific deepfake detection tools, automatic blocking of pornographic or malicious content, and the watermarking of deepfakes generated by the app.

While developers’ readiness to address misuse of their apps is promising, deploying these security features poses several challenges. One is how developers roll them out in the first place. For detection tools to be effective at stopping malicious deepfakes, they would need to be widely adopted by social media platforms and messaging apps – but no social media platform currently has deepfake detection in its media upload pipeline, and implementing detection on messaging apps like WhatsApp or Telegram would require monitoring users’ conversations, a significant change to these services’ current privacy-focused model.
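To make concrete what such an integration might involve, below is a minimal Python sketch of an upload handler gated on a detector’s score. It is purely illustrative: the `handle_upload` function, the `FLAG_THRESHOLD` value and the pluggable `detector` callable are all invented for this example, they do not correspond to any platform’s actual API, and, as noted above, no platform is known to run such a check today.

```python
# Hypothetical sketch of deepfake detection in a media upload pipeline.
# Nothing here corresponds to a real platform API: `detector` is any callable
# that returns an estimated probability that the media is synthetic.

from typing import Callable, Dict

FLAG_THRESHOLD = 0.9  # illustrative confidence cut-off, not a real-world value


def handle_upload(media_bytes: bytes,
                  detector: Callable[[bytes], float]) -> Dict[str, object]:
    """Run the detector before accepting an upload; reject or label flagged media."""
    score = detector(media_bytes)
    if score >= FLAG_THRESHOLD:
        # A platform could reject outright, or accept the upload but label it.
        return {"accepted": False, "reason": "suspected synthetic media", "score": score}
    return {"accepted": True, "score": score}


if __name__ == "__main__":
    # Stand-in detector for demonstration only; a real one would run a trained model.
    dummy_detector = lambda media: 0.97
    print(handle_upload(b"...video bytes...", dummy_detector))
```

Whether a flagged upload should be rejected outright or accepted with a warning label is itself a policy choice each platform would have to make.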

Another is how reliable these security measures would be. A watermark would notify viewers that a video is fake, but developers might be reluctant to place one anywhere it would obstruct the image, meaning an unobtrusive mark at the edge of the frame could simply be cropped out. Preemptively detecting and blocking malicious content would also prove difficult given the wide range of possible harms that could be wrought through this budding technology. Capturing the near-limitless variety of malicious uses is currently impossible to automate, while manual moderation would be unfeasible given the volume of content being generated online.
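The cropping weakness is easy to illustrate. The sketch below uses Python with Pillow; the blank frame, label text and coordinates are placeholders rather than any app’s actual watermarking scheme, but it shows how a mark confined to a corner disappears as soon as that corner is trimmed away.

```python
# Illustration of the visible-watermark trade-off described above: a mark kept
# out of the way in a corner survives only until the frame is cropped.
# Requires Pillow; the frame and label are placeholders, not any app's real scheme.

from PIL import Image, ImageDraw

frame = Image.new("RGB", (1280, 720), color=(40, 40, 40))  # stand-in for a video frame

draw = ImageDraw.Draw(frame)
draw.text((20, 690), "SYNTHETIC MEDIA", fill=(255, 255, 255))  # unobtrusive corner label

# Cropping away the bottom strip discards the watermark while keeping most of the image.
cropped = frame.crop((0, 0, 1280, 670))
cropped.save("cropped_no_watermark.png")
```

A mark embedded invisibly across the whole image would be harder to remove this way, which is why robustness, not just presence, is the real test of this kind of measure.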

Given all this, what could plausibly be done to minimise deepfake apps’ misuse? One approach could involve the creation of an app safety framework for developers, including measures such as threat assessments, limiting access for unauthenticated users, or even moratoria on releasing new capabilities that lack harm-mitigation strategies. If such a framework were enforced by app stores and other stakeholders critical to an app’s success, it could help create a safety standard that all deepfake app developers would have to meet for their apps to be published.

A stronger reaction may involve new legislation that allows victims or authorities to hold developers to account if their deepfake apps are deemed open to, or intentionally designed for, misuse. This could entail fines and the payment of damages to victims, the removal of offending apps from app stores, or even criminal charges against the developers. What both these possible approaches share is that they target deepfake apps’ shift “off the rails” by either restricting who can access them and what capabilities are released, or punishing developers if this shift ends up causing harm.

There is clearly an inherent tension in the model deepfake apps are moving towards: the more open and powerful they become, the harder it is to prevent the harms they can cause. Striking the right balance of openness and safety will be essential to deepfake apps’ future success. If they stay on their current course and continue to move “off the rails”, this balance will be difficult to achieve.

Henry Ajder is an adviser on deepfakes, disinformation and media manipulation. Nina Schick is the author of Deepfakes: The Coming Infocalypse.

This article was originally published by WIRED UK

FAQs

Is it illegal to watch deepfakes?

Watching deepfakes is not illegal in itself, except in cases where the content involves unlawful material, such as child pornography. Existing legislation primarily targets the creation and distribution of deepfakes, especially when these actions involve non-consensual pornography.

Are deepfakes really a security threat?

AI-generated deepfakes can convincingly mimic a person's voice, face and gestures. New cyber attack tools can deliver disinformation and fraudulent messages at a scale and sophistication not seen before. Simply put, AI-generated fraud is harder than ever to detect and stop.

Are deepfakes identity theft?

By leveraging artificial intelligence, deepfakes enable fraudsters to clone your face, voice, and mannerisms to steal your identity.

Can AI detect deepfakes?

AI can also be used to detect AI-generated deepfakes. The good news is that even as deepfake creation evolves, so too will AI-powered deepfake detection technologies. Several detection tools are available today that ingest large sets of deepfake images, video and audio.

Can you sue for deepfakes?

The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act allows victims to sue if those who created the deepfake knew, or “recklessly disregarded”, that the victim did not consent to its making.

Is AI face swap illegal?

A law that bans AI or deepfakes entirely would violate the First Amendment. If such a law were challenged in court, the government would have to show that it is necessary to prevent specific harm caused by deepfakes and that a ban is the best way to protect against that harm.

Can deepfakes be tracked?

As these generative artificial intelligence (AI) technologies become more common, researchers are now tracking their proliferation through a database of political deepfakes.

How harmful are deepfakes?

This technology has not only created confusion, skepticism, and the spread of misinformation; deepfakes also pose a threat to privacy and security. With the ability to convincingly impersonate anyone, cybercriminals can orchestrate phishing scams or identity theft operations with alarming precision.

Is it safe to use a deepfake app?

Deepfake technologies can be used to steal your identity even if you don't use generative AI platforms. You've probably heard about generative AI (artificial intelligence), but you might not be aware that these technologies have brought with them new concerns about privacy, identity theft, and misinformation.

How can you tell if a video is AI-generated?

Look out for strange shadows, blurs, or light flickers.

In some AI-generated videos, shadows or light may appear to flicker only on the face of the person speaking or possibly only in the background.

What is being done to stop deepfakes?

Researchers are developing new ways to detect deepfakes, such as AI models that spot color abnormalities. And authentication methods, like digital watermarks, can help prove that a video or image has been altered. But simply detecting deepfakes may not be enough to prevent harm.

Can AI detect if someone is lying?

By analyzing CEO speech patterns, artificial intelligence can now detect when business leaders are lying or using deceptive language with 84% accuracy, thanks to a data-driven machine-learning model, according to a professor at Arizona State University's W. P. Carey School of Business.

Is it illegal to use undressing AI?

It's now illegal to generate and distribute 'intimate' deepfakes. Preventative conversations about undress AI can keep children from getting involved.

Are deepfakes a crime?

Deepfake technology itself is not considered illegal, and deepfakes are by no means all malicious, but depending on the kind of content generated, some violate laws such as data protection legislation or specific offences covering non-consensual content.

Are deepfakes illegal in NY?

In New York, legislation has been signed into law by Governor Kathy Hochul (S1042A), explicitly banning the distribution of AI-generated deepfake content depicting nonconsensual sexual images. Violators may face up to a year in jail, and victims can pursue legal action for damages in civil court.
