2024 Deepfakes Guide and Statistics
Deepfake fraud incidents increased tenfold between 2022 and 2023. Learn how to identify harmful deepfake media and protect yourself from fraud.
Deepfakes are images, videos, or audio recordings that have been manipulated with artificial intelligence. The results can appear incredibly realistic. Many people cannot tell which parts of the manipulated videos or photos are real and which are fake.
Deepfakes have legitimate uses, but bad actors exploit them for purposes such as disinformation, blackmail, harassment and financial fraud. For example, many people and businesses have lost huge sums of money in deepfake scams.
This guide offers statistics and examples to illustrate the severity of problems associated with deepfake technology. We also offer advice on how to identify deepfakes and protect your personal information.
What Are Deepfakes?
Deepfakes, which are images, videos, or audio recordings altered using artificial intelligence, have emerged as a significant concern in the digital age. They usually involve swapping faces, altering speech, changing facial expressions or synthesizing speech (artificially producing a voice).
Deepfakes can be very realistic. In studies, many people have even rated AI-generated faces as more real-looking than genuine human faces.
How Are Deepfakes Used?
Deepfakes can be used in positive or negative ways. For example, movies use them for special effects, and they can play an educational role in historical reenactments. However, people have employed deepfakes maliciously for purposes that include the following:
- Disinformation campaigns
- Election interference
- Blackmail
- Bullying
- Harassment
- Nonconsensual pornography
- Hoaxes
- Fake news
- Financial fraud and scams
When Did Deepfakes Start?
The term “deepfakes” gained momentum with Reddit user “deepfakes” in 2017. The word blends “deep learning” and “fake.”
This Reddit user, along with many others in r/deepfakes, shared their deepfake creations. Pornographic videos, often showing a celebrity’s face on an adult film actor’s body, were common. Deepfake pornography is still prevalent in many places online.
Of course, the technology behind deepfakes has earlier roots. Adobe Photoshop, released in 1990, let users alter photos far more easily than before. In the 1990s and 2000s, research in neural networks and machine learning took great strides forward, and in the 2000s, face morphs and replacements became common in movie special effects.
In 2014, Ian Goodfellow introduced generative adversarial networks (GANs). The next few years saw huge improvements in the quality of deepfake content. In 2019, DeepFaceLab and FakeApp became more user-friendly and accessible.
Various researchers and companies intensified efforts to develop methods to detect deepfakes. Now the technology is so advanced that most people struggle to tell the difference between a genuine and deepfake photo, video or audio recording.
The blend of AI realism and easy accessibility makes deepfakes very problematic. The potential for harm is serious, so it is important to understand what is possible and how to protect yourself.
How Are Deepfakes Made?
Creating deepfakes involves AI and deep learning, typically using neural networks such as generative adversarial networks (GANs) and variational autoencoders (VAEs), along with facial recognition algorithms. A GAN pits a generator, which creates synthetic data, against a discriminator, which tries to spot fakes; training continues until the generated data is difficult to distinguish from real data. VAEs support realistic face swaps by compressing and reconstructing faces, producing deepfakes in which the target face mimics the source’s expressions and movements.
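To make the generator-versus-discriminator idea concrete, here is a minimal, illustrative GAN training loop written in PyTorch. It learns to mimic a simple one-dimensional number distribution rather than faces, and it is a toy sketch of the general technique, not the code behind any particular deepfake tool; the network sizes and training values are arbitrary choices for illustration.

```python
# A toy GAN: a generator learns to produce numbers that a discriminator
# cannot tell apart from samples drawn from a "real" distribution.
# Requires PyTorch (pip install torch).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: mean 3, spread 0.5
    fake = generator(torch.randn(64, 8))   # generator starts from noise

    # Discriminator update: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real mean of 3.
print("Generated sample mean:", generator(torch.randn(256, 8)).mean().item())
```

Deepfake tools scale this same adversarial loop up to much larger convolutional networks trained on millions of face images.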
Deepfake models are trained on large datasets; the more comprehensive and higher quality the dataset, the more authentic the results should appear. For videos, the new material is rendered frame by frame, with smoothing so movements appear natural. Post-processing may add “authenticity” adjustments such as color matching, lighting fixes and other details.
Despite the complex technology underlying deepfakes, creating a simple deepfake video isn’t hard. For example, people can use smartphones or tablets to create deepfakes with apps, such as MSQRD, Zao and FaceApp. If your image, video or audio sources and targets are good, you could have a high-quality deepfake in less than 10 minutes using apps or services.
>> Related: How to Prevent Being Scammed
Are Deepfake Apps Safe to Use?
Deepfake apps are generally as safe as other apps. For example, they undergo scanning for viruses and malware before app stores list them. In other words, take the same precautions with deepfake apps as you would with any other app, and download them only from trusted stores, such as the Google and Apple app stores.
One big risk is the potential for deepfake apps to misuse your personal data. Check whether or how the apps share your data with third-party companies and whether they can store your raw material for various uses.
Deepfake Statistics
Criminals can exploit our desire to help one another, employing deepfaked (cloned) voices to scam people out of thousands of dollars. Cyberthieves don’t stop there, of course. A huge surge in deepfake fraud is hitting people and businesses across the world.
- According to a McAfee survey, 70 percent of people said they aren’t confident they can tell the difference between a real and a cloned voice.1 In the same survey, 40 percent of people reported they would help if they got a voicemail from their spouse who needed assistance.
- Criminals can use a small snippet of a person’s voice to target that person’s loved ones for money, often by staging an emergency. One in 10 people report having received such a cloned message, and 77 percent of those targeted lost money to the scam.
- Cyber scammers have many ways to obtain people’s voices. McAfee reported that as many as 53 percent of people share their voices online or via recorded notes at least once a week. YouTube, social media reels and podcasts are common sources of audio recordings.
- According to Google Trends, searches for “free voice cloning software” rose 120 percent between July 2023 and July 2024. Users don’t need much technical skill to generate manipulated audio with these free apps.
- Three seconds of audio is sometimes all that’s needed to produce an 85 percent voice match from the original to a clone.
- DeepFaceLab claims that more than 95 percent of deepfake videos are created with its open-source software.
- False news, lies and rumors can spread faster than truthful news, which helps explain why deepfakes are so effective: they provoke emotional responses and offer new information. In one study, the top 1 percent of rumors on Twitter (now X) reached between 1,000 and 100,000 people, while truthful news rarely reached more than 1,000 people.2
- The price per minute to purchase a good-quality deepfake video can range from $300 to $20,000.3 Complex, high-quality projects involving very famous people command prices at the higher end of this spectrum.
- According to internal data from Sumsub, deepfake fraud worldwide increased by more than 10 times from 2022 to 2023. Eighty-eight percent of all identified deepfake cases were in the crypto sector, and 8 percent were in fintech.
- Deepfake fraud increased by 1,740 percent in North America and by 1,530 percent in the Asia-Pacific region in 2022. The increases in other regions were more modest.4
- CEO fraud targets at least 400 companies per day. This is especially troubling for businesses that are not up to date on the latest phishing, deepfaking and other tactics criminals use to separate them from their money.5
Unfortunately, many company leaders, even now in 2024, do not recognize the destructive power deepfakes can have on their operations. According to a business.com study:
- More than 10 percent of companies have faced attempted or successful deepfake fraud. Damages from successful attacks reached as high as 10 percent of companies’ annual profits.
- About 1 in 4 company heads has little or no familiarity with deepfake tech, which is perhaps why 31 percent of executives say deepfakes do not increase their company’s fraud risk.
- Eighty percent of companies don’t have protocols to handle deepfake attacks.
- More than 50 percent of leaders admit that their employees don’t have training on recognizing or dealing with deepfake attacks.
- Only 5 percent of company leaders say they have comprehensive deepfake attack prevention across multiple levels, including staff, communication channels and operating procedures.6
Major Deepfake Attacks
Deepfakes can affect people in many ways. No one is exempt from the negative effects, including students, educators, CEOs and U.S. presidents. Here are a few recent deepfake attacks:
- At New Jersey’s Westfield High School in 2023, teen boys created sexually explicit deepfakes of female classmates.
- In Pennsylvania in 2021, the mother of a cheerleader reportedly created deepfakes depicting her daughter’s cheer squad rivals naked and drinking, then sent the images to the coach. One student was suspended because the school district believed the deepfake genuinely showed her naked and smoking marijuana.
- At Maryland’s Pikesville High School, an athletic director, hoping to get the principal fired, created a deepfake audio recording portraying the principal as racist.7
Criminals often target huge companies worth millions or billions of dollars. Two examples include the following:
- In 2024, a deepfake of British engineering firm Arup’s “CFO” led to the transfer of $25 million to bank accounts in Hong Kong. A staff member had a video conference with the false CFO and other deepfaked employees.8
- In March 2019, thieves used the deepfaked voice of a U.K. energy firm CEO to arrange the transfer of €220,000 into an external account.9
There are also plenty of deepfake instances in politics worldwide, such as these from the U.S.:
- In early 2024, thousands of New Hampshire voters got robocalls that used a deepfake voice sounding like President Biden discouraging them from voting in the New Hampshire primary. The audio cost $1 and took less than 20 minutes to create.10
- In 2019, a deepfake video of a seemingly impaired House Speaker Nancy Pelosi circulated on social media. It racked up more than 2.5 million views on Facebook. Despite this deepfake being relatively low-tech, it fooled many people.11
Other notable examples of deepfakes include:
- In February 2023, a reporter used an audio deepfake of his voice to trick his bank’s authentication system.12
- Drake and The Weeknd seemed to perform together on streaming platforms in 2023, but it was not authentic.13
- In a 2022 cryptocurrency scam, an Elon Musk deepfake promised investors 30 percent dividends daily for life.14
- “Queen Elizabeth II” delivered a very special Christmas message in 2020.15
- “President Nixon” somberly announced an Apollo 11 disaster.16
- A resurrected “Salvador Dali” greeted visitors at a Florida museum. “Dali” could even take group selfies!17
Scarlett Johansson demands answers over voice demo
Many deepfake porn videos have shown actress Scarlett Johansson, usually with her face on the body of a different actress. These types of pornographic deepfakes disproportionately target women.
Johansson’s entanglement with deepfakes seems to extend further, though. In 2024, OpenAI released a voice chat demo that sounded like Johansson in the 2013 film “Her.” Through her lawyers, Johansson asked OpenAI to explain what happened.18
It turns out that OpenAI CEO Sam Altman had been asking Johansson for months for permission to use her voice, even contacting her team two days before debuting the demo. OpenAI denied any link between Johansson and the new voice, but the company has stopped using it. Since the incident, Johansson has pushed for legislation to better protect people’s rights.
As for the porn deepfakes, Johansson said in 2018, “The fact is that trying to protect yourself from the internet and its depravity is basically a lost cause.”19
What Are the Dangers of Deepfakes, and How Can You Protect Yourself?
To understand why deepfakes can be dangerous, it’s important to assess their potential uses. People may make deepfakes for revenge, blackmail, pornography or financial fraud, among other things. They may also clone your voice to try to trick your loved ones into giving them money. A few general precautions apply across the board:
- Set a code word: Create a secret code word with your kids and other loved ones. Ensure only you and they know it. Use this code word in messages asking for help.
- Verify: If someone calls, texts or video messages asking for help, call the person directly to verify.
- Use proactive identity protection: Identity monitoring services can send alerts if your personal information appears on the dark web.
Revenge or Blackmail
Someone who wants to destroy your reputation or blackmail you could create damaging audio or video clips. For example, they could fabricate a video that falsely depicts you saying offensive things or stealing company supplies, then send it to your boss, leading to disciplinary action or job loss.
>> Further reading: All About Doxxing
How to protect yourself
- Limit the information, including pictures and audio, that you share online.
- Use messaging apps with end-to-end encryption, such as WhatsApp.
- Watermark your photos and videos so tampering is more evident (a simple example follows this list).
- Verify the authenticity of a person (such as calling them directly) before doing anything.
- Avoid engaging with blackmailers, but keep all communications as evidence. Instead of engaging, get legal advice or talk to the police.
- Consider proactive tools such as AntiFake, which subtly alters voice clips before you post them to websites, social media and other places so that attackers cannot use them to mimic your voice.
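If you want to try the watermarking tip above, here is a minimal sketch using the Pillow imaging library; the file names and watermark text are placeholders. Note that a visible watermark like this deters casual reuse and makes tampering easier to spot, but it will not stop a determined attacker.

```python
# Stamp semi-transparent text onto an image with Pillow (pip install Pillow).
from PIL import Image, ImageDraw, ImageFont

def add_watermark(src_path, dst_path, text="(c) My Name 2024"):
    base = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()
    # Place the text near the bottom-left corner, half transparent.
    draw.text((10, base.height - 30), text, fill=(255, 255, 255, 128), font=font)
    Image.alpha_composite(base, overlay).convert("RGB").save(dst_path)

add_watermark("photo.jpg", "photo_watermarked.jpg")  # placeholder file names
```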
Financial Fraud
Executives and others in a wide variety of jobs are at risk of impersonation (especially audio-only impersonation). Even if you’re just a typical everyday person, someone could clone your voice to authorize fraudulent financial transactions or gain access to your bank account.
How to protect yourself
- Call your bank directly rather than completing transactions when “your bank” calls you.
- Use passwords/code words for conversations or secret questions at the beginning of a business call.
- Train employees about the possibility of deepfake attacks.
- Ask voice authentication or biometric security providers how up to date their technology is.20
>> Related: Best ID Theft Protection with Fraud Detection
Celebrity Ads or Celebrity Endorsement Scams
Deepfakes can persuade people to give out their personal or financial information in exchange for merchandise a “famous person” is giving away, such as a laptop from “Jennifer Aniston.”21
How to protect yourself
- Operate on the principle that if something seems too good to be true, it probably is. It makes little sense for anyone, including a celebrity, to give away goods online for free or at a low price in exchange for your personal or financial details.
- Verify businesses and organizations before doing anything. For instance, if a charity appears to have a celebrity giving away goodies, go to the charity’s website to confirm the campaign is legitimate, or search online to see whether the offer is real.
- Avoid sharing information like your Social Security number, address and bank account number.
- Hover over any links that seem suspicious before you click. Look for signs of a scam, such as misspellings in the URL, odd domain extensions like .xyz and very long URLs. (A simple automated version of this check appears after this list.)
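To illustrate these checks, here is a toy Python script that automates a few of the red flags above. The flagged domain extensions and the length threshold are illustrative assumptions, not a definitive rule set, and real phishing detection is far more involved.

```python
# Flag a few common phishing red flags in a URL using only the standard library.
from urllib.parse import urlparse

SUSPICIOUS_TLDS = (".xyz", ".top", ".click")  # illustrative examples only
MAX_REASONABLE_LENGTH = 75                    # arbitrary threshold

def url_red_flags(url):
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if len(url) > MAX_REASONABLE_LENGTH:
        flags.append("unusually long URL")
    if host.endswith(SUSPICIOUS_TLDS):
        flags.append("uncommon domain extension")
    if parsed.scheme != "https":
        flags.append("connection is not HTTPS")
    # Crude lookalike check: a brand name buried inside a longer domain,
    # e.g. "amazon-support-deals.xyz" is not the real amazon.com.
    for brand in ("amazon", "apple", "paypal"):
        if brand in host and not host.endswith(brand + ".com"):
            flags.append("possible " + brand + " lookalike domain")
    return flags

print(url_red_flags("http://amazon-support-deals.xyz/free-laptop"))
# ['uncommon domain extension', 'connection is not HTTPS',
#  'possible amazon lookalike domain']
```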
Election Disinformation
Deepfakes can spread disinformation during elections, undermine their integrity and hurt voters’ trust. Manipulated videos or audio recordings can make politicians seem to say something they never voiced or act in ways they never did.22
False information can spread quickly and harden into accepted “truth” before anyone can debunk it. Verifying legitimacy takes time, which slows the debunking process.
Ordinary people can be responsible for these deepfakes. Foreign governments can also be culprits, and their campaigns may have international repercussions.
How to protect yourself and others
- Wait to share content.
- Check reputable news sources before you spread information, and cross-check information across several sources.
- Be skeptical of videos or audio that are extraordinarily shocking.
- Review content for red flags, such as unnatural body movements, inconsistent lighting and mouth movements that do not match words spoken.
Specific Deepfake Dangers for Teens
Deepfakes can threaten teens in many ways.
- Deepfakes can cause teens a huge amount of emotional pain.
- Fake sexual videos can lead to teens being socially excluded, bullied or harassed (which, in turn, increases their emotional distress). Cyberbullying increases teens’ risk of suicide.23
- Deepfakes posted online can linger for years, potentially hurting teens’ job prospects for the foreseeable future.
- Blackmail is a concern, with criminals using false videos or audio to coerce teens into providing real sexual content, money or other things.
How teens can protect themselves (and how parents can help)
- Avoid sharing photos, video, audio and other content online, even among trusted people.
- Report deepfake content — whether it targets them or others — and block the user spreading it.
- Seek legal or police assistance, especially since many localities have laws against deepfake pornography.
Parents should also keep open and supportive lines of communication between themselves and their children.
It is dangerous for teens to experiment with inappropriate or explicit deepfakes. Doing so could get them in trouble for offenses such as distributing child pornography, depending on the situation.
How to Identify Deepfakes
You will probably have to use several (or more than several) methods to identify a deepfake. It’s usually more complicated than squinting at a video and saying, “Yep! The lighting and eyes are off. It’s a fake!”
This is especially true as the technology improves and creators fix telltale signs such as overly wrinkled or smooth skin, aging mismatches and odd eye blinking.
Here are a few simple steps to take to identify deepfakes:
- Review the content in question for a label or announcement that it is artificially generated or a deepfake. Many content creators, entertainers and others label their AI-generated content as such.
- Look for jerky motion, distortions and unnatural behaviors such as too much blinking, or a total lack of blinking (a rough automated check appears after this list).
- Watch for inconsistencies in facial features, paying special attention to the cheeks and forehead, and to facial hair or moles that seem off. If the person wears glasses, does the glare reflect light naturally? Does the angle of the glare shift when the person moves?
- Analyze speech patterns for deviations from normal human tone and pitch.
- Check whether the lip movements match what is being said.
- Consider whether the person in question would realistically be in this setting saying or doing such things.
- Check the photo or video for digital watermarks. Visible watermarks (a logo, for example) can be removed or altered, but they remain useful for copyright protection, and they can be combined with other, invisible types.
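As an example of automating one of these tells, the sketch below uses OpenCV’s bundled Haar cascade models to count video frames in which a face is visible but no eyes are detected, a very rough proxy for blink frequency. It is a crude exploratory heuristic, not a reliable deepfake detector, and the video file name is a placeholder.

```python
# Count frames where a detected face shows no detectable eyes (possible blinks).
# Requires OpenCV (pip install opencv-python), which bundles the Haar cascades.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("suspect_clip.mp4")  # placeholder file name
face_frames = 0
no_eye_frames = 0

while True:
    ok, frame = cap.read()
    if not ok:  # end of video (or file not found)
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_frames += 1
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) == 0:  # eyes not found: the subject may be mid-blink
            no_eye_frames += 1

cap.release()
if face_frames:
    rate = no_eye_frames / face_frames
    print(f"Eyes undetected in {rate:.1%} of face frames")
    # Near 0% across a long clip can hint at unnaturally rare blinking.
```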
Deepfake detection software exists, but it often fails to identify deepfakes. For example, only one of four free tools was able to flag the “Biden” robocall audio deepfake as AI-generated.24
Audio deepfakes are more complicated to identify than image and video deepfakes, and they are easier and cheaper to make. Reliably identifying deepfake audio requires high levels of skill, and only a few labs in the world can do it.25
Are Deepfakes Illegal?
Laws on deepfakes usually lag behind the technology. Even in the entertainment industry, which has dealt with deepfake issues for a relatively long time, laws are behind the times. The U.S. Copyright Office considers AI-generated works on a case-by-case basis and welcomes public input on these issues.26
However, it is illegal to use deepfaked voices in robocalls, per a February 2024 FCC ruling issued after the “Biden” robocalls in New Hampshire.27 More generally, criminal impersonation laws that make it illegal to impersonate, say, a doctor or government official may apply to deepfakes.
Deepfakes are a bipartisan issue, with both Democrats and Republicans introducing legislation to protect people.
At least 10 states have laws related to deepfakes. Jenna Leventoff, a First Amendment lawyer at the ACLU, notes that free speech protections apply to deepfakes. This means deepfakes could remain legal, regardless of new laws, as long as they do not cross into defamation, obscenity, fraud or other unprotected categories.28
- Nonconsensual deepfake pornography is illegal in Georgia, Hawaii, Minnesota, New York, Texas and Virginia. (There are no federal protections against nonconsensual deepfake pornography.)
- Victims in California and Illinois can sue people who create deepfake likenesses of them.
- Minnesota has laws about deepfakes in politics.
In 2024, deepfake legislation is pending in at least 40 states, and 20 bills have been passed. For the latest information, the National Conference of State Legislatures maintains a legislation tracker you can filter by state or territory.
At least 17 states have online impersonation laws that were passed as email and social media became more common. These laws make actions such as online bullying and harassment illegal, and they could apply to deepfakes used for these aims. The states with these laws are California, Connecticut, Florida, Hawaii, Illinois, Louisiana, Massachusetts, Mississippi, New Jersey, New York, North Carolina, Oklahoma, Rhode Island, Texas, Utah, Washington and Wyoming.29
Citations
1. McAfee. (2023). Artificial Imposters—Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam. mcafee.com/blogs/privacy-identity-protection/artificial-imposters-cybercriminals-turn-to-ai-voice-cloning-for-a-new-breed-of-scam/
2. Science. (2018). The spread of true and false news online. science.org/doi/10.1126/science.aap9559
3. Kaspersky Daily. (2023). How real is deepfake threat? usa.kaspersky.com/blog/deepfake-darknet-market/28308/
4. The Sumsuber. (2023). Sumsub Expert Roundtable: The Top KYC Trends Coming in 2024. sumsub.com/blog/sumsub-experts-top-kyc-trends-2024/
5. Treasurers. (2024). CEO fraud targeting at least 400 firms per day. treasurers.org/hub/treasurer-magazine/ceo-fraud-targeting-least-400-firms-day
6. Business.com. (2024). 1 in 10 Executives Say Their Companies Have Already Faced Deepfake Threats. business.com/articles/deepfake-threats-study/
7. The Hill. (2024). From deepfake nudes to incriminating audio, school bullying is going AI. thehill.com/homenews/education/4703396-deepfake-nudes-school-bullying-ai-cyberbullying/
8. CFO Dive. (2024). Scammers siphon $25M from engineering firm Arup via AI deepfake ‘CFO’. cfodive.com/news/scammers-siphon-25m-engineering-firm-arup-deepfake-cfo-ai/716501/
9. MIT Management. (2020). Deepfakes, explained. mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained
10. NBC News. (2024). A New Orleans magician says a Democratic operative paid him to make the fake Biden robocall. nbcnews.com/politics/2024-election/biden-robocall-new-hampshire-strategist-rcna139760
11. CBS News. (2019). Doctored Nancy Pelosi video highlights threat of “deepfake” tech. cbsnews.com/news/doctored-nancy-pelosi-video-highlights-threat-of-deepfake-tech-2019-05-25/
12. Vice. (2023). How I Broke Into a Bank Account With an AI-Generated Voice. vice.com/en/article/dy7axa/how-i-broke-into-a-bank-account-with-an-ai-generated-voice/
13. Variety. (2023). Ghostwriter’s ‘Heart on My Sleeve,’ the AI-Generated Song Mimicking Drake and the Weeknd, Submitted for Grammys. variety.com/2023/music/news/ai-generated-drake-the-weeknd-song-submitted-for-grammys-1235714805/
14. Better Business Bureau. (2022). BBB Scam Alert: Get rich quick scheme uses deepfake technology to impersonate Elon Musk. bbb.org/article/scams/27185-bbb-scam-alert-get-rich-quick-scheme-uses-deepfake-technology-to-impersonate-elon-musk
15. Independent. (2020). Channel 4 creates ‘deepfake’ Queen for alternative Christmas message. independent.co.uk/news/uk/home-news/queen-deepfake-channel-4-christmas-message-b1778542.html
16. Newsweek. (2019). MIT Deepfake Video ‘Nixon Announcing Apollo 11 Disaster’ Shows the Power of Disinformation. newsweek.com/richard-nixon-deepfake-apollo-disinformation-mit-1475340
17. Dezeen. (2019). Museum creates deepfake Salvador Dalí to greet visitors. dezeen.com/2019/05/24/salvador-dali-deepfake-dali-musuem-florida/
18. NY Times. (2013). Study Suggested Hip Device Could Fail in Thousands More. nytimes.com/2013/01/23/business/jj-study-suggested-hip-device-could-fail-in-thousands-more.html
19. Business Insider. (2018). Scarlett Johansson says trying to stop people making deepfake porn videos of her is a ‘lost cause’. businessinsider.com/scarlett-johansson-stopping-deepfake-porn-of-me-is-a-lost-cause-2018-12
20. MIT Management. (2020). Deepfakes, explained. mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained
21. Ohio Attorney General. (2024). Beware of deepfake celebrity-endorsement scams. ohioattorneygeneral.gov/Media/Newsletters/Consumer-Advocate/April-2024/Beware-of-deepfake-celebrity-endorsement-scams
22. AP News. (2024). New Hampshire investigating fake Biden robocall meant to discourage voters ahead of primary. apnews.com/article/new-hampshire-primary-biden-ai-deepfake-robocall-f3469ceb6dd613079092287994663db5
23. Pediatrics. (2024). Suicide and Suicide Risk in Adolescents. publications.aap.org/pediatrics/article/153/1/e2023064800/196189/Suicide-and-Suicide-Risk-in-Adolescents
24. Poynter. (2024). AI detection tools for audio deepfakes fall short. How 4 tools fare and what we can do instead. poynter.org/fact-checking/2024/deepfake-detector-tool-artificial-intelligence-how-to-spot/
25. Scientific American. (2024). AI Audio Deepfakes Are Quickly Outpacing Detection. scientificamerican.com/article/ai-audio-deepfakes-are-quickly-outpacing-detection/
26. CNN Business. (2023). The viral new ‘Drake’ and ‘Weeknd’ song is not what it seems. cnn.com/2023/04/19/tech/heart-on-sleeve-ai-drake-weeknd/index.html
27. KQED. (2024). A political consultant faces charges and fines for Biden deepfake robocalls. npr.org/2024/05/23/nx-s1-4977582/fcc-ai-deepfake-robocall-biden-new-hampshire-political-operative
28. AP News. (2024). What to know about how lawmakers are addressing deepfakes like the ones that victimized Taylor Swift. apnews.com/article/deepfake-images-taylor-swift-state-legislation-bffbc274dd178ab054426ee7d691df7e
29. NCSL. (2024). Deceptive Audio or Visual Media (‘Deepfakes’) 2024 Legislation. ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation