On the Sunday before New Hampshire's first-in-the-nation presidential primary, an estimated 5,000 to 25,000 voters received a call that appeared to come from President Joe Biden. In the call, Biden told his supporters not to participate in the primary and to instead save their votes for the election in November 2024. It turned out that the call was not actually from Biden but was a deepfake robocall mimicking the president's voice.
What is a “Deepfake”?
Deepfakes are synthetic media, such as images, videos, and audio, that have been altered or generated using artificial intelligence (AI). The word combines deep learning, a subset of machine learning designed to mimic human learning and thinking, with the word fake, because, well, it’s fake.
Most deepfakes are manipulated images, videos, and audio clips in which the subject’s face or voice has been replaced or synthesized. They can be extremely realistic and difficult to spot with the human eye or ear, and as the underlying technology has advanced, modern deepfakes have only grown more sophisticated, with fewer obvious giveaways.
Creating a deepfake typically requires two competing algorithms: a generator and a discriminator. The generator is fed a large set of images and videos and tries to create the most convincing doctored replica it can, while the discriminator works to detect whether a given image is fake. Pitting the two against each other, an arrangement known as a generative adversarial network (GAN), pushes the generator to produce ever more realistic deepfakes.
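To make the generator-versus-discriminator dynamic concrete, here is a minimal sketch of a GAN training loop in PyTorch. It uses tiny fully connected networks and random tensors as stand-ins for real images; every size, name, and hyperparameter below is illustrative, and real deepfake systems use far larger models trained on actual photos and videos.

```python
# Minimal GAN training loop: a generator learns to fool a discriminator.
# All dimensions and data here are toy stand-ins, not a real deepfake model.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM, BATCH = 64, 784, 32    # e.g. a flattened 28x28 image

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),     # produces a fake "image"
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                      # real-vs-fake logit
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(BATCH, IMG_DIM)      # stand-in for real training images
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # 1) Train the discriminator to label real images 1 and fakes 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to make the discriminator output 1 ("real").
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

As the loop runs, the discriminator gets better at catching fakes, which in turn forces the generator to produce more convincing ones; the realism of the final output comes from this arms race rather than from either network alone.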
What is the true danger of a deepfake? They may sound relatively harmless; people on the internet use face-swap filters for fun all the time. But deepfakes are much more serious than mere filters. They are far more realistic, they have quietly become ingrained in our lives, and they can take a darker turn, causing substantial harm both to the victim of the identity theft and to the people associated with the victim.
A large part of this is because deepfakes spread widely on the internet, and the scope of their reach is only increasing by the day. Research from the verification platform Sumsub found that the number of deepfakes detected globally across all industries increased tenfold between 2022 and 2023.
Misinformation
The vast reach of deepfakes means that we as audiences can unknowingly consume and spread misinformation. After all, decision-making becomes risky when we have difficulty differentiating fact from fiction. A deepfake we see on the internet might give us false information on a certain topic, shaping our views and choices. We might then share this information with our friends, families, and colleagues. Once a deepfake is out on the internet, it is almost impossible to get rid of. Even after it has been proven to be fake, it often remains online, continuing to lure in unknowing netizens. The result is a continuous loop of misinformation.
As deepfakes blur the dangerous line between truth and fiction, people may have to start questioning almost everything they consume, asking themselves, “Is this actually real, or is it AI?” While skepticism and fact-checking are crucial in navigating the digital age, extreme distrust in media is just as harmful. To fact-check a piece of media, people often do a quick Google search to verify the information they just encountered. I do this myself, but a recent article published in Nature found that searching online about a piece of misinformation can actually increase the chance that a person ends up believing it. This is because when people fact-check this way, they risk falling into data voids and encountering more misinformation from low-quality sites.
Fraud and Scams
Deepfakes not only spread misinformation; they have also become a frequent tool for monetary fraud. They have been used to scam both individuals and companies out of money, sometimes in large quantities of up to millions of dollars. Fraudsters do this by modifying videos and audio clips to coerce targets into sending them money.
One of the most well-known cases happened earlier this year in Hong Kong, where scammers used deepfakes to pose as a company’s chief financial officer and his employees in a Zoom meeting. They then asked an employee to send money over. Believing it was an order from his superior, the employee unsuspectingly transferred a total of 200 million Hong Kong dollars, around $25.6 million USD. The victim of this scam has since been identified by CNN as the well-known British engineering firm Arup, the firm behind the design of the Sydney Opera House.
These scams are not limited to large companies like Arup; smaller companies and ordinary people are also being targeted. As Arup’s chief information officer Rob Greig told Fortune, “Attempts to defraud companies have risen sharply using various forms from phishing scams to WhatsApp voice cloning.”
This is an issue on a global scale, with swindlers all over the world using realistic deepfakes to cheat people out of their savings. Deepfake-related identity fraud increased by 704% in 2023 alone, including various cases of scammers using deepfakes to pose as the owner of an account in order to commit wire fraud, obtain passwords, and authorize transactions.
Pornographic Material
Deepfakes make it extremely easy to put anyone’s face into any kind of photo or video. While this capability can be used for fun, it is often exploited with malicious intent. In many cases, deepfakes are used to create pornographic material, which is then spread on the internet. Some of the earliest and most common victims of deepfake pornography are celebrities. In March of this year, a Channel 4 News analysis found the likenesses of nearly 4,000 celebrities on five different deepfake pornography websites.
Recently, the public’s attention was drawn back to deepfake pornography after an explicit deepfake image of Taylor Swift spread across the internet. In a span of roughly 17 hours, the picture was viewed over 47 million times. Swift’s case highlights the disturbingly common use of deepfakes for pornographic purposes. Back in 2019, the AI firm Deeptrace found that around 96% of the 15,000 deepfake videos it examined were pornographic, and that the primary targets of these videos tended to be women.
The creation and spread of pornographic deepfakes is extremely damaging. The nonconsensual use of a person’s image harms their social life and reputation. A 2020 survey by the Centre for International Governance Innovation (CIGI) found that the nonconsensual sharing of intimate images was considered the most harmful type of online harm by both women (82.8%) and men (71.2%).
Victims are often made to feel ashamed and humiliated for something that is not their fault. “It feels like a violation. It just feels really sinister that someone out there who’s put this together, I can’t see them, and they can see this kind of imaginary version of me, this fake version of me,” said Channel 4 News presenter Cathy Newman, herself a victim of deepfake pornography.
The Future/Conclusion
The growing use of deepfakes is a pressing issue, as falsehoods can easily be created and spread on the internet. Deepfakes are especially concerning compared to other forms of fraud because they are relatively low-budget and accessible to the average user. In the case of the Biden robocall, the magician who made it said in an interview with NBC, “Creating the fake audio took less than 20 minutes and cost only one dollar.” This accessibility is extremely dangerous: it means almost anyone can make a deepfake, and many more can fall victim to one.
In spite of all the dangers that deepfakes pose, regulation has not kept pace, largely because of how new the underlying AI is. However, newfound attention on the topic has led to calls for action to regulate deepfakes. Many believe that the harm of deepfakes can be greatly limited by adequate rules and regulations from governmental bodies.
Representatives from across the country have been calling for laws that protect the public from the dangers of AI technologies like deepfakes. Several states, including Georgia, Texas, Hawaii, Minnesota, New York, and Illinois, have already passed laws regulating deepfakes to some extent. Alongside legal regulation, the further development of AI can itself aid the effort: detection technologies like those developed by Stanford University and Intel have been key to spotting deepfakes and creating a safer online environment. As long as the problem of deepfakes persists, the calls for action will continue.
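To give a rough sense of how automated detection works, here is a hypothetical sketch of one common approach: training a binary classifier to label video frames as real or fake. This is only an illustration built on random stand-in data; the systems mentioned above rely on more specialized signals (Intel’s FakeCatcher, for instance, analyzes subtle blood-flow cues in faces), and nothing here reflects their actual implementations.

```python
# Hypothetical deepfake detector: a small CNN that classifies frames as
# real or fake. The data below is random noise standing in for labeled
# frames; a real detector would train on large datasets of genuine and
# manipulated video.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                       # logit: > 0 means "fake"
)

loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)

# Stand-in batch: 8 RGB frames at 64x64, each labeled real (0) or fake (1).
frames = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

logits = detector(frames)
loss = loss_fn(logits, labels)
opt.zero_grad(); loss.backward(); opt.step()

predicted_fake = torch.sigmoid(logits) > 0.5   # per-frame verdicts
```

In practice, detectors like this are locked in the same arms race as the generators they target: as detection improves, deepfake creators adapt, which is why detection tools need continual retraining on newly generated fakes.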