
The Growing Menace of Weaponized Deepfakes

The U.S. House Intelligence Committee last week heard expert testimony on the growing threat posed by “deepfakes” — altered videos and other artificial intelligence-generated false information — and what they could mean for the 2020 general elections, as well as the country’s overall national security.


The technologies collectively known as “deepfakes” combine or superimpose existing images and videos onto other images or videos, typically using machine learning techniques such as “generative adversarial networks,” or GANs.
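To make the GAN technique concrete, the following minimal sketch (in PyTorch) pits a generator against a discriminator for one training step. The layer sizes, learning rates and random “real” batch are toy assumptions for illustration only; they do not reflect the architecture of any particular deepfake tool.

```python
# Minimal sketch of a generative adversarial network (GAN), the class of
# models described above. Toy dimensions and a random "real" batch stand in
# for actual data; real deepfake systems use far larger convolutional nets.
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28  # assumed toy sizes, not from any real tool

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),          # produces a fake image vector
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                       # real-vs-fake score (a logit)
)

loss = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.rand(32, IMG) * 2 - 1           # placeholder "real" batch

# Discriminator step: learn to separate real images from generated ones.
d_opt.zero_grad()
fake = generator(torch.randn(32, LATENT)).detach()
d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
         loss(discriminator(fake), torch.zeros(32, 1))
d_loss.backward()
d_opt.step()

# Generator step: learn to produce images the discriminator calls "real".
g_opt.zero_grad()
fake = generator(torch.randn(32, LATENT))
g_loss = loss(discriminator(fake), torch.ones(32, 1))
g_loss.backward()
g_opt.step()
```

The two networks improve in opposition: the generator’s only training signal is whether its output fools an ever-sharpening discriminator, which is what drives the realism of the resulting fakes.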


These capabilities have enabled the creation of fake celebrity videos — including pornography — as well as the distribution of fake news and other malicious hoaxes.


The hearing followed the widespread online distribution of a doctored video of House Speaker Nancy Pelosi, D-Calif., which made her appear impaired. The video made the rounds on social media and was viewed more than 2.5 million times on Facebook.


Deepfakes have become a bipartisan issue, with both Democrats and Republicans expressing concern over the use of manipulated videos as a tool of disinformation.


The House Intelligence Committee heard testimony from four experts in AI and disinformation about the risks deepfakes pose to the U.S. government and even to democracy itself. One expert also warned of the threat deepfakes could pose to the private sector: a video showing a CEO committing a crime, for example, could move a company’s stock price simply by entering circulation.


Whether in politics or the business world, even if a video is debunked, the damage can be lasting.



Deep History


The term “deepfakes” was coined in 2017, but the ability to modify and manipulate videos goes back to the Video Rewrite program, published in 1997. It allowed users to modify footage of a person speaking to depict that person mouthing the words from a completely different audio track.


The technique of combining videos and changing what was said has been used in Hollywood even longer, but it generally was a costly and time-consuming endeavor. The film Forrest Gump, for example, required a team of artists to composite the character, played by Tom Hanks, into historic footage. More than 20 years later, those painstaking results aren’t nearly as good as what today’s software can do.


Simple programs such as FakeApp — released in January 2018 — allow users to swap faces in videos easily. The app utilizes an artificial neural network and just 4 GB of storage to generate the videos.
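FakeApp’s internals were never formally published, but face-swapping tools of its generation are commonly described as pairing one shared encoder with a separate decoder per identity. The sketch below assumes that design, with toy layer sizes chosen purely for illustration.

```python
# Sketch of the shared-encoder / two-decoder autoencoder design commonly
# attributed to FakeApp-style face swappers (an assumption here, not the
# app's published source). One encoder learns a common face representation;
# each identity gets its own decoder. Swapping = encode A, decode with B.
import torch
import torch.nn as nn

FACE = 64 * 64 * 3  # assumed flattened face-crop size

encoder = nn.Sequential(nn.Linear(FACE, 512), nn.ReLU(), nn.Linear(512, 128))
decoder_a = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, FACE))
decoder_b = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, FACE))

def swap_a_to_b(face_a: torch.Tensor) -> torch.Tensor:
    """Render person A's pose and expression through B's decoder (the swap)."""
    return decoder_b(encoder(face_a))

# Training (not shown) reconstructs each person through their own decoder,
# so the shared encoder captures pose and expression common to both faces.
print(swap_a_to_b(torch.rand(1, FACE)).shape)  # torch.Size([1, 12288])
```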


The quality and detail of the output depend on how much visual material can be provided, but given that today’s political figures appear in hundreds of hours of footage, it is easy enough to make a compelling fake.



Fighting the Fakes


Technology to combat deepfakes is in development. The USC Information Sciences Institute (ISI) developed a tool that can detect fakes with up to 96 percent accuracy. It is able to detect subtle face and head movements, as well as unique video “artifacts” — the noticeable distortion of media caused by compression — which can also indicate that a video has been manipulated.


Previous methods for detecting deepfakes required frame-by-frame analysis of the video, but the USC ISI researchers developed a tool that has been tested on more than 1,000 videos and has proven to be less computationally intensive.


It has the potential to scale, so fakes could be detected automatically — and, more importantly, quickly — as videos are uploaded to Facebook and other social media platforms. Such near real-time detection could keep manipulated videos from going viral.


The USC ISI researchers rely on a two-step process. It first requires that hundreds of verified videos of a person be uploaded. A deep learning algorithm known as a “convolutional neural network” then allows researchers to identify features and patterns in the individual’s face. The tool can then determine whether a video has been manipulated by comparing its motions and facial features against that baseline.
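The following sketch illustrates that two-step idea: build a baseline embedding from verified clips of a person, then flag a suspect clip whose embedding falls far from it. The network, threshold and placeholder tensors are illustrative assumptions, not the USC ISI implementation.

```python
# Hedged sketch of the two-step detection idea: (1) build a baseline of
# embeddings from verified videos of a person, (2) flag a new clip whose
# facial-feature embedding falls far from that baseline. The CNN, threshold
# and frame pipeline here are assumptions, not the researchers' actual code.
import torch
import torch.nn as nn

cnn = nn.Sequential(  # stand-in convolutional feature extractor
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
)

def embed(frames: torch.Tensor) -> torch.Tensor:
    """frames: (N, 3, H, W) face crops -> one averaged 64-d embedding."""
    with torch.no_grad():
        return cnn(frames).mean(dim=0)

# Step 1: baseline from verified footage of the person (placeholders here).
verified_clips = [torch.rand(8, 3, 64, 64) for _ in range(5)]
baseline = torch.stack([embed(c) for c in verified_clips]).mean(dim=0)

# Step 2: compare a suspect clip's embedding against the baseline.
suspect = torch.rand(8, 3, 64, 64)
distance = torch.dist(baseline, embed(suspect)).item()
THRESHOLD = 5.0  # assumed; would be calibrated on held-out real/fake clips
print("possible deepfake" if distance > THRESHOLD else "consistent with baseline")
```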


The results are similar to a biometric reader that recognizes a face, retina scan or fingerprint — but just as with those technologies, a baseline is required for comparison. That could be easy for famous individuals such as Speaker Pelosi or actor Tom Hanks, but for the average person it probably won’t be, as the database of existing video footage may be limited or nonexistent.



Potential to Be Weaponized


Deepfakes have the potential to be far worse and do far more damage than “Photoshopped” images — at both the individual and the national level.


“There is a world of difference between Photoshopped images and AI-aided videos, and people should be concerned with deepfakes because of their heightened realism and potential for weaponization,” warned Usman Rahim, digital security and operations manager for The Media Trust.


One reason is that people today accept that photos can be altered, so much so that altered images have earned the moniker “cheapfakes.” Video is a new frontier.


“Much fewer are aware of how realistic fake videos have become and how easily they can be made in order to spread disinformation, destroy reputations, or disrupt democratic processes,” Rahim told the E-Commerce Times.


“In the wrong hands, deepfakes spread through the Internet, especially social media, can have a large impact on individuals — and more broadly, societies and economies,” he added.


“Aside from the national security risk — e.g., a deepfake video of a world leader used to incite terrorist activity — the political risk is especially high in a competitive national election such as 2020, with multiple candidates seeking to unseat a controversial incumbent,” noted Larry Parnell, associate professor and strategic public relations program director in the Graduate School of Political Management at George Washington University.


“Either side might be tempted to engage in this activity, and that would make ‘old school’ dirty tricks seem mundane and quaint,” he told the E-Commerce Times. “We have already seen how social media can be used to impact a national election in 2016. That will seem like child’s play compared to how advanced this technology has become in the last two-to-three years.”



Beyond Politics and Security Risks


Deepfakes could present a problem on a much more personal level. The technology already has been used to create revenge porn videos, and the potential is there to use it for other nefarious purposes.


“In the hands of unsupervised kids, deepfakes can raise cyberbullying to a new level,” said The Media Trust’s Rahim.


“Imagine what happens if our own or our children’s images are used and distributed online,” he added.


“We might even see fake videos and social media posts being used in legal proceedings as evidence against a controversial figure to silence them or destroy their credibility,” warned GW’s Parnell.


There already have been calls to hold the tech industry responsible for the creation of deepfakes.


“If you create software that allows a user to create deepfakes, well, then you will be held liable for significant damages, maybe even held criminally liable,” argued Anirudh Ruhil, a professor in the Voinovich School of Leadership and Public Affairs at Ohio University.


“Should you be a social media or other tech platform that disseminates deepfakes, you will be held liable and pay damages, maybe even jail time,” he told the E-Commerce Times.


“These are your only policy options, because otherwise you will have the social media platforms and websites going scot-free for pushing deepfakes to the mass public,” Ruhil added.


It is possible the authors of such heinous videos may not be found easily, and in some cases they could be a world away, making prosecution a non-starter.


“In some ways, this policy is similar to what someone might argue about gun control: Target the sellers of weapons capable of causing massive damage,” explained Ruhil. “If we allow the tech industry to skate free, you will see repeats of the same struggles we have had policing Facebook, YouTube, Twitter and the like.”



Fighting Back


The good news about deepfakes is that in many cases the technology still isn’t perfect, and there are plenty of telltale signs that a video has been manipulated.


Also, there are already tools that can help researchers and the media tell fact from fiction.


“Social media platforms and traditional media can use these tools to identify deepfakes and either remove them or label them as such, so users aren’t fooled,” said Rahim.


Another solution could be as simple as adding “digital noise” to images and files, making it harder to use them to produce deepfakes.
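As a rough illustration of the idea, the sketch below adds a faint layer of random noise to a photo before it is shared. Plain Gaussian noise, the noise strength and the filename are all stand-in assumptions; research-grade defenses craft adversarial perturbations specifically tuned to degrade face-swapping models.

```python
# Simple sketch of the "digital noise" idea: perturb an image slightly
# before posting so it becomes a worse training sample for deepfake models.
# Plain Gaussian noise is the simplest possible stand-in for the real thing.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("portrait.jpg"), dtype=np.float32)  # hypothetical file
noise = np.random.normal(0.0, 4.0, img.shape)   # assumed strength: ~4/255 per channel
noisy = np.clip(img + noise, 0, 255).astype(np.uint8)
Image.fromarray(noisy).save("portrait_noised.jpg")
```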


However, just as in the world of cybersecurity, it’s likely the bad actors will stay one step ahead — so today’s solutions may not counter tomorrow’s methods for producing deepfakes.


It may be necessary to put more effort into solving this problem before it grows too large to solve.


“While it may be a constant and expensive process, the major tech companies should invest now in emerging technology to spot deepfake videos,” suggested Parnell.


“Software is being developed by DARPA and other government and private sector companies that could be utilized, as the alternative is to be caught flat-footed and be publicly criticized for not doing so — and suffer the serious reputation damage that will result,” he added.


For now, the best thing that can happen is for publishers and social media platforms to call out and root out deepfakes, which will help restore trust.


“If they don’t, their credibility will continue to dive, and they will have a hand in their own business’ demise,” said Rahim.


“Distrust of social media platforms in particular is rising, and they are seen almost as much of a threat as hackers,” he warned.


“The era of prioritizing the monetization of consumer data at the expense of maintaining or regaining consumer trust is giving way to a new era where online trust works hand-in-glove with growing your bottom line,” Rahim pointed out. “Social and traditional media can also be a force for good by outing bad actors and raising consumers’ awareness of the prevalence and threats of deepfakes.”



Peter Suciu has been an ECT News Network reporter since 2012. His areas of focus include cybersecurity, mobile phones, displays, streaming media, pay TV and autonomous vehicles. He has written and edited for numerous publications and websites, including Newsweek, Wired and FoxNews.com.

