
Deepfakes Present the Threat of Truly Fake News (Part 2)

(Annegordon/Dreamstime.com)

By Lee Gruenfeld   |   Thursday, 17 January 2019 03:00 PM EST

Part 2: The Solution

Last week in Part 1 we talked about the threat from “deepfakes,” which are pictures, audio recordings or video footage showing real people saying and doing things they never actually said or did.

The problem is real, the threat serious, so what can be done about it?

The most obvious solution is to perfect methods of detecting fakes. As we saw in Part 1, that’s a pretty useless endeavor. The technology is going to quickly evolve to the point where fakery is undetectable by technical means. And here’s something I didn’t even mention before, which relates to some method of “certifying” a photo or video as genuine: Do we even know what “genuine” means? Open any magazine to a fashion or perfume ad featuring a human model. It’s a virtual certainty that a graphics expert spent several full days manipulating the image using a tool like Photoshop. Is the resulting image “genuine?” When I take photos of an IRONMAN triathlon, as I do for the World Triathlon Corporation’s real-time feed several times a year, it’s not unusual for me to crop a shot, tilt it left or right to straighten it out, add a little fill light if it’s too dark, correct “red eye,” tweak the contrast, and fiddle with the shadow level before it gets posted. Is that certifiable as a genuine photo? Note, by the way, that I can do all of that editing on the spot by sending the shot to my iPhone, and it takes less than a minute.

There’s another way to detect a deepfake, and that’s by assessing whether the subject material is reasonable. Is it likely that someone said or acted in the manner depicted?

Okay, I was just kidding with that one. I’m not even going to bother demonstrating why that’s a futile avenue to wander down. Just re-read the bit about “confirmation bias” in Part 1 and consider the thousands of incredibly ridiculous videos and photos that have circulated around the internet and been accepted as real by millions of people, many of whom also sent money to that prince in Nigeria trying to export the family fortune.

There are some non-technical ways of detecting fakes. In the case of celebrities or politicians in public settings, multiple images or in-person witnesses could do the job. If 200 cameras are aimed at the president while he’s giving a speech on trade policy, one media outlet showing a video in which he advocates child slavery during his talk should, hopefully, be drowned out and dismissed because of the 199 that can document that he never said it.

Which leads to one radical suggestion that was made seriously, despite one or two obvious drawbacks. It’s reminiscent of a “driver protection” system popular in places like Russia. Nearly every private automobile in major cities like Moscow has a dash-mounted camera that’s on whenever the car is in motion. It’s in continuous record mode, which means that, in the event of an accident or other event, the driver can offer visual proof of his innocence of any wrongdoing (or, I suppose, claim the camera was off if it was his fault).

The suggestion with respect to deepfakes is that everyone carry a similar and preferably less bulky device on his or her person, 24/7. In the event of a deepfake involving that person, he or she can produce evidence of what he or she was actually doing during the time frame in question. Third-party services have already been proposed for accomplishing this. (I haven’t yet gotten to the part in the suggestion dealing with how one prevents the subject from faking his or her personal video in the event that the deepfake is, in fact, not fake at all. But it’s still a new field, and one assumes that the personal device might somehow be certifiable as tamper-proof, hopefully not by the same people who assured the security of your credit card information in Target’s database.)

Such a device would have helped the president cope with a video last May that showed him offering some harsh advice to the people of Belgium on how they should approach the issue of climate change, stirring outrage across the country. The video was a deepfake, and it wasn’t even a good one — the lip movements didn’t match what was being said — but people fell for it anyway. It didn’t occur in a public setting, so there were no other cameras or witnesses to refute it. Then again, there were no data to indicate when it took place, so it would have been difficult for the president to use a personal device to refute the video by showing what was actually going on at the time. And so it goes.

Some observers have told us not to worry about this problem at all because of the expense and difficulty of creating credible deepfakes. To that I say “Nonsense.” Never underestimate the speed of technological development when there is sufficient motivation involved. It happens fast and it happens big. If automobiles had advanced at the same rate that digital technology did, a Rolls Royce today would cost six bucks and get 800 million miles per gallon. And there are already apps available for making pretty good deepfakes.

There are, at least theoretically, some legal tools that might be brought to bear on individual cases of deepfake deception. Key among them is copyright protection. Nearly all deepfakes begin with an existing video of a real individual. If that original imagery is appropriated and reproduced without permission, there might be grounds for a civil action.

The unauthorized exploitation of someone’s image for commercial purposes is also actionable, which is why you don’t often see advertisers using a celebrity’s picture without permission, and when they do, a lawsuit generally follows. There’s also a case to be made for slander or libel under the right circumstances, as well as tortious interference, which might be invoked if someone’s career is derailed owing to a reputation-sullying fake.

The problem here, of course, is that you first have to find the perpetrators, which is often impossible. Even if you do locate them, they’re likely to be somewhere in Ukraine or Mongolia or some other place without an extradition treaty or a local constabulary sufficiently motivated to intercede on your behalf. Think about the hundreds of thousands of websites currently perpetrating scams and malware attacks and posting deepfake pornography that have been operating unimpeded for years. Even the people who are known to have attempted to corrupt the 2016 presidential election are running around unmolested.

The reason I know that this is going to get worse in a hurry is that a lot of what’s being written suggests “educating the public” as a solution. Whenever I see a phrase like that, I know the battle is half lost already because raising public awareness is about the dumbest approach imaginable for dealing with security or privacy issues. Despite massive amounts of “public education,” 63 percent of people don’t change their passwords regularly, 83 percent use the same password on multiple sites, and 86 percent don’t use a secure password in the first place.

So what’s the answer? I have no idea. The legal remedies might offer the best alternative, assuming a sufficiently punitive set of laws can be devised and rigorously enforced, but our track record in doing that for other forms of digital fraud has been laughable at best. Bear in mind that, for the most part, authorities in the United States don’t even bother pursuing deceptive or outright fraudulent advertising. And good luck trying to find a lawyer willing to take on a client whose face got digitally grafted onto a porn actress.

I’m waiting for the day a criminal defendant escapes conviction by claiming that perfectly legitimate visual evidence was faked. When that happens — and it’s only a matter of time — run for cover.

Lee Gruenfeld is a managing partner of Cholawsky and Gruenfeld Advisory, as well as a principal with the TechPar Group in New York, a boutique consulting firm consisting exclusively of former C-level executives and "Big Four" partners. He was vice president of strategic initiatives for Support.com, senior vice president and general manager of a SaaS division he created for a technology company in Las Vegas, national head of professional services for computing pioneer Tymshare, and a partner in the management consulting practice of Deloitte in New York and Los Angeles. Lee is also the award-winning author of fourteen critically acclaimed, best-selling works of fiction and non-fiction.

© 2024 Newsmax. All rights reserved.
