It's time to get real about deepfakes
Anyone who spends a decent amount of time on the Interwebs has seen them. No, we’re not talking about those thousands of photos of Grumpy Cat (RIP) or the countless videos of people doing the Bird Box Challenge or even the endless stream of Aunt Becky memes that seemingly flow through social media like an atmospheric river camped out over California. No, instead we’re talking about deepfakes.
For the uninitiated, a deepfake refers to the process of using technology to superimpose an existing image or video onto a separate image or video in such a way that it becomes difficult for the viewer to tell the difference between the two. Put more simply: you take a video of yourself on your iPhone reciting lines from your favorite movie, and you superimpose that video onto a clip of said scene in said movie. The end product—the deepfake—looks as though you starred in the movie instead of the original actor. It may sound strange—and it admittedly is—but it can be surprisingly effective. Want proof? Watch Arnold Schwarzenegger star in No Country for Old Men thanks to the folks at ctrl shift face:
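For readers who want a feel for the mechanics, here is a deliberately crude sketch of the "superimpose one face onto another frame" idea using classical computer-vision tools (OpenCV) rather than the deep neural networks that power real deepfakes. The filenames are hypothetical placeholders, and the result will look obviously pasted-on; actual deepfakes train autoencoder or GAN models on thousands of frames to blend expression, lighting and motion convincingly, which is exactly what makes them so hard to spot.

```python
# Crude face superimposition with classical tools -- an illustration of the
# "paste one face onto another frame" idea, NOT a real deepfake (which uses
# deep neural networks trained on large amounts of footage).
# "my_face.jpg" and "movie_frame.jpg" are hypothetical placeholder files.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(image):
    """Return (x, y, w, h) of the first face the cascade detector finds."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face detected")
    return faces[0]

source = cv2.imread("my_face.jpg")      # the face you want to insert
target = cv2.imread("movie_frame.jpg")  # the frame you want to insert it into

sx, sy, sw, sh = first_face(source)
tx, ty, tw, th = first_face(target)

# Crop the source face and scale it to the size of the target face.
face_patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))

# Blend the patch into the target frame so the seams are less jarring.
mask = 255 * np.ones(face_patch.shape, face_patch.dtype)
center = (tx + tw // 2, ty + th // 2)
swapped = cv2.seamlessClone(face_patch, target, mask, center, cv2.NORMAL_CLONE)

cv2.imwrite("swapped_frame.jpg", swapped)
```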
While perhaps just now starting to gain more mainstream awareness, deepfakes have been around for a while. In fact, Wikipedia suggests the term has been in use since at least 2017, and in 2018 we were warned via some high-profile PSAs about the potential dangers associated with the technology.
While the movie use case can feel fun and lighthearted, the technology clearly continues to advance in ways that may have unintended consequences. With the dawn of a new decade and an extremely consequential election cycle already underway in the United States, it feels frighteningly likely that we’re about to witness an explosion of deepfakes and deepfake-like content.
All of this raises a question that needs to be asked continually in Silicon Valley: just because we can do something technically, does that mean we should do it? Sure, consumers love technology, but 69% of those recently surveyed said that change was happening too fast. The same survey found that when it comes to ethics, a vast majority of consumers expect brands to use technology and consumer data ethically, always.
So, in the case of deepfakes—how does the technology create something that positively impacts the greater good and moves us forward as a society? Do a few minutes of entertainment spent re-imagining famous movie scenes outweigh the potential for misuse or even abuse by hostile nation states, general bad actors, politicians or others? From where I sit, the answer is obviously no.
There are clear consequences when people constantly have to distinguish between real content and something like a deepfake in their everyday lives. Given the vast amount of information coming at us each day, the reality is that most people likely won’t even attempt to verify whether a video that looks and sounds realistic actually is. The issue is further exacerbated by the dangerous bubbles we’ve all created via our algorithmically curated feeds, which likely mean a deepfake shared by a few close friends or acquaintances will be passed along as fact, no questions asked. Twitter is at least working to find ways to label content it sees as manipulated. In most cases, however, the social platforms will not remove such content unless it threatens someone’s physical safety.
So, what can the rest of us do about it?
As professional communicators we have a responsibility to tackle these issues head-on, particularly as the lines between technology and communications continue to blur. Here are four thought-starters for areas where we might focus our efforts.
Brand Responsibility
83% of consumers believe that brands can provide stability during times of turbulence. The brands that are most successful moving forward will be those able to earn and maintain consumer trust. This is not a simple undertaking. Trust is earned over time and requires constant focus and effort to maintain. That’s where we, as communicators, can play a significant role. Question number one has to be: what are we doing, today and in the future, to ensure our brand communications are as open and transparent as possible and that we’re not enabling or intentionally promoting the spread of disinformation?
Technology Education
We know most consumers feel technological innovation is moving too fast to keep up with. Those on the front lines of building new things and bringing new technologies to market need to focus their communications efforts on educating consumers about the intent behind such innovation. Consumers need to know not just the how but also the why behind these new developments. How will something improve our collective experience and make life better? I’m not calling for companies to ramp up fluffy marketing, but rather to consistently deliver true education that brings people along on the journey. Perhaps, as marketers, we should focus more on creating shared experiences and showing people how things work and why they are being created in the first place. It’s certainly not the easiest of approaches, but perhaps it’s closer to the right way forward.
News + Online Content Literacy
As someone who studied mass communications in college and has spent the better part of the past twenty years working in the industry, I feel like I’ve got a decent handle on how to process, explore and verify the information I come across online. Most people, however, don’t have my background. We need to do far more to educate consumers about the best ways to discover and verify the content they encounter online, and we need to start earlier. In school, my kids are currently being taught that Wikipedia is an online resource that should not be trusted or cited in research papers unless they can verify its content with two or three additional sources. It’s an approach that feels ideally suited for 2009, not 2019. The concept of deepfake technology is so far from being on the radar that it might be 2029 before it makes it into elementary school curricula, and by then it will be far too late. We need to be significantly more aggressive in educating our children about the era of disinformation and about how to verify whether something they see or read online is in fact true.
Brand Preparedness
A big part of our job as communicators is to protect the brand. That starts with being prepared. While deepfakes haven’t yet been used in a systematic way to tarnish a brand, that doesn’t mean they won’t be in the coming months. Imagine waking up one morning to a video making the rounds that appears to show your CEO saying something completely off-color to an employee. It’s not only racking up views online; it’s being shown and discussed live on CNBC and Fox Business, and the stock is tanking as a result. This isn’t a far-fetched scenario. The question is, what are you going to do about it? How will you ‘prove’ it is false? Now is the time to integrate disinformation campaigns and deepfakes into your crisis and scenario planning so you’re not caught flat-footed should it happen.
What are your thoughts on deepfake technology, the era of disinformation and what we can collectively do as communicators to combat it?