Media Monitoring in an Era of AI Deepfakes and Fake News

The speed and extent to which artificial intelligence (AI) is edging into our daily lives are remarkable. In just a few years, we’ve gone from asking questions of Siri and Alexa to students using ChatGPT to write term papers. And what made those term papers headline-worthy was how difficult it was to detect the AI authorship.

Deepfakes are even more troubling, or more remarkable, depending on your perspective. A deepfake is a video digitally altered to trick viewers into believing that a person said or did something they never said or did. These alterations are enabled by the use (and misuse) of artificial intelligence, typically through facial manipulation or face swapping, and the results can be unnervingly convincing.

The more AI learns, the easier this process becomes. In March, Ethan Mollick, a business professor at the University of Pennsylvania’s Wharton School, decided to see how quickly and inexpensively he could generate a “deepfake” video of himself.

It took him just eight minutes and US$11 to create a plausible video of himself giving a talk on a topic he had never presented on before.

With the tools to create this type of video widely available (Mollick started his process with ChatGPT, which is free), PR professionals need to start thinking now about how to address deepfakes’ impact on news gathering, crisis communications, and even their own work.

The stakes are high. In late May, an image showing an explosion at the Pentagon turned out to be an AI-generated hoax. Public safety officials quickly issued statements saying there had been no explosion, and sharp-eyed individuals debunked the photo by pointing out errors in the way columns and fences were rendered. Even so, that image, along with a subsequent one showing a fake explosion at the White House, led to a brief stock sell-off.

Adding to the confusion was Twitter’s now unreliable account verification system. Some of the accounts sharing the faked image carried “blue checks,” a badge that was once issued only after Twitter verified that the owner of an account was who or what they claimed to be, but that is now available to anyone who pays a monthly fee. This means fake accounts can be created that impersonate legitimate outlets, including trusted news services. One of the accounts that shared the fake Pentagon image was impersonating Bloomberg News, a respected news organization.

What should PR professionals do?

First and foremost, pause.

There was one big hint that the Pentagon story was false, aside from the oddities in the image: no individual eyewitness accounts were posted on social media. No photos from other angles, no astonished employees, nothing amounting to an on-the-scene witness. Big news like an explosion would likely have many witnesses, and the absence of other voices and images was telling.

Next, be ready.

Crisis communications training is essential, yet few organizations take the time to map out how they would respond to a variety of scenarios. Add “deepfakes/AI-generated images” to the list of potential crises, and practice your response.

Consider the ethics, and understand the limits of these tools.

Many of these tools are so easy to use and so widely available that they are tempting, and using them may be completely appropriate, depending on the task at hand. But using AI means thinking about the ethics of what you pass off as your own work. Plagiarism, we all know, is wrong. Using someone else’s work without attribution is unethical, and in many PR firms it can get you fired. So where do machine learning and machine assistance fit in?

[Image: Boris Eldagsen’s AI-generated photo, which won a prize at the Sony World Photography Awards]

The art world has already seen at least two cases of AI-generated images winning contests and awards, and stirring controversy. In September 2022, an AI-generated picture won the blue ribbon at the Colorado State Fair, and in April 2023, photographer Boris Eldagsen acknowledged that his prize-winning entry at the Sony World Photography Awards was created using AI.

Being aware of the limitations of AI tools, and how they work, is just as important as thinking through the ethics. Large language models can confidently generate plausible but false information, a failure mode often called “hallucination.” Recently, a lawyer discovered this the hard way: he used ChatGPT to write a legal brief, which he then submitted in court. ChatGPT had made up the supporting cases cited in the brief; they simply didn’t exist.

And one more thing to think about: if you have a non-disclosure agreement (NDA) in place with a client, using AI to develop content might violate provisions requiring you to protect your work product from disclosure, since material entered into these tools is typically transmitted to a third party’s servers and may be retained or used for training.

Practice radical skepticism.

This is perhaps going to be the hardest thing to put into practice, because PR professionals seem hard-wired to consume large amounts of news and information, and we pride ourselves on being timely and on top of what is happening. But bad actors are depending on this very trait to help disseminate and, frankly, legitimize the fake material they generate. Retraining our brains to be skeptical of “BREAKING NEWS” is going to be very difficult, but that skepticism is likely to be an essential trait for PR pros in the future.

We seem to be right on the cusp of a very different and potentially unsettled time for news. There is likely no way to stop what is coming, so preparing and learning as much as possible will be essential to navigating what lies ahead.

Speak with one of our experienced consultants about your media monitoring and communications evaluation today.