
Russian mobile phone operator MegaFon recently unveiled a new ad starring none other than actor Bruce Willis… well, at least parts of Bruce Willis.

Using artificial intelligence (AI), the company placed Willis’ face onto that of another actor – a technique popularly known as a deepfake. While Willis gave full permission for the use of his likeness in this case, most deepfakes are created with far more nefarious intent. Deepfakes are becoming easier to produce and can pose problems for any prominent figure, which makes awareness of these threats more important than ever for communications specialists.

Usually, when a new technology is introduced, it starts out as harmless entertainment. But as we’ve seen with the rise of social media and “fake news,” bad actors will always look for ways to use emerging tools to con and deceive. As the pace of technological change accelerates, so must our ability to adapt and understand where these threats exist. The reality is that anyone with internet access and malicious intent can spread false information; it’s inevitable. Just because MegaFon openly disclosed that the Willis in its campaign was not the actual actor doesn’t mean everyone will. And because deepfakes can be produced so easily, anyone can become a victim, including prominent leaders.

Former President Donald Trump fell victim to a deepfake during his presidency. A Belgian socialist party produced a deepfake video in which the former President appears to give a speech about leaving the Paris climate agreement. The video spread across the internet, and some viewers believed it really was President Trump. According to a member of the party, the video was never meant to deceive anyone. But whether harm was intended or not, the damage was done the moment the creator clicked the upload button.

Unfortunately, once any video or image is published on the internet, it’s there forever and can be easily manipulated.

Here are some things communications specialists can be doing to stay prepared:

  1. Employ digital monitoring and social listening tools to identify malicious content quickly
  2. Educate employees and relevant stakeholders on how to spot deepfakes
  3. Establish relationships with contacts at the major social media platforms who can help address deepfakes that surface on those channels
  4. Create a protocol ahead of time for identifying, escalating, and responding to deepfake incidents
  5. Keep track of the latest advances in AI technology, both the tools that produce deepfakes and those that can detect them

Misinformation will always circulate on the internet, and it has the potential to do significant damage to individuals and brands. Creating a deepfake isn’t difficult; a simple Google search or YouTube tutorial can guide just about anyone through the process. Brands and reputations can be tarnished in a matter of minutes.

Now is the time for communications specialists to prepare to respond to deepfakes with authenticity and transparency as their guiding principles.