In today’s digital age, deepfakes, whether in videos, images, or audio clips, pose a significant threat. Given sufficient data, these manipulations can be created of anyone and used to damage your organization’s image.
Before we take a deeper dive into communication tips for handling deepfakes, we want to explain what a deepfake is:
A deepfake is a type of artificial intelligence (AI)-generated media that replaces or alters the likeness of an individual in an existing image, video, or audio recording with someone else’s. The term “deepfake” combines “deep learning” and “fake,” reflecting the advanced AI techniques used to create highly realistic forgeries.
How Are Deepfakes Created?
Deepfakes are typically created using deep learning, a subset of AI that utilizes neural networks with many layers (hence “deep”). Here’s a breakdown of the process:
- Data Collection: The first step involves collecting extensive data on the target individual. This data can include images, videos, and audio recordings, often readily available online through social media, interviews, and other public sources.
- Training the Model: The collected data is used to train a deep learning model.
- Generative Adversarial Networks (GANs) are commonly employed for video and image deepfakes. GANs consist of two neural networks: a generator and a discriminator. The generator creates fake images or videos, while the discriminator attempts to distinguish between real and fake. Through iterative training, the generator improves its ability to produce realistic forgeries (see the training-loop sketch after this list).
- For audio deepfakes, a similar approach is taken using models like WaveNet or Tacotron, which can synthesize realistic speech from text inputs by mimicking the vocal characteristics of the target individual. Audio deepfakes are particularly easy to produce, requiring just a few minutes of someone’s voice. Given how many video recordings are available online, there is likely enough material to replicate most people’s voices.
- Synthesis: Once the model is trained, it can generate new content by manipulating existing media. For example, it can overlay the target’s face onto another person’s body in a video or create an audio recording that sounds like the target’s voice saying something they never actually said.
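To make the generator-versus-discriminator training described above more concrete, here is a minimal, illustrative sketch in PyTorch. The toy layer sizes, image dimension, and hyperparameters are assumptions for demonstration only; real deepfake systems use far larger, specialized architectures and much more data.

```python
# Minimal GAN training loop (illustrative only; toy dimensions and hyperparameters).
import torch
import torch.nn as nn

latent_dim, image_dim = 100, 64 * 64 * 3  # assumed toy sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, image_dim), nn.Tanh(),      # produces a fake "image" vector
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),           # estimates the probability the input is real
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real samples from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator into scoring fakes as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Each call to training_step nudges both networks: the discriminator gets better at spotting fakes, which in turn forces the generator to produce more convincing ones.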
While some AI programs have measures to prevent cloning the voices of prominent politicians, the general public remains vulnerable to deepfakes. This calls for proactive measures to mitigate risks associated with deepfakes.
What are the to-dos when a deepfake video or audio clip about your company or CEO is published?
- Assemble a Multidisciplinary Team: Engage relevant stakeholders such as IT, HR, legal, and communications departments before a crisis occurs. This ensures a coordinated response when dealing with potential deepfakes.
- Utilize Forensic Tools: In addition to verifying with the individual depicted, leverage forensic tools designed to detect deepfakes. These tools can help distinguish between genuine and manipulated content (see the screening sketch after this list).
- Evaluate the Need for a Response: Not all deepfakes require a formal response. Assess the impact based on who is reacting and discussing the deepfake. Social media chatter alone does not always warrant a formal statement.
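To illustrate what automated screening with such tools might look like, here is a hedged Python sketch that extracts frames from a video with OpenCV and scores each one. The detector object and its predict_proba method are hypothetical placeholders for whatever forensic tool or vendor API your organization actually uses.

```python
# Hedged sketch of frame-by-frame deepfake screening; the detector interface is assumed.
import cv2  # OpenCV, used here only for frame extraction

def screen_video(path: str, detector, threshold: float = 0.5) -> float:
    """Return the fraction of frames the (hypothetical) detector flags as likely manipulated."""
    capture = cv2.VideoCapture(path)
    flagged, total = 0, 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video or read error
            break
        total += 1
        # Assumed interface: the detector scores a single frame in the range [0, 1].
        if detector.predict_proba(frame) > threshold:
            flagged += 1
    capture.release()
    return flagged / max(total, 1)
```

A score like this is only one signal; in practice it should be combined with metadata checks and, above all, direct verification with the person depicted.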
By adopting these practices, organizations can better navigate the complexities of deepfakes, ensuring they respond appropriately and maintain their credibility.
Well-thought-out communication is crucial.
Leaders are under considerable pressure in a crisis, such as when a deepfake becomes public. Stakeholders expect information quickly, and communications teams are urged to publish press releases and statements. But caution is advised: rushed communication can increase the damage.
Why rushed communication can be damaging:
- Loss of control: Without complete facts, there is a risk of spreading false information and losing control of the narrative.
- Exacerbation of the crisis: Contradictory statements or a lack of transparency can deepen the crisis and damage stakeholder confidence.
- Legal risks: Careless statements can have legal consequences.
The right strategy: act prudently
Instead of communicating hastily, companies should take the following steps:
- Gather facts: Prioritize internal collaboration and gather all relevant information. Have forensic investigations been carried out to clarify the facts?
- Create a communication plan: Develop a clear communication plan that defines who will communicate what information to which stakeholders, when, and how.
- Align messages: Define clear messages that are truthful, transparent, and empathetic. Train your spokespeople to handle media inquiries.
- Communicate proactively: Inform your stakeholders proactively and regularly about the state of affairs. Use various channels such as press releases, employee communications, and social media.
- Be willing to engage in dialogue: Respond openly to stakeholder questions and concerns. Be solution-oriented and offer support.
Prudent and transparent communication is crucial in a crisis, such as when a deepfake becomes known. Companies that take these principles to heart can limit the negative impact and maintain the trust of their stakeholders.