Most of us have probably heard about deepfakes, fake news or AI-generated nonsense messing with politics or social media. But this isn’t just a problem for the internet. It’s now creeping into the workplace, and it’s a lot more serious than people whispering rumours by the coffee machine.
“Misinformation, whether shared intentionally or not, has become an increasingly critical challenge in today’s workplace,” says Lorena Blasco-Arcas, a professor of marketing at ESCP. And it’s not just about gossip. Companies are now dealing with everything from fake videos of their own executives to bot-written memos that never came from HR.
So how do you protect yourself and your team? We spoke with Blasco-Arcas to find out.
The new face of workplace lies
There’s always been a bit of chatter at work — that someone’s leaving, someone’s getting promoted, there’s a round of layoffs coming. But now, the misinformation doesn’t just come from internal whispers. It’s showing up as deepfake videos, AI-generated emails and even fake internal documents.
“Unverified claims about company decisions can spread rapidly,” says Blasco-Arcas. “They fuel anxiety, confusion and erode trust across teams.” Add in the possibility of a very convincing fake video of your CEO saying something they never said, and it’s easy to see how things could spiral fast.
For example, in early 2024, UK engineering firm Arup lost HK$200 million (about US$25 million) after an employee at its Hong Kong office was tricked by a deepfake of a senior executive.
“It wasn’t just a tech issue,” Blasco-Arcas notes. “It became a reputational and leadership crisis.”
The biggest risk? Your reputation
We often think of misinformation as a PR problem. But inside companies, the damage can go much deeper. “Reputational risk is often the most severe and far-reaching,” says Blasco-Arcas. A single incident — even if quickly debunked — can cause long-term harm. People don’t forget. Even worse, they often remember the fake story more clearly than the correction.
The numbers are worrying. Advisory firm Deloitte predicts losses from AI-driven fraud in the US could rise from $12.3 billion in 2023 to $40 billion by 2027 in the banking sector alone. Research group Gartner has also flagged a sharp increase in deepfake-related threats.
And it’s not just external bad actors. Poor internal communication — things like unclear policy updates or sloppy meeting summaries — can quickly lead to misunderstandings. It’s all connected, according to Blasco-Arcas.
So, what can you actually do?
Here’s the good news: you don’t need a PhD in AI to get ahead of this. But you do need a plan.
Companies should start by using tools that flag suspicious content or online chatter, says Blasco-Arcas. Early detection helps shut down false stories before they take hold.
Additionally, it can help to teach your team what is real and what is not. “Media literacy training is essential,” says Blasco-Arcas. Show people what deepfakes look like. Teach them to slow down before clicking, sharing or reacting to something questionable.
Transparent, timely communication is your best defence. “Clear channels for sharing accurate information and correcting errors are crucial,” she says.
Blasco-Arcas also recommends that you create and enforce policies around sharing unverified content. Make it clear what’s okay, what’s not — and what to do if something suspicious shows up.
It’s not just about tools — it’s culture
Technology can help. But this is really a people problem. The best defence against misinformation, according to Blasco-Arcas, is a workplace where people ask questions, think critically, and feel safe raising concerns.
She offers four things leaders should focus on:
- Lead by example. If you're the boss, show your thinking. Admit when you don't know something. Encourage respectful debate. Reward employees who think things through.
- Offer real training. Workshops on spotting fake news, understanding how algorithms work, or even just good digital hygiene can go a long way.
- Create a curious culture. Let people ask questions. Encourage them to challenge assumptions. Promote different viewpoints. When teams feel heard, they're less likely to fall into groupthink or panic over false rumours.
- Promote safe skepticism. There's a big difference between healthy doubt and toxic mistrust. Make space for concerns without shutting people down. "Transparency and consistency from leadership are key," she says.
Deepfakes and misinformation aren't just the IT team's problem. HR, communications and compliance departments all have a role to play in building resilience.
HR teams need to build guardrails around AI use in hiring or performance reviews. Train people on what to look out for, says Blasco-Arcas — and how to report it.
Compliance teams should stay ahead of regulation. Blasco-Arcas recommends auditing how AI is used internally and making sure policies cover ethics, privacy and misinformation risks.
For communications teams, it’s critical to have a crisis plan. Know how to verify content, and teach employees how to spot fakes and who to alert if they do.
“Cross-functional teams need to work together,” says Blasco-Arcas. “It’s the only way to ensure consistency and responsible AI use across the business.”
If you build a team that knows how to think critically, check facts and talk openly, you’ll be in a much stronger place to handle whatever the next wave of misinformation throws your way.
“Ultimately,” says Blasco-Arcas, “the most resilient companies are those that foster open dialogue, encourage critical thinking and lead with clarity and consistency.”
Sounds like a good place to start.