Most of us have probably heard about deepfakes, fake news or AI-generated nonsense messing with politics or social media. But this isn’t just a problem for the internet. It’s now creeping into the workplace, and it’s a lot more serious than people whispering rumours by the coffee machine.
“Misinformation, whether shared intentionally or not, has become an increasingly critical challenge in today’s workplace,” says Lorena Blasco-Arcas, a professor of marketing at ESCP. And it’s not just about gossip. Companies are now dealing with everything from fake videos of their own executives to bot-written memos that never came from HR.
So how do you protect yourself and your team? We spoke with Blasco-Arcas to find out.
The new face of workplace lies
There’s always been a bit of chatter at work — that someone’s leaving, someone’s getting promoted, there’s a round of layoffs coming. But now, the misinformation doesn’t just come from internal whispers. It’s showing up as deepfake videos, AI-generated emails and even fake internal documents.
“Unverified claims about company decisions can spread rapidly,” says Blasco-Arcas. “They fuel anxiety, confusion and erode trust across teams.” Add in the possibility of a very convincing fake video of your CEO saying something they never said, and it’s easy to see how things could spiral fast.
For example: in early 2024, UK engineering firm Arup lost HK$200 million (about US$25 million) after an employee at its Hong Kong office was tricked by a deepfake of a senior executive.
“It wasn’t just a tech issue,” Blasco-Arcas notes. “It became a reputational and leadership crisis.”
The biggest risk? Your reputation
We often think of misinformation as a PR problem. But inside companies, the damage can go much deeper. “Reputational risk is often the most severe and far-reaching,” says Blasco-Arcas. A single incident — even if quickly debunked — can cause long-term harm. People don’t forget. Even worse, they often remember the fake story more clearly than the correction.
The numbers are worrying. Advisory firm Deloitte predicts that losses from AI-driven fraud in the US banking sector alone could rise from $12.3 billion in 2023 to $40 billion by 2027. Research group Gartner has also flagged a sharp increase in deepfake-related threats.
And it’s not just external bad actors. Poor internal communication — things like unclear policy updates or sloppy meeting summaries — can quickly lead to misunderstandings. It’s all connected, according to Blasco-Arcas.
So, what can you actually do?
Here’s the good news: you don’t need a PhD in AI to get ahead of this. But you do need a plan.
Companies should start by using tools that flag suspicious content or online chatter, says Blasco-Arcas. Early detection helps shut down false stories before they take hold.
Additionally, it can help to teach your team what is real and what is not. “Media literacy training is essential,” says Blasco-Arcas. Show people what deepfakes look like. Teach them to slow down before clicking, sharing or reacting to something questionable.
Transparent, timely communication is your best defence. “Clear channels for sharing accurate information and correcting errors are crucial,” she says.
Blasco-Arcas also recommends that you create and enforce policies around sharing unverified content. Make it clear what’s okay, what’s not — and what to do if something suspicious shows up.
It’s not just about tools — it’s culture
Technology can help. But this is really a people problem. The best defence against misinformation, according to Blasco-Arcas, is a workplace where people ask questions, think critically, and feel safe raising concerns.
She offers four things leaders should focus on:
- Lead by example. If you’re the boss, show your thinking. Admit when you don’t know something. Encourage respectful debate. Reward employees who think things through.
- Offer real training. Workshops on spotting fake news, understanding how algorithms work, or even just good digital hygiene can go a long way.
- Create a curious culture. Let people ask questions. Encourage them to challenge assumptions. Promote different viewpoints. When teams feel heard, they’re less likely to fall into groupthink or panic over false rumours.
- Promote safe skepticism. There’s a big difference between healthy doubt and toxic mistrust. Make space for concerns without shutting people down. “Transparency and consistency from leadership are key,” she says.
Deepfakes and misinformation aren’t just the IT team’s problem. HR, communications and compliance departments all have a role to play in building resilience.
HR teams need to build guardrails around AI use in hiring or performance reviews. Train people on what to look out for, says Blasco-Arcas — and how to report it.
Compliance teams should stay ahead of the law. Blasco-Arcas recommends auditing how AI is used internally and making sure policies cover ethics, privacy and misinformation risks.
For communications teams, it’s critical to have a crisis plan. Know how to verify content, and teach employees how to spot fakes and who to alert if they do.
“Cross-functional teams need to work together,” says Blasco-Arcas. “It’s the only way to ensure consistency and responsible AI use across the business.”
If you build a team that knows how to think critically, check facts and talk openly, you’ll be in a much stronger place to handle whatever the next wave of misinformation throws your way.
“Ultimately,” says Blasco-Arcas, “the most resilient companies are those that foster open dialogue, encourage critical thinking and lead with clarity and consistency.”
Sounds like a good place to start.