Artificial intelligence is transforming how we work and live, offering incredible opportunities for efficiency and innovation. But it also raises tough questions about ethics, fairness and the role of human decision-making. According to Lorena Blasco-Arcas, professor of marketing at ESCP, businesses are walking a fine line between embracing AI’s potential and staying true to human-centred values.
“The real challenge with AI is finding the balance between its ability to revolutionise efficiency and innovation, while still protecting human agency, ethics and well-being,” Blasco-Arcas explains.
AI’s potential to automate and optimise is exciting, but it’s not without risks. “Take healthcare, for example. While AI can provide faster, more accurate diagnoses, it could also override doctors’ decisions, which raises concerns about autonomy,” she says.
The same challenges show up in criminal justice, where AI tools are being used to help judges decide things like bail or sentencing. These systems analyse past cases to predict whether someone might reoffend, which can save time — but they come with risks if no one’s keeping a close watch.
For instance, an AI might recommend harsher sentences because of biased data, like patterns of racial or economic inequality baked into the system. If judges rely too much on these tools without questioning their recommendations, it could lead to unfair outcomes, taking away the human judgement that’s so important for ethical decisions.
Blasco-Arcas stresses that the goal shouldn’t be replacing humans but working alongside them: “We need to focus on AI augmenting human capabilities, not replacing them — especially in critical areas like healthcare, security or education.”
Efficiency vs. ethics
So, how can organisations balance the push for innovation with their responsibility to uphold ethical standards? Blasco-Arcas believes it starts with a clear, human-centred strategy.
“Businesses need strong ethical frameworks that guide their AI initiatives from the ground up,” she says. Transparency and accountability are key, along with building diverse teams to reduce bias in AI systems. “It’s also about fostering a culture of human-AI collaboration, where technology enhances rather than diminishes human input,” she adds.
She points to AI’s ability to amplify biases if not handled carefully. For example, AI recruitment tools have been shown to favour certain genders or races due to biased training data. “That’s where regular ethical audits and a commitment to using diverse datasets come in,” she adds.
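To make the idea of an ethical audit slightly more concrete, one common check is to compare a model’s selection rates across demographic groups, sometimes called a demographic parity check. The short sketch below is purely illustrative and is not a method Blasco-Arcas describes; the hypothetical audit data, the group labels and the 80% threshold are all assumptions made for the example.

```python
# Illustrative sketch of one possible fairness audit: comparing the
# selection rates a hypothetical recruitment model produces for each
# demographic group. The data, labels and threshold are assumptions
# used only for illustration.

from collections import defaultdict

# Hypothetical audit log: (group, model_recommended_hire) pairs.
candidates = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for group, hired in candidates:
    total[group] += 1
    selected[group] += int(hired)

# Selection rate per group.
rates = {g: selected[g] / total[g] for g in total}
print("Selection rates:", rates)

# Flag the audit if any group's rate falls below 80% of the highest rate,
# an assumed threshold loosely modelled on the "four-fifths rule".
best = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * best}
if flagged:
    print("Potential disparate impact, review needed:", flagged)
```

In practice such a check would run on real audit logs rather than a hard-coded list, and demographic parity is only one of several fairness definitions an organisation might choose to monitor.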
Where AI and human values clash
AI often pushes up against core human values. Blasco-Arcas highlights a few key problem areas:
- Bias and fairness: “AI systems can replicate and even amplify the biases in their training data, which is especially worrying in recruitment or lending decisions,” she says.
- Transparency: Many AI systems function like “black boxes”, making it hard to understand or explain their decisions. “When something goes wrong, who’s accountable?” she asks.
- Economic inequality: As automation takes over, job displacement is a growing concern. Blasco-Arcas says: “AI can widen economic divides if businesses don’t invest in reskilling displaced workers.”
- Environmental impact: “AI systems consume significant computational resources, which can have a major environmental footprint,” she adds.
Her advice? Focus on creating systems that support rather than replace human input, especially in high-stakes areas. “And invest in educating your employees about AI ethics so everyone is equipped to handle these challenges,” she adds.
Maintaining human judgement
AI can be a game-changer when it comes to decision-making. “It can handle repetitive tasks, freeing people to focus on creative or strategic work,” says Blasco-Arcas. In knowledge-heavy industries, such as finance, healthcare or technology, AI can help teams solve problems faster and more effectively.
But it’s not all upside. “The danger is in overreliance,” she warns. “If organisations depend too heavily on AI, especially systems with inherent bias or a lack of transparency, it can lead to poor outcomes.”
The solution, she says, is to see AI as a partner, not a replacement. “AI should support human decisions, not make them for us.”
Rules and self-governance
Regulation and self-governance both have a role to play in making sure AI is used responsibly. “The EU AI Act, for example, sets important safeguards, but self-governance gives businesses the flexibility to innovate while staying ethical,” Blasco-Arcas explains.
She underlines the importance of engaging a range of voices in the conversation. “When you involve diverse stakeholders — employees, customers, academics and NGOs — you create a governance structure that’s adaptable and grounded in real-world needs,” she adds.
For businesses, Blasco-Arcas suggests several steps to keep AI adoption on the right track:
- Develop clear ethics frameworks: “AI ethics can’t be an afterthought,” she says. “Build them into your strategy from day one.”
- Tie AI to corporate social responsibility: Use AI to support environmental and social initiatives, not just business goals.
- Focus on inclusivity: “Design AI solutions that cater to a range of abilities and invest in programmes to reskill workers impacted by automation,” says Blasco-Arcas.
- Stay transparent: Share how AI is used in your organisation and make its impact clear to stakeholders.
And above all, she says, keep learning. “AI is evolving fast, and businesses need to monitor their systems and adapt as they go.”
AI’s potential is undeniable, but so are its risks. For businesses, the key is to embrace innovation without losing sight of the human values that drive long-term success.