Spurred by Covid-19, the adoption of artificial intelligence (AI) recently skyrocketed. Numerous examples show that the use of AI now transcends mere process automation and is increasingly being used to augment decision-making processes at all levels – including top management. “Artificial intelligence for both strategic decision-making (capital allocation) and operating decision-making will come to be an essential competitive advantage, just like electricity was in the industrial revolution or enterprise resource planning software (ERP) was in the information age,” write Barry Libert, Megan Beck, and Mark Bonchek in the MIT Sloan Management Review.
In AI-augmented decision processes, where algorithms provide suggestions and information, executives still have the final say. For example, a special, unreleased version of Salesforce’s own AI program, called Einstein, has helped to significantly reduce bias in staff meetings and decrease discussions driven by political or personal agendas among members of the top management team.
Our research picked up by the MIT Sloan Management Review reveals that this human filter makes all the difference in organizations’ AI-based decisions. We show that there is no single universal human response to AI, and that individuals make completely different choices based on identical AI inputs.
“If we do not recognise the human dimension, we will only understand half the equation when it comes to optimising the interplay between AI and human judgment.” — Philip Meissner, ESCP Business School professor and co-founder and director of the European Center for Digital Competitiveness
Our analysis finds that these differences in AI-based decision-making have a direct financial effect on organisations. Depending on their particular decision-making style, some executives invest up to 18% more in important strategic initiatives based on the exact same AI advice.
To champion AI in the boardroom, leaders must acknowledge human biases and decision-making styles. If we do not recognise the human dimension, we will only understand half the equation when it comes to optimising the interplay between AI and human judgment.
The human factor in AI-based decisions
Our findings suggest that executives using AI to make strategic decisions fall into three archetypes based on their individual decision-making styles:
- Sceptics do not follow the AI-based recommendations. They prefer to control the process themselves. When using AI, sceptics can fall prey to an illusion of control, which leads them to overestimate their own judgment and underestimate the AI.
- Interactors balance their own perception against the algorithm’s advice. When AI-based analyses are available, interactors trust these recommendations and base their decisions on them.
- Delegators largely transfer their decision-making authority to AI. Delegators may misuse AI to reduce their perceived individual risk and avoid personal responsibility. They consider the AI recommendations as a personal insurance policy in case something goes wrong.
“In the era of AI-enabled decision-making, people don’t change their decision-making styles.” — Christoph Keding, ESCP Business School lecturer and PhD graduate
These decision-making archetypes show that how executives make sense of and act on AI advice matters just as much as the quality of the AI recommendation itself when assessing the quality of AI-based decision-making in organizations.
What’s interesting is that people show the same behavioural patterns whether or not AI is involved. In the era of AI-enabled decision-making, people don’t change their decision-making styles.
Three strategies to optimize the interplay between AI and human judgment
In the MIT Sloan Management Review, we provide three recommendations for boards of directors and senior executives to successfully integrate AI into strategic decision-making processes:
- Create awareness. Communicate with all executives who interact with AI-based systems about the impact of human judgment, which remains a decisive factor when augmenting the top management team. Executives should learn about the specific biases they have towards AI, depending on their individual decision-making styles. This awareness is the crucial foundation for a successful integration of AI into organizations’ decision-making processes.
- Avoid risk shift and illusion of control. Emphasize that the ultimate decision authority stays with the executives, even if AI is involved. Also explain the potential benefits of AI, as well as what parameters and data the suggested course of action is based upon. This transparency can help debias the illusion of control and contribute to a more balanced and less cautious perception of AI.
- Embrace team-based decisions. Balance the predominant tendencies of the three decision-making archetypes in teams to overcome choices that are overly risky or risk-averse. Different perspectives and multiple options improve human decision-making processes, whether or not AI is involved. Framing the AI as an additional source of input, similar to an additional team member rather than a superior, unquestionable authority, can help successfully integrate AI-based recommendations into discussions.
To utilize AI’s full potential, companies need a human-centred approach to address the cognitive dimension of human-machine interactions beyond automation. With the right balance of analytics and experience, AI-augmented decision processes can increase the quality of an organization’s most critical choices, and drive tremendous value for companies in an increasingly complex world.