The Choice by ESCP

Behind the Algorithm: Unveiling Trust Issues in AI Decisions

An artist’s illustration of artificial intelligence depicting language models that generate text. ©Google DeepMind / Pexels

Whenever a problem or an opportunity arises, someone must choose among several possible actions. In a business context, poor decision-making costs a firm, on average, at least 3% of profits. Beyond this financial loss, poor decision-making can disrupt other operational processes and hurt customer satisfaction, significantly damaging a company’s reputation.

Good decision-making is therefore crucial for a company. Of the ten most valuable companies, seven are planning to put deep-learning-based AI at the heart of their operations. This signals the increasing importance of AI systems in organisational decision-making (33% of C-suite executives use GenAI in their work) and a shared intent to invest in this technology (40% will increase their investment).


AI systems matter for decision-making

The relevance of AI systems can be explained by rapid digitalisation and the growing demand to handle thousands of decisions every day. Decision-making processes involving customers, products, or the supply chain can no longer be handled manually and go far beyond human capabilities. AI systems employ tools similar to those of organisational decision-making processes, such as hierarchies of pre-set but adaptable rules, and draw inspiration from the brain’s neural networks to analyse and react to complex situations. Yet they stand out for their ability to handle tasks with greater complexity, speed, and scalability, leading to better results. Consequently, it is crucial in today’s world to consider and implement AI systems effectively to profit from their decision-making skills.

However, it is worth pointing out the potential negative side effects of AI systems. Besides replicating biases from training data, which leads to unfairness and discrimination, disadvantages arise from the collaboration between humans and AI systems. AI systems replace humans’ social coordination, encouraging unethical and self-interested behaviour and inhibiting reciprocity between people. In addition, humans misuse these tools because they find them incomprehensible and opaque. This fosters trust issues in human-machine collaboration, resulting in a preference for human judgements and a lack of consideration of AI suggestions.

The ambiguity of trust in AI systems

This preference for human judgements manifests as mistrust in AI systems’ capabilities, even though AI systems themselves tend to be more truth-biased than humans. This trust ambiguity is explained by the fact that trust in AI systems is shaped by user characteristics, system design, and prior human experience with such systems. Different perceptions of the material (text, picture, voice, etc.) created in specific situations will lower or raise humans’ trust in AI systems’ work and decisions.

When it comes specifically to managerial decision-making, there is still a lack of knowledge in this field. A recent study conducted by my classmates and me therefore aimed to understand the level of trust humans place in AI systems in business. The goal was to understand the practical implications of integrating AI into decision-making processes and to test factors (Deception, Harmfulness, Confidence, and Integrity) that influence trust in, and dependency on, AI systems in various business situations.

Our results reinforce this ambiguity, with outcomes varying across the factors tested. Participants’ perceptions of Deception and Harmfulness did not change significantly, but their perceptions of Confidence and Integrity did. As a result, decisions made by AI systems were, perhaps surprisingly, perceived as less trustworthy than those made by a human expert.
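To illustrate how such a perception difference can be tested, here is a minimal sketch of a between-group comparison of trust ratings using Welch’s t-statistic. The ratings below are purely hypothetical 1-7 Likert scores invented for illustration; they are not the study’s actual data, and the study’s real analysis may have used different methods.

```python
import statistics as st

def welch_t(a, b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    mean_a, mean_b = st.mean(a), st.mean(b)
    var_a, var_b = st.variance(a), st.variance(b)  # sample variances
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

# Hypothetical trust ratings (1 = no trust, 7 = full trust) -- illustrative only.
expert_ratings = [6, 5, 6, 7, 5, 6, 5, 6]  # decision attributed to a human expert
ai_ratings = [4, 5, 3, 4, 5, 4, 3, 4]      # same decision attributed to an AI system

t = welch_t(expert_ratings, ai_ratings)
print(f"Welch's t = {t:.2f}")  # a large positive t suggests higher trust in the expert
```

A positive t-statistic of this kind would indicate that the expert-attributed decisions received higher trust ratings; in practice one would also compute degrees of freedom and a p-value before drawing conclusions.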

This highlights the importance of human-machine collaboration, rather than replacement, in a managerial context. Thanks to their intuitive judgement and ability to manage biases, humans can, to some extent, surpass the capabilities of AI and therefore still play a vital role in decision-making processes.


Be aware of mistrust

Companies need to be aware of these trust issues. Our findings show that people trust AI systems less and therefore undervalue their decisions in a business context, even though AI-made decisions offer significant advantages in speed, quality, and depth. Companies thus need to communicate both the concerns and the advantages to find the right balance. Given that our participants were mostly 20-30 years old, the older generations more commonly represented in companies might trust AI systems even less, leading to poor decision-making and financial and reputational loss.

It is the manager’s role to understand and optimise how trust in AI decision-making is perceived within their organisation.

We suggest these three key steps to understand and change trust perception in AI decision-making:

  1. Objectives: Clarify objectives regarding Trust in AI decision-making
  2. Observation: Identify the level of trust of employees towards AI as a decision-maker
  3. Implementation: Make improvements as necessary

Following these steps can help to ensure a smoother integration of AI into businesses.

This article is based on a research project conducted by ESCP Business School Master in Management students Matteo Girelli, Suchet Kamble, Henrik Rippert, Alexander Schmachtenberg and Julius von Diergardt as part of the ‘Shaping the Future of Leadership in the Digital Era’ course of the ‘Digital Transformation’ specialisation, in collaboration with the Reinventing Work Chair supported by BivwAk! BNP Paribas.
