Even when people accept and trust artificial intelligence, they may still not tolerate it. Different stakeholders interpret technology adoption differently. Managers focus on efficiency, reliability and convenience. Employees might see the same advantages, but their acceptance and trust can be overridden by questions around job security, autonomy and the impersonal nature of AI. Alessandro Lanteri, Massimiliano L Cappuccio, Jai C Galliott and Friederike Eyssel suggest steps for decision-makers to address employees’ concerns.
Alessandro Lanteri
Professor of Strategy and Innovation at ESCP Business School (Turin campus)
Massimiliano L Cappuccio
Senior Lecturer at the University of New South Wales
Jai Galliott
Director of the Values in Defence & Security Technology Group within the University of New South Wales at the Australian Defence Force Academy
Friederike Eyssel
Professor and Head of Lab at the University of Bielefeld
Ted Kaczynski, the notorious anti-technology criminal known as the Unabomber, is back in the news after he was found dead in his prison cell earlier this year. But the conspiratorial distrust of technology he embodied never went away. In 1995, when the FBI authorised the publication of his anti-tech manifesto, it was assumed few would read it. Instead, the manifesto resonated and later became a bestseller.
The Unabomber’s actions are inexcusable, but many sceptics sympathise with his concerns about the impact of technology. The resonance of Kaczynski’s ideas is a powerful reminder of the need to address larger narratives around people’s views of technology, particularly their very real sense of alienation.
Technology acceptance and trust
Traditional analyses of technology adoption have focused on actual interactions between people and artificial intelligence agents. Specifically, researchers evaluate the dual questions of acceptance and trust to gauge whether or not a user’s expectations, needs, and preferences are satisfied.
The assessment of acceptance is based on factors like perceived usefulness and ease of use. Trust addresses a more subjective attitude that arises when technology adoption comes with a degree of vulnerability, and involves confidence that, for example, an AI agent will act in a human’s best interests.
Such metrics are useful, but only up to a point. Looked at closely, a larger set of issues can undermine adoption and later translate into active resistance: deep-seated aversions to and mistrust of technology can derail adoption even when acceptance and trust are high. For this reason, we propose tolerance as a third metric, one that captures a wider set of attitudes toward technology and its effects on work and society.
The threat of intolerance
Imagine the following hypothetical scenario, involving the ground crew at an airbase. A new robotic agent is introduced that transports bulky components from one side of a very long hangar to the other. After a few weeks of seemingly flawless implementation, the crew supervisors report that the hangar’s personnel have almost entirely stopped using the agent and that, on one occasion, some of them even tried to sabotage it.
Investigations reveal that, surprisingly, the crew’s acceptance of and trust in the machine are high. They find it useful and reliable, they like its friendly interface and interactive functions, and they even express sympathy for the agent. So why did they ultimately fight it?
While not disliking or distrusting the agent per se, the crew state that the permanent adoption of similar robotic agents would eventually reduce opportunities for the two teams at opposite ends of the hangar to interact and would erode their relationships. Although there is no plan to replace the workers, they grow concerned about becoming redundant if additional agents are deployed. Some even declare that assigning logistics tasks to an AI agent is morally wrong, because that responsibility should be given only to humans.
Despite an initial response that was perfectly accounted for by high acceptance and trust, it soon became evident that the crew’s overall propensity to use the agent was low. Alongside their positive impressions, the crew held a strongly negative judgment of the agent, and this second response won out. Why the disconnect between apparently high acceptance and trust on the surface and a deeper lack of tolerance?
Luddites 4.0
The Luddites were a group of British weavers and textile workers in the early 19th century who opposed the way factory owners were deploying a new generation of mechanised looms and knitting frames. The term has been resurrected as a blanket description for technophobes, and Ted Kaczynski was sometimes called a neo-Luddite.
The textile industry had driven the onset of the Industrial Revolution. But in the early 1800s the industry was struggling. Unemployment and inflation were high. Factory owners tried to cut costs with machines that could be tended by lower-paid unskilled workers. Previously, textile workers had been skilled craftsmen who spent years learning their trade. When efforts to secure better wages and working conditions were rebuffed, some turned to violence. According to one account, the first raid on a factory took place after a peaceful protest had been violently suppressed.
In Brian Merchant’s Blood in the Machine, the Luddites are depicted not as technophobes, but as anxious workers who took their frustrations out on the machines as a last resort, and who viewed the machines as a symbol, not the enemy. As with the hangar crew, it was a larger set of anxieties around their future work prospects that motivated their resistance, not the technology itself.
Different stakeholders, different perspectives
You don’t need to be a Luddite to acknowledge that different stakeholders interpret technology adoption differently. Decision-makers and owners will make their assessments based on values such as efficiency, reliability, and convenience. Employees may well appreciate the same benefits, but in the end their acceptance and trust can be overridden by questions around job security, autonomy, and the impersonal nature of AI.
The tolerance construct helps shed light on these divergent views and how they are likely to play out. Tolerance brings a larger set of values and perceptions into the discussion, from anxiety and ambivalence about new technology to outright resistance and even hostility.
Larger narratives
We cannot simply dismiss these anxieties as irrational when the larger narrative around new technology is so charged. In 2013, researchers at Oxford estimated that as many as 47 per cent of all US jobs were “at risk” of automation. A series of apocalyptic headlines about robots replacing humans soon followed. Two years earlier, IBM’s Watson had triumphed over its human competitors on “Jeopardy!”. Anxiety over technology was understandably high.
Even today, headlines about AI replacing humans are good at drawing clicks, but bad at capturing the nuanced reality. In March, Goldman Sachs estimated that popular AI tools could automate the equivalent of 300 million full-time jobs. But that doesn’t mean 300 million jobs are suddenly disappearing; the key word is equivalent. Researchers at OpenAI and the University of Pennsylvania clarified that 80 per cent of the workforce could see at least 10 per cent of their tasks affected. Exactly how is a matter of debate. As David Autor of MIT puts it: “Affected could mean made better, made worse, disappeared, doubled.”
Terminator-like stories about machines taking over the world may be fiction, but the anxiety is real. While acceptance and trust are shaped by actual interaction with smart agents, tolerance is shaped by belief. And decision-makers need to account for that.
Actionable steps for decision-makers
At a time when the rollout of AI and robots in the workplace seems unstoppable, the tolerance metric helps decision-makers proactively address employees’ concerns. We suggest the following steps:
- Assess employee tolerance levels: Conduct surveys or workshops to gauge employees’ tolerance, so that resistance can be anticipated and managed (a minimal scoring sketch follows this list).
- Provide training and support: Offer training and ongoing support to increase familiarity and comfort with new technologies.
- Promote transparency: Clearly communicate why agents are being introduced and how they benefit both the organisation and its employees.
- Implement gradually: Introduce change gradually to allow adjustment and increase tolerance over time.
- Involve employees in the process: Invite employees to share their perspectives to increase their sense of control and therefore tolerance.
- Address concerns proactively: Address any concerns openly and honestly and discuss upskilling opportunities.
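To make the first step concrete, tolerance can be scored alongside acceptance and trust from the same survey, flagging teams whose tolerance lags behind their other attitudes. The sketch below is a minimal illustration in Python, assuming hypothetical 5-point Likert items and an arbitrary gap threshold of our own choosing; it is not the instrument from our paper.

```python
# A minimal sketch of how a tolerance survey might be scored.
# Item names, example responses, and the gap threshold are all
# illustrative assumptions, not the authors' published instrument.

from statistics import mean

# Hypothetical 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) from one employee, grouped by construct.
responses = {
    "acceptance": [4, 5, 4],  # e.g. "The agent is useful / easy to use"
    "trust":      [4, 4, 5],  # e.g. "The agent acts in my best interest"
    "tolerance":  [2, 1, 2],  # e.g. "The agent does not threaten my job"
}

def construct_scores(items_by_construct: dict[str, list[int]]) -> dict[str, float]:
    """Average each construct's items onto a single 1-5 score."""
    return {name: mean(items) for name, items in items_by_construct.items()}

scores = construct_scores(responses)

# Flag the pattern described above: acceptance and trust are high,
# but tolerance lags behind, which predicts resistance.
GAP = 1.5  # assumed threshold; calibrate against your own data
if scores["tolerance"] + GAP <= min(scores["acceptance"], scores["trust"]):
    print(f"Warning: tolerance gap detected: {scores}")
```

Aggregated across teams, a simple flag like this can show where high acceptance and trust mask a tolerance problem, which is exactly the pattern the hangar scenario illustrates.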
This more nuanced, tolerance-inspired approach paves the way for more successful technology implementations in the future.
This blog post is based on “Autonomous Systems and Technology Resistance: New Tools for Monitoring Acceptance, Trust, and Tolerance”, published in the International Journal of Social Robotics, and was initially published by the LSE Business Review.
The views expressed in this article are those of the authors and not the position of ESCP Business School.