Can we, or should we, trust machines like robots during commercial transactions? Discussion of robots in the marketplace has been on the rise. Some companies are using robots to greet guests in hospitality settings, deliver products, or even serve as personal assistants.
However, not all consumers feel comfortable with the inclusion of these robots at work or in their daily lives. Central to this question is the trust consumers can place in machines like robots. As in interactions between humans, trust in the interaction partner (e.g., a salesperson) tends to foster acceptance of that partner, thus also contributing to commercial success.
A recent Fortune article states that the design of a robot can be a determining factor in earning trust. The article argues that adequate design can help overcome the scepticism and negative attitudes consumers may have toward robots. These negative perceptions may partially be grounded in some of the narratives portrayed around robots in movies like Terminator or The Matrix.
The article even notes that people have responded violently toward robots deployed in the streets. To overcome these negative responses, the article suggests that design features such as the inclusion of eyes can be a first step toward higher acceptance rates. Since humans use eye contact to establish trust, this notion can also translate to human-robot interactions.
Moreover, the article further emphasizes that making robots too human-like (high anthropomorphism) can have adverse effects on acceptance. This notion is in line with the Uncanny Valley theory, which suggests that increases in human-likeness tend to favour acceptance, but only up to a certain point. When the robot becomes too human-like, people tend to react negatively due to feelings of eeriness.
Indeed, the main arguments made in the article are backed by scientific evidence. First, regarding trust, there have been numerous studies supporting that trust is crucial in driving the acceptance and usage of robots and other artificial agents.
However, trust is not just based on visual design features. Research suggests that trust is also derived from the robot’s functional capabilities. In other words, is the robot able to accomplish the task it was designed for?
Furthermore, trust goes beyond the robot, as consumers who are generally open to novel technologies are also more inclined to develop trust in robots. Thus, the word 'trust' and its development potentially entail many dimensions that need to be considered in human-robot interactions.
Second, regarding ‘human-likeness’, various researchers argue that increases in human-likeness also increase trust, without any dip at higher levels of human-likeness. These findings go against the above-mentioned, widely cited Uncanny Valley theory. Indeed, findings have been mixed, thus painting a complex picture.
Functionality first, human-like features second
I suggest that if companies do not have the luxury of engaging in large-scale research before deciding which robot to use in their business, they could go for an intermediate solution. Robots that have some human-like features that can increase trust and social presence – such as big eyes, a friendly face and voice – could be a viable option.
However, as outlined above, this implies that the robot can satisfactorily accomplish the task at hand from a functional perspective. Otherwise, the visual design features promoting trust may be short-lived in their effectiveness. Indeed, this is an exciting area for businesses and academics, as we keep learning every day about how to make human-robot interactions successful.
This post gives the views of its author, not the position of ESCP Business School.