When social robots reshape human behavior


Nicolas Spatola works at LAPSCO (Laboratoire de Psychologie Sociale et Cognitive) in France as a teaching assistant.
His project focuses on the impact of Human-Robot Interaction on human cognitive processes and the conditions under which such effects occur. He is taking part in the Pint of Science festival in Clermont-Ferrand on May 16, 2018.

 

Social robots will occupy an increasingly large place in our environment in the years to come. While we keep developing these androids' capacities to perform social roles, we paradoxically still know very little about their impact on human cognition and behavior. As they develop, they have the potential to reshape our social environment, our habits, and even our morality.

Could you make friends with a robot?

Today, when you enter a store, you could be welcomed by a robot. When you go back a few days later, that robot could recognize you and even remember what you bought on your last visit. But what would you think of it? How could its presence influence your behavior or your thoughts? Would you be surprised if it, an electro-mechanical machine, smiled at you? Would you feel sad if something happened to it? And, going further, could you even make friends with it? [1]

Recently, several studies have investigated the impact of Human-Robot Interaction (HRI) on human behavior. Researchers showed that humans can be influenced by the presence of a robot in much the same way as by the presence of another human. [2] Research has shown that the presence of another person can improve or impair our cognitive processing, depending on the difficulty of the task and the competence attributed to the observer. We experience this “social presence” effect every day because it affects fundamental processes, such as our capacity to inhibit irrelevant cues in our environment and select the pertinent ones: keeping our eyes on the road rather than on the birds when we drive, for example.

In the presence of a robot, similar effects can occur, but with a boundary condition: the android must be designed so that it appears to have a certain level of humanity. [3] Our mind is wired to project human characteristics onto non-human entities in order to gauge, or even predict, their behavior. This process is called anthropomorphism. Because we anthropomorphize robots, especially when they are designed to interact with humans, they can produce “social presence” effects. The problem is that we are developing anthropomorphic robots to assume social roles in schools while we are only beginning to understand that robots can affect our cognitive processes. Moreover, we have no idea how a robot’s presence could modify our learning processes. The study of HRI in psychology aims to fill that gap by promoting a better understanding, use, integration, and acceptance of robots in our world.

Our perception of robots is influenced by our innate tendency to consider them as valid social agents.

Another important question that HRI research aims to answer is: how can we co-exist with robots, and what could the consequences be? Our perception of robots is shaped both by the fact that we know they are humanoids, in other words, creations that resemble a human but are not human, and by our innate tendency to consider them as valid social agents. [4]

This dichotomy raises ethical dilemmas: How do we define the humane treatment of robots? Will we take into account the suffering of “feeling” machines? Treating robots in ways that are, in general terms, “socially unacceptable” based on current moral values could entrench philosophically skewed and contradictory behaviors toward humanoids in the future [5]. What would that say about us? What would that say about humanity?

If we just take a step back and consider the dramatic changes that a simple smartphone has brought to our everyday lives [6], we can only assume that robots in general, and social robots in particular, will dramatically change our society. How our ethical and moral standards evolve will largely depend on how we prepare ourselves for the ongoing artificial intelligence and social robotics revolution.

References

  1. Darling K (2015) ‘Who’s Johnny?’ Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy.
  2. Riether N, Hegel F, Wrede B, Horstmann G (2012) Social facilitation with social robots? In: Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, pp. 41-48.
  3. Spatola N, Belletier C, Chausse P, Augustinova M, Normand A, Barra V, Huguet P, Ferrand L (under review) Improved cognitive control in presence of anthropomorphized robots. International Journal of Social Robotics.
  4. Wegner DM, Gray K (2017) The mind club: Who thinks, what feels, and why it matters.
  5. Arnold T, Scheutz M (2017) The Tactile Ethics of Soft Robotics: Designing Wisely for Human–Robot Interaction. Soft Robotics 4:81-87.
  6. Chotpitayasunondh V, Douglas KM (2016) How “phubbing” becomes the norm: The antecedents and consequences of snubbing via smartphone. Computers in Human Behavior 63:9-18.

Opinions in this blog post are those of the author, and not necessarily those of Hindawi. The profile photo was provided by Nicolas Spatola. The text in this blog post is by Nicolas Spatola and is distributed under the Creative Commons Attribution License (CC-BY). The illustration is by Hindawi and is also distributed under CC-BY.