Robotism… can you be racist towards robots?
As researchers work towards AI systems with human-level intelligence, questions have arisen about whether these systems should have a built-in set of ethics that enables them to make moral decisions.
After all, the more complex and autonomous these AI systems become, the more they will need to self-regulate their own behaviour.
However, teaching morality and ethics to AI systems poses a number of challenges, including the fact that humans can’t objectively translate these concepts into measurable metrics that are easy for computers to process. On top of that, each person holds a set of ethics that differs slightly (or a lot) from those of the people around them.
Yet if humans don’t incorporate ethics into AI systems, some argue that we could place ourselves in the dangerous position of letting algorithms decide what’s best for us. Researchers therefore contend that society will only accept AI machines if they are programmed to act in ways that maximise safety, fit in with social norms and encourage trust.
The difficulty lies in how to achieve this in diverse societies where existing prejudices can even affect people’s relationships with robots.
Research by Monash University and the University of Canterbury, presented at the 13th Annual ACM/IEEE International Conference on Human-Robot Interaction in March, found that humans ascribe race to humanoid robots and carry their racial biases over into interactions with these robots.
The researchers used the shooter-bias paradigm and several questionnaires to examine whether people automatically perceive robots as racialised and whether this affects their interactions with the robots.
In the study, participants played the role of a police officer: they were shown images of robots with two ‘skin’ colours and had to decide in each case whether or not to shoot. The results showed that people behaved differently towards the darker-coloured robots than towards the white robots, and when responding to a threatening robot, they were quicker to shoot the darker-coloured one.
“Using robot and human stimuli, we explored whether these effects would generalise to robots that were racialised as black and white. Reaction-time measures revealed that participants demonstrated ‘shooter-bias’ toward both black people and robots racialised as black. Participants were also willing to attribute a race to the robots depending on their colour, even when provided the option to select ‘does not apply’,” say the researchers in their paper Robots and Racism.
“We hope that our paper might inspire reflection on the social and historical forces that have brought what is now quite a racially diverse community of engineers to – seemingly without recognising it – design and manufacture robots that are easily identified by those outside this community as being almost entirely ‘white’.”