Artificial emotion… can robots ‘feel’?

By Adelle King
31/12/2018

The technology behind artificial intelligence is continuing to develop rapidly, with researchers now looking at ways to make these systems more human-like. Adelle King explains.

Previously, the focus of most artificial intelligence (AI) research was to develop programs for one very particular task. Now, however, researchers are beginning to build systems that can autonomously increment their knowledge in the process of finding solutions to given problems.

Rather than continuing with today’s inherently narrow AI systems, this new research aims to develop what’s referred to as ‘general’, ‘true’ or ‘real’ AI, meaning systems that can think, learn and reason like a human. This is being done through a combination of neuroscience, cognitive science and computer science, and is being driven by the growth of the ‘deep learning’ statistical technique. This technique, which builds a ‘neural network’ based loosely on the human brain through multiple layers of processing that gradually refine results, has changed the development dynamic of AI research from having to program computers to being able to ‘teach’ them. As a result, it has helped to produce systems that can recognise patterns across a variety of problems, recognise speech, classify images and identify objects.
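
To make the idea of layers of processing that gradually refine results more concrete, here is a minimal sketch, unrelated to any particular product mentioned in this article: a tiny two-layer network is ‘taught’ the XOR rule from four examples rather than being programmed with it.

```
import numpy as np

# Minimal sketch only: a tiny two-layer "neural network" learns the XOR rule
# from examples instead of being explicitly programmed with it. Each layer of
# processing refines the result of the one before it.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # first processing layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # second processing layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(10000):
    h = sigmoid(X @ W1 + b1)        # layer 1: intermediate features
    out = sigmoid(h @ W2 + b2)      # layer 2: refined prediction
    grad_out = (out - y) * out * (1 - out)       # error signal at the output
    grad_h = (grad_out @ W2.T) * h * (1 - h)     # error signal at layer 1
    W2 -= lr * h.T @ grad_out                    # "teach" layer 2
    b2 -= lr * grad_out.sum(0)
    W1 -= lr * X.T @ grad_h                      # "teach" layer 1
    b1 -= lr * grad_h.sum(0)

print(np.round(out.ravel(), 2))     # approaches [0, 1, 1, 0]
```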

However, these systems’ range of capabilities is determined almost entirely by their training data, which means they can’t adapt to new situations, words, phrases, business rules, products, goals, requirements or responses. Humans, on the other hand, can effectively deal with, and adapt to, change.

Antonio Damasio, professor of neuroscience, psychology and philosophy at the University of Southern California, argues in his book The Strange Order of Things that feelings and emotions form a large part of human intelligence. Therefore, in order to create machines that can think and act like humans, these machines first need to be able to understand human emotion.

“The systems being created are trying to emulate human pattern recognition and visual and audio analytics but to do this we need to better understand how and why these processes occur,” says Rich Green Design owner and CEDIA Technology Council member Rich Green.

Emotions are deeply intertwined with human decision-making, motivation, evaluation, learning and awareness so it’s important that AI systems are able to recognise emotions and respond appropriately.

Current AI systems can mimic some human thinking processes but they can’t yet reason or interpret emotional behaviour.

“Computers cannot feel but they can eventually be programmed to emulate the response of something that would feel in such a way that humans can relate to it,” says Rich.

“This is referred to as ‘relatable’ AI, where there is the illusion of an emotional entity behind the AI that allows us to build a deeper, more trusted relationship with the system.”

We’re now starting to see cameras, sensors and infrared heat mapping integrated into wearable devices that allow AI to detect the emotional state of users and communicate accordingly.

Soon these devices will also have the ability to use the data gained by measuring things like breathing rhythms, blood sugar levels, heart rate, tone of voice, shakiness of voice and the infrared signature of the face to learn about human cognitive processes and how humans respond to different situations.
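
As a purely hypothetical illustration of how such readings might be turned into an estimate of emotional state, the sketch below maps a vector of sensor measurements to the nearest of two labelled states. The feature names, example values and labels are all invented; a real system would be trained on large labelled datasets.

```
import numpy as np

# Hypothetical illustration only: mapping sensor readings (heart rate,
# breathing rate, voice shakiness, facial infrared reading) to a coarse
# emotional state with a nearest-centroid classifier. All values are invented.

# Columns: heart rate (bpm), breaths/min, voice pitch variance, face IR (deg C)
training = {
    "calm":     np.array([[62, 12, 0.10, 34.1], [66, 13, 0.12, 34.3]]),
    "stressed": np.array([[95, 22, 0.55, 35.0], [101, 24, 0.60, 35.2]]),
}

# Represent each state by the mean of its examples, and normalise the units.
centroids = {label: rows.mean(axis=0) for label, rows in training.items()}
scale = np.concatenate(list(training.values())).std(axis=0)

def estimate_state(reading):
    """Return the emotional state whose centroid is closest to the reading."""
    return min(centroids,
               key=lambda label: np.linalg.norm((reading - centroids[label]) / scale))

print(estimate_state(np.array([98, 23, 0.5, 35.1])))  # -> "stressed"
```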

In order for this cognitive thinking to then be incorporated into AI systems, Rich says researchers need to look back at evolutionary processes.

“Evolutionary algorithms are programmed to randomly try things as part of what’s known as the selective retention of random variation. Basically, as a system evolves, it has to have an engine of random variation to be able to innovate and create a novel response to a situation. So researchers are creating evolutionary algorithms that have a random component that encourages the AI system to search for novelty.”
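
A minimal sketch of that idea, on a toy problem invented for illustration: random mutation supplies the variation, selection retains the best candidates, and a small novelty bonus nudges the search towards behaviours it has not produced before.

```
import numpy as np

# Toy sketch of "selective retention of random variation": candidate solutions
# are real vectors scored against a hidden target, plus a small novelty bonus
# that rewards behaviour unlike anything already in an archive.

rng = np.random.default_rng(1)
target = rng.uniform(-1, 1, 5)                     # unknown optimum (toy stand-in)
population = [rng.uniform(-1, 1, 5) for _ in range(20)]
archive = []                                       # behaviours seen so far

def fitness(candidate):
    task_score = -np.linalg.norm(candidate - target)            # how well it solves the task
    novelty = (min(np.linalg.norm(candidate - a) for a in archive)
               if archive else 0.0)                              # distance to nearest known behaviour
    return task_score + 0.1 * novelty

for generation in range(200):
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[:5]                         # selective retention of the best
    archive.extend(survivors)
    archive = archive[-100:]                       # keep the archive bounded
    # Random variation: each survivor spawns mutated offspring.
    population = [s + rng.normal(0, 0.1, 5) for s in survivors for _ in range(4)]

best = max(population, key=fitness)
print("distance to optimum:", round(float(np.linalg.norm(best - target)), 3))
```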

To create this same need to explore new or unfamiliar things, researchers are now starting to realise they need to incorporate some form of curiosity into AI agents.

This is what the University of California, Berkeley’s Artificial Intelligence Research Lab is currently working on. In 2017, PhD students Pulkit Agrawal and Deepak Pathak developed experimental machine-learning algorithms to make a machine curious. These algorithms have enabled the team to design a computer program that uses curiosity to autonomously learn skills required to play video games including Super Mario Bros and VizDoom.

The research is based on the idea of intrinsic motivation, which in psychology refers to spontaneous exploration and the acts of seeking out information and practising new skills for the pleasure of learning and mastering them. This exploration allows humans to learn new skills that can become useful later on.

“The reason for learning skills using curiosity is that often direct supervision is not available for performing the desired task. As human agents, we are accustomed to operating with rewards that are so sparse that we only experience them once or twice in a lifetime, if at all,” says Pulkit.

Using extrinsic motivation to train algorithms, the approach typically used by computer scientists in the development of AI systems, therefore limits AI to one very particular task, because the reward signal is only defined in terms of that task.

In order for AI systems to be capable of autonomously learning an open-ended range of skills, they need to be able to proactively seek, select and explore new information via algorithms that choose actions specifically to explore new things.
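
One simple way to build that into an algorithm (a hedged sketch, not the Berkeley team’s method) is a count-based exploration bonus: states the agent has rarely visited earn extra intrinsic reward, so the agent is drawn towards them even when the task’s own reward is too sparse to stumble upon by chance. The corridor environment and all parameters below are invented for illustration.

```
import numpy as np

# Hedged sketch: Q-learning in a corridor where the only extrinsic reward sits
# at the far end. A count-based intrinsic bonus rewards rarely visited states,
# pulling the agent towards unexplored territory.

rng = np.random.default_rng(0)
n_states, n_actions = 20, 2           # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))   # learned action-values
visits = np.ones(n_states)            # state visit counts (start at 1 to avoid /0)

def step(state, action):
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    extrinsic = 1.0 if next_state == n_states - 1 else 0.0   # sparse task reward
    intrinsic = 1.0 / np.sqrt(visits[next_state])            # novelty bonus
    return next_state, extrinsic + 0.5 * intrinsic

for episode in range(500):
    state = 0
    for t in range(60):
        # epsilon-greedy: mostly exploit current knowledge, sometimes act randomly
        action = rng.integers(n_actions) if rng.random() < 0.2 else int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        visits[next_state] += 1
        # standard Q-learning update on the combined reward
        Q[state, action] += 0.1 * (reward + 0.95 * Q[next_state].max() - Q[state, action])
        state = next_state

# With the bonus, the learned policy tends towards pushing right (action 1).
print("preferred action per state:", np.argmax(Q, axis=1))
```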

“Human intelligence or commonsense reasoning is characterised by the ability to use previous knowledge to solve new problems either more efficiently or significantly faster than a system that starts off with no prior knowledge,” says Pulkit.

“Instead of building agents with extremely narrow intelligence, we want to build agents that can autonomously increment their knowledge in the process of finding solutions to the given problems. Artificial curiosity provides a way for the agent to increment their knowledge by efficiently exploring the environment.”

The research being undertaken by Pulkit and his team overcomes the pitfalls of previous artificial curiosity methods by proposing a method that only makes the agent curious about things that can affect it. They show experimentally that, using their formulation, the agent is unaffected by nuisance sources of novelty such as the static on a TV.

“One of the challenges in artificial curiosity is to quantify novelty,” says Pulkit.

“If an artificial agent is put in front of a TV displaying static white noise, every image that the agent observes will be different from the previous image and will thereby appear as novel. If the agent is incentivised to explore new things, it would inevitably get fixated on watching the static and stall its exploration. Ideally, we want the agent to realise after some time that there is no new information to be learned by simply watching the static on the TV screen.

“Another example is an agent driving a vehicle that observes tree leaves on the roadside blowing due to the wind. Movement of the leaves is extremely hard to predict and is mostly irrelevant to the agent’s task of safely driving to a target location. However, the inability of the agent to predict how leaves are moving will make it curious about their motion and prevent it from learning useful driving behaviours.”

The agent created by the University of California, Berkeley researchers consists of two subsystems: a reward generator that outputs a curiosity-driven intrinsic reward signal, and a policy that outputs a sequence of actions to maximise that reward signal.

While the machine is always driven to try new things and explore possibilities, the algorithms predict only those changes in the environment that could be caused by the agent’s actions or could affect the agent, and ignore the rest.
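
The sketch below is written in the spirit of that two-subsystem design, with every detail simplified or invented for illustration: a forward model’s prediction error acts as the curiosity reward, an encoder keeps only the features the agent’s actions can change (standing in for the learned inverse-dynamics features of the published work), and a simple policy favours whichever action still surprises the model. Because the encoder drops the ‘TV static’ channel, the unpredictable noise never registers as novelty.

```
import numpy as np

# Rough, simplified sketch of a curiosity-driven agent, not the researchers'
# actual code. Reward generator: prediction error of a learned forward model.
# Policy: favours the action that still earns curiosity reward. Observations
# hold the agent's position (controllable) plus pure "TV static" noise, which
# the encoder discards, so the static generates no curiosity at all.

rng = np.random.default_rng(0)
actions = np.array([-1.0, 1.0])                     # step left or step right

def observe(position):
    return np.array([position, rng.normal()])       # [controllable part, TV static]

def encode(obs):
    return obs[:1]                                   # keep only action-relevant features

def transition(position, action):
    return 0.9 * position + action                   # simple controllable dynamics

W = np.zeros((2, 1))                                 # linear forward model: (feature, action) -> next feature

def intrinsic_reward(feat, action, next_feat, lr=0.5):
    """Reward generator: surprise is the forward model's prediction error."""
    global W
    x = np.array([feat[0], action])
    error = next_feat - x @ W
    W += lr * np.outer(x, error) / (x @ x + 1e-6)    # normalised update keeps learning stable
    return float(error @ error)

value = np.zeros(len(actions))                       # policy's estimate of curiosity per action
position, rewards = 0.0, []
for t in range(2000):
    feat = encode(observe(position))
    a = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(value))
    position = transition(position, actions[a])
    r = intrinsic_reward(feat, actions[a], encode(observe(position)))
    value[a] += 0.1 * (r - value[a])                 # prefer actions that still surprise the model
    rewards.append(r)

# Curiosity fades once the model has learned how its own actions move the agent,
# and the static channel never contributed to the reward at all.
print(round(float(np.mean(rewards[:100])), 3), round(float(np.mean(rewards[-100:])), 6))
```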

Pulkit says this type of continuous learning about the environment, combined with learning from other agents in the environment via imitation, is one way of approaching common sense.

“The next big leap in exploration strategies is likely to come from the design of systems that can store information gathered about the environment in a structured manner that allows them to hypothesise tasks and set ‘interesting’ goals. The agent will thereby learn via a self-generated curriculum of tasks of increasing complexity,” says Pulkit.

“In addition to undirected exploration, goal-directed exploration is also very important. Building directed exploration strategies that bring to bear past knowledge acquired by the agent is going to be an important step towards building general AI agents.”

And once we build these, we will be one step closer to human-level intelligence.
