Connected Magazine

Regulating artificial intelligence

By Adelle King
06/11/2018

The future of smart homes lies in artificial intelligence but as the technology is embraced by a growing number of companies, debate has begun about whether it needs to be regulated. Adelle King explains.

In August 2017, Tesla co-founder and chief executive Elon Musk sparked debate about artificial intelligence (AI) regulation after he and 114 other leading AI and robotics experts published an open letter to the United Nations Convention on Certain Conventional Weapons.

In the letter, the group warns of the potential threat posed by AI technologies that could be repurposed to develop autonomous weapons.


“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” says the open letter.

Elon Musk has been warning the public about the danger of AI for years, comparing the development of AI to “summoning the demon” in a 2014 interview at the MIT AeroAstro Centennial Symposium.

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon.”

Critics, including Facebook founder and chief executive Mark Zuckerberg and Microsoft founder Bill Gates, argue this is an alarmist view and that the technology isn’t nearly advanced enough for worries about threats to humanity and autonomous weapons to be realistic.

However, the Korea Advanced Institute of Science and Technology (KAIST) in South Korea recently joined forces with defence company Hanwha Systems to launch a new facility dedicated to developing AI-based military innovations.

Whether or not Elon Musk and the other signatories’ fears are well-founded, the open letter did prompt people to start thinking about whether the development of AI needs to be controlled to ensure it is used only for medical, educational, scientific and social purposes.

“The rate of acceleration is huge right now and the possibilities of what AI technology can do are extraordinary. However, because of this, it’s hard to know where the technology is going and people fear the unknown,” says CEDIA director of technical content David Meyer. David is also a member of the Institute of Electrical and Electronics Engineers’ (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems.

The IEEE is a professional association that promotes the educational and technical advancements of electrical and electronic engineering, telecommunications, computer engineering and allied disciplines. In 2017, the organisation released three new standards for ethics in AI that prioritise human well-being as these technologies advance into the unknown.

One of the biggest unknowns is what exactly is meant when people talk about AI technology. The term has become overused and, as a result, it is often applied in marketing to refer to ‘smart’ devices and systems.

“AI has become a popular fad term, so it’s been abused and people have ended up confused and misled about what AI actually is,” says David.

There are a number of definitions of AI but, at a minimum, an AI system should be able to learn in some way and then take actions based on that learning. This involves the ability to gain knowledge over time and continue to learn from interactions.

The ‘intelligence’ part of AI is therefore defined as the quality that enables an entity to function appropriately and with foresight in its environment.
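To make that definition concrete, the sketch below shows the minimal ‘learn, then act on that learning’ loop in Python. The actions and reward signal are invented for illustration; a real system would learn from its actual environment rather than simulated feedback.

```python
import random

class LearningAgent:
    """Minimal sketch of the 'learn, then act on that learning' loop.

    The actions and rewards here are invented for illustration; a real
    system would observe its environment rather than simulated feedback.
    """

    def __init__(self, actions, epsilon=0.1):
        self.epsilon = epsilon                    # how often to explore
        self.value = {a: 0.0 for a in actions}    # learned estimate per action
        self.count = {a: 0 for a in actions}

    def act(self):
        # Mostly exploit what has been learned; occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)

    def learn(self, action, reward):
        # Update a running average for this action (knowledge over time).
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

agent = LearningAgent(["dim_lights", "raise_blinds", "do_nothing"])
for _ in range(1000):
    a = agent.act()
    reward = 1.0 if a == "dim_lights" else 0.0   # stand-in for user feedback
    agent.learn(a, reward)

print(agent.value)  # 'dim_lights' should carry the highest learned value
```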

While there are some people who argue that ‘true’ AI doesn’t exist yet, it’s widely accepted that some form of AI is already embedded in our lives, mostly in an infant form, and the number of companies in the systems integration market embracing the technology is increasing.

Florida-based AdMobilize is one of these companies, creating an AI platform that allows organisations to analyse people, crowds, vehicles and other objects in real time with their cameras. Compatible with all major IP/security camera systems and OS platforms, AdMobilize’s technology allows clients to gather smart metrics and share them with customers.

“To enable us to capture data based on computer vision technology, we have created a stack that encompasses crowd and vehicle analytics, recognition and detection, as well as people analytics including age, gender, emotion analysis and demographics. This is all done on one comprehensive platform and all the data we collect is anonymous,” says AdMobilize co-founder and chief executive Rodolfo Saccoman.
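AdMobilize’s models are proprietary, but the general pattern Rodolfo describes, detecting objects in camera frames while storing only aggregate, anonymous metrics, can be sketched with OpenCV’s bundled pedestrian detector. The stream URL below is a placeholder and the detector is a stand-in for the company’s trained models.

```python
import cv2

# Sketch of camera-based, anonymous people analytics. AdMobilize's
# actual models are proprietary; OpenCV's bundled HOG pedestrian
# detector stands in purely to illustrate the pattern.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("rtsp://camera.example/stream")  # placeholder URL

counts = []  # aggregate metrics only; no frames or identities are kept
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rects, _ = hog.detectMultiScale(gray, winStride=(8, 8))
    counts.append(len(rects))  # how many people, not who they are

if counts:
    print("average people per frame:", sum(counts) / len(counts))
```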

Rodolfo says the system is between 93% and 99% accurate, depending on the location and technology stack.

“At this point we have approximately two million trained and classified images inside our machine learning platform. We’ve created our own models that keep training themselves to increase these accuracies.”

Josh.AI is another company leveraging AI technology to service the custom installer market, though it is currently only operating in the US.

The company has developed an AI known as Josh that users can speak with and that connects with all the professionally installed systems and devices in a home. Using voice control, Josh can change the temperature, play music and turn lights on and off, but unlike other voice systems, it is built to understand natural language and can access a multitude of home systems.

“We’ve used AI and neural networks to train speech recognition to be very accurate so when you’re talking to your home the word accuracy rate is beyond 95%, which is the threshold for human accuracy. This is largely thanks to AI-related technologies that are constantly improving,” says Josh.AI co-founder and chief executive Alex Capecelatro.
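Josh.AI’s natural-language pipeline is proprietary, but the step that follows speech recognition, turning a transcribed sentence into a device command, can be illustrated with a toy intent matcher. The patterns, device names and example utterances below are invented.

```python
import re

# Toy intent matcher: maps a transcribed sentence to a home command.
# Josh.AI's actual natural-language pipeline is proprietary; these
# patterns, devices and utterances are invented for illustration.
INTENTS = [
    (re.compile(r"turn (on|off) the (\w+) lights?"), "lights"),
    (re.compile(r"set the temperature to (\d+)"), "thermostat"),
    (re.compile(r"play (.+) in the (\w+)"), "audio"),
]

def parse(utterance: str):
    text = utterance.lower().strip()
    for pattern, device in INTENTS:
        m = pattern.search(text)
        if m:
            return {"device": device, "args": m.groups()}
    return None  # fall through to a clarifying question

print(parse("Turn off the kitchen lights"))
# {'device': 'lights', 'args': ('off', 'kitchen')}
print(parse("Set the temperature to 22"))
# {'device': 'thermostat', 'args': ('22',)}
```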

“The next thing we’re really excited about is the idea that the home will essentially be monitoring all the occupants’ actions and using neural networks and machine learning algorithms to get smarter. Already, homes are learning occupant behaviours and becoming pre-emptive so we think this technology is only 12-24 months away.”
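The pre-emptive behaviour Alex describes can be sketched, under simplifying assumptions, as pattern mining over a home’s event log: if an action recurs at the same hour often enough, the system schedules it automatically. The log entries and threshold below are invented for illustration.

```python
from collections import Counter

# Sketch of behaviour learning from a home's event log: if an action
# recurs at the same hour often enough, schedule it pre-emptively.
# The log entries and threshold are invented for illustration.
log = [
    ("06:30", "lights_on"), ("06:31", "coffee_machine_on"),
    ("06:29", "lights_on"), ("06:30", "coffee_machine_on"),
    ("06:32", "lights_on"), ("22:15", "lights_off"),
]

def learn_routines(events, min_occurrences=2):
    by_hour = Counter((time[:2], action) for time, action in events)
    return [
        {"hour": hour, "action": action}
        for (hour, action), n in by_hour.items()
        if n >= min_occurrences
    ]

for routine in learn_routines(log):
    print(f"pre-emptively run {routine['action']} around {routine['hour']}:00")
```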

Here in Australia, Schneider Electric is actively looking to incorporate AI into its C-Bus/Push Controls systems, Edge controllers and cloud platform.

“It’s still early days for our solutions with AI but our journey is beginning with new Edge controllers and the cloud platform to collect data and create new algorithms to offer users intelligent buildings,” says Schneider Electric smart space director Ben Green.

“As we progress in developing convergent systems, we move away from a ‘bus’ solution or ‘smart app’ to a smart stack of Internet of Things (IoT) connected devices, Edge (on premise) controllers and cloud processing. This then lends itself to AI, both from our internal development and our ‘enabled’ partners.

“This transition is happening now and will be accelerated with our new Push Plus solution and C-Bus Network Automation Controllers. The AI focus for us will take the form of features for both system integrators during the deployment and site management stages, and the customer in smart building control.”
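Schneider Electric hasn’t published its Edge/cloud internals, but the architecture Ben describes is a common pattern: an on-premise controller buffers local sensor readings and ships them to a cloud platform for analytics. The endpoint, payload and polling function below are placeholders, not Schneider’s actual API.

```python
import json
import time
import urllib.request

# Sketch of the edge-to-cloud pattern: an on-premise controller batches
# sensor readings and ships them to a cloud platform for analytics.
# The endpoint and reading source are placeholders, not Schneider
# Electric's actual API.
CLOUD_ENDPOINT = "https://cloud.example.com/telemetry"  # placeholder

def read_sensors():
    # Stand-in for polling devices on the local bus.
    return {"timestamp": time.time(), "temp_c": 22.5, "lux": 310}

def push_batch(batch):
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(batch).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

batch = [read_sensors() for _ in range(10)]  # buffer locally at the edge
# push_batch(batch)  # would upload once a real endpoint exists
```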

All these AI solutions, like many available on the market, are currently driven by data and as a result, most of the regulation around AI technology has been focused on cyber security. This includes the European Union’s General Data Protection Regulation (GDPR) and Australia’s Notifiable Data Breaches scheme.

“Privacy and safety are paramount when dealing with sensitive data and should remain the focal point to ensure AI innovators focus on delivering solutions that are safe, reliable and secure,” says Ben.

The question now, though, is whether there is a need to go beyond current laws and implement specific AI regulation, given that today’s systems are only ‘narrow’ or ‘weak’ AI, where programs are focused on one narrow task.

“In terms of general AI, where machines can successfully perform any intellectual task that a human being can, I don’t think we’re even close to achieving that,” says Alex.

“We don’t have the tools right now to build general purpose technology and AI, nor do we understand how we go about doing it. It’s so far off from what we’re doing today that I think we’re looking at 15-20 years and maybe even longer before that’s developed.”

AI advocates who argue against regulation say that because the potential of AI hasn’t been fully unlocked yet, and requires more research and development, implementing regulation now would stifle innovation. They argue that regulation could slow or prohibit the development of capabilities that would be overwhelmingly beneficial.

On the other hand, those who argue for regulation say a proactive approach to regulation needs to be taken to safeguard against the threat of AI technology being abused.

There are already projects underway to develop ‘strong’ AI, such as the Allen Institute for Artificial Intelligence’s Project Alexandria, which was launched with the support of Microsoft co-founder and philanthropist Paul G. Allen. Project Alexandria is aiming to advance AI’s common sense abilities as a precondition for general intelligence.

If the project is successful in developing this technology, the social, economic and political impacts and implications are unknown. Additionally, current laws do not stipulate who would be held responsible if AI causes harm. Advocates say applying solutions on a case-by-case basis risks confusion and uncertainty, which could lead to knee-jerk reactions fuelled by public anger. They argue government oversight is therefore needed now to create one unifying framework and ensure public safety.

“At the moment what we’re talking about is the great unknown of what happens when we actually set AI free to start designing new systems,” says David.

“AI is going to be limited to some degree by its human design but when it starts designing its own systems then we can’t fully predict what’s going to happen other than potential singularity.”

However, Alex says government regulation is not always the best approach.

“I think regulation is absolutely necessary but you need to have people who truly understand the capabilities and the risks of these technologies to figure out how to regulate it. Personally, I’m a fan of creating networks of manufacturers that self-regulate each other.”

To a certain extent this is already occurring, with Google, Facebook, Amazon, Microsoft, DeepMind and IBM teaming up to create the Partnership on AI. Publicly announced in 2016, the Partnership aims to develop best practices for AI systems, create dialogue on the potential influences of the technology and educate the public about AI.

This is similar to the system of self-regulation adopted in the early days of the internet, when the US actively avoided regulation so as not to stunt the technology’s early growth. This lack of government regulation, some people argue, allowed innovations like internet telephony and social media to grow at unprecedented rates.

Trying to manage, predict or regulate AI technology could impede similar innovations in the AI sector. The UK Government’s 2017 independent review into AI, Growing the Artificial Intelligence Industry in the UK, found that direct regulation could restrict competition.

“For small companies, negotiating agreements and establishing practices can present major obstacles and costs. These conditions could make it difficult for new companies to enter some markets, potentially to the detriment of outcomes for the public. These barriers for small companies in particular could restrict the focus of AI innovation to areas that are core to the major incumbents’ services, and draw resources away from other areas of major public benefit, including innovation in public services or medical research,” the report states.

Instead, the report recommends a framework be created to explain how decisions are made by an AI system to ensure companies operate within the limits of the law.
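The report doesn’t prescribe a mechanism, but one lightweight way to explain how decisions are made is to attach to every automated action a record of the inputs and rule that produced it. The cooling rule below is invented; the point is the auditable decision record.

```python
from dataclasses import dataclass, field
import time

# Sketch of decision-level explainability along the lines the UK report
# recommends: every automated action carries a record of the inputs and
# logic that produced it. The cooling rule itself is invented.

@dataclass
class Decision:
    action: str
    inputs: dict
    rationale: str
    timestamp: float = field(default_factory=time.time)

def decide(temp_c: float, occupied: bool) -> Decision:
    if occupied and temp_c > 26.0:
        return Decision(
            action="start_cooling",
            inputs={"temp_c": temp_c, "occupied": occupied},
            rationale="room occupied and temperature above 26C threshold",
        )
    return Decision("no_action", {"temp_c": temp_c, "occupied": occupied},
                    "conditions for cooling not met")

d = decide(27.3, True)
print(d)  # auditable record: what was done, on what inputs, and why
```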

Such a framework is similar to what the Partnership on AI is seeking to achieve: advancing wider public understanding and awareness of AI and how it’s progressing by sharing insights into AI’s core technologies.

“A lot of the concerns around security and privacy are people not having a good understanding of what AI companies are really doing. That’s why we need to be mindful, thoughtful and careful in setting good guidelines and rules for how these systems should operate to show that AI doesn’t represent a threat to the community,” says Alex.

“AI is only a danger to society in the same way the internet was in the early ‘90s in terms of hacking but just like the internet AI is also creating a lot of good. It’s an unstoppable force but that doesn’t necessarily mean computers are taking over and that we’re all going to be under attack, scrutiny and risk.”
