Connected Magazine

NVIDIA and Microsoft boost AI cloud computing with launch of hyperscale GPU accelerator

By Adelle King
14/03/2017

NVIDIA, in partnership with Microsoft, today unveiled blueprints for a new hyperscale GPU accelerator to drive AI cloud computing.

The new HGX-1 hyperscale GPU accelerator, an open-source design released in conjunction with Microsoft’s Project Olympus, will provide hyperscale data centres with a fast, flexible path to AI. It establishes an industry standard for cloud-based AI workloads that can be rapidly and efficiently adopted to help meet surging market demand in fields such as autonomous driving, personalised healthcare, superhuman voice recognition, data and video analytics, and molecular simulations.

“The HGX-1 hyperscale GPU accelerator will do for AI cloud computing what the ATX standard did to make PCs pervasive today. It will enable cloud service providers to easily adopt NVIDIA GPUs to meet surging demand for AI computing,” says NVIDIA founder and CEO Jen-Hsun Huang.


The HGX-1 is powered by eight NVIDIA Tesla P100 GPUs in each chassis and features an innovative switching design based on NVIDIA NVLink interconnect technology and the PCIe standard. This enables a CPU to dynamically connect to any number of GPUs, allowing cloud service providers that standardise on the HGX-1 infrastructure to offer customers a range of CPU and GPU machine instance configurations.
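
To make that multi-GPU topology a little more concrete, the short sketch below (a hypothetical illustration in plain CUDA C, not code from NVIDIA, Microsoft or the HGX-1 reference design; the eight-GPU count simply mirrors the chassis described above) enumerates the GPUs visible to a host and reports which pairs can reach each other directly via peer-to-peer access over NVLink or PCIe.

// Hypothetical sketch: enumerate the GPUs in a multi-GPU node (for example an
// eight-GPU HGX-1-style chassis) and report which pairs support direct
// peer-to-peer access. Uses only standard CUDA runtime API calls.
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA devices visible to this host\n");
        return 1;
    }
    printf("GPUs visible: %d\n", count);

    // Check every ordered pair of devices for peer-to-peer accessibility.
    for (int a = 0; a < count; ++a) {
        for (int b = 0; b < count; ++b) {
            if (a == b) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, a, b);
            printf("GPU %d -> GPU %d peer access: %s\n",
                   a, b, canAccess ? "yes" : "no");
        }
    }
    return 0;
}

On a machine with NVLink-connected GPUs this reports “yes” for directly linked pairs; on a single-GPU workstation it simply prints the device count.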

The highly modular design of the HGX-1 allows for optimal performance across workloads and provides up to 100x faster deep learning performance compared with legacy CPU-based servers. It is also estimated to cost one-fifth as much for AI training and one-tenth as much for AI inferencing.

“The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast-growing machine learning workloads, and its unique design allows it to be easily adopted into existing data centres around the world,” wrote Kushagra Vaid, Azure Hardware Infrastructure general manager and engineer at Microsoft, in a blog post.

The HGX-1 offers existing hyperscale data centres a quick, simple path to becoming AI-ready.

Microsoft, NVIDIA and Ingrasys (a Foxconn subsidiary) collaborated on the architecture and design of the HGX-1 platform, and the companies are now sharing it widely as part of Microsoft’s Project Olympus contribution to the Open Compute Project. This is a consortium whose mission is to apply the benefits of open source to hardware and to rapidly increase the pace of innovation in, near and around the data centre and beyond. NVIDIA has announced it will be joining the consortium.

Sharing the reference design with the broader Open Compute Project community means that enterprises can easily purchase and deploy the same design in their own data centres.
