Startup NeuReality Ltd. today announced that it has raised $35 million in funding to commercialize its NR1 chip, which is designed to speed up artificial intelligence applications.
The Series A funding round was led by Samsung Ventures, Cardumen Capital, Varana Capital, OurCrowd and XT High-Tech. Several other institutional backers participated as well, among them SK hynix Inc., one of the world's largest makers of memory chips.
“High-performance and sustainable inference computing is so critical for growth in day-to-day usage of AI,” said XT High-Tech managing director Yoav Sebba. “We are seeing endless AI opportunities developed by software companies, but the existing hardware infrastructure is limiting the deployment of those use cases.”
Israel-based NeuReality has developed a system-on-chip, the NR1, that is designed to perform AI inference tasks. Inference is the term for running neural networks in production after they’re trained. According to NeuReality, the NR1 can power a variety of AI workloads ranging from natural language processing models to product recommendation engines.
NeuReality offers the NR1 as part of an appliance called the NR1-S Inference Server. The appliance features multiple NR1 chips. According to NeuReality, the NR1-S Inference Server can lower costs and power requirements by a factor of 50 compared with competing hardware.
NeuReality also enables customers to deploy its chip in other ways. The startup offers the NR1 as part of an accelerator card, dubbed the NR1-M, that can be attached to a server via a PCIe port. The accelerator card enables companies to integrate NeuReality’s technology with the existing servers in their data centers.
Hardware efficiency often represents a challenge in large-scale AI infrastructure environments. The more chips an AI environment contains, the more processing power is consumed simply managing them, which shrinks the share of processing power left over for machine learning applications.
According to NeuReality, its NR1 chip addresses the challenge by facilitating linear scalability. Linear scalability is the term for a situation where adding more chips to a server cluster doesn’t significantly lower hardware efficiency. Increasing the efficiency of AI infrastructure enables companies to decrease hardware costs, as well as reduce electricity usage.
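The difference between linear and sub-linear scaling can be illustrated with a toy model (purely illustrative; the overhead figure is an assumption, not a NeuReality benchmark). If each chip added to a cluster diverts a fixed slice of capacity to coordination, aggregate useful throughput falls further behind the ideal as the cluster grows:

```python
# Toy model of cluster scaling efficiency (illustrative only; the
# 2%-per-chip overhead is an assumed figure, not a vendor benchmark).

def useful_throughput(chips: int, overhead_per_chip: float = 0.02) -> float:
    """Sub-linear scaling: each added chip costs every chip a slice
    of capacity for management/coordination work."""
    per_chip = max(0.0, 1.0 - overhead_per_chip * chips)
    return chips * per_chip

def linear_throughput(chips: int) -> float:
    """Ideal linear scaling: every chip contributes its full capacity."""
    return float(chips)

for n in (1, 8, 32):
    print(f"{n} chips: ideal={linear_throughput(n):.1f}, "
          f"with overhead={useful_throughput(n):.2f}")
```

Under this model, 32 chips deliver roughly a third of their ideal aggregate throughput, which is the kind of efficiency loss a linearly scalable design aims to avoid.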
Alongside the NR1, NeuReality provides a set of software tools designed to simplify the task of deploying AI applications in production. The startup’s tools also promise to ease application management. NeuReality’s software portfolio includes, among other components, a so-called AI hypervisor that helps customers manage machine learning applications deployed on NR1 chips.
NeuReality has reportedly been shipping prototype implementations of the NR1 to partners since last May. Using its newly announced $35 million funding round, the startup plans to more widely roll out its technology. NeuReality will hire 20 new employees over the next six months to support the effort.