NeuReality’s First AI Inference Server-on-a-Chip Validated and Moved to Production
A milestone for the semiconductor industry: NeuReality’s first-in-class Network Addressable Processing Unit (NAPU) passes quality assurance and moves to TSMC’s manufacturing facility, promising higher-performance, more affordable, and easier-to-use data center infrastructure that unlocks the full potential of AI inference
[AI Hardware Summit, Santa Clara] – NeuReality has moved the final, validated design of its 7nm AI-centric NR1 chip to TSMC for manufacturing, creating the world’s first AI-centric server-on-a-chip (SoC). A major step for the semiconductor industry, NeuReality will transform AI inference solutions used in a wide range of applications – from natural language processing and computer vision to speech recognition and recommendation systems.
With the mass deployment of AI as a service (AIaaS) and infrastructure-hungry applications such as ChatGPT, NeuReality’s solution is crucial for an industry urgently in need of affordable access to modernized AI inference infrastructure. In trials with AI-centric server systems, NeuReality’s NR1 chip demonstrated 10 times the performance at the same cost when compared to conventional CPU-centric systems. These results position NeuReality’s technology as a bellwether for cost-effective, highly efficient execution of AI inference.
AI inference traditionally requires significant software activity at eye-watering costs. NeuReality’s final steps from validated design to manufacturing – known in the industry as “tape-out” – signal a new era of highly integrated, highly scalable AI-centric server architecture.
The NR1 chip represents the world’s first NAPU (Network Addressable Processing Unit) and will be seen as an antidote to an outdated CPU-centric approach to AI inference, according to Moshe Tanach, Co-Founder and CEO of NeuReality. “In order for inference-specific deep learning accelerators (DLAs) to perform at full capacity, free of existing system bottlenecks and high overheads, our solution stack, coupled with any DLA technology out there, enables AI service requests to be processed faster and more efficiently,” said Tanach.
“Function for function, hardware runs faster and parallelizes much more than software. As an industry, we’ve proven this model by offloading the deep learning processing function from CPUs to DLAs such as GPU or ASIC solutions. As Amdahl’s law suggests, it is time to shift the acceleration focus to the other functions of the system to optimize the whole of AI inference processing. NR1 offers an unprecedented competitive alternative to today’s general-purpose server solutions, setting a new standard for the direction our industry must take to fully support the AI Digital Age,” added Tanach.
NeuReality is moving the dial for the industry, empowering the transition from a largely software-centric approach to a hardware offloading approach in which multiple NR1 chips work in parallel to avoid system bottlenecks. Each NR1 chip is a network-attached heterogeneous compute device with multiple tiers of programmable compute engines, including a PCIe interface to host any DLA; an embedded network interface controller (NIC); and an embedded AI-hypervisor, a hardware-based sequencer that controls the compute engines and shifts data structures between them. Hardware acceleration throughout NeuReality’s automated SDK flow lowers the barrier to entry for small, medium, and large organizations that need excellent performance, low power consumption, and affordable infrastructure – as well as ease of use for AI inference services.
“We are excited about our first-generation NAPU product – proven, tested, and ready to move to manufacturing. It’s full steam ahead as we reach this highly anticipated manufacturing stage with our TSMC partners. Our plan remains to start shipping product directly to customers by the end of the year,” said Tanach.
About NeuReality
Founded in 2019, NeuReality Ltd. is an AI technology innovation company creating purpose-built AI platforms for ultra-scalability of real-life AI applications. Its technology transforms how companies deploy AI inference with a holistic system solution that supports limitless deep learning models through an easily applied, integrated, end-to-end approach.
The company is led by a seasoned management team with extensive experience in data center architecture, systems, and software. NeuReality’s co-founders are CEO Moshe Tanach, VP Operations Tzvika Shmueli, and VP VLSI Yossi Kasus. Prior to founding NeuReality, Moshe Tanach served in several executive roles as Director of Engineering at Marvell and Intel and AVP R&D at DesignArt-Networks (later acquired by Qualcomm). Tzvika Shmueli served as VP of Backend at Mellanox Technologies and VP of Engineering at Habana Labs. Yossi Kasus served as Senior Director of Engineering at Mellanox and head of VLSI at EZchip.
NeuReality has developed the first complete system-level solution specifically designed to address the challenges of optimizing, deploying, managing, and scaling AI workloads. NeuReality is enabling AI everywhere by offering an overarching solution for inference deployment that lowers cost, complexity, and power consumption with a revolutionary new AI-centric architecture, SDK, and API.