IBM Research Unveils “NorthPole” AI Chip, Runs 22x Faster Than Current Chip Offerings

IBM Research has developed its first chip dedicated to AI, known as NorthPole, which reportedly delivers up to 22 times faster performance than competing industry offerings.

IBM Research Combines Power of Neural Networks & Advanced Chip Architectures to Deliver Cutting-Edge AI Performance In NorthPole Accelerator

The news comes from a paper published in the journal Science describing IBM's upcoming AI accelerator, codenamed NorthPole. With the "AI frenzy" sweeping the industry, many chip manufacturers are building their own solutions in a bid to surpass the computing performance of the industry leaders and meet the surging demand for AI hardware.

The data published by IBM Research suggests that the NorthPole AI chip could set new benchmarks for inference efficiency, largely because of the unconventional approach the company has taken with the chip's architecture.

Dharmendra Modha, the project's lead, is optimistic about the architecture. IBM Research's key idea is to build neural inference directly into the chip itself, interleaving compute and memory across its cores, which is why Modha likens the design to the human brain. Its efficient core-to-core interconnect, coupled with an all-digital architecture, allows much faster on-chip communication, and that is where NorthPole's performance comes from.

Image Source: IBM Research

In terms of specifications, the NorthPole AI chip is fabricated on a 12 nm process node, which by industry standards is fairly dated. Even so, IBM Research reports that, on the ResNet-50 neural network benchmark, the chip outperforms modern AI GPUs built on 4 nm processes. The gains come from architecture rather than transistor scaling, sidestepping the usual reliance on Moore's Law and echoing, to some extent, Huang's Law, which emphasizes chip- and system-level design improvements over process shrinking.

Architecturally, NorthPole blurs the boundary between compute and memory. At the level of individual cores, NorthPole appears as memory-near-compute and from outside the chip, at the level of input-output, it appears as an active memory. This makes NorthPole easy to integrate in systems and significantly reduces load on the host machine.

-Dharmendra Modha via IBM Research

The first promising set of results from NorthPole chips was published today in Science. NorthPole is a breakthrough in chip architecture that delivers massive improvements in energy, space, and time efficiencies, according to Modha. Using the ResNet-50 model as a benchmark, NorthPole is considerably more efficient than common 12-nm GPUs and 14-nm CPUs. (NorthPole itself is built on a 12 nm process node.) In both cases, NorthPole is 25 times more energy efficient, measured as the number of frames interpreted per joule of energy required. NorthPole also outperformed them in latency and in the space required to compute, measured as frames interpreted per second per billion transistors.

According to Modha, on ResNet-50, NorthPole outperforms all major prevalent architectures — even those that use more advanced technology processes, such as a GPU implemented using a 4 nm process.

via IBM
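
For context on what those metrics mean in practice, the sketch below shows how they can be computed from raw measurements. It is a minimal Python illustration with entirely hypothetical numbers: the frame counts, energy, and timing figures are invented, and only the 22-billion-transistor count comes from IBM's published description of NorthPole.

```python
# A minimal sketch (hypothetical numbers, not from IBM's paper) of the efficiency
# metrics described above: frames interpreted per joule of energy (energy
# efficiency) and frames per second per billion transistors (space efficiency).

def energy_efficiency(frames: int, joules: float) -> float:
    """Frames interpreted per joule of energy consumed."""
    return frames / joules

def space_efficiency(frames_per_second: float, transistors_billions: float) -> float:
    """Frames per second, normalized per billion transistors on the chip."""
    return frames_per_second / transistors_billions

# Hypothetical measurement window for one accelerator running ResNet-50 inference:
frames = 10_000        # images classified during the window (invented)
joules = 25.0          # energy drawn over the window (invented)
seconds = 2.0          # wall-clock duration of the window (invented)
transistors_b = 22.0   # NorthPole reportedly packs 22 billion transistors

print(f"energy efficiency: {energy_efficiency(frames, joules):.1f} frames/J")
print(f"space efficiency:  {space_efficiency(frames / seconds, transistors_b):.1f} frames/s per B transistors")
print(f"mean latency:      {seconds / frames * 1e3:.3f} ms/frame")
```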

If we look at NorthPole's potential impact on the AI industry as a whole, it is limited to model inferencing, since the chip is not designed for training and cannot run large-scale neural networks such as GPT-4. However, the company's aim with the chip is not the mainstream AI market but workloads focused purely on inference, which is why its reach is confined. It will be interesting to see what kind of performance NorthPole delivers in practice, given the company's claim that it can surpass modern-day NVIDIA AI offerings.
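
To make "model inferencing" concrete, the snippet below runs the same class of workload NorthPole is benchmarked on: a forward pass through a pretrained ResNet-50. It is purely illustrative and uses stock PyTorch on conventional hardware; IBM has not released a public programming interface for NorthPole, so nothing here reflects its actual toolchain.

```python
# Hypothetical illustration only: ResNet-50 inference with stock PyTorch on a
# CPU or GPU. This shows the inference-only workload the article refers to,
# not anything specific to NorthPole.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()                                 # inference only: no training, no gradients

preprocess = weights.transforms()            # preprocessing pipeline these weights expect

image = Image.open("example.jpg").convert("RGB")   # any local image file
batch = preprocess(image).unsqueeze(0)             # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)
    top1 = logits.argmax(dim=1).item()

print(f"predicted ImageNet class index: {top1}")
```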

News Source: IBM Research
