
New Study Paves Way for Next-Generation, Energy-Efficient AI Devices

Scientists at the Institute of Industrial Science, the University of Tokyo, have designed and built specialized computer hardware for artificial intelligence (AI) applications, consisting of stacks of memory modules aligned in a three-dimensional (3D) spiral.

Researchers from The University of Tokyo have created a new integrated 3D-circuit architecture for AI applications with spiraling stacks of memory modules, which may lead to specialized machine-learning hardware that uses much less electricity. Image Credit: Institute of Industrial Science, The University of Tokyo.

This work could enable the development of next-generation, energy-efficient AI devices. Machine learning is a form of AI in which computers are trained on example data to make predictions about new cases.

For instance, a smart-speaker algorithm such as Alexa can learn to recognize a user’s voice commands, so it can respond even the first time the user asks for something new. However, training AI systems requires a great deal of electrical energy, raising concerns about their contribution to climate change.

Now, researchers at the Institute of Industrial Science at the University of Tokyo have created a new design that stacks resistive random-access memory modules, each with an oxide-semiconductor (IGZO) access transistor, in a 3D spiral.

Because the on-chip nonvolatile memory sits right next to the processors, electrical signals have a much shorter distance to travel than in conventional computer hardware, making the machine-learning training process faster and more energy-efficient.

Stacking multiple layers of circuits is thus a natural step, because training the algorithm typically requires many operations to run in parallel.

For these applications, each layer's output is typically connected to the next layer's input. Our architecture greatly reduces the need for interconnecting wiring.

Jixuan Wu, Study First Author, Institute of Industrial Science, University of Tokyo

By implementing a system of binarized neural networks, the researchers made the device highly energy-efficient. Rather than allowing the parameters to take any value, they are restricted to either +1 or −1.
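As a rough illustration (not the authors' code), the following Python sketch shows how real-valued parameters can be binarized to +1 or −1 with a sign function; the weight values here are hypothetical.

```python
# Minimal sketch of weight binarization; the weight values are hypothetical.
import numpy as np

def binarize(weights: np.ndarray) -> np.ndarray:
    """Map each real-valued parameter to +1 or -1 according to its sign."""
    return np.where(weights >= 0, 1, -1).astype(np.int8)

# Hypothetical real-valued weights, as ordinary training would produce.
real_weights = np.array([[0.73, -0.12, 0.05],
                         [-0.44, 0.91, -0.67]])

print(binarize(real_weights))
# [[ 1 -1  1]
#  [-1  1 -1]]
```

Because each binarized weight needs only a single bit of storage and multiplications reduce to simple sign changes, this kind of arithmetic maps naturally onto compact, low-power circuitry.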

Restricting the parameters to these two values considerably simplifies the hardware and compresses the amount of data that must be stored. The researchers tested the device on a common AI task: interpreting a database of handwritten digits.
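To make the handwritten-digit task concrete, here is a hypothetical sketch, again not the paper's implementation, of a two-layer binarized network processing one 28 × 28 digit image. The layer sizes and random weights are placeholders chosen only for illustration, and the sketch echoes the point in the quote above that each layer's output feeds the next layer's input.

```python
# Hypothetical two-layer binarized network for a 28x28 digit image.
# Layer sizes (784 -> 128 -> 10) and random weights are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def binarize(x: np.ndarray) -> np.ndarray:
    # int32 avoids overflow in the matrix products below; real hardware
    # would store each +/-1 value as a single bit.
    return np.where(x >= 0, 1, -1).astype(np.int32)

w1 = binarize(rng.standard_normal((784, 128)))  # first-layer weights
w2 = binarize(rng.standard_normal((128, 10)))   # second-layer weights

image = binarize(rng.standard_normal(784))      # stand-in for a flattened digit

hidden = binarize(image @ w1)   # each layer's +/-1 output feeds the next layer
scores = hidden @ w2            # integer score for each of the ten digit classes

print("predicted digit:", int(np.argmax(scores)))
```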

The researchers demonstrated that increasing the size of each circuit layer improved the accuracy of the algorithm, up to a maximum of about 90%.

In order to keep energy consumption low as AI becomes increasingly integrated into daily life, we need more specialized hardware to handle these tasks efficiently.

Masaharu Kobayashi, Study Senior Author and Associate Professor, Institute of Industrial Science, University of Tokyo

This study is a crucial step toward the “internet of things,” in which many small AI-enabled appliances communicate as part of an integrated “smart home.”

Source: https://www.iis.u-tokyo.ac.jp/en/
