ROHM’s Newly Developed AI Chip Predicts Failures in Electronic Devices

ROHM Semiconductor today announced that it has developed an on-device learning AI chip (an SoC with an on-device learning AI accelerator) for edge computers and endpoints in the IoT field. The new AI chip uses artificial intelligence to predict failures (predictive failure detection) in real time, with ultra-low power consumption, in electronic devices equipped with motors and sensors.

Generally, AI chips achieve artificial intelligence functions by performing learning and inference. Learning requires a large amount of data to be captured, compiled into a database, and updated as needed, so an AI chip that performs learning needs substantial computing power and consequently consumes a large amount of power. Until now, it has therefore been difficult to develop AI chips that can learn in the field at low power, which edge computers and endpoints need in order to build an efficient IoT ecosystem.

Based on an ‘on-device learning algorithm’ developed by Professor Matsutani of Keio University, ROHM’s newly developed AI chip consists mainly of an AI accelerator (an AI-dedicated hardware circuit) and ROHM’s high-efficiency 8-bit CPU ‘tinyMicon MatisseCORE™’. Combining the 20,000-gate ultra-compact AI accelerator with the high-performance CPU enables learning and inference at an ultra-low power consumption of just a few tens of mW (roughly 1,000 times lower than that of conventional AI chips capable of learning). Because anomaly detection results (an anomaly score) can be output numerically for unknown input data at the site where the equipment is installed, without involving a cloud server, the chip enables real-time failure prediction in a wide range of applications.
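The press release does not disclose the details of the on-device learning algorithm, but the general idea of score-based anomaly detection on streaming sensor data can be illustrated with a minimal sketch. The C program below is purely hypothetical: the channel names, constants, and the running mean/variance model are illustrative stand-ins, not ROHM’s or Professor Matsutani’s actual method. It ‘learns’ the normal operating profile of a few sensor channels and then outputs a numeric anomaly score for each new sample.

```c
/*
 * Illustrative sketch only: a toy score-based anomaly detector.
 * It keeps an exponentially weighted running mean/variance per sensor
 * channel ("learning" on normal data) and reports a numeric anomaly
 * score for each new sample (inference). All names are hypothetical.
 */
#include <math.h>
#include <stdio.h>

#define NUM_CHANNELS 4      /* e.g. motor current, vibration, temperature, load */
#define ALPHA        0.01f  /* update rate of the running statistics */

typedef struct {
    float mean[NUM_CHANNELS];
    float var[NUM_CHANNELS];
} detector_t;

/* Initialize statistics from a first "known good" sample. */
static void detector_init(detector_t *d, const float *sample)
{
    for (int i = 0; i < NUM_CHANNELS; i++) {
        d->mean[i] = sample[i];
        d->var[i]  = 0.01f;  /* small floor to avoid division by zero */
    }
}

/* Update the model with a sample assumed to be normal (on-device learning). */
static void detector_learn(detector_t *d, const float *sample)
{
    for (int i = 0; i < NUM_CHANNELS; i++) {
        float delta = sample[i] - d->mean[i];
        d->mean[i] += ALPHA * delta;
        d->var[i]  += ALPHA * (delta * delta - d->var[i]);
    }
}

/* Return a numeric anomaly score: mean squared normalized deviation. */
static float detector_score(const detector_t *d, const float *sample)
{
    float score = 0.0f;
    for (int i = 0; i < NUM_CHANNELS; i++) {
        float z = (sample[i] - d->mean[i]) / sqrtf(d->var[i]);
        score += z * z;
    }
    return score / NUM_CHANNELS;
}

int main(void)
{
    detector_t d;
    float normal[NUM_CHANNELS]  = { 1.0f, 0.5f, 25.0f, 0.1f };
    float anomaly[NUM_CHANNELS] = { 3.0f, 2.5f, 60.0f, 0.9f };

    detector_init(&d, normal);
    for (int t = 0; t < 100; t++)   /* learn the normal operating profile */
        detector_learn(&d, normal);

    printf("score (normal)  = %g\n", detector_score(&d, normal));
    printf("score (anomaly) = %g\n", detector_score(&d, anomaly));
    return 0;
}
```

In a real device, the learning and scoring steps would run against live motor and sensor readings, and the resulting anomaly score would be compared with an application-specific threshold to flag an impending failure.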

Going forward, ROHM plans to incorporate the AI accelerator used in this AI chip into various IC products for motors and sensors. Commercialization is scheduled to start in 2023, with mass production planned in 2024.

Professor Hiroki Matsutani, Dept. of Information and Computer Science, Keio University, Japan

“As IoT technologies such as 5G communication and digital twins advance, cloud computing will be required to evolve, but processing all the data on cloud servers is not always the best solution in terms of load, cost, and power consumption. With the ‘on-device learning’ we research and the ‘on-device learning algorithms’ we have developed, we aim to achieve more efficient data processing on the edge side to build a better IoT ecosystem. Through this collaboration, ROHM has shown us the path to commercialization in a cost-effective manner by further advancing on-device learning circuit technology. I expect the prototype AI chip to be incorporated into ROHM's IC products in the near future.”

Source: https://www.rohm.com/
