New Spiking Tool from Sandia Enhances Artificially Intelligent Devices

Whetstone, a software tool that sharpens the output of artificial neurons, has allowed neural computer networks to process information up to a hundred times more efficiently than the existing industry standard, say the Sandia National Laboratories scientists who designed it.

Against a background of more conventional technologies, Sandia National Laboratories researchers, from left, Steve Verzi, William Severa, Brad Aimone, and Craig Vineyard hold different versions of emerging neuromorphic hardware platforms. The Whetstone approach makes artificial intelligence algorithms more efficient, enabling them to be implemented on smaller, less power-hungry hardware. (Image credit: Randy Montoya)

The aptly named software, which markedly reduces the amount of circuitry needed to perform autonomous tasks, is expected to speed the spread of artificial intelligence into markets for self-driving cars, mobile phones, and automated image interpretation.

“Instead of sending out endless energy dribbles of information,” Sandia neuroscientist Brad Aimone said, “artificial neurons trained by Whetstone release energy in spikes, much like human neurons do.”

Major artificial intelligence companies have built spiking tools for their own products, but none is as fast or efficient as Whetstone, says Sandia mathematician William Severa. “Large companies are aware of this process and have built similar systems, but often theirs work only for their own designs. Whetstone will work on many neural platforms.”

The open-source code was recently featured in a technical article in Nature Machine Intelligence and has been recommended by Sandia for a patent.

How to sharpen neurons

Artificial neurons are essentially capacitors that absorb and sum electrical charges, which they then discharge in minute bursts of electricity. Computer chips known as neuromorphic systems assemble these neurons into huge networks that imitate the human brain, transmitting electrical stimuli to neurons that fire in no predictable order. This contrasts with the lock-step operation of desktop computers, with their pre-configured electronic processes.
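To make that accumulate-and-fire behavior concrete, the sketch below implements a bare-bones integrate-and-fire neuron in Python. It illustrates the general principle only, not Sandia’s code; the threshold value and reset-to-zero rule are assumptions for the example.

```python
# Minimal integrate-and-fire neuron (illustrative sketch, not Sandia's
# implementation). The neuron absorbs and sums incoming charge, emits a
# binary spike once an assumed threshold is crossed, then discharges.

def integrate_and_fire(inputs, threshold=1.0):
    """Yield 1 when accumulated input reaches `threshold`, else 0."""
    potential = 0.0          # accumulated charge, like a capacitor
    for charge in inputs:
        potential += charge  # absorb and sum the incoming charge
        if potential >= threshold:
            yield 1          # fire a minute burst...
            potential = 0.0  # ...and discharge back to rest
        else:
            yield 0          # stay silent this step

# Small dribbles of input produce only sparse spikes:
print(list(integrate_and_fire([0.3, 0.4, 0.5, 0.1, 0.9, 0.2])))
# -> [0, 0, 1, 0, 1, 0]
```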

Because of this unpredictable firing, neuromorphic systems are often slower than conventional computers, but they also need far less energy to operate. They also require a different approach to programming: otherwise, their artificial neurons fire too often or not often enough, a problem that has hindered their commercial rollout.

Whetstone, which functions as a supplemental computer code tacked on to more conventional software training programs, trains and sharpens artificial neurons by leveraging those that spike only when a sufficient amount of energy—read, information—has been gathered. The training has proved effective at streamlining standard neural networks and is now being evaluated for the emerging technology of neuromorphic systems.
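One way to picture this sharpening is as gradually steepening a smooth activation function during training until it behaves like a hard spike/no-spike threshold. The sketch below illustrates that general idea with a sharpened sigmoid; the sharpness values are assumptions for the example, and this is not the Whetstone library’s actual API.

```python
import numpy as np

# Generic illustration of activation "sharpening" (not the Whetstone
# library's API): a smooth activation is steepened step by step until
# it approximates a binary spike/no-spike threshold.

def sharpened_sigmoid(x, sharpness):
    """Sigmoid that approaches a step function as `sharpness` grows."""
    return 1.0 / (1.0 + np.exp(-sharpness * x))

x = np.array([-1.0, -0.1, 0.1, 1.0])
for sharpness in (1.0, 10.0, 100.0):  # assumed sharpening schedule
    print(sharpness, np.round(sharpened_sigmoid(x, sharpness), 3))
# As sharpness rises, outputs collapse toward clean 0/1 values, so the
# trained weights remain usable when the smooth activation is finally
# replaced by a hard threshold.
```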

Whetstone is an important tool for the neuromorphic community. It provides a standardized way to train traditional neural networks that are amenable to deployment on neuromorphic systems, something that had previously been done in an ad hoc manner.

Catherine Schuman, neural network researcher, Oak Ridge National Laboratory

The strict teacher

The Whetstone process, Aimone said, can be pictured as controlling a class of chatty elementary school students who are asked to identify an object on their teacher’s table. Before Whetstone, the students sent a nonstop stream of sensor input to their overwhelmed teacher, who had to listen to all of it—every bump and chuckle, so to speak—before passing a result into the neural system. This enormous volume of information often requires cloud-based computation to process, or the addition of more local computing equipment plus a sharp rise in electrical power. Both options increase the time and cost of commercial artificial-intelligence products, lessen their security and privacy, and make their adoption less likely.

Under Whetstone, the new, stricter teacher pays attention only to a simple “yes” or “no” from each student—whether they raise their hands to offer an answer—rather than to everything they are saying. Suppose, for instance, the goal is to determine whether a slice of green fruit on the teacher’s table is an apple. Each student is a sensor that may respond to a different feature of what may be an apple: Does it have the right taste, smell, texture, and so on? The student looking for red may answer “no,” while the student looking for green would answer “yes.” When the accumulated answers, yes or no, are electrically strong enough to trigger the neuron’s capacity to fire, that simple result, instead of ceaseless waffling, enters the overall neural system.
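In code, the analogy reduces to counting binary votes against a firing threshold. The sketch below is a hypothetical rendering of the classroom example; the feature names, equal weights, and threshold are invented for illustration.

```python
# The classroom analogy as code (hypothetical feature detectors and
# threshold, invented for illustration). Each "student" casts a binary
# yes/no vote on one feature; the neuron fires only if the weighted
# tally of raised hands reaches its firing threshold.

votes = {
    "looks_green": 1,        # the green-detector raises its hand
    "looks_red": 0,          # the red-detector stays quiet
    "smells_like_apple": 1,
    "feels_like_apple": 1,
}
weights = {name: 1.0 for name in votes}  # assumed equal weighting
THRESHOLD = 3.0                          # assumed firing threshold

tally = sum(weights[name] * vote for name, vote in votes.items())
spike = tally >= THRESHOLD  # one clean yes/no enters the network
print(f"tally={tally}, spike={spike}")   # tally=3.0, spike=True
```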

While Whetstone’s simplifications could conceivably introduce errors, the vast number of participating neurons—often more than a million—supplies information that statistically washes out the inaccuracies caused by the data simplification, said Severa, who is responsible for the mathematics of the program.
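A quick numerical demo (not from the paper; the numbers are invented) shows why those coarse 0/1 reports wash out statistically: when many neurons each give a crude binary version of the same underlying signal, their population average recovers it closely.

```python
import numpy as np

# Illustrative demo of error-averaging across many binary reporters
# (invented values, not from the paper). Each neuron reports a noisy
# 0/1 version of the same analog signal; the population mean converges.

rng = np.random.default_rng(0)
true_signal = 0.7                                 # underlying value
for n_neurons in (10, 1_000, 1_000_000):
    spikes = rng.random(n_neurons) < true_signal  # noisy binary votes
    print(f"{n_neurons:>9} neurons -> mean {spikes.mean():.4f}")
# With a million neurons, the mean lands within about 0.001 of 0.7,
# even though every individual report was only a 0 or a 1.
```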

Combining overly detailed internal information with the huge number of neurons reporting in is a kind of double booking. It’s unnecessary. Our results tell us the classical way—calculating everything without simplifying—is wasteful. That is why we can save energy and do it well.

William Severa, Mathematician, Sandia National Laboratories

Patched programs work best

The software functions best when patched into programs designed to train new artificial-intelligence equipment, so that Whetstone does not have to overcome patterns a network has already learned, with their established energy minimums.
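In practice, that suggests wiring the sharpening into the training loop itself, so the network never has to unlearn smooth-activation habits. The sketch below shows a hypothetical linear sharpening schedule; the ramp shape, epoch count, and maximum sharpness are all assumptions, not the library’s real interface.

```python
# Hypothetical sharpening schedule for a training loop (the ramp shape
# and values are assumptions, not the Whetstone library's real API).
# Training starts with a smooth activation so gradients flow, then
# steepens it so the finished network tolerates hard 0/1 spikes.

def sharpness_schedule(epoch, total_epochs, max_sharpness=100.0):
    """Linearly ramp sharpness from 1.0 up to `max_sharpness`."""
    frac = epoch / max(total_epochs - 1, 1)
    return 1.0 + (max_sharpness - 1.0) * frac

TOTAL_EPOCHS = 5
for epoch in range(TOTAL_EPOCHS):
    k = sharpness_schedule(epoch, TOTAL_EPOCHS)
    # model.fit(...) for one epoch at activation sharpness k goes here
    print(f"epoch {epoch}: sharpness {k:.1f}")
```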

The research is an extension of a Sandia project called Hardware Acceleration of Adaptive Neural Algorithms, which investigated neural platforms in work assisted by Sandia’s Laboratory Directed Research and Development office. The recent work is aided by the Department of Energy’s Advanced Simulation and Computing Program.

Other authors of the paper, besides Aimone and Severa, are Sandia scientists Craig Vineyard, Ryan Dellana, and Stephen Verzi.
