
Synthesizing an Artificial Synapse for Artificial Intelligence

Pitt Engineer Feng Xiong Receives $500K NSF CAREER Award for Advancing AI Neural Network Technology

PITTSBURGH (Dec. 6, 2019) — In science fiction stories from “I, Robot” to “Star Trek,” an android’s “positronic brain” enables it to function like a human, but with tremendously more processing power and speed. In reality, the opposite is true: a human brain, which today is still more proficient than CPUs at cognitive tasks like pattern recognition, needs only 20 watts of power to complete a task, while a supercomputer requires more than 50,000 times that amount of energy.

For that reason, researchers are turning to neuromorphic computing and artificial neural networks that work more like the human brain. However, with current technology, it is both challenging and expensive to replicate the spatio-temporal processes native to the brain, like short-term and long-term memory, in artificial spiking neural networks (SNNs).

Feng Xiong, PhD, assistant professor of electrical and computer engineering at the University of Pittsburgh’s Swanson School of Engineering, received a $500,000 CAREER Award from the National Science Foundation (NSF) for his work developing the missing element, a dynamic synapse, that will dramatically improve the energy efficiency, bandwidth and cognitive capabilities of SNNs.

“When the human brain sees rain and then feels wetness, or sees fire and feels heat, the brain’s synapses link the two ideas, so in the future, it will associate rain with wetness and fire with warmth. The two ideas are strongly linked in the brain,” explains Xiong. “Computers, on the other hand, need to be fed massive datasets to do the same task. Our dynamic synapse would mimic the brain’s ability to create neuronal connections as a function of the timing differences between stimulations, significantly improving the energy efficiency required to perform a task.” 

Current non-volatile memory devices that have been studied for use as artificial synapses in SNNs haven’t measured up: they are designed to retain data permanently and aren’t suited for the spatio-temporal dynamics and high precision that the human brain is capable of. In the brain, it’s not only the information that matters but also the timing of the information—for example, in some situations, the closer two pieces of information are in time, the stronger the synaptic connection between them.

By programming the conductor to carry more electricity for a stronger neural connection, the dynamic synapse can function more like a synapse in the human brain, giving more weight to items that are more closely linked as it learns.
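As a rough illustration of how this kind of timing-dependent strengthening could be expressed in software, the short Python sketch below applies a simple spike-timing-dependent plasticity (STDP) style update: the closer a pair of spikes is in time, the larger the change in synaptic weight. The function name stdp_update, the exponential form, and the constants TAU_MS and LEARN_RATE are illustrative assumptions, not details of Xiong’s device or project.

import math

# Illustrative STDP-style rule: the closer in time a pre-synaptic and a
# post-synaptic spike occur, the larger the change in synaptic weight.
# Time constant and learning rate are assumed values, for demonstration only.
TAU_MS = 20.0        # assumed plasticity time constant (milliseconds)
LEARN_RATE = 0.05    # assumed maximum weight change per spike pair

def stdp_update(weight, t_pre_ms, t_post_ms):
    """Return the new synaptic weight after one pre/post spike pair."""
    dt = t_post_ms - t_pre_ms
    if dt >= 0:
        # Pre-before-post: strengthen, more so when the spikes are close in time.
        weight += LEARN_RATE * math.exp(-dt / TAU_MS)
    else:
        # Post-before-pre: weaken.
        weight -= LEARN_RATE * math.exp(dt / TAU_MS)
    # Keep the weight in a bounded range, like a device conductance window.
    return min(max(weight, 0.0), 1.0)

# Spikes 5 ms apart strengthen the connection more than spikes 40 ms apart.
w = 0.5
print(stdp_update(w, t_pre_ms=0.0, t_post_ms=5.0))   # larger increase
print(stdp_update(w, t_pre_ms=0.0, t_post_ms=40.0))  # smaller increase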

“The resulting change in the electrical conductance (representing the synaptic weight or the synaptic connection strength) in the dynamic synapse will have both a short-term and a long-term component, mimicking the short-term and long-term memory/learning in the human brain,” says Xiong.
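One minimal way to picture that dual behavior is to model the device conductance as a persistent long-term component plus a volatile short-term component that decays between stimulations and gradually consolidates with repeated use. The class name DynamicSynapse and every constant in the Python sketch below are assumptions made for illustration, not measured device parameters.

import math

# Toy model of a dynamic synapse whose conductance (synaptic weight) has a
# long-term component that persists and a short-term component that decays.
# All constants are illustrative assumptions, not device measurements.
class DynamicSynapse:
    def __init__(self):
        self.long_term = 0.1    # persistent part of the conductance
        self.short_term = 0.0   # volatile part of the conductance
        self.tau_ms = 50.0      # assumed short-term decay time constant

    def stimulate(self, dt_since_last_ms):
        """Apply one stimulation pulse after dt_since_last_ms of idle time."""
        # The short-term component decays while the device is idle.
        self.short_term *= math.exp(-dt_since_last_ms / self.tau_ms)
        # Each pulse boosts the short-term component...
        self.short_term += 0.2
        # ...and a fraction of the accumulated short-term activity consolidates
        # into the long-term component (short-term to long-term memory).
        self.long_term += 0.05 * self.short_term

    @property
    def conductance(self):
        return self.long_term + self.short_term

fast = DynamicSynapse()
for gap_ms in (10, 10, 10):          # rapid, repeated stimulation
    fast.stimulate(gap_ms)

slow = DynamicSynapse()
for gap_ms in (200, 200, 200):       # widely spaced stimulation
    slow.stimulate(gap_ms)

# Closely spaced pulses leave a higher conductance than widely spaced ones.
print(round(fast.conductance, 3), round(slow.conductance, 3))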

Though researchers have demonstrated this kind of technology before in the lab, this project is the first time it will be applied to an SNN. The application could lead to the widespread use of AI and revolutionary advances in cognitive computing, self-driving vehicles, and autonomous manufacturing.

In addition to the research component of the project, Xiong will use the opportunity to engage future engineers in his research. He plans to develop an after-school outreach program, host nanotech workshops with the Pennsylvania Junior Academy of Science, and welcome undergraduate engineering majors at Pitt to engage with the research. 

The project is titled “Scalable Ionic Gated 2D Synapse (IG-2DS) with Programmable Spatio-Temporal Dynamics for Spiking Neural Networks” and will begin on March 1, 2020.

Author: Maggie Pavlick

Contact: Maggie Pavlick