What is neuromorphic computing?
By Samantha Bye
Compared with first-generation artificial intelligence (AI), neuromorphic computing allows AI learning and decision-making to become more autonomous. Currently, neuromorphic systems are focused on deep learning for sensing and perception, skills used in, for example, speech recognition and complex strategic games such as chess and Go. Next-generation AI will mimic the human brain in its ability to interpret and adapt to situations, rather than simply working from formulaic algorithms.
Rather than simply looking for patterns, neuromorphic computing systems will be able to apply common sense and context to what they process. Google famously demonstrated the limitations of purely algorithmic systems when its Deep Dream AI was trained to look for dog faces: it ended up converting any imagery that looked like it might contain dog faces into dog faces.
How does neuromorphic computing work?
This third generation of AI computation aims to imitate the complex network of neurons in the human brain. It requires AI to compute and analyse unstructured data with an efficiency that rivals the biological brain, which consumes less than 20 watts of power yet still outperforms supercomputers at many tasks. The AI analogue of our biological network of neurons and synapses is the spiking neural network (SNN). Artificial neurons are arranged in layers, and each spiking neuron can fire independently and communicate with the others, setting in motion a cascade of changes in response to stimuli.
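To give a feel for how spiking differs from conventional neural-network activations, here is a minimal Python sketch of a leaky integrate-and-fire neuron; the threshold and leak values are arbitrary illustrative choices, not parameters of any particular neuromorphic system.

```python
import numpy as np

def simulate_lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron.

    The membrane potential integrates incoming current, decays ('leaks')
    each timestep, and emits a spike (1) whenever it crosses the threshold,
    after which it resets to zero.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current   # integrate with leak
        if potential >= threshold:               # fire if threshold crossed
            spikes.append(1)
            potential = 0.0                      # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A weak, noisy input only occasionally drives the neuron to spike.
rng = np.random.default_rng(0)
print(simulate_lif_neuron(rng.uniform(0.0, 0.4, size=20)))
```

The neuron stays silent most of the time and emits discrete events only when enough input accumulates, which is what makes spiking systems naturally sparse and event-driven.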
Most AI neural network hardware is based on what is known as the von Neumann architecture, in which memory and processing sit in separate units. Such computers operate by retrieving data from memory, moving it to the processing unit, processing it, and then moving the result back to memory. This back-and-forth is both time-consuming and energy-consuming, and it creates a bottleneck that becomes even more pronounced when large datasets need processing.
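To make the bottleneck concrete, here is a toy Python sketch (our own illustration, not drawn from any benchmark) that charges a fixed, made-up cost for every trip across the memory bus; the point is simply that data movement, not arithmetic, dominates.

```python
# Toy model of the von Neumann bottleneck: every operand must travel
# memory -> processor and every result must travel back. The costs are
# arbitrary illustrative numbers, not real measurements.

TRANSFER_COST = 10   # "energy units" to move one value across the bus
COMPUTE_COST = 1     # "energy units" to perform one operation

def von_neumann_sum(values):
    total_energy = 0
    accumulator = 0
    for v in values:
        total_energy += TRANSFER_COST      # fetch operand from memory
        accumulator += v
        total_energy += COMPUTE_COST       # do the addition
        total_energy += TRANSFER_COST      # write the running total back
    return accumulator, total_energy

def in_memory_sum(values):
    # In-memory computing keeps data where it is stored, so the per-value
    # transfer cost largely disappears.
    total_energy = COMPUTE_COST * len(values)
    return sum(values), total_energy

print(von_neumann_sum(range(1000))[1])  # data movement dominates
print(in_memory_sum(range(1000))[1])    # compute only
```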
In 2017, IBM demonstrated in-memory computing using one million phase-change memory (PCM) devices, which both stored and processed information. This was a natural progression from IBM’s TrueNorth neuromorphic chip, unveiled in 2014. A major step in reducing neuromorphic computers’ power consumption, the massively parallel SNN chip uses one million programmable neurons and 256 million programmable synapses. Dharmendra Modha, IBM Fellow and chief scientist for brain-inspired computing, described it as “literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid.”
An analogue revolution was triggered by the successful building of nanoscale memristive devices, also known as memristors. They offer the possibility of building neuromorphic hardware that performs computational tasks in place and at scale. Unlike silicon complementary metal-oxide-semiconductor (CMOS) circuitry, memristors are switches that store information in their resistance/conductance states. They can also modulate their conductivity based on their programming history, which they retain even when power is lost. Their function is similar to that of human synapses.
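The attraction of conductance-based devices can be sketched in a few lines of Python. The example below (an illustration using assumed, arbitrary values rather than a model of any real memristor array) treats a crossbar of conductances as a weight matrix: applying voltages to the rows yields column currents that amount to a vector-matrix multiplication, performed in the memory array itself via Ohm’s and Kirchhoff’s laws.

```python
import numpy as np

# A memristor crossbar stores a weight matrix as conductances G (siemens).
# Applying voltages V to the rows produces column currents I = V @ G,
# so the multiply-accumulate happens where the weights are stored.
# All values below are arbitrary illustrative numbers.

G = np.array([[1.0e-6, 2.0e-6, 0.5e-6],
              [0.2e-6, 1.5e-6, 1.0e-6]])   # 2 inputs x 3 outputs, in siemens

V = np.array([0.3, 0.1])                    # input voltages, in volts

I = V @ G                                   # output currents, in amperes
print(I)
```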
To act as artificial synapses, memristive devices need to demonstrate synaptic efficacy and synaptic plasticity. Synaptic efficacy refers to carrying out the task with very low power consumption. Synaptic plasticity is analogous to brain plasticity, which we understand through neuroscience: the brain’s ability to forge new pathways based on new learning or, in the case of memristors, new information.
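A minimal sketch of what plasticity can look like in code is given below, using a simplified spike-timing-dependent plasticity (STDP) rule; the learning rate and time constant are illustrative assumptions, and real memristive devices realise this behaviour in analogue physics rather than software.

```python
import math

def stdp_update(weight, t_pre, t_post, lr=0.01, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Simplified spike-timing-dependent plasticity.

    If the presynaptic spike arrives before the postsynaptic spike,
    the connection is strengthened (it likely helped cause the spike);
    if it arrives after, the connection is weakened.
    """
    dt = t_post - t_pre                      # spike timing difference (ms)
    if dt > 0:
        weight += lr * math.exp(-dt / tau)   # potentiation
    else:
        weight -= lr * math.exp(dt / tau)    # depression
    return min(max(weight, w_min), w_max)    # keep within device limits

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)   # pre before post: strengthen
w = stdp_update(w, t_pre=30.0, t_post=22.0)   # pre after post: weaken
print(w)
```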
These devices contribute to the realisation of massively parallel, manycore supercomputer architectures such as SpiNNaker (spiking neural network architecture). SpiNNaker is the largest artificial neural network platform of its kind, using a million general-purpose processors. Despite the high number of processors, it is a low-power, low-latency architecture and, more importantly, highly scalable: to save energy, individual chips and whole boards can be switched off. The project is supported by the European Human Brain Project (HBP), and its creators hope to model up to a billion biological neurons in real time. To put that in perspective, one billion neurons is only around 1% of the scale of the human brain. The HBP grew out of BrainScaleS, an EU-funded research project that began in 2011 and benefitted from the collaboration of 19 research groups from 10 European countries. With neuromorphic technology evolving fast, the race is on: in 2020, Intel Corp announced a three-year project with Sandia National Laboratories to build a brain-based computer of one billion or more artificial neurons.
We will see neuromorphic devices used more and more to complement and enhance CPUs (central processing units), GPUs (graphics processing units) and FPGAs (field-programmable gate arrays). Neuromorphic devices can carry out complex, high-performance tasks, for example learning, searching and sensing, using extremely low power. A real-world example would be instant voice recognition in a mobile phone without the processor having to communicate with the cloud.
Why do we need neuromorphic computing?
Neuromorphic architectures, although informed by the workings of the brain, may help uncover the many things we don’t know about the brain by allowing us to see the behaviour of synapses in action. This could lead to huge strides in neuroscience and medicine. Although advances in neuromorphic processors that power supercomputers continue at unprecedented levels, there is still some way to go in achieving the full potential of neuromorphic technology.
A project like SpiNNaker, although large-scale, can only simulate relatively small regions of the brain. However, even with its current capabilities, it has been able to simulate a part of the brain known as the basal ganglia, a region that we know is affected in Parkinson’s disease. Further study of the simulated activity, with the assistance of machine learning, could provide scientific breakthroughs in understanding why and how Parkinson’s happens.
Intel Labs is a key player in neuromorphic computer science. Researchers from Intel Labs and Cornell University used Intel’s neuromorphic chip, known as Loihi, to enable AI to recognise the odour of hazardous chemicals. Loihi chips use an asynchronous spiking neural network to implement adaptive, fine-grained, parallel computations that are self-modifying and event-driven. By imitating the architecture of the human olfactory bulb, this kind of computation allows odours to be recognised even when surrounded by ‘noise’. The neuroscience involved in the sense of smell is notoriously complex, so this is a huge first for AI and wouldn’t be possible with the old-style transistors used in conventional processing. This kind of discovery could lead to further understanding of memory and of illnesses like Alzheimer’s, which has been linked to loss of smell.
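To give a feel for what ‘event-driven’ means in contrast to clocked, layer-by-layer processing, here is a small Python sketch of an asynchronous spike queue; it is a generic illustration built on assumed data structures and is not Loihi’s programming model or API.

```python
import heapq

# Generic event-driven simulation: work happens only when a spike arrives,
# instead of updating every neuron on every clock tick.

def run_event_driven(initial_spikes, connections, threshold=1.0):
    """initial_spikes: list of (time, neuron_id).
    connections: dict neuron_id -> list of (target_id, weight, delay)."""
    queue = list(initial_spikes)
    heapq.heapify(queue)
    potential = {}                     # membrane potentials, created lazily
    fired = []
    while queue:
        t, n = heapq.heappop(queue)    # process only the earliest spike
        fired.append((t, n))
        for target, weight, delay in connections.get(n, []):
            potential[target] = potential.get(target, 0.0) + weight
            if potential[target] >= threshold:
                potential[target] = 0.0
                heapq.heappush(queue, (t + delay, target))
    return fired

conns = {0: [(1, 1.2, 1.0)], 1: [(2, 1.2, 1.0)]}
print(run_event_driven([(0.0, 0)], conns))
```

Because nothing happens until a spike arrives, idle parts of the network consume no compute at all, which is the property that lets such chips recognise signals amid noise at very low power.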
Learn more about neuromorphic computing and its applications
Artificial intelligence is already helping us to make strides in everyday life, from e-commerce to medicine and finance to security. There is so much more that supercomputers could potentially unlock to help us tackle society’s biggest challenges.