Mini-Symposium: Scaling up Systems and Application Complexity in Analog Neuromorphic and Physical Computing


A concentrated, discussion-oriented one-day symposium (online or in person)


The event is organised by Professor Herbert Jaeger from the University of Groningen and CogniGron.


Time: Thursday March 24, 2022



Van der Valk Hotel, Groningen, NL

No registration fee. The symposium is designed as an on-site, in-person event; all invited speakers will be present. Talks and parts of the discussions will be streamed to the internet, but not interactively. Please register through the online form at


Idea of how this symposium should function: The field (rather: fields) of neuromorphic / unconventional / physical computing spreads over numerous traditional scientific and engineering disciplines. Learning to understand each other’s terminology, motivations, standard working routines and formal methods is as difficult as it is crucial for the long-term productivity of this field (rather: fields…). This needs – above all – time. Time for just talking with each other. With regard to this symposium, we decided to have only a relatively small number of presentations (the four invited talks), which gives us more time than usual to “just talk” with each other – in moderated discussion rounds and, maybe even more importantly, in uncommonly long breaks. This gives us the following schedule for the day (merely indicative; we will be extremely adaptive):


Wednesday March 23

Evening (exact time to be announced): a nice pre-symposium dinner


Thursday March 24

9:00 – 9:10 Welcome from Organizers (Beatriz Noheda, Herbert Jaeger)

9:15 – 10:00 Patty Stabile: From InP Photonic Integrated 2D Matrices for Neuromorphic Computing to 3D Photonic Neurons and Non-Volatile Programmable Photonics

10:00 – 10:15 Discussion with Patty

10:15 – 11:00 Break

11:00 – 11:45 Wilfred van der Wiel: Material learning

11:45 – 12:00 Discussion with Wilfred

12:00 – 14:00 Lunch break

14:00 – 14:45 Bernabé Linares-Barranco: Event-Driven Sensing, Convolution-Processing, and STDP Learning

14:45 – 15:00 Discussion with Bernabé

15:00 – 15:30 Break

15:30 – 16:15 Kwabena Boahen: The Future of Artificial Intelligence: 3-D Silicon Brain

16:15 – 16:30 Discussion with Kwabena

16:30 – 17:00 Rounding up: General discussion: After this day of thinking, what are our views on and expectations for the scaling-up challenge? Here are some trigger questions:

Let us first review and appreciate the complexity levels attained in the digital computing world!

What do we mean by “scaling up”:

- single microchips, or networked systems (what communication signals or “protocols”?)
- complexity of tasks to be solved
- formal theory: the neuro-symbolic integration problem, complex systems maths, combinatorial structures in nonlinear dynamical systems…
- designed or self-organized physical complexity?

Is the idea of a “microchip” appropriate in the first place? (compare with early digital computers before the microchip revolution)

Embedding non-digital systems in a digital ecosystem: the only way to go? Will it be forever like that (i.e. non-digital systems “junior partners” depending on digital environment)?

Will we need to exploit a collection of different physical effects co-located in same physical substrate? (brains do that, have that!) Could such “nano-scale physically diverse” hardware be fabricated?

What are “corridors for reasonable hope” to scale up in the next years / decades?



Professor Patty Stabile, Eindhoven University of Technology, Netherlands

Ripalta (Patty) Stabile is an Associate Professor in the Department of Electrical Engineering, Eindhoven University of Technology (TU/e). She is an expert in Indium Phosphide (InP) large-scale photonic integrated circuits based on semiconductor-optical-amplifier design for high-capacity nodes in next-generation optical networks, as well as in the high-speed electronic control of on-chip integrated systems. She is now applying her state-of-the-art photonic integrated matrices to optical computing and designing architectures based on the co-integration of electronics and photonics: by exploiting the unique strength of optics, which is parallelism, and by drawing inspiration from the way the brain works, she aims to surpass the operational speeds of processors at much lower power consumption. She is also active in the research and development of high-data-capacity, high-speed transceiver modules (design and assembly) for fast and reliable communication between electronic boards. Side activities include the development of low-cost passive coupling concepts for InP multi-input/output-port circuits, as well as the exploration of new materials (aSi, 2D materials, polymers) for augmenting the performance of photonic integrated platforms.

Title: From InP Photonic Integrated 2D Matrices for Neuromorphic Computing to 3D Photonic Neurons and Non-Volatile Programmable Photonics

Abstract: Starting with scaling achievements in photonic integrated fast switches, I will talk about how we use the same switch matrices as photonic neural networks based on arrays of semiconductor optical amplifiers. The envisioned architecture uses multiple-color (wavelength) input signals; an all-optical monolithically integrated neuron and a multi-layer optical neural network are demonstrated, and scalability studies are performed. Afterwards, the best-in-class technologies are identified and a new concept for a 3D neuron and neural network is shown, together with predicted performance, opening up a promising and feasible technology path for neuromorphic photonics. Finally, possibilities for unconventional non-volatile programmable photonics are presented.

Professor Wilfred van der Wiel, University of Twente, Netherlands

Wilfred G. van der Wiel is full professor of Nanoelectronics and director of the BRAINS Center for Brain-Inspired Nano Systems at the University of Twente, The Netherlands. He holds a second professorship at the Institute of Physics of the Westfälische Wilhelms-Universität Münster, Germany. His research focuses on unconventional electronics for efficient information processing. Van der Wiel is a pioneer in Material Learning at the nanoscale, realizing computational functionality and artificial intelligence in designless nanomaterial substrates through principles analogous to Machine Learning. He is the author of 120 journal articles that have received 7,500 citations.

Title: Material learning

Abstract: The strong increase in digital computing power in combination with the availability of large amounts of data has led to a revolution in machine learning. Computers now exhibit superhuman performance in activities such as pattern recognition and board games. However, the implementation of machine learning in digital computers is intrinsically wasteful, with energy consumption becoming prohibitively high for many applications. For that reason, people have started looking at natural information processing systems, in particular the brain, that operate much more efficiently. Whereas the brain utilizes wet, soft tissue for information processing, one could in principle exploit any material and its physical properties to solve a problem. Here we give examples of how nanomaterial networks can be trained using the principle of material learning to take full advantage of the computational power of matter [1].

We have shown that a designless network of gold nanoparticles can be configured into Boolean logic gates using artificial evolution [2]. We further demonstrated that this principle is generic and can be transferred to other material systems. By exploiting the nonlinearity of a nanoscale network of boron dopants in silicon, we can significantly facilitate classification. Using a convolutional neural network approach, it becomes possible to use our device for handwritten digit recognition [3]. An alternative material-learning approach is followed by first mapping our Si:B network onto a deep neural network model, which allows for applying standard machine-learning techniques to find functionality [4]. Finally, we show that the widely applied machine-learning technique of gradient descent can be directly applied in materio, opening up the pathway to autonomously learning hardware systems [5].

Figure 1: Artist’s impression of digit recognition by a dopant network processing unit in silicon [3]
Figure 2: Artist’s impression of training a dopant network processing unit by using a deep neural network [4]
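The evolution-in-materio procedure behind the Boolean-gate result [2] can be sketched in a toy simulation. The `device` function below is a hypothetical nonlinear black box standing in for the physical nanomaterial network, and the gene count, population size, and mutation scale are illustrative assumptions, not the published experimental settings; only the overall search loop (evaluate, select, mutate) reflects the described approach.

```python
import math
import random

def device(x1, x2, c):
    # Toy stand-in for a physical nonlinear device: the "control
    # voltages" c shape how the two data inputs are mixed. The real
    # experiments probe a designless nanomaterial network instead.
    return math.tanh(c[0] + c[1] * x1 + c[2] * x2 + c[3] * x1 * x2)

XOR = {(0, 0): False, (0, 1): True, (1, 0): True, (1, 1): False}

def fitness(c, target):
    # Number of truth-table rows the thresholded device output matches.
    return sum((device(x1, x2, c) > 0.0) == want
               for (x1, x2), want in target.items())

def evolve(target, n_genes=4, pop=30, gens=300, seed=0):
    # Genetic search over control settings: keep the fitter half of the
    # population, refill with Gaussian-mutated copies of the survivors.
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(n_genes)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: fitness(c, target), reverse=True)
        if fitness(population[0], target) == len(target):
            break
        parents = population[:pop // 2]
        population = parents + [
            [g + rng.gauss(0, 0.2) for g in rng.choice(parents)]
            for _ in range(pop - len(parents))
        ]
    return max(population, key=lambda c: fitness(c, target))

# A few restarts make the toy search robust to unlucky initializations.
best = max((evolve(XOR, seed=s) for s in range(5)),
           key=lambda c: fitness(c, XOR))
```

In the actual experiments the fitness evaluation is a physical measurement rather than a function call; the evolutionary loop stays the same.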









Professor Bernabé Linares-Barranco, Seville Microelectronics Institute, CSIC, Spain

Bernabé Linares-Barranco (Fellow, IEEE) is full Professor of Research at the Seville Microelectronics Institute. He received the B.S. degree in electronic physics, the M.S. degree in microelectronics, and the Ph.D. degree in high-frequency OTA-C oscillator design from the University of Seville, Spain, and the Ph.D. degree in analog neural network design from Texas A&M University, College Station, USA. From September 1988 to August 1991, he was a graduate student with the Department of Electrical Engineering, Texas A&M University. Since June 1991, he has been a Tenured Scientist at the “Instituto de Microelectrónica de Sevilla,” IMSE-CNM (CSIC and Universidad de Sevilla). In January 2003 he was promoted to Tenured Researcher, and in January 2004 to Full Professor. Since February 2018, he has been the Director of the “Instituto de Microelectrónica de Sevilla.” He has been involved with circuit design for telecommunication circuits, VLSI emulators of biological neurons, VLSI neural-based pattern recognition systems, hearing aids, precision circuit design for instrumentation equipment, and VLSI transistor mismatch parameter characterization. For the past 25 years he has been deeply involved with neuromorphic spiking circuits and systems, with strong emphasis on vision and on exploiting nanoscale memristive devices for learning. He is a co-founder of two start-ups, Prophesee SA and GrAI-Matter-Labs SAS, both on neuromorphic hardware. Since 2021, he has been the Chief Editor of Frontiers in Neuromorphic Engineering.

Title: Event-Driven Sensing, Convolution-Processing, and STDP Learning

Abstract: In this talk we start by reviewing the principles of brain vision and processing by spikes, discuss implementations of bio-inspired Dynamic Vision Sensors, show how to process event-driven convolutions for feature extraction on dedicated chips, FPGAs and the SpiNNaker platform, and how to apply this to event-driven stereo vision. Additionally, we will show how to implement Spike-Timing-Dependent Plasticity (STDP) on SpiNNaker while reducing the spike count to a minimum for optimum recognition, and we will introduce the concept of Stochastic Binary STDP and how it reduces hardware resources and energy consumption.
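For readers unfamiliar with STDP, the standard pair-based rule can be sketched as follows. This is the generic textbook rule, not the speaker's SpiNNaker or stochastic-binary implementation, and the learning rates and time constant are illustrative assumptions.

```python
import math

# Pair-based STDP: a synapse is strengthened when the presynaptic spike
# precedes the postsynaptic one, and weakened otherwise, with an
# exponentially decaying dependence on the spike-time difference.
A_PLUS = 0.01    # potentiation amplitude (assumed value)
A_MINUS = 0.012  # depression amplitude (assumed value)
TAU = 20.0       # time constant in ms (assumed value)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: causal pairing, potentiate
        return A_PLUS * math.exp(-dt / TAU)
    if dt < 0:    # post before pre: anti-causal pairing, depress
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0
```

A stochastic binary variant, as in the talk title, replaces such graded updates with probabilistic all-or-nothing weight flips, which is what saves hardware resources.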

Professor Kwabena Boahen, Stanford University, USA

Kwabena Boahen is Professor of Bioengineering and Electrical Engineering at Stanford University. He received his B.S. and M.S. in electrical engineering in 1989 from Johns Hopkins University and his PhD in computation and neural systems in 1997 from the California Institute of Technology. For his PhD thesis, Boahen designed and fabricated a silicon chip emulating the functioning of the retina.

After completing his PhD, Boahen joined the faculty of the University of Pennsylvania, where he held the Skirkanich Term Junior Chair. In 2005 he moved to Stanford University. Boahen founded and directs Stanford’s Brains in Silicon lab, which develops silicon integrated circuits that emulate the way neurons compute, and computational models that link neuronal biophysics to cognitive behavior. This interdisciplinary research bridges neurobiology and medicine with electronics and computer science, bringing together these seemingly disparate fields.

Title: The Future of Artificial Intelligence: 3-D Silicon Brain

Abstract: Artificial intelligence has benefited from shrinking transistors and connecting them densely in two dimensions to reduce the energy cost of calculating. Now the energy cost of signaling greatly exceeds that of calculating, reducing the benefits of further miniaturization.

Stacking circuits shortens distances and thereby reduces signaling’s energy cost. But stacking reduces the surface area for dissipating heat, forcing a 3-D processor to operate serially rather than in parallel. A fundamental solution would sparsify and enrich signals by exchanging binary numbers for n-ary numbers. Instead of a signal from a pair of units in a neural net encoding a 0 or a 1, a signal from an entire layer of, say, 1,000 units encodes one of 1,000 different digits. And a sequence of 10 such signals encodes a 10-digit 1000-ary number. Decoding these n-ary sequences would require exchanging Boolean logic for operators inseparable in space and time. Advances in cortical physiology suggest that this spatiotemporal inseparability could be achieved with dendrite-like detectors that weight an input based on where and when it is received. This could allow a silicon brain to scale like a biological brain in energy and heat – linearly with the number of neurons – and process in parallel in 3-D.
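The counting argument in the abstract can be checked with a short back-of-envelope calculation; the numbers below simply reproduce the abstract's own example (1,000 units per layer, sequences of 10 signals).

```python
import math

n = 1000      # units in one layer; one signaling event selects 1 of n
digits = 10   # length of the n-ary sequence

bits_per_event = math.log2(n)         # ~9.97 bits, vs. 1 bit per binary event
values = n ** digits                  # 1000**10 = 10**30 distinct numbers
bits_total = digits * bits_per_event  # ~99.7 bits per 10-event sequence

# A binary code would need about 100 signaling events to carry the same
# message; the n-ary code needs 10, each activating only 1 of 1,000 units.
binary_events = math.ceil(bits_total)
```

The sparsity is the point: information per event grows as log2(n), while only one of the n units is active per event.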



[1] C. Kaspar et al., Nature 594, 345 (2021).
[2] S.K. Bose et al., Nature Nanotechnol. 10, 1048 (2015).
[3] T. Chen et al., Nature 577, 341 (2020).
[4] H.-C. Ruiz Euler et al., Nature Nanotechnol. 15, 992 (2020).
[5] M.N. Boon et al. (2021).


The POST-DIGITAL Mini-Symposium is supported by the projects FONTE, MENTOR and MULTIPLY.