"Mixed-signal implementation of feedback-control optimizer for single-layer spiking neural networks"
SUMMARY
On-chip learning is one of the main missing pieces for adaptive neuromorphic systems. Backpropagation through time works well in software, but it is hard to map directly onto mixed-signal neuromorphic hardware because it requires global information, stored activity traces, and batch-style processing. This paper presents a proof-of-concept implementation of a spike-based feedback-control optimizer on the DYNAP-SE mixed-signal neuromorphic processor.
The core result is that a single-layer spiking neural network can be trained directly on real hardware in a computer-in-the-loop setup, despite device mismatch, noise, limited observability, and quantized synapses. The hardware-trained network matches numerical simulations on a binary task and reaches performance comparable to a gradient-trained single-layer ANN on the nonlinear Yin-Yang benchmark.
1 - The learning rule
The optimizer trains the output synapses using feedback-control signals instead of backpropagation. Each class has an output neuron and two controller neurons that compare the desired target activity with the measured output activity.
If the output fires too little, the positive controller increases the feedback signal; if it fires too much, the negative controller suppresses it. The resulting local update is: $$ w_t = w_{t-1} + \eta \, I^{fb}_t s^{in}_t, $$ where \(s^{in}_t\) is the presynaptic spike at time \(t\), \(I^{fb}_t\) is the controller-generated feedback current, and \(\eta\) is the learning rate. The key point is that learning is online, local, and compatible with spike-based hardware signals.
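To make the rule concrete, here is a minimal rate-based Python sketch. The controller shape (a rectified difference between target and measured rates), the gain, and all variable names are illustrative assumptions, not the paper's circuit.

```python
import numpy as np

# Sketch of the feedback-control update (assumed variable names).
# Two controllers per output: a positive one that pushes the output up
# when it fires below target, a negative one that pulls it down when it
# fires above target. Their difference plays the role of the feedback
# current I_fb in the update w_t = w_{t-1} + eta * I_fb * s_in.

def feedback_current(target_rate, output_rate, gain=1.0):
    """Signed feedback: positive controller minus negative controller."""
    pos = gain * max(target_rate - output_rate, 0.0)  # output fires too little
    neg = gain * max(output_rate - target_rate, 0.0)  # output fires too much
    return pos - neg

def update_weights(w, eta, i_fb, s_in):
    """Local update: each synapse learns from its own presynaptic spike
    and the feedback current shared by its output neuron."""
    return w + eta * i_fb * s_in

# Toy usage: one output neuron, three input synapses, one time step.
w = np.zeros(3)
s_in = np.array([1.0, 0.0, 1.0])        # presynaptic spikes this step
i_fb = feedback_current(target_rate=50.0, output_rate=30.0)
w = update_weights(w, eta=1e-3, i_fb=i_fb, s_in=s_in)
```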
2 - The hardware setting
The experiments run on DYNAP-SE, a mixed-signal neuromorphic processor with analog spiking neurons and configurable digital routing. This setting is difficult because neurons and synapses vary across the chip, internal states are only partially observable, and weights are quantized.
To make learning robust to mismatch and noise, each logical unit is represented by a population of 10 hardware neurons. Synaptic strength is encoded through the number of parallel synaptic connections, giving a discrete but hardware-compatible weight representation.
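The mapping from a continuous weight to a synapse count can be pictured with a short sketch. The unit strength `w_unit` and the cap `max_synapses` are hypothetical parameters for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical discrete weight representation: a real-valued weight is
# rounded to an integer number of parallel on-chip synapses.

def quantize_to_synapse_count(w, w_unit=0.1, max_synapses=8):
    """Round a continuous weight to a number of parallel connections."""
    n = np.rint(w / w_unit).astype(int)
    return np.clip(n, 0, max_synapses)

w_continuous = np.array([0.03, 0.26, 0.91])
n_synapses = quantize_to_synapse_count(w_continuous)  # -> [0, 3, 8]
effective_w = n_synapses * 0.1                        # weight realized on chip
```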
3 - Computer-in-the-loop training
Training is computer-in-the-loop. The chip runs the spiking dynamics, while the host sends input and target spikes, records activity, estimates the feedback current, computes the weight update, quantizes it, and writes the new configuration back to the chip.
This is not yet a fully embedded learning circuit, but it tests the optimizer under real mixed-signal constraints rather than only in simulation. Calibration is still required for spike generators, synaptic transfer curves, and controller biases.
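A schematic host-side loop, with the chip interface stubbed out, might look as follows. The function names, array shapes, and the use of rates in place of spike trains are assumptions for illustration; the real DYNAP-SE interface differs.

```python
import numpy as np

# Schematic computer-in-the-loop epoch. The two chip-interface calls are
# stand-ins (stubbed below), not the actual DYNAP-SE API.

def send_stimulus(x_rates, target_rates):            # stub: stream spikes to chip
    pass

def record_output_rates(n_out):                      # stub: read chip activity
    return np.random.uniform(0.0, 60.0, n_out)

def train_epoch(dataset, w, eta, w_unit=0.1, max_syn=8):
    for x_rates, target_rates in dataset:            # per-sample loop
        send_stimulus(x_rates, target_rates)         # 1. drive chip: input + target
        out_rates = record_output_rates(len(target_rates))  # 2. record activity
        i_fb = target_rates - out_rates              # 3. host-side feedback estimate
        w = w + eta * np.outer(i_fb, x_rates)        # 4. local weight update
        n_syn = np.clip(np.rint(w / w_unit), 0, max_syn).astype(int)  # 5. quantize
        # 6. write n_syn back to the chip as parallel-synapse counts (stubbed)
    return w

# Toy usage: 2 outputs, 3 inputs, one random sample.
rng = np.random.default_rng(0)
data = [(rng.uniform(0.0, 50.0, 3), np.array([50.0, 10.0]))]
w = train_epoch(data, np.zeros((2, 3)), eta=1e-3)
```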
4 - Results
On a binary classification task, the DYNAP-SE implementation reaches 100% test accuracy, matching numerical simulation. On the nonlinear Yin-Yang benchmark, it reaches 67% accuracy, close to both the numerical spiking simulation and the single-layer ANN baseline reported in the paper.
The result is modest in scale but meaningful: the same learning principle works despite analog mismatch, noisy dynamics, and quantized synapses.
| Dataset | Metric | Numerical sim. | DYNAP-SE | Power |
|---|---|---|---|---|
| Binary | Accuracy | 100% | 100% | 41 µW |
| Binary | Target error | 3.9 Hz | 2.1 Hz | |
| Yin-Yang | Accuracy | 63% | 67% | 189 µW |
| Yin-Yang | Target error | 7.2 Hz | 6.1 Hz | |
5 - Why this matters
The paper does not claim a finished autonomous training system. The update is still computed by the host computer, and scaling to deeper networks remains open. Its main contribution is showing that feedback-control learning survives contact with real neuromorphic hardware.
This matters because the method avoids the assumptions that make backpropagation hard to deploy on mixed-signal chips: global error transport, precise stored activations, and high-precision weights. It points toward low-power systems that can adapt after deployment using hardware-native signals.
6 - Discussion
The central lesson is that feedback control can bridge learning theory and neuromorphic implementation. Rather than forcing backpropagation onto the chip, the network is driven toward target activity, and the resulting feedback becomes the learning signal.
The next step is to embed more of the controller and update rule directly in hardware, and to extend the approach beyond single-layer networks. See the paper for the full implementation details, calibration procedure, and experimental analysis.
refs:
[1] Mixed-signal implementation of feedback-control optimizer for single-layer Spiking Neural Networks. Haag, J., Metzner, C., Zendrikov, D., Indiveri, G., Grewe, B., De Luca, C., and Saponati, M. arXiv:2603.24113, 2026.
[2] A feedback control optimizer for online and hardware-aware training of Spiking Neural Networks. Saponati, M., De Luca, C., Indiveri, G., and Grewe, B. arXiv:2602.13261, 2026.
[3] A scalable multicore architecture with heterogeneous memory structures for dynamic neuromorphic asynchronous processors. Moradi, S., Qiao, N., Stefanini, F., and Indiveri, G. IEEE Transactions on Biomedical Circuits and Systems, 2018.