From November 6th to 9th, we at neuroTUM, in collaboration with Fortiss Neuromorphic Labs and Intel, hosted a neuromorphic hackathon. The event, held at the Fortiss offices, aimed to deepen participants' understanding of, and hands-on experience with, neuromorphic technologies. It featured four teams, open talks from both labs, and challenges based on state-of-the-art research.
Project Title: Continual Learning
Team Members: Iustin Crucean, Enrico Fazzi, Agustin Coppari Hollmann, and Alyona Starikova.
Supervisors: Michael (from Fortiss) and Elvin (from Intel).
The project centered on the Continual Learning Paradigm (CLP), with a focus on learning from data streams, recognizing novel patterns from a single example (one-shot learning), and open-set recognition using prototypes.
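The core idea of prototype-based one-shot, open-set recognition fits in a few lines: each class keeps a prototype built from a single example, and inputs far from every prototype are rejected as unknown. The sketch below is a plain-numpy illustration of that idea, not the team's Lava implementation; the function names and the distance threshold are our own assumptions.

```python
import numpy as np

def add_prototype(prototypes, label, feature):
    """One-shot learning: store a single example's feature vector
    as the prototype for its class (illustrative, not the CLP code)."""
    prototypes[label] = np.asarray(feature, dtype=float)

def classify(prototypes, feature, threshold=1.0):
    """Nearest-prototype classification with open-set rejection."""
    feature = np.asarray(feature, dtype=float)
    best_label, best_dist = None, np.inf
    for label, proto in prototypes.items():
        dist = np.linalg.norm(feature - proto)
        if dist < best_dist:
            best_label, best_dist = label, dist
    # Samples far from every known prototype are rejected (open-set case)
    return best_label if best_dist <= threshold else "unknown"
```

A new gesture class can thus be added from one recording, and unfamiliar inputs fall through to "unknown" instead of being forced into a known class.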
What we did:
We developed a pipeline integrating a gesture feature extractor with a Continual Learning Paradigm (CLP) block in Intel's Lava framework for neuromorphic computing. It demonstrated one-shot learning on both a standard gesture dataset and a custom dataset. Our team built its own components, including the AEIA (Alyona-Enrico-Iustin-Agustin) custom dense layer and AEIA LIF (leaky integrate-and-fire) neurons, which handled binary data processing, spike counting, and normalization. Visualization techniques such as t-SNE and PCA were used to interpret the learned features. Despite the progress, the project faced challenges in improving feature extraction, adapting the system to Intel's Loihi 2 chip, and handling varying data sample lengths for real-time and sequential learning. These challenges outline a clear path for future research in spatio-temporal recognition and neuromorphic computing.
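The spike counting and normalization performed by the LIF layer can be illustrated with a minimal discrete-time LIF simulation. This is a plain-numpy sketch, not the actual AEIA/Lava code; the decay and threshold values are placeholders.

```python
import numpy as np

def lif_spike_counts(inputs, decay=0.9, threshold=1.0):
    """Run a population of leaky integrate-and-fire neurons over
    binary input and return spike counts normalized by timesteps."""
    inputs = np.asarray(inputs, dtype=float)   # shape: (timesteps, neurons)
    T, n = inputs.shape
    v = np.zeros(n)                            # membrane potentials
    counts = np.zeros(n)
    for t in range(T):
        v = decay * v + inputs[t]              # leaky integration
        spikes = v >= threshold                # fire when threshold is crossed
        counts += spikes                       # spike counting
        v[spikes] = 0.0                        # reset after a spike
    return counts / T                          # normalized spike rate
```

The normalized rates form a fixed-length feature vector regardless of how long the input stream is, which is what makes them convenient downstream.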
In the end, the team was able to finish the implementation and even test it on a self-recorded spike dataset from an event camera. The project combined and advanced the research of both supervisors, who will continue with the implementation developed during the hackathon. This team went on to win the hackathon once the voting process was finalized.
Project Title: Pattern Recognition
Team Members: Loïc Stratil, Leon Gebhard, Fabiana Lotta, Yannik Schmidt
Supervisor: Camilo Amaya (from Fortiss)
With a project on dynamic neural fields, our group focused on the possibilities of implementing entire logical systems on neuromorphic hardware. Specifically, the broader goal was to demonstrate that arbitrary logical functions can be implemented with neuromorphic computing by combining multiple independent building blocks, such as dynamic neural fields, state machines, and spiking neural networks. Our specific use case was to develop software that guides an end effector fitted with a plug, mounted on a 6-axis robot, into a corresponding socket. Several sockets with different geometries were present on a target surface, so the primary task was to develop logic that detects, selects, and memorizes a specific socket on the target surface. This was then combined with a state machine to implement the overall behavior.
What we did:
Selection and memorization of individual sockets are implemented with dynamic neural fields, which can be seen as filters that produce stabilized activity patterns in recurrently connected populations of neurons. Concretely, we took event-based camera images as input, applied a dynamic neural field selector, and followed it with a memory state. This enables the system to detect and keep its focus on a single socket at a time. The selected socket is then passed to a state machine that triggers a classifier, which outputs the type of socket and, with that, whether the selected socket is the relevant one. The classifier uses a spiking neural network architecture and was provided to us. If the socket is not the relevant one, we inhibit its dynamic neural field, effectively masking it; the selector can then choose a different socket and repeat the process. Once the correct socket is identified, our state machine triggers a provided code functionality that aligns the end effector's plug with the position of the correct socket before inserting it.
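The select, classify, inhibit loop described above can be sketched as follows. This is a deliberately simplified stand-in: the real system uses dynamic neural fields and a spiking classifier, whereas here selection is a plain winner-take-all over a hypothetical activity vector, and the names are ours.

```python
import numpy as np

def select_socket(activity, inhibited):
    """Winner-take-all selection over socket activity, skipping
    inhibited candidates (stand-in for the DNF selector)."""
    masked = np.where(inhibited, -np.inf, activity)
    idx = int(np.argmax(masked))
    return None if np.isinf(masked[idx]) else idx

def search_correct_socket(activity, is_correct):
    """State-machine loop: select, classify, inhibit on mismatch, repeat."""
    inhibited = np.zeros(len(activity), dtype=bool)
    while True:
        idx = select_socket(activity, inhibited)
        if idx is None:
            return None           # every candidate was rejected
        if is_correct(idx):
            return idx            # trigger insertion for this socket
        inhibited[idx] = True     # mask this socket and reselect
```

The inhibition mask plays the role of suppressing a dynamic neural field, which is what lets the selector move on to the next candidate.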
This project could not be completed in its entirety within the timeframe of the hackathon; however, we were able to define the overall functionality of the system and implement key components such as input processing and the state machine logic. To run the system on actual neuromorphic hardware and obtain meaningful results, the interfaces within the system would subsequently have to be harmonized.
Project Title: Orientation Recognition - Improving the efficiency and accuracy of bin picking in warehouses
Team Members: Ipek Akdeniz, Idil Unlu, Isabel Tscherniak, and Leona Wang
Supervisor: Priya (from Fortiss)
This project focused on determining the orientation of objects, such as a cup or a pen, lying in a bin on a conveyor belt in a warehouse. The idea was to follow a paper in implementing an unsupervised learning architecture with simple and complex cell layers, using a spike-timing-dependent plasticity (STDP) rule to learn the weights in between. The input is short video snippets taken by an event-based camera.
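The pair-based STDP rule referenced here strengthens a synapse when the presynaptic neuron fires shortly before the postsynaptic one, and weakens it in the opposite order, with an exponential dependence on the timing gap. A minimal sketch (the amplitudes and time constant are illustrative defaults, not the paper's values):

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight update.
    dt = t_post - t_pre in milliseconds:
    pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    if dt > 0:
        return w + a_plus * np.exp(-dt / tau)    # potentiation
    else:
        return w - a_minus * np.exp(dt / tau)    # depression
```

Applied over many spike pairs, this rule lets the simple/complex layers learn their weights without any labels.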
What we did:
Upon reviewing the paper, we discovered that the structure it used could not be implemented with Lava, the Python library required to run models on the Intel Loihi chip afterwards. This nullified our plans for the week, and we had to scramble to work out a new one. Ultimately, we decided to switch to a supervised learning algorithm using LIF neurons with multiple convolutional and dense layers, and began writing the architecture (in lava-dl) and the data loader, as well as recording copious amounts of data for the model to train on. We built an automated labeling system by simultaneously taking a short video with the event-based camera and an RGB picture with a phone from the same angle. A few lines of code determined the angle of the object in the picture and used it as the label for the video fed into our model. Because our model was constantly overfitting due to the lack of data, we used three augmentation methods to increase the amount of viable data: uniform noise, event drop, and spatial jitter, all created using the tonic library.
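The three augmentations are available as transforms in the tonic library; their effect can be illustrated with a small numpy sketch over events stored as (t, x, y, polarity) rows. The function names and parameters below are ours for illustration, not tonic's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_events(events, p=0.3):
    """Event drop: randomly discard a fraction p of the events."""
    keep = rng.random(len(events)) >= p
    return events[keep]

def spatial_jitter(events, std=1.0, sensor_size=(128, 128)):
    """Spatial jitter: shift each event's (x, y) by Gaussian noise,
    clipped to the sensor plane."""
    out = events.copy()
    out[:, 1] = np.clip(out[:, 1] + rng.normal(0, std, len(out)),
                        0, sensor_size[0] - 1)
    out[:, 2] = np.clip(out[:, 2] + rng.normal(0, std, len(out)),
                        0, sensor_size[1] - 1)
    return out

def add_uniform_noise(events, n=100, t_max=1000, sensor_size=(128, 128)):
    """Uniform noise: inject n spurious events spread uniformly
    over time and the sensor plane."""
    noise = np.column_stack([
        rng.uniform(0, t_max, n),            # timestamps
        rng.integers(0, sensor_size[0], n),  # x coordinates
        rng.integers(0, sensor_size[1], n),  # y coordinates
        rng.integers(0, 2, n),               # polarity
    ])
    return np.concatenate([events, noise])
```

Each transform yields a plausible new recording from an existing one, which is what multiplied the effective size of our small dataset.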
In the end, the model was able to identify the orientation of cups and pens with an accuracy of around 80%. For future projects it would be interesting to test how the model determines the orientation and whether its approach is applicable to different types of objects, or to multiple objects at the same time. The end goal would be to deploy this model at the conveyor belt to identify objects quickly and efficiently.
Project Title: Temporal pattern recognition with Resonators and Hebbian learning
Team Members: Eric Armbruster, Thomas Huber, Borislav Polovnikov
Supervisors: Reem al Fata and Jules Lecomte (both Fortiss)
The goal: learning a temporal spiking pattern with a deep spiking neural network based on resonate-and-fire (RF) neurons, trained with Hebbian learning. The pattern could come from an event camera or from any other temporal data that is spike-encoded.
Our motivation for this topic: RF neurons are closer to biological neurons than LIF neurons. It has been shown that a deep SNN with RF neurons acts as a Fourier transform on the signal and can thus memorize any pattern. SNNs are usually trained using SLAYER; however, Lava's implementation is not applicable to RF neurons because they output complex numbers. We drew inspiration from biology to tackle this: Hebbian learning, one of the learning rules of the brain, is in theory well suited to RF neurons. Heuristically, Hebbian learning can be summarized as “neurons that fire together, wire together”.
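The "fire together, wire together" rule can be written as a single outer-product weight update. This is a real-valued sketch for intuition only; the team's implementation additionally had to handle the complex-valued outputs of RF neurons, and the names here are ours.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """Hebbian rule: strengthen w[i, j] whenever presynaptic neuron j
    and postsynaptic neuron i are active together.
    dw = lr * (post outer pre)."""
    return w + lr * np.outer(post, pre)
```

Because the update depends only on locally available activity, it needs no backpropagated gradients, which is what makes it attractive when SLAYER-style training is unavailable.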
What we did:
After getting to know each other, our team started by familiarizing itself with the theoretical background on RF neurons and Hebbian learning. The method itself had been worked out by Fortiss prior to the hackathon; our task was the software engineering: implementing the network and the learning rule, and applying the model. We took the RF neuron implemented in lava-dl; the network and the learning rule we implemented from scratch ourselves. After day one we had created a three-layer network with two neurons per layer, with learning, alongside a spike visualization tool. On day two we were mostly concerned with scaling the network: we worked out the requirements and designed layer and network classes. We started on the implementation and finished it on day three, when we trained a small four-layer model on a simple pattern. Since this test was successful, we looked for a harder pattern to memorize and decided on a spike-encoded SOS pattern, which also provides a simple use case.
We successfully trained a model on the SOS pattern. This model also had four layers, but more neurons than the day-three model. Crucially, both models forgot other patterns: inputs that had produced output spikes before training no longer did so afterwards.
Building on the success and insights from the hackathon, neuroTUM started the Brain-Inspired Computing division. This division, with teams focused on gesture recognition processors and neuromorphic algorithms for continual learning, aims to advance research in neuromorphic computing by making use of advances in computational neuroscience. Collaborating with experts from Fortiss and Intel, we're set to explore the capabilities of spiking neural networks on Intel's Loihi 2 accelerator. If you find this topic interesting, we encourage you to reach out to us.
A special thanks to the teams from Fortiss and Intel for their invaluable collaboration and support. The event not only advanced our understanding of neuromorphic technology but also opened the door to further research collaboration.
Article by Agustin, Loïc, Thomas and Leona
Edited by Nele and Leona