PhD candidate at Sorbonne Université (CNRS · LIP6), Paris. Building GPU-accelerated, hardware-compliant SNN frameworks for FPGA-based neuromorphic accelerators — surrogate-gradient learning, quantization-aware training, and event-driven inference.
I am a PhD candidate at Sorbonne Université (CNRS · LIP6), Paris, supervised by Prof. Haralampos-G. Stratigopoulos. My PhD topic is Neuromorphic Algorithms and their Hardware Implementation. Neuromorphic computing mimics the spike-based operation of biological neurons: by mapping Spiking Neural Networks (SNNs) onto dedicated hardware accelerators, it can achieve orders-of-magnitude gains in energy efficiency and inference speed over conventional artificial neural networks (ANNs).
My work centres on a GPU-accelerated end-to-end pipeline implementing surrogate-gradient learning, quantization-aware training (QAT), and truncated backpropagation through time (TBPTT), validated on neuromorphic vision datasets including N-MNIST and DVS Gesture. I work at the intersection of PyTorch-based deep learning and FPGA hardware, bridging algorithm design with real-world neuromorphic deployment.
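Surrogate-gradient learning is what makes backpropagation (and hence TBPTT) possible through the non-differentiable spike function. A minimal sketch of the idea in PyTorch, where the forward pass emits a hard spike and the backward pass substitutes a smooth surrogate derivative; the fast-sigmoid surrogate and the slope value are illustrative assumptions, not necessarily the exact choices used in the pipeline:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a surrogate gradient.

    Forward: binary spike, 1 if (membrane - threshold) > 0 else 0.
    Backward: replaces the Heaviside derivative (zero almost everywhere)
    with a smooth fast-sigmoid surrogate so gradients can flow through
    spiking layers during (truncated) backpropagation through time.
    """
    slope = 10.0  # surrogate steepness; an assumed hyperparameter

    @staticmethod
    def forward(ctx, membrane_minus_threshold):
        ctx.save_for_backward(membrane_minus_threshold)
        return (membrane_minus_threshold > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative: 1 / (1 + slope * |x|)^2
        surrogate = 1.0 / (1.0 + SurrogateSpike.slope * x.abs()) ** 2
        return grad_output * surrogate

spike = SurrogateSpike.apply
```

In use, a spiking layer calls `spike(v - v_th)` on its membrane potentials each timestep; the surrogate only changes the backward pass, so inference still produces binary spikes compatible with event-driven hardware.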
A programmable convolutional SNN accelerator targeting FPGA deployment. The architecture supports both convolutional and fully-connected layers with integrate-and-fire neuron dynamics, leakage and refractory mechanisms, near-memory computing via co-located synaptic memory, spike communication through the Address-Event Representation (AER) protocol, and configurable weight precision. An automated model-to-hardware framework generates synthesisable hardware description code directly from a trained SNN model.
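The neuron dynamics the accelerator implements can be sketched as a single-timestep update. This is a minimal software model under assumed conventions (linear leak, reset-to-zero on fire, integer refractory counter); names, values, and reset behaviour are illustrative, not a specification of the hardware:

```python
def if_neuron_step(v, i_in, leak, v_th, refrac, t_refrac):
    """One timestep of a leaky integrate-and-fire neuron with refractory.

    v        : membrane potential before this step
    i_in     : summed synaptic input current this timestep
    leak     : constant subtracted each step (hardware-style linear leak)
    v_th     : firing threshold
    refrac   : remaining refractory timesteps (0 = ready to integrate)
    t_refrac : refractory period entered after a spike
    Returns (new_v, new_refrac, spiked).
    """
    if refrac > 0:
        return v, refrac - 1, 0          # silent while refractory
    v = max(v + i_in - leak, 0.0)        # integrate input, apply leak, floor at 0
    if v >= v_th:
        return 0.0, t_refrac, 1          # fire, reset potential, start refractory
    return v, 0, 0
```

Because the update is a comparison, an add, and a subtract per neuron per timestep, it maps directly onto compact FPGA logic next to the co-located synaptic memory.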
Hardware-compliant SNN training with TBPTT and FakeQuantize quantization using the straight-through estimator (STE), supporting the N-MNIST, DVS Gesture, and Card Symbols datasets.
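Quantization-aware training with an STE boils down to a quantize-dequantize step in the forward pass whose gradient is treated as the identity. A minimal sketch in PyTorch using the standard detach trick; the symmetric per-tensor scheme and bit-width are assumptions for illustration, not the project's exact configuration:

```python
import torch

def fake_quantize_ste(w, n_bits=8):
    """Fake-quantize weights with a straight-through estimator.

    Forward: weights are rounded to a symmetric n_bits fixed-point grid
    (quantize then dequantize), so the network trains against the values
    the hardware will actually use.
    Backward: the (w_q - w) correction is detached, so the gradient of
    the output with respect to w is the identity (the STE).
    """
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max() / qmax                     # per-tensor scale (assumed scheme)
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    return w + (w_q - w).detach()                    # forward: w_q; backward: identity
```

Training against the quantized forward values is what makes the resulting weights drop into the accelerator's configurable-precision synaptic memory without a post-training accuracy cliff.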
Automated search for optimal spiking neural network topologies for FPGA deployment, exploring accuracy-efficiency trade-offs under neuromorphic hardware constraints.
Always open to discussing research collaborations, PhD opportunities, or connecting with fellow researchers in the neuromorphic and AI hardware space.