Dynamic vision sensors and other neuromorphic hardware (2013-2017)

iniLabs 3rd-generation Dynamic and Active-pixel Vision Sensor (DAVIS) prototype

Dynamic vision sensors (DVS) were developed by Prof. Tobi Delbruck and his students at the Institute of Neuroinformatics (INI), ETH/University of Zurich. By modelling aspects of the way the retina works, they arrived at a vision sensor that sees the world not as a series of frames, but as a set of changes occurring at individual pixels. Information about change is transmitted asynchronously, so the sensor combines very low latency with low data rates.
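To make the contrast with frame-based cameras concrete, here is a minimal Python sketch of the kind of event model a DVS implements (the names here are illustrative, not part of any real DVS API): each pixel keeps a reference log-intensity and emits an event, tagged with its address, a timestamp and a polarity, whenever the log-intensity changes by more than a contrast threshold; a static pixel emits nothing.

```python
import math
from dataclasses import dataclass
from typing import Optional


@dataclass
class Event:
    """One DVS event: pixel address, microsecond timestamp, and ON/OFF polarity."""
    x: int
    y: int
    timestamp_us: int
    polarity: int  # +1 = brightness increased (ON), -1 = decreased (OFF)


class DvsPixelModel:
    """Illustrative model of a single DVS pixel (hypothetical, simplified).

    The pixel stores the log-intensity at which it last fired and emits an
    event whenever the current log-intensity differs from that reference by
    more than a fixed contrast threshold. Nothing is emitted while the scene
    at this pixel is static, which is what keeps the data rate low.
    """

    def __init__(self, x: int, y: int, threshold: float = 0.15):
        self.x, self.y = x, y
        self.threshold = threshold
        self.ref_log_intensity: Optional[float] = None

    def observe(self, intensity: float, timestamp_us: int) -> list:
        log_i = math.log(max(intensity, 1e-6))
        if self.ref_log_intensity is None:
            self.ref_log_intensity = log_i  # first observation just sets the reference
            return []
        events = []
        # Emit one event per threshold crossing since the last reference level.
        while abs(log_i - self.ref_log_intensity) >= self.threshold:
            polarity = 1 if log_i > self.ref_log_intensity else -1
            self.ref_log_intensity += polarity * self.threshold
            events.append(Event(self.x, self.y, timestamp_us, polarity))
        return events


if __name__ == "__main__":
    pixel = DvsPixelModel(x=10, y=20)
    pixel.observe(intensity=0.50, timestamp_us=0)             # sets the reference, no events
    print(pixel.observe(intensity=0.50, timestamp_us=1000))   # static scene -> []
    print(pixel.observe(intensity=0.80, timestamp_us=2000))   # brightening -> ON events
```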

In my role at iniLabs GmbH and later iniVation AG, I worked on the design of the third-generation DVS as part of the EU's SEEBETTER project and DARPA's SyNAPSE project, gaining fascinating insights into IBM's TrueNorth technology along the way. Other neuromorphic hardware arising from INI includes the Dynamic Audio Sensor of Dr. Shih-Chii Liu and her students, and the Dynamic Neuromorphic Asynchronous Processor (Dynap, Dynap-se, Dynap-le) of Prof. Giacomo Indiveri and his students.

I oversaw the marketing, sales and support of these prototypes to hundreds of organisations, including both academic and commercial R&D departments, and I worked with several of them to develop algorithms and applications for these technologies. I've therefore gained an unparalleled insight into the attempts of many major companies to adopt neuromorphic technology.

Here are a couple of videos from iniLabs, which I created with a colleague, showing some of the output of these sensors:

This article from IMVE magazine (December 2017) gives a good overview of the state of play by the end of my involvement.

Here's Dharmendra Modha's IBM TrueNorth team photo:

Peer-reviewed articles

Conference abstracts

Papers to which I made an acknowledged contribution