BrainChip Demonstrates Event-based Vision at Embedded World 2025

LAGUNA HILLS, Calif., March 10, 2025--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, brain-inspired AI, today announced that it will be demonstrating gesture recognition capabilities with its Akida™ 2 processor technology running in combination with Prophesee’s event-based camera in Hall 5, Booth No. 5-213, at Embedded World 2025 in Nuremberg, Germany, March 11-13. BrainChip will also demonstrate its edge LLM model based on Temporal Enabled-Neural Networks (TENNs) at the event.

BrainChip’s Akida technology demonstrates the possibilities of embedded AI. As part of its exhibition at Embedded World, the company will showcase the benefits of low latency and ultra-low power consumption for gesture recognition using the Akida 2 FPGA platform in conjunction with the Prophesee EVK4 development camera. Unlike frame-based approaches, the combination of Prophesee’s event-based vision sensors with Akida’s event-based computing captures extremely high-speed movement as a highly sparse event stream, so that only information relevant to the gesture is processed, enabling faster response times. These computer vision systems open new potential in areas such as autonomous vehicles, industrial automation, IoT, security and surveillance, and AR/VR.
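To make the sparsity point concrete, the minimal Python sketch below uses synthetic data and NumPy only; it is not Prophesee's Metavision SDK or BrainChip's Akida tooling, and the resolution, window length, and event counts are assumptions. It shows how an event camera's sparse (x, y, timestamp, polarity) stream can be accumulated into a surface for a classifier, and how little of the pixel grid such a stream actually touches.

```python
# Illustrative sketch only (synthetic events, plain NumPy): event cameras emit a sparse
# stream of (x, y, timestamp, polarity) tuples only where brightness changes, so a
# downstream classifier touches far fewer values than a dense frame would contain.
import numpy as np

RES_X, RES_Y = 640, 480          # hypothetical sensor resolution
WINDOW_US = 10_000               # accumulate events over a 10 ms window

def make_fake_events(n=2_000, rng=np.random.default_rng(0)):
    """Generate a synthetic sparse event stream for demonstration."""
    return np.stack([
        rng.integers(0, RES_X, n),               # x coordinate
        rng.integers(0, RES_Y, n),               # y coordinate
        np.sort(rng.integers(0, WINDOW_US, n)),  # timestamp in microseconds
        rng.choice([-1, 1], n),                  # polarity (brightness up / down)
    ], axis=1)

def events_to_frame(events):
    """Accumulate sparse events into a dense 2-D surface a classifier could consume."""
    frame = np.zeros((RES_Y, RES_X), dtype=np.int32)
    np.add.at(frame, (events[:, 1], events[:, 0]), events[:, 3])
    return frame

events = make_fake_events()
frame = events_to_frame(events)
sparsity = 1.0 - np.count_nonzero(frame) / frame.size
print(f"{len(events)} events -> {sparsity:.1%} of pixels untouched")
```

Because only the touched pixels carry information, an event-based processor can in principle skip the untouched majority entirely, which is the source of the latency and power benefits described above.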

Integrating Prophesee event-based vision sensors with Akida's event-based processing will enable the development of new, compact SWaP (Size, Weight, and Power) form factors, unlocking fresh product opportunities in the market.

"By combining our technologies, we can achieve ultra-high accuracy in a small form factor, empowering wearables and other power-constrained platforms to incorporate advanced video detection, classification, and tracking capabilities," said Etienne Knauer, VP Sales & Marketing at Prophesee. "Processing our event-based sensor data streams efficiently leverages their sparse nature, reducing computational and memory demands in the final product."

Dr. M. Anthony Lewis, Chief Technology Officer at BrainChip, will present "Fast Online Recognition of Gestures using Hardware Efficient Spatiotemporal Convolutional Networks via Codesign" on March 12 at 1:45 p.m. as part of the Embedded Vision track. Lewis will discuss how the TENNs developed by BrainChip can be used to tackle a wide range of vision tasks. The presentation will highlight how co-design of the model architecture, training pipeline and hardware implementation can combine to achieve state-of-the-art performance, using the gesture recognition task as an example.
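As an illustration of what a spatiotemporal convolutional gesture classifier can look like, the toy PyTorch sketch below uses made-up layer sizes and a generic factorized spatial-then-temporal structure; it is not BrainChip's TENN architecture and does not run on Akida hardware.

```python
# Illustrative sketch only: a tiny factorized spatiotemporal convolutional classifier,
# in the spirit of the networks named in the talk title. Layer sizes are invented;
# this is NOT BrainChip's TENN architecture or Akida code.
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    """Factorized block: 2-D spatial convolution followed by a causal temporal convolution."""
    def __init__(self, c_in, c_out, t_kernel=5):
        super().__init__()
        self.spatial = nn.Conv3d(c_in, c_out, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # Left-pad in time so each output step depends only on past frames (streaming friendly).
        self.pad = nn.ConstantPad3d((0, 0, 0, 0, t_kernel - 1, 0), 0.0)
        self.temporal = nn.Conv3d(c_out, c_out, kernel_size=(t_kernel, 1, 1))
        self.act = nn.ReLU()

    def forward(self, x):                      # x: (batch, channels, time, height, width)
        x = self.act(self.spatial(x))
        return self.act(self.temporal(self.pad(x)))

class GestureNet(nn.Module):
    """Toy gesture classifier over short clips of accumulated event frames."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            SpatioTemporalBlock(2, 16),        # 2 input channels: on/off event polarity
            nn.MaxPool3d((1, 2, 2)),
            SpatioTemporalBlock(16, 32),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.backbone(x).flatten(1))

clip = torch.randn(1, 2, 16, 64, 64)           # (batch, polarity, time, H, W) dummy input
print(GestureNet()(clip).shape)                # -> torch.Size([1, 10])
```

The left-padded (causal) temporal convolution keeps each output step dependent only on past frames, the property that makes this style of network suitable for the fast online recognition the talk title refers to.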