In This Article:
- Extending its Smart Edge IP leadership, Ceva-NeuPro-Nano™ received the Best IP/Processor of the Year award at EE Awards Asia
- Ceva-NeuPro-Nano NPUs deliver an optimal balance of ultra-low power and high performance in a small area to efficiently execute embedded AI workloads in consumer, industrial and general-purpose AIoT products
ROCKVILLE, Md., Dec. 5, 2024 /PRNewswire/ -- Ceva, Inc. (NASDAQ: CEVA), the leading licensor of silicon and software IP that enables Smart Edge devices to connect, sense and infer data more reliably and efficiently, announced today that the Ceva-NeuPro-Nano NPUs have been awarded the Best IP/Processor of the Year award at the prestigious EE Awards Asia event, recently hosted in Taipei.
The award-winning Ceva-NeuPro-Nano NPUs deliver the power, performance and cost efficiencies needed for semiconductor companies and OEMs to integrate embedded AI models into their SoCs for consumer, industrial, and general-purpose AIoT products. Embedded AI models are artificial intelligence algorithms and systems that are integrated directly into hardware devices and run locally on the device rather than relying on cloud processing. By addressing the specific performance challenges of embedded AI, the Ceva-NeuPro-Nano NPUs aim to make AI ubiquitous, economical and practical for a wide range of use cases, spanning voice, vision, predictive maintenance, and health sensing in consumer and industrial IoT applications.
Iri Trashanski, Chief Strategy Officer of Ceva, commented: "Winning Best IP/Processor of the Year from EE Awards Asia is a testament to the innovation and excellence of our NeuPro-Nano NPUs, which bring cost-effective AI processing to power-constrained devices. Connectivity, sensing and inference are the three key pillars shaping a smarter, more efficient future, and we are proud to lead the way with our unrivalled IP portfolio addressing these three use cases."
The Ceva-NeuPro-Nano embedded AI NPU architecture is fully programmable and efficiently executes neural networks, feature extraction, control code and DSP code. It supports the most advanced machine learning data types and operators, including native transformer computation, sparsity acceleration and fast quantization. This optimized, self-sufficient, single-core architecture enables Ceva-NeuPro-Nano NPUs to deliver superior power efficiency, a smaller silicon footprint, and optimal performance compared to the existing processor solutions used for embedded AI workloads, which combine a CPU or DSP with an AI accelerator. Furthermore, Ceva-NetSqueeze AI compression technology directly processes compressed model weights, without the need for an intermediate decompression stage. This enables the Ceva-NeuPro-Nano NPUs to achieve up to 80% memory footprint reduction, solving a key bottleneck inhibiting the broad adoption of AIoT processors today.
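To give a sense of why compressed-weight processing matters for memory-constrained devices, the sketch below illustrates the general idea in Python. It is a conceptual illustration only, not Ceva's proprietary NetSqueeze technology: it uses simple 8-bit quantization as the stand-in "compression" and expands weights just-in-time inside each layer's computation, so no full floating-point copy of the model ever needs to reside in memory.

```python
# Conceptual illustration only -- NOT Ceva's NetSqueeze algorithm, whose
# internals are proprietary. This sketch shows the general principle of
# keeping model weights in a compressed form (here, int8 quantized) and
# expanding them on the fly per layer, rather than storing float32 copies.
import numpy as np

rng = np.random.default_rng(0)

def quantize(w: np.ndarray):
    """Compress float32 weights to int8 plus a per-tensor scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def layer_forward(x: np.ndarray, q: np.ndarray, scale: float) -> np.ndarray:
    """Dequantize just-in-time inside the matmul; only this layer's
    weights are ever expanded, never the whole model at once."""
    return x @ (q.astype(np.float32) * scale)

# Hypothetical 256x256 layer for illustration
w = rng.standard_normal((256, 256)).astype(np.float32)
q, s = quantize(w)

# int8 storage is 4x smaller than float32: a 75% footprint reduction
# (stronger compression schemes can approach the 80% cited above)
print(w.nbytes, q.nbytes)  # 262144 vs 65536 bytes

x = rng.standard_normal((1, 256)).astype(np.float32)
y = layer_forward(x, q, s)
```

The design point being illustrated is that the decompression cost is paid per layer during inference, trading a small amount of compute for a large reduction in resident weight memory.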