Skymizer Launches EdgeThought, a Groundbreaking LLM Accelerator IP for On-Device LLM Inference and a Game-Changer in the On-Device GenAI Era

TAIPEI, May 31, 2024 /PRNewswire/ -- Skymizer, a pioneer in compiler technology and optimized solutions, today announced the release of its revolutionary software-hardware co-design AI ASIC IP, EdgeThought, specifically engineered for accelerating Large Language Models (LLMs) at the edge. This cutting-edge innovation leverages Skymizer's advanced compiler technology to set new industry benchmarks in computation, memory utilization, power efficiency, and cost-effectiveness.

EdgeThought is designed to enhance the performance of LLM applications across a wide range of edge devices, from IoT devices to automotive systems, AI PCs, and AI edge servers. By optimizing the EdgeThought design with a compiler-centric approach, Skymizer ensures that these devices can run state-of-the-art on-device LLM models, including the newest Llama 3 8B.

Key Features of Skymizer's LLM Accelerator:

  • Optimized Compiler Technology: At the heart of the accelerator is Skymizer's proprietary compiler technology, which maximizes hardware utilization and efficiency, enabling superior performance even on resource-constrained edge devices.

  • Enhanced Computation and Memory Efficiency: The co-design approach minimizes latency and maximizes throughput while reducing memory footprint, allowing for faster and more reliable inferencing of LLMs at the edge.

  • Power and Cost Efficiency: Skymizer's solution drastically reduces the power consumption and operational costs associated with deploying advanced AI models, making it an ideal choice for businesses looking to scale their operations sustainably.

  • Scalability and Flexibility: Designed to support a range of LLM applications, the accelerator scales to different sizes and performance requirements — from single-user devices up to multi-user, multi-batch configurations that boost throughput on a powerful edge server — offering unprecedented flexibility for device manufacturers and application developers.

"Today marks a significant milestone not just for Skymizer, but for the entire AI and edge computing industries," said Jim Lai, CEO of Skymizer. "Our innovative LLM accelerator redefines what's possible in edge AI performance, making it both more accessible and cost-effective. This release reflects our commitment to pushing the boundaries of technology to empower our clients and enrich user experiences."

With a decade of experience in the compiler and virtualization industries, Skymizer focuses on what it does best, designing EdgeThought as a compiler-optimized LLM ASIC IP purpose-built for on-device inference. EdgeThought eliminates all software and hardware requirements for training and focuses on inference only, resulting in a best-in-class on-device LLM inference engine. "If the Groq chip is the king of cloud LLM inferencing, then EdgeThought will be the game-changer for on-device LLM inferencing," said William Wei, CMO and EVP of Skymizer. "And EdgeThought does not require the latest silicon manufacturing process; it can be built with less expensive, mature silicon processes and specialty memory components, which will revitalize the lower-cost memory industry in the GenAI era."