VERSES Genius™ Outperforms OpenAI Model in Code-Breaking Challenge, “Mastermind”

High-Performance Agent Surpasses Leading AI Model in Accuracy, Speed, and Cost Efficiency

VANCOUVER, British Columbia, Dec. 17, 2024 (GLOBE NEWSWIRE) -- VERSES AI Inc. (CBOE:VERS) (OTCQB:VRSSF) (“VERSES” or the “Company”), a cognitive computing company, today revealed performance highlights of its flagship product Genius winning the code-breaking game Mastermind in a side-by-side comparison with a leading generative AI model, OpenAI’s o1-preview, which is positioned as an industry-leading reasoning model. Over one hundred test runs, Genius consistently outperformed OpenAI’s o1-preview model, solving the games one hundred forty (140) times faster and at more than five thousand (5,000) times lower cost.

“Today we’re showcasing Genius’ advanced reasoning performance against the state-of-the-art deep learning methods that underpin LLMs,” said Hari Thiruvengada, VERSES Chief Technology Officer. “Mastermind was the perfect choice for this test because it requires reasoning through each step logically, predicting the cause-and-effect outcomes of its decisions, and dynamically adapting to crack the code. This exercise demonstrates how Genius outperforms on tasks requiring logical and cause-effect reasoning, while exposing the inherent limitations of correlational language-based approaches in today’s leading reasoning models.

“This is just a preview of what’s to come. We’re excited to show how additional reasoning capabilities, available in Genius today and demonstrated with Mastermind, will be further showcased in our upcoming Atari 10k benchmark results,” Thiruvengada continued.

The comparison involved 100 games of Mastermind, a reasoning task requiring the models to deduce a hidden code through logical guesses informed by feedback hints. Key metrics included success rate, computation time, number of guesses, and total cost.

In the exercise, VERSES compared OpenAI’s advanced reasoning model, o1-preview, to Genius. Each model attempted to crack the Mastermind code across 100 games, with up to ten guesses per game. After each guess, the model receives a hint and must reason about what is still missing from the correct answer; all four positions of the code must be correct to crack it. For perspective, you can play the game at mastermindgame.org.
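For readers unfamiliar with the game mechanics described above, the sketch below illustrates how a Mastermind hint is typically computed: a secret code of 4 positions drawn from 6 possible colors, with each guess scored by exact matches (right color, right position) and partial matches (right color, wrong position). This is only an illustrative example of the game's feedback rule; the color names and function are assumptions, not code from VERSES or OpenAI.

```python
from collections import Counter

# Assumed palette and code length matching the test parameters (6 colors, 4 positions).
COLORS = ["red", "orange", "yellow", "green", "blue", "purple"]
CODE_LENGTH = 4
MAX_GUESSES = 10

def score_guess(secret: list[str], guess: list[str]) -> tuple[int, int]:
    """Return (exact, partial) peg counts for a guess against the secret code."""
    # Exact matches: same color in the same position.
    exact = sum(s == g for s, g in zip(secret, guess))
    # Color overlap regardless of position, then subtract the exact matches.
    overlap = sum((Counter(secret) & Counter(guess)).values())
    return exact, overlap - exact

# A game is cracked when exact == CODE_LENGTH within MAX_GUESSES attempts.
print(score_guess(["red", "blue", "blue", "green"],
                  ["blue", "blue", "red", "yellow"]))  # -> (1, 2)
```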

Highlights of the results appear below. You can find a more detailed description of the tests and their results on our blog at verses.ai.

The exercise: VERSES’ team conducted 100 games for each AI model, using the same secret code parameters: 4 positions and 6 possible colors. Results were measured by success rate, computation time, number of guesses, and total cost. The comparison is summarized below: