Flux Set to Deploy Project Mayhem: Security in AI With Proof of Useful Work

Flux is set to unveil a new project on its infrastructure that will ensure AI has a truth model to balance the explosive growth. Welcome to "Project Mayhem".

CAMBRIDGE, ENGLAND / ACCESSWIRE / May 11, 2023 / InFlux Technologies announced that Flux, a prominent provider of decentralized cloud services, has unveiled the upcoming release of "Project Mayhem". It is no longer news that rapid breakthroughs in artificial intelligence are redefining the world as we know it. AI offers faster access to information and a more robust approach to problem-solving, among other perks. First, it was ChatGPT, in all its glory, solving college assignments in seconds, writing code for multi-tasking developers, and even creating business growth templates for marketers. Not willing to be beaten by OpenAI, other heavy hitters joined the AI race: Google released Bard and Microsoft released Bing AI, each with exciting possibilities. Mayhem will be an open-source application, running on Flux's decentralized network, that allows people and organizations to detect AI-generated content such as deepfakes and other AI output.

InFlux Technologies Limited, Thursday, May 11, 2023, Press release picture

The Big Bane of Generative AI

Deepfakes are synthetic media that have been digitally manipulated to replace one person's likeness with another's (in appearance, speech, etc.). The name has its roots in the underlying technology: deep learning. While deepfakes are relatively recent, the growing sophistication of artificial intelligence has made them a legitimate concern.

In 2019, the CEO of a UK-based energy firm lost about €220,000 to a deepfake call impersonating his boss. Even more recently, last month, an AI-generated track featuring voice deepfakes of Drake and The Weeknd had to be taken down across streaming platforms. By then, however, the track had already amassed over 8.5 million views from fans of both artists. The episode ushered in fresh concerns about copyright infringement.

Research is another area where it is essential to protect content integrity. Artificial intelligence could presumably make it easier to cite articles relevant to a topic of interest. However, an alarming number of ChatGPT users have highlighted the chatbot's tendency to produce 'hallucinations': references to articles and papers that do not exist or, in other cases, have nothing in common with the topic at hand. Admittedly, there are times when the AI gets it all right. But how exactly do you know when your trusted AI partner has suddenly gone 'rogue'?