Palantir Technologies (PLTR) Extends $400M Partnership with U.S. Army to Enhance Data and AI Capabilities

In This Article:

We recently published a list of 10 Buzzing AI Stocks on Latest News and Ratings. In this article, we are going to take a look at where Palantir Technologies Inc. (NASDAQ:PLTR) stands against other buzzing AI stocks on latest news and ratings.

Predictions of artificial intelligence reaching human-level intelligence have been made for over 50 years. Regardless, the quest to achieve it continues even today, with almost everyone working in the AI field being too focused on achieving it. According to Sam Altman, CEO of OpenAI, reaching AGI isn’t a milestone we can define by a particular date.

READ ALSO: Top 12 AI Stock News and Ratings Dominating Wall Street and 10 AI Stocks Taking Wall Street by Storm 

“I think we’re like in this period where it’s going to feel very blurry for a while. People will wonder if is this AGI yet, or is this not AGI, or it’s just going to be this smooth exponential. And probably most people looking back in history won’t agree when that milestone was hit. And we’ll just realize it was like a silly thing”.

In the latest innovations in artificial intelligence, new research has revealed how the upcoming AIs are capable of human deceit. Joint experiments conducted by AI Company Anthropic and the nonprofit Redwood Research reveal how Anthropic’s model, Claude is capable of strategically misleading its creators during the training process in order to avoid being modified. According to Evan Hubinger, a safety researcher at Anthropic, this will make it harder for scientists to align “AI systems” to human values.

“This implies that our existing training processes don’t prevent models from pretending to be aligned”.

Researchers have also found evidence that as AIs become more powerful, their capability to deceive their human creators also increases. Consequently, it means scientists would be less confident about the effectiveness of their alignment techniques as AI becomes more advanced.

A similar research conducted by AI safety organization Apollo Research revealed how OpenAI’s latest model, o1, also intentionally deceived its testers during an experiment. The test required the model to achieve its goal at all costs, where it lied when it believed that telling the truth would ultimately lead to its deactivation.

“There has been this long-hypothesized failure mode, which is that you’ll run your training process, and all the outputs will look good to you, but the model is plotting against you. The paper, Greenblatt says, “makes a pretty big step towards demonstrating what that failure mode could look like and how it could emerge naturally”.