
Instagram Implements Expanded AI Age Checking, New Parent Prompts


This story was originally published on Social Media Today.

Instagram’s expanding its AI age-checking process to ensure that more teens are covered by its various safety measures and systems, while it’s also rolling out new notifications to help parents keep their teens aware of online risks.

Which, given the rising discussion of increased social media age limits, could end up being an important push from the app.

First off, on AI age checking. Over the past couple of years, Instagram has been gradually advancing its age-checking systems, with new processes that can detect signals that could indicate a young user has lied about their age.

It’s now expanding this, with improved AI systems that factor in more elements to better assess the age of an account holder.

So how does Meta’s system calculate this?

In its initial overview of its AI age-checking process, Meta explained that:  

To develop our adult classifier, we first train an AI model on signals such as profile information, like when a person’s account was created and interactions with other profiles and content. For example, people in the same age group tend to interact similarly with certain types of content. From those signals, the model learns to make calculations about whether someone is an adult or a teen.

So it’s using engagement trend analysis to uncover teens who are lying about their age, which presumably includes not only direct signals (likes, shares, DMs), but also watch time, the profiles an account follows, etc.

Meta’s systems are also trained on location-specific data to ensure they’re factoring in local trends.
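To make that a bit more concrete, here’s a minimal, purely illustrative sketch of what an adult-versus-teen classifier built on engagement signals could look like. Everything in it is an assumption for demonstration purposes: the feature names, the synthetic data, and the model choice are mine, not Meta’s, as Meta hasn’t published its actual signals, features, or architecture.

```python
# Illustrative sketch of an "adult classifier" of the kind Meta describes:
# a model trained on behavioral signals (account age, interaction patterns)
# to predict whether a user is an adult or a teen. All feature names and
# data below are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n_users = 5_000

# Hypothetical engagement signals, loosely mirroring those named in Meta's
# overview: account age, interactions with other profiles, and content
# engagement patterns that tend to differ by age group.
account_age_days   = rng.integers(30, 4_000, n_users)   # when the account was created
teen_follow_ratio  = rng.beta(2, 5, n_users)            # share of followed accounts that skew teen
teen_content_watch = rng.beta(2, 5, n_users)            # share of watch time on teen-skewing content
dm_activity        = rng.poisson(10, n_users)           # DMs sent per week

X = np.column_stack([account_age_days, teen_follow_ratio,
                     teen_content_watch, dm_activity])

# Synthetic labels: users with older accounts and less teen-skewing
# engagement are more likely to be adults (1), the rest teens (0).
score = (account_age_days / 4_000) - teen_follow_ratio - teen_content_watch
y = (score + rng.normal(0, 0.15, n_users) > -0.3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Gradient boosting is just a plausible stand-in for pattern recognition
# over tabular engagement features; the real system is far larger and
# draws on much richer, multi-signal inputs.
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["teen", "adult"]))
```

However Meta actually builds it, the basic loop is the same: label known adults and teens, extract behavioral signals, train a classifier, and then score accounts whose stated age looks doubtful.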

That should, theoretically at least, be a good indicator, and it’s the kind of task AI should excel at. Because it’s pattern recognition: the current range of AI tools are not “intelligent” as such, they’re not thinking for themselves. They identify patterns in how things interact, based on the billions of parameters built into their models, and from that, they can put together the puzzle pieces for varying purposes.

Which is exactly what Meta’s asking of them in this process.

As such, I do think this could be an effective way to determine user ages and catch out teens who are lying about their age, and it’ll be interesting to see what results Meta gets from this expanded rollout.

Because if Meta can’t work this out, then it does seem like more teens could soon be forced out of its experiences.

Various regulatory and government groups are now considering expanded restrictions on social media apps, with Australia, Denmark, the U.S., and the U.K. all weighing the merits of potential age limits for social media access. Which makes this effort even more pressing, because the next step, as noted, will see more teens forced out of Meta’s apps, or will see Meta fined for allowing them in.