To enhance online safety for young users, Meta is set to launch a new artificial intelligence tool early next year to identify Instagram users who are under 18 but may have falsely claimed otherwise.
Known as the Adult Classifier, this tool is part of Meta’s broader initiative to tighten safety protocols on its platform, especially for teens, as the company faces mounting pressure from lawmakers, parents, and privacy advocates.
How the Adult Classifier Works
According to Allison Hartnett, Meta’s director of product management for youth and social impact, the Adult Classifier will analyze user behaviors, such as the types of accounts followed and the content frequently interacted with.
If the tool identifies patterns associated with teen users, it will automatically move their accounts to teen settings, applying stricter privacy and safety measures regardless of the age stated on their profiles.
Teen accounts, introduced in September, are configured with high-level privacy settings. Accounts for users under 16 are set to private by default, and communication with strangers is restricted. This initiative builds on Meta’s previous efforts to create a safer space for minors by limiting exposure to potentially harmful interactions on the platform.
How the New Feature Ensures Accuracy
In response to accuracy concerns, Meta plans to offer users wrongly identified as underage an appeal option. Users can verify their age through government-issued ID uploads or by using Yoti, a technology partner that estimates age through video selfies. Both Meta and Yoti will delete the videos after verification is complete.
The Adult Classifier’s deployment coincides with ongoing legal battles. Meta, along with other tech giants like Google and ByteDance, is facing multiple lawsuits alleging a failure to adequately protect young users from the adverse effects of social media.