Meta Platforms is reigniting tensions with UK regulators by resuming its artificial intelligence (AI) training program, which uses public social media posts.
The program had been paused for three months following inquiries about how the company would secure user consent for using their data.
Now, after addressing legal concerns, Meta is testing whether it can proceed with the initiative.
This move comes after Meta also faced scrutiny from the Irish Data Protection Commission (DPC), the European Union’s primary regulator for the company.
While the UK no longer falls under the EU’s jurisdiction, it still follows a privacy framework similar to the EU’s General Data Protection Regulation (GDPR).
Meta may be using this opportunity to test the waters with British authorities, hoping a favorable ruling could set a precedent for future dealings with European regulators.
How will Meta gain user permission?
Rather than offering users a clear opt-in option for AI data usage, Meta is relying on an opt-out system.
Users who don’t want their data used for AI training must actively object. Unlike previous instances, in which users had to provide a reason for opting out, Meta has simplified the process this time around.
The UK’s Information Commissioner’s Office (ICO) is closely monitoring the situation, insisting that Meta respect users’ privacy rights.
“It is for Meta to ensure and demonstrate ongoing compliance with data protection law,” an ICO spokesperson stated.
The ICO has emphasized the need for transparency in how user data is being utilized for AI purposes.
Meta’s stance on the issue
Meta asserts that it has integrated regulatory feedback into its revamped AI training program and that the opt-out process is now more transparent.
The company has also clarified that only public posts, not private messages, will be used for AI training, and that accounts belonging to minors will be excluded.
Meta will begin rolling out these updates next week, notifying users about the upcoming changes.
Those who previously opted out will not be contacted again.
According to Meta, using public data from various nationalities is essential for developing AI that reflects diverse cultures, including British history, idiom, and social nuances.
“We’re building AI at Meta to reflect the diverse communities around the world, and we look forward to launching it in more countries and languages later this year,” Meta stated.
While Meta’s emphasis on cultural diversity and public data may resonate with the public, the ICO’s main objection has always been the handling of personal data rather than the use of public posts itself.
Despite Meta’s attempts to frame its AI program as globally inclusive, it’s unlikely to sway regulators focused on ensuring data protection compliance.
As Meta navigates these regulatory challenges, its AI training initiative could become a pivotal test case for how tech companies use public data under evolving privacy laws.
The post Meta resumes AI training using public social media data, testing UK regulators appeared first on Invezz