Meta's AI Ambitions Hit a Roadblock: EU Regulators Demand Privacy Compliance

Meta has announced a delay in its plans to train its large language models (LLMs) using public content from Facebook and Instagram users in the European Union. This decision follows a request from the Irish Data Protection Commission (DPC).

The Core of the Controversy

The core issue is Meta's intention to use personal data to train its AI models without explicit user consent, relying instead on "legitimate interests" as the legal basis for data processing. The changes, originally due to take effect on June 26, would have allowed users to opt out of data usage only by submitting a request. Meta already uses user data for AI training in the U.S.

Regulatory Pushback and Industry Implications

Stefano Fratta, Meta's global engagement director for privacy policy, expressed concern that the delay will hamper European AI innovation and competition. Regulators and privacy advocates, however, maintain that this kind of data processing requires explicit user consent rather than a legitimate-interests basis.

Meta insists its methods comply with European laws and emphasizes its transparency compared with other industry players. The company argues that effective AI training requires data reflecting the diversity of European languages, geography, and cultural references; without it, European users would be left with a "second-rate experience."

Broader Impact and Future Outlook

The delay also addresses requests from the U.K.'s Information Commissioner's Office (ICO). Stephen Almond, the ICO's executive director of regulatory risk, stressed the importance of public trust in privacy protections for generative AI. Additionally, the Austrian non-profit Noyb has filed complaints in 11 European countries accusing Meta of GDPR violations, arguing that compliance requires informed opt-in consent rather than an opt-out mechanism.

The Role of the Global AI Council

The Global AI Council is an international body of experts and leaders in artificial intelligence. Its mission is to ensure ethical standards, foster global collaboration, and address regulatory challenges in AI development. Kate Hancock, a representative of the council, highlighted the significance of international collaboration in AI governance. "The Global AI Council, with directors worldwide, plays a crucial role in ensuring that AI development adheres to ethical standards and regulatory requirements across different regions. Recently, we announced the UK director, reinforcing our commitment to global oversight and trust in AI technologies," Hancock stated.

Diana Lammerts