Meta’s AI ambition stalled in Europe: Privacy concerns trigger regulatory pause

In 2023, Meta AI proposed to train its large language models (LLMs) on user data from Europe. The proposal aimed to improve the models’ ability to understand the dialects, geography, and cultural references of European users.

Meta wanted to expand into Europe to optimize the accuracy of its artificial intelligence (AI) systems by training them on user data. However, Ireland’s Data Protection Commission (DPC) raised major privacy concerns and forced Meta to suspend the expansion.

This blog discusses the DPC’s privacy and data security concerns and how Meta has responded to them.

Data protection concerns raised by the DPC


The DPC is Meta’s lead regulator in the European Union (EU). Following complaints, the DPC is investigating Meta’s practices. Although Meta has been asked to suspend its plans pending the conclusion of the investigation, the DPC may require additional changes or clarifications from Meta along the way.

One such complainant, NOYB (“None of Your Business”), an organization of privacy activists, filed eleven complaints. In them, it claimed that Meta violated multiple aspects of the General Data Protection Regulation (GDPR). One of the reasons given was that Meta did not explicitly ask users for permission to use their data, but only gave them the option to decline.

In a previous case, Meta’s plans for targeted advertising to Europeans were blocked. The Court of Justice of the European Union (CJEU) ruled that Meta could not use “legitimate interest” as a justification. The decision hit Meta hard, as the company relied mainly on that GDPR provision to defend its practices.

The DPC presented a list of concerns, including:

  1. Absence of express consent: As mentioned earlier, Meta did not obtain explicit opt-in consent. Its practice of burying consent agreements in notifications, with data use enabled by default, made it difficult for users to opt out.
  2. Unnecessary data collection: The GDPR requires that only necessary data be collected. However, the DPC argued that Meta’s data collection was too broad and lacked clear limits.
  3. Transparency issues: Users were not told exactly how their data would be used, which created a trust deficit and ran against the transparency and accountability principles of the GDPR.
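The opt-in and data-minimisation principles above can be illustrated in code. The following is a minimal, hypothetical Python sketch; the `User` class, `collect_training_record` function, and field names are illustrative assumptions, not Meta’s actual implementation. Consent defaults to off (opt-in), and only explicitly whitelisted fields survive collection:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    user_id: str
    consented_to_ai_training: bool = False  # opt-in: consent is off by default

# Data minimisation: retain only the fields actually needed for training
ALLOWED_FIELDS = frozenset({"text"})

def collect_training_record(user: User, record: dict) -> Optional[dict]:
    """Return a minimised record only if the user explicitly opted in."""
    if not user.consented_to_ai_training:
        return None  # no consent, no processing
    # Drop everything outside the whitelist (e.g. location, contacts)
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```

Under this sketch, a user who never acted on a buried notification contributes nothing, which is exactly the data-volume cost Meta cited when resisting an opt-in model.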

These strict requirements posed significant obstacles for Meta, which disputed the DPC’s findings and maintained that its practices were compliant.

Meta’s response

Meta was disappointed by the pause and responded to the DPC’s concerns. The company maintained that its actions complied with the regulations, citing the GDPR’s “legitimate interests” provision to justify its data processing practices.

In addition, Meta claimed that it had notified users in a timely manner through various communication channels and that its AI practices aim to improve the user experience without compromising privacy.

In response to calls for explicit opt-in consent, Meta argued that such an approach would yield too little data, rendering the project ineffective. It therefore placed its notifications strategically to preserve data volume.

However, critics countered that relying on “legitimate interests” is insufficient for GDPR compliance and is no substitute for explicit user consent. They also found Meta’s transparency lacking, with many users unaware of the extent to which their data was being used.

A statement issued by Meta’s Global Engagement Director highlighted the company’s commitment to user privacy and compliance, emphasizing that Meta will address the DPC’s concerns and work to improve its data security measures. In addition, Meta committed to user awareness, user privacy, and the development of accountable and explainable AI systems.

Consequences of suspending Meta AI

As a result of this hiatus, Meta had to change its strategy and reallocate its financial and human capital accordingly. The disruption forced a costly recalibration of its operations.

In addition, the pause has created uncertainty about the regulations governing data practices. The DPC’s decision also paves the way for an era in which the technology industry could face much stricter regulation.

The metaverse, which Meta has billed as the “successor of the mobile internet,” will also experience a slowdown. Since collecting user data across different cultures is one of the fundamental inputs to the metaverse’s development, the pause disrupts that work.

The hiatus has seriously affected public perception of Meta. The company risks losing its competitive advantage, particularly in the LLM field. Given the pause, stakeholders will also question the company’s ability to manage user data and comply with privacy regulations.

Wider implications

The DPC’s decision will affect privacy and security legislation and regulations. Additionally, it will prompt other companies in the technology sector to take precautions to improve their data protection policies. Tech giants like Meta must balance innovation and privacy, ensuring that the latter is not compromised.

Additionally, the hiatus presents an opportunity for tech startups to capitalize on Meta’s failure. By taking the lead and not making the same mistakes as Meta, these companies can drive growth.

To stay informed about AI news and developments around the world, visit Unite.ai.
