This could set a significant precedent for applying GDPR in AI training, particularly in the creation and use of embeddings in AI models.
Following the Irish Data Protection Commission (DPC), the Austrian advocacy group NOYB has now filed a complaint against the social media platform X, accusing it of using personal data to train its AI systems without consent.
In a statement, NOYB said it has lodged GDPR complaints with data protection authorities in nine countries to ensure the fundamental legal concerns over X’s AI training are thoroughly addressed.
Last week, the DPC sought an order to halt or limit X’s use of user data for AI development, training, and refinement.
Following this, X agreed to temporarily suspend AI training using personal data from EU users until those users had been given the opportunity to withdraw their consent.
However, NOYB considers this insufficient, saying that the DPC’s complaint primarily addresses mitigation measures and X’s lack of cooperation, without challenging the legality of the data processing itself.
“We have seen countless instances of inefficient and partial enforcement by the DPC in the past years,” Max Schrems, chairman of NOYB, said in the statement. “We want to ensure that Twitter [X] fully complies with EU law, which – at a bare minimum – requires to ask users for consent in this case.”
NOYB added that several key questions remain unanswered, including what happened to EU data already ingested into the systems and how X plans to properly separate EU and non-EU data.
Setting a precedent in GDPR enforcement
The complaint could set a significant precedent for applying the GDPR to AI training, particularly in the creation and use of embeddings in AI models. Under the EU AI Act, these processes must adhere to transparency and ethical AI usage standards.
“This case highlights the tension between technological advancement and data privacy regulations,” said Sakshi Grover, senior research manager at IDC Asia Pacific. “The GDPR, alongside the EU AI Act, emphasizes user consent and transparency, leading to increased scrutiny of how personal data is used for AI training. Applications and platforms must adhere to strict data governance standards to protect user data and ensure privacy, involving measures compliant with GDPR and the EU AI Act’s data handling provisions.”
This is crucial because data serves as the lifeblood of modern organizations, driving competitiveness, enhancing efficiency, generating business insights, and creating new revenue streams.
“In an age of continuous data generation, businesses need the ability to access, govern, and use data securely and effectively to advance their digital transformation efforts,” Grover added. “Establishing protocols for data privacy is essential to fuel this transformation securely.”
Impact on X’s operations
For X, this could spell trouble as it needs to build a strong “social graph,” a model that tracks users’ interactions on a social media platform, according to Neil Shah, VP of research and partner at Counterpoint Research.
Using AI on user-generated content linked to demographic data is essential for the survival of social media platforms, especially those driven by advertising-led business models.
“While X can directly use data generated on the platform in the public domain, similar to others, there’s a fine line regarding how and to what extent user data is stored, used to train the AI model, and then utilized to target those users again,” Shah said. “This will require more transparency from X to ensure that the line isn’t crossed and that the proper GDPR process is followed. Until then, this case could set a precedent for most platforms leveraging user data to train their AI.”
The latest complaint could slow X's advanced analytics ambitions for Grok, particularly monetization through premium subscriptions or targeted advertising capabilities, Shah added.