On March 18, 2026, as reported by Matthew Newman at MLex, the Court of Rome annulled Decision No. 755, issued on November 2, 2024, by Italy’s data protection authority (the Garante per la Protezione dei Dati Personali), which had imposed a €15 million fine on OpenAI for multiple GDPR violations linked to the management of its ChatGPT service. The decision had already been suspended by the same court on March 21, 2025, and subsequently removed from the Garante’s own website. Yesterday’s ruling now overturns the substance of the authority’s decision.
This was not just any enforcement decision. It was the only final GDPR enforcement action ever adopted in Europe concerning the period in which generative AI was first launched to the public.
A short history of regulatory hype
When the Garante imposed the fine in late 2024, the news landed with considerable force. My own LinkedIn post at the time, titled “ChatGPT Gets a Christmas Present from Italy: Europe’s First GDPR Fine for Generative AI”, reached over 64,000 impressions. The data protection community was buzzing.
And for good reason. The Garante had found that OpenAI processed personal data for training ChatGPT without an adequate legal basis, violated GDPR transparency and information obligations, failed to properly notify the authority of a March 2023 data breach, and lacked age verification mechanisms for minors. In addition to the fine, the Garante exercised for the first time powers under Article 166(7) of the Italian Data Protection Code, ordering OpenAI to conduct a six-month public awareness campaign across radio, television, print and online media.
The regulatory momentum at the time appeared unstoppable. In the spring of 2023, data protection authorities across Europe had opened investigations into ChatGPT in rapid succession: Spain’s AEPD, Poland’s UODO (following a complaint by security researcher Lukasz Olejnik about hallucinated biographical data), France’s CNIL (which received at least five complaints), and several German state data protection authorities. Later, in April 2024, noyb filed a complaint in Austria about data accuracy. The European Data Protection Board (EDPB) established a dedicated ChatGPT Taskforce in April 2023 to coordinate enforcement across the bloc. I recall a call shortly after the Italian fine during which the representative of a DPA in another EU Member State confidently announced that “their own decision concerning OpenAI was ready and should be published imminently”.
What followed: nothing
Since then, not a single other European DPA has published a final enforcement decision concerning GDPR violations linked to the launch period of ChatGPT (November 2022 to March 2023).
The EDPB Taskforce published its report in May 2024, but it contained only “preliminary views” and expressly noted that investigations were “still ongoing”. Meanwhile, in February 2024, OpenAI established OpenAI Ireland Limited, triggering the GDPR’s one-stop-shop mechanism. A footnote in the Taskforce report itself acknowledged that for infringements of a “continuing or continuous nature”, pending proceedings “should be transferred” to the lead DPA, which would now be the Irish Data Protection Commission. This procedural development may well have absorbed or frozen several national investigations.
But the end result is the same: Europe’s regulatory response to the most consequential AI deployment in its history produced a single enforcement decision. And that decision has now been annulled.
Proportionality concerns then and now
At the time of the Garante’s decision, I had raised questions about the proportionality of the fine. At €15 million, it came close to the €20 million penalty imposed on Clearview AI, a company widely criticized for mass biometric surveillance.
That said, the underlying findings of GDPR non-compliance were difficult to contest on the merits. When ChatGPT was launched to the European public, there was a clear disregard for basic data protection requirements: no adequate legal basis had been identified, transparency obligations were not met, and no age verification was in place. These were not marginal infractions.
Crucially, the full reasoning of the Court of Rome has not yet been published. The Garante itself received the ruling (Judgment no. 4153/2026, R.G. 4785/2025) on the morning of March 19, but, as the reasoning had not yet been made available, the Authority was not in a position, at that stage, to provide comments or to assess whether to lodge an appeal. This is an important clarification: the annulment may not be the end of the story, and a Garante appeal remains a real possibility.
In the meantime, the key questions remain open. Did the Court challenge the proportionality of the sanction? Or did it call into question the Garante’s findings of GDPR violations themselves? Could a new Garante decision follow on the basis of the Court’s instructions? Matthew Newman’s article in MLex does not offer an answer on these points, and I would welcome any clarification from him, from Italian colleagues, or from anyone with access to the decision.
The widening gap between guidance and formal enforcement
Regardless of the specific reasoning of the Court of Rome, the broader picture is interesting. The initial regulatory intensity surrounding ChatGPT’s launch in Europe has given way to an extended period of inaction. No other DPA in Europe has issued a decision. The only decision that was issued has been annulled.
Meanwhile, data protection authorities across Europe have invested, and continue to invest, considerable energy in producing guidance documents on web scraping and the training of large language models with publicly available data. Wenlong Li, Yueming Zhang, Qingqing Zheng, and Aolan Li have recently and usefully mapped this phenomenon in a comparative study published in the International Data Privacy Law journal, documenting how the legal basis for AI training is framed in data protection guidelines and regulatory interventions across multiple jurisdictions (How the Legal Basis for AI Training is Framed in Data Protection Guidelines and Interventions: Comparative Perspectives and the Prospect of Global Convergence, IDPL 2026). With the notable exception of the Garante’s own continuing ban on DeepSeek in Italy, the gap between the proliferation of such guidance and the near-total absence of formal enforcement is becoming increasingly visible.
The other side of the coin: regulatory engagement that worked
This assessment would be incomplete, however, without acknowledging what DPA interventions have actually achieved in practice, even in the absence of final sanctions.
The Garante’s 2023 temporary ban on ChatGPT in Italy may not have produced a lasting fine, but its immediate operational impact was considerable. Within weeks, OpenAI implemented a privacy notice, introduced an opt-out mechanism for training data, and deployed an age verification system. These were concrete changes to a product used by hundreds of millions, and they were adopted not because of a fine but because a regulator forced the issue. The investigations opened across Europe in 2023, and the EDPB Taskforce’s work, undoubtedly contributed to the broader pressure that led OpenAI, and other major AI companies, to establish a European presence, appoint a DPO, and publish a dedicated EEA privacy policy. None of that would have happened on a voluntary basis.
More recently, the Irish DPC has demonstrated that regulatory engagement short of formal sanctions can yield significant results. In June 2024, following intensive engagement, the DPC secured Meta’s agreement to pause its plans to train large language models using public content shared by adults on Facebook and Instagram across the EU/EEA. The pause lasted nearly a year, during which the DPC sought a formal EDPB Opinion and Meta substantially revised its approach, implementing enhanced transparency notices, improved objection mechanisms, and technical safeguards including data de-identification and output filtering. When Meta eventually resumed training in May 2025, it did so under conditions that bore little resemblance to its original proposal.
This process also prompted the DPC to request, and the EDPB to adopt in December 2024, Opinion 28/2024 on the data protection aspects of AI model training and deployment, providing the first Europe-wide framework on the subject.
The DPC also took unprecedented legal action against X in August 2024, bringing an urgent application before the Irish High Court to halt the use of EU users’ public posts for training xAI’s Grok chatbot. X agreed to permanently suspend the processing, and the proceedings were concluded on that basis in September 2024. It was the first time any lead supervisory authority had used such powers in the AI context.
These interventions matter. They demonstrate that DPA action, even when it does not result in a fine, can reshape how AI companies operate in Europe. The Garante’s DeepSeek ban remains the most visible illustration of this: a Chinese AI service effectively excluded from the Italian market for failure to engage with GDPR requirements.
But there is a difference between regulatory engagement that improves specific practices and the kind of formal enforcement that establishes legal precedent and signals to the broader market what the rules actually are. On the latter front, the record remains thin.
A changed world?
The timing could hardly be more symbolic. Just days before the Court of Rome annulled the Garante’s ChatGPT fine, the Luxembourg Administrative Court annulled the €746 million GDPR fine that had been imposed on Amazon in 2021 for targeted advertising practices without proper consent. In that case, the court largely upheld the substance of the GDPR violations but found that the regulator had failed to properly assess fault and proportionality before imposing its sanction. In the space of a single week, two of the most high-profile GDPR fines ever imposed in Europe have been struck down by national courts. The Luxembourg CNPD has been invited to reassess its case, and the Garante’s story may not be over either. But the signal is hard to miss.
The landscape has shifted in ways that would have been difficult to imagine two years ago. From the regulatory hype of 2023 and 2024, the EU data protection system seems to have moved into a posture of cautious observation when it comes to formal enforcement. Whether this reflects genuine procedural constraints, the complexity of applying legacy data protection rules to novel AI systems, or something else is an open question.
The Garante may yet appeal the Court of Rome’s judgment, and the full reasoning of the ruling could change the picture considerably.
Still, it is difficult not to wonder whether the current “don’t regulate, don’t harm innovation” frenzy sweeping across both sides of the Atlantic has quietly reached the corridors of Europe’s data protection authorities as well.
* * *
These statements are attributable only to the author, and their publication here does not necessarily reflect the view of the Cross-Border Data Forum or any participating individuals or organizations.