Italy fines ChatGPT maker OpenAI €15 million ($15.58 million) following a detailed investigation into how the AI platform handles personal data. The Italian Data Protection Authority, known as Garante, found OpenAI had used users’ personal data to train ChatGPT without proper legal justification, violating transparency principles and information obligations.
The investigation also revealed a lack of adequate age-verification systems to prevent children under 13 from accessing inappropriate AI-generated content. The Garante therefore directed OpenAI to run a six-month media campaign in Italy to educate the public on how ChatGPT works, particularly on how it collects data.
OpenAI, which is backed by Microsoft, described the penalty as “disproportionate” and said it plans to appeal the ruling. “The fine is almost twenty times the revenue we made in Italy over the relevant period,” the company said, suggesting the ruling could undermine Italy’s ambitions in AI.
Italy fines ChatGPT amid growing scrutiny of AI platforms across Europe
Garante, known for its proactive enforcement of the EU’s data privacy laws, briefly banned ChatGPT last year over alleged breaches of GDPR rules. The service resumed only after OpenAI implemented measures addressing user rights and transparency around data usage.
Despite this compliance effort, Garante concluded that OpenAI’s actions still fell short of EU General Data Protection Regulation (GDPR) standards. GDPR allows fines of up to €20 million or 4% of a company’s global turnover for non-compliance. Garante acknowledged OpenAI’s cooperative stance, noting that without it the penalty could have been even higher.
The fine reflects growing regulatory scrutiny of how AI firms handle privacy. The case against ChatGPT is significant because it may set a precedent for stricter oversight of generative AI platforms across Europe.