
The GDPR Turns 5: Three Crucial Cases in the Last Year

Robert Bateman
May 10, 2023

General Data Protection Regulation (GDPR) enforcement began five years ago, in May 2018. Since then, over a billion euros have been levied in penalties. But beyond fines, regulators and courts have interpreted and applied the GDPR in some highly impactful enforcement decisions.

This article will examine three of the most important GDPR enforcement decisions from the GDPR’s fifth year, concerning generative AI, special category data, and the GDPR’s legal bases.

We’ll also look at three upcoming events that could impact GDPR enforcement, including the UK’s data protection and privacy reforms, the new EU-wide AI Task Force, and a proposed GDPR amendment that would strengthen cross-border investigations. 

Generative AI’s First GDPR Reckoning

Use of artificial intelligence has skyrocketed since the GDPR turned four in May 2022. 

While many AI use cases are beneficial (and can even help enhance people’s privacy), data protection experts have long questioned whether certain AI models are compatible with the GDPR.

Our first decision, from the Italian data protection authority (DPA), was generative AI’s first serious test under the GDPR.

The Key Takeaway

AI companies must comply with the GDPR and find ways to let individuals exercise their rights. But large language models (LLMs) and EU data protection law do not appear to be fundamentally incompatible.

The Background

Since the GDPR’s last birthday, millions of people have started using “generative AI” services such as chatbot ChatGPT and image generator Midjourney.

These AI models are trained on terabytes of information, much of which qualifies as personal data under the GDPR. They can also produce false information about individuals, which arguably violates the GDPR’s “accuracy” principle.

It was only a matter of time before an EU DPA took action. This March, the Italian DPA ordered OpenAI—the company behind ChatGPT—to temporarily stop processing personal data about people in Italy.

The Italian DPA’s Demands

The Italian DPA’s investigation was triggered by a relatively minor data security issue. But the DPA soon started looking into other areas of OpenAI’s GDPR compliance, including transparency and data subject rights.

The Italian DPA ordered OpenAI to “limit” the processing of personal data about people in Italy until the company addressed its concerns. OpenAI responded by temporarily restricting access to ChatGPT in Italy while the company worked on a compliance plan.

After a meeting with the Italian DPA the following week, OpenAI received an extended deadline and a list of specific actions required to bring ChatGPT into GDPR compliance, including to:

  • Publish a privacy notice explaining how ChatGPT processes personal data and how individuals can exercise their rights.
  • Establish a “legal basis” for processing personal data—either “consent” or “legitimate interests”.
  • Put a system in place to help people request that their personal data is corrected or deleted, and to object to the use of personal data in the AI training process.
  • Put an age-verification system in place to stop children from using ChatGPT.
  • By September, remove accounts belonging to children under 13, and to children aged between 13 and 18 whose parents have not consented on their behalf.
  • Raise awareness about the use of personal data for AI training via TV, radio, and print media.

Most of the DPA’s requirements constituted basic GDPR compliance measures. But some people within the data protection community were still sceptical about whether OpenAI could meet the regulator’s demands.

OpenAI’s Compliance Efforts

Before April was out, OpenAI had implemented several new features to bring ChatGPT closer to GDPR compliance.

These included a new privacy notice, a feature enabling users to opt out of having their chat histories re-used as training data, and a process allowing people in the EU to submit GDPR rights requests.

It’s debatable whether these measures bring OpenAI into full GDPR compliance. However, the changes were enough to satisfy the Italian DPA. This allowed OpenAI to make ChatGPT available in Italy once more.

In the long term, OpenAI’s run-in with the Italian DPA might be a good thing for the company. The incident suggests that generative AI companies can continue to operate in the EU without breaching data protection law.

CJEU Sets Down Strict Rules on GDPR ‘Special Category Data’

This next case comes from the Court of Justice of the European Union (CJEU) and is known as OT vs. Vyriausioji tarnybinės etikos komisija (Case C-184/20). The decision concerns the GDPR’s concept of “special category data”, sometimes called “sensitive data”.

The Key Takeaway

For information to be “special category” data, it doesn’t have to explicitly state information about a person’s race, health, beliefs, sex life, etc. If special categories can be inferred from a given piece of personal data, it’s likely special category data.

The Background

The claimant, “OT”, had a government job requiring him to declare his partner's name in a public register. OT argued that disclosing his partner’s name would reveal his sexual orientation and was, therefore, “special category data” that his employer had no legal basis for processing.

The name of a person’s partner is certainly personal data. But can a name be special category data? In some cases, yes.

If you know a man’s partner has a typically male name, you can reasonably infer something about both people’s sexual orientation—which is special category data. Therefore, the name itself can be considered special category data.

This makes sense, given that the GDPR defines special category data as personal data “revealing” certain characteristics, including a person’s “sex life or sexual orientation”.

The Grindr Case

The CJEU’s judgment reinforced a 2021 decision by the Norwegian DPA about the LGBTQ+ dating app Grindr.

The Norwegian DPA investigated Grindr’s approach to mobile advertising. The app had several software development kits (SDKs) installed that would share users’ personal data with advertisers without consent.

Even if Grindr had been operating in a less sensitive context, serving behavioral advertising without consent could have been a problem.

But the Norwegian DPA argued that if a person has a Grindr account, this fact in itself reveals special category data because it implies something about the person’s sexual orientation.

The DPA found that Grindr’s GDPR violations related to special category data, and the company received a larger fine than it otherwise would have ($11.7 million, later reduced to $7 million).

The Implications

The case is important because all sorts of information can imply or reveal something about the GDPR’s “special categories”, which include:

-> Personal data revealing:

  • Racial or ethnic origin
  • Political opinions
  • Religious or philosophical beliefs
  • Trade union membership

-> Genetic data

-> Biometric data for the purpose of uniquely identifying an individual 

-> Data concerning: 

  • Health 
  • Sex life or sexual orientation

Any organization processing data that reveals or implies these types of information must ensure it meets one of the conditions for processing special category data under Article 9 of the GDPR, on top of having a legal basis under Article 6. The organization must also take additional steps to protect this type of information.

This interpretation aligns with that of the Federal Trade Commission (FTC) in the US, as revealed in two recent enforcement actions against health apps.

The FTC found that even seemingly low-risk data like an email or IP address can be “health information” if it reveals that an individual is using a health app.

EDPB Narrowly Interprets the GDPR’s ‘Contract’ Rules

This next case started as a stand-off between privacy activist Max Schrems and social media giant Meta concerning Meta’s “legal basis” for targeting ads on Facebook and Instagram.

The Key Takeaway

The legal basis of “contract” only justifies data processing that is essential to carry out obligations under a contract with the data subject, or to take steps to enter into one. Don’t include any unnecessary data processing activities as part of your terms of service.

The Background

If you’re on Facebook or Instagram, Meta uses the “first-party” data you generate on the platform to target you with ads. This processing activity has been the subject of a five-year-long legal dispute that has helped define the GDPR’s legal bases for processing.

Under EU data protection law, all processing of personal data requires a legal basis. There are six to choose from—consent, contract, legal obligation, vital interests, public task, and legitimate interests—each appropriate for a different situation.

Before the GDPR, Meta’s legal basis for targeting ads was “consent”. Users were deemed to have consented merely by signing up for an account on Facebook or Instagram—if they didn’t want to consent to this processing, they couldn’t use the platform.

But the GDPR strengthened the EU’s definition of “consent”. Consent under the GDPR is only valid if it is “freely given, specific, informed and unambiguous”, and signified “by a statement or by a clear affirmative action”.

Concerned that its practices would fall short of this definition, Meta approached the Irish Data Protection Commission (DPC) for advice.

The Switch to ‘Contract’

At midnight on May 25, 2018—the day the GDPR took effect—Meta published a new terms of service agreement and privacy policy for Facebook and Instagram.

Under the new terms, Meta argued that it was obliged under the contract with its users to provide personalised advertising—and help people “discover content, products and services”.

But switching from “consent” to “contract” made no difference to Facebook and Instagram users. The deal remained the same: Don’t want targeted ads? Don’t use the platform.

Max Schrems and his privacy advocacy group noyb (None of Your Business) argued that this was Meta’s way to “bypass” the GDPR’s rules on consent. The group complained to the Irish DPC.

After several years of investigation, the Irish DPC issued a draft decision on noyb’s complaint.

The Irish DPC vs. the EDPB

The regulator initially rejected noyb’s argument, accepting Meta’s submission that ad-targeting was “necessary” under the company’s terms of service. 

According to the Irish DPC, Meta’s reliance on “contract” was valid because users signed up to receive targeted ads—and ad-targeting was necessary to maintain Meta’s business model.

But other regulators on the European Data Protection Board (EDPB) disagreed. The EDPB directed the DPC to amend its draft decision to find that Meta could not rely on “contract” for targeting ads, and to order Meta to find a new legal basis.

Finally, Meta relented (although the decision is under appeal). The company switched its legal basis to “legitimate interests”. 

Facebook and Instagram users can now exercise their “right to object” and opt out of ad-targeting (as long as they complete a somewhat complicated opt-out form).

What’s Next for GDPR Enforcement?

UK GDPR and ePrivacy Reforms

The UK fully implemented the GDPR into national law shortly before leaving the EU. But the country is now considering some wide-ranging reforms to its data protection and privacy framework.

Among other changes, the UK is considering loosening the GDPR’s record-keeping and risk-assessment requirements. 

But the government is also likely to increase the maximum fines under PECR, the UK’s implementation of the ePrivacy Directive, which covers direct marketing and cookies. 

Currently capped at £500,000 ($626,000), the UK’s fines for cookies and marketing violations could rise to £17.5 million ($21.9 million) or 4% of annual global turnover (whichever is higher), to match the GDPR’s penalty regime.
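
To make the proposed cap concrete, here’s a minimal sketch in Python. It is purely illustrative: the “whichever is higher” rule is carried over from the GDPR’s Article 83 penalty model, which the reform is designed to match, and the function name is our own.

    # Illustrative sketch of a GDPR-style penalty cap: the maximum fine is
    # a fixed amount or a percentage of annual global turnover, whichever
    # is higher. Figures follow the proposed UK PECR reform described above.
    def max_penalty(annual_global_turnover_gbp: float,
                    fixed_cap_gbp: float = 17_500_000,
                    turnover_rate: float = 0.04) -> float:
        """Return the maximum possible fine under a two-limb penalty cap."""
        return max(fixed_cap_gbp, turnover_rate * annual_global_turnover_gbp)

    # Example: a company with £2 billion in global turnover. 4% of turnover
    # (£80 million) exceeds the £17.5 million fixed cap, so the maximum
    # fine would be £80 million.
    print(f"£{max_penalty(2_000_000_000):,.0f}")  # £80,000,000

In other words, for large companies the fixed figure is a floor, not a ceiling: the turnover-linked limb is what gives the regime its teeth.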

DPAs’ AI Crackdown

We saw above how OpenAI managed to satisfy the Italian DPA’s enforcement demands. However, other regulators have indicated that they are also dealing with complaints about ChatGPT.

The French DPA might be the next regulator to strike against OpenAI. The EDPB has also announced an AI Task Force to help coordinate AI-related enforcement action.

While the Italian case suggests that generative AI and the GDPR can co-exist, companies using or developing AI should pay careful attention to whether their products meet the EU’s data protection and privacy requirements.

GDPR Cross-Border Enforcement Amendment

Privacy advocates have long complained that the GDPR’s “one-stop-shop” process has hampered effective data protection enforcement. That might soon change as the EU considers amending this part of the GDPR.

Under the current system, when an individual in France complains about a company whose main EU establishment is in Ireland, the French regulator must forward the complaint to Ireland in the first instance.

This presents a problem, as many of the world’s largest tech companies base their European operations in Ireland. This means Ireland—whose regulator has been accused of employing an excessively “light touch” approach—has a long backlog of GDPR investigations.

The European Commission has drafted new legislation that would provide additional support and a new process for regulators engaged in cross-border GDPR investigations. The intention is to make GDPR enforcement more effective and efficient.

This could solve one of the most heavily criticized aspects of the GDPR—and could also mean an increase in GDPR penalties and investigations.

Robert is a writer covering privacy, security, and AI. He is a respected voice on privacy and has been working in the field since 2017.
