In January 2026, the CNIL issued fines of EUR 27 million and EUR 15 million against Free Mobile and Free respectively. The combined EUR 42 million is one of the largest GDPR fines the CNIL has ever issued, and it followed a single attack that touched 24,633,469 subscriber contracts.
Three distinct violations triggered it. Each sits in a different part of GDPR. And each one could have been prevented by an architectural decision made before the system went live.
This is a risk-aware architecture review for engineers and engineering leads at 50–500 person companies who are responsible for data architecture. It maps each CNIL finding to a schema, service-boundary, or engineering process decision — and puts your actual exposure in context within the broader AI regulatory environment.
The three violation categories: authentication security (Article 32), breach notification content (Article 34), and data retention (Article 5(1)(e)). The three countermeasures: session-based identity separation, expiring token patterns, and retention TTL design.
What actually triggered the EUR 42 million fine against Free and Free Mobile?
Three distinct GDPR violations. Inadequate VPN authentication (Article 32) let an attacker exfiltrate 24.6 million subscriber records via the internal MOBO tool. A breach notification email omitted required Article 34(2) content. And former subscriber data was retained beyond any justified purpose under Article 5(1)(e). The fine split as EUR 27M against Free Mobile (19.46M contracts) and EUR 15M against Free (5.17M contracts).
Here’s the attack timeline. On 28 September 2024, an attacker gains VPN access. Over the following week they connect to MOBO — a subscriber management tool serving customer data from both entities, including IBANs. Bulk exfiltration runs through 6 October. Discovery comes on 21 October when the attacker makes contact.
The CNIL’s findings read as an architecture critique: the companies hadn’t implemented basic security measures that could have made the attack more difficult. The notification email didn’t contain the information required under Article 34(2). And Free Mobile had retained millions of records about former subscribers without justification for an excessive period of time.
The three violations are independent. The authentication failure opened the door. The notification failure was a process failure that had nothing to do with whether any breach occurred. The retention failure was an ongoing structural violation that would have existed regardless.
The CNIL sized the fine against Iliad Group’s EUR 10 billion turnover and EUR 367 million profit. EUR 42M is approximately 0.42% of global annual turnover — well below the 4% ceiling GDPR Article 83 permits.
How do GDPR fines work and is this a risk for a company your size?
GDPR fines scale with global annual turnover: up to 4% for violations of the core principles, consent, and data subject rights (Article 83(5)), and up to 2% for controller and processor obligations, including security measures under Article 32 (Article 83(4)). Most fines against SMB-scale organisations fall in the EUR 10K–EUR 500K range. The Free/Free Mobile case is useful for calibration because it’s a proportional enforcement action against an organisation with identifiable architectural deficiencies — not a big-tech anomaly.
The DLA Piper GDPR Fines and Data Breach Survey from January 2026 recorded approximately EUR 1.2 billion in fines in 2025, a 22% annual increase in breach notifications, and 443 notifications per day — the first time daily notifications have topped 400.
The Irish DPC accounts for EUR 4.04 billion of the EUR 7.1 billion aggregate since 2018, driven by platform cases such as Meta’s EUR 1.2 billion fine and TikTok’s EUR 530 million. If you’re not a multinational platform, those numbers don’t describe your risk profile.
For a company with EUR 5 million annual turnover facing comparable violations, a proportional fine lands in the EUR 50K–EUR 250K range. Significant but survivable, and smaller than the reputational and operational cost of a breach at this scale.
Enforcement is not plateauing. Understanding how GDPR fits into the 2026 compliance stack is the strategic context for everything that follows.
What authentication security failures does GDPR Article 32 actually penalise?
Article 32 requires “appropriate technical and organisational measures” proportionate to the risk. It doesn’t name MFA. The CNIL ruling makes the standard concrete: VPN access to a system containing 24 million subscriber records, including IBANs, protected only by single-factor authentication is disproportionately insecure.
The CNIL found two distinct Article 32 failures.
VPN authentication. One stolen credential gave the attacker direct access to a subscriber management tool serving millions of records including financial identifiers. The fix: MFA as a mandatory gate on any remote access path to personal data — hardware token, authenticator app, or certificate-based.
Anomaly detection. The exfiltration ran for approximately eight days and nothing caught it. Effective anomaly detection doesn’t require a full SIEM. You need structured application logs, alert thresholds on unusual query volumes, off-hours access monitoring, and a named owner who reviews the outputs. Sprint-level task.
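A minimal version of those alert thresholds can be sketched in a few lines. This is an illustrative example, not a production detector: the threshold, the off-hours window, and the log-entry shape are all assumptions you would replace with your own.

```python
from collections import Counter
from datetime import datetime

# Illustrative thresholds -- tune these to your own baseline traffic.
QUERY_VOLUME_THRESHOLD = 10_000   # rows per user per day
OFF_HOURS = range(0, 6)           # 00:00-05:59 local time

def flag_anomalies(entries):
    """Scan structured access-log entries (user, timestamp, rows_returned)
    and return alert strings for unusual volume or off-hours access."""
    per_user = Counter()
    alerts = []
    for user, ts, rows in entries:
        per_user[user] += rows
        if ts.hour in OFF_HOURS:
            alerts.append(f"off-hours access by {user} at {ts.isoformat()}")
    for user, total in per_user.items():
        if total > QUERY_VOLUME_THRESHOLD:
            alerts.append(f"query volume {total} rows by {user} exceeds threshold")
    return alerts
```

The point is not the sophistication of the check but that its output lands in front of a named owner. An eight-day bulk exfiltration trips even a crude daily volume threshold.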
The stronger architectural response is to separate authentication identity from application data identity using short-lived session tokens. A compromised VPN credential gives network access. If the application layer requires a separate authentication step before personal data is reachable, the blast radius is bounded. More on this below.
How does GDPR’s breach notification requirement become an engineering process?
Free Mobile’s notification violation wasn’t a failure to notify. They sent the email. The violation was that the email omitted required content under Article 34(2). Affected users couldn’t understand the consequences of the breach or the measures they could take to protect themselves. That’s an engineering process failure with a specific, preventable cause.
Two separate GDPR notification obligations apply on different timelines with different content requirements.
Article 33 requires notification to the supervisory authority within 72 hours of becoming aware of a breach. The clock starts when any internal team member has credible evidence — not when legal reviews it.
Article 34 requires notification to affected data subjects when the breach is likely to result in high risk. Higher threshold than Article 33. Free Mobile violated Article 34, not Article 33.
Article 34(2) specifies what a user notification must contain: DPO contact details; a description of the likely consequences; a description of measures taken or proposed. A generic “we take your security seriously” email fails this. The notification must describe what can actually happen to affected users and the specific actions taken.
Here’s the minimum viable process:
- Detection trigger — automated alert or manual report
- Severity classification — named person responsible, within 4 hours
- Article 33 notification draft — pre-built template, completed by incident owner
- DPO or legal review — time-boxed to 2 hours maximum
- Submission to supervisory authority — documented timestamp, filed under Article 33(5)
For Article 34 user notifications, maintain a separate pre-approved template with the three mandatory content fields in skeleton form. Insert the incident-specific content at send time.
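The skeleton template can be enforced in code so an incomplete notification never reaches the send step. A hedged sketch, assuming a simple dataclass per notification; the field names are illustrative, the three content categories come from Article 34(2) via Article 33(3).

```python
from dataclasses import dataclass, fields

@dataclass
class UserBreachNotification:
    """Skeleton holding the three mandatory Article 34(2) content fields."""
    dpo_contact: str          # name and contact details of the DPO
    likely_consequences: str  # what can actually happen to affected users
    measures_taken: str       # measures taken or proposed to address the breach

def missing_fields(n: UserBreachNotification) -> list[str]:
    """Return the names of any mandatory fields left empty or whitespace."""
    return [f.name for f in fields(n) if not getattr(n, f.name).strip()]
```

Wiring `missing_fields` into the send pipeline as a hard gate turns the CNIL's complaint about Free Mobile's email into a condition the system cannot reproduce.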
The incident response documentation supporting this process — the Article 33(5) breach register and notification records — overlaps with requirements under other frameworks too.
What does data minimisation look like at the schema and service-boundary level?
Data minimisation under Article 5(1)(c) means collecting only what is necessary for the stated purpose. At the schema level that means nullable personal data fields with documented justification for each non-nullable column, purpose-scoped tables that prevent cross-purpose data reuse, and collection consent gates that block data from reaching the database without a valid consent record.
The Free Mobile retention violation shows what happens when storage limitation — Article 5(1)(e) — is a policy document rather than an architectural constraint. Millions of former subscriber records stayed in live systems with no justified purpose. The cause: nothing in the system forced data to stop existing when its purpose ended. That’s a schema decision, not a policy failure.
Three schema-level patterns worth building in from the start:
Nullable personal data fields with purpose tracking. Pair each personal data column with a data_purpose_id foreign key referencing an active purpose record. No active purpose record means the field is null. Purpose limitation becomes a database constraint, not a policy document.
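A minimal sketch of that constraint, using SQLite for portability; the table and column names are hypothetical. The CHECK clause is the essence: a personal data value without a purpose reference is rejected at insert time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE data_purpose (
        id     INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,
        active INTEGER NOT NULL DEFAULT 1
    );
    CREATE TABLE subscriber (
        id               INTEGER PRIMARY KEY,
        email            TEXT,
        email_purpose_id INTEGER REFERENCES data_purpose(id),
        -- No purpose, no data: enforced by the engine, not a policy doc.
        CHECK (email IS NULL OR email_purpose_id IS NOT NULL)
    );
""")
```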
Separate tables for identity and application data. Email, name, phone, and address live in a separate schema from activity logs and subscription history. When a subscriber relationship ends, the identity record can be retired without destroying billing history that has a legitimate retention basis.
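The identity/application split might look like the following sketch. The schema and the opaque user_ref_id are illustrative; the design point is that deleting the identity row leaves billing rows, which have their own retention basis, untouched.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Identity data lives apart from application data; the only link
    -- between the two is an opaque internal reference.
    CREATE TABLE identity (
        user_ref_id TEXT PRIMARY KEY,
        email       TEXT,
        full_name   TEXT,
        phone       TEXT
    );
    CREATE TABLE billing_history (
        id           INTEGER PRIMARY KEY,
        user_ref_id  TEXT NOT NULL,   -- opaque ref; deliberately no CASCADE
        amount_cents INTEGER,
        billed_at    TEXT
    );
""")

def retire_subscriber(conn, user_ref_id):
    """End of relationship: delete identity data only. Billing history
    keeps its legitimate retention basis under accounting law."""
    conn.execute("DELETE FROM identity WHERE user_ref_id = ?", (user_ref_id,))
```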
Collection consent gate at the API layer. Personal data fields are rejected server-side if no active consent record exists for that data category — a server-side rejection that prevents data reaching the database at all.
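A consent gate can be as small as a lookup and a raise. A sketch under assumptions: the in-memory consent register and the function names are hypothetical stand-ins for your consent store and API middleware.

```python
# Hypothetical consent register keyed by (user_id, data_category).
# In a real system this is a table or service, not a set literal.
CONSENTS = {("u1", "marketing_email")}

class ConsentError(ValueError):
    """Raised server-side before the value can reach the database."""

def accept_field(user_id: str, category: str, value: str) -> str:
    """Gate at the API layer: reject personal data with no active
    consent record for that data category."""
    if (user_id, category) not in CONSENTS:
        raise ConsentError(f"no active consent for {category}")
    return value  # only now may the value travel toward storage
```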
How do you implement identity and access separation to reduce long-term data liability?
Identity and access separation keeps authentication identity — who is logged in — distinct from application data identity — what data is associated with that user’s history. MOBO illustrates the failure at scale: VPN access plus MOBO credentials equalled access to millions of subscriber records including financial data. When authentication identity is directly joined to historical data tables, one compromised credential grants access to everything.
Session-based identity separation. The authentication service issues a short-lived session token — 15-minute access token, 7-day refresh token. The application layer uses the token to look up an internal user_ref_id that maps to application data. The external identity (email, phone) never appears in application data tables. A compromised token gives access to the current session scope only.
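The lookup step can be sketched as follows. This is a simplified in-process model, not a distributed token service: the in-memory dicts stand in for the auth service's identity mapping and session store, and the names are assumptions.

```python
import secrets
import time

SESSION_TTL_SECONDS = 15 * 60   # 15-minute access token

_sessions = {}                                        # token -> (user_ref_id, expires_at)
_identity_to_ref = {"user@example.com": "ref_7f3a"}   # auth service only

def issue_session(external_identity: str) -> str:
    """Auth service: map the external identity to an opaque ref
    and issue a short-lived opaque session token."""
    token = secrets.token_urlsafe(32)
    expires_at = time.time() + SESSION_TTL_SECONDS
    _sessions[token] = (_identity_to_ref[external_identity], expires_at)
    return token

def resolve(token: str) -> str:
    """Application layer: sees only user_ref_id, never the email.
    A compromised token is useless once the session expires."""
    ref, expires_at = _sessions.get(token, (None, 0.0))
    if time.time() >= expires_at:
        raise PermissionError("expired or unknown session token")
    return ref
```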
Expiring token pattern. Short-lived JWT access tokens (15–60 minutes) plus refresh token rotation: each use issues a new token and invalidates the previous one. If an attacker replays a stolen, already-rotated refresh token, the rotation check fails and the whole session family can be revoked — surfacing the compromise instead of silently extending it.
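Rotation with reuse detection fits in a small state machine. A minimal sketch, assuming an in-memory store; "session family" here means all tokens descended from one login, a common convention that is not mandated by any standard.

```python
import secrets

_family_of = {}           # every token ever issued -> session family
_current = {}             # family -> the one currently valid refresh token
_revoked_families = set()

def start_session(family_id: str) -> str:
    """Issue a fresh refresh token for a session family."""
    token = secrets.token_urlsafe(32)
    _family_of[token] = family_id
    _current[family_id] = token
    return token

def rotate(refresh_token: str) -> str:
    """Valid use: issue a new token and invalidate the old one.
    Replay of an already-rotated token revokes the whole family."""
    family = _family_of.get(refresh_token)
    if family is None or family in _revoked_families:
        raise PermissionError("unknown or revoked token")
    if _current[family] != refresh_token:
        _revoked_families.add(family)   # reuse detected: kill the family
        raise PermissionError("refresh token reuse detected; session revoked")
    return start_session(family)
```

Revoking the entire family on reuse is the conservative choice: after a replay you cannot tell which party, attacker or legitimate user, holds the newest token, so both are forced to re-authenticate.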
Retention TTL design. Each personal data table gets a retain_until datetime column, populated at insert time from the documented retention schedule for that data category. A scheduled job deletes or anonymises rows past their deadline. For subscriber data, the schedule ties to the contract end date plus the justified accounting period — typically 10 years for billing records under EU accounting law. Store the retention schedule in a configuration table keyed by data category. It’s a much more defensible artefact than a policy document when the CNIL comes asking.
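The whole TTL mechanism, schedule table, insert-time deadline, and purge job, can be sketched end to end. SQLite and the 30-day figure are illustrative; your schedule values come from your documented retention basis per data category.

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- The retention schedule lives in data, keyed by category:
    -- auditable and changeable without a schema migration.
    CREATE TABLE retention_schedule (
        data_category  TEXT PRIMARY KEY,
        retention_days INTEGER NOT NULL
    );
    CREATE TABLE subscriber_contact (
        id           INTEGER PRIMARY KEY,
        email        TEXT,
        retain_until TEXT NOT NULL   -- ISO 8601, set at insert time
    );
""")
conn.execute("INSERT INTO retention_schedule VALUES ('contact', 30)")

def insert_contact(conn, email: str, now: datetime) -> None:
    """Populate retain_until from the schedule at insert time."""
    days = conn.execute(
        "SELECT retention_days FROM retention_schedule"
        " WHERE data_category = 'contact'"
    ).fetchone()[0]
    deadline = (now + timedelta(days=days)).isoformat()
    conn.execute(
        "INSERT INTO subscriber_contact (email, retain_until) VALUES (?, ?)",
        (email, deadline),
    )

def purge(conn, now: datetime) -> None:
    """Scheduled job: delete rows past their retention deadline."""
    conn.execute(
        "DELETE FROM subscriber_contact WHERE retain_until <= ?",
        (now.isoformat(),),
    )
```

For rows that must survive for audit purposes, the DELETE becomes an UPDATE that nulls the identifying columns instead.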
Privacy by design as risk-aware architecture — building it in from the start
Privacy by design (Article 25) requires data protection controls built in at the design stage — the same way performance or availability requirements are scoped before architecture is locked.
The Free Mobile case is a clean illustration of what the alternative looks like. Each violation was a process that existed but was inadequate: single-factor VPN authentication, a notification email missing required content, data retention with no automated expiry. That’s what a bolt-on compliance approach looks like in a regulator’s findings.
The DPIA as an architecture gate. Article 35 requires a Data Protection Impact Assessment before high-risk processing activities. For engineering teams, treat it as a structured design session that forces data flow mapping, retention schedule definition, and access control specification before build. Run it before architecture is finalised and the constraints shape the design. Run it after and you’re retrofitting.
The three patterns in this article — session-based identity separation, expiring token rotation, and retention TTL with automated purge — form a minimum viable privacy-by-design stack. Each addresses a specific CNIL violation category. Each is a two-week sprint.
EUR 1.2 billion in GDPR fines in 2025. A 22% increase in breach notifications. Fines for insufficient technical and organisational measures up 40% year-on-year. These three patterns are not gold-plating. They’re the minimum viable architecture for operating in a GDPR-regulated market.
For the full picture of AI regulation in 2026 and where GDPR fits within it, that’s your next read.
Frequently Asked Questions
How large was the Free/Free Mobile fine and what specifically triggered it?
EUR 42 million total (EUR 27M against Free Mobile, EUR 15M against Free), issued by the CNIL in January 2026. Three triggers: Article 32 authentication failures (inadequate VPN authentication, ineffective anomaly detection); Article 34 deficient breach notification (omitted consequences, measures taken, and DPO contact); Article 5(1)(e) retention of former subscriber data beyond justified purpose. 24,633,469 contracts affected. Fine calculated against Iliad Group’s EUR 10 billion turnover.
Does GDPR apply to my company if we have any EU users?
Yes. GDPR applies to any organisation that processes personal data of individuals in the EU, regardless of where the organisation is based (Article 3, extraterritorial scope). A sign-up form, user accounts, or analytics collecting data from people in the EU is enough. There is no minimum user count.
What is the 72-hour breach notification window and how do I build a process to meet it?
Article 33 — notification to the supervisory authority — starts the moment any internal team member has credible evidence of a breach, not when legal reviews it. Minimum process: automated detection alert, named incident owner, pre-drafted Article 33 template, time-boxed DPO review, submission. Document and rehearse before an incident. Article 33 and Article 34 are separate obligations with different content requirements.
What is the difference between data minimisation and data deletion?
Data minimisation (Article 5(1)(c)): collect only what is necessary, at the point of collection. Storage limitation (Article 5(1)(e)): retain data only for as long as the purpose requires, then delete or anonymise. Minimisation limits what enters the system; retention TTL limits how long it stays. Free Mobile violated Article 5(1)(e) — data collected legitimately, retained after the purpose ended.
What is the difference between Article 33 and Article 34 breach notification?
Article 33: notify the supervisory authority within 72 hours. Article 34: notify affected data subjects when the breach is likely to result in high risk — higher threshold than Article 33. Free Mobile violated Article 34: they notified users, but the notification omitted the required content. Manage them as separate process steps.
What authentication controls satisfy GDPR Article 32 for remote access systems?
Article 32 requires measures “appropriate to the risk.” The CNIL ruling sets the practical minimum: single-factor authentication is insufficient for VPN access to systems containing personal data at scale. MFA is the baseline — hardware token, authenticator app, or certificate-based. Also required: session expiry, access logging with anomaly detection, and least privilege for service accounts.
How do retention TTLs work at the database layer?
Add a retain_until datetime column to each personal data table, populated at insert time. A scheduled job deletes or anonymises rows past their deadline. Store the retention schedule in a configuration table keyed by data category — auditable without schema migrations. For records needing audit trails, anonymise (null out identifiers, retain aggregate data) rather than deleting.
What is a DPIA and when is it required under GDPR?
A Data Protection Impact Assessment (Article 35) is required before any processing likely to result in high risk to individuals. Mandatory triggers: large-scale sensitive data processing, systematic public monitoring, new technologies with significant individual impact. Treat it as an architecture design session — output is a data flow map, retention schedule, access control spec, and risk register. Required before launch, not after.
How do you anonymise data rather than delete it for retention compliance?
Anonymisation removes all direct and indirect identifiers — result is no longer personal data under GDPR. Pseudonymisation is not anonymisation: the record remains personal data as long as the re-identification key exists. True anonymisation requires the key to be unrecoverable. Use anonymisation for records that retain analytical value after the personal data retention period expires.
Are GDPR fines proportional to company size?
Broadly, yes. Article 83 caps fines at a percentage of global annual turnover, so the ceiling scales automatically with company size, and regulators weigh turnover when sizing the fine itself: the CNIL used Iliad Group’s EUR 10 billion turnover in the Free Mobile calculation. For a company with EUR 5 million turnover, a comparable violation produces a fine in the EUR 50K–EUR 250K range — significant, and almost always smaller than the reputational and operational cost of the breach itself.
What is the practical difference between privacy by design and a compliance bolt-on?
A bolt-on adds deletion scripts, consent banners, and access logs after deployment — it fixes observable non-compliance without fixing the architecture. Privacy by design embeds data minimisation, retention limits, and access controls before the first table is created. The Free Mobile violations are the bolt-on pattern: processes that existed but were inadequate, because they were retrofitted onto an existing system.