Compliance Guide for Voice Cloning

4. Data Protection Requirements

4.1 Voice as Biometric Data

The human voice qualifies as biometric data within the meaning of Art. 4(14) GDPR and, when processed for the purpose of uniquely identifying a person, falls under the special categories of Art. 9 GDPR.
Like a fingerprint, a voice is unique to the individual and reflects physiological, genetic, and socio-cultural characteristics.

Data Protection Classification

  1. Processing Prohibition: General prohibition without explicit consent
  2. High Requirements: Special category of particularly sensitive data
  3. Biometric Authentication: Practical use in customer service

4.2 Requirements for Effective Consent

Explicit consent under Art. 9(2)(a) GDPR is only effective if it meets the following conditions:

  • Voluntariness: Genuine choice without pressure or disadvantages
  • Informedness: Clear information about purpose and scope
  • Purpose Limitation: Specific, explicit purposes
  • Right to Withdraw: Withdrawable at any time

4.3 Data Subject Rights

Right of Access (Art. 15 GDPR):
  • Whether and which data is processed
  • Processing purposes and categories
  • Recipients and storage duration
  • Origin of the data

Right to Erasure (Art. 17 GDPR):
  • Immediate deletion when the purpose ceases
  • Deletion upon withdrawal of consent
  • Exceptions for legal retention obligations

Right to Object (Art. 21 GDPR):
  • Object to processing at any time
  • Especially regarding legitimate interests
  • No further processing after objection

5. Personality Rights and Economic Interests

5.1 Protection of Economic Exploitation Interests

Voice actors and artists benefit from comprehensive protection under copyright and related rights law.

Key Points of Protection

  • Right to Remuneration: Fair compensation, including for subsequent exploitations
  • Right to Attribution: Right to be named when the recording is used
  • Legal Enforcement: Injunctions and damages for unauthorized use
  • Personality Protection: Protection of both moral and economic interests

5.2 Relevant Case Law

Key Findings:
  • The voice has considerable economic value
  • Imitation without consent is generally impermissible
  • BGH ruling of 1999 ("Marlene Dietrich"): protection of the commercial components of personality rights extends post mortem (10 years)
  • Unauthorized use constitutes a violation of personality rights

6. Criminal Law Aspects

6.1 Limits of § 201 StGB

§ 201 StGB protects only the word as actually spoken; synthetically generated speech produced by voice cloning falls outside its scope.

Protection Gaps in Criminal Law

§ 201 StGB protects:
  • Real, non-publicly spoken utterances
  • Against unauthorized recording and dissemination
§ 201 StGB does NOT protect:
  • The voice as such
  • AI-generated speech synthesis
  • Computer-generated imitations

6.2 Voice Cloning and Fraud

Social Engineering Attacks

  1. CEO Fraud: Fake instructions using imitated supervisor voices
  2. Family Extortion: Faked emergencies with imitated family member voices
  3. Data Theft: Soliciting sensitive information through familiar voices
  4. System Access: Gaining credentials by impersonating colleagues' voices

Deepfakes and Manipulation

  • Political Manipulation: Fake statements from politicians to influence opinion
  • Defamation: Compromising fake statements attributed to famous individuals
  • Extortion: Threats to release forged recordings
  • Identity Theft: Complete takeover of a digital voice identity

Preventive Measures

  • Multi-factor authentication procedures
  • Voice recognition technology for authenticity verification
  • Watermarking of original recordings
  • AI-based deepfake detection
  • Employee training on voice cloning risks
  • Clear verification processes for critical requests
  • Incident response plans for voice cloning attacks
  • Regular security audits
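One of the measures above, clear verification processes for critical requests, can be sketched in code. The following is a minimal, hypothetical example (function names and flow are illustrative, not taken from any specific product): a caller whose voice alone is no longer trustworthy must echo back a one-time code delivered over a separate, pre-verified channel.

```python
import hmac
import secrets

# Hypothetical sketch of an out-of-band challenge for critical voice requests:
# the one-time code is pushed to the requester's registered device, never
# spoken aloud first, so a cloned voice alone cannot pass verification.

def issue_challenge() -> str:
    """Generate a short one-time code to send over a second channel."""
    return secrets.token_hex(4)  # e.g. 'a3f19c0b'

def verify_response(expected: str, supplied: str) -> bool:
    """Compare in constant time so the check leaks no timing information."""
    return hmac.compare_digest(expected.encode(), supplied.encode())

challenge = issue_challenge()
assert verify_response(challenge, challenge) is True
assert verify_response(challenge, "wrong-code") is False
```

The constant-time comparison is a standard precaution; the essential point is that the second factor travels over a channel the attacker's voice clone cannot reach.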

7. International Developments

7.1 Comparison USA (ELVIS Act) vs. Germany/EU

Explicit Protection under the ELVIS Act (Tennessee):
  • Direct prohibition of AI-based voice imitation
  • Civil and criminal sanctions
  • Up to 11 months and 29 days imprisonment and a fine of up to 2,500 USD (Class A misdemeanor)
  • Fair use exceptions for satire and journalism
  • Model legislation for other US states

Comparison Table

Aspect            | USA (ELVIS Act)               | Germany/EU
------------------|-------------------------------|---------------------------
Protected Subject | Explicit voice clones         | General personality rights
Enforcement       | Criminal and civil law        | Primarily civil law
Scope             | Tennessee (expansion planned) | EU-wide harmonized
Exceptions        | Fair use for media/satire     | GDPR exceptions

7.2 EU AI Act – Labelling Requirement

The EU AI Act introduces comprehensive labelling obligations for AI-generated content; its transparency rules (Art. 50) apply from August 2, 2026.

Labelling Requirements

  • Deepfakes: Deceptively realistic AI-generated audio/video content
  • Public Information: AI-generated texts on public affairs
  • First Encounter: Labelling at the first interaction with the content
  • Machine Readability: Technical detectability of the label by automated systems
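The machine-readability requirement can be illustrated with a small sketch. The AI Act mandates that the label be technically detectable, but it does not prescribe a schema; the field names below are purely illustrative (emerging standards such as C2PA define real manifest formats), and the example simply attaches a structured provenance record to a synthetic audio asset.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of a machine-readable provenance label for a synthetic
# voice clip. All field names are illustrative assumptions, not a mandated
# schema; the point is that software, not only humans, can detect the label.

def build_ai_label(asset_id: str, generator: str) -> str:
    label = {
        "asset_id": asset_id,
        "ai_generated": True,
        "content_type": "audio/synthetic-voice",
        "generator": generator,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label, indent=2)

manifest = build_ai_label("clip-0042", "example-tts-v1")
assert json.loads(manifest)["ai_generated"] is True
```

In practice such a record would be embedded in or cryptographically bound to the media file rather than stored as a loose sidecar, so that the label survives distribution.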

Exceptions from the Labelling Requirement

  1. Editorial Responsibility: Content under human supervision and editorial control
  2. Artistic Content: Creative and satirical works
  3. Law Enforcement: Use for the investigation of crimes

Compliance Deadline: All providers must implement complete labelling systems by August 2, 2026.

Practical Compliance Checklist

For Companies

Technical and organizational measures:
  • Secure storage of voice data
  • Access control and encryption
  • Implement deletion concepts
  • Audit logs for voice processing

Contracts and agreements:
  • Usage agreements with voice providers
  • Data protection agreements with service providers
  • Licensing models for commercial use
  • Define liability arrangements
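The audit-log item on the checklist can be made concrete with a short sketch. The design below is one common pattern, not a prescribed implementation: each log entry carries the hash of its predecessor, so a silently edited or deleted entry breaks the chain and the tampering becomes detectable. All names are illustrative.

```python
import hashlib
import json

# Hypothetical sketch of a tamper-evident audit log for voice-data processing.
# Each entry stores the hash of the previous entry; verifying the chain
# detects any after-the-fact modification of logged events.

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log: list, event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

def chain_is_intact(log: list) -> bool:
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"action": "process", "speaker_id": "anon-17"})
append_entry(log, {"action": "delete", "speaker_id": "anon-17"})
assert chain_is_intact(log)
```

A production system would additionally write the log to append-only storage and anchor periodic checkpoints externally; hash chaining alone only detects tampering, it does not prevent it.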

For Voice Providers

  1. Contract Review: Careful examination of usage scope and purpose
  2. Remuneration Agreement: Fair compensation for all types of use
  3. Withdrawal Rights: Ensuring the possibility to withdraw consent at any time
  4. Monitoring: Regular monitoring of voice use

Recommendation: Seek legal advice for complex voice cloning projects to meet all compliance requirements.