News

Will your Claude conversations be used to train AI?

Article Highlights:
  • Anthropic asks users to choose whether their Claude conversations may be used for training
  • Consumer users must decide by September 28
  • Retention up to 5 years for those who do not opt out
  • Change affects Claude Free, Pro and Max, including Claude Code
  • Enterprise and specialized customers are excluded
  • Acceptance flow design may lead to uninformed consent
  • Stated goal: improve model safety and capabilities
  • Practical impact: opt out if you handle sensitive data
  • Action: check account training settings immediately
  • Regulatory risk: authorities may scrutinize disclosure practices

Introduction

Claude conversations now require an explicit choice: by September 28, users must decide whether to allow their Claude conversations and coding sessions to be used to train models, with retention extended to up to five years for those who don't opt out.

What changes?

Unless you opt out, Anthropic may use Claude conversations for model training and retain data for up to five years.

Context

Anthropic updated its consumer policy: previously consumer chat data was deleted within 30 days in most cases; the company now requests permission to use Claude conversations and code sessions for training. The change affects Claude Free, Pro and Max users, including Claude Code, while enterprise and specialized customers remain exempt.

The Problem / Challenge

The main issues are long retention and unclear consent. The acceptance flow highlights a large Accept button while the training permission appears as a small toggle preset to On, increasing the risk of uninformed consent.

Why Anthropic is doing this

Anthropic says the data will improve model safety and capabilities, but access to large volumes of real-world Claude conversations also strengthens its competitive position against rivals.

Privacy implications

The change fuels confusion: many users assume that deleting chats removes their data, yet policies and design choices can keep interactions available for training for years, which invites regulatory scrutiny of how consent is obtained.

How to decide (practical steps)

  • Check training permissions during signup or in account settings
  • Opt out if you exchange sensitive information or prefer not to contribute data
  • Prefer enterprise offerings if you need contractual data protections
  • Follow official Anthropic communications for policy updates

Conclusion

The decision about Claude conversations is both privacy-related and strategic: choose based on your data sensitivity and willingness to contribute to model improvement before the deadline.


FAQ

Quick definition: Claude conversations are the chats and code sessions users have with Claude; the new policy requests permission to use them for model training.

  • Will Claude conversations be used to train models?
    If you do not opt out, Anthropic may use Claude conversations for training and model improvement.
  • How long are Claude conversations retained?
    For users who don't opt out, Anthropic states retention can be up to five years.
  • Which customers are not affected?
    Enterprise and specialized offerings (Claude Gov, Claude for Work, Claude for Education, API customers) are excluded from the consumer change.
  • How do I prevent my Claude conversations from being used?
    Use the opt-out toggle in account settings or during the update flow before the stated deadline.
  • Why does Anthropic want these conversations?
    To improve safety systems, content detection, and skills like coding and reasoning, and to obtain real-world data for model development.
  • Does deleting a chat remove it from Anthropic systems?
    Not necessarily: prior policies and the new retention rules indicate deletion in the UI may not equate to permanent removal from training datasets.
Evol Magazine