Anthropic Will Use Claude Chats for Training Data. Here's How to Opt Out

Anthropic is flipping the script on its Claude chatbot: starting October 8, it will scoop up user chats and coding sessions as fodder for training its AI models unless you hit the brakes and opt out, which is a smart move if you're a small-business owner wary of feeding the AI beast. The shift from Anthropic's previous hands-off approach is all about sharpening Claude with real-world data, but let's be real, it's a privacy poke in the eye for anyone who assumed their convos were off-limits.

Opting out is simple. New users can toggle it off during signup; existing users should dive into Privacy Settings and flip the "Help improve Claude" switch to off. Do it quickly, because the policy covers new chats and even old ones if you reopen them.

Also brace for longer data retention, up to five years, which might make MSPs squirm. The good news: this doesn't hit commercial users. Still, it's a nudge to protect your proprietary info in a world where AI rivals like ChatGPT gobble up data by default. If you're tech-curious and running a small operation, opting out keeps your insights private, though anything you post publicly is fair game for any AI scavenger.
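One practical angle for the tech-curious: since commercial and API usage sits outside this training policy, an MSP handling proprietary client data could route sensitive work through Anthropic's API instead of the consumer chat app. Here's a minimal sketch using Anthropic's official Python SDK; the model string and the prompt are illustrative placeholders, and the key is read from your environment:

    import anthropic

    # The SDK picks up ANTHROPIC_API_KEY from the environment by default.
    client = anthropic.Anthropic()

    # API traffic falls under commercial terms, not the consumer training policy.
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder; check current model names
        max_tokens=1024,
        messages=[
            {"role": "user", "content": "Summarize this internal ticket queue."}
        ],
    )

    # The response body is a list of content blocks; grab the text of the first.
    print(response.content[0].text)

Same model under the hood, but the terms of service differ, which is the whole point for anyone guarding client data.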

Source: https://www.wired.com/story/anthropic-using-claude-chats-for-training-how-to-opt-out/