AMD’s AI director is not happy with Anthropic’s Claude Code; issue raised on GitHub reads: Claude cannot be trusted to perform complex engineering tasks, every senior engineer in my team …

Claude Code’s performance isn’t impressing Stella Laurenzo anymore. AMD’s AI chief took to her “stellaraccident” GitHub account to write that the tool “has regressed to the point it cannot be trusted to perform complex engineering.” Her comments are based on internal analysis of more than 6,800 coding sessions, nearly 235,000 tool calls, and close to 18,000 reasoning blocks. She said multiple engineers on her team have reported similar issues, pointing to a rise in “stop-hook violations,” where the model exits tasks early or requests unnecessary permissions.

Laurenzo said that “every senior engineer on my team has reported similar experiences/anecdotes,” adding that stop-hook violations increased from zero to around 10 per day last month. She linked the decline to the rollout of thinking redaction (redact-thinking-2026-02-12), arguing that extended reasoning can be “load-bearing” for complex engineering workflows.

She also noted a behavioral shift in Claude Code from a research-first to an edit-first approach, which she said led to lower-quality code, weaker adherence to conventions, and reduced reliability during longer sessions.

What Anthropic said in response to AMD AI chief’s Claude usage concerns

Anthropic responded to the claims, with company engineer Boris Cherny stating that the redact-thinking setting only hides reasoning from the interface and does not reduce the model’s actual reasoning. The company also pointed to the introduction of adaptive thinking in Claude Opus 4.6, where the system determines how long to think depending on the task.

“Some people want the model to think for longer, even if it takes more time and tokens. To improve intelligence more, set effort=high via `/effort` or in your settings.json,” he wrote.

Anthropic added that while the default medium effort setting (effort=85) balances performance and efficiency, it is testing higher-effort configurations for Teams and Enterprise users so they can “benefit from extended thinking even if it comes at the cost of additional tokens & latency.”

“I appreciate the depth of thinking & care that went into this,” Cherny also noted, responding to Laurenzo’s analysis.
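Based on Cherny’s quote, raising the effort level in a settings.json file might look like the minimal sketch below. The key name and accepted value are assumptions inferred from the quote (“set effort=high via `/effort` or in your settings.json”), not a confirmed configuration schema; consult Anthropic’s Claude Code documentation for the authoritative option names.

```json
{
  "effort": "high"
}
```

Per Anthropic’s comments, a higher effort setting trades additional tokens and latency for longer extended thinking on complex tasks.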
