Anthropic Restricts Tool Usage, Angers Users
Anthropic, the company behind the AI assistant Claude, recently implemented a policy change that has frustrated many of its users. Starting April 4th, the company began enforcing a new rule: users can no longer connect third-party tools such as OpenClaw to their Claude subscriptions. The change effectively bans these external tools from subscription-based access to Claude, forcing users to find alternatives or pay more for extended usage.
What Changed and Why?
The company sent out emails less than 24 hours before the policy took effect, informing users that connecting third-party tools like OpenClaw to a Claude subscription was now against its terms of service. These tools, often called “harnesses,” help users interact with AI models in more advanced ways. Anthropic specifically called out OpenClaw, stating that usage through such tools would no longer be covered by a user’s regular subscription limits. To continue using Claude with these tools, users would need to purchase additional usage credits, significantly increasing their costs.
In an email to users, Boris Cherny, head of Claude Code, explained the decision. He stated that Anthropic’s subscription plans were not designed for the usage patterns of these third-party tools. The company is experiencing a massive increase in demand for Claude, leading to capacity issues. Cherny mentioned that they are prioritizing customers who use their products and the official API directly. As a way to soften the blow, Anthropic offered a one-time credit equal to the monthly plan cost for affected users who wished to cancel their subscriptions.
Understanding the AI Capacity Crunch
Anthropic is facing a dual challenge: soaring demand for its AI and a shortage of computing power, particularly Graphics Processing Units (GPUs), which are essential for running AI models. To manage this situation, the company has employed both incentives and restrictions. For instance, they offered a temporary promotion doubling usage outside of peak hours (weekdays 5-11 AM Pacific and all day on weekends). However, they also adjusted session limits for some subscription tiers during peak times, making users hit their limits faster.
Despite these measures, the company continued to experience high demand, leading to the ban on third-party harnesses. Many users reported their usage quotas being depleted much faster than expected, sometimes within days of their weekly reset. This suggests that the demand is outstripping Anthropic’s current capacity, even with the implemented measures.
Navigating the Change: Swapping Models
The swiftness of Anthropic’s policy change left users little time to adapt, and many had to quickly switch the primary AI model inside tools like OpenClaw. In practice, the switch proved relatively straightforward: some users reported swapping from Claude models to alternatives like GPT-4.5 Turbo within minutes. This ease of switching is partly by design, as most AI harnesses let users change the underlying model with minimal disruption.
A key factor in making this switch easier is having optimized prompt files for different AI models. Prompts are the instructions given to AI. A prompt that works well for one AI model might not work as effectively for another. Users who had already prepared different versions of their prompts for various models found the transition much smoother. This highlights the importance of flexibility and adaptability when working with different AI systems.
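The idea of keeping per-model prompt variants can be sketched with a small helper. This is a minimal, hypothetical example (the directory layout, file names, and model identifiers are assumptions, not part of any real tool): it maps a model name onto the prompt file tuned for that model family, falling back to a generic prompt when no variant exists.

```python
from pathlib import Path

# Hypothetical layout: one tuned system-prompt file per model family,
# e.g. prompts/claude.md, prompts/gpt.md, prompts/generic.md
PROMPT_DIR = Path("prompts")

# Keys are model-name prefixes; values are the prompt variant tuned
# for that family. All names here are illustrative placeholders.
PROMPT_VARIANTS = {
    "claude": "claude.md",
    "gpt": "gpt.md",
}

def system_prompt_for(model: str, default: str = "generic.md") -> Path:
    """Pick the prompt file whose key is a prefix of the model name."""
    for family, filename in PROMPT_VARIANTS.items():
        if model.startswith(family):
            return PROMPT_DIR / filename
    return PROMPT_DIR / default
```

With a structure like this, swapping the underlying model only changes which prompt file is loaded; the rest of the workflow stays the same, which is why users with pre-prepared variants could switch in minutes.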
Confusion Over Agent SDK
Adding to the confusion, there was initial uncertainty about whether Anthropic’s Agent SDK would also be affected. Boris Cherny had previously stated that nothing would change for the Agent SDK, but the lack of clear follow-up communication left many users wondering whether usage through the SDK would still be permitted. Given the strict enforcement against other third-party tools, many decided it was too risky to keep using the Agent SDK with Claude, fearing further restrictions or unexpected charges.
The Bigger Picture: AI Growth and Strategy
Anthropic’s current situation reflects the explosive growth in the AI sector. The company boasts a revenue run rate of $30 billion, a significant increase from $9 billion at the end of 2025. This rapid growth is a major reason for their capacity issues and the need to manage usage carefully. They have also secured a significant deal with Google to utilize Google’s Tensor Processing Units (TPUs), indicating a strategic move to expand their computing resources.
The intense focus on growth and reaching Artificial General Intelligence (AGI) means that smaller issues, like the use of third-party tools, may be less of a priority for Anthropic compared to scaling their core services. This perspective helps explain why the company is taking such decisive, albeit unpopular, steps to control usage and ensure their infrastructure can keep up with demand.
Why This Matters
This development is significant for several reasons. First, it highlights the challenges companies face in scaling AI services to meet rapidly growing demand. Second, it underscores the tension between AI providers, who want to direct usage through their own platforms and control the user experience, and users, who benefit from the flexibility and extra capabilities of third-party tools. For developers and businesses relying on AI assistants like Claude, such policy changes can disrupt workflows and increase operational costs. Anthropic’s short notice and unclear communication have further exacerbated user frustration, making it difficult to plan and adapt.
The situation also points to a potential shift in the AI landscape. With Anthropic restricting third-party integrations, competitors like OpenAI, which has been more open to such integrations and even actively encouraged them (as seen in its hiring of Peter Steinberger, a prominent proponent of OpenClaw), may gain favor among users seeking more flexibility. The ability of tools like OpenClaw to improve the usability of models like GPT-4.5 Turbo further strengthens the case for a multi-model strategy, in which users are not tied to a single AI provider.
The Future of AI Tools
As the AI field continues to evolve at a breakneck pace, users are increasingly adopting a multi-model strategy. This means not relying on a single AI provider but using a mix of different models and tools to get the best results for specific tasks. The approach includes incorporating local, open-source models for tasks like data extraction or summarization, alongside powerful frontier models for complex operations. Companies like DigitalOcean are offering infrastructure designed for AI development and deployment, aiming to simplify the process for developers.
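The task-routing side of a multi-model strategy can be sketched in a few lines. This is a hedged illustration, not a real API: the task types and model identifiers below are invented placeholders, standing in for whatever local and frontier models a team actually runs. Cheap local models handle extraction and summarization, while anything open-ended defaults to a frontier model.

```python
# Map task types to models. Local models take the high-volume, simple
# tasks; the frontier model is the default for everything else.
# All model names are hypothetical placeholders.
ROUTES = {
    "extract": "local/small-model",     # structured data extraction
    "summarize": "local/small-model",   # summarization
    "code": "frontier/large-model",     # complex, open-ended work
    "plan": "frontier/large-model",
}

def pick_model(task_type: str) -> str:
    """Return the configured model for a task, defaulting to the frontier model."""
    return ROUTES.get(task_type, "frontier/large-model")
```

A routing table like this also localizes the damage when a provider changes its terms: only the table entries need to change, not the surrounding workflow.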
While Anthropic’s recent policy changes have caused disruption, they also serve as a reminder of the dynamic nature of the AI industry. Users are encouraged to stay informed about policy updates and to diversify their AI toolset to mitigate risks associated with reliance on a single platform. The focus remains on finding the most efficient and effective ways to build and scale AI applications, whether through frontier models or open-source alternatives.
Source: Anthropic banned OpenClaw… (YouTube)