Anthropic Doubled Claude Code Limits Today, and SpaceX Is the Reason

Anthropic just made the kind of announcement that lands as one short blog post but reshapes how a lot of people use Claude for the next year. Three changes go live today (May 6, 2026), all aimed at the same problem: paying customers were running out of headroom. Behind those changes is a much larger story about where the new compute came from.

Here is what changed for Claude users today, all effective immediately:

  • Claude Code five-hour rate limits doubled for Pro, Max, Team, and seat-based Enterprise plans.
  • The peak-hours limit reduction is gone for Pro and Max accounts on Claude Code, ending the 8 AM to 2 PM ET window that had been making midday work feel rationed.
  • API rate limits for Claude Opus models went up substantially, with bigger jumps on the higher-tier accounts where users were hitting input-tokens-per-minute walls.
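For intuition on what a "five-hour rolling limit" means in practice: usage does not reset on a fixed schedule; each request counts against everything consumed in the trailing five hours. A minimal sketch of that bookkeeping (the cap value and usage unit here are made up for illustration, not Anthropic's actual accounting):

```python
from collections import deque

class RollingLimit:
    """Track usage against a cap over a trailing time window."""

    def __init__(self, cap, window_seconds=5 * 3600):
        self.cap = cap            # hypothetical usage budget per window
        self.window = window_seconds
        self.events = deque()     # (timestamp, amount) pairs, oldest first

    def _expire(self, now):
        # Drop events that have aged out of the trailing window.
        while self.events and self.events[0][0] <= now - self.window:
            self.events.popleft()

    def try_spend(self, amount, now):
        """Record `amount` of usage at time `now` if it fits under the cap."""
        self._expire(now)
        used = sum(a for _, a in self.events)
        if used + amount > self.cap:
            return False          # would exceed the rolling cap
        self.events.append((now, amount))
        return True

limiter = RollingLimit(cap=100)
assert limiter.try_spend(80, now=0)              # fits
assert not limiter.try_spend(30, now=60)         # 80 + 30 > 100 in-window
assert limiter.try_spend(30, now=5 * 3600 + 1)   # old usage has aged out
```

Anthropic's real accounting is internal and surely more nuanced; the sketch only shows why a burst early in a window can still constrain you hours later, and why doubling the cap loosens exactly that.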

Each of those alone is worth a small cheer. Together, they roll back most of the squeeze that landed earlier this year, when peak-hour reductions and quietly tightened weekly caps had Pro and Max users watching their token budgets warily. We wrote about that pressure in The Claude Code Subscription Squeeze Is Coming for Everyone. That pressure has not disappeared, but Anthropic just bought itself enough room to delay it.

The deal that made this possible

The unlock is a new agreement with SpaceX that gives Anthropic full use of all the compute capacity at SpaceX's Colossus 1 data center in Memphis. Per Anthropic's own numbers, that is more than 300 megawatts of new capacity and over 220,000 NVIDIA GPUs, with the capacity coming online within the month.
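As a back-of-envelope check on those two figures, 300 megawatts spread across 220,000 GPUs implies a facility-level budget of roughly 1.4 kW per GPU, a plausible range once cooling, networking, and host overhead are folded in (the numbers below just restate the announcement's figures):

```python
# Back-of-envelope: implied facility power per GPU at Colossus 1,
# using the "more than 300 MW" and "over 220,000 GPUs" figures.
site_power_watts = 300e6
gpu_count = 220_000

watts_per_gpu = site_power_watts / gpu_count
print(f"{watts_per_gpu:.0f} W per GPU")  # ≈ 1364 W, including all overhead
```

That is facility draw, not chip TDP; the point is only that the megawatt and GPU counts are consistent with each other.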

If you remember Colossus 1 as an xAI project, you are right. SpaceX took ownership of the site earlier this year through its merger with xAI, which is why a SpaceX-Anthropic deal can include a building that was poured, wired, and named by Elon Musk's other AI company. That detail is going to drive a lot of jokes about who is renting whose GPUs from whom, but the practical point for Anthropic is that the cluster is already built.

A site like Colossus 1 is not the kind of thing that gets stood up in weeks. xAI built the original Memphis cluster in 122 days starting in 2024, then doubled it to roughly 200,000 GPUs in another 92 days. By signing with SpaceX, Anthropic effectively skips to the front of the line on a finished cluster instead of waiting on a new build. That is why this can move usage limits today instead of next year.

It is not just SpaceX

The SpaceX deal stacks on top of every other compute commitment Anthropic has signed in the past two months:

  • Up to 5 gigawatts with Amazon, with nearly 1 gigawatt of new capacity due online by the end of 2026 and more than $100 billion of long-term spend committed.
  • 5 gigawatts with Google and Broadcom for next-generation TPUs, beginning to come online in 2027.
  • A strategic partnership with Microsoft and NVIDIA covering $30 billion of Azure capacity.
  • A $50 billion infrastructure investment with Fluidstack inside the United States.

Add Colossus 1 on top of that and you get a picture of a company that decided in early 2026 it would not be GPU-poor again. The new headline figure, "300 megawatts within the month," is what makes the rate-limit changes possible right now. The other deals are what makes the trajectory believable for next year and beyond.

What this changes for Claude Code users today

If you pay for Claude Code on a Pro, Max, Team, or seat-based Enterprise plan, your five-hour rolling limit just doubled. If you pay on Pro or Max, the peak-hour reduction is gone, so the early-afternoon dead zone where your weekly cap was burning faster than expected should stop showing up.

If you hit the API directly on an Opus model, your rate limits are now substantially higher, with the biggest deltas at higher tiers. For shops running parallel agents or doing long Opus runs over big repositories, this is the change that actually moves the needle on day-to-day work.
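Higher limits do not eliminate 429s under bursty parallel-agent traffic; they just push the threshold out. A generic retry-with-exponential-backoff wrapper still earns its keep (the exception type and the `flaky` call below are stand-ins for illustration, not any specific SDK's API):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for whatever rate-limit exception your client raises."""

def with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Run call(), retrying on RateLimitError with jittered exponential backoff."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Full jitter: sleep anywhere in [0, base_delay * 2**attempt].
            sleep(random.uniform(0, base_delay * 2 ** attempt))

# Demo with a stub that gets rate-limited twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError
    return "ok"

assert with_backoff(flaky, sleep=lambda s: None) == "ok"
assert attempts["n"] == 3
```

Real clients typically surface a retry-after hint alongside the 429; honoring that beats blind backoff whenever it is available.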

The practical effect is hard to overstate for the workloads Claude Code is built for: long agent runs, repository-wide refactors, debugging sessions that lean on Opus 4.7 thinking. Those are exactly the workloads that were eating people's weekly caps in March and April. The cost story for those workloads has not changed underneath, but the rationing has loosened, and that matters more day-to-day than any pricing-page headline.

The fine print is the same as before. None of these changes touch the underlying logic that frontier models cost real money per token, and Anthropic still routes heavy enterprise customers toward metered API billing rather than rate-limited subscriptions. If your usage is well above the new caps, the squeeze we wrote about earlier this year is still your future. For everyone else, today is a real upgrade.

The orbital footnote

Buried at the bottom of Anthropic's announcement is a sentence that is going to get screenshotted for a while: "we have also expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity." That is not a 2026 plan or a 2027 plan. Power generation and cooling are already the hardest constraints on terrestrial AI infrastructure, the sun never sets in space, and SpaceX is one of the very few companies on the planet with the launch cadence to make orbital data centers anything other than a thought experiment. So it is not a crazy idea on the math. It is just very far from being real. Treat it as a flag for where the next decade of compute conversations will end up, not a product roadmap.

Anthropic also tucked in a note about international expansion, with the recently announced Amazon collaboration including additional inference capacity in Asia and Europe. That is the kind of detail that mostly matters for regulated enterprise buyers (financial services, healthcare, government) who need in-region compute to satisfy data residency rules. The fact that it shipped today instead of being held for a separate enterprise post tells you something about how aggressively Anthropic wants to be seen as a serious infrastructure player, not just a model lab with an API.

The takeaway

For a Claude Code subscriber, the takeaway today is simple: open the editor, run the long task you have been putting off because of weekly caps, and see if you still hit a wall. For everyone else watching the AI infrastructure race, Anthropic just bought 300 megawatts of headstart on top of an already aggressive 2026 buildout, and signaled that the company has no intention of running out of compute behind OpenAI or Google.

The bills for all of this are going to be enormous, and the per-token economics of frontier inference are still rough. But for one Wednesday in May, the meter on Claude Code finally moved in the user's favor.

TAGS: ai anthropic claude claude-code gpu infrastructure spacex