Anthropic Raises $30 Billion at $380 Billion Valuation, Eyes IPO This Year
Anthropic has raised $30 billion in a Series G funding round that values the Claude maker at $380 billion as the company prepares for an initial public offering that could come as early as this year. Investors in the new round include Singapore sovereign fund GIC, Coatue, D.E. Shaw Ventures, ICONIQ, MGX, Sequoia Capital, Founders Fund, Greenoaks and Temasek. Anthropic raised its funding target by $10 billion during the process after the round was several times oversubscribed.
The San Francisco-based company, founded in 2021 by former OpenAI researchers, now has a $14 billion revenue run rate, about 80% of which comes from enterprise customers. It claims more than 500 customers spending over $1 million a year on its workplace tools. The round includes a portion of the $15 billion commitment from Microsoft and Nvidia announced late last year.
Re: And so it begins (Score: 5, Interesting)
by CAIMLAS ( 41445 ) on Thursday February 12, 2026 @07:08PM (#65985878)
There's a billion different ways to do it and I don't think there's a right way exactly, but defining scope for the models helps a lot.
A lot of planning before any implementation, with precise criteria for how and where things get implemented, and each plan gets further broken down in that regard. Have a good architectural picture of how things are supposed to look. MVC isolation helps a lot. Know the data model and how your controller does its thing.
I can't emphasize enough how much planning seems to impact outcomes. You'll overlook things, but if you're reasonably thorough it usually goes through without a hitch. Plan, ask for a plan diagnosis, refine... I spend 40-60 minutes planning for anything substantial, and often have time between agent runs while I'm formulating the next plan.
Also a good primary prompt, and CLAUDE.md files with well-defined project definitions, structure, and so on. Be explicit. Have the agent track its work, compare and follow up, and review the plan for completion afterwards. Heck, you can even have it build/run/test (which I might do before getting lunch) and iterate. When you find it has messed something up, update the project details with do/do-not rules. Whether you get a good one-shot depends on these things.
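A minimal CLAUDE.md along these lines might look something like the following. This is just a sketch of the kind of explicit structure the parent describes; the project name, directory layout, and rules are all hypothetical:

```markdown
# Project: acme-billing (hypothetical example)

## Architecture
- Strict MVC: controllers in src/controllers/, models in src/models/, views in src/views/
- Controllers never touch the database directly; all data access goes through the model layer

## Conventions
- TypeScript strict mode; no `any`
- Every new endpoint gets a unit test in tests/ before it counts as done

## Workflow
- Write a plan to PLAN.md and wait for review before implementing
- After implementing, run the build and test commands, then diff the result against PLAN.md
- Do NOT modify files under src/legacy/ without asking first
```

The point is less the specific rules than that the agent has an explicit, reviewable statement of scope and structure to follow and to be checked against.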
Your tooling matters too. opencode and cc seem to work best for me; I'd avoid tools like cursor at this point. memvid is a game changer: it learns your project's structure and hallucinates a lot less.
Re:And so it begins (Score: 5, Informative)
by fleeped ( 1945926 ) on Thursday February 12, 2026 @04:32PM (#65985656)
A "personal" note from "I'm Matt Shumer — founder & CEO at OthersideAI (HyperWrite). I build and invest in AI products, ship fast, and share what I learn with a large audience."
No conflict of interest, none whatsoever. No reason to skew the truth.
Re:And so it begins (Score: 5, Informative)
by DMDx86 ( 17373 ) on Thursday February 12, 2026 @04:35PM (#65985666)
The author is the CEO of an AI company and an investor in others. I don't mean to pooh-pooh the actual legitimate advances being made in AI, but this isn't an unbiased piece. He's overstating the capabilities of LLMs in my opinion, but someone with vested financial interests in the industry is of course going to say those things. The current AI industry is based primarily on the commercialization of LLMs.
I use the latest coding models in my own work, and while they do some really awesome things, they are far from "it's replacing software engineers". I find they struggle on large code bases and sometimes hallucinate. For me that's okay, because as a SWE I can recognize good code from bullshit and tweak the output, but it gets frustrating when it wrongly answers a question about your code (how does X interact with Y, where is the code for that) and sends you on a wild goose chase around your code base.
LLMs are hitting a brick wall in scaling, to the point where the very best models require an incredible amount of resources to run (= expensive). The resource consumption of LLMs is increasing at a rate greater than advances in compute power and memory capacity.
I'm with Yann LeCun on LLMs: https://www.youtube.com/watch?... [youtube.com]
Re:And so it begins (Score: 5, Insightful)
by MunchMunch ( 670504 ) on Thursday February 12, 2026 @04:43PM (#65985688)
Sorry, the linked summary is really just the same hype cycle I've seen.
Programmer friends at Google, Meta and Amazon have certainly convinced me that code is being assisted successfully by AI. However, the author's level of extrapolation to other fields and situations destroys any credibility he had.
For example, the author - Matt Shumer, who is an AI company founder, booster and frequent submitter to other AI-hype websites, but apparently is not legally trained - spends many paragraphs and anecdotes talking about how a partner at a law firm now has to use AI because he "knows what's at stake" and that AI can do legal work better than their associates.
Nope, the reason that partner is doing it is that he's scared of being left behind, which is the entire motive behind hype pieces like this. I'd wager that hypothetical partner is not the one who beats out all his colleagues and becomes "the most valuable person in the firm", but rather the one who gets sanctioned for submitting briefs with hallucinated cases (which is still happening in the wild regularly). As a lawyer, I can say that even current flagship AI models cannot get around our bar-required ethical duties: we must effectively re-do the AI's work in order to attest that it is correct, which takes more time than doing the work ourselves the first time.
Shumer similarly gives an "oh god, it's getting so good so fast!" timeline that includes AI passing the bar. That 2023 story was debunked in 2024 and somehow this guy is unaware of that. Why in the world would someone so unable to identify reliable information be trusted on AI reliability?
There may be some functional AI work - like coding within specific environments and circumstances - but there is a huge AI bubble built on this silly "it will do everything better" hype.
Recent good competition to Claude for coding (Score: 5, Informative)
by caseih ( 160668 ) on Thursday February 12, 2026 @03:44PM (#65985576)
Last week I praised Claude Code, especially the CLI, here on Slashdot. I still think it's currently the best, but it's also the most expensive. And the competition is getting a lot better (and cheaper). I've been using the opencode CLI lately with several models: Kimi K2.5, Kimi K2.5 Thinking, OpenCode's Big Pickle, and Qwen3 Coder Next. Using them through OpenCode's Zen service and also OpenRouter (for Qwen3 Coder Next), they are all about half the cost of Claude's models, maybe less.
Of those I feel that Kimi K2.5 Thinking is probably the closest to Claude Opus 4.6. The rest are quite good at most things, and Qwen3 Coder Next is very fast. Qwen3 Coder Next also has the potential to run on "reasonable" local hardware.