Hacker News
Ultrathink is a Claude Code magic word (simonwillison.net)
79 points by ghuntley 4 hours ago | 28 comments

@dickfickling beat me to it, but ultrathink is already explicitly called out in the public Anthropic documentation:

"Ask Claude to make a plan for how to approach a specific problem. We recommend using the word "think" to trigger extended thinking mode, which gives Claude additional computation time to evaluate alternatives more thoroughly. These specific phrases are mapped directly to increasing levels of thinking budget in the system: "think" < "think hard" < "think harder" < "ultrathink." Each level allocates progressively more thinking budget for Claude to use."

https://www.anthropic.com/engineering/claude-code-best-pract...

I don't know what the max allowable "budget_tokens" is for Claude 3.7 Thinking mode, but the SDK shows an example of 32k which matches up with the article's findings.
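For API users (as opposed to Claude Code users) there is no magic word at all: the thinking budget is an explicit request parameter. A minimal sketch of the request body, assuming the documented Messages API shape (the model name and prompt here are placeholders, and 32000 is the SDK example value mentioned above):

```python
import json

# Sketch of a Messages API request with extended thinking enabled.
# "budget_tokens" is the explicit thinking budget -- no keyword
# detection involved on this path.
payload = {
    "model": "claude-3-7-sonnet-20250219",  # placeholder model name
    "max_tokens": 64000,
    "thinking": {"type": "enabled", "budget_tokens": 32000},
    "messages": [
        {"role": "user", "content": "Make a plan for refactoring this module."}
    ],
}
print(json.dumps(payload["thinking"]))
```

The point being that "ultrathink" is client-side sugar over exactly this parameter.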


Looks like that documentation is incorrect. It suggests there are four levels - "think" < "think hard" < "think harder" < "ultrathink." - but if you look in the code there are actually only three.

I hope we will exit this stage of magic spells and incantations sooner rather than later.

I hope we delve deeper into pentacles and rites in candlelit basements to appease black boxes of neural mimicries of Canaanite archetypes

Sincerely, I respect your response to how arbitrary it seems in this form.

But... I'd like you to take a moment and think really hard about whether this is truly novel behavior for LLMs, or rather something that has always been part of the interplay between inter-agent communication and intra-agent thought :)


It sounds like it is a “specific phrase mapped directly” based on another comment here? I guess that means hardcoded? Not completely sure, though.

It's hard-coded - this isn't a weird model thing, Claude Code detects the exact string "ultrathink" and sets the thinking token budget to 31999.

I included that de-obfuscated code in my post: https://simonwillison.net/2025/Apr/19/claude-code-best-pract...
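The de-obfuscated logic amounts to a three-tier substring match. An illustrative reconstruction (not the actual Claude Code source; the phrase lists are abbreviated, and the budget values are the ones quoted in the post):

```python
# Rough sketch of the client-side keyword-to-budget mapping described
# in the linked post. Highest tier is checked first because
# "think harder" contains "think hard" as a substring.
def thinking_budget(prompt: str) -> int:
    p = prompt.lower()
    if "ultrathink" in p or "think harder" in p:
        return 31999
    if "megathink" in p or "think hard" in p:
        return 10000
    if "think" in p:
        return 4000
    return 0  # extended thinking not triggered

print(thinking_budget("ultrathink about this bug"))
```

This also explains the "only three levels" observation upthread: "think hard" and "think harder" land in different tiers, but there are only three distinct budgets.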


I thought that earlier on; I don't think we will, though.

It would be cool if these "secret keywords" were more directly exposed in the UI somehow, perhaps as a toggleable developer/experimental mode? I would have a lot of fun tinkering with them.

It's for Claude Code FWIW, just leaving a sigil here for fellow API implementers who are confused: your general point stands (though I wonder about UI affordances other than text given it's a CLI tool)

I'm scratching my head a bit at this one.

I already assume that the models are shifting underneath me. It's very frustrating that most non-developers think you can just ask an LLM a question and it will respond accurately every time. They are designed to produce creative output, and even if you dial down the temperature they can still hallucinate.

Why not be explicit about the thinking budget instead of aliasing it to a number with a term like ultrathink?

It's a cute word, and fun to know it's managed on the client side, but isn't it yet more imprecision in tools that are already suffering from that?


Nice to know, although I was taught that the magic word is "please".

Tengu think? As in Japanese Tengu?

I think I'll wait for Hyperthink.

This would be helpful information if I hadn’t already switched to Gemini 2.5 because it’s 96% cheaper

I've had a frustrating time over the last couple of days with Gemini 2.5 Pro.

First I asked it to help me reverse the direction of text on a circle in Photoshop. It gave me very specific instructions which didn't work, and continued to argue that I was doing something wrong. I did my own research and found it's not actually possible to do this in Photoshop; the instructions it was giving me were for Illustrator. 30 minutes of my time wasted.

This morning I asked it how to remove the axis lines from the orthographic view in Blender 4.3. I explained carefully that I knew how to remove them in perspective view but that this wasn't working for orthographic views. Over and over it told me how to remove them from perspective views, pointing me at nonexistent UI elements, even drawing ASCII diagrams of how to find the nonexistent icons. When I said they didn't exist, it would circle back to telling me how to turn them off in the perspective view.

It turns out, again, it's not possible to remove grid lines from orthographic views in Blender (at least without messing around with the theme settings, or turning off the grid entirely).

In both cases it was incredibly persistent in stating the wrong way to do things, even when I said it didn't work. I felt like it was gaslighting me, more so than with any previous model I've used.

I haven't yet used it for writing code but these two experiences don't make me feel hopeful. The worst part about dealing with AI is when they are confidently incorrect.


After stunts like Amp and Web Integrity (among others) I don’t care what they charge, I want nothing to do with Google.

It does feel like a Faustian bargain using it.

Crazy that it's a keyword implemented in the client code that bumps up the thinking budget, and that a light touch of reverse engineering was required to find it.


Ah yes, the documentation. If everyone read documentation, we wouldn't need LLMs to read it for us!

That very very quickly moved from blog to twitter to blog to HN. Gotta love the velocity of information these days

Link first, ask questions later

megathink sounds better

But paradoxically only allocates 1/3 the tokens according to the code.

Perhaps they should switch to the metric thinking system.

Gigathinking, and Terathinking should be on the menu as well.


And doublemegathink if you want it to do two megathinks in parallel

Not to be confused with doublethink, a mode that is always active for LLMs.

I asked Claude if this was true, and Claude confirmed.


