This is perfectly legitimate, and it's something I've been pointing out day after day. Company X charges you $10 per million tokens while company Y charges you $7, yet company X can be cheaper because of the tokenizer it uses. Token consumption depends on the tokenizer, and companies build tokenizers using standard algorithms like BPE. But they're really charging for hardware access, and the system can be biased to the point that an English prompt consumes 17% fewer tokens than the same prompt written in Spanish, and writing in Chinese characters can reduce your token consumption significantly compared to English. I've written about this several times on HN, but for whatever reason, every time I mention it, my post gets flagged.
I have often wondered if Chinese is a much 'better' language for LLMs - every character is a token, boom, you're done. No weird subword nonsense, no strange semantics being applied to arbitrary chunks of words... I feel like there must be benefits to having the language tokenized in what must be very close to a 1:1 character-to-token mapping.
> Opus 4.7 tokenizer used 1.46x the number of tokens as Opus 4.6
Interesting. Unfortunately Anthropic doesn't actually share their tokenizer, but my educated guess is that they might have made the tokenizer more semantically aware to make the model perform better. What do I mean by that? Let me give you an example. (This isn't necessarily what they did exactly; just illustrating the idea.)
Let's take the gpt-oss-120b tokenizer as an example. Here's how a few pieces of text tokenize (I use "|" to separate tokens; the exact splits here are illustrative):

  "Kill"    -> Kill
  "kill"    -> kill
  " kill"   -> <space>kill
  " killed" -> <space>killed

You have 3 different tokens which encode the same word (Kill, kill, <space>kill) depending on its capitalization and whether there's a space before it, you have separate tokens for the past tense, and so on.

This is not necessarily an ideal way of encoding text, because the model must learn by brute force that these tokens are, indeed, related. Now imagine you encoded them like this instead:

  "Kill"    -> <capitalize>|kill
  " kill"   -> <space>|kill
  " killed" -> <space>|kill|ed

Notice that this makes much more sense - the model now only has to learn what "<capitalize>" is, what "kill" is, what "<space>" is, and what "ed" (the past tense suffix) is, and it can compose those together. The downside is that it increases the token usage.

So I wouldn't be surprised if this is what they did. Or, my guess number 2: they removed the tokenizer altogether, replaced it with a small trained model (something like the Byte Latent Transformer), and simply "emulate" the token counts.
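For anyone who wants to poke at this themselves, here's a minimal sketch using the Hugging Face transformers library (assuming the openai/gpt-oss-120b repo on the Hub ships its tokenizer files, which it appears to):

    # Minimal sketch: inspect how a BPE tokenizer splits related word forms.
    # Assumes `pip install transformers` and network access to the Hub.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("openai/gpt-oss-120b")

    for text in ["Kill", "kill", " kill", "killed", " killed"]:
        ids = tok.encode(text, add_special_tokens=False)
        pieces = [tok.decode([i]) for i in ids]
        print(repr(text), "->", " | ".join(repr(p) for p in pieces))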
There is currently very little evidence that morphological tokenizers help model performance [1]. For languages like German (where words get glued together) there is a bit more evidence (e.g. a paper I worked on [2]), but overall I'm starting to suspect the bitter lesson is also true for tokenization.
[1] https://arxiv.org/pdf/2507.06378
[2] https://pieter.ai/bpe-knockout/
I never understood why people want this in the first place. Sure, making this step more human-explainable would be nice and might even fix some very particular problems for particular languages, but it directly goes against the primary objective of a tokenizer: optimizing sequence length vs. vocabulary size. That is a pretty clear and hard optimization target, and the best you can do is make sure that your tokenizer training set closely mimics your training data and, ultimately, your inference data. Forcing English or German grammar in there will only degrade every other language in the tokenizer, and we already know that limiting additional languages hurts overall model performance. And the belief that you can hand-encode a dataset of trillions of tokens into a more efficient vocabulary than a machine can is kind of weird, tbh.

People have also accepted since the early convnet days that the best encoding representation for images in machine learning is not a human-understandable one. Same goes for audio. So why should text be any different? If you really think so, you might as well have a go at feature engineering images. And it's not like people haven't tried that. But they all eventually learned their lesson.
We usually build the tokenizer by optimizing for one goal (space-efficient encoding of text), then use it in a model that is trained for an entirely different goal (producing good text, "reasoning", "coding", etc.). It is not immediately clear that the optimization goal for the tokenizer is actually the one that best serves the training of the LLM.
That's what all these attempts boil down to. They don't presume to find a more space-efficient encoding by hand; they assume that the optimization goal for the tokenizer was wrong and that they can do better by adding some extra rules. And this isn't entirely without precedent: most tokenizers have a couple of "forced" tokens that were not organically discovered, and changing how digits are grouped in the tokenizer is another place where wins have been shown.
This is where projects like nanochat are really valuable for quickly and (relatively) cheaply trying out various tweaks.
> It is not immediately clear that the optimization goal for the tokenizer is actually the one that best serves the training of the LLM.
Except that is exactly what research has shown. Besides, the tokenizer's training goal is literally just to encode text efficiently in fewer tokens by increasing the vocabulary, which directly benefits the attention mechanism if you look at the dimensions of the matrices involved. The biggest issues so far have stemmed from mismatches between the tokenizer and LLM training sets [1], and from the fact that people primarily work with character-based rather than word-part-based text when writing (even though that gets muddy when you look at what is actually happening in the brain).
[1] https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldm...
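To put rough numbers on the attention-dimensions point (a back-of-the-envelope sketch; the 1.46x factor is the one quoted upthread, the model width is made up):

    # Back-of-the-envelope: why fewer tokens help attention.
    # Self-attention cost scales roughly with n^2 * d for sequence
    # length n and width d (ignoring constants and the MLP blocks).
    def attn_flops(n: int, d: int) -> int:
        return n * n * d

    d = 4096                     # hypothetical model width
    n_coarse = 1000              # tokens with a compact vocabulary
    n_fine = int(1000 * 1.46)    # same text at 1.46x the tokens

    print(attn_flops(n_fine, d) / attn_flops(n_coarse, d))  # ~2.13x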
I feel it's a case of "This random word generator can't possibly be smarter than I?!"
LLMs are explicitly designed to handle, and also possibly 'learn' from different tokens encoding similar information. I found this video from 3blue1brown very informative: https://www.youtube.com/watch?v=wjZofJX0v4M
Also, think about how a LLM would handle different languages.
This is how language models have worked since their inception, and it has been steadily improved since about 2018.
See embedding models.
> they removed the tokenizer altogether
This is an active research topic; no real solution is in sight yet.
I was looking into a morpheme tokenization approach, but went even more radical and built a semantic primitive tokenizer [1], i.e. kill, killed, killer would all share the same semantic connection and tokens, e.g. [KILL], [KILL, BEFORE], [KILL, SOMEONE].
It’s based on semantic primitives (Wierzbicka NSM) and emoji (the fun idea that got me interested in this in the first place).
So far I've tested 6 iterations, and it trains and responds well with a 10k vocab, but the grammar came out rougher. I'm working on the 8th iteration, mainly to improve the grammar and language. It turns out the smaller vocab couldn't be maintained, and all the improvements put us back in the ballpark of a 32k vocab size. Further testing is still outstanding this week.
[1] https://github.com/frane/primoji
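A toy sketch of the idea (the lexicon and primitive names here are hypothetical, for illustration only; this is not the actual primoji code):

    # Toy semantic-primitive tokenizer: map surface forms to sequences
    # of NSM-style primitives. Hypothetical lexicon, not primoji itself.
    LEXICON = {
        "kill":   ["KILL"],
        "killed": ["KILL", "BEFORE"],   # past tense -> "happened before now"
        "killer": ["KILL", "SOMEONE"],  # agent noun -> "someone who does it"
    }

    def encode(words):
        out = []
        for w in words:
            out.extend(LEXICON.get(w.lower(), ["UNK"]))
        return out

    print(encode(["Killer", "killed"]))  # ['KILL', 'SOMEONE', 'KILL', 'BEFORE']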
This is such a superficial, English-centric take, but it might as well be true. It seems to me that in non-English languages the models, ChatGPT especially, have suffered in the declension department and output words in cases that do not fit the context.
I just ran an experiment: I took a word and asked several models (ChatGPT, Gemini and Claude) to explode it into parts. The caveat is that it could be either root + suffix + ending or root + ending. None of them realized this duality; each took one possible interpretation.
Any such approach to tokenizing assumes context free (-ish) grammar, which is just not the case with natural languages. "I saw her duck" (and other famous examples) is not uniquely tokenizable without a broader context, so either the tokenizer has to be a model itself or the model has to collapse the meaning space.
Current models understand different tokenization variants perfectly well, e.g. leading space vs. no leading space vs. one character per token. It doesn't even affect evals and benchmarks. They're also good at languages with very flexible word formation (e.g. Slavic) and can easily invent fairly natural non-existent words without being restricted by tokenization. This ability took a bit of a hit with recent RL and code-generation optimizations, but that is not related to tokenization.
> None of them realized this duality; each took one possible interpretation.
I suspect this happens due to mode collapse and has nothing to do with the tokenization. Try this with a base model.
This is almost certainly wrong.
Case-sensitive language models have been a thing since way before neural language models. I was using them with boosted-tree models at least ten years ago, and even my Java NLP tool did this twenty years ago (damn!). There is no novelty there, of course - I based that on PG's "A Plan for Spam".
See for example CountVectorizer: https://scikit-learn.org/stable/modules/generated/sklearn.fe...
The bitter lesson says you're much better off just adding more data and learning the tokenizer; the result will be better.
It's not impossible that the new Opus tokenizer is based on something learnt during Mythos pre-training (maybe it is *the* learned Mythos tokenizer?), and it seems likely that the Mythos pre-training run is the largest amount of data ever trained on.
Putting an inductive bias in your tokenizer seems just a terrible idea.
Anthropic was already special-casing case-folding in their tokenizers before this recent change: https://transformer-circuits.pub/2025/attribution-graphs/met... "The tokenizer the model was trained with uses a special “Caps Lock” token" (⇪). Their visualizations for Claude 3.5 Haiku also show the Title Case token (↑).
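A sketch of how such a case-folding pre-pass might work (the mechanics here are my guess; the linked page only confirms that the ⇪ and ↑ tokens exist):

    # Hypothetical case-folding pre-pass: lowercase everything and mark
    # the original casing with dedicated tokens, as hinted at by the
    # "Caps Lock" and "Title Case" tokens in Anthropic's visualizations.
    def fold_case(word: str) -> list[str]:
        if word.isupper() and len(word) > 1:
            return ["<caps_lock>", word.lower()]
        if word.istitle():
            return ["<title_case>", word.lower()]
        return [word]

    tokens = []
    for w in "STOP Doing that".split():
        tokens.extend(fold_case(w))
    print(tokens)  # ['<caps_lock>', 'stop', '<title_case>', 'doing', 'that']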
This is similar to what the TokenMonster tokenizer does: https://github.com/alasdairforsythe/tokenmonster
> This is almost certainly wrong.
So how would you explain the increase in token usage, considering the fact that conventionally tokenizers are trained to minimize the token usage within a given vocabulary budget?
> Putting an inductive bias in your tokenizer seems just a terrible idea.
You're already effectively doing this by the sheer fact of using a BPE tokenizer, and especially with modern BPE-based LLM tokenizers [1]. I agree trying to bake this manually in a tokenizer is most likely not a good idea, but I could see a world where you could build a better tokenizer training algorithm which would be able to better take the natural morphology of the underlying text into account.
[1] Example from Qwen3.6 tokenizer:
> So how would you explain the increase in token usage, considering the fact that conventionally tokenizers are trained to minimize the token usage within a given vocabulary budget?
Just modeling whitespace as its own token would seem to explain the increase.
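Toy arithmetic for that (assumptions in the comments; it gives an upper bound, which the observed 1.46x sits below):

    # Upper bound on inflation from splitting off leading spaces.
    # Assume ~1.3 BPE tokens per English word (a common rule of thumb)
    # and that every word previously folded its leading space into its
    # first token.
    tokens_per_word = 1.3
    words = 1000
    before = words * tokens_per_word
    after = before + words        # one extra <space> token per word
    print(after / before)         # ~1.77x if *every* space splits out;
                                  # less if only some do, e.g. the 1.46x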
> Qwen3.6 tokenizer: "pretokenizer"
That's the pre-tokenizer, not the tokenizer. It's mostly a performance optimization that greatly reduces the memory requirements of BPE tokenization.
> I could see a world where you could build a better tokenizer training algorithm which would be able to better take the natural morphology of the underlying text into account.
The reason everyone went to BPE was because it was so dramatically better than morphology based tokenizers. See the BPE paper: https://arxiv.org/abs/1508.07909
BPE already learns morphology because it sees the raw bytes.
> That's the pre-tokenizer, not the tokenizer.
Yes, it's an extra tokenizer which runs before the learned tokenizer and injects an inductive bias into it.
> It's mostly a performance optimization that greatly reduces the memory requirements of BPE tokenization.
While it does indeed speed up tokenizer training, no, it isn't mostly just a performance optimization. It injects a clear-cut inductive bias into the tokenizer (split by words, split by punctuation, don't merge words and numbers, etc. -- is that not an inductive bias?). And for some languages (e.g. Asian languages that don't use spaces) the "it's just for performance" argument makes less sense, because there are no spaces to split on, so the chunks of text are much longer (although it does still split on punctuation, etc.).
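For the concrete flavor of that bias, here is GPT-2's published pre-tokenization pattern, used purely as an illustration (Qwen's is different, but in the same spirit):

    # GPT-2's pre-tokenization regex (from the original encoder; needs
    # the `regex` module, since `re` lacks \p{...} classes). BPE merges
    # are only learned *within* these chunks - that is the inductive bias.
    import regex

    PAT = regex.compile(
        r"'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+"
    )

    print(PAT.findall("Don't merge words2numbers, please."))
    # ['Don', "'t", ' merge', ' words', '2', 'numbers', ',', ' please', '.']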
Can we not agree that the absolutist position of "Putting an inductive bias in your tokenizer seems just a terrible idea." (as in - any inductive bias) is not actually true, especially since people are actually doing it?
Note, I'm not actually arguing that hand-crafted morphological tokenizers are better. (Which is the straw man many people seem to be replying to.) I'm just arguing that it should be feasible to train your tokenizer in a more morphologically aware way, because BPE doesn't do that.
> The reason everyone went to BPE was because it was so dramatically better than morphology based tokenizers. [..] BPE already learns morphology because it sees the raw bytes.
The reason everyone went to BPE is because of the bitter lesson (and because you don't have to hardcode your whole vocabulary, i.e. no UNK tokens), and not because it's particularly good at learning the morphology of the actual text. It's trivial to show countless examples where it fails to do so.
Their old tokenizer performed some space collapsing that allowed them to use the same token id for a word with and without the leading space (in cases where the context usually implies a space but one is not present, a "no space" symbol is used).
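A sketch of how I read that scheme (the mechanics here are my guess, not Anthropic's actual code):

    # Hypothetical space-collapsing re-encoder: given (word, has_leading_space)
    # pairs, use one id per word and emit <no_space> only when the usual
    # leading space is missing.
    def collapse(pieces: list[tuple[str, bool]]) -> list[str]:
        out = []
        for i, (word, has_space) in enumerate(pieces):
            if i > 0 and not has_space:
                out.append("<no_space>")  # context implies a space; mark its absence
            out.append(word)
        return out

    print(collapse([("The", False), ("data", True), ("base", False)]))
    # ['The', 'data', '<no_space>', 'base']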
Token counting matters a lot when agents are running long action chains. The hidden cost is retry loops — when an agent action times out and the agent retries, it re-sends the full context including all previous tool call results. A single failed payment call can cost 3x the tokens of a successful one. Observability at the token level is one thing, but you also need observability at the action level — did this side effect actually execute or did it fail silently?
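To make the 3x concrete, a toy cost model (made-up numbers; assumes each attempt re-sends the full accumulated context):

    # Toy model: each retry of a failed tool call re-sends the whole
    # accumulated context, so total input tokens grow with every attempt.
    def total_input_tokens(context: int, attempts: int) -> int:
        return context * attempts

    ctx = 40_000                       # tokens already in the context
    print(total_input_tokens(ctx, 1))  # 40000: clean success
    print(total_input_tokens(ctx, 3))  # 120000: two timeouts -> 3x the cost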
This is the rugpull that is starting to push me to reconsider my use of Claude subscriptions. The "free ride" part of this being funded as a loss leader is coming to a close. While we break away from Claude, my hope is that I can keep sending simple problems to very smart local LLMs (Qwen 3.6, I see you) and reserve Claude for the truly extreme problems appropriate for its extreme price.
> This is the rugpull that is starting to push me to reconsider my use of Claude subscriptions.
I'm still with them because the model is good, but yes, I'm noticing my limits burning up somewhat faster on the 100 USD tier. I bet the 20 USD tier is even more useless.
I wouldn't call it a rugpull, since there seem to be good technical reasons for the change, but at the same time we won't know for sure unless they COMMUNICATE that to us. What's missing is a technical blog post that tells us more about the change and the tokenizer, though I fear that won't happen because of "trade secrets" or whatever (the unfortunate consequence being that the community feels rugpulled).
20 USD tier was useless from the start. You'd get to the limit in 30 minutes. Codex with 20 USD on the other hand...
I think an LLM that is a decent chunk smarter/better than other LLMs ought to be able to charge a premium of perhaps 10x or 100x over its competitors.
See for example the price difference between taking a taxi and taking the bus, or between hiring a real lawyer vs. your friend at the bar who will give his uninformed opinion for a beer.
For now you can run /model claude-opus-4-6
The quality of answers from quantized models is noticeably worse than from the full model.
You'd be better off using Qwen 3.6 Plus through the Alibaba coding plan.
I'm really surprised that:
1. Anthropic has not published anything about why they made the change and how exactly they changed it
2. Nobody has reverse engineered it. It seems easy to do so using the free token-counting APIs (the Google Vertex AI token count endpoint seems to support 2000 req/min, i.e. ~3 million req/day, which seems enough to reverse engineer it; a rough sketch of the probing approach is below)
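For the probing idea, a minimal sketch using the Anthropic SDK's token-counting endpoint (the model id is a placeholder, and adjacency effects mean real probing needs more care than this):

    # Probe candidate strings against a free token-counting endpoint.
    # Differencing against a base prompt cancels the fixed chat overhead.
    import anthropic

    client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY
    MODEL = "claude-opus-4-7"        # placeholder model id

    def n_tokens(text: str) -> int:
        resp = client.messages.count_tokens(
            model=MODEL, messages=[{"role": "user", "content": text}]
        )
        return resp.input_tokens

    base = n_tokens("x ")
    for cand in ["kill", "killed", " kill"]:
        # Note: adjacent text can change BPE merges, so a careful probe
        # would vary the surrounding context as well.
        print(repr(cand), "->", n_tokens("x " + cand) - base, "token(s)")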
> It seems easy to do so
What are you waiting for? ;)
Has Anthropic provided any reason for why they changed the tokenizer?
Is there a quality increase from this change, or is it a money grab?
The tokenizer is an important part of overall model training and performance. It’s only one piece of the overall cost per request. If a tokenizer that produces more tokens also leads to a model that gets to the correct answer more quickly and requires fewer re-prompts because it didn’t give the right answer, the overall cost can still be lower.
Comparisons are still ongoing but I have already seen some that suggest that Opus 4.7 might on average arrive at the answer with fewer tokens spent, even with the additional tokenizer overhead.
So, no, not a money grab.
How would it be a money grab? If the new tokenizer requires more tokens to encode the same information, it costs them more money for inference. The point of charging per token is that the cost is proportional to the number of tokens. That's my understanding, anyway.
Not necessarily, thanks to speculative decoding. Whitespace would be trivial to predict, so they'd pretty much keep using the same amount of compute as before.
I don't think that's their primary motive for doing this but it is a side effect.
Because everyone burns through their limits much faster, forcing them to upgrade to higher limits or new tiers.
They could just increase the token cost no? There’s little need for cute conspiracies like these
It's a better model in my usage. I have benchmarks.
An interesting question is whether the tokenizer is better at something measurable or just denser. A denser tokenizer with worse alignment to semantic boundaries costs you twice: a higher bill and worse reasoning. A denser tokenizer that actually carves at the joints of the model's latent space pays for itself in quality. Nobody outside Anthropic can answer which it is without their eval suite, so the rugpull read is fair but premature. Perhaps the real tell will be whether 4.7 beats 4.6 on the same dollar budget on the benchmarks you care about, not on the per-token ones Anthropic publishes.
Aren't these increases offset by the quality of the responses and by the reduced number of iterations needed to refine them?
Likely only some of the time.
Many comparisons between 4.6 and 4.7 at https://tokens.billchambers.me/leaderboard - my prompt used 40% more tokens with Opus 4.7.
I just asked Claude about defaulting to 4.6, and there are several options. I might go back to that as the default and use --model claude-opus-4-7 as needed. The token inflation is very real.
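For reference, the options I'm aware of (worth double-checking against the current Claude Code docs; the env-var route is an assumption on my part):

    # per invocation
    claude --model claude-opus-4-6

    # inside a running session
    /model claude-opus-4-6

    # as a default, via environment variable (assumption)
    export ANTHROPIC_MODEL=claude-opus-4-6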
Okay, but what about output tokens?
Interesting findings. Might need a way to downsample images on upload to keep costs down.
Yeah that should work - it looks like the same pixel dimension image at smaller sizes has about the same token cost for 4.6 and 4.7, so the image cost increase only kicks in if you use larger images that 4.6 would have presumably resized before inspecting.
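A quick sketch of doing that client-side with Pillow (the 1568px long-edge cap is my recollection of Anthropic's documented limit; verify before relying on it):

    # Downsample an image so its long edge is at most MAX_EDGE pixels
    # before uploading, to keep per-image token costs down. Uses Pillow.
    from PIL import Image

    MAX_EDGE = 1568  # Anthropic's documented long-edge cap (verify in docs)

    def downsample(path_in: str, path_out: str) -> None:
        img = Image.open(path_in)
        img.thumbnail((MAX_EDGE, MAX_EDGE))  # in place, preserves aspect ratio
        img.save(path_out)

    downsample("screenshot.png", "screenshot_small.png")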
Why do you need an API key to tokenize the text? Isn't it supposed to be a cheap step that everything else in the model relies on?
I'd guess it's because they don't want people to reverse engineer it.
Note that they're the only provider which doesn't make their tokenizer available offline as a library (i.e. the only provider whose tokenizer is secret).
Anthropic is somewhat becoming the Apple of AI in terms of closed ecosystem. Not saying I blame them, I just don't like it as a customer.
The fact that it's impossible to get the actual thinking tokens anymore, and we have to make do with a rewritten summary, is extremely off-putting. I understand why it's necessary for consumer users, but when you're writing agentic applications yourself, it's super annoying not to have the agent's actual reasoning when trying to understand failure modes.
To prevent abuse? It's a completely free endpoint, so I don't understand your complaint.
I'd love it if that API (which I do not believe Anthropic charge anything for) worked without an API key.