paste text, get tokens. uses OpenAI's cl100k tokenizer (tiktoken). runs locally. see the pricing table below.

context usage (gpt-4o 128K): 0%
0 tokens
0 chars
0 words
0 lines
gpt-5.2              $0.00
gpt-5-mini           $0.00
llama-3.1-405b       $0.00
claude-4.5-sonnet    $0.00
claude-4.5-haiku     $0.00
claude-4.5-opus      $0.00
gemini-3-pro         $0.00
gemini-3-flash       $0.00
deepseek-3.2         $0.00
note: uses OpenAI's cl100k tokenizer (GPT-4/4o). other models use different tokenizers, but those either aren't available on the web or aren't public, so the costs shown are estimates based on cl100k token counts.

$ api pricing (per 1M tokens)

model                          ctx     input     cached    out     your $in   your $out
gpt-5.2                        400K    $1.75     $0.175    $14     -          -
gpt-5.2-pro                    400K    $21       -         $168    -          -
gpt-5                          400K    $1.25     $0.125    $10     -          -
gpt-5-mini                     400K    $0.25     $0.025    $2      -          -
gpt-5-nano                     400K    $0.05     $0.005    $0.4    -          -
gpt-4.1                        1M      $2        $0.5      $8      -          -
gpt-4.1-mini                   1M      $0.4      $0.1      $1.6    -          -
o1                             200K    $15       $7.5      $60     -          -
claude-4.5-opus                200K    $5        $0.5*     $25     -          -
claude-4.5-sonnet              200K    $3        $0.3*     $15     -          -
claude-4.5-haiku               200K    $1        $0.1*     $5      -          -
claude-4-opus                  200K    $15       $1.5*     $75     -          -
claude-4-sonnet                200K    $3        $0.3*     $15     -          -
gemini-3-pro                   1M      $2*       $0.2*     $12*    -          -
gemini-3-flash                 1M      $0.5      $0.05     $3      -          -
gemini-2.5-pro                 1M      $1.25*    $0.125*   $10*    -          -
gemini-2.5-flash               1M      $0.3      $0.03     $2.5    -          -
deepseek-3.2                   128K    $0.028*   -         $0.42   -          -
llama-3.1-405B                 128K    $1        -         $1.8    -          -
llama-3.3-70B                  128K    $0.1      -         $0.4    -          -
grok-4-1-fast-reasoning        2M      $0.2      $0.05     $0.5    -          -
grok-4-1-fast-non-reasoning    2M      $0.2      $0.05     $0.5    -          -
grok-3                         131K    $3        $0.75     $15     -          -

* Some providers use tiered, cached, or TTL-based pricing (e.g. Gemini input price changes above 200K tokens per request; Claude charges separately for cache writes vs cache hits). Actual costs may vary with request size, cache usage, and model configuration. Check the linked provider pricing pages for exact figures.
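the dollar figures above reduce to simple arithmetic: tokens times the per-1M-token price. a minimal sketch (prices copied from the table above; the token counts and model subset here are illustrative, not measured):

```python
# estimate API cost from token counts and per-1M-token prices.
# PRICES holds a subset of the table above, USD per 1M tokens.
PRICES = {
    "gpt-5-mini":        {"input": 0.25, "output": 2.0},
    "claude-4.5-sonnet": {"input": 3.0,  "output": 15.0},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """dollar cost: (tokens * price-per-1M) / 1,000,000, input + output"""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

def context_usage(tokens: int, context_window: int = 128_000) -> float:
    """percent of a context window consumed, as the meter at the top shows"""
    return 100 * tokens / context_window

cost = estimate_cost("claude-4.5-sonnet", input_tokens=10_000, output_tokens=2_000)
print(f"${cost:.4f}")                 # -> $0.0600
print(f"{context_usage(10_000):.1f}%")  # -> 7.8% of gpt-4o's 128K window
```

this ignores the cached-input and tiered rates flagged with * above; those need provider-specific logic.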