paste text, get tokens. uses the cl100k tokenizer (via tiktoken). runs locally. see pricing below.

note: counts use OpenAI's cl100k tokenizer (GPT-4/4o). other models use different tokenizers, but those are either not runnable in the browser or not public, so all costs shown are estimates based on cl100k token counts.

$ api pricing (per 1M tokens)

model               ctx    input   cached   out     your $in   your $out
gpt-4o              128K   $2.50   $1.25    $10     -          -
gpt-4o-mini         128K   $0.15   $0.08    $0.60   -          -
o1                  200K   $15     $7.50    $60     -          -
o1-mini             128K   $3      $1.50    $12     -          -
claude-3.5-sonnet   200K   $3      $0.30    $15     -          -
claude-3.5-haiku    200K   $0.80   $0.08    $4      -          -
claude-3-opus       200K   $15     $1.50    $75     -          -
gemini-1.5-pro      2M     $1.25   $0.31    $5      -          -
gemini-2.0-flash    1M     $0.10   $0.03    $0.40   -          -
deepseek-v3         64K    $0.27   $0.07    $1.10   -          -
llama-3.1-405b      128K   $3      —        $3      -          -
mistral-large       128K   $2      —        $6      -          -
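the per-request cost follows directly from the table: tokens ÷ 1M × price, summed over input and output. a minimal sketch in Python, with a few prices hard-coded from the table above (prices change over time, so treat them as examples):

```python
# (input, output) USD per 1M tokens, copied from the pricing table
PRICES = {
    "gpt-4o": (2.50, 10.00),
    "gpt-4o-mini": (0.15, 0.60),
    "claude-3.5-sonnet": (3.00, 15.00),
}

def estimate_cost(model: str, in_tokens: int, out_tokens: int) -> float:
    """Estimated USD cost of one request: tokens / 1M * per-1M price."""
    p_in, p_out = PRICES[model]
    return in_tokens / 1_000_000 * p_in + out_tokens / 1_000_000 * p_out

print(f"${estimate_cost('gpt-4o', 10_000, 2_000):.4f}")  # $0.0450
```

cached-input pricing is ignored here for simplicity; adding it is a third term of the same shape.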