About

AI Token Counter - Count Tokens for Prompts & Chat Payloads

Pick a model, paste text or chat JSON, and get instant token counts (and cost estimates) to avoid context-limit errors and control spend.

🟢 Runs locally · no uploads

About this tool · FAQ

Count tokens for AI prompts and OpenAI-style chat payloads before you send them. Paste plain text, a messages array, or a full Chat Completions JSON request and instantly see token counts for popular models with cost estimates. Everything runs locally in your browser.

What is a token?

A token is the unit of text a language model actually processes, typically a few characters long. Tokens are not the same as words: some words split into multiple tokens, and punctuation and whitespace consume tokens too.
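For a quick intuition, a common rule of thumb for English text is roughly four characters per token. The sketch below uses that heuristic only for illustration; it is not how this tool counts (the tool uses real model encodings), and the function name is ours:

```python
def estimate_tokens(text: str) -> int:
    """Ballpark token estimate: ~4 characters per token for English.
    Real BPE tokenizers split differently, so treat this as a rough guide."""
    if not text:
        return 0
    return max(1, round(len(text) / 4))

# A 40-character string estimates to about 10 tokens.
print(estimate_tokens("abcd" * 10))
```

Because real tokenizers depend on vocabulary and merge rules, the actual count can differ noticeably from this estimate, especially for code, non-English text, or unusual punctuation.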

Does this token counter support OpenAI chat payloads?

Yes. You can paste either a messages array (role/content objects) or a full Chat Completions request JSON with a messages field.
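The parsing logic this implies is simple: if the pasted JSON is a list, treat it as the messages array; if it is an object with a `messages` field, unwrap that field. A minimal sketch (function and error message are our own, not the tool's internals):

```python
import json

def extract_messages(payload_text: str):
    """Accept either a bare messages array or a full Chat Completions
    request JSON, and return (role, content) pairs."""
    data = json.loads(payload_text)
    if isinstance(data, list):
        messages = data                      # bare messages array
    elif isinstance(data, dict) and "messages" in data:
        messages = data["messages"]          # full request payload
    else:
        raise ValueError("Expected a messages array or a request with a 'messages' field")
    return [(m["role"], m["content"]) for m in messages]

full = '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hi"}]}'
print(extract_messages(full))  # [('user', 'Hi')]
```

Either form yields the same message list, so token counts are identical whether you paste the array or the whole request.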

Why do token counts differ between models?

Different model families use different token encodings (vocabularies plus merge rules), so the same text can split into a different number of tokens depending on the model. This tool counts tokens using model-aware encodings.
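As a concrete example, OpenAI's tiktoken library maps model names to encodings by prefix: GPT-4o models use `o200k_base`, while GPT-4 and GPT-3.5 Turbo use `cl100k_base`. The lookup below is a simplified sketch of that idea, not an exhaustive or authoritative table, and the default fallback is our assumption:

```python
# Simplified model-to-encoding table, following tiktoken's naming.
ENCODING_FOR_MODEL = {
    "gpt-4o": "o200k_base",
    "gpt-4": "cl100k_base",
    "gpt-3.5-turbo": "cl100k_base",
    "text-davinci-003": "p50k_base",
}

def encoding_for(model: str) -> str:
    # Longest-prefix match so dated variants like "gpt-4-0613" resolve.
    for prefix in sorted(ENCODING_FOR_MODEL, key=len, reverse=True):
        if model.startswith(prefix):
            return ENCODING_FOR_MODEL[prefix]
    return "cl100k_base"  # fallback default (our assumption)

print(encoding_for("gpt-4o-mini"))  # o200k_base
print(encoding_for("gpt-4-0613"))   # cl100k_base
```

The longest-prefix rule matters: without it, "gpt-4o-mini" could match "gpt-4" first and return the wrong encoding.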

Is my prompt uploaded anywhere?

No. All counting runs locally in your browser. Your text is not sent to a server.