Chatterbox Turbo

Hello everyone. When installing Chatterbox Turbo locally, I encountered this error. Please help me fix it. New tokens were created, but the problem persists.

huggingface_hub.errors.LocalTokenNotFoundError: Token is required (token=True), but no token found. You need to provide a token or be logged in to Hugging Face with hf auth login or huggingface_hub.login. See https://huggingface.co/settings/tokens.


The method for setting environment variables varies depending on your OS, but I think setting the HF_TOKEN environment variable is the simplest approach. For greater reliability, use hf auth login.


This error means one specific thing: some code called Hugging Face Hub with token=True, and your runtime could not find a locally stored Hugging Face token. Creating new tokens on the Hugging Face website does not fix anything unless the token is actually available inside the environment that runs Chatterbox Turbo.


What the error means (background)

Hugging Face Hub download functions accept a token argument with these semantics:

  • token=True means “read the token from the Hugging Face config folder.”
  • token=False or None means “do not require a token” (public repos download without login).
  • token="hf_..." means “use this token string explicitly.” (Hugging Face)

Your traceback says: Token is required (token=True), but no token found.
So the code path is explicitly requesting token=True, and the library is failing because it cannot find a cached login token.
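
For illustration, here is how those three forms look with hf_hub_download (the repo and filename below are placeholders, not Chatterbox files):

from huggingface_hub import hf_hub_download

# token=True: require the locally saved login token; raises LocalTokenNotFoundError if none is found
hf_hub_download("some-org/some-model", "config.json", token=True)

# token=None (the default) or token=False: no token required; public repos download without login
hf_hub_download("some-org/some-model", "config.json", token=None)

# token="hf_...": use this token string explicitly
hf_hub_download("some-org/some-model", "config.json", token="hf_your_token_here")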


Why this happens specifically with Chatterbox Turbo

In the official Hugging Face Space code for Chatterbox Turbo, from_pretrained() downloads weights via snapshot_download() with a token argument along these lines (paraphrased; check tts_turbo.py in the Space for the exact call):
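
# Paraphrased pattern (REPO_ID stands for the repo named in tts_turbo.py)
local_path = snapshot_download(repo_id=REPO_ID, token=os.getenv("HF_TOKEN") or True)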

That line causes your exact failure mode:

  • If HF_TOKEN environment variable is set, it uses it.
  • If HF_TOKEN is not set (or is empty), it falls back to True.
  • True triggers “must read cached token from disk,” and if none is found, you get LocalTokenNotFoundError. (Hugging Face)

So you can see this error even when downloading a public model repo, because the code forces auth unless HF_TOKEN exists.


Root causes in real local installs (most common)

Cause 1: You created tokens online, but did not log in locally

Creating a token at huggingface.co/settings/tokens only creates credentials.
Your machine still has no saved token unless you do one of these:

  • Set HF_TOKEN in the environment.
  • Run hf auth login to save the token locally. (Hugging Face)

Cause 2: You logged in, but you are running under a different user or “different home”

This is extremely common with:

  • sudo python ...
  • Docker containers
  • systemd services
  • remote processes
  • IDE run configs that do not inherit your shell env

Hugging Face stores the token under HF_HOME (default ~/.cache/huggingface) and by default the token file is at ~/.cache/huggingface/token. (Hugging Face)
If your process runs with a different home directory, it will not see the saved token.
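
A quick way to see which token file the current user/process would read (assuming the default layout described above):

python -c "import os; p = os.getenv('HF_TOKEN_PATH') or os.path.join(os.getenv('HF_HOME', os.path.expanduser('~/.cache/huggingface')), 'token'); print(p, '->', 'exists' if os.path.exists(p) else 'missing')"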

Cause 3: You set the wrong environment variable

huggingface_hub standardizes on HF_TOKEN (the older HUGGING_FACE_HUB_TOKEN name is legacy).

If you set something else, such as a custom variable name, Chatterbox Turbo still sees HF_TOKEN as missing and falls back to True. (Hugging Face)


Fixes that work reliably

Fix A (most reliable): Set HF_TOKEN where you run Chatterbox Turbo

This avoids all “where is the token file” issues.

Linux/macOS (bash/zsh):

export HF_TOKEN="hf_your_token_here"
python -c "import os; print(bool(os.getenv('HF_TOKEN')))"

Windows PowerShell:

$env:HF_TOKEN="hf_your_token_here"
python -c "import os; print(bool(os.getenv('HF_TOKEN')))"

Why this fixes it: Chatterbox Turbo checks HF_TOKEN first, and only falls back to token=True if it is missing. (Hugging Face)
Also, HF_TOKEN overrides any token stored on disk. (Hugging Face)

Fix B: Log in locally with the Hugging Face CLI

Run:

hf auth login
hf auth whoami

The docs state that hf auth login validates the token and saves it under HF_HOME (by default at ~/.cache/huggingface/token), and other libraries reuse it automatically. (Hugging Face)

If hf auth whoami works but your app still fails, you are almost certainly hitting the “different user / different HF_HOME” pitfall.
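
To confirm that, compare the user and home directory of your shell with those of the process that actually runs Chatterbox Turbo (a minimal check; run it in both places, or add it temporarily to the app's startup):

python -c "import os, getpass; print(getpass.getuser(), os.path.expanduser('~'), os.getenv('HF_HOME'))"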

Fix C (Jupyter/Notebook): Login programmatically

If you run in a notebook kernel that does not inherit your shell login, use the Python authentication API. (Hugging Face)

Example (do not paste tokens into shared notebooks):

from huggingface_hub import login
login("hf_your_token_here")

Quick verification checklist (pinpoint the exact mismatch)

Run these in the same terminal/session that launches Chatterbox Turbo:

  1. Confirm whether HF_TOKEN is set:
python -c "import os; print('HF_TOKEN set:', bool(os.getenv('HF_TOKEN')))"
  2. Check where Hugging Face is looking for the token:
python -c "import os; print('HF_HOME=', os.getenv('HF_HOME')); print('HF_TOKEN_PATH=', os.getenv('HF_TOKEN_PATH'))"

What matters:

  • If HF_TOKEN is falsey, Chatterbox Turbo will force token=True and require a cached token. (Hugging Face)
  • If HF_HOME is set to something unexpected, your login token may be saved somewhere else. (Hugging Face)
  3. Confirm CLI login state:
hf auth whoami

If this fails, you are not logged in for that user environment. (Hugging Face)


Workarounds if you want “no token required”

Workaround 1: Patch the token forcing line

In chatterbox/tts_turbo.py, the forcing behavior comes from the token argument shown earlier (paraphrased; verify against the current file):
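
token=os.getenv("HF_TOKEN") or True  # falls back to True when HF_TOKEN is unset or empty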

If you change it to:

  • token=os.getenv("HF_TOKEN")
    then public downloads typically work without login, because token becomes None when HF_TOKEN is unset and huggingface_hub no longer forces a cached-token lookup. (Hugging Face)

This is a local patch. Upstream updates can overwrite it.

Workaround 2: Download once, then load from local directory

That same file includes a from_local() loader that takes a local checkpoint directory and loads weights from there. (Hugging Face)
So you can:

  1. download model files by any method you control (or on another machine),
  2. copy them into a directory,
  3. call from_local(path, device) instead of from_pretrained(device).

This is also the path you use on offline machines.
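
A minimal sketch of that flow, assuming the loader class in chatterbox/tts_turbo.py exposes from_local() as described (the class name and repo ID below are placeholders; copy the real ones from the file):

from huggingface_hub import snapshot_download
from chatterbox.tts_turbo import ChatterboxTurboTTS  # placeholder: use the actual class name

# Steps 1-2: download the checkpoint into a directory you control (public repos need no token)
ckpt_dir = snapshot_download(repo_id="REPO_ID_FROM_TTS_TURBO_PY", local_dir="./chatterbox_turbo_ckpt")

# Step 3: load entirely from disk; no Hub call, no token required
model = ChatterboxTurboTTS.from_local(ckpt_dir, device="cuda")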


Extra context: token storage and common “it still doesn’t work” pitfalls

Token storage locations

  • HF_HOME defaults to ~/.cache/huggingface. (Hugging Face)
  • Token path defaults to "$HF_HOME/token" (so typically ~/.cache/huggingface/token). (Hugging Face)
  • Setting HF_TOKEN overrides the stored token. (Hugging Face)

Pitfall: sudo breaks it

If you ran hf auth login as your normal user but run the app with sudo, root has a different ~ and a different HF_HOME. Result: no token found.
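
If you must use sudo, one option is to pass the variable through explicitly (assuming a sudo version that supports --preserve-env; the script name is a placeholder):

export HF_TOKEN="hf_your_token_here"
sudo --preserve-env=HF_TOKEN python run_chatterbox.py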

Pitfall: Docker breaks it unless you pass env or mount cache

Inside Docker, you usually need one of the following (example commands below):

  • -e HF_TOKEN=...
  • or mount the Hugging Face cache directory into the container
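
For example (the image name and mount path are placeholders; adjust to your setup):

# pass the token as an environment variable
docker run -e HF_TOKEN="hf_your_token_here" your-chatterbox-image
# or mount the host cache (including any saved login token) into the container
docker run -v ~/.cache/huggingface:/root/.cache/huggingface your-chatterbox-image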

Pitfall: “token created” but wrong scope

For reading/downloading, a read token is typically sufficient. Hugging Face describes tokens and scopes in their security tokens docs. (Hugging Face)


References (URLs)

https://huggingface.co/spaces/ResembleAI/chatterbox-turbo-demo/blob/main/chatterbox/tts_turbo.py
https://huggingface.co/docs/huggingface_hub/en/package_reference/file_download
https://huggingface.co/docs/huggingface_hub/en/quick-start
https://huggingface.co/docs/huggingface_hub/en/package_reference/environment_variables
https://huggingface.co/docs/huggingface_hub/en/guides/cli
https://huggingface.co/docs/huggingface_hub/en/package_reference/authentication

Summary

  • Your error persists because Chatterbox Turbo’s from_pretrained() forces token=True when HF_TOKEN is missing. (Hugging Face)
  • Creating tokens online is not enough. You must either set HF_TOKEN or run hf auth login in the same runtime environment. (Hugging Face)
  • If it still fails after login, the cause is almost always “different user / different HF_HOME.” (Hugging Face)
  • Fastest reliable fix: set HF_TOKEN in the exact shell/container that runs Chatterbox Turbo. (Hugging Face)