Those split files are in the “Diffusers format,” a layout used by software other than ComfyUI. That said, ComfyUI can apparently read Diffusers-format files directly as well.
As for conversion, it appears to be possible from within ComfyUI (though I haven’t tried it myself…), so there should be a GUI route for this…?
You are trying to solve two different problems that look identical on disk.
1. “Merge shards into one safetensors file.”
2. “Make a ComfyUI-loadable model.”
Sometimes (1) is enough. Often it is not. That is why your scripts “work” but ComfyUI still refuses to load the output.
Below is the clean way to approach this on Windows, with the least guessing.
Background you need (short, but important)
What those model-00001-of-00003.safetensors files actually are
They are shards. Hugging Face splits large weight files into multiple safetensors files to reduce memory pressure and avoid huge single files. Transformers typically pairs shards with an index JSON that maps each tensor name to the shard that contains it. (Hugging Face)
So “combine them” is not “zip/join bytes.” It is “load tensor dictionaries from each shard and re-save as one dictionary.”
Why ComfyUI often fails even after a successful merge
ComfyUI has different loader nodes and folder expectations for different model families. A single .safetensors file is only usable as a “checkpoint” if it contains the right kind of weights and the key structure ComfyUI expects.
A common real-world failure is: merged file saves fine, but ComfyUI says it cannot detect the model type or it is “not readable.” (GitHub)
So you need to first identify what you downloaded.
Step 1: Identify your model layout (30 seconds, no coding)
Open the folder that contains the three shards. Look for these “tells”:
Case A: You see model.safetensors.index.json (or similar)
That is Transformers sharding. The index tells loaders which shard holds which weights. (Hugging Face)
In this case, merging is usually unnecessary in the Transformers ecosystem, and a ComfyUI “checkpoint” is usually not the right target.
Case B: You see folders like unet/, vae/, text_encoder/ (Diffusers layout)
That is a Diffusers pipeline. A “merged safetensors” of only one component often still will not load as a classic checkpoint in ComfyUI. You either load it as Diffusers, or you export a proper checkpoint. (GitHub)
Case C: You only see the shards (and maybe a couple simple JSON files), and this is meant to be one monolithic diffusion checkpoint
Then merging can be the right move.
If you do not know which case you are in, assume B is most common for modern diffusion releases and A is common for LLMs.
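If you prefer a script over eyeballing the folder, the same check can be sketched like this. The subfolder names and filename patterns are the usual Hugging Face conventions, not an exhaustive rule:

```python
# Rough script version of the 30-second folder check.
from pathlib import Path

def identify_layout(folder):
    p = Path(folder)
    # Case A: Transformers sharding, index JSON maps tensors to shards
    if list(p.glob("*.index.json")):
        return "A"
    # Case B: Diffusers pipeline layout (component subfolders)
    if any((p / sub).is_dir() for sub in ("unet", "transformer", "vae", "text_encoder")):
        return "B"
    # Case C: bare shards, possibly one monolithic checkpoint
    if list(p.glob("*-of-*.safetensors")):
        return "C"
    return "unknown"
```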
The simplest GUI-ish options (recommended for non-coders)
Option 1: Merge inside ComfyUI with a node (no external scripts)
There is a ComfyUI extension that includes a node specifically advertised to combine split safetensors shards into one file and optionally convert precision (FP8/FP16/FP32). (GitHub)
What this gives you:
- A pure ComfyUI workflow for merging.
- No Python environment setup.
- You click-select inputs and an output path.
If your only goal is “turn 3 shard files into 1 shard file,” this is the lowest-friction route.
Important limitation:
- This still does not guarantee the result is a checkpoint ComfyUI’s checkpoint loader recognizes. It just guarantees the file is merged.
Option 2: Convert to a real checkpoint using ComfyUI’s CheckpointSave node
If your model is “separate parts” (model + CLIP + VAE) and you want a single .safetensors checkpoint, ComfyUI has a built-in concept for that: CheckpointSave.
CheckpointSave is explicitly meant to save model + CLIP + VAE into a checkpoint file. (ComfyUI Wiki)
This is the clean ComfyUI-native conversion path:
- Load your diffusion model with the correct loader node (not necessarily “Load Checkpoint”).
- Load the matching CLIP and VAE nodes if required.
- Feed them into CheckpointSave. (ComfyUI Wiki)
- Output becomes a single checkpoint file intended for later loading/sharing.
This solves the “merged but not recognized” problem more often than raw shard merging, because you are producing a file in the checkpoint style ComfyUI expects.
If you must do it outside ComfyUI (minimal moving parts)
Option 3: Use a purpose-built shard merger script (simple, documented)
- soursilver/safetensors-merger exists specifically for merging model-00001-of-00003.safetensors style shards into one file. (GitHub)
It also warns about a common Windows pain point: NumPy 2.x may not work with the script’s dependencies, so you may need NumPy 1.x. (GitHub)
Option 4: Use a “manager” tool that can also generate the HF index (advanced)
- NotTheStallion/reshard-safetensors is a package for merging/splitting safetensors and generating model.safetensors.index.json in HF style. (GitHub)
This is more useful when you are dealing with sharded Transformers models and you want “proper sharded layout” rather than one giant file.
Why “merge” and “convert” are not the same thing (this is the key pitfall)
Merge
- Takes multiple safetensors shards and produces a single safetensors file.
- Output contains the union of tensor keys.
Convert for ComfyUI
- Produces a file that matches a ComfyUI loader’s expectations.
- For checkpoints, that usually means “this file looks like a Stable Diffusion-style checkpoint with the expected components or expected key patterns.”
A common example of the mismatch:
- People merge three diffusion shards and ComfyUI still can’t load it. This exact pattern is reported in ComfyUI issues and discussions. (GitHub)
- The underlying reason is typically “you merged a component or a non-checkpoint layout.”
So if you merge and then ComfyUI errors, the correct next step is usually CheckpointSave, or a loader that supports the model’s native layout, not “try a different merge script.”
Hardware context for your system
Your specs are strong for this task.
- 64 GB RAM is the main enabler for merging huge weights without crashing.
- Disk space matters: the merged file will be roughly the combined size of the shards, so you temporarily need about twice the model’s size free (shards plus output).
- GPU VRAM (32 GB) does not help with the merge itself. It helps only when you actually run inference.
If you see memory errors during merging, it is typically CPU RAM pressure, not VRAM.
Practical “what I would do in your exact situation”
If you want the highest chance of success with the least frustration:
- Try the ComfyUI node-based merge first (fast, no environment setup). (GitHub)
- If ComfyUI still will not load the merged file, stop re-merging.
- Instead, load the model in its native pieces and use CheckpointSave to output a proper checkpoint file. (ComfyUI Wiki)
- If your folder looks like Transformers sharding (index JSON, config, tokenizer), do not try to “make it a ComfyUI checkpoint.” That is the wrong target. Transformers uses the index to load shards. (Hugging Face)
Quick summary
- Sharded safetensors are normal on HF and often come with an index file that maps tensors to shards. (Hugging Face)
- Merging shards produces one safetensors file, but that file is not automatically a ComfyUI “checkpoint.” (GitHub)
- Best GUI path: merge with a ComfyUI node pack, then if needed export a real checkpoint with CheckpointSave. (GitHub)
- Best script path: safetensors-merger or reshard-safetensors, but merging alone may still not yield a loadable checkpoint. (GitHub)