license: apache-2.0
---
This is a merge of [Wan-AI/Wan2.1-VACE-14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B) and [vrgamedevgirl84/Wan14BT2VFusionX](https://huggingface.co/vrgamedevgirl84/Wan14BT2VFusioniX).

The process involved extracting the VACE scopes and injecting them into the target models.

- The model weights were then converted to FP8 formats (E4M3FN and E5M2) using the [ComfyUI-ModelQuantizer](https://github.com/lum3on/ComfyUI-ModelQuantizer) custom node by [lum3on](https://github.com/lum3on).

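The scope-extraction step described above can be sketched in plain Python. This is a hypothetical illustration, not the actual merge script: state dicts are stood in for by plain dicts, and the assumption that VACE scopes are identified by a key prefix (here `vace`) is mine.

```python
# Hypothetical sketch of the scope-injection merge: entries identified
# by a key prefix in the donor state dict are copied over the target
# state dict. Real checkpoints would be loaded with a tensor library;
# plain dicts stand in here for demonstration.

def inject_scopes(donor: dict, target: dict, prefix: str = "vace") -> dict:
    """Copy every donor entry whose key starts with `prefix` into target."""
    merged = dict(target)  # leave the original target untouched
    for key, value in donor.items():
        if key.startswith(prefix):
            merged[key] = value
    return merged

donor = {"vace_blocks.0.proj": 1, "blocks.0.attn": 2}
target = {"blocks.0.attn": 3, "blocks.1.ffn": 4}
merged = inject_scopes(donor, target)
# merged keeps the target's weights and gains the donor's vace_* entries
```

Keys outside the prefix (like `blocks.0.attn` above) are deliberately left untouched, so the target model's own weights win any collision outside the injected scopes.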
## Usage

The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI).

## Reference
- For more information about the GGUF-quantized versions, refer to [QuantStack/Wan-14B-T2V-FusionX-VACE-GGUF](https://huggingface.co/QuantStack/Wan-14B-T2V-FusionX-VACE-GGUF).
- For an overview of the Safetensors format, see the [Safetensors documentation](https://huggingface.co/docs/safetensors/index).
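As a minimal sketch of the container layout documented there — an 8-byte little-endian header length, a JSON header, then the raw tensor buffer — the following stdlib-only snippet writes and re-reads a tiny fabricated file. The file name, tensor name, and contents are made up for illustration.

```python
# Minimal illustration of the Safetensors container layout:
# u64 little-endian header size, UTF-8 JSON header, raw byte buffer.
import json
import struct

def write_safetensors(path: str, header: dict, buffer: bytes) -> None:
    blob = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(blob)))  # 8-byte header size
        f.write(blob)                          # JSON metadata
        f.write(buffer)                        # tensor bytes

def read_header(path: str) -> dict:
    with open(path, "rb") as f:
        (size,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(size))

# One fake FP32 tensor of shape [2], occupying bytes 0..8 of the buffer.
header = {"weight": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
write_safetensors("demo.safetensors", header, b"\x00" * 8)
read_header("demo.safetensors")["weight"]["shape"]  # the shape round-trips as [2]
```

Because the header is plain JSON at a fixed offset, tensor names, dtypes, and shapes can be inspected without loading any weight data — one reason the format is convenient for large checkpoints like these.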