Update README.md

The process involved extracting VACE scopes and injecting them into the target models.

The FP16 model weights were then quantized to specific FP8 formats (E4M3FN and E5M2) using the ComfyUI custom node [ComfyUI-ModelQuantizer](https://github.com/lum3on/ComfyUI-ModelQuantizer) by [lum3on](https://github.com/lum3on).
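As a rough illustration only (this is not the node's actual implementation, and the file names and cast policy are placeholder assumptions), a per-tensor cast to these FP8 formats in PyTorch looks something like this:

```python
import torch
from safetensors.torch import load_file, save_file

# Placeholder file names, not the actual checkpoints in this repo.
SRC = "wan_fp16.safetensors"
DST = "wan_fp8_e4m3fn.safetensors"

# torch.float8_e4m3fn for the E4M3FN variant, torch.float8_e5m2 for E5M2.
FP8_DTYPE = torch.float8_e4m3fn

state = load_file(SRC)
out = {}
for name, tensor in state.items():
    if tensor.is_floating_point():
        # Clamp to the format's representable range first so values that
        # overflow FP8 don't become inf/NaN after the cast.
        info = torch.finfo(FP8_DTYPE)
        out[name] = tensor.clamp(info.min, info.max).to(FP8_DTYPE)
    else:
        # Non-float tensors (if any) are stored unchanged.
        out[name] = tensor

save_file(out, DST)
```

Real quantizers typically also keep some tensors (e.g. norms or biases) in higher precision and may store per-tensor scales; those details are handled by the node itself.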
LoRA extraction was done using the ComfyUI node "Extract and Save LoRA".
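The node's internals are not reproduced here, but extraction of this kind is typically a truncated SVD of the weight difference between a tuned checkpoint and its base; the sketch below illustrates that idea, with placeholder file names, rank, and key naming:

```python
import torch
from safetensors.torch import load_file, save_file

# Placeholder checkpoints: the tuned model and the base it diverged from.
tuned = load_file("model_tuned.safetensors")
base = load_file("model_base.safetensors")

RANK = 64  # illustrative choice of LoRA rank
lora = {}

for name, w_t in tuned.items():
    w_b = base.get(name)
    # Only 2-D weight matrices are decomposed in this sketch.
    if w_b is None or w_t.dim() != 2:
        continue
    delta = w_t.float() - w_b.float()
    if not torch.any(delta):
        continue  # layer unchanged, nothing to extract
    # Rank-r approximation: delta ≈ (U * S) @ Vh = lora_up @ lora_down
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    r = min(RANK, S.numel())
    key = name.removesuffix(".weight")
    lora[f"{key}.lora_up.weight"] = (U[:, :r] * S[:r]).to(torch.float16).contiguous()
    lora[f"{key}.lora_down.weight"] = Vh[:r, :].to(torch.float16).contiguous()

save_file(lora, "extracted_lora.safetensors")
```

A real extractor also handles convolution weights, alpha scaling, and whatever key naming the target loader expects.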
## Usage
The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the WanVaceToVideo node. Place the required model(s) in the following folders: