Some questions when running it locally
#14 · by lizhiyichina · opened
Your demo inference looks really good. If I want to get the same results locally, is 80 GB of VRAM enough?
https://huggingface.co/vafipas663/Qwen-Edit-2509-Upscale-LoRA/blob/main/Qwen-Edit-Upscale.json
This workflow runs fine with 24 GB of VRAM. The V4 models can probably get by with even less.