separate pieces
Hey, I was wondering if you'd be able to release a version of v5.2 with the CLIP/text encoder separate from the rest of the model. I keep running out of VRAM and crashing ComfyUI, and I'd like to offload the CLIP to my first GPU and the rest of the model to my other GPU. I wish ComfyUI had better multi-GPU support. Really, just a version without the text encoder plus the text encoder on its own, so I can put it on my other GPU. That would help a lot.
Edit: Never mind, I didn't realize there was a "CheckpointLoaderAdvancedMultiGPU" node that lets you pick which part of a single checkpoint gets loaded onto which device. I'm still getting used to this stuff, my bad.
Connect the checkpoint loader to Save Model, Save VAE, and Save CLIP nodes, run it, and voilà. Just for next time you ask.
By the way, it also works in reverse with Save Checkpoint.
You'll find the files in the output subfolder afterwards.
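For anyone who'd rather script it, here's a rough sketch of what that split amounts to at the file level. It assumes a single .safetensors checkpoint with typical SD1.x-style key prefixes; the filenames and prefixes are illustrative, and ComfyUI's separate loaders may expect different key layouts, so prefer the Save nodes above for files you actually plan to load.

```python
# Hypothetical sketch: split an SD-style .safetensors checkpoint into
# separate diffusion-model / text-encoder / VAE files by key prefix.
# The prefixes below are typical for SD1.x checkpoints; inspect your
# model's actual keys before relying on them.
from safetensors.torch import load_file, save_file

SRC = "v5.2.safetensors"  # assumed input filename
PREFIXES = {
    "model.diffusion_model.": "v5.2_unet.safetensors",  # diffusion model
    "cond_stage_model.":      "v5.2_clip.safetensors",  # text encoder / CLIP
    "first_stage_model.":     "v5.2_vae.safetensors",   # VAE
}

state = load_file(SRC)
for prefix, out_path in PREFIXES.items():
    part = {k: v for k, v in state.items() if k.startswith(prefix)}
    if part:
        save_file(part, out_path)
        print(f"wrote {len(part)} tensors to {out_path}")
```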
I had no idea that you could do this, thank you!