Posted on 2026-3-25 12:14:16
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load WanVAE
loaded completely; 7449.00 MB usable, 242.03 MB loaded, full load: True
Found quantization metadata version 1
Using MixedPrecisionOps for text encoder
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load QwenImageTEModel_
loaded completely; 9581.80 MB usable, 7910.28 MB loaded, full load: True
Requested to load WanVAE
Unloaded partially: 3633.28 MB freed, 4277.00 MB remains loaded, 453.25 MB buffer reserved, lowvram patches: 0
loaded completely; 3104.87 MB usable, 242.03 MB loaded, full load: True
loaded completely; 9320.65 MB usable, 7910.28 MB loaded, full load: True
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLUX
got prompt
Processing error