UNet-only adapters will still cause problems with pipeline.set_adapters() after torch.compile because the compiled graph expects adapters to exist in all components. #12427
```
2025-10-03 07:47:56 - main - INFO - loading adapter detail_enhancer
No LoRA keys associated to CLIPTextModel found with the prefix='text_encoder'. This is safe to ignore if LoRA state dict didn't originally have any CLIPTextModel related params. You can also try specifying prefix=None to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new
No LoRA keys associated to CLIPTextModelWithProjection found with the prefix='text_encoder_2'. This is safe to ignore if LoRA state dict didn't originally have any CLIPTextModelWithProjection related params. You can also try specifying prefix=None to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new
```
These warnings confirm what we've been discussing: `detail_enhancer` (and likely `nijistyle` and `chibi_rr`) contain only UNet weights, not text encoder weights.
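You can verify this directly by inspecting the LoRA file's state-dict keys. A minimal sketch, assuming the file is a safetensors checkpoint; `detail_enhancer.safetensors` is a placeholder file name, not a path from this issue:

```python
# Check whether a LoRA file carries any text encoder weights.
from safetensors.torch import load_file

state_dict = load_file("detail_enhancer.safetensors")  # placeholder path

# Diffusers-format keys are prefixed "text_encoder." / "text_encoder_2.";
# Kohya-format keys use "lora_te1_" / "lora_te2_". A UNet-only file has
# neither, which is exactly what triggers the warnings above.
has_te_weights = any(
    key.startswith(("text_encoder.", "text_encoder_2.", "lora_te"))
    for key in state_dict
)
print("text encoder weights present:", has_te_weights)
```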
In other words, the LoRA files contain no parameters for the text encoders: they are UNet-only adapters.

**Why This Causes Issues After Compilation**
When you call `pipeline.set_adapters(['detail_enhancer'])`:
**Without compilation** (see the sketch after this list):

- The pipeline checks each component
- Applies the adapter to the UNet ✓
- Skips the text encoders (no weights found) ✓
- Everything works fine
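A minimal sketch of that working path. The base model and checkpoint name are placeholders for illustration, not taken from this issue:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Loading the UNet-only file emits the "No LoRA keys ..." warnings
# shown above, but the adapter is still registered on the UNet.
pipeline.load_lora_weights(
    "detail_enhancer.safetensors", adapter_name="detail_enhancer"
)

# In eager mode, set_adapters() only touches the components where the
# adapter actually exists, so the missing text encoder weights are harmless.
pipeline.set_adapters(["detail_enhancer"], adapter_weights=[1.0])
image = pipeline("a detailed castle at dusk").images[0]
```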
**With compilation** (see the sketch after this list):

- The compiled graph expects a consistent structure
- If you compile after loading adapters that ARE present in the text encoders, the graph expects text encoder adapters
- When you then switch to `detail_enhancer`, which has no text encoder weights, the compiled code path fails
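Continuing the sketch above, a hypothetical repro of that failure mode; `style_full.safetensors` is an invented name for an adapter that does carry text encoder weights:

```python
# An adapter that spans the UNet AND both text encoders (invented name).
pipeline.load_lora_weights("style_full.safetensors", adapter_name="style_full")

pipeline.set_adapters(["style_full"])
pipeline.unet = torch.compile(pipeline.unet)
_ = pipeline("warm-up prompt", num_inference_steps=2)  # graph traced here

# Switching to the UNet-only adapter changes the adapter layout that the
# compiled graph was traced against, so the compiled code path can
# recompile or fail outright.
pipeline.set_adapters(["detail_enhancer"])
image = pipeline("a detailed castle at dusk").images[0]
```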