
Conversation

@DN6 (Collaborator) commented Feb 4, 2026

What does this PR do?

Update Wan tests with new format

Fixes # (issue)

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@DN6 DN6 requested review from dg845, sayakpaul and yiyixuxu February 4, 2026 12:58
@yiyixuxu (Collaborator) left a comment


Really nice, thanks!
Should we start to add a guide for contributors somewhere, maybe https://huggingface.co/docs/diffusers/main/en/conceptual/contribution?

@dg845 (Collaborator) left a comment


Thanks! I see there are two Wan model-related failures in the CI:

  • tests/models/transformers/test_models_transformer_wan_animate.py::TestWanAnimateTransformer3DAttention::test_fuse_unfuse_qkv_projections
  • tests/models/transformers/test_models_transformer_wan_vace.py::TestWanVACETransformer3DAttention::test_fuse_unfuse_qkv_projections

If I try to run the new Wan tests locally, for example with

pytest tests/models/transformers/test_models_transformer_wan.py

I get some more test failures:

  • tests/models/transformers/test_models_transformer_wan.py::TestWanTransformer3D
    • test_keep_in_fp32_modules
    • test_from_save_pretrained_dtype_inference[fp16,bf16]
  • tests/models/transformers/test_models_transformer_wan.py::TestWanTransformer3DGGUF
    • test_gguf_quantization_inference
    • test_gguf_keep_modules_in_fp32
    • test_gguf_quantization_dtype_assignment
    • test_gguf_quantization_lora_inference
    • test_gguf_dequantize
    • test_gguf_quantized_layers
  • tests/models/transformers/test_models_transformer_wan.py::TestWanTransformer3DGGUFCompile
    • test_gguf_torch_compile
    • test_gguf_torch_compile_with_group_offload

Are these test failures expected?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@DN6 (Collaborator, Author) commented Feb 6, 2026

Thanks for flagging, @dg845. I've fixed the test issues. There are some GGUF-related fixes that should probably go in a different PR (I'll handle that later).

@dg845 (Collaborator) left a comment


Thanks!

Comment on lines 45 to 220
@@ -214,15 +214,18 @@ def __init__(
             self.add_v_proj = torch.nn.Linear(added_kv_proj_dim, self.inner_dim, bias=True)
             self.norm_added_k = torch.nn.RMSNorm(dim_head * heads, eps=eps)
 
-        self.is_cross_attention = cross_attention_dim_head is not None
+        if is_cross_attention is not None:
+            self.is_cross_attention = is_cross_attention
+        else:
+            self.is_cross_attention = cross_attention_dim_head is not None
Member

Seems like an unrelated change?
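
For readers skimming the hunk above, here is a minimal standalone sketch of the fallback it introduces (the function name is hypothetical; only the if/else logic comes from the diff):

from typing import Optional

def resolve_is_cross_attention(
    is_cross_attention: Optional[bool],
    cross_attention_dim_head: Optional[int],
) -> bool:
    # An explicit flag from the caller wins.
    if is_cross_attention is not None:
        return is_cross_attention
    # Otherwise fall back to the previous inference: cross-attention iff a
    # cross-attention head dim was provided.
    return cross_attention_dim_head is not None

assert resolve_is_cross_attention(None, 64) is True    # old behavior preserved
assert resolve_is_cross_attention(False, 64) is False  # explicit flag overrides inference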

@sayakpaul (Member) left a comment


Thanks, I left some comments!

         self.inner_dim = dim_head * heads
         self.heads = heads
-        self.cross_attention_head_dim = cross_attention_dim_head
+        self.cross_attention_dim_head = cross_attention_dim_head
Member

Same as above. It would be nice if you could explain these changes. Were they flagged by the newly written test suite?
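
As an aside, if the old attribute name needed to keep working during a transition, one common pattern is a deprecated property alias (a hypothetical sketch, not something this PR does):

import warnings

class AttentionConfigExample:
    # Hypothetical sketch: rename cross_attention_head_dim to
    # cross_attention_dim_head while keeping the old name usable.
    def __init__(self, cross_attention_dim_head=None):
        self.cross_attention_dim_head = cross_attention_dim_head

    @property
    def cross_attention_head_dim(self):
        warnings.warn(
            "cross_attention_head_dim is deprecated; use cross_attention_dim_head.",
            FutureWarning,
        )
        return self.cross_attention_dim_head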

Comment on lines -456 to +459
-        # Test with float16
-        model.to(torch_device)
-        model.to(torch.float16)
+        # Save the model and reload with float16 dtype
+        # _keep_in_fp32_modules is only enforced during from_pretrained loading
+        model.save_pretrained(tmp_path)
+        model = self.model_class.from_pretrained(tmp_path, torch_dtype=torch.float16).to(torch_device)
Member

Have we run this test for all the other models to ensure this isn't breaking anything?
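
For context, the distinction the updated test relies on, as a hedged sketch (the helper name and the substring match against parameter names are assumptions, not the exact diffusers internals):

import torch

def check_keep_in_fp32(model, model_class, tmp_path):
    # A plain .to() cast converts every parameter; _keep_in_fp32_modules is
    # not consulted on this path.
    model.to(torch.float16)

    # from_pretrained is where diffusers enforces _keep_in_fp32_modules, so
    # the test round-trips through save_pretrained to exercise that path.
    model.save_pretrained(tmp_path)
    reloaded = model_class.from_pretrained(tmp_path, torch_dtype=torch.float16)

    keep_in_fp32 = model_class._keep_in_fp32_modules or []
    for name, param in reloaded.named_parameters():
        expected = torch.float32 if any(m in name for m in keep_in_fp32) else torch.float16
        assert param.dtype == expected, f"{name} is {param.dtype}, expected {expected}"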

Comment on lines -179 to -187
-        # Get model dtype from first parameter
-        model_dtype = next(model_quantized.parameters()).dtype
-
-        inputs = self.get_dummy_inputs()
-        # Cast inputs to model dtype
-        inputs = {
-            k: v.to(model_dtype) if isinstance(v, torch.Tensor) and v.is_floating_point() else v
-            for k, v in inputs.items()
-        }
Member

Why remove them?

Comment on lines -1024 to -1025
-    def test_gguf_quantized_layers(self):
-        self._test_quantized_layers({"compute_dtype": torch.bfloat16})
Member

Why remove? Is it a duplicate?

# See the License for the specific language governing permissions and
# limitations under the License.

import unittest
Member

I am guessing the changes under tests/models/transformers/ were all auto-generated?

class TestWanVACETransformer3DCompile(WanVACETransformer3DTesterConfig, TorchCompileTesterMixin):
    """Torch compile tests for Wan VACE Transformer 3D."""

    def test_torch_compile_repeated_blocks(self):
Member

I think we can further simplify this test by letting users pass a recompile_limit. I will open a PR.
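
For the curious, a hedged sketch of what a configurable limit could look like. The parameter name recompile_limit mirrors the suggestion above; cache_size_limit is the existing torch._dynamo config knob that bounds recompilations per compiled function (newer PyTorch versions expose it as recompile_limit):

import torch

def run_with_recompile_limit(model, inputs, recompile_limit=1):
    # Start from a clean compile cache so earlier tests don't count against
    # the limit.
    torch._dynamo.reset()
    # cache_size_limit bounds how many times dynamo may recompile a function
    # before falling back; a low limit makes unexpected recompiles visible.
    with torch._dynamo.config.patch(cache_size_limit=recompile_limit):
        compiled = torch.compile(model)
        with torch.no_grad():
            return compiled(**inputs)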
