
Conversation

@sayakpaul (Member) commented Nov 4, 2025:

What does this PR do?

Some things came up in the tests that needed fixing, but LMK if you would like me to revert them.

Will propagate the changes from #12579 once it's merged.

```python
@property
def inputs(self) -> List[InputParam]:
    return [
        InputParam("latents"),
```
sayakpaul (Member Author): Similar to how it's done in the other pipelines.

```diff
         )
     ]
-    * block_state.batch_size
+    for _ in range(block_state.batch_size)
```
sayakpaul (Member Author) commented Nov 4, 2025:

We have two options:

1. Either this (the list comprehension in the diff above), or
2. how it's done in edit:

```python
img_shapes = [
    [
        (1, height // self.vae_scale_factor // 2, width // self.vae_scale_factor // 2),
        (1, calculated_height // self.vae_scale_factor // 2, calculated_width // self.vae_scale_factor // 2),
    ]
] * batch_size
```

Regardless, the current implementation isn't exactly the same as the standard pipeline's, and it would break the batched-input tests we have. See the sketch below for how the two options compare.
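To make the tradeoff concrete, here's a minimal sketch (dimensions and variable names are illustrative, not from the PR):

```python
batch_size = 2
latent_h = latent_w = 4

# Option 1: list comprehension, a fresh inner list per batch element.
shapes_comprehension = [[(1, latent_h, latent_w)] for _ in range(batch_size)]

# Option 2: list multiplication, as in the edit pipeline.
shapes_multiplied = [[(1, latent_h, latent_w)]] * batch_size

# Both compare equal, with one entry per batch element.
assert shapes_comprehension == shapes_multiplied

# But multiplication aliases the same inner list across entries, while
# the comprehension creates independent ones; harmless for read-only
# shape tuples, but it matters if an entry is ever mutated in place.
assert shapes_multiplied[0] is shapes_multiplied[1]
assert shapes_comprehension[0] is not shapes_comprehension[1]
```

Either way, whichever form is chosen should mirror the standard pipeline so the batched-input tests see identical shapes.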

```python
block_state = self.get_block_state(state)

# YiYi Notes: remove support for output_type = "latents"; we can just skip the decode/encode step in modular
vae_scale_factor = 2 ** len(components.vae.temperal_downsample)
```
sayakpaul (Member Author): Keeping `vae_scale_factor` fixed to 8, for example, would break the tests because we use a smaller VAE.
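For illustration, a toy sketch of why deriving the factor from the VAE matters; the classes below are hypothetical stand-ins, and `temperal_downsample` follows the attribute name in the snippet above:

```python
# The snippet computes vae_scale_factor = 2 ** len(vae.temperal_downsample),
# so the factor tracks how many downsample stages the VAE actually has.
class FullSizeVAE:
    temperal_downsample = [True, True, True]  # 3 stages

class TinyTestVAE:
    temperal_downsample = [True]  # 1 stage, as a small test VAE might have

assert 2 ** len(FullSizeVAE.temperal_downsample) == 8  # the usual 8
assert 2 ** len(TinyTestVAE.temperal_downsample) == 2  # not 8!
```

A hard-coded 8 would therefore mis-size latents for the tiny VAE used in the fast tests.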

Comment on lines +506 to +507
```python
block_state.negative_prompt_embeds = None
block_state.negative_prompt_embeds_mask = None
```
sayakpaul (Member Author): Otherwise, runs without CFG would break.
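To illustrate the failure mode with a runnable toy (the surrounding logic is hypothetical, not the actual modular API; only the two assignments mirror the snippet):

```python
from types import SimpleNamespace

block_state = SimpleNamespace()
do_classifier_free_guidance = False  # a "no CFG" setting

if do_classifier_free_guidance:
    block_state.negative_prompt_embeds = ...       # real embeddings here
    block_state.negative_prompt_embeds_mask = ...
else:
    # Without these assignments the attributes would not exist at all,
    # and downstream code that reads them would raise AttributeError;
    # None lets that code branch on "no negatives present".
    block_state.negative_prompt_embeds = None
    block_state.negative_prompt_embeds_mask = None

assert block_state.negative_prompt_embeds is None  # safe to read
```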

Comment on lines +132 to +142
```python
@pytest.mark.xfail(condition=True, reason="Batch of multiple images needs to be revisited", strict=True)
def test_num_images_per_prompt(self):
    super().test_num_images_per_prompt()

@pytest.mark.xfail(condition=True, reason="Batch of multiple images needs to be revisited", strict=True)
def test_inference_batch_consistent(self):
    super().test_inference_batch_consistent()

@pytest.mark.xfail(condition=True, reason="Batch of multiple images needs to be revisited", strict=True)
def test_inference_batch_single_identical(self):
    super().test_inference_batch_single_identical()
```
sayakpaul (Member Author): These are skipped in the standard pipeline tests, too.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@sayakpaul (Member Author) commented Nov 4, 2025:

With the latest transformers, the modular tests in this PR are failing. Checking.

Update: checking with the transformers team. Internal link:
https://huggingface.slack.com/archives/C014N4749J9/p1762245891630079

All of those tests pass with transformers 4.57.1.

@DN6 (Collaborator) left a comment: Just comments on aligning with the new testing structure.

```python
from ..test_modular_pipelines_common import ModularPipelineTesterMixin


class QwenImageModularTests:
```
DN6 (Collaborator):
Suggested change:

```diff
-class QwenImageModularTests:
+class QwenImageModularTesterMixin:
```

Comment on lines +38 to +43
```python
pipeline_class = QwenImageModularPipeline
pipeline_blocks_class = QwenImageAutoBlocks
repo = "hf-internal-testing/tiny-qwenimage-modular"

params = frozenset(["prompt", "height", "width", "negative_prompt", "attention_kwargs", "image", "mask_image"])
batch_params = frozenset(["prompt", "negative_prompt", "image", "mask_image"])
```
DN6 (Collaborator): Let's move these into the actual test object, e.g. QwenImageModularPipelineFastTests:

```python
params = frozenset(["prompt", "height", "width", "negative_prompt", "attention_kwargs", "image", "mask_image"])
batch_params = frozenset(["prompt", "negative_prompt", "image", "mask_image"])
```

```python
def get_pipeline(self, components_manager=None, torch_dtype=torch.float32):
```
DN6 (Collaborator): Can just use the one defined in ModularPipelineTesterMixin.
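A sketch of the intended shape (simplified stand-ins for the real classes; the mixin body here is illustrative, not the actual implementation):

```python
class ModularPipelineTesterMixin:
    # Shared helper defined once; the real mixin builds the pipeline
    # from self.repo and friends. This stand-in just shows the reuse.
    def get_pipeline(self, components_manager=None, torch_dtype="float32"):
        return f"pipeline({self.repo}, dtype={torch_dtype})"

class QwenImageModularPipelineFastTests(ModularPipelineTesterMixin):
    repo = "hf-internal-testing/tiny-qwenimage-modular"
    # No get_pipeline() override needed; the mixin's version is reused.

print(QwenImageModularPipelineFastTests().get_pipeline())
```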

```python
    pipeline.set_progress_bar_config(disable=None)
    return pipeline

def get_dummy_inputs(self, device, seed=0):
```
DN6 (Collaborator): Can move into the actual test object, e.g. QwenImageModularPipelineFastTests.

```python
    return inputs


class QwenImageModularGuiderTests:
```
DN6 (Collaborator):
Suggested change:

```diff
-class QwenImageModularGuiderTests:
+class QwenImageModularGuiderTesterMixin:
```

Comment on lines +90 to +91
```python
class QwenImageModularPipelineFastTests(
    QwenImageModularTests, QwenImageModularGuiderTests, ModularPipelineTesterMixin, unittest.TestCase
```
DN6 (Collaborator): Remove unittest.

Suggested change:

```diff
-class QwenImageModularPipelineFastTests(
-    QwenImageModularTests, QwenImageModularGuiderTests, ModularPipelineTesterMixin, unittest.TestCase
+class TestQwenImageModularPipelineFast(
+    QwenImageModularTests, QwenImageModularGuiderTests, ModularPipelineTesterMixin
```
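For context on why the rename pairs with dropping unittest (the classes below are illustrative): pytest collects classes whose names start with `Test` without requiring `unittest.TestCase`, while mixins with other names are never collected on their own and only contribute inherited tests.

```python
class SharedCheckMixin:
    # Not collected by itself: the name doesn't start with "Test".
    def test_shared_behavior(self):
        assert True

class TestExamplePipelineFast(SharedCheckMixin):
    # Collected by pytest via its name; runs the inherited test above.
    pass
```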

@sayakpaul (Member Author): I will wait for your PR #12579 to get merged first; that makes it easier for me to apply your suggestions.
