
Commit 7238105 (parent: a71eb74)

Test on attention type and automatically modify flash block sizes object when 'tokamax_flash' requested

Signed-off-by: Kunjan Patel <kunjanp@google.com>

File tree: 1 file changed (+1, −1)


src/maxdiffusion/tests/wan_transformer_test.py (1 addition, 1 deletion)

@@ -240,7 +240,7 @@ def test_wan_attention(self):
         query_dim=query_dim,
         heads=40,
         dim_head=128,
-        attention_kernel=config.attention,
+        attention_kernel=attention_kernel,
         mesh=mesh,
         flash_block_sizes=flash_block_sizes,
     )
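The change above lets the test pass an explicit per-case attention kernel instead of the global `config.attention`, which is what allows the flash block sizes to be switched when the 'tokamax_flash' kernel is requested. A minimal sketch of that pattern, assuming hypothetical helper and parameter names (`resolve_flash_block_sizes`, `default_block_sizes`, `tokamax_block_sizes` are illustrative, not from the maxdiffusion codebase):

```python
# Sketch: pick flash block sizes based on the requested attention kernel.
# The helper name and block-size dicts below are assumptions for illustration.

def resolve_flash_block_sizes(attention_kernel, default_block_sizes, tokamax_block_sizes):
    """Return the block-sizes object matched to the requested kernel."""
    if attention_kernel == "tokamax_flash":
        # The Tokamax kernel may need different tile shapes than the
        # default flash attention kernel, so swap the object here.
        return tokamax_block_sizes
    return default_block_sizes

# Usage: each test case supplies its own kernel name, mirroring the diff,
# which replaced config.attention with a per-case attention_kernel variable.
for attention_kernel in ("flash", "tokamax_flash"):
    flash_block_sizes = resolve_flash_block_sizes(
        attention_kernel,
        default_block_sizes={"block_q": 128, "block_kv": 128},
        tokamax_block_sizes={"block_q": 64, "block_kv": 64},
    )
```

This keeps the test parameterized: the attention module under test receives `attention_kernel` and `flash_block_sizes` that agree with each other, rather than always reading the kernel from the global config.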
