
Conversation

@ayushtues (Contributor)

What does this PR do?

Add F5 TTS #10043

@ayushtues mentioned this pull request Jul 19, 2025

@ayushtues (Contributor Author)

Okay, got all the needed code into two files and used existing diffusers primitives in a few easy-to-spot places. Next I'll work on integrating it into the diffusers class structure.

@ayushtues commented Jul 21, 2025

Attention!

It seems we can use the diffusers Attention class directly, but we need to add a new processor to support RoPE embeddings on selected heads, as in F5.
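
Not part of the PR itself, just a rough sketch of what such a processor could look like, assuming a (cos, sin) rotary table is passed in and only the first few heads get RoPE. The names `PartialRoPEAttnProcessor`, `rope_heads`, `rotary_emb`, and `apply_rotary_emb` are illustrative, not the actual F5 or diffusers API:

```python
# Sketch only: applies RoPE to the first `rope_heads` heads of Q/K and leaves the rest untouched.
import torch
import torch.nn.functional as F
from diffusers.models.attention_processor import Attention


def apply_rotary_emb(x, freqs_cos, freqs_sin):
    # "Rotate-half" RoPE; freqs_* are assumed to broadcast against (..., seq, head_dim // 2).
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat([x1 * freqs_cos - x2 * freqs_sin, x2 * freqs_cos + x1 * freqs_sin], dim=-1)


class PartialRoPEAttnProcessor:
    def __init__(self, rope_heads: int):
        self.rope_heads = rope_heads  # number of leading heads that receive RoPE

    def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None,
                 attention_mask=None, rotary_emb=None):
        batch = hidden_states.shape[0]
        context = encoder_hidden_states if encoder_hidden_states is not None else hidden_states

        query, key, value = attn.to_q(hidden_states), attn.to_k(context), attn.to_v(context)
        head_dim = query.shape[-1] // attn.heads
        query = query.view(batch, -1, attn.heads, head_dim).transpose(1, 2)
        key = key.view(batch, -1, attn.heads, head_dim).transpose(1, 2)
        value = value.view(batch, -1, attn.heads, head_dim).transpose(1, 2)

        if rotary_emb is not None:
            cos, sin = rotary_emb
            h = self.rope_heads
            query = torch.cat([apply_rotary_emb(query[:, :h], cos, sin), query[:, h:]], dim=1)
            key = torch.cat([apply_rotary_emb(key[:, :h], cos, sin), key[:, h:]], dim=1)

        out = F.scaled_dot_product_attention(query, key, value, attn_mask=attention_mask)
        out = out.transpose(1, 2).reshape(batch, -1, attn.heads * head_dim)
        return attn.to_out[1](attn.to_out[0](out))  # output projection, then dropout
```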

@ayushtues commented Jul 26, 2025

Tokenization

F5 uses a character-level tokenizer for the text, so we might want to write a simple tokenizer class for it.

It might be fine to keep it as a simple function for now, since it's very straightforward; a sketch of the idea follows.
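
Just to illustrate the point (not final code), the lookup could be as small as this; the `vocab` contents and the unknown-character fallback are placeholders:

```python
# Rough sketch of F5-style character-level tokenization: a plain dict lookup.
from typing import Dict, List


def tokenize_chars(texts: List[str], vocab: Dict[str, int]) -> List[List[int]]:
    """Map each string to a list of per-character indices (0 = unknown here, purely illustrative)."""
    return [[vocab.get(ch, 0) for ch in text] for text in texts]


# Toy vocabulary just for the example
vocab = {ch: i + 1 for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz ,.!?")}
print(tokenize_chars(["hello world"], vocab))
```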

@ayushtues commented Jul 29, 2025

Tests

The basic structure looks good now; let's add some tests and then make it more diffusers friendly. Adding tests will also force me to follow the structure more closely and ensure the code is not buggy.

@ayushtues commented Jul 29, 2025

Flow matching/Schedulers

We'll also need to use one of the schedulers from Diffusers. I think they use the simple Euler method only, but the sway sampling step needs to be accounted for somehow; since it's just a change in the discretisation schedule, it should be straightforward.

@ayushtues (Contributor Author)

Future work

  • Support streaming (already in the original F5 repo), although this is really chunk-based inference. The current model is non-causal, so only chunk-based streaming makes sense anyway.
  • Triton server inference, again already available in the F5 repo.

@ayushtues commented Aug 3, 2025

Current status

  • Pipeline forward pass working
  • Checkpoint converted to HF format
  • Forward pass from the pipeline matches the original F5
  • Scheduler

To do

  • Tests

@ayushtues (Contributor Author)

Got the same forward pass outputs as the original F5! Next up: writing some tests.

@ayushtues (Contributor Author)

Scheduler done! FlowMatchEulerDiscreteScheduler is what we want to use, with slight modifications for sway sampling; a sketch of the modified schedule is below.
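
A sketch of how the sway-sampled schedule could be wired in. The warp t ← t + s · (cos(πt/2) − 1 + t) follows the F5-TTS reference code; mapping it onto the scheduler's sigmas (and relying on a diffusers version whose `set_timesteps` accepts custom sigmas) is an assumption on my part:

```python
import torch
from diffusers import FlowMatchEulerDiscreteScheduler

num_steps = 32
sway_coef = -1.0  # negative coefficient pushes more steps toward small t (F5's default is -1.0)

# Uniform grid in [0, 1), warped by sway sampling (the terminal value is appended by the scheduler).
t = torch.linspace(0, 1, num_steps + 1)[:-1]
t = t + sway_coef * (torch.cos(torch.pi / 2 * t) - 1 + t)

# diffusers' flow-matching sigmas run 1 -> 0, so sigma = 1 - t here (assumed mapping).
sigmas = (1.0 - t).numpy()

scheduler = FlowMatchEulerDiscreteScheduler()
scheduler.set_timesteps(num_inference_steps=num_steps, sigmas=sigmas)
```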

@ayushtues commented Aug 21, 2025

@asomoza I was writing some tests for this and was confused about why, in the common test _test_attention_slicing_forward_pass, the generator_device is set to cpu while torch_device can be anything. This currently breaks things for me when my device is cuda, or mps in the case of a Mac.

Ref: https://github.com/ayushtues/diffusers/blob/cde02b061b6f13012dfefe76bc8abf5e6ec6d3f3/tests/pipelines/test_pipelines_common.py#L1551

The same is true for some other tests that set generator_device to cpu. A minimal illustration of the mismatch is below.
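
For context, a simplified reproduction of the mismatch (toy shapes, not the actual test code):

```python
import torch

torch_device = "cuda" if torch.cuda.is_available() else "cpu"
generator = torch.Generator(device="cpu").manual_seed(0)

# This raises a RuntimeError when torch_device is "cuda"/"mps", because the
# generator's device must match the device being sampled on:
#   latents = torch.randn((1, 4, 8, 8), generator=generator, device=torch_device)

# Sampling on the generator's device and moving the result afterwards works:
latents = torch.randn((1, 4, 8, 8), generator=generator, device="cpu").to(torch_device)
```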

@ayushtues (Contributor Author)

Also, any suggestions on how to add the character-level tokenization of F5? It's just a simple character-to-index lookup, but I'm not sure whether to make a new tokenizer class for it or just save it as a dict and load it somehow.

@ayushtues marked this pull request as ready for review August 25, 2025 02:14