I am trying to reproduce some of the results one gets by calling fal.ai for LTX, specifically this endpoint: https://fal.ai/models/fal-ai/ltx-2.3/text-to-video?utm_source=landing&utm_campaign=ltx-2.3&utm_content=hero
However, the results from fal.ai are much better than what we get from running inference ourselves (the two-stage pipeline). So I am wondering whether you have any information about the model hosted on fal and the inference parameters used there.
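For context, this is roughly what the hosted call looks like, a minimal sketch using the official fal_client Python library; the endpoint id is assumed from the URL above, and the arguments are placeholders, since the server-side defaults are exactly what I am asking about:

```python
# Minimal sketch of calling the hosted endpoint with the official fal_client
# Python library. The endpoint id is assumed from the URL above; everything
# not passed in `arguments` falls back to fal's server-side defaults, which
# are the parameters in question.
import fal_client

result = fal_client.subscribe(
    "fal-ai/ltx-2.3/text-to-video",  # assumed endpoint id
    arguments={
        "prompt": "A calico cat walking through tall grass at sunset",
        # resolution, number of frames, guidance scale, sampling steps, etc.
        # are intentionally omitted so the hosted defaults apply.
    },
)
print(result)
```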
Best,