Different quality than the model on fal.ai #199

@os1a

Description

I am trying to reproduce some of the results one gets when calling fal.ai for LTX, mainly this: https://fal.ai/models/fal-ai/ltx-2.3/text-to-video?utm_source=landing&utm_campaign=ltx-2.3&utm_content=hero

However, the results from fal.ai are much better than what we get from running the local inference (two-stage pipeline). So I am wondering whether you have any information about the hosted model on fal and what parameters are used for inference there.
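For reference, this is a minimal sketch of how I am invoking the hosted endpoint (the endpoint id is taken from the URL above; the parameter names and values in the payload are my assumptions, not confirmed fal.ai settings — the actual hosted configuration is exactly what I am asking about):

```python
import json

# Hypothetical request payload for the hosted text-to-video endpoint.
# Parameter names here are guesses based on typical text-to-video APIs,
# not confirmed fal.ai values.
payload = {
    "prompt": "A red fox running through fresh snow at dusk",
    "seed": 42,  # pinning the seed makes hosted vs. local runs comparable
}

# With the fal_client package the request would be submitted roughly as:
#   import fal_client
#   result = fal_client.subscribe(
#       "fal-ai/ltx-2.3/text-to-video", arguments=payload
#   )
print(json.dumps(payload, indent=2))
```

Even with the same prompt and seed, the local two-stage pipeline output does not match the hosted quality, which is why I suspect different inference parameters on the hosted side.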

Best,
