Fine-grained Hallucination in Numerical Values Despite Citation and Judger Implementation #167

@YeLyea

Description

Hello,

I see that your implementation of citation and judging relies on prompts to keep the output credible. However, there is a problem: large language models are inherently prone to hallucination, especially with numbers. An answer may be broadly correct while its numbers are silently altered (for example, "87%" becoming "85%", or "2021" becoming "2022"). Have you considered and addressed this issue?
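For illustration, one possible mitigation (not part of this repository; the function names below are hypothetical) is a post-hoc consistency check: extract the numeric tokens from the generated answer and verify that each one literally appears in the cited source text, flagging any that do not.

```python
import re

# Matches integers, decimals, and percentages, e.g. "2021", "3.5", "87%".
NUMBER_PATTERN = re.compile(r"\d+(?:\.\d+)?%?")

def extract_numbers(text):
    """Return the set of numeric tokens found in the text."""
    return set(NUMBER_PATTERN.findall(text))

def check_numeric_consistency(answer, cited_sources):
    """Return numbers in the answer that appear in no cited source.

    A non-empty result suggests the model may have altered a value
    (e.g. "87%" drifting to "85%") and the answer should be re-checked.
    """
    source_numbers = set()
    for src in cited_sources:
        source_numbers |= extract_numbers(src)
    return extract_numbers(answer) - source_numbers

unsupported = check_numeric_consistency(
    "Accuracy improved to 85% in 2022.",
    ["The 2021 study reported an accuracy of 87%."],
)
print(sorted(unsupported))  # ['2022', '85%']
```

This only catches verbatim mismatches; paraphrased units or rounded values ("0.87" vs "87%") would need normalization on top, but even a literal check can flag the silent digit swaps described above.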

Thank you for your consideration!
