Feature/trust eval integration #17670
base: main
Conversation
from trust_eval.retrieval import retrieve
...
class TrustScoreEvaluator:
It might be better if this implemented the actual base classes we have for evaluation?
class BaseEvaluator(PromptMixin):
Similar to the other evaluation integrations
Line 12 in 659f9fa
class AnswerConsistencyEvaluator(BaseEvaluator):
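Roughly this shape, as a sketch only (the trust_eval call and the passing threshold below are placeholders, not working code):

from typing import Any, Optional, Sequence

from llama_index.core.evaluation import BaseEvaluator, EvaluationResult
from llama_index.core.prompts.mixin import PromptDictType


class TrustScoreEvaluator(BaseEvaluator):
    """Sketch: expose a trust_eval metric through the standard evaluator interface."""

    def _get_prompts(self) -> PromptDictType:
        # This evaluator does not use any prompts.
        return {}

    def _update_prompts(self, prompts: PromptDictType) -> None:
        pass

    async def aevaluate(
        self,
        query: Optional[str] = None,
        response: Optional[str] = None,
        contexts: Optional[Sequence[str]] = None,
        **kwargs: Any,
    ) -> EvaluationResult:
        # Placeholder: compute the per-query trust score with trust_eval here.
        score = 0.0
        return EvaluationResult(
            query=query,
            response=response,
            contexts=contexts,
            score=score,
            passing=score >= 0.5,  # illustrative threshold
        )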
Hi Logan, thank you for reviewing my PR. I did look at how the others implemented it, but all of them implement BaseEvaluator to process query by query. For my integration, some of the metrics work at a macro level (i.e. over several queries) and cannot be computed for a single query. Am I able to change aevaluate to take in multiple queries? What's the best way to do this? Grateful for any advice you can offer!
@abstractmethod
async def aevaluate(
    self,
    query: Optional[str] = None,
    response: Optional[str] = None,
    contexts: Optional[Sequence[str]] = None,
    **kwargs: Any,
) -> EvaluationResult:
    """Run evaluation with query string, retrieved contexts,
    and generated response string.

    Subclasses can override this method to provide custom evaluation logic and
    take in additional arguments.
    """
    raise NotImplementedError
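Something like this is the shape I had in mind (just a sketch; changing the signature to take lists is my own suggestion rather than the existing API, and the macro score is a placeholder for the real trust_eval computation):

from typing import Any, List, Optional, Sequence

from llama_index.core.evaluation import EvaluationResult


class TrustScoreEvaluator:  # would subclass BaseEvaluator as suggested above
    async def aevaluate(
        self,
        queries: Optional[List[str]] = None,
        responses: Optional[List[str]] = None,
        contexts: Optional[List[Sequence[str]]] = None,
        **kwargs: Any,
    ) -> List[EvaluationResult]:
        # The macro-level metrics would be computed here over the whole batch
        # of queries/responses, then attached to each per-query result.
        macro_score = 0.0  # placeholder for the trust_eval computation
        return [
            EvaluationResult(query=q, response=r, score=macro_score)
            for q, r in zip(queries or [], responses or [])
        ]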
That does make sense. Hmmm. Is there a way that those metrics can be tracked under a different method?
The only reason I ask for this is that the integration does not even depend on anything from llama-index right now (not using our documents/nodes, not using our llms or embeddings, not using our query engines, etc.), so it doesn't quite make sense to merge 🤔
llama-index isn't a dependency of the package either lol, so it seems like users should just use the trust_eval package, or this code could live inside trust_eval
Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes # (issue)
New Package?
Did I fill in the tool.llamahub section in the pyproject.toml and provide a detailed README.md for my new integration or package?
Version Bump?
Did I bump the version in the pyproject.toml file of the package I am updating? (Except for the llama-index-core package)
Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Your pull-request will likely not be merged unless it is covered by some form of impactful unit testing.
Suggested Checklist:
make format; make lint to appease the lint gods