Objective: bring the final similarity score into the range 0 to 1, so that I can set a threshold such that score >= threshold means the document is relevant.
As I understand it, the final score is the sum, over all query tokens, of the maximum cosine similarity between that query token and the document's tokens. So dividing the final score by the number of query tokens should give a normalized score (between 0 and 1).
But that does not hold in practice: after dividing the final score by the number of query tokens, I get scores greater than 1 for some queries.
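To make my assumption concrete, here is a minimal sketch of the scoring I have in mind (not RAGatouille's actual internals); Q and D are hypothetical, L2-normalized per-token embedding matrices:

import numpy as np

def maxsim_score(Q: np.ndarray, D: np.ndarray) -> float:
    # Cosine similarity of every query token against every document token;
    # Q has shape (num_query_tokens, dim), D has shape (num_doc_tokens, dim).
    sim = Q @ D.T
    # For each query token, keep its best-matching document token, then sum.
    return float(sim.max(axis=1).sum())

def normalized_score(Q: np.ndarray, D: np.ndarray) -> float:
    # If every entry of sim is at most 1, this should be bounded by 1,
    # which is the normalization I expected to work.
    return maxsim_score(Q, D) / Q.shape[0]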
This is how I am loading the tokenizer (to count the query tokens), the model, and the index:
from transformers import AutoTokenizer
from ragatouille import RAGPretrainedModel

# The tokenizer is only used to count the query tokens for normalization.
tokenizer = AutoTokenizer.from_pretrained("path to model directory")

RAG = RAGPretrainedModel.from_pretrained("path to model directory")
index = RAG.from_index("path to index")

text = "query"
# Divide the top result's score by the number of query tokens.
normalized_score = index.search(text)[0]['score'] / len(tokenizer.tokenize(text))
normalized_score comes out greater than 1 for some of the queries.
I would greatly appreciate any help!