
NTCIREVAL #20

Open
seanmacavaney opened this issue Jun 23, 2021 · 0 comments

Add an NTCIREVAL provider. It includes several measures not yet covered by this software.

Write a Python interface to NTCIREVAL, or use pyNTCIREVAL?

There's pyNTCIREVAL, which would certainly be easier to incorporate. The downside is that it's a port of the software to Python rather than an interface (the way pytrec_eval is an interface to trec_eval).

Writing a new Python interface to NTCIREVAL would be a lot of work (especially after taking a cursory look over the code), and would live in a separate repo if this route is pursued. An advantage is that it should (theoretically) then be easier to support new measures as they are incorporated into the official software. This direction deserves due consideration, since NTCIREVAL is updated periodically.
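If the interface route is taken, the wrapper could be a thin subprocess layer like the sketch below. Note the binary name `ntcir_eval`, the flag names, and the `name= value` output format are all assumptions for illustration here, not the confirmed CLI; only the command-building and output-parsing helpers are meant to survive contact with the real tool.

```python
import subprocess
from typing import Dict, List

def build_command(rel_file: str, grades: int, binary: str = "ntcir_eval") -> List[str]:
    """Assemble an NTCIREVAL invocation (flag names are illustrative)."""
    return [binary, "compute", "-r", rel_file, "-g", str(grades)]

def parse_output(text: str) -> Dict[str, float]:
    """Parse 'name= value' score lines (assumed output format) into a dict,
    skipping '#'-prefixed comment lines and anything non-numeric."""
    scores: Dict[str, float] = {}
    for line in text.splitlines():
        if "=" in line and not line.startswith("#"):
            name, _, value = line.partition("=")
            try:
                scores[name.strip()] = float(value)
            except ValueError:
                pass  # not a score line
    return scores

def evaluate(rel_file: str, run_lines: str, grades: int = 3) -> Dict[str, float]:
    """Feed a ranked list to the binary on stdin and parse its scores."""
    proc = subprocess.run(build_command(rel_file, grades),
                          input=run_lines, capture_output=True,
                          text=True, check=True)
    return parse_output(proc.stdout)
```

This is roughly the pytrec_eval-style division of labor: the provider owns formatting and parsing, while the official binary owns the measure definitions.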

Supported Measures

From this page:

  • Average Precision
  • Q-measure
  • nDCG
  • Expected Reciprocal Rank (ERR)
  • Graded Average Precision (GAP)
  • Rank-Biased Precision (RBP)
  • Expected Blended Ratio (EBR)
  • intentwise Rank-Biased Utility (iRBU)
  • Normalised Cumulative Utility (NCU)
  • Condensed-List versions of the above metrics
  • Bpref
  • D#-measures and DIN#-measures for diversity evaluation
  • Intent-Aware (IA) metrics and P+Q# for diversity evaluation

(Looks like there are others too)
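As a sketch of what such a provider would expose, two of the listed measures are simple enough to state directly. The functions below are illustrative implementations of the standard RBP and ERR formulas, not NTCIREVAL's own code, and the parameter defaults are assumptions.

```python
from typing import List

def rbp(gains: List[float], p: float = 0.8) -> float:
    """Rank-Biased Precision (Moffat & Zobel):
    (1 - p) * sum of gain_i * p^(i-1), with gains scaled to [0, 1]."""
    return (1 - p) * sum(g * p ** i for i, g in enumerate(gains))

def err(grades: List[int], max_grade: int = 3) -> float:
    """Expected Reciprocal Rank (Chapelle et al.) over graded judgments:
    the expected reciprocal rank at which a user stops, where the stop
    probability at each rank is (2^grade - 1) / 2^max_grade."""
    score, not_stopped = 0.0, 1.0
    for rank, g in enumerate(grades, start=1):
        r = (2 ** g - 1) / 2 ** max_grade  # stop probability at this rank
        score += not_stopped * r / rank
        not_stopped *= 1 - r
    return score
```

For example, `rbp([1.0, 0.0, 1.0], p=0.5)` gives 0.625, and `err([3, 3])` gives about 0.93 with the default 0-3 grade scale.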
