In most papers, we only see mAP, which is a good aggregate metric for comparing models, but it fails to provide further insight. The inclusion of F1 in the leaderboard is a great step! Going one step further, between precision and recall, specific applications may prefer one over the other. If we are searching for an important object in satellite imagery, we would usually accept lower precision to gain high recall — for example, when searching for objects in the vicinity of national borders that are of national-security concern.
If we include Precision/Recall in the leaderboard, we can potentially answer the following and similar questions:
For the same recall, which model provides better precision?
Which model provides the highest recall at a specified precision threshold? For example, we may want to constrain precision to be > 0.5.
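The second question above can be answered directly from each model's precision–recall curve. A minimal sketch of the idea, assuming binary per-detection ground-truth labels and confidence scores (the helper name and the toy data are hypothetical, not from any leaderboard code):

```python
import numpy as np

def recall_at_precision(y_true, scores, min_precision=0.5):
    """Highest recall achievable while keeping precision above min_precision.

    Hypothetical helper for illustration: sweeps the score threshold from
    high to low and reads recall off the points where the precision bound holds.
    """
    order = np.argsort(-np.asarray(scores))       # rank detections by confidence
    y = np.asarray(y_true)[order]
    tp = np.cumsum(y)                             # true positives at each cutoff
    fp = np.cumsum(1 - y)                         # false positives at each cutoff
    precision = tp / (tp + fp)
    recall = tp / y.sum()
    ok = precision > min_precision
    return float(recall[ok].max()) if ok.any() else 0.0

# Toy comparison: model B ranks all positives first, model A ranks them poorly.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
model_a = [0.6, 0.9, 0.5, 0.4, 0.8, 0.7, 0.3, 0.2]
model_b = [0.9, 0.2, 0.8, 0.7, 0.3, 0.1, 0.6, 0.4]

print(recall_at_precision(y_true, model_a, min_precision=0.7))
print(recall_at_precision(y_true, model_b, min_precision=0.7))
```

Under a precision bound of 0.7, the two models separate clearly even if their aggregate scores were similar — exactly the kind of comparison mAP alone hides.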