According to the authors in [1], [2], and [3], Recall is the percentage of relevant items retrieved out of all the relevant items in the repository, and Precision is the percentage of relevant items among the items that were retrieved for the request.
Therefore, assuming user U gets a top-k list of recommended items, the formulas would look something like this:
Recall = (Relevant_Items_Recommended in top-k) / (Relevant_Items)
Precision = (Relevant_Items_Recommended in top-k) / (k_Items_Recommended)
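To make the two formulas concrete, here is a minimal Python sketch for a single user; the function name `precision_recall_at_k` and the example lists are my own illustration, not from the cited papers:

```python
def precision_recall_at_k(recommended, relevant, k):
    """Precision and Recall for one user's top-k recommendation list."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))  # relevant items that made it into the top-k
    precision = hits / k                    # share of the k recommendations that are relevant
    recall = hits / len(relevant)           # share of all relevant items that were recommended
    return precision, recall

# Example: 3 of the user's 4 relevant items appear in the top-5 list.
recommended = ["a", "b", "c", "d", "e"]
relevant = ["a", "c", "e", "x"]
print(precision_recall_at_k(recommended, relevant, k=5))  # (0.6, 0.75)
```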
Up to this point everything is clear, but I don't understand how these differ from the @k variants. What is the formula for computing recall@k?
Tags: precision-recall, evaluation, recommendation-engine
Luisa Hernández