Recall@k: recall and precision in top-k recommendations

According to the authors in 1, 2, and 3, recall is the percentage of relevant items selected out of all the relevant items in the repository, and precision is the percentage of relevant items among the items that were selected in the request.

Therefore, assuming user U gets a top-k list of recommended items, the formulas look something like this:

Recall = (Relevant_Items_Recommended in top-k) / (Relevant_Items)

Precision = (Relevant_Items_Recommended in top-k) / (k_Items_Recommended)
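
For concreteness, here is a minimal Python sketch of how I read these two formulas for a single user; the item lists and variable names are made-up examples, not data from the papers:

    # Hypothetical example data: the relevant set and the top-k list are made up.
    relevant = {"A", "B", "C", "D"}                 # all relevant items for user U
    recommended_top_k = ["B", "X", "A", "Y", "Z"]   # the k = 5 recommended items

    hits = sum(1 for item in recommended_top_k if item in relevant)

    recall = hits / len(relevant)               # relevant items recommended / all relevant items
    precision = hits / len(recommended_top_k)   # relevant items recommended / k

    print(recall)      # 2 / 4 = 0.5
    print(precision)   # 2 / 5 = 0.4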

So far this part is all clear, but I don't understand how these differ from the @k versions. What is the formula for calculating recall@k?

Tags: precision-recall, evaluation, recommendation-engine
1 answer

Finally, I received an explanation from Professor Yuri Malheiros (author of paper 1). The recall@k metric cited in the question looks like the ordinary recall metric applied to a top-k list, but the two do not match. This metric is also used in paper 2 and paper 3.

Recall@k is a percentage that depends on the tests performed, that is, on the number of recommendations made, where each recommendation is a list of items, some of which are correct (relevant) and some of which are not. Suppose we made 50 different recommendations; call that number R (regardless of how many items each recommendation contains). To compute recall@k, look at each of the R recommendations and, whenever a recommendation contains at least one correct item, increment a counter, call it N. Recall@k is then N / R.
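
A minimal Python sketch of that counting, assuming each of the R tests is a (top-k list, relevant set) pair; the data below is made up for illustration:

    # Hypothetical example data: R = 3 tests, each a (top-k list, relevant set) pair.
    recommendations = [
        (["A", "X", "Y"], {"A", "B"}),   # hit: "A" is relevant
        (["X", "Y", "Z"], {"A", "B"}),   # miss: no relevant item in the top-k list
        (["B", "C", "X"], {"C"}),        # hit: "C" is relevant
    ]

    R = len(recommendations)
    # N counts the recommendations that contain at least one correct item.
    N = sum(1 for top_k, relevant in recommendations
            if any(item in relevant for item in top_k))

    recall_at_k = N / R
    print(recall_at_k)   # 2 / 3 ~= 0.67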
