Advanced algorithms improve our lives in subtle but consistent ways. Google search results seldom make us go to the second page, and Netflix magically recommends the right movie. These are the outcomes of many rounds of trial and error in the search for the best recommendation algorithms. How do these companies determine which algorithm is best? Evaluating different ranking algorithms is not a trivial task. This is why we developed the Rank Discounted Cumulative Gain (RankDCG) evaluation measure, designed to evaluate and compare rank-ordering algorithms. RankDCG has three fundamental properties: 1) consistent lower and upper score bounds, 2) support for non-normal score distributions, and 3) transitivity of elements within subgroups of the same rank. RankDCG works on a large variety of problems where conventional measures fail.

Problems such as search and recommendation are usually defined by the umbrella term information retrieval (IR). This family of problems deals with retrieving and rank-ordering relevant data, whether movies or web pages, often drawn from a large data source. To know which items are more relevant, we rely on a relevance function that returns a score from 0 (not relevant) up to some positive number. This function helps convert categorical data into ordinal (quantifiable) data. For example, imagine that we want to recommend a movie according to its genre. It is impossible to say that a thriller is better than a drama because movie genres have no inherent hierarchy. However, the relevance function orders movie genres according to a defined rule. For visualization purposes, let's say we have a list of genres with relevance scores [A_3, T_2, D_1], where A stands for action, T for thriller, D for drama, and the subscript defines the relevance score of each element. Now imagine two algorithms that order this movie list according to some rules, and neither returns the perfect result. The question to consider is which algorithm returns the better result, and if one of them is better, by how much? This is where evaluation measures come into play.
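As a minimal sketch, the relevance function can be modeled as a simple lookup from genre to score and used to rank-order the categorical data (the genre names and score values below are illustrative assumptions, not fixed by the measure):

```python
# Hypothetical relevance function: maps a categorical genre to an ordinal score
relevance = {"action": 3, "thriller": 2, "drama": 1}

movies = ["drama", "action", "thriller"]
# Rank-order the genres by their relevance scores, highest first
ranked = sorted(movies, key=relevance.get, reverse=True)
print(ranked)  # ['action', 'thriller', 'drama']
```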

Among the variety of measures, discounted cumulative gain (DCG) is the most notable. It is often used in information retrieval (IR) competitions, including the annual TREC competitions. In its common form, DCG is defined by the following formula:

DCG = sum for i = 1 to n of ( rel_i / log2(i + 1) )

where rel_i is the relevance score of the element at position i.
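This formula translates directly into code. The sketch below uses the common log2-discount variant stated above (DCG has several variants; the one used in the original worked example is an assumption here):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: each score is divided by the log of its
    1-based position, so misplacing top elements costs the most."""
    return sum(rel / math.log2(i + 1) for i, rel in enumerate(relevances, start=1))

# Perfectly ordered genre list [A_3, T_2, D_1] -> relevance scores [3, 2, 1]
print(dcg([3, 2, 1]))  # about 4.76
```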

In other words, applying DCG to both lists shows that the second algorithm works better. Unfortunately, a raw score such as 2.125 is hard to understand. The only way to know how well an algorithm works is by direct comparison with another algorithm or with the ground truth score. This creates the problem of understanding results. For this reason, normalized DCG (nDCG) came into existence. nDCG divides the DCG of the returned list by the DCG of the ideal (ground truth) ordering:

nDCG = DCG / IDCG

In our case, both algorithms receive quite low scores for misplacing A, the action genre with a relevance score of 3. This shows the differences between the algorithms much better. More importantly, reporting a score such as 0.26 gives a much clearer picture of the performance. Unfortunately, nDCG has a few drawbacks as well.
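Under the same assumed log2-discount variant of DCG, nDCG can be sketched as follows; the ideal ordering is simply the list sorted by relevance in descending order:

```python
import math

def dcg(relevances):
    return sum(rel / math.log2(i + 1) for i, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    """Normalize DCG by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 1]))  # 1.0 for a perfect ordering
print(ndcg([2, 3, 1]))  # below 1.0 after misplacing the top element
```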

First of all, nDCG was designed for IR tasks. It works best when the relevance score is binary (0 or 1) and the algorithm only needs to retrieve relevant elements. In that setting, returning all the relevant elements produces the perfect score, and retrieving nothing relevant produces the lowest score of 0. When the task is to reorder a fixed list, however, no permutation of the list ever produces 0. The lowest score is instead defined by the worst possible reordering, which in our example results in an nDCG score of 0.22. To a person not familiar with the problem, 0.22 might sound like reasonable performance. This makes reports confusing because the lowest attainable score can be perceived as a good score.
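This bound problem is easy to verify with a small self-contained check (again assuming the common log2-discount DCG variant): even the worst reordering of a list with positive scores keeps every score in the sum, so the result never reaches 0.

```python
import math

def dcg(relevances):
    return sum(rel / math.log2(i + 1) for i, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    return dcg(relevances) / dcg(sorted(relevances, reverse=True))

# The worst possible reordering of [3, 2, 1] is ascending order,
# yet its nDCG is still well above zero
worst = ndcg([1, 2, 3])
print(worst)  # positive, nowhere near 0
```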

Secondly, this measure does not work well with subgroups. Let us say that the collection contains multiple movies but only two genres: action (value of 2) and drama (value of 1). Since there are only two genres, the problem can be interpreted as a binary classification, and any two orderings that differ only in the placement of movies within the same genre must receive equal scores. Think of it as an example of the transitivity property. In other words, permutations inside subgroups should not change the score.
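The requirement can be made concrete with a toy example (the movie identifiers below are hypothetical): two orderings that only permute movies within the same genre induce identical relevance sequences, so a well-behaved measure must score them identically.

```python
# Two genres only: action movies score 2, drama movies score 1
scores = {"a1": 2, "a2": 2, "d1": 1, "d2": 1, "d3": 1}

order_x = ["a1", "a2", "d1", "d2", "d3"]
order_y = ["a2", "a1", "d3", "d1", "d2"]  # permuted inside each subgroup

rels_x = [scores[m] for m in order_x]
rels_y = [scores[m] for m in order_y]
print(rels_x == rels_y)  # True: the induced relevance sequences match
```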

Lastly, some problems might require predicting the relevance function scores themselves and ordering the list according to this new scoring. If the estimated values are used instead of the original ones, this creates a bias towards assigning larger values to the elements. For instance, if an algorithm assigns inflated scores and is evaluated using this new function, nDCG can come out as 5.232, which is clearly above the intended score range of 1.0.
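The inflation effect can be sketched as follows: if predicted, systematically larger scores feed the numerator while the denominator is still the ideal DCG of the original ground-truth scores, the ratio escapes the [0, 1] range. The numbers here are illustrative assumptions, not the 5.232 from the text.

```python
import math

def dcg(relevances):
    return sum(rel / math.log2(i + 1) for i, rel in enumerate(relevances, start=1))

true_scores = [3, 2, 1]    # ground-truth relevance scores
predicted = [30, 20, 10]   # hypothetical inflated predictions

# Numerator uses predicted scores, denominator the true ideal ordering
inflated = dcg(predicted) / dcg(sorted(true_scores, reverse=True))
print(inflated)  # far above the intended maximum of 1.0
```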

RankDCG was designed to address these and other problems. The main idea of this measure is to:

1) normalize the scores to be in the range from 0 to 1;

2) add a transitive property to subgroups, so that permutations of elements within a subgroup of equal rank receive equal scores;

3) abstract away from the magnitudes of the relevance scores, so that two lists whose scores differ in value but not in rank order must receive the same RankDCG score.
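The first point, score normalization, can be sketched as standard min-max scaling; this is an assumed mechanism for illustration, not necessarily the paper's exact formula:

```python
def normalize(score, worst, best):
    """Min-max normalize a raw ranking score so the worst possible
    ordering maps to 0 and the perfect ordering maps to 1."""
    return (score - worst) / (best - worst)

# Hypothetical raw score 2.125 with assumed worst/best bounds 1.0 and 5.0
print(normalize(2.125, worst=1.0, best=5.0))  # a value inside [0, 1]
```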

Formally, the definition relies on two functions.

The first function is derived by counting the unique relevance scores in the list (for example, a list with scores {3, 1} has two unique scores) and assigning this count to the top element (A_3 => A_2). Each consecutive unique score is reduced by one, so the lowest-ranked elements need no change.
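One way to read this description as code (a sketch under the assumption that ranks are assigned top-down from the count of unique scores):

```python
def rank_scores(relevances):
    """Replace each relevance score by its rank among the unique scores:
    the highest unique score maps to the number of unique scores, each
    consecutive unique score to one less, and the lowest to 1."""
    uniq = sorted(set(relevances))                  # ascending unique scores
    rank = {v: i + 1 for i, v in enumerate(uniq)}
    return [rank[v] for v in relevances]

# Two unique scores {3, 1}: the top element A_3 becomes rank 2,
# the remaining elements keep rank 1
print(rank_scores([3, 1, 1]))  # [2, 1, 1]

# Magnitudes are abstracted away: only the rank ordering matters
print(rank_scores([10, 5, 1]) == rank_scores([3, 2, 1]))  # True
```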

The second function produces the discounting factors: the reverse of the values obtained from the function above, with a correction for the subgroup position so that elements of the same subgroup share one discounting factor.

By adding these two functions to the definition, the measure addresses all the points above in a consistent manner. By using RankDCG, researchers will be able to better understand their results and design algorithms with much higher precision. Evaluating the results of ordering is a hard task on which conventional measures fail to deliver consistent results across a wide range of problems. RankDCG, on the other hand, delivers consistent results on a wider range of problem types. More details and experimental settings can be found in this paper: RankDCG.pdf

The RankDCG Python library can be downloaded from here: RankDCG python library