Good questions!
Firstly, there aren't inherent differences in the testing between underdog and subtop. Just imagine a league with the actual best 100 teams in the world; we then add two teams:
- Subtop - i.e. top 40% of the league (translates to top 8 teams in a 20-team league)
- Underdog - i.e. bottom 40% of the league (translates to worst 8 teams in a 20-team league)
We use the editor to estimate these numbers and add players to each team so that it matches its supposed quality level, making sure every role is covered and reducing the randomness factor to the minimum physically possible with the tools available today.
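As a rough illustration of what those bands mean in table positions (assuming a plain 40% cut of the table, which is a simplification of how we actually pick the reference level), here's a quick sketch:

```python
def quality_bands(league_size: int) -> tuple[range, range]:
    """Positions covered by the 'subtop' and 'underdog' quality bands."""
    cutoff = round(league_size * 0.4)                             # 40% of the teams
    subtop = range(1, cutoff + 1)                                 # top of the table
    underdog = range(league_size - cutoff + 1, league_size + 1)   # bottom of the table
    return subtop, underdog

top, bottom = quality_bands(20)
print(list(top))     # [1, 2, 3, 4, 5, 6, 7, 8]        -> top 8 teams
print(list(bottom))  # [13, 14, 15, 16, 17, 18, 19, 20] -> worst 8 teams
```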
The rating system is based on an algorithm we've worked on and that can still change; this is how it currently works (there's a rough code sketch after the steps below):
Take the three main values you see when opening the Tactics page:
- Number of Matches (NoM)
- Total Points (TP)
- Goal Difference (GD)
Assign weight values
Assuming the worst tactic's GD, let's say -100 for simplicity, we create a MIN value:
- MIN = 250 (the negation of -100 * 2.5, where 2.5 is the GD weight value)
Assuming the best tactic's GD, let's say 100 for simplicity, we can now compute the maximum total:
- TOT = max(TP) * 7.5 + max(GD) * 2.5 + MIN
Finally, divide by x (e.g. 100) to get ratings out of 100, which we later map to stars
- Divide TOT by 100, then floor it (i.e. take the largest integer smaller than or equal to the result)
Adjusting the numbers, we empirically define what's the minimum rating (out of 100) needed to get 5 stars
- Assuming >75 is needed for 5 stars
- We assign one point on the 0 -> 10 scale for each rating band you surpass
- One point = half a star
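To make the arithmetic concrete, here's a rough sketch of the pipeline in Python. The 7.5 / 2.5 weights, the ±100 GD bounds, the divisor of 100 and the ">75 for 5 stars" threshold are the example numbers from the steps above; the intermediate band thresholds and the TP/GD figures in the demo are purely illustrative, not the real values we use in testing.

```python
import math

# Example numbers from the explanation above - not the exact production values.
TP_WEIGHT = 7.5       # weight for Total Points
GD_WEIGHT = 2.5       # weight for Goal Difference
WORST_GD = -100       # simplified worst-case Goal Difference
MIN_OFFSET = -(WORST_GD * GD_WEIGHT)  # = 250, shifts every score above zero
DIVISOR = 100         # the "x" used to bring scores onto a 0-100 rating scale

# Hypothetical rating bands: surpassing each one is worth one point on the
# 0 -> 10 scale, and one point = half a star. Only the ">75 for 5 stars"
# threshold comes from the explanation; the other cut-offs are placeholders.
BANDS = [8, 16, 24, 32, 40, 48, 56, 64, 70, 75]

def rating(total_points: float, goal_difference: float) -> int:
    """Weighted score, shifted by MIN_OFFSET, scaled by DIVISOR and floored."""
    raw = total_points * TP_WEIGHT + goal_difference * GD_WEIGHT + MIN_OFFSET
    return math.floor(raw / DIVISOR)

def stars(rating_value: int) -> float:
    """Half a star for every band the rating surpasses."""
    points = sum(1 for band in BANDS if rating_value > band)
    return points / 2

# Illustrative results for a long test run (made-up TP and GD figures):
for label, tp, gd in [("subtop-like run", 900, 300), ("underdog-like run", 450, -50)]:
    r = rating(tp, gd)
    print(f"{label}: rating {r}, {stars(r):.1f} stars")
```

With the same divisor, a rating only clears the top band when TP and GD are both very high, which is exactly why underdog tactics rarely reach 5 stars.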
This should explain the current results you see. By the way, I'm not surprised, as the vast majority of tactics are built using subtop teams!
Basically, when using an underdog team in such a competitive environment without randomness, it is incredibly unlikely that a tactic produces ratings high enough for us to assign 5 stars to it!
The stars reflect the actual result the tactic brings in the test.
Take Leicester's title win back in 2016 as an example. If we ran 5,000 simulations of the Premier League for that year, Leicester would have won only once! That's 0.02% - and that still accounts for randomness; if we take it out of the equation, as we strive to do, the odds are even smaller!
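To put that number in perspective:

```python
# One title in 5,000 simulated seasons
print(f"{1 / 5000:.2%}")  # 0.02%
```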
I hope this clarifies!