Mr Hough83
Member
- Joined
- Dec 7, 2014
I want to provide some information regarding the tactic testing leagues, or tables, whatever you want to call them. At the time of writing this thread there are three different tactic testing leagues:
TFF's 75 Siblings, Mr L and FM Base.
TFF's 75 Siblings uses a system based on a team's Reputation; that is the only difference between the two sides:
- One has a Lower Reputation (Underdog)
- One has a Higher Reputation (Favourite)
In terms of ability, both sides are identical, as the screenshots below show:
These are just two players from each of the Favourite and Underdog teams. For comparison, I've also included each player's default profile from the standard FM database.
Default Attributes in the FM Database
As you can see, the TFF's 75 Siblings players are somewhat overpowered, especially compared to their default counterparts, and that is where potential problems with tactic testing can lie. For example, with attributes that high, each individual player can create something out of nothing thanks to their flair, or consistently hit the target from distance. In a typical FM save, you're simply not going to have players like that at a club with such a low reputation. To put it in context, imagine multiple Messis and Cristiano Ronaldos playing for Norwich.
Mr L's testing approach is very similar to the theory we use at FM Base, but there are differences. For example, their sub-top team has an average CA of 165 and their underdog team an average CA of 140. The teams within the league are different as well. Beyond that, not much else is known.
Onto our approach at FM Base...
Our tactic tests are based on CA (Current Ability), and we have two teams:
- Sub-top, which has an average CA of 145
- Underdog, which has an average CA of 120
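To make the numeric comparison between the two CA-based methods concrete, here is a small illustrative sketch. The CA figures are the squad averages quoted above; the dictionary layout and the gap calculation are my own illustration, not part of either testing method:

```python
# Squad-average Current Ability (CA) figures quoted in this thread.
# The dict structure and gap calculation are illustrative only.
leagues = {
    "Mr L":    {"sub-top": 165, "underdog": 140},
    "FM Base": {"sub-top": 145, "underdog": 120},
}

def ca_gap(teams):
    """Difference in squad-average CA between the strong and weak side."""
    return teams["sub-top"] - teams["underdog"]

for name, teams in leagues.items():
    print(f"{name}: sub-top {teams['sub-top']}, "
          f"underdog {teams['underdog']}, gap {ca_gap(teams)}")
```

Interestingly, both methods keep the same 25-point spread between the two sides; what differs is the absolute level, with FM Base's squads sitting 20 CA lower across the board.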
We believe this gives a more realistic approach and should match better with what you see when you use these tactics in your own FM saves. If you'd like a more in-depth look at our testing method, it has its own thread covering the key highlights. We'd also love to have your feedback on our testing method too!
Conclusion
As you can see in the screenshots, the teams in each test are very different, which is why the results differ.
Which one is right? That's not for me to say; I'm just showcasing the differences between the methods.
Thank you for reading and, most importantly, for your precious time.