Most Comprehensive Tactic Test in FM-Base History

  • Thread starter: Lucho616
  • Replies: 500
  • Views: 378K
Which one? In that zip file there are three tactics, one asymmetric 2-3-2-3, one standard 2-3-2-3 and one 3-4-3.

I've tested the standard one, and it's very strong in terms of possession and passing %. The asymmetric one will probably perform better though, as it offers more of a goal threat. The 3-4-3 is strikerless; it looks intriguing, but I don't know of anyone who's used it.
 
So, which one do you want to test, the asymmetric or the standard one? :)
 
Did you guys test more matches to see how much the results vary from season to season?

I'm testing a similar scenario, but with all players positionally equal and with frozen morale/fitness and max match prep for every match. I tested the same tactic for 8 seasons x 48 games. Points varied from 1.58/game to 1.94/game.
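A rough illustration of that kind of season-to-season check, as a minimal Python sketch; apart from the quoted 1.58 and 1.94 endpoints, the numbers are made up just to make it runnable:

```python
# Minimal sketch: quantifying season-to-season variation in points per game.
# Only the 1.58 and 1.94 endpoints come from the post; the rest are placeholders.
import statistics

# Hypothetical points-per-game results from 8 saves of the same 48-game season
ppg = [1.58, 1.94, 1.71, 1.66, 1.80, 1.62, 1.88, 1.75]

mean_ppg = statistics.mean(ppg)
spread = max(ppg) - min(ppg)                    # range across the 8 saves
cv = statistics.stdev(ppg) / mean_ppg * 100     # coefficient of variation, in %

print(f"mean {mean_ppg:.2f} ppg, range {spread:.2f} ppg, CV {cv:.1f}%")
```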
 

First of all, using "equal"-stat players makes the testing unrepresentative, because in the real game there aren't many strikers who have, let's say, 14 marking and 14 tackling (just one example) :) This will advantage or disadvantage certain tactics. Second, if points vary from 1.58 per game to 1.94 per game, your test database is too random as well. It shouldn't deviate more than around 0.1 points per game with a proper test base (this is the maximum deviation in our test so far).

Also, testing several seasons is problematic because of player growth/decline in attributes. The players we have in our database are real players, so that the database is as close to the real game as possible, while removing as much randomness as we can. These real players have some values modified, as specified on the front page of this thread. Another problem with testing several seasons (for which I haven't found a work-around myself) is that the OTHER teams in the test league BUY/LOAN/SELL/FIRE players/STAFF for the second season (you can set a transfer embargo for the first season). This is yet another random element, which won't be the same from one tactic tested to the next.

Instead, it is better to test in several DIFFERENT databases, then combine the results into one table. This is our plan, but at the moment only Lucio and I are testing, so it goes kind of slowly compared to when we had Francesco testing as well :)

By the way, we also freeze morale and fitness and set 100% fluid match prep :P You should read the parameters on the front page! :P
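To make the "combine the results from several databases into one table" idea concrete, here is a minimal sketch; the tactic names, database labels and figures are hypothetical placeholders, not real test output:

```python
# Minimal sketch of combining per-database test results into one overall table.
# Tactic names, database labels and figures are hypothetical placeholders.
import pandas as pd

results = pd.DataFrame([
    # tactic,              database, matches, points
    ("Asymmetric 2-3-2-3", "DB1",    62,      118),
    ("Asymmetric 2-3-2-3", "DB2",    62,      111),
    ("Standard 2-3-2-3",   "DB1",    62,      124),
    ("Standard 2-3-2-3",   "DB2",    62,      120),
], columns=["tactic", "database", "matches", "points"])

# Sum matches/points per tactic across databases, then rank by points per game
combined = (results.groupby("tactic")[["matches", "points"]].sum()
                   .assign(ppg=lambda t: t["points"] / t["matches"])
                   .sort_values("ppg", ascending=False))
print(combined)
```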
 

I didn't set all attributes equal for all players. What I meant was that all strikers are the same, all midfielders are the same, and so on. For example, all strikers have tackling 6 and finishing 14. As on the front page, hidden/mental attributes are set just like those described there.

When I say I tested 8 seasons, I mean 8 saves of the same season, using the TFF Tactic Testing League, but with added freezes and equalised teams. The freezes I mentioned were set on all players in the league, not only on the testing team. I also check not only points, but run an n=104 correlation for other stats like CCC, shots on target, possession, etc., to try to predict/correlate those stats with points. Those stats have a coefficient of variation of only 1.5% across the 8 seasons, while points has 7%.
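A minimal sketch of that stats-to-points correlation idea, using synthetic data (n = 104 matches as in the post; none of these numbers are real results):

```python
# Minimal sketch: Pearson correlation of per-match stats with points, plus a
# coefficient-of-variation check across season saves. All data is synthetic.
import numpy as np

rng = np.random.default_rng(42)
n = 104
points = rng.choice([0, 1, 3], size=n, p=[0.20, 0.25, 0.55])   # match results
stats = {
    "CCC": rng.poisson(2.5, n).astype(float),                  # clear-cut chances
    "shots_on_target": rng.poisson(6.0, n).astype(float),
    "possession_%": rng.normal(58, 5, n),
}

for name, values in stats.items():
    r = np.corrcoef(values, points)[0, 1]   # Pearson r against points
    print(f"{name}: r = {r:+.2f}")

# Coefficient of variation of a stat across repeated saves (illustrative means)
per_save_possession = np.array([57.8, 58.4, 58.1, 57.9, 58.6, 58.2, 58.0, 58.3])
cv = per_save_possession.std(ddof=1) / per_save_possession.mean() * 100
print(f"possession CV across saves: {cv:.1f}%")
```

With real match data the correlations would of course be non-zero for stats that actually drive results; the point of the sketch is only the mechanics of the calculation.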

There isn't much difference between the testing I'm doing and what is stated in the opening post.

I'm testing with a macro, so testing goes fast.
 

I understand, but doing it like that amounts to the same thing. Some tactics require the "big" type, some require the "quick" type, some require the "smart" type. It would be much better to use real players, but with so many players in the team that every single role is covered perfectly.

I've mentioned before, in his thread as well, that there are some things in his database that are just too random for it to be a proper testing environment. His database has been used by many people, and they get very different results. This again proves that the testing accuracy of his database isn't good enough (at least by my standards). You also mentioned a re-test deviation of almost 0.4 points per game! That alone is proof that testing in that database can give nothing more than a hint; it cannot provide results accurate enough to say that one tactic is better than another.

I know there are quite a few things that differ between TFF's database and ours, and no disrespect to TFF, but he is fairly new to tactic testing in custom databases. I have refined my testing accuracy over several years, learning from experience, and I take an extreme interest in even the smallest details that can improve testing accuracy by a tiny bit. To improve accuracy further, we plan to test all tactics in TWO other databases as well, which makes it 186 matches tested per tactic (which then multiplies accuracy by 300%). In addition, the three databases will be different, with different players (other real players), different reputations, different schedules, and lots of other differences. So, in addition to increasing the accuracy, we also add a versatility dimension to the testing (so that we can find the "best" overall tactic, instead of just the best tactic for CA 120 or CA 145).
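As a rough way to see why more matches tighten the estimate, assuming match results behave roughly like independent draws with some per-match spread (the 1.2 figure below is an assumed placeholder, not a measured value):

```python
# Rough sketch: how the standard error of a points-per-game estimate shrinks
# as the number of test matches grows. The per-match spread is an assumption.
import math

sigma = 1.2                                # assumed std. dev. of points per match
for matches in (62, 186):
    se = sigma / math.sqrt(matches)        # standard error of mean points per game
    print(f"{matches} matches: ~±{se:.3f} points per game")
```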

I would also like to say that EVERY single parameter we have in our test is very important. Taking ONE of them out can make the maximum deviation go from 0.1 to 0.2 points per game (just an example), which would double the deviation and halve the accuracy.

Regarding freezing all players in the entire test league: this is something I haven't figured out myself yet, and I would love to add it to our testing parameters. How do you do it? Do you use FMRTE or another program?
 

Nice, I hope you guys release a database for people to use at some point, as many people like to test tactics too =)
 

The "dangerous" part of sharing the test database is that it is 100% guaranteed that everyone who use it won't follow the guidelines for testing perfectly. Some will mess up in FMRTE, some will mess up something else. And then people start creating tactics, saying it got these and these points in X tactic test.

Me and the guys are working on a new fm website at the moment, I will try to think of a way to avoid this problem by then.

By the way, did you read my question about freezing players? :)
 
That v4 isn't his latest one; the latest is the final v2, dated Jan 28, while that v4 is dated Jan 17.

I added the final v2 to the to-do list.

No no, the latest one is the v4 version. I don't know how to upload a tactic myself, so I just linked the REBORN thread of Sir Goalalot.
He stopped his own thread and someone else reposted his tactics, but I can confirm that the v4 is the last one (I think he tweaked the passing and changed the corner settings compared to the v2). The difference in dates is probably due to someone asking for the previous v2...

And I was about to ask you to test the KNAP Midsomer 442 too, but then I saw that you've already started testing it and seem impressed by its consistency. Let's see where it leads, but it may point to what I believe: this tactic is a counter-attacking one too, and I've had great success with KNAP's Midsomer as well.
Counter-attacking seems to be the way to go in this version, even if you play with Chelsea, City or Bayern!!!
 

Okay, but do you think the difference between v2 and v4 will be huge? Because the v2 got heavily butchered in the test. I don't have time to fill everything in at the moment; I'm doing a little multitasking here, testing and marking student papers :) I will send all the tests to Lucio; maybe he'll have time to fill in the spreadsheet today.
 

In the meantime, can you tell us which one is leading...?
 