We have developed a new algorithm for multi-table tournament (MTT) ICM calculations that provides highly accurate approximations of *Malmuth-Harville* ICM for field sizes up to thousands of players. The deviation of the calculated values from exact *Malmuth-Harville* ICM is orders of magnitude smaller than with the previous HRC MTT model. A web calculator featuring this new method is already online here, and a new HRC beta version with an updated MTT mode will be released for public testing within a few days.

The underlying *Malmuth-Harville* ICM model is extremely inefficient for large player fields. Naive implementations of ICM can handle about 15 players, and even optimized versions can't calculate exact *Malmuth-Harville* values beyond 25-30 players. So the standard ICM chips-to-equity mapping that is used for single-table calculations can't be directly applied to calculations with larger fields.
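To illustrate why exact *Malmuth-Harville* ICM doesn't scale, here is a minimal Python sketch of the model (the function name and structure are our own, not HRC's implementation). Even with memoization over the set of remaining players — already far better than iterating all finish orders — the state space grows exponentially in the number of players:

```python
from functools import lru_cache

def icm_equities(stacks, prizes):
    """Naive Malmuth-Harville ICM: the next finisher among the remaining
    players takes the next prize with probability proportional to their
    stack; the rest is handled recursively on the reduced player set."""
    n = len(stacks)
    prizes = list(prizes) + [0.0] * (n - len(prizes))  # pad unpaid places

    @lru_cache(maxsize=None)
    def equity(i, remaining):
        # remaining: frozenset of player indices still contesting places
        total = sum(stacks[j] for j in remaining)
        place = n - len(remaining)  # next place to award (0-based)
        # Player i takes the next place directly...
        eq = (stacks[i] / total) * prizes[place]
        # ...or some other player j takes it and i continues in the rest.
        for j in remaining:
            if j != i:
                eq += (stacks[j] / total) * equity(i, remaining - {j})
        return eq

    return [equity(i, frozenset(range(n))) for i in range(n)]
```

The cache holds one entry per (player, subset) pair, i.e. on the order of *n · 2^n* states — which is exactly why exact calculation stops being feasible somewhere around 25-30 players.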

Until a few years ago, the standard procedure for calculations before the final table was to either use chip EV calculations and ignore any ICM considerations entirely, or to set up single-table calculations with artificial prize structures like 80/20 to introduce small bubble factors to the calculation.

In the last few years, ICM tools started to automate the setup of approximative structures with dedicated modes for multi-table tournaments. In the background, the previous HRC MTT model would generate a set of about 10 hidden stacks and an adjusted artificial prize structure, aiming to achieve bubble factors similar to those in the original MTT situation, but in a compressed field of no more than 20 players where exact ICM values can still be efficiently calculated.

The accuracy of that approach is quite reasonable: estimates of the previous MTT model were typically off by around 3% compared to exact ICM values. So a chip stack with an actual *Malmuth-Harville* equity of $100 would typically be predicted to have $100 ± $3 equity by the MTT model. That's much better than using chip EV, and likely better than any manual approximation that can be carried out using single-table calculations. But while the MTT model was a clear improvement over the then status quo, there's certainly room for improvement: the accuracy of our previous mode is inherently limited by the approach used. When fields with hundreds or thousands of players are compressed down to fewer than 20 stacks, some details necessarily get lost.

The new HRC MTT model pushes MTT accuracy to an entirely new level. Unlike the previous MTT mode, it no longer operates on a compressed game structure behind the scenes. The ICM estimates are calculated directly on the original stacks and prizes. The typical relative error is orders of magnitude lower than that of our previous model; the average is well below 0.01% for the test scenarios presented below. For realistic game settings we expect the new estimates to be, for all practical purposes, just as good as exact ICM values.

We use two slightly different variations of the new model: The full variant is fast enough for fields of up to 500 players in the HRC desktop version. For even larger fields, an additional approximative step is used in the calculation to speed things up further. But even this faster variant is still remarkably accurate for medium and large fields, while being fast enough to comfortably support calculations with thousands of remaining players.

*The online MTT calculator here uses full accuracy for up to 64 players and switches to the faster variant for larger fields. The upcoming HRC beta currently uses a threshold of 500 players before switching to the faster version.*

The following section provides some additional details about the evaluation procedure used and lists the results for our main evaluation set. The selection of test scenarios presented here is quite limited; the model was actually evaluated on a variety of more extreme stack and prize functions, as well as a selection of actual tournament structures.

If it is impractical to calculate exact ICM values for large fields, how else can we evaluate the accuracy of the new model?

Back in 2011, Tysen Streib (co-author of "Kill Everyone") introduced a method to calculate ICM values by Monte Carlo sampling. The original post can still be found in the 2+2 Poker Theory forum: *New algorithm to calculate ICM for large tournaments*.

This method allows the approximation of ICM values to arbitrary accuracy by random sampling, but it wasn't widely adopted by ICM software tools because the sample sizes required to achieve good accuracy are quite large. The method is too slow to use in a full-fledged ICM calculator, where thousands of ICM estimates are needed to calculate a single hand. However, it's perfectly suited for evaluation purposes, where we can spend several hours or days to calculate a few equities.
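As a rough illustration of the sampling idea (a sketch, not necessarily Tysen's exact formulation), a *Malmuth-Harville* finish order can be sampled by drawing a key u^{1/stack} per player and sorting descending: this weighted-key trick picks each successive place proportionally to stack among the remaining players, matching the Malmuth-Harville distribution. Averaging the awarded prizes over many sampled tournaments then estimates each player's equity:

```python
import random

def mc_icm(stacks, prizes, iters=100_000, seed=42):
    """Monte Carlo ICM estimate (illustrative sketch).

    Each iteration samples one complete finish order via weighted keys
    u**(1/stack) and awards the prizes accordingly; the per-player
    averages converge to the Malmuth-Harville ICM equities."""
    rng = random.Random(seed)
    n = len(stacks)
    prizes = list(prizes) + [0.0] * (n - len(prizes))  # pad unpaid places
    totals = [0.0] * n
    for _ in range(iters):
        # Larger stacks tend to draw larger keys, hence better finishes.
        keys = [(rng.random() ** (1.0 / s), i) for i, s in enumerate(stacks)]
        keys.sort(reverse=True)  # first element finishes 1st, etc.
        for place, (_, i) in enumerate(keys):
            totals[i] += prizes[place]
    return [t / iters for t in totals]
```

The sampling error shrinks only with the square root of the sample count, which is why very large samples (10^{10} tournaments in the tables below) are needed for a tight reference.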

Using Tysen's method, we simulated the finishing distributions for varying field sizes of 32 to 1024 players for 10^{10} tournaments each. The various tables below were then created by applying different prize structures to the same simulated finishing distributions, so the different tables are all based on the same set of samples.

The stack and prize setups were chosen to be easily reproducible. For the tables below, the following stack and prize setups were used:

- With *n* players, stacks are: *1, 2, ..., n − 1, n*
- With *p* spots paid, payouts are: 1^{st} = *p*, 2^{nd} = *p* − 1, ..., *p*^{th} = 1

The model quality is evaluated using the absolute percentage deviation (APD) of the model values against the simulation results:

- Mean APD: *100% · mean(|S_{i} − M_{i}| / S_{i})*
- Max APD: *100% · max(|S_{i} − M_{i}| / S_{i})*

with *S*_{i} being the simulated values and *M*_{i} being the model estimates.
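In code, the two metrics amount to the following (a sketch; `apd_stats` is our own name):

```python
def apd_stats(simulated, model):
    """Mean and max absolute percentage deviation of model estimates
    against the simulated reference equities."""
    devs = [abs(s - m) / s for s, m in zip(simulated, model)]
    mean_apd = 100.0 * sum(devs) / len(devs)
    max_apd = 100.0 * max(devs)
    return mean_apd, max_apd
```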

| n | p | New (full) mean | New (full) max | New (fast) mean | New (fast) max | Previous mean | Previous max |
|---|---|---|---|---|---|---|---|
| 32 | 8 | 0.0018% | 0.0127% | 0.0192% | 0.0269% | 1.2% | 2.8% |
| 64 | 16 | 0.0028% | 0.0180% | 0.0053% | 0.0228% | 1.5% | 3.6% |
| 128 | 32 | 0.0023% | 0.0197% | 0.0020% | 0.0186% | 1.7% | 4.5% |
| 256 | 64 | 0.0024% | 0.0204% | 0.0025% | 0.0203% | 1.8% | 4.8% |
| 512 | 128 | 0.0023% | 0.0237% | 0.0025% | 0.0232% | 1.8% | 5.2% |
| 1024 | 256 | 0.0022% | 0.0424% | 0.0026% | 0.0425% | 1.8% | 5.0% |

*Linear stacks and prizes, top 25% paid, sample size 10^{10}*

| n | p | New (full) mean | New (full) max | New (fast) mean | New (fast) max | Previous mean | Previous max |
|---|---|---|---|---|---|---|---|
| 32 | 16 | 0.0014% | 0.0121% | 0.0224% | 0.0303% | 2.5% | 4.4% |
| 64 | 32 | 0.0016% | 0.0151% | 0.0048% | 0.0159% | 3.0% | 6.8% |
| 128 | 64 | 0.0014% | 0.0108% | 0.0015% | 0.0131% | 3.6% | 10.1% |
| 256 | 128 | 0.0016% | 0.0216% | 0.0017% | 0.0211% | 3.8% | 10.8% |
| 512 | 256 | 0.0014% | 0.0311% | 0.0016% | 0.0313% | 3.7% | 11.5% |
| 1024 | 512 | 0.0014% | 0.0397% | 0.0017% | 0.0399% | 3.8% | 12.3% |

*Linear stacks and prizes, top 50% paid, sample size 10^{10}*

| n | p | New (full) mean | New (full) max | New (fast) mean | New (fast) max | Previous mean | Previous max |
|---|---|---|---|---|---|---|---|
| 32 | 24 | 0.0006% | 0.0029% | 0.0222% | 0.0500% | 3.9% | 15.8% |
| 64 | 48 | 0.0010% | 0.0065% | 0.0047% | 0.0209% | 4.0% | 11.6% |
| 128 | 96 | 0.0009% | 0.0084% | 0.0012% | 0.0100% | 5.8% | 21.7% |
| 256 | 192 | 0.0010% | 0.0117% | 0.0011% | 0.0163% | 5.7% | 32.1% |
| 512 | 384 | 0.0009% | 0.0140% | 0.0011% | 0.0142% | 6.3% | 21.5% |
| 1024 | 768 | 0.0009% | 0.0225% | 0.0012% | 0.0223% | 6.2% | 28.1% |

*Linear stacks and prizes, top 75% paid, sample size 10^{10}*

| n | p | New (full) mean | New (full) max | New (fast) mean | New (fast) max | Previous mean | Previous max |
|---|---|---|---|---|---|---|---|
| 32 | 32 | 0.0003% | 0.0015% | 0.0252% | 0.3977% | 5.3% | 19.7% |
| 64 | 64 | 0.0005% | 0.0023% | 0.0094% | 0.3491% | 9.6% | 27.6% |
| 128 | 128 | 0.0005% | 0.0038% | 0.0042% | 0.3212% | 9.6% | 40.5% |
| 256 | 256 | 0.0007% | 0.0428% | 0.0019% | 0.2380% | 10.7% | 49.8% |
| 512 | 512 | 0.0009% | 0.0953% | 0.0016% | 0.3441% | 11.4% | 55.8% |
| 1024 | 1024 | 0.0010% | 0.0616% | 0.0014% | 0.2742% | 11.5% | 62.4% |

*Linear stacks and prizes, top 100% paid, sample size 10^{10}*

*Note: Keep in mind that the model quality is evaluated against a noisy reference. Even with samples of 10^{10} tournaments, the sampling error is still quite significant in comparison to the model error, so the tables above only provide an upper bound of the error levels in the tested scenarios. Although actual model accuracy might be better than indicated, even the listed accuracy is already more than sufficient for all practical purposes.*