Statistical probability problems

I am developing my own Texas Hold'em equity evaluator, which evaluates hand distributions (ranges) using Monte Carlo simulation. I have run into two unpleasant problems whose behavior I cannot explain.

Problem number 1:

In a nutshell, the evaluator works by first picking the players' hole cards from their hand distributions. Say we have the following:

  AA - 6 hands
 KK - 6 hands 

We pick one hand at random from each player's distribution so that the two hands do not collide, and after that we deal the board cards.
In this example, the equities come out as:

  AA = ~ 81.95%
 KK = ~ 18.05% 

Now the problem. If the evaluator instead picks the board cards first and the hole cards after that, it does not work. Then I get something like this:

  AA = ~ 82.65%
 KK = ~ 17.35% 

Why does it become biased? What difference does it make whether the hole cards or the board cards are picked first? Apparently it does matter, but I cannot understand why.

Problem number 2:

If I set up a hand dealt to ten players with the following ranges:

  AA
  KK+
  QQ+
  JJ+
  TT+
  99+
  88+
  77+
  66+
  55+

my evaluator becomes very slow. This is because there are many collisions when picking the hole cards from the distributions: it takes many retries before we get ten sets of hole cards and a board that do not collide with each other. So I changed the way the evaluator picks a hand from a distribution:

// Original - works.
void HandDistribution::Choose(unsigned __int64 &usedCards, bool &collided)
{
    _pickedHand = _hands[(*Random)()];
    collided = (_pickedHand & usedCards) != 0;
    usedCards |= _pickedHand;
}

// Modified - doesn't work; biased equities.
void HandDistribution::Choose(unsigned __int64 &usedCards, bool &collided)
{
    // Try to pick a hand from this distribution up to ten times before we give up.
    // NOTE: It doesn't matter how many attempts there are (except one). 2 or 10,
    // same biased results.
    for (unsigned int attempts = 0; attempts < 10; ++attempts) {
        _pickedHand = _hands[(*Random)()];
        collided = (_pickedHand & usedCards) != 0;
        if (!collided) {
            usedCards |= _pickedHand;
            return;
        }
    }
    // All the picks collided with other hole cards...
}

The alternative method is much faster, since there are far fewer rejected trials. However, the results are VERY biased. Why? What difference does it make whether the evaluator picks a hand with one attempt or with several? Again, it apparently does matter, but I cannot understand why.

Edit:

FYI, I am using the Boost random number generators, more precisely boost::lagged_fibonacci607. The same behavior occurs with the Mersenne twister as well.

Here is the code in question:

void Calculate() {
    // Reset the per-distribution statistics.
    for (std::vector<HandDistribution *>::iterator it = _handDistributions.begin(); it != _handDistributions.end(); ++it) {
        (*it)->_equity = 0.0;
        (*it)->_wins = 0;
        (*it)->_ties = 0.0;
        (*it)->_rank = 0;
    }

    std::bitset<32> bsBoardCardsHi(static_cast<unsigned long>(_boardCards >> 32)),
                    bsBoardCardsLo(static_cast<unsigned long>(_boardCards & 0xffffffff));
    int cardsToDraw = 5 - (bsBoardCardsHi.count() + bsBoardCardsLo.count()), count = 0;
    HandDistribution *hd_first = *_handDistributions.begin(), *hd_current, *hd_winner;
    unsigned __int64 deadCards = 0;

    // 2598960 = C(52, 5), the number of possible five-card boards.
    boost::shared_array<unsigned __int64> boards = boost::shared_array<unsigned __int64>(new unsigned __int64[2598960]);
    memset(boards.get(), 0, sizeof(unsigned __int64) * 2598960);

    hd_current = hd_first;
    do {
        deadCards |= hd_current->_deadCards; // All the unary-hands.
        hd_current = hd_current->_next;
    } while (hd_current != hd_first);

    // Pre-generate every possible board of the required size that does not collide with the dead cards.
    if (cardsToDraw > 0)
        for (int c1 = 1; c1 < 49 + (5 - cardsToDraw); ++c1)
            if (cardsToDraw > 1)
                for (int c2 = c1 + 1; c2 < 50 + (5 - cardsToDraw); ++c2)
                    if (cardsToDraw > 2)
                        for (int c3 = c2 + 1; c3 < 51 + (5 - cardsToDraw); ++c3)
                            if (cardsToDraw > 3)
                                for (int c4 = c3 + 1; c4 < 52 + (5 - cardsToDraw); ++c4)
                                    if (cardsToDraw > 4)
                                        for (int c5 = c4 + 1; c5 < 53; ++c5) {
                                            boards[count] = static_cast<unsigned __int64>(1) << c1 |
                                                            static_cast<unsigned __int64>(1) << c2 |
                                                            static_cast<unsigned __int64>(1) << c3 |
                                                            static_cast<unsigned __int64>(1) << c4 |
                                                            static_cast<unsigned __int64>(1) << c5;
                                            if ((boards[count] & deadCards) == 0)
                                                ++count;
                                        }
                                    else {
                                        boards[count] = static_cast<unsigned __int64>(1) << c1 |
                                                        static_cast<unsigned __int64>(1) << c2 |
                                                        static_cast<unsigned __int64>(1) << c3 |
                                                        static_cast<unsigned __int64>(1) << c4;
                                        if ((boards[count] & deadCards) == 0)
                                            ++count;
                                    }
                            else {
                                boards[count] = static_cast<unsigned __int64>(1) << c1 |
                                                static_cast<unsigned __int64>(1) << c2 |
                                                static_cast<unsigned __int64>(1) << c3;
                                if ((boards[count] & deadCards) == 0)
                                    ++count;
                            }
                    else {
                        boards[count] = static_cast<unsigned __int64>(1) << c1 |
                                        static_cast<unsigned __int64>(1) << c2;
                        if ((boards[count] & deadCards) == 0)
                            ++count;
                    }
            else {
                boards[count] = static_cast<unsigned __int64>(1) << c1;
                if ((boards[count] & deadCards) == 0)
                    ++count;
            }
    else {
        boards[0] = _boardCards;
        count = 1;
    }

    _distribution = boost::uniform_int<>(0, count - 1);
    boost::variate_generator<boost::lagged_fibonacci607&, boost::uniform_int<> > Random(_generator, _distribution);

    wxInitializer initializer;
    Update *upd = new Update(this);
    _trial = 0;
    _done = false;
    if (upd->Create() == wxTHREAD_NO_ERROR)
        upd->Run();

    hd_current = hd_first;
    ::QueryPerformanceCounter((LARGE_INTEGER *) &_timer);

    do {
        hd_current = hd_first;

        // Pick a random pre-generated board; its cards plus the dead cards are "used".
        unsigned __int64 board = boards[Random()] | _boardCards,
                         usedCards = _deadCards | board;
        bool collision;

        // Pick hole cards for every distribution; stop at the first collision.
        do {
            hd_current->Choose(usedCards, collision);
            hd_current = hd_current->_next;
        } while (hd_current != hd_first && !collision);

        if (collision) {
            hd_first = hd_current->_next;
            continue;
        }

        unsigned int best = 0, s = 1;

        // Evaluate all hands.
        do {
            hd_current->_pickedHand |= board;

            unsigned long i, l = static_cast<unsigned long>(hd_current->_pickedHand >> 32);
            int p;
            bool f = false;

            if (_BitScanForward(&i, l)) {
                p = _evaluator[53 + i + 32];
                l &= ~(static_cast<unsigned long>(1) << i);
                f = true;
            }
            if (f)
                while (_BitScanForward(&i, l)) {
                    l &= ~(static_cast<unsigned long>(1) << i);
                    p = _evaluator[p + i + 32];
                }

            l = static_cast<unsigned long>(hd_current->_pickedHand & 0xffffffff);
            if (!f) {
                _BitScanForward(&i, l);
                p = _evaluator[53 + i];
                l &= ~(static_cast<unsigned long>(1) << i);
            }
            while (_BitScanForward(&i, l)) {
                l &= ~(static_cast<unsigned long>(1) << i);
                p = _evaluator[p + i];
            }

            hd_current->_rank = p;
            if (p > best) {
                hd_winner = hd_current;
                s = 1;
                best = p;
            }
            else if (p == best)
                ++s;

            hd_current = hd_current->_next;
        } while (hd_current != hd_first);

        if (s > 1) {
            // Split the pot between all hands tied for the best rank.
            for (std::vector<HandDistribution *>::iterator it = _handDistributions.begin(); it != _handDistributions.end(); ++it) {
                if ((*it)->_rank == best) {
                    (*it)->_ties += 1.0 / s;
                    (*it)->_equity += 1.0 / s;
                }
            }
        }
        else {
            ++hd_winner->_wins;
            ++hd_winner->_equity;
        }

        ++_trial;
        hd_first = hd_current->_next;
    } while (_trial < trials);
}
5 answers

For problem #1, I don't think the bias is inherent in the approach; it is rather in your implementation.

I mean, if you deal an infinite number of hands, dealing the board cards first and the players' hole cards after that (*), and only count the deals where one hand is AA and the other is KK, the equity should come out the same as if you dealt an infinite number of hands the other way round, first the players' hole cards and then the board cards, again counting only the deals where one hand is AA and the other is KK.

When you pick the players' hands from their distributions first, you restrict the cards that can end up on the board.

If you deal the board cards first, you have no such restriction, and if after that you randomly draw an AA/KK pair of hands, throwing the whole deal away and starting over whenever you get a collision, you have an analogue of (*).
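To make this concrete, here is a toy sketch of my own (not taken from the evaluator) using a three-card deck {1, 2, 3}: one "board" card is dealt and a single player's "range" is {1, 2}. Dealing the hole card first, and dealing the board first with whole-deal rejection, produce the same distribution; keeping the board and re-picking only the hole card does not:

#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    std::srand(static_cast<unsigned int>(std::time(0)));

    const int trials = 1000000;
    int hits_hole_first = 0;    // hole card first, board from the remaining cards
    int hits_full_reject = 0;   // board first, whole deal redrawn on a collision
    int hits_repick_hole = 0;   // board first, only the hole card is re-picked

    for (int t = 0; t < trials; ++t) {
        int board, hole;

        // Scheme A: hole card first, then a board card that does not collide.
        hole = 1 + std::rand() % 2;
        do { board = 1 + std::rand() % 3; } while (board == hole);
        if (board == 3) ++hits_hole_first;

        // Scheme B: board first; on a collision the WHOLE deal is redrawn.
        do {
            board = 1 + std::rand() % 3;
            hole = 1 + std::rand() % 2;
        } while (board == hole);
        if (board == 3) ++hits_full_reject;

        // Scheme C: board first, but the board is kept and only the hole
        // card is re-picked until it no longer collides (this one is biased).
        board = 1 + std::rand() % 3;
        do { hole = 1 + std::rand() % 2; } while (hole == board);
        if (board == 3) ++hits_repick_hole;
    }

    std::cout << "P(board = 3), hole card first:   " << double(hits_hole_first) / trials << "\n";
    std::cout << "P(board = 3), full rejection:    " << double(hits_full_reject) / trials << "\n";
    std::cout << "P(board = 3), re-pick hole only: " << double(hits_repick_hole) / trials << "\n";
    // The first two converge to 0.5, the third to 1/3.
    return 0;
}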

I'll see if I can figure it out a little more.


Could it be the random number generator? I could not tell from the code which generator is being used.


I can't say for sure, because I obviously can't see the whole source of the program, but my first suspicion for why you see different equities is that your Monte Carlo simulation simply does not run enough trials to get a good equity estimate.

It may well be some other problem, but first find out whether this is the cause of your discrepancy. I would do several runs with a very large number of trials (more than 1 million) and see how much your equities vary between runs.
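For reference (my own rule of thumb, not something from the original post): the standard error of a Monte Carlo equity estimate behaves roughly like sqrt(p * (1 - p) / N). With p ≈ 0.82 and N = 1,000,000 trials that is about 0.0004, i.e. around ±0.04 percentage points, which gives you a yardstick for judging whether the differences between runs are just sampling noise.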


I think you should not be working with collisions at all. They are very inefficient, especially when there are many of them, and when you try to reduce their number it is very likely that you introduce bias: the collisions are part of the probability distribution you are simulating, so the complete set of possible collisions has to stay available. Any reduction has to preserve the distribution, i.e. it has to scale everything down proportionally.

Problem 1: How did you determine that the first set of equities is correct? From independent sources? I would assume that first picking the hole cards and then the board cards from the remaining deck is the "obviously" correct order (or is that naive?), while whether the hole cards can be picked independently of one another is another matter (see problem 2).

Problem 2: The hole-card distributions are not independent when the players' ranges overlap. If one player has AK+ and the other hands are unknown, the situation is different from one where one player has AK+ and another player is known to hold AA: in the second case, the combinations the first player can actually hold shift compared with the first case, because two of the aces are gone. In your ten-player example, a player with 55+ is unlikely to hold something like KK. Again, how did you determine the correct equities?
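A small counting sketch of this card-removal effect (my own illustration, not part of the evaluator): once one opponent is known to hold two aces, a pocket-pair range containing both AA and KK is six times more likely to actually be KK than AA, whereas with no information the two are equally likely.

#include <iostream>

// C(n, 2): the number of distinct pairs that can be formed from n live cards of one rank.
int pairCombos(int liveCardsOfRank)
{
    return liveCardsOfRank * (liveCardsOfRank - 1) / 2;
}

int main()
{
    // Nothing known about the other players: all 4 aces and all 4 kings are live.
    std::cout << "AA combos, nothing known:   " << pairCombos(4) << "\n";  // 6
    std::cout << "KK combos, nothing known:   " << pairCombos(4) << "\n";  // 6

    // One opponent is known to hold two of the aces: only 2 aces remain live.
    std::cout << "AA combos, opponent has AA: " << pairCombos(2) << "\n";  // 1
    std::cout << "KK combos, opponent has AA: " << pairCombos(4) << "\n";  // 6
    return 0;
}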

I'm sorry that I cannot give you a definitive answer to your problems, but I think you need to do, or read up on, some fairly involved statistical analysis in order to determine the correct hand distributions and to make sure they are also independent of the order of the players.

EDIT: Thanks for posting something that lets us evaluate the kind of code you are working with. However harsh this may sound, I am now inclined to advise you to start over from scratch, but first read up on structured programming.

A few tips: you are currently representing a set of cards as a 52-bit bit field. Although this looks efficient at first, you can see that even just dealing cards from such a deck becomes very awkward. So keep it simple: write a class that represents a card; write a class that represents a deck of cards; give the deck a Fisher-Yates shuffle and a deal() method that returns a card, and so on (see the sketch below). Oh, and don't use pass-by-reference; use return values instead. You want to be able to see where values change without diving into every subroutine.
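For illustration only, here is a minimal sketch of that suggestion (the Card and Deck names are mine, not from the original program):

#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <vector>

struct Card {
    int rank;  // 0..12 (deuce .. ace)
    int suit;  // 0..3
};

class Deck {
public:
    Deck() : _next(0) {
        for (int rank = 0; rank < 13; ++rank)
            for (int suit = 0; suit < 4; ++suit) {
                Card c = { rank, suit };
                _cards.push_back(c);
            }
    }

    // Fisher-Yates shuffle; any good uniform RNG can replace std::rand here.
    void shuffle() {
        _next = 0;
        for (std::size_t i = _cards.size() - 1; i > 0; --i) {
            std::size_t j = std::rand() % (i + 1);
            std::swap(_cards[i], _cards[j]);
        }
    }

    // Deal the next unused card. Collisions are impossible, so no retries are needed.
    Card deal() {
        assert(_next < _cards.size());
        return _cards[_next++];
    }

private:
    std::vector<Card> _cards;
    std::size_t _next;
};

Each Monte Carlo trial then calls shuffle() once and deal() for every hole card and board card it needs.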

EDIT 2: As for "I can't use a class, it's too slow": what gives you the idea that using classes makes your code slow? Classes are (roughly speaking) just a compile-time concept in C++. Even if you don't want to use classes or structs, use proper data structures such as arrays; they are no less efficient than bit twiddling.

Your problem is that, in order to get several random cards from the deck, you hammer it with sets of random picks until a whole set finally succeeds, instead of simply asking it for a given number of unused cards. The algorithmic cost of this approaches infinity as the number of available cards decreases.

This is the famous "premature optimization is the root of all evil": you prematurely "optimized" your data representation and now you cannot even draw random cards efficiently. Get your algorithm right first, then find the bottlenecks (that means testing and measuring, a.k.a. profiling) and optimize those.


Andreas Brink answered your problem #1.

I think your problem #2 is caused by the same issue.

Instead of detecting collisions, simply simulate a deck of cards and draw from the remaining cards. If you don't, you have to be very careful about the probabilities, or you can introduce bias into the conditional probabilities, depending on your algorithm.
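A sketch of what that could look like with the question's bit-mask representation (an assumption on my part: cards are bits 1..52 of a 64-bit mask, as in the posted code). Every draw is collision-free by construction:

#include <algorithm>
#include <cstdlib>
#include <vector>

// Draw `cardsToDraw` distinct cards that are not set in `usedCards`
// and return them as one bit mask.
unsigned long long DrawBoard(unsigned long long usedCards, int cardsToDraw)
{
    // Collect every card that is not already a hole, board or dead card.
    std::vector<int> remaining;
    for (int c = 1; c <= 52; ++c)
        if ((usedCards & (1ULL << c)) == 0)
            remaining.push_back(c);

    // Partial Fisher-Yates: move `cardsToDraw` random cards to the front.
    unsigned long long board = 0;
    const int n = static_cast<int>(remaining.size());
    for (int k = 0; k < cardsToDraw && k < n; ++k) {
        int j = k + std::rand() % (n - k);
        std::swap(remaining[k], remaining[j]);
        board |= 1ULL << remaining[k];
    }
    return board;
}

Note that hole cards drawn from restricted ranges still interact with each other, as discussed in the other answers; this only removes the retry loop for the board.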

By the way, this may help you:

