During the optimization of my Connect Four game engine I have reached a point where further improvements seem to be minimal, because most of the CPU time is spent in the statement TableEntry te = mTable[idx + i] in the following code example.
TableEntry getTableEntry(unsigned __int64 lock)
{
    int idx = (lock & 0xFFFFF) * BUCKETSIZE;
    for (int i = 0; i < BUCKETSIZE; i++)
    {
        TableEntry te = mTable[idx + i]; // bottleneck: copies one entry per probe
        if (te.lock == lock)             // assumed completion; the snippet was truncated here in the original
            return te;
    }
    return TableEntry();                 // bucket holds no entry for this lock
}
The hash table mTable is defined as std::vector<TableEntry> and has about 4.2 million entries (about 64 MB). I tried replacing the vector with a table allocated with new, but that did not improve speed.
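For reference, a minimal sketch of what that attempted replacement might have looked like — the name TABLESIZE and the bare new[] allocation are assumptions for illustration, not the poster's actual code:

    #include <cstddef>

    // Hypothetical sketch: a plain array allocated with new[] instead of
    // std::vector (the poster reports this did not help).
    const std::size_t TABLESIZE = (0xFFFFF + 1) * BUCKETSIZE; // ~4.2 million entries
    TableEntry* mTable = new TableEntry[TABLESIZE];           // ~64 MB, entries default-constructed
    // ... use mTable exactly as before ...
    delete[] mTable;                                          // free the table on shutdown

Since both versions put the entries in one contiguous block of memory, swapping the container would not be expected to change the access pattern, which fits the observation that it made no difference.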
I suspect that random memory accesses (a consequence of the Zobrist hashing key) are expensive, but are they really that expensive? Do you have suggestions for improving the function?
Thanks!
Edit: BUCKETSIZE has a value of 4; it is used as part of the replacement strategy. The size of one TableEntry is 16 bytes; the struct looks as follows:
struct TableEntry
{                                           // Old  New
    unsigned __int64 lock;                  //  8    8
    enum { VALID, UBOUND, LBOUND } flag;    //  4    4
    short score;                            //  4    2
    char move;                              //  4    1
    char height;                            //  4    1
                                            // -------
                                            // 24   16 Bytes
    TableEntry()
        : lock(0LL), flag(VALID), score(0), move(0), height(-127) {}
};
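The 16-byte figure can be checked at compile time. A minimal sketch, assuming a C++11 compiler (for static_assert) and a typical ABI where the enum occupies 4 bytes; std::uint64_t stands in for the MSVC-specific unsigned __int64:

    #include <cstdint>

    struct TableEntry {
        std::uint64_t lock;                  // 8 bytes
        enum { VALID, UBOUND, LBOUND } flag; // 4 bytes (int-sized on common ABIs)
        short score;                         // 2 bytes
        char move;                           // 1 byte
        char height;                         // 1 byte
    };                                       // 8-byte alignment, no trailing padding

    // Fails to compile if the layout is not the expected 16 bytes.
    static_assert(sizeof(TableEntry) == 16, "TableEntry should be 16 bytes");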
Summary: The function initially took 39 seconds. After making the changes suggested by jdehaan, it now takes 33 seconds (the whole program stops after 100 seconds). That is better, but I think Konrad Rudolph is right and the main reason for the slowness is cache misses.
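For illustration, a minimal sketch of the kind of change that avoids copying a 16-byte entry on every probe — reading through a const reference is an assumption about what produced the improvement, and findTableEntry is a hypothetical name, not the poster's actual code:

    // Hypothetical variant: probe the bucket through a const reference and
    // return a pointer (null when the bucket has no entry with this lock).
    const TableEntry* findTableEntry(unsigned __int64 lock) const
    {
        size_t idx = (size_t)(lock & 0xFFFFF) * BUCKETSIZE;
        for (int i = 0; i < BUCKETSIZE; ++i)
        {
            const TableEntry& te = mTable[idx + i]; // no 16-byte copy per probe
            if (te.lock == lock)
                return &te;                         // found the matching entry
        }
        return 0;                                   // not found in this bucket
    }

This removes the per-probe copy, but the first touch of the bucket still incurs the cache miss, which would explain why the gain is limited (39 s down to 33 s).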