Stockfish has just won the TCEC 26 Superfinal: with game 88, SF reached a score of 50.5 and can no longer be caught. This despite the fact that Stockfish ran at only about 50% of its maximum speed (around 20 MN/s on a full board; with the new, larger networks since SF 16.1, roughly 40 MN/s would have been expected): https://github.com/official-stockfish/Stockfish/issues/5253

There is now a patch that is supposed to fix the problem. Since I don't own a multi-socket system, I can't verify this myself; besides, apparently only older systems are affected, as Stockfish runs at full power on the chess.com machine (about 80 MN/s on a full board). But since Vondele, the Stockfish mastermind, has closed the issue on GitHub, one can safely assume that Stockfish will run at full power again at the next TCEC.
Author: Tomasz Sobczyk
Date: Tue May 28 18:34:15 2024 +0200
Timestamp: 1716914055
Improve performance on NUMA systems
Allow for NUMA memory replication for NNUE weights. Bind threads to ensure execution on a specific NUMA node.
This patch introduces NUMA memory replication, currently only utilized for the NNUE weights. Along with it comes all machinery required to identify NUMA nodes and bind threads to specific processors/nodes. It also comes with small changes to Thread and ThreadPool to allow easier execution of custom functions on the designated thread. The old thread binding (WinProcGroup) machinery is removed because it's incompatible with this patch. Small changes to unrelated parts of the code were made to ensure correctness, like some classes being made unmovable, raw pointers replaced with unique_ptr, etc.
Windows 7 and Windows 10 are partially supported. Windows 11 is fully supported. Linux is fully supported, with the explicit exclusion of Android. No additional dependencies.
-----------------
A new UCI option `NumaPolicy` is introduced. It can take the following values:
```
system - gathers NUMA node information from the system (lscpu or the Windows API) and binds each thread to a single NUMA node
none - assumes there is 1 NUMA node, never binds threads
auto - the default value; depending on the number of threads and NUMA nodes, enables binding only on multi-node systems and only when the number of threads reaches a threshold (dependent on node size and count)
[[custom]] -
// ':'-separated numa nodes
// ','-separated cpu indices
// supports "first-last" range syntax for cpu indices,
for example '0-15,32-47:16-31,48-63'
```
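The custom policy string is mechanical to parse: split on `:` for nodes, on `,` for CPU entries, and expand `first-last` ranges. A minimal sketch of such a parser (illustrative only — `parse_numa_policy` is a hypothetical helper, not the actual Stockfish code):

```cpp
#include <cassert>
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Parse a custom NumaPolicy string: ':'-separated NUMA nodes,
// ','-separated CPU entries, each entry a single index or a
// "first-last" range. Returns one vector of CPU indices per node.
std::vector<std::vector<int>> parse_numa_policy(const std::string& s) {
    std::vector<std::vector<int>> nodes;
    std::istringstream nodeStream(s);
    std::string nodeSpec;
    while (std::getline(nodeStream, nodeSpec, ':')) {
        std::vector<int> cpus;
        std::istringstream cpuStream(nodeSpec);
        std::string entry;
        while (std::getline(cpuStream, entry, ',')) {
            std::size_t dash = entry.find('-');
            if (dash == std::string::npos) {
                cpus.push_back(std::stoi(entry));       // single CPU index
            } else {                                    // "first-last" range
                int first = std::stoi(entry.substr(0, dash));
                int last  = std::stoi(entry.substr(dash + 1));
                for (int c = first; c <= last; ++c)
                    cpus.push_back(c);
            }
        }
        nodes.push_back(std::move(cpus));
    }
    return nodes;
}
```

With the example string `'0-15,32-47:16-31,48-63'`, this yields two nodes of 32 CPU indices each.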
Setting `NumaPolicy` forces recreation of the threads in the ThreadPool, which in turn forces the recreation of the TT.
The threads are distributed among NUMA nodes in a round-robin fashion based on fill percentage (i.e. it will strive to fill all NUMA nodes evenly). Threads are bound to NUMA nodes, not specific processors, because that's our only requirement and the OS can schedule them better.
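The fill-percentage balancing described above can be sketched as follows: each new thread goes to the node with the lowest current fill ratio, so nodes of different sizes still fill evenly. This is an illustrative sketch under that assumption, not the actual Stockfish implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Assign numThreads threads to NUMA nodes with the given CPU capacities,
// always picking the node whose fill ratio (assigned/capacity) is lowest.
// Returns the number of threads assigned to each node.
std::vector<std::size_t> distribute_threads(const std::vector<std::size_t>& capacity,
                                            std::size_t numThreads) {
    std::vector<std::size_t> assigned(capacity.size(), 0);
    for (std::size_t t = 0; t < numThreads; ++t) {
        std::size_t best = 0;
        for (std::size_t n = 1; n < capacity.size(); ++n)
            // assigned[n]/capacity[n] < assigned[best]/capacity[best],
            // compared via cross-multiplication to avoid floating point.
            if (assigned[n] * capacity[best] < assigned[best] * capacity[n])
                best = n;
        ++assigned[best];
    }
    return assigned;
}
```

For two equal 64-CPU nodes and 32 threads this degenerates to plain round-robin (16/16); for unequal nodes (e.g. 32 and 64 CPUs) it keeps the fill percentages matched (4/8 for 12 threads).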
Special care is taken that maximum memory usage on systems that do not require memory replication stays the same as before, i.e. unnecessary copies are avoided.
On Linux the process's processor affinity is respected. This means that if, for example, you use taskset to restrict Stockfish to a single NUMA node, then the `system` and `auto` settings will only see a single NUMA node (more precisely, the processors included in the current affinity mask) and act accordingly.
-----------------
We can't ensure that a memory allocation takes place on a given NUMA node without using libnuma on Linux, or appropriate custom allocators on Windows (https://learn.microsoft.com/en-us/windows/win32/memory/allocating-memory-from-a-numa-node), so to avoid complications the current implementation relies on the first-touch policy. Due to this we also rely on the memory allocator to give us a new chunk of untouched memory from the system. This appears to work reliably on Linux, but results may vary.
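The first-touch reliance can be shown in miniature: the buffer is allocated without being written by the allocating thread, and the first write happens on the (node-bound) worker thread, so the kernel backs those pages with memory local to that worker's node. A hedged sketch of the idea — `replicate_weights` is a hypothetical name, and the real patch binds the worker to a node before the copy:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <memory>
#include <thread>

// Illustrative first-touch sketch: allocation alone does not commit
// pages to a NUMA node; the first write decides placement.
std::unique_ptr<char[]> replicate_weights(const char* weights, std::size_t size) {
    // Default-initialization: the new pages are left untouched here.
    std::unique_ptr<char[]> copy(new char[size]);
    std::thread worker([&] {
        // In the real patch this thread is already bound to its NUMA
        // node; this memcpy is the "first touch" that places the pages.
        std::memcpy(copy.get(), weights, size);
    });
    worker.join();
    return copy;
}
```

This is also why a recycled heap chunk would defeat the scheme: pages already touched elsewhere keep their old placement, hence the reliance on the allocator handing back fresh, untouched memory.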
macOS is not supported because, AFAIK, it is not affected, and an implementation would be problematic anyway.
Windows is supported since Windows 7 (https://learn.microsoft.com/en-us/windows/win32/api/processtopologyapi/nf-processtopologyapi-setthreadgroupaffinity). Until Windows 11/Server 2022, NUMA nodes are split such that they cannot span processor groups, because before Windows 11/Server 2022 it was not possible to set a thread affinity spanning processor groups. The splitting is done manually in some cases (required after Windows 10 Build 20348). Since Windows 11/Server 2022 we can set affinities spanning processor groups, so this splitting is not done and the behaviour is much like on Linux.
Linux is supported, **without** libnuma requirement. `lscpu` is expected.
-----------------
Passed 60+1 @ 256t 16000MB hash:
https://tests.stockfishchess.org/tests/view/6654e443a86388d5e27db0d8
```
LLR: 2.95 (-2.94,2.94) <0.00,10.00>
Total: 278 W: 110 L: 29 D: 139 Elo +104.25
Ptnml(0-2): 0, 1, 56, 82, 0
```
Passed SMP STC:
https://tests.stockfishchess.org/tests/view/6654fc74a86388d5e27db1cd
```
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 67152 W: 17354 L: 17177 D: 32621 Elo +0.92
Ptnml(0-2): 64, 7428, 18408, 7619, 57
```
Passed STC:
https://tests.stockfishchess.org/tests/view/6654fb27a86388d5e27db15c
```
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 131648 W: 34155 L: 34045 D: 63448 Elo +0.29
Ptnml(0-2): 426, 13878, 37096, 14008, 416
```
fixes #5253
closes https://github.com/official-stockfish/Stockfish/pull/5285

No functional change