Topic: Hidden Lc0 Parameters
- - By Eduard Nemeth Date 2020-01-08 06:15
Thanks to Jörg Burwitz I have learned about Lc0 options that I didn't know before. I can't see them in either DroidFish or Fritz. Yet they can sometimes be important.

Here are two parameters (to be found on the Lc0 homepage) with which you can influence the time management:

--time-midpoint-move  TimeMidpointMove

"The move where the time budgeting algorithm guesses half of all games to be completed by. Half of the time allocated for the first move is allocated at approximately this move.
Default value: 51.50
Minimum value: 1.00
Maximum value: 100.00"

--time-steepness  TimeSteepness

"Steepness" of the function the time budgeting algorithm uses to consider when games are completed. Lower values leave more time for the endgame, higher values use more time for each move before the midpoint.
Default value: 7.00
Minimum value: 1.00
Maximum value: 100.00"
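
Since neither GUI exposes them, they would have to be passed to the engine directly, e.g. on the command line or via lc0.config; the values below are just the documented defaults:

```
lc0 --time-midpoint-move=51.5 --time-steepness=7.0
```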

Does anyone have experience with these?

Playing with Lc0 on the server, e.g. in 16-minute games, my experience is that after the last book move Lc0 rattles off the next moves at blitz speed, only to suddenly fall into the other extreme at some point later. With these two parameters I now hope to be able to influence that.

But how could I have known about them, when neither of the two GUIs I use even shows these parameters?
Parent - By Thorsten Czub Date 2020-01-08 06:23
He's still alive?!
Parent - - By Eduard Nemeth Date 2020-01-08 06:56
Here are all Lc0 command-line parameters. The two mentioned above are not listed here! How then is Fritz 17 supposed to pick up a parameter from the ENG file if it isn't listed here?

Allowed command line flags for current mode:
  -h,  --help  Show help and exit.

  -w,  --weights=STRING
               Path from which to load network weights.
               Setting it to <autodiscover> makes it search in ./ and ./weights/ subdirectories
               for the latest (by file date) file which looks like weights.
               [UCI: WeightsFile  DEFAULT: <autodiscover>]

  -b,  --backend=CHOICE
               Neural network computational backend to use.
               [UCI: Backend  DEFAULT: blas  VALUES: blas,random,check,roundrobin,multiplexing,demux]

  -o,  --backend-opts=STRING
               Parameters of neural network backend. Exact parameters differ per backend.
               [UCI: BackendOptions]

  -t,  --threads=1..128
               Number of (CPU) worker threads to use.
               [UCI: Threads  DEFAULT: 2  MIN: 1  MAX: 128]

       --nncache=0..999999999
               Number of positions to store in a memory cache. A large cache can speed up
               searching, but takes memory.
               [UCI: NNCacheSize  DEFAULT: 200000  MIN: 0  MAX: 999999999]

       --minibatch-size=1..1024
               How many positions the engine tries to batch together for parallel NN
               computation. Larger batches may reduce strength a bit, especially with a small
               number of playouts.
               [UCI: MinibatchSize  DEFAULT: 256  MIN: 1  MAX: 1024]

       --max-prefetch=0..1024
               When the engine cannot gather a large enough batch for immediate use, try to
               prefetch up to X positions which are likely to be useful soon, and put them into
               cache.
               [UCI: MaxPrefetch  DEFAULT: 32  MIN: 0  MAX: 1024]

       --[no-]logit-q
               Apply logit to Q when determining Q+U best child. This makes the U term less
               dominant when Q is near -1 or +1.
               [UCI: LogitQ  DEFAULT: false]

       --cpuct=0.00..100.00
               cpuct_init constant from "UCT search" algorithm. Higher values promote more
               exploration/wider search, lower values promote more confidence/deeper search.
               [UCI: CPuct  DEFAULT: 3.00  MIN: 0.00  MAX: 100.00]

       --cpuct-base=1.00..1000000000.00
               cpuct_base constant from "UCT search" algorithm. Lower value means higher growth
               of Cpuct as number of node visits grows.
               [UCI: CPuctBase  DEFAULT: 19652.00  MIN: 1.00  MAX: 1000000000.00]

       --cpuct-factor=0.00..1000.00
               Multiplier for the cpuct growth formula.
               [UCI: CPuctFactor  DEFAULT: 2.00  MIN: 0.00  MAX: 1000.00]

       --temperature=0.00..100.00
               Tau value from softmax formula for the first move. If equal to 0, the engine
               picks the best move to make. Larger values increase randomness while making the
               move.
               [UCI: Temperature  DEFAULT: 0.00  MIN: 0.00  MAX: 100.00]

       --tempdecay-moves=0..100
               Reduce temperature for every move from the game start to this number of moves,
               decreasing linearly from initial temperature to 0. A value of 0 disables
               tempdecay.
               [UCI: TempDecayMoves  DEFAULT: 0  MIN: 0  MAX: 100]

       --temp-cutoff-move=0..1000
               Move number, starting from which endgame temperature is used rather than initial
               temperature. Setting it to 0 disables cutoff.
               [UCI: TempCutoffMove  DEFAULT: 0  MIN: 0  MAX: 1000]

       --temp-endgame=0.00..100.00
               Temperature used during endgame (starting from cutoff move). Endgame temperature
               doesn't decay.
               [UCI: TempEndgame  DEFAULT: 0.00  MIN: 0.00  MAX: 100.00]

       --temp-value-cutoff=0.00..100.00
               When move is selected using temperature, bad moves (with win probability less
               than X than the best move) are not considered at all.
               [UCI: TempValueCutoff  DEFAULT: 100.00  MIN: 0.00  MAX: 100.00]

       --temp-visit-offset=-1000.00..1000.00
               Adjusts visits by this value when picking a move with a temperature. If a
               negative offset reduces visits for a particular move below zero, that move is
               not picked. If no moves can be picked, no temperature is used.
               [UCI: TempVisitOffset  DEFAULT: 0.00  MIN: -1000.00  MAX: 1000.00]

  -n,  --[no-]noise
               Add Dirichlet noise to root node prior probabilities. This allows the engine to
               discover new ideas during training by exploring moves which are known to be bad.
               Not normally used during play.
               [UCI: DirichletNoise  DEFAULT: false]

  -v,  --[no-]verbose-move-stats
               Display Q, V, N, U and P values of every move candidate after each move.
               [UCI: VerboseMoveStats  DEFAULT: false]

       --fpu-strategy=CHOICE
               How is an eval of unvisited node determined. "First Play Urgency" changes search
               behavior to visit unvisited nodes earlier or later by using a placeholder eval
               before checking the network. The value specified with --fpu-value results in
               "reduction" subtracting that value from the parent eval while "absolute"
               directly uses that value.
               [UCI: FpuStrategy  DEFAULT: reduction  VALUES: reduction,absolute]

       --fpu-value=-100.00..100.00
               "First Play Urgency" value used to adjust unvisited node eval based on
               --fpu-strategy.
               [UCI: FpuValue  DEFAULT: 1.20  MIN: -100.00  MAX: 100.00]

       --fpu-strategy-at-root=CHOICE
               How is an eval of unvisited root children determined. Just like --fpu-strategy
               except only at the root level and adjusts unvisited root children eval with
               --fpu-value-at-root. In addition to matching the strategies from --fpu-strategy,
               this can be "same" to disable the special root behavior.
               [UCI: FpuStrategyAtRoot  DEFAULT: same  VALUES: reduction,absolute,same]

       --fpu-value-at-root=-100.00..100.00
               "First Play Urgency" value used to adjust unvisited root children eval based on
               --fpu-strategy-at-root. Has no effect if --fpu-strategy-at-root is "same".
               [UCI: FpuValueAtRoot  DEFAULT: 1.00  MIN: -100.00  MAX: 100.00]

       --cache-history-length=0..7
               Length of history, in half-moves, to include into the cache key. When this value
               is less than the history that the NN uses to eval a position, it's possible that the
               search will use eval of the same position with different history taken from
               cache.
               [UCI: CacheHistoryLength  DEFAULT: 0  MIN: 0  MAX: 7]

       --policy-softmax-temp=0.10..10.00
               Policy softmax temperature. Higher values make priors of move candidates closer
               to each other, widening the search.
               [UCI: PolicyTemperature  DEFAULT: 2.20  MIN: 0.10  MAX: 10.00]

       --max-collision-events=1..1024
               Allowed node collision events, per batch.
               [UCI: MaxCollisionEvents  DEFAULT: 32  MIN: 1  MAX: 1024]

       --max-collision-visits=1..1000000
               Total allowed node collision visits, per batch.
               [UCI: MaxCollisionVisits  DEFAULT: 9999  MIN: 1  MAX: 1000000]

       --[no-]out-of-order-eval
               During the gathering of a batch for NN to eval, if position happens to be in the
               cache or is terminal, evaluate it right away without sending the batch to the
               NN. When off, this may only happen with the very first node of a batch; when on,
               this can happen with any node.
               [UCI: OutOfOrderEval  DEFAULT: true]

       --[no-]sticky-endgames
               When an end of game position is found during search, allow the eval of the
               previous move's position to stick to something more accurate. For example, if at
               least one move results in checkmate, then the position should stick as
               checkmated. Similarly, if all moves are drawn or checkmated, the position should
               stick as drawn or checkmate.
               [UCI: StickyEndgames  DEFAULT: true]

       --[no-]syzygy-fast-play
               With DTZ tablebase files, only allow the network pick from winning moves that
               have shortest DTZ to play faster (but not necessarily optimally).
               [UCI: SyzygyFastPlay  DEFAULT: true]

       --multipv=1..500
               Number of game play lines (principal variations) to show in UCI info output.
               [UCI: MultiPV  DEFAULT: 1  MIN: 1  MAX: 500]

       --[no-]per-pv-counters
               Show node counts per principal variation instead of total nodes in UCI.
               [UCI: PerPVCounters  DEFAULT: false]

       --score-type=CHOICE
               What to display as score. Either centipawns (the UCI default), win percentage or
               Q (the actual internal score) multiplied by 100.
               [UCI: ScoreType  DEFAULT: centipawn  VALUES: centipawn,centipawn_2018,win_percentage,Q]

       --history-fill=CHOICE
               Neural network uses 7 previous board positions in addition to the current one.
               During the first moves of the game such historical positions don't exist, but
               they can be synthesized. This parameter defines when to synthesize them (always,
               never, or only at non-standard fen position).
               [UCI: HistoryFill  DEFAULT: fen_only  VALUES: no,fen_only,always]

       --short-sightedness=0.00..1.00
               Used to focus more on short term gains over long term.
               [UCI: ShortSightedness  DEFAULT: 0.00  MIN: 0.00  MAX: 1.00]

  -s,  --syzygy-paths=STRING
               List of Syzygy tablebase directories, list entries separated by system separator
               (";" for Windows, ":" for Linux).
               [UCI: SyzygyPath]

       --[no-]ponder
               This option is ignored. Here to please chess GUIs.
               [UCI: Ponder  DEFAULT: true]

       --[no-]chess960
               Castling moves are encoded as "king takes rook".
               [UCI: UCI_Chess960  DEFAULT: false]

       --[no-]show-wdl
               Show win, draw and lose probability.
               [UCI: UCI_ShowWDL  DEFAULT: false]

  -c,  --config=STRING
               Path to a configuration file. The format of the file is one command line
               parameter per line, e.g.:
               --weights=/path/to/weights
               [UCI: ConfigFile  DEFAULT: lc0.config]

       --kldgain-average-interval=1..10000000
               Used to decide how frequently to evaluate the average KLDGainPerNode to check
               the MinimumKLDGainPerNode, if specified.
               [UCI: KLDGainAverageInterval  DEFAULT: 100  MIN: 1  MAX: 10000000]

       --minimum-kldgain-per-node=0.00..1.00
               If greater than 0 search will abort unless the last KLDGainAverageInterval nodes
               have an average gain per node of at least this much.
               [UCI: MinimumKLDGainPerNode  DEFAULT: 0.00  MIN: 0.00  MAX: 1.00]

       --smart-pruning-factor=0.00..10.00
               Do not spend time on the moves which cannot become bestmove given the remaining
               time to search. When no other move can overtake the current best, the search
               stops, saving the time. Values greater than 1 stop less promising moves from
               being considered even earlier. Values less than 1 causes hopeless moves to still
               have some attention. When set to 0, smart pruning is deactivated.
               [UCI: SmartPruningFactor  DEFAULT: 1.33  MIN: 0.00  MAX: 10.00]

       --ramlimit-mb=0..100000000
               Maximum memory usage for the engine, in megabytes. The estimation is very rough,
               and can be off by a lot. For example, multiple visits to a terminal node counted
               several times, and the estimation assumes that all positions have 30 possible
               moves. When set to 0, no RAM limit is enforced.
               [UCI: RamLimitMb  DEFAULT: 0  MIN: 0  MAX: 100000000]

       --move-overhead=0..100000000
               Amount of time, in milliseconds, that the engine subtracts from its total
               available time (to compensate for slow connection, interprocess communication,
               etc).
               [UCI: MoveOverheadMs  DEFAULT: 200  MIN: 0  MAX: 100000000]

       --slowmover=0.00..100.00
               Budgeted time for a move is multiplied by this value, causing the engine to
               spend more time (if value is greater than 1) or less time (if the value is less
               than 1).
               [UCI: Slowmover  DEFAULT: 1.00  MIN: 0.00  MAX: 100.00]

       --immediate-time-use=0.00..1.00
               Fraction of time saved by smart pruning, which is added to the budget to the
               next move rather than to the entire game. When 1, all saved time is added to the
               next move's budget; when 0, saved time is distributed among all future moves.
               [UCI: ImmediateTimeUse  DEFAULT: 1.00  MIN: 0.00  MAX: 1.00]

  -l,  --logfile=STRING
               Write log to that file. Special value <stderr> to output the log to the console.
               [UCI: LogFile]
Parent - - By Lothar Jung Date 2020-01-08 08:25 Edited 2020-01-08 08:34
To add to that:

--nncache

nncache saves the need to query the NN for transpositions (increasing nps slightly).
Every nncache entry takes 350 bytes of RAM.
Every node in the tree takes 250 bytes of RAM.
You have to decide yourself whether you want to allow faster search (by increasing nncache) or longer search (by saving RAM by reducing nncache).

You can use the formula:
```
max_NN_cache_size = (total_RAM_in_GB)*2500000 - (Longest_move_time_in_minutes)*(nps)*43
```
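
As a quick illustration with assumed numbers (16 GB of RAM, a longest move time of 5 minutes, about 30 knps):

```
max_NN_cache_size = 16*2500000 - 5*30000*43 = 33550000
```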

New option --logit-q (UCI: LogitQ). Changes subtree selection algorithm a bit, with a hope to win won positions faster (experimental, default off).

And now a brief tour of the backends that we have.

tensorflow
This backend was historically the first (briefly) that Lc0 supported. It's not included in pre-built binaries as it's pretty complicated to get it built and run.
Also it didn't have any updates since August, so test10 is the latest network run it supports.
But if we'll ever want to use Google TPUs, this backend has to be revived.
tensorflow-cpu
Version of tensorflow backend that runs on CPU. Was useful before we had blas backend.

cudnn
Backend for CUDA-compatible devices (NVidia GPUs).
Options:

gpu=<int> (default: 0) Which GPU device to use (0-based: 0 for the first GPU device, 1 for the second, etc).
max_batch=<int> (default: 1024) Maximum size of minibatch that the backend allocates memory for. This is different from the search parameter --minibatch-size (but should be at least the same size); it doesn't affect search speed or performance. If it's too large, the engine may refuse to start because it won't be able to allocate the needed VRAM. If it's too small, the engine will crash when a batch coming from the search algorithm turns out to be too large.

cudnn-fp16
Version of cudnn backend which makes use of tensor cores, found in newer NVidia GPUs (RTX 20xx series is the most popular). That improves performance by a lot.
Options:
same as in cudnn.

opencl
OpenCL backend is for GPUs which are not CUDA-compatible. It's slower than CUDA but faster than using CPU.
Options:

gpu=<int> (default: 0) Which GPU device to use (0-based: 0 for the first GPU device, 1 for the second, etc).
force_tune=true Triggers search for the best configuration for GPU.
tune_only=true Force exits the engine as soon as GPU tuning is complete.
tune_exhaustive=true Tries more configurations during tuning. May take some time, but as a result performance may be slightly better.

blas
Runs NN evaluation on CPU.
Options:

blas_cores=<int> (default: 1)  Number of cores to use (probably?..)
batch_size=<int> (default: 256) Maximum size of minibatch that backend can handle. Similar to cudnn's parameter: doesn't change anything by itself. Too large eats up memory, too low crashes. Should probably be renamed to max_batch for consistency.

multiplexing
This backend was originally intended for use during selfplay generation. It combines NN eval requests from several threads (in the selfplay case, those threads come from different games) and sends them on to a child backend as a single batch.
It also supports several child backends and sends a batch to whichever backend is free. Because of this it's also used outside of selfplay, in multi-GPU configurations (although there are now better backends for that).

Options:
Multiplexing takes list of subdictionaries as options, and creates one child backend per dictionary. All subdictionary parameters are passed to those backends, but there are also additional params:

threads=<int> (default: 1) Number of eval threads allocated for this backend.
max_batch=<int> (default: 256) Maximum size of batch to send to this backend.
backend=<string> (default: name of the subdictionary) Name of child backend to use.

Examples:

backend=cudnn,(gpu=0),(gpu=1) -- Two child backends, backend name is inherited from parent dictionary.
blas,cudnn -- Two child backends, blas and cudnn (() are omitted for subdictionary, and name of subdictionary is used as backend= option is not specified).
Not allowed!: cudnn,cudnn (two keys with the same name)
threads=2,cudnn(gpu=0),cudnn-fp16(gpu=1) -- cudnn backend for GPU 0, cudnn-fp16 for GPU 1, two threads are used for each.
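
In normal (non-selfplay) use these dictionaries are what goes into the --backend-opts string; a call of the following shape is my assumption of how the first example above maps onto the command line:

```
lc0 --backend=multiplexing --backend-opts="backend=cudnn,(gpu=0),(gpu=1)"
```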

roundrobin
Can have multiple child backends. Alternates to which backend the request is sent. E.g. if there are 3 children, 1st request goes to 1st backend, 2nd -- to 2nd, then 3rd, then 1st, 2nd, 3rd, 1st, ... and so on.
Somewhat similar to multiplexing backend, but doesn't combine/accumulate requests from different threads, but rather sends them verbatim immediately. It also doesn't need to use any locks which makes it a bit faster.

It's important for this backend that all child backends have the same speed (e.g. same GPU model, and none of them is throttled/overheated). Otherwise all backends will be slowed down to the slowest one. If you use non-uniform child backends, it's better to use multiplexing backend.

Options:
Takes list of subdictionaries as options, and creates one child backend per dictionary. All subdictionary parameters are passed to those backends, but there are also one additional param:

backend=<string> (default: name of the subdictionary) Name of child backend to use.

demux
Does the opposite from what multiplexing does: takes large batch which comes from search, splits into smaller batches and sends them to children backends to compute in parallel.
May be useful for multi-GPU configurations, or multicore CPU configurations too.

As with roundrobin backend, it's important that all child backends have the same performance, otherwise everyone will wait for the slowest one.

Options:

minimum-split-size=<int> (default: 0)  Do not split batch to subbatches smaller than that.
Also takes list of subdictionaries as options, and creates one child backend per dictionary. All subdictionary parameters are passed to those backends, but there are also additional params:
threads=<int> (default: 1) Number of eval threads allocated for this backend.
backend=<string> (default: name of the subdictionary) Name of child backend to use.

random
A testing backend which returns random number as NN eval. Initially was intended for performance testing of search algorithm, but turned out also to be useful to generate seed games when we start new run.

Options:

delay=<int> (default: 0) Do a delay during every NN eval, in milliseconds, to simulate slower backend.
seed=<int> (default: 0) Seed to initialize random number generator to get repeatable results.
uniform=true Enables "uniform mode". Instead of returning random numbers, always returns 0.0 as position eval and equal probability for all possible moves. Turned out that's how DeepMind generated their seed games, and that's what we do now too.

check
Sends the same data to two backends and compares the result. Used to detect/debug mismatches between backends.

Options:

mode=check|display|histo (default: check) What to do with the results: only check (and report mismatches), display short stats, display histogram.
atol=<float> (default: 0.00001) Maximum absolute value difference between backends, still considered normal (not mismatching).
rtol=<float> (default: 0.0001) Maximum relative value difference between backends, still considered normal (not mismatching).
freq=<float> (default: 0.2) How often to check for mismatches (0=never, 1=always, 0.2=for every fifth request)
Two backends to compare are passed as subdictionaries. All params are passed to those backends, as usual, and as usual there's one additional param:
backend=<string> (default: name of the subdictionary) Name of child backend to use.
Parent - - By Eduard Nemeth Date 2020-01-08 08:44 Edited 2020-01-08 08:53
The best approach, it seems to me, is to work only with lc0.config (also with Leelafish). The entries are accepted even when the engine is used under Fritz. So under Fritz it's enough to simply create the new engine (for the required ENG file) with the default parameters, and then make all changes only in the lc0.config that you save in the lc0 folder. That way you don't need separate settings when using the engine under several GUIs such as Arena or Nibbler. One lc0.config for everything!
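
A minimal lc0.config in that spirit could look like this, one command-line flag per line as described under --config above (paths and values are placeholders, not a recommendation):

```
--weights=/path/to/weights
--syzygy-paths=/path/to/syzygy
--time-midpoint-move=51.5
--time-steepness=7.0
--move-overhead=1000
```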
Parent - - By Lothar Jung Date 2020-01-08 09:57 Edited 2020-01-08 10:18
The following parameters affect playing strength, both in general and depending on the size of the net:

setoption name CPuct value
setoption name CPuctFactor value
setoption name CPuctBase value
setoption name FpuValue value
setoption name PolicyTemperature value

setoption name NNCacheSize value
setoption name LogitQ value
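
For reference, this is how these options would be sent over UCI; the values shown are simply the documented defaults from the help output above (a tuned setup would replace them):

```
setoption name CPuct value 3.00
setoption name CPuctFactor value 2.00
setoption name CPuctBase value 19652
setoption name FpuValue value 1.20
setoption name PolicyTemperature value 2.20
setoption name NNCacheSize value 200000
setoption name LogitQ value false
```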

For CPUs there is a new option for the backend: "dnnl2".
Parent - - By Lothar Jung Date 2020-01-08 10:58 Edited 2020-01-08 11:12
On DNNL2, see the following link:

https://github.com/intel/mkl-dnn/releases/tag/v2.0-beta03

--short-sightedness=0.00..1.00 also has an influence on the playing behavior.
Parent - - By Eduard Nemeth Date 2020-01-08 11:36
For the threads, Fritz does require the entry in its own ENG file. The entry from lc0.config is not accepted:
--threads=2

It has to be in the ENG file under [Options]:
[Options]
Threads=2

Does anyone know why that is?
Parent - - By Lothar Jung Date 2020-01-08 11:55
I use Arena.
Parent - - By Eduard Nemeth Date 2020-01-08 12:17 Edited 2020-01-08 12:20
I'm on Schach.de right now, and beforehand I accidentally swapped the values for --time-steepness and --time-midpoint-move. Unbelievable: after more than 5 minutes at the 16-minute level, Lc0 is still thinking.
This game will surely be lost. But that it gets this extreme right away? The maximum value for --time-steepness is 100, and by mistake I've now set roughly half of that. 7 minutes have passed and Lc0 is still on its first move. And now Lc0 has finally moved. After exactly 8 minutes.
Parent - - By Lothar Jung Date 2020-01-08 12:21
I would subject each individual parameter to a separate quick test.
Parent - - By Eduard Nemeth Date 2020-01-08 12:41
I'm doing that right now. I had made the entries so that I wouldn't have to re-enter anything later except the new values. And Leelafish still saved the game: after 8 minutes of pondering, Leelafish saw itself slightly better. After 30 moves the time was almost gone, but Leelafish showed super-blitz qualities.
Parent - - By Eduard Nemeth Date 2020-01-08 13:14
I want to increase the value for --time-steepness, since the time before the endgame matters more to me. In the endgame I have the 6-man and many 7-man tablebases, and with those Lc0 manages quite well. The value --time-steepness=8 brought little change in the second game, which Leelafish, again with Black as before, was also able to hold to a draw against Stockfish. I had already posted my analyses with these settings; with the Sergio net 256x20, Leelafish plays strongly.
Parent - - By Eduard Nemeth Date 2020-01-08 17:10
The next game is running; I've raised the value for steepness to 15. After the last book move, Leelafish allowed itself only 5s more for each of the following two moves. On both moves Leelafish played after only 15s. That's still too fast for me, because its opponent Brainfish thought for a whole minute on each of its two moves. That's quite striking!

Nevertheless, Leelafish seems to be winning! We're in the endgame; Leelafish has only 2 minutes left but sees itself +30 ahead, with 3 extra pawns and the Syzygy bases. Brainfish is only at -4.5.
Parent - - By Eduard Nemeth Date 2020-01-08 17:29
With a value of 20 it now gets worse overall. The first moves are OK, but Brainfish just exchanged on d8 with check; there is only a single move for Leelafish, namely Qxd8. No other move is possible! Nevertheless, Leelafish allowed itself 27s to play the only possible move. For me that means the most sensible value for steepness is at most 15. Next I'll test --time-midpoint-move.
Parent - - By Eduard Nemeth Date 2020-01-08 17:43
I've now set steepness to 15.00 and --time-midpoint-move to 41.50 (instead of 51.50). That looks very good. Good time management so far at 16+0.
Parent - - By Eduard Nemeth Date 2020-01-08 18:05
It wasn't all that great after all. The game did end in a draw, but at move 48 Leelafish had only 60s left on the clock. I'll now set --time-midpoint-move to 46.50.
Parent - - By Eduard Nemeth Date 2020-01-08 18:55 Edited 2020-01-08 18:59
So: the setting with steepness 15.00 and midpoint move 46.50 is very good at 16+0 if you play with Syzygy tablebases of up to 7 men. At move 58 I just now still had 1:20 min on the clock. Because I play with a very high value for Move Overhead, there is no time forfeit. I've played a total of 7 games with Leelafish and the sergio 256x20 net: 6 draws and one win; on a GTX 1050 Ti you can't do more than that.
Parent - - By Eduard Nemeth Date 2020-01-08 19:03
Here is the winning game; Brainfish played a King's Indian, which proved to be its undoing:

[Embedded game viewer; the game score is not reproduced in the text.]
Parent - - By Lothar Jung Date 2020-01-09 09:31
I find your tests and tournament games with Leelafish very noteworthy.
T60 nets are tactically less vulnerable. A lot has also improved in the endgame.
The higher node counts of the RTX GPUs also help a great deal there.
In open positions especially, Leela doesn't (yet) match SF.
SF also uses the TBs much earlier, since it searches more deeply.
Openings and positional play are the strength of the NNs.
Once you get your RTX 2070X you'll be playing in a different league.
I'd be interested in how Leelafish does against the leading SV nets.
Perhaps we could also test together.
Hardware: 2x RTX 2070(X) and Ryzen 3900X.
Parent - - By Eduard Nemeth Date 2020-01-09 09:52
My birthday is coming up soon. After that I'll get myself the new GPU. Yes, I'm also very curious about the results.
Parent - By Lothar Jung Date 2020-01-11 15:26
Perhaps of interest to you?

**Match**: Test settings for 61925 in 1kn/move vs default.
**LC0 version:** 0.23.2
**LC0 options:** cudnn-fp16, 1 thread, settings [cpuct, fpu, pst, cpuct-base]:
    - [1.45, 0.24, 1.85] (default cpuct-base=19652)
    - [1.45, 0.24, 1.85, 5000] (currently used for testing T60 vs SV 20b net)
    - [2.0, 0.5, 1.5] (currently used for testing T59/58)
    - [2.08, 0.47, 1.92] (once optimized for 42800 in about 800n/move)
    - Default settings: [3.0, 1.2, 2.2, 19652].
   
**Time control:**  Fixed node: 1Kn/move, 10Kn/move, 50 kn/move.
**Hardware:** RTX 2070
**Book:** SuperGM 4mvs 500 book, in sequence, reversed color
**Tablebases:** 6 man TB
**Adjudication**: 6-man TB, -draw movenumber=50 movecount=5 score=8 -resign movecount=5 score=1000
**Software:** cutechess-cli
**Comment**: The table shows the results as well as the speed from the pgn files (average speed for the default: 9.1 knps; the custom settings' speeds are given relative to the default). All are better than the default in both speed and strength. The settings [2.0, 0.5, 1.5] that worked well for T58/59 are best here (+49 elo), and also the fastest (at 1 kn/move). The settings I used for testing T60 vs T40 are good, but not best for T60 (+24 elo). I will check other settings that are good for T58 on T60.

```diff
Fixed 1knpm, 1 thread
   # PLAYER                                 :  RATING  ERROR  POINTS  PLAYED   (%)  CFS(%)    W     D    L   Speed
+  1 lc0.net.61925_[2.0_0.5_1.5]            :      49     21   269.0     474  56.8      65  137   264   73   +10%
+  2 lc0.net.61925_[1.45_0.24_1.85]         :      43     21   265.0     474  55.9      83  142   246   86    +8%
+  3 lc0.net.61925_[2.08_0.47_1.92]         :      28     20   255.5     474  53.9      61  121   269   84    +6%
+  4 lc0.net.61925_[1.45_0.24_1.85_5000]    :      24     20   253.0     474  53.4      99  124   258   92    +7%
-  5 lc0.net.61925                          :       0   ----   853.5    1896  45.0     ---  335  1037  524   9.1knps
```
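
For anyone who wants to reproduce a fixed-node match of this kind, a cutechess-cli call roughly along these lines should work; the engine names, paths, book file and round count are placeholders, not the original script:

```
cutechess-cli \
  -engine name=lc0_tuned cmd=lc0 option.CPuct=2.0 option.FpuValue=0.5 option.PolicyTemperature=1.5 \
  -engine name=lc0_default cmd=lc0 \
  -each proto=uci tc=inf nodes=1000 option.WeightsFile=/path/to/61925 \
  -openings file=supergm_4mvs.pgn format=pgn order=sequential -repeat -games 2 -rounds 237 \
  -draw movenumber=50 movecount=5 score=8 -resign movecount=5 score=1000 \
  -tb /path/to/syzygy -pgnout results.pgn
```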