Hi all!
Recently a series of commits was merged into the fc-solve master branch that
provided a compile-time option to use GCC's __int128 128-bit integers for the
var_base* routines instead of GNU Multiple Precision's big integers. These
routines are used by the depth_dbm_fc_solver and by similar solvers - not by
fc-solve and libfreecell-solver themselves (at least not yet).
This change was largely inspired by this post:
https://www.nu42.com/2016/01/excellent-optimization-story.html , which was
referenced in this Perl Weekly edition:
http://perlweekly.com/archive/234.html .
Here is a (JavaScript-requiring) chart with the runtime results:
http://fc-solve.shlomifish.org/charts/dbm-solver-__int128-optimisation/
(Sorry for not publishing it earlier but I ran into some problems with
TypeScript, jQuery, and Flot which delayed me.)
As one can see, the newer benchmarks run in about two-thirds of the time of
the older code, which is a significant improvement. The invocation I used was:
./depth_dbm_fc_solver --num-threads 1 \
    --offload-dir-path /home/shlomif/tmp/depth-dbm/ 1107600547.board \
    | tee 1107600547.depth_dbm.dump
Regards,
Shlomi Fish
--
-----------------------------------------------------------------
Shlomi Fish http://www.shlomifish.org/
http://shlomifishswiki.branchable.com/Self-Sufficiency/
Microsoft — making it all make sense. Ours.
Please reply to list if it's a mailing list post - http://shlom.in/reply .
Received on Mon Oct 17 2016 - 06:24:59 IDT