On Tue, 11 Dec 2001 WKRfresno_at_aol.com wrote:
> Horne: All 'A's play home immediately, all '2's play home if the same-suit
> 'A' is already there, an 'N' plays home if the same-suit 'N-1' and both
> 'N-1's of the opposite color are already home.
>
> Raymond: All 'A's play home immediately, an 'N' plays home if the same-suit
> 'N-1', both 'N-2's of the opposite color, and the other same-color 'N-3' are
> already home.
Indeed. I was using Horne's rule (what can I say, it's obvious), and
switching to Raymond's rule improved the solver's performance, not just
in speed but in solution quality, which I define as the length of the
solution in single-card moves. As usual, some boards got worse, but
more got better: only by 1/10 of a move on average, but the
distribution is skewed, with a larger number of games getting shorter
and a small number getting rather longer. In particular, the
0-freecell games MS 5814 and MS 12392 both got worse (but were still
solvable). These games require at least Horne's rule to be solvable.
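The two autoplay rules quoted above can be sketched as predicates over the
foundations. This is just my reading of the rules as stated, not anyone's
actual solver code; the suit names, the dict representation (suit -> highest
rank already home, 0 = empty), and the helper names are my own invention.

```python
# Safe-autoplay rules, as quoted: Horne's and Raymond's.
# Foundations: dict mapping suit -> highest rank played home (0 = empty).

RED = {"hearts", "diamonds"}
BLACK = {"spades", "clubs"}

def opposite_color(suit):
    """The two suits of the other color."""
    return BLACK if suit in RED else RED

def same_color_other(suit):
    """The other suit of the same color."""
    group = RED if suit in RED else BLACK
    return next(s for s in group if s != suit)

def horne_safe(rank, suit, fnd):
    """Aces always; 2s if the same-suit ace is home; otherwise the
    same-suit rank-1 and both opposite-color rank-1's must be home."""
    if rank == 1:
        return True
    if rank == 2:
        return fnd[suit] == 1
    return (fnd[suit] == rank - 1 and
            all(fnd[s] >= rank - 1 for s in opposite_color(suit)))

def raymond_safe(rank, suit, fnd):
    """Aces always; otherwise the same-suit rank-1, both opposite-color
    rank-2's, and the other same-color rank-3 must be home."""
    if rank == 1:
        return True
    return (fnd[suit] == rank - 1 and
            all(fnd[s] >= rank - 2 for s in opposite_color(suit)) and
            fnd[same_color_other(suit)] >= rank - 3)
```

Note that Raymond's conditions are weaker, so it autoplays more cards: with
hearts at 2 and both black aces home, the 3 of hearts is safe under Raymond's
rule but not under Horne's.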
> > Cool.
>
> Hmmmm. A trendy new Japanese expression?
Oh, excuse me. "Sugoi!"
> Information theory may show that fewer bytes are necessary, but the smallest
> practical representation I've worked out is 36 bytes. However, a long series
> of positions can be represented in many fewer than 36 bytes per position.
I presume you mean _at most_ 36 bytes. And of course your second point
there applies to your hash buckets, which are designed so that the
positions in them are all similar and so you only need to record the
differences (by storing them in a trie, for example).
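The trie idea mentioned above can be sketched roughly like this: similar
position encodings share a prefix, so the shared bytes are stored once and
only the differing suffixes cost extra nodes. This is an illustration of the
principle only (a real implementation would pack nodes far more tightly than
Python objects do); the class and function names are mine.

```python
# Toy trie over position encodings (byte strings): shared prefixes are
# stored once, so similar positions cost only their differing suffix.

class TrieNode:
    def __init__(self):
        self.children = {}   # byte value -> TrieNode
        self.terminal = False  # does a stored position end here?

def trie_insert(root, key):
    """Insert `key` into the trie; return True if it was already present."""
    node = root
    for ch in key:
        node = node.children.setdefault(ch, TrieNode())
    seen = node.terminal
    node.terminal = True
    return seen
```

Two positions differing only in their last few bytes share all the earlier
nodes, which is exactly the property a hash bucket of similar positions can
exploit.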
> Yes, a transposition table is mandatory unless I want to spend several weeks
> trying to find a solution to a single deal. But I never record all positions,
> and in some cases positions are retained for shorter or longer periods of
> time.
Ah. When you know for sure that you'll never visit this position again,
you don't have to save it. In practice I've noticed that my solver
generates about twice as many positions as it saves. It would be
interesting to see how many of those positions only get visited once, and
how many get visited lots and lots of times.
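The tally suggested above is easy to instrument: count every generated
position key, then see how many keys occurred exactly once. A minimal sketch
(the key stream would come from the solver's own position hashing, which I'm
abstracting away here):

```python
# Count how many distinct positions were generated exactly once,
# given the stream of position keys a search produced.
from collections import Counter

def visit_histogram(keys):
    """Return (distinct positions, positions generated exactly once)."""
    visits = Counter(keys)
    once = sum(1 for v in visits.values() if v == 1)
    return len(visits), once
```

For a toy stream like `["a", "b", "a", "c", "a", "d"]` this reports 4
distinct positions, 3 of them visited only once.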
<tinker tinker...>
I've been using a splay tree (a self-adjusting binary search tree)
instead of a hash, to exploit the splay tree's caching properties
(since I know that nearby positions often re-generate the same
positions again). (OK, and because it's a cool data structure. :-) It
works pretty well when the tree is small, but some boards generate
millions of intermediate positions, and then it starts to take some
time. Though I've been thinking that my real problem is that I
shouldn't be generating so many positions in the first place, and
there's probably a big prune somewhere that I'm just too lazy to go
looking for.
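For reference, a splay tree in this style can be sketched as below. This is
the textbook recursive variant with keys standing in for packed positions,
not my solver's code; every access splays the touched node to the root, which
is what gives recently-seen positions their fast re-lookup.

```python
# Minimal splay tree: recently accessed keys migrate to the root,
# so re-generating a nearby position finds it near the top.

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Bring the node with `key` (or its nearest neighbor) to the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                      # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                    # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:                     # zig-zig
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                   # zig-zag
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)

def insert(root, key):
    """Insert `key` and return the new root (no-op on duplicates)."""
    root = splay(root, key)
    if root is None:
        return Node(key)
    if key == root.key:
        return root
    if key < root.key:
        new = Node(key, root.left, root)
        root.left = None
    else:
        new = Node(key, root, root.right)
        root.right = None
    return new

def contains(root, key):
    """Return (new_root, found); the caller must keep the new root,
    since searching restructures the tree."""
    root = splay(root, key)
    return root, root is not None and root.key == key
```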
Indeed, preliminary analysis of a few boards suggests that 30-40% of the
intermediate positions are only visited once. While not storing them in
the first place would be an improvement, I'm still left with 2 million
instead of 3. How many intermediate positions do you visit, for, say,
MS#883 with zero freecells? My solver (which doesn't solve it)
generates almost 7 million positions, 2.6 million of which are unique,
and 24% of which are visited only once.
I'll bet you don't generate that many to begin with.
> > (A drawback of a position store is that it prevents the cool feature of
> > having the solver keep going after finding a solution, looking for others.
>
> That shouldn't be a problem, unless something in your implementation gets in
> the way.
Something does. I store _every_ position I visit, including the ones
generated during the final flourish. So I can't visit them again. But,
if I didn't store them...
Cheers,
Dr. Tom Holroyd
"I am, as I said, inspired by the biological phenomena in which
chemical forces are used in repetitious fashion to produce all
kinds of weird effects (one of which is the author)."
-- Richard Feynman, _There's Plenty of Room at the Bottom_
Received on Wed Dec 12 2001 - 01:42:42 IST