I'm not sure of my technique's name. I call it a "prioritized" search.
I start with the initial layout and generate every possible move for it. Each new layout is stored in a tree whose key is a "priority" value plus a sequence number (to break ties between duplicate priorities). I then discard the current layout and fetch the layout with the smallest key from the tree. If the current layout has no remaining moves, or it has reached the maximum number of moves allowed, I don't generate layouts for it. Note that this approach lets me jump from one search path to another whenever an alternate path holds the "best" remaining key.
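A minimal sketch of that loop, using Python's heapq as the ordered tree. The puzzle here is a toy stand-in (reach a target number with "+1" and "*2" moves), and the priority function `dist` is purely hypothetical -- the real priority function for Freecell is the part described as hit-and-miss below:

```python
import heapq
from itertools import count

def prioritized_search(start, goal, moves_fn, priority_fn, max_moves=80):
    """Best-first ("prioritized") search: layouts sit in a heap keyed by
    (priority, sequence number); the sequence number breaks ties between
    duplicate priorities, as in the post."""
    seq = count()  # tie-breaker for duplicate priority values
    heap = [(priority_fn(start), next(seq), start, [])]
    seen = set()
    while heap:
        _, _, layout, path = heapq.heappop(heap)
        if layout == goal:
            return path
        if layout in seen or len(path) >= max_moves:
            continue  # at the move limit (or already expanded): don't generate layouts
        seen.add(layout)
        for move, nxt in moves_fn(layout):
            heapq.heappush(heap, (priority_fn(nxt), next(seq), nxt, path + [move]))
    return None  # no solution found within the limits

# Toy stand-in for a deal: reach 17 from 1 using "+1" and "*2" moves.
moves = lambda n: [("+1", n + 1), ("*2", n * 2)]
dist = lambda n: abs(17 - n)  # hypothetical priority: smaller = more promising
print(prioritized_search(1, 17, moves, dist, max_moves=10))
# prints ['+1', '*2', '*2', '*2', '+1']
```

Because the heap always yields the globally smallest key, the search hops between branches instead of committing to one, which is what distinguishes it from a plain depth-first search.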
Limiting the number of moves only works if you know a solution exists within the move limit, and you can store enough layouts to reach it. That's where the prioritization algorithm (PA) helps. Unfortunately, my PA was developed through a hit-and-miss approach that owed more to luck than science. I'm still trying to refine it!
My solver doesn't yield fully optimal solutions, but I've been fortunate: it does find reasonably short solutions -- even with an unrestrictive maximum number of moves. Using an unrestrictive limit -- 80 moves -- also yields my fastest execution times: originally about one hour to process 1000000 deals.
I have another version of my solver that iteratively looks for shorter solutions by reducing the maximum number of moves on each iteration. Because of execution time, I limit the size of my tree in this version; otherwise it'll search for over six minutes on many deals, with little gain after 30 seconds. These solutions can only be called "near" optimal. Still, it's impressive to watch a deal that normally takes 53 moves get reduced to 42 or fewer.
--- In fc-solve-discuss_at_yahoogroups.com, Shlomi Fish <shlomif_at_...> wrote:
>
> >
> > As for my freecell solver, I maintain a byte array for the moves
> > associated with each layout that I generate while searching for a
> > solution. When another move would exceed the size of the moves array
> > for the current layout, I abandon the layout and proceed to another
> > layout that's been cached.
>
> So are you doing a depth-limited depth first search? I don't fully understand
> how this can work.
>
> >
> > I recently made a run of the first 1000000 deals while limiting the
> > number of moves to 64. I had a median of 48 moves for the solveable
> > deals, but the results were skewed at the upper end. I'm not
> > surprised that the results were skewed, but I do wonder how he
> > managed to get (what appears to be) a normalized bell curve for his
> > results.
>
> Curious. Does your solver yield fully optimal solutions?
>
Received on Wed Sep 12 2012 - 09:04:32 IDT