Reverse Engineering the AIMA
EightPuzzleDemo

In my ongoing handle-getting effort (see my attempt at Agent code here) I have attempted to reverse engineer the Java EightPuzzleDemo code while it was running a GreedyBestFirst search from the simplest -- boardWithThreeMoveSolution -- starting position. In the course of such events I have mapped out (most of?) the search demo code as well.

Eight Puzzle Representation

The model of the eight puzzle board is a one-dimensional(!?) array of nine integers which map onto the Puzzle Board left->right and top->bottom. The tiles are numbered from zero to eight, where zero is the blank space. The default goal state for the puzzle is to have all the numbers in ascending order:

0 1 2
3 4 5
6 7 8
which is internally represented in the EightPuzzleBoard.state array as having each integer in its appropriate indexed slot:
{ 0, 1, 2, 3, 4, 5, 6, 7, 8 };

Each puzzle state is a unique ordering of the array, and since the numbers cannot be reused there are 9! = 362,880 possible states (my dull math-mind first assumed that there were 9^9 states, but that is the space with full replacement for each digit).

The blank space "tile" is effectively the Agent searching in an Environment defined by the EightPuzzleBoard layout and rules. Actions that can be taken are defined as motions of the blank and are relative to it. A move "Right" causes the blank to switch places with the tile to its right, such that (starting from the goal state above) you swap 0 and 1:

{ 1, 0, 2, 3, 4, 5, 6, 7, 8 };
There are four possible moves, constrained by the blank's position -- if it's "up against a wall" at the edge of the puzzle it can't move into the wall. So from any parent state there are two, three, or four possible child states to explore.
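
To make the representation concrete, here is a minimal sketch of a "Right" move -- my own illustration, not the actual EightPuzzleBoard code -- showing that it boils down to swapping two entries of the int[9] array:

// Illustrative sketch only -- not the real EightPuzzleBoard implementation.
// The board is a flat int[9]; index = row * 3 + col; the value 0 marks the blank.
public class BoardSketch {
    public static void main(String[] args) {
        int[] board = { 0, 1, 2, 3, 4, 5, 6, 7, 8 };   // the goal state

        // A "move Right": swap the blank with the tile to its right.
        int blank = 0;
        for (int i = 0; i < board.length; i++)
            if (board[i] == 0) blank = i;              // locate the 0 "tile"
        if (blank % 3 < 2) {                           // not up against the right wall
            board[blank] = board[blank + 1];
            board[blank + 1] = 0;
        }
        System.out.println(java.util.Arrays.toString(board));
        // prints [1, 0, 2, 3, 4, 5, 6, 7, 8] -- the swapped state shown above
    }
}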

The Search Process Outline

The subtleties of each search algorithm are fairly well buried in the actual implementations, especially as to how states are expanded and how a particular child is selected for further exploration. I have tried to describe some of these issues in this doc.

<IMHO mode="grumble">
The int[] board mapping and blank space agent behavior are the logically (perhaps) "obvious" decomposition of the Eight Puzzle. But, as far as I can tell, it is not explicitly described anywhere so I had to run the code to verify those assumptions.

The structure of the Search and NodeExpander child classes is so complicated as to appear to be intentionally malicious. I haven't done a deep analysis so it may be that this is the best of all possible worlds, but it makes me a little queasy. I'm not sure why there are the variances in class structure where some, but not all, top level search classes extend NodeExpander, others use a separate NodeExpander child class to implement search behavior, and a few don't use it at all (also IterativeDeepeningSearch extends it to seemingly no purpose). Perhaps just renaming some things, like QueueSearch.search() which gets easily jumbled up with the actual Search.search() interface, would make the code a bit easier to follow.

Along the naming lines, the code contains a number of HUGE object names, e.g.: replaceFrontierNodeAtStateCostFunction which describes the thing to a "t" -- if one knows what is being described -- and a few tiny ones, e.g.: f which may match a function name in the AIMA book, but...oy... In a number of cases, e.g.: BestFirstSearch.getComparator(), there is an "f" field and an "f()" method, making it darned easy to confuse one with the other when reading the code. Just a serving suggestion on my part.

The initialization of the system is also exceedingly convoluted. Many hoops are jumped through to get the necessary search behavior parameters and methods to the right places before beginning the real work. There is a Problem class which is pretty much a container of necessary miscellany and is awfully close to an Environment class...but the two do not meet in this context. (The GUI EightPuzzleApp demo does make some use of the Agent/Environment paradigm, but this console demo only takes a swipe at making a SearchAgent class that doesn't use any of the AbstractAgent features).

The most egregious setup example is in getting the right heuristic function to the right PriorityQueue comparator. Starting from the getComparator() call in PrioritySearch.search(), a number of squirrel holes are examined and filled, including this lovely <kludge> in BestFirstSearch.getComparator():


if (this.search instanceof GraphSearch) {
	((GraphSearch) this.search).setReplaceFrontierNodeAtStateCostFunction(f);
}
because an optimization in QueueSearch needs to be able to evaluate the heuristic on the fly as it is expanding the search tree.

And speaking of <kludges>.... There is an EightPuzzleBoard Goal State defined in EightPuzzleGoalTest and used to test for search completion. But both heuristic functions, MisplacedTilleHeuristicFunction(sic) and ManhattanHeuristicFunction, hard code the completion state in their calculations. So the Goal State is defined at least three times in the code.
</IMHO>

Class Diagrams

Support classes

There are two important classes, Node and Problem, that support search behavior, one on each end of the system as it were. Neither of them has a class hierarchy per se, but I'll describe their contents here in the hope that it will make the search hierarchy a bit more understandable.

Node

The Node class is at the bottom of it all. It defines a single system state in the search tree and is manipulated by all the search algorithms.


Node
  Object state; -- specific system config
  Object parent; -- previous Node in search path
  Object action; -- Action that moved from parent to here
  double pathCost; -- Cost of that Action
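
Since each Node carries its parent and the Action that produced it, the path from the root to any Node can be rebuilt by walking the parent links. A minimal sketch of the idea (this is what Node.getPathFromRoot(), used in the trace below, does as far as I can tell; the accessor names are assumed):

// Sketch of path reconstruction by walking parent links -- a paraphrase, not the real code.
// import java.util.LinkedList; import java.util.List;
static List<Node> pathFromRoot(Node node) {
    LinkedList<Node> path = new LinkedList<Node>();
    for (Node n = node; n != null; n = n.getParent())
        path.addFirst(n);                 // prepend so the root Node ends up first
    return path;
}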

Problem

The Problem class is at the top of the system. It defines (almost) all of the problem specific operations needed by the lower levels. It is passed in to most search methods and utilities. One might think of it as a global variable containing problem specific functions...


Problem problem;
  Object initialState; -- problem specific starting state
  ActionsFunction actionsFunction; -- actions available to the agent
  ResultFunction resultFunction; -- what each action does
  GoalTest goalTest; -- test for goal reached
  StepCostFunction stepCostFunction; -- returns cost of action execution


Search classes

The search code revolves around two main definitions -- the Search interface and the NodeExpander class -- which are inter-related in some (less than apparent) complicated ways.

Using these definitions, high level classes are built for all the search algorithms in the AIMA book, e.g., BreadthFirstSearch, DepthFirstSearch, etc.

Search implementation hierarchy


Search -- search( Problem )
  PrioritySearch -- uses QueueSearch with PriorityQueue
    BestFirstSearch
      AStarSearch -- could use TreeSearch or GraphSearch impls
      GreedyBestFirstSearch -- probably GraphSearch impl
    UniformCostSearch -- defaults to GraphSearch impl
  BidirectionalSearch -- expands own Nodes using CachedStateQueues
  BreadthFirstSearch -- uses GraphSearch with FIFOQueue
  DepthFirstSearch -- uses (probably) GraphSearch with LIFOQueue
  DepthLimitedSearch -- recursively expands own Nodes
  HillClimbingSearch ext NodeExpander -- no Queues...
  IterativeDeepeningSearch ext NodeExpander -- uses internal DepthLimitedSearch
  RecursiveBestFirstSearch ext NodeExpander -- recursively calls expandNode
  SimulatedAnnealingSearch ext NodeExpander -- randomly selects from expandNode list

Each of these classes implements the Search interface, which boils down to this method:
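
(The snippet below is my reconstruction of that interface from reading the sources; exact details may differ between versions of the code.)

// import java.util.List;
public interface Search {
    // Solve the given Problem and return the Actions that lead from the initial
    // state to a goal state (an empty list if no solution is found, as far as I can tell).
    List<Action> search(Problem p) throws Exception;
}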

The search() method takes a Problem, which is a description of the initial state, the goal state, and the methods to be used to solve it, and returns a list of Actions (also described in the Problem definition) to be taken in order to reach the specified goal.

NodeExpander extension hierarchy

Many of the top level Search classes also extend NodeExpander.


NodeExpander -- expandNode( Node, Problem )
  QueueSearch -- search(Problem, Queue<Node> frontier)
    GraphSearch
    TreeSearch
  DepthLimitedSearch impl Search
  HillClimbingSearch impl Search
  IterativeDeepeningSearch impl Search
  RecursiveBestFirstSearch impl Search
  SimulatedAnnealingSearch impl Search

The main method of interest here is expandNode(). Using the given parent node and the Problem definitions, expandNode() returns a list of new child Nodes. Instead of extending NodeExpander, some of the top level classes, like GreedyBestFirstSearch, contain an extension of it -- e.g. QueueSearch and its children -- which is called to do the real work. Other classes implement their own node expansion.
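
Here is a rough sketch of what expandNode() does, pieced together from the pseudocode trace below; it is a paraphrase, and the accessor names are assumed:

// Approximate logic of NodeExpander.expandNode(Node, Problem) -- not the verbatim source.
// import java.util.ArrayList; import java.util.List;
List<Node> expandNode(Node node, Problem problem) {
    List<Node> childNodes = new ArrayList<Node>();
    for (Action action : problem.getActionsFunction().actions(node.getState())) {
        Object childState = problem.getResultFunction().result(node.getState(), action);
        double stepCost = problem.getStepCostFunction().c(node.getState(), action, childState);
        // each child Node records its state, parent, action, and cumulative path cost
        childNodes.add(new Node(childState, node, action, node.getPathCost() + stepCost));
    }
    return childNodes;
}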

One feature of QueueSearch and its extensions is that their behavior can be controlled by the selection of Queue type, as indicated in the Search hierarchy above: a FIFOQueue gives breadth-first behavior, a LIFOQueue gives depth-first behavior, and a PriorityQueue (with an appropriate Comparator) gives best-first behavior.

QueueSearch also contains a boolean flag, checkGoalBeforeAddingToFrontier, which controls when the goal state is checked. It defaults to false: each Node is checked for the goal after it is popped off the frontier list, as in the Depth Searches. When set to true, each Node is checked as it is generated, before it is pushed onto the frontier list; this is generally used for Breadth Searches as in BreadthFirstSearch.
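
A heavily simplified sketch of the effect of that flag on the main loop (the helper and queue method names are illustrative, not the real QueueSearch identifiers; see the pseudocode trace below for the actual flow):

while (!frontier.isEmpty()) {
    Node node = frontier.pop();
    if (!checkGoalBeforeAddingToFrontier && isGoal(problem, node))
        return solution(node);                    // default: goal test when popped
    for (Node child : expandNode(node, problem)) {
        if (checkGoalBeforeAddingToFrontier && isGoal(problem, child))
            return solution(child);               // true: goal test before pushing
        frontier.insert(child);
    }
}
return failure();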

Data Schema

EightPuzzleDemo -- main program

The top level program is EightPuzzleDemo.eightPuzzleGreedyBestFirstDemo().


EightPuzzleDemo.eightPuzzleGreedyBestFirstDemo()
  Problem problem;
    Object initialState = EightPuzzleBoard; == boardWithThreeMoveSolution
    ActionsFunction actionsFunction = EPActionsFunction;
    ResultFunction resultFunction = EPResultFunction;
    GoalTest goalTest = EightPuzzleGoalTest;
    StepCostFunction stepCostFunction = DefaultStepCostFunction;
  Search search = GreedyBestFirstSearch; 
     EvaluationFunction evaluationFunction =
         GreedyBestFirstEvaluationFunction which uses:
           HeuristicFunction hf = MisplacedTilleHeuristicFunction;
     QueueSearch search = GraphSearch;
  SearchAgent agent = SearchAgent;
The program creates the Problem, the GreedyBestFirstSearch, and the SearchAgent shown in the outline above.
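
Roughly, the setup looks like this -- reconstructed from the outline above and my reading of the demo source; exact factory and constructor signatures may vary between versions:

Problem problem = new Problem(boardWithThreeMoveSolution,
        EightPuzzleFunctionFactory.getActionsFunction(),   // EPActionsFunction
        EightPuzzleFunctionFactory.getResultFunction(),    // EPResultFunction
        new EightPuzzleGoalTest());                        // step cost defaults to 1.0
Search search = new GreedyBestFirstSearch(new GraphSearch(),
        new MisplacedTilleHeuristicFunction());            // (sic) -- spelling from the source
SearchAgent agent = new SearchAgent(problem, search);      // the constructor runs the search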

The SearchAgent constructor executes the actual search by calling GreedyBestFirstSearch.search(Problem), which internally works its way down: PrioritySearch.search() builds a PriorityQueue frontier ordered by the Comparator returned from getComparator(), then hands the Problem and the frontier to GraphSearch/QueueSearch.search() (see the pseudocode trace below).

The getComparator() call returns a Comparator class which uses GreedyBestFirstEvaluationFunction which, in turn, uses the heuristic function to evaluate how good a given state is. The PriorityQueue uses this Comparator to order its frontier Node entries by how good they might be. A side effect of the getComparator() call is that the same Comparator class is poked into GraphSearch.replaceFrontierNodeAtStateCostFunction where it is used to adjudicate on duplicate states in the frontier list.
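
As I read it, the Comparator boils down to comparing the evaluation function values of two Nodes, something like the following (a paraphrase of what BestFirstSearch.getComparator() builds, with evaluationFunction being the GreedyBestFirstEvaluationFunction from the outline above):

// import java.util.Comparator;
Comparator<Node> comparator = new Comparator<Node>() {
    public int compare(Node n1, Node n2) {
        double f1 = evaluationFunction.f(n1);
        double f2 = evaluationFunction.f(n2);
        return Double.compare(f1, f2);   // lower f() sorts toward the front of the PriorityQueue
    }
};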

EightPuzzleBoard -- the state description

This defines the Actions that can be taken and contains the int state array described above. One of these objects is stored everywhere a "state" field is encountered in the program, e.g. Node.state.


EightPuzzleBoard
  Action LEFT = new DynamicAction("Left");
  Action RIGHT = new DynamicAction("Right");
  Action UP = new DynamicAction("Up");
  Action DOWN = new DynamicAction("Down");
  int[] state; -- 9 ints: tile number in each location l-r-t-b, 0=empty

GreedyBestFirstSearch -- the Search wrapper


GreedyBestFirstSearch
  EvaluationFunction evaluationFunction;
  QueueSearch search;

The constructor sets the search element to a GraphSearch object and the evaluationFunction to GreedyBestFirstEvaluationFunction (which boils down to MisplacedTilleHeuristicFunction). The search object is used to execute the actual search and the evaluationFunction is used by the Comparator that is returned by getComparator() for use in the GraphSearch PriorityQueue.
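
The interesting bit is what GreedyBestFirstEvaluationFunction actually computes: as far as I can tell it is just f(n) = h(n), the raw heuristic estimate with no path cost term (A*, by contrast, uses f(n) = g(n) + h(n)). A sketch:

// Sketch of the greedy evaluation: only the heuristic estimate counts.
// (hf is the MisplacedTilleHeuristicFunction plugged in at construction time.)
public double f(Node n) {
    return hf.h(n.getState());   // estimated distance to goal; path cost so far is ignored
}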

GraphSearch -- the actual search mechanism

This class executes the real search after being initialized by the Problem and PrioritySearch.search() call.


GraphSearch
  Queue<Node> frontier; -- Nodes to explore, set to PriorityQueue
  HashMap<Object, Node> frontierState;
      -- frontier lookup table: EightPuzzleBoard -> Node
  HashSet<Object> explored; -- list of EightPuzzleBoard's already examined
  Comparator<Node> replaceFrontierNodeAtStateCostFunction
      =  BestFirstSearch.Comparator;
  boolean checkGoalBeforeAddingToFrontier; -- false (here), as for DepthFirstSearch
  ArrayList<Node> addToFrontier; -- temp results of Node expansion
  Metrics metrics;

The PriorityQueue frontier is created by the PrioritySearch caller and is the main driver for the search. The most-likely-to-goal Node is popped from this queue and expanded. The expanded child Nodes are pushed back onto the queue, and the popping and expansion continues until the goal is reached (or we run out of Nodes).

The HashMap frontierState is a parallel map of the frontier States to Nodes. It is used to optimize the process of determining if a State is already in the frontier queue. It is managed, almost, in parallel with the frontier queue. The only divergence is that elements are pushed onto the frontierState map as they are expanded, but the resulting Nodes are not entered into the frontier queue until each expansion cycle is complete. <IMHO>I'm not sure if this is a feature or a bug, or if it IS a feature if it is actually necessary. It does slightly confuse those who are trying to understand the code though.</IMHO>
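
A simplified sketch of that bookkeeping -- my paraphrase of GraphSearch.getResultingNodesToAddToFrontier(), not the verbatim code:

// Paraphrase of the duplicate-state filtering done during Node expansion.
explored.add(nodeToExpand.getState());
for (Node child : expandNode(nodeToExpand, problem)) {
    Object s = child.getState();
    if (!explored.contains(s) && !frontierState.containsKey(s)) {
        frontierState.put(s, child);          // noted here, but queued later by QueueSearch
        addToFrontier.add(child);
    } else if (frontierState.containsKey(s)
            && replaceFrontierNodeAtStateCostFunction.compare(child, frontierState.get(s)) < 0) {
        // a cheaper route to a state already on the frontier: swap the old Node out
        frontier.remove(frontierState.get(s));
        frontierState.remove(s);
        frontierState.put(s, child);
        addToFrontier.add(child);
    }
}
return addToFrontier;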

The HashSet explored contains all the States that have been examined and rejected. It is used to eliminate duplicate explorations when expanding a Node.

As noted before, replaceFrontierNodeAtStateCostFunction is quietly set to a Comparator object -- which uses GreedyBestFirstEvaluationFunction and MisplacedTilleHeuristicFunction -- during setup by BestFirstSearch.getComparator().

The ArrayList addToFrontier is used to pass the temp results of Node expansion from GraphSearch.getResultingNodesToAddToFrontier() back to QueueSearch.search(). <IMHO>I'm not sure why this is not a local variable...except for the confusion factor.</IMHO>

SearchAgent -- a shell of a class

This class provides a convenient place to put things that might not go elsewhere, kind of like Problem. As far as searching goes, the constructor executes the given search program and saves the resulting Action list for subsequent use. <IMHO>I'm not sure it's even fair to call this an Agent vis-a-vis the description in the AIMA book.... except, of course, for the confusion factor.</IMHO>


SearchAgent agent extends AbstractAgent implements Agent;
  List<Action> actionList;
  Metrics searchMetrics;
  AgentProgram program;

Problem and Node

The Problem and Node classes are described in the Class Diagrams section above. The specific settings for the Problem in the EightPuzzleDemo are shown at the top of this section.

Code Execution Trace

Pseudocode trace


SearchAgent.
  call GreedyBestFirstSearch.search( Problem problem )
    call PrioritySearch.search( Problem problem )
      comparator = BestFirstSearch.getComparator()
      frontier = new PriorityQueue( 5, comparator )
      call GraphSearch.search( Problem problem, Queue<Node> frontier )
        clear explored and frontierState lists
        call QueueSearch.search( Problem problem, Queue<Node> frontier )
          make Node for problem.initialState
          push initialState Node onto frontier queue
          while( there are frontier Nodes )
            pop highest priority Node from frontier and remove it from frontierState
            if this Node is the Goal
              return SearchUtils.actionsFromNodes( node.getPathFromRoot() )
                -- returns resulting list of search Actions to perform 
            call GraphSearch.getResultingNodesToAddToFrontier() to expand Node
              add nodeToExpand.state to explored list
              call NodeExpander.expandNode() to get list of new Nodes
                execute all possible Actions and create new Nodes for each result
                 -- uses Problem.actionsFunction, .resultFunction, .stepCostFunction
                 -- to fill in Node .state, .parent, .action, .cost
                 -- returns ArrayList of new Nodes
              for( the new Nodes )
                lookup new Node.state in frontierState HashMap and explored HashSet
                if not found in frontierState or explored lists
                  add new State to frontierState and Node to the return list
                else if Node already in frontier but at a higher cost
                  remove high cost State from frontierState and Node from frontier
                  add new State to frontierState and Node to the return list
                -- note: [I think] updating frontierState in this loop
                --        eliminates duplicate State entries at this level
              return list of Nodes to add to frontier
            for( the new Nodes )
              insert new Nodes into frontier queue
            loop to: while there are frontier Nodes...

          if while(there are frontier Nodes) breaks -- out of Nodes to explore
            return failure

A more detailed and annotated trace output of the program running is here:

Instrumented execution

I made some small modifications to the QueueSearch and Node classes in order to instrument their function. Using these, QueueSearch.search() dumps its frontier list contents and then prints the new Node popped off of its queue at the beginning of each pass. You can collect these source mods -- to see the changes look for the string "SCHIP" -- here if you want to try this at home...

Below is the output of the simplest search with the options shown. Lines like "frontier at step 0:" precede the dump of the frontier Queue's Nodes -- they appear to be in priority order, although the iterator I used claims no responsibility for maintaining it. Lines preceded with "pop:" show the Node that was popped off the Queue for processing. A blank line separates iterations of the while( frontier ) loop. The actual Node dump may look like this:

17934197::[parent=31149935], action=Action[name==Up], pathCost=1.0 state=
1 2 0
3 4 5
6 7 8

Where the first digit string "17934197" is the hashcode of the Node in question, used as an identifier (note that hashcodes are not guaranteed to be unique, but in practice here they are). The second field "[parent=31149935]" is the hash of the parent Node, Action is what happened to get us from there to here, pathCost is the cumulative cost of getting here, and state is the EightPuzzleBoard state in a convenient graphic layout.

In general you can compare the "pop:" Node with the first Node in the subsequent "frontier" set to follow how the Actions progress to the goal. And, once at the goal, you can backtrack through the successful Node.action's to re-generate the path.

execution trace


EightPuzzleDemo Greedy Best First Search (MisplacedTileHeursitic)-->
frontier at step 0:
31149935::[parent=null], action=null, pathCost=0.0 state=
1 2 5
3 4 0 
6 7 8
pop: 31149935::[parent=null], action=null, pathCost=0.0 state=
1 2 5
3 4 0 
6 7 8

frontier at step 1:
17934197::[parent=31149935], action=Action[name==Up], pathCost=1.0 state=
1 2 0
3 4 5 
6 7 8
4875224::[parent=31149935], action=Action[name==Down], pathCost=1.0 state=
1 2 5
3 4 8 
6 7 0
31522607::[parent=31149935], action=Action[name==Left], pathCost=1.0 state=
1 2 5
3 0 4 
6 7 8
pop: 17934197::[parent=31149935], action=Action[name==Up], pathCost=1.0 state=
1 2 0
3 4 5 
6 7 8

frontier at step 2:
9532399::[parent=17934197], action=Action[name==Left], pathCost=2.0 state=
1 0 2
3 4 5 
6 7 8
4875224::[parent=31149935], action=Action[name==Down], pathCost=1.0 state=
1 2 5
3 4 8 
6 7 0
31522607::[parent=31149935], action=Action[name==Left], pathCost=1.0 state=
1 2 5
3 0 4 
6 7 8
pop: 9532399::[parent=17934197], action=Action[name==Left], pathCost=2.0 state=
1 0 2
3 4 5 
6 7 8

frontier at step 3:
22171962::[parent=9532399], action=Action[name==Left], pathCost=3.0 state=
0 1 2
3 4 5 
6 7 8
22201561::[parent=9532399], action=Action[name==Down], pathCost=3.0 state=
1 4 2
3 0 5 
6 7 8
31522607::[parent=31149935], action=Action[name==Left], pathCost=1.0 state=
1 2 5
3 0 4 
6 7 8
4875224::[parent=31149935], action=Action[name==Down], pathCost=1.0 state=
1 2 5
3 4 8 
6 7 0
pop: 22171962::[parent=9532399], action=Action[name==Left], pathCost=3.0 state=
0 1 2
3 4 5 
6 7 8

Action[name==Up]
Action[name==Left]
Action[name==Left]
pathCost : 3.0
nodesExpanded : 3
queueSize : 3
maxQueueSize : 4