Graph-Based Path Planning: A*

A while back I wrote a post about one of the most popular graph-based planning algorithms, Dijkstra’s Algorithm, which explores a graph to find the shortest path from a starting node to an ending node. I also covered the very fundamentals of graph theory, graph representations as data structures (see octrees), and finally a Python implementation that animates the search process for several example graphs. If you haven’t read that yet, I urge you to check it out first, because I will continue the discussion assuming some familiarity with the subject.

An example octomap of an outdoor environment. A map this large can be visualized efficiently thanks to the tree-based structure of an Octomap. Source: octomap.github.io/freiburg_outdoor_big.png

Dijkstra vs A*

Dijkstra’s Algorithm is notable because it begins exploring from a starting node and continues until it reaches the ending node; it then identifies the chain of nodes connecting the end node back to the start as a single path. It works by exploring nodes in order of their distance from the starting node. Put another way, as it expands outwards from the start it spends just as much effort on nodes that lead away from the goal as on nodes that lead towards it. As one can imagine, that can become computationally expensive as the graph representation of our world grows.

Dijkstra.png

Example of Dijkstra’s Algorithm attempting to go from a start node to a goal node. See how it has to expand out almost the entire distance of the path in every direction before it finds the goal? Source: wiki.ros.org/global_planner

So, in 1968, Peter Hart, Nils Nilsson, and Bertram Raphael at SRI tried to improve on this planning approach for Shakey the Robot, SRI’s prototype mobile robot. The algorithm developed for Shakey is what is now known as A* (pronounced ‘A Star’). A* improves on Dijkstra’s Algorithm by focusing its exploration on nodes that bring us closer to the goal, which greatly speeds up finding the shortest, most direct path.

AStar2.png

The same path planning problem as shown above, but look at how much more quickly the goal is found. With the use of a guiding heuristic, we mostly expand nodes that drive us towards the goal node. Source: wiki.ros.org/global_planner

The way A* does this is by introducing a heuristic function that guides the node expansion towards the goal. The heuristic function maps every node in the graph to a non-negative cost value. Heuristic functions (represented as the function H below) must satisfy two basic criteria:

  1. H(goal) = 0
  2. For any two adjacent nodes x and y:
    1. H(x) <= H(y) + d(x, y)
    2. d(x, y) = weight/length of the edge from x to y

The requirement that the cost be zero at the goal node helps address the issue described above: when we explore outwards from the starting node alone, we have no real sense of direction. If we have some extra value that tells us whether an explored node brings us closer to the goal, we improve our search time. Following these two criteria, we ensure that H(n) is always less than or equal to the length of the shortest path from node n to the goal, i.e. the heuristic never overestimates. Two of the most popular heuristics are the Euclidean distance and the Manhattan distance from any node, n, in the graph to the goal, g. We can, therefore, say, given any node, n, at (xₙ, yₙ) and a goal, g, at (xg, yg):

  1. Euclidean Distance
    H(n) = √((xₙ - xg)² + (yₙ - yg)²)
  2. Manhattan Distance
    H(n) = |xₙ - xg| + |yₙ - yg|
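
As a concrete illustration, here is a minimal Python sketch of both heuristics. The function names and call signatures are mine, purely for illustration, and nodes are assumed to be (x, y) tuples:

    import math

    def euclidean_heuristic(node, goal):
        """Straight-line distance from node (xn, yn) to goal (xg, yg)."""
        return math.sqrt((node[0] - goal[0]) ** 2 + (node[1] - goal[1]) ** 2)

    def manhattan_heuristic(node, goal):
        """Sum of the horizontal and vertical distances to the goal."""
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

On a 4-connected grid, where the robot can only move horizontally or vertically, the Manhattan distance is the natural choice; the Euclidean distance fits better when diagonal motion is allowed.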

With this heuristic now in place, we can update our previous Dijkstra algorithm as follows (a Python sketch of this pseudocode follows the list):

  • For each node, n, in the graph:
    • n.g = Infinity, n.f = Infinity
  • Create an empty list
    • start.g = 0
    • start.f = H(start)
    • add start to list
  • While list not empty:
    • Let current = node in the list with the smallest f value, remove current from list
    • If (current == goal) return Success
    • For each node, n, that is adjacent to current:
      • if n.g > (current.g + cost of edge from n to current)
        • n.g = current.g + cost of edge from n to current
        • n.f = n.g + H(n)
        • n.parent = current
        • Add n to list if it is not there already
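
For concreteness, below is a minimal, self-contained Python sketch of that pseudocode. It assumes the graph is a dictionary mapping each node to a dictionary of neighbor -> edge cost, and that the heuristic H is passed in as a function; it is a sketch of the idea, not the code from my repository:

    def a_star(graph, start, goal, h):
        """graph: dict node -> {neighbor: edge_cost}; h: heuristic function h(node)."""
        g = {n: float('inf') for n in graph}   # best known cost from start to each node
        f = {n: float('inf') for n in graph}   # g + heuristic
        parent = {}
        g[start] = 0
        f[start] = h(start)
        open_list = [start]

        while open_list:
            # Pick the open node with the smallest f value
            current = min(open_list, key=lambda n: f[n])
            open_list.remove(current)
            if current == goal:
                # Walk the parent pointers back to recover the path
                path = [goal]
                while path[-1] != start:
                    path.append(parent[path[-1]])
                return list(reversed(path))
            for neighbor, cost in graph[current].items():
                if g[neighbor] > g[current] + cost:
                    g[neighbor] = g[current] + cost
                    f[neighbor] = g[neighbor] + h(neighbor)
                    parent[neighbor] = current
                    if neighbor not in open_list:
                        open_list.append(neighbor)
        return None  # the open list emptied without reaching the goal

Scanning the open list with min() keeps the sketch close to the pseudocode; a real implementation would typically use a priority queue, which comes up again in the Dijkstra post below.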

A* Python Example

In keeping with my previous post on Dijkstra’s Algorithm, I have written a quick Python example to illustrate the above algorithm.

To get started, let’s clone the repo locally on your machine:

$ git clone https://github.com/atomoclast/realitybytes_blogposts.git

With that cloned locally, you should be able to run it by navigating to:

../realitybytes_blogposts/pathplanning/

and running:

$ python a_star.py

You should see a window open up similar to the one below showing an occupancy map. The darker blue color is free space, the lighter blues are walls/obstacles. The yellow square is the starting node, and the orange square is the goal node. You will see that we will begin searching around the starting node. Once a node is searched, it will become aqua blue, and the search will keep radiating out until the goal node is found. Once that is found, the shortest path is calculated and visualized in red.

astar_python

A* Algorithm in action for an Occupancy Map. It will search until the goal is found, then visualize the path in red.

In the above example, there is a large L-shaped wall that would slow the expansion of a Dijkstra approach: that method would have to wrap around the wall and search nearly the entire map before it found the goal. In contrast, our A* search grows towards the goal until it is reached, with very little effort spent on nodes that take us away from the ending node.

In the example map code, you can see that we define a test_grid variable as a list of lists where ‘0’ indicates an open node and ‘1’ indicates an obstacle:

    test_grid = [[0, 0, 0, 0, 0, 0],
                 [0, 1, 1, 1, 1, 0],
                 [0, 1, 0, 0, 0, 0],
                 [0, 1, 0, 0, 0, 0],
                 [0, 1, 0, 0, 0, 0],
                 [0, 1, 0, 0, 0, 0],
                 [0, 1, 0, 0, 0, 0],
                 [0, 1, 0, 0, 1, 0]]

We then define the start and end nodes as a list of [x, y] node positions (starting from 0). Using this basic format, you can challenge yourself to create different maps to test your A* Algorithms.
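
For example, with the grid above, a start in the top-left corner and a goal in the bottom-right corner could be written as follows (these variable names are illustrative, so check a_star.py for the exact names it expects):

    init = [0, 0]   # [x, y] indices of the starting node (top-left cell)
    goal = [7, 5]   # [x, y] indices of the goal node (bottom-right cell of test_grid)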

Another interesting piece is our heuristic, which records how far each node on the graph is from our goal node. For a map this size, precomputing that value for every node costs very little, but for larger robotic applications it usually makes more sense to compute the heuristic on demand inside the search loop, so we only pay for the nodes we actually expand.
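
For a small grid, that precomputation is just a nested loop over every cell; the sketch below builds a Manhattan-distance table, which mirrors the idea rather than the exact code in a_star.py:

    def build_heuristic(grid, goal):
        """Manhattan distance from every cell to the goal cell."""
        rows, cols = len(grid), len(grid[0])
        return [[abs(x - goal[0]) + abs(y - goal[1]) for y in range(cols)]
                for x in range(rows)]

    heuristic = build_heuristic(test_grid, goal)   # heuristic[x][y] == H of cell (x, y)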

What makes this A* implementation especially interesting is how the heuristic value is handled inside the search loop, as shown below:

for i in range(len(delta)):
    x2 = x + delta[i][0]
    y2 = y + delta[i][1]
    # Skip neighbors that fall outside the grid
    if len(self.grid) > x2 >= 0 <= y2 < len(self.grid[0]):
        # Only expand cells that are unvisited (closed == 0) and not obstacles (grid == 0)
        if closed[x2][y2] == 0 and self.grid[x2][y2] == 0:
            g2 = g + cost                       # running cost from the start node
            f = g2 + self.heuristic[x2][y2]     # f = g + H(n), as in the pseudocode
            open.append([f, g2, x2, y2])
            closed[x2][y2] = 1
            delta_tracker[x2][y2] = i           # remember which move reached this cell

In this loop, we search through every possible orthogonal direction from the current node and calculate the f value as described in the pseudocode above: the running cost g plus the heuristic cost. Once the cost is found for an explored node, we add it to a running list of possible “open” nodes, from which we pick the lowest f value on the next iteration, steadily pulling the search towards the goal.
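
The “pick the lowest f value” step is usually just a sort (or a heap) over the open list. With the [f, g2, x2, y2] layout used in the append above, a minimal version of that step could look like this (illustrative, not necessarily the exact lines in a_star.py):

    open.sort()                # f is the first element of each entry, so the lowest f sorts first
    f, g, x, y = open.pop(0)   # expand the most promising node on the next iteration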

That’s it. That’s what makes A* such a simple yet powerful algorithm!

Where to go from here:

If you are still curious about playing with some code, I challenge you to upgrade my Dijkstra’s code to A*, since I decided to go a different route with my example.

If you want to look at other examples of A* in Python, check out Python Robotics’ A* Example.

A* ROS Example

The ROS Navigation Stack is a set of software packages that make path planning and motion control (more on this in a coming post) as simple as possible for differential drive and holonomic wheeled mobile robots. There are multiple packages in this stack to help with everything from building up cost maps with costmap_2d to localization with packages such as amcl, an adaptive Monte Carlo Localization approach. Monte Carlo Localization uses a known map and sensor measurements to localize a robot with a high degree of confidence via particle filters (see my other post about Kalman Filters for some motivation on the state estimation problem; particle filters are just another type of filter). The most pertinent portion of the stack for grid-based path planning is the global_planner package.

nav_comic.png

Source: wiki.ros.org/navigation?action=AttachFIle&do=get&target=nave_comic.png

global_planner

This package uses an implementation of A* to provide a fast, interpolated global planner for navigation on a discretized map. Other implementations, such as a traditional Dijkstra planner, can be selected in the launch configuration, but the A* planner is generally faster.

While this provides a high-level plan through a map, the question remains of how the robot’s base frame must be controlled taking into account local obstacles such as corners of walls and stand-alone objects. The move_base package provides such a feature. Once the global planner searches and publishes a plan, this package will then take that plan and link the global/local planners with an actual controller. Below is a graphic showing how the move_base node subscribes and publishes to various topics for a differential wheel robot.

overview_tf.png

Looking directly at the source code on the ROS Navigation GitHub, you can see that the global planning package contains multiple source files, one of which is devoted solely to A*. That file implements the AStarExpansion class. The real planner lives in AStarExpansion::calculatePotentials(), which takes a starting pose and an end pose, then iterates over a potential field, adding each node it expands to a queue until an expanded node reaches the goal. If it never reaches the goal, it returns false. (The heuristic itself is applied inside the add() helper, which pushes each neighbor onto the heap with a priority that folds in its distance to the goal.)

bool AStarExpansion::calculatePotentials(unsigned char* costs, double start_x, double start_y, double end_x, double end_y,
                                        int cycles, float* potential) {
    // Seed the priority queue with the start cell at priority 0
    queue_.clear();
    int start_i = toIndex(start_x, start_y);
    queue_.push_back(Index(start_i, 0));

    // Mark every cell's potential as "unexplored", except the start cell
    std::fill(potential, potential + ns_, POT_HIGH);
    potential[start_i] = 0;

    int goal_i = toIndex(end_x, end_y);
    int cycle = 0;

    while (queue_.size() > 0 && cycle < cycles) {
        // Pop the cell with the lowest priority (best f value) off the heap
        Index top = queue_[0];
        std::pop_heap(queue_.begin(), queue_.end(), greater1());
        queue_.pop_back();

        int i = top.i;
        if (i == goal_i)
            return true;    // the expansion reached the goal cell

        // Try to expand the four orthogonal neighbors (left/right, and the rows above/below)
        add(costs, potential, potential[i], i + 1, end_x, end_y);
        add(costs, potential, potential[i], i - 1, end_x, end_y);
        add(costs, potential, potential[i], i + nx_, end_x, end_y);
        add(costs, potential, potential[i], i - nx_, end_x, end_y);

        cycle++;
    }

    return false;
}

Example with ROS/Stage:

Using our trusty Turtlebot simulator and a previously explored map, we can have the Turtlebot plan a global path and then navigate to that goal on the map, with a Monte Carlo Localization node providing a particle-filter estimate of where the robot is in the map and a controller actually moving the robot’s base frame.

To install the package, we will run:

$ sudo apt-get install ros-[YOURDISTRO]-turtlebot-stage

Then once that is installed, run:

$ rospack profile # Update our ROS package lists to find our new package.
$ roslaunch turtlebot_stage turtlebot_in_stage.launch

If everything loads successfully, you should be prompted with an RViz window similar to this:

a_star_ros

RViz window showing Turtlebot Navigating towards a goal.

In the gif above, the robot was given a goal pose: an (x, y) position in space and an orientation, shown by the red arrow, which the local planner and controller then act on. The faint red line is the output of the A* global planner, and the square grid around the robot shows the local planner following the desired trajectory from the global planner.

To give our robot a goal, we click the button labeled “2D Nav Goal”, click anywhere on the map, and drag in the direction we want the final orientation to face. From there, you should see the robot move towards the goal as shown below:

A* is an incredibly powerful algorithm with applications not only in path planning for mobile robotics but also in general Artificial Intelligence, including the problem of semantic parsing using stochastic grammars in Natural Language Processing! It really makes you appreciate how simple ideas can find use in world-changing applications.

Until next time, happy robotting!

Liked what you saw? Subscribe for more cool how-tos, side projects, and robot stuff!

Have a comment, issue, or question? Leave a comment or message me on GitHub.

References

  1. Robotics: Motion Planning by CJ Taylor
  2. Artificial Intelligence for Robotics by Sebastian Thrun

Graph-Based Path Planning: Dijkstra’s Algorithm

In my last post, we built on kinematics by introducing Jacobians for robotic manipulators. This week, I’m shifting gears to start exploring mobile robots and path planning. Mobile robots are all around us now, from useful Roombas to the rise of autonomous cars, and they are becoming more and more common in our everyday lives. These robots, unsurprisingly, move around and must use an array of sensors to perceive their environments, even accounting for how those environments may change. Robots must be able to achieve specific goals, and that requires them to navigate to specific areas of interest. So, how do we go about giving robots this ability? Setting aside more advanced problems like SLAM (Simultaneous Localization and Mapping), which requires a robot to construct a map while estimating (guesstimating) its own position in space, there are a number of well-known algorithms for planning the shortest path from a start location to a goal location in an (assumed) static map. So let’s get into it!

Amazon’s (previously Kiva Systems’) mobile warehouse robots must be able to plan a path from their current location to a goal location in order to move goods efficiently from storage to shipping. Note that a more collaborative level is also at work to ensure robots aren’t colliding, but that is out of the scope of this post.

What’s a Graph?

To answer the question of what a graph is, we turn to a branch of mathematics called Graph Theory, which defines a graph as a structure that relates a set of vertices (or nodes) through edges. We can therefore say that a graph is an ordered pair G = (V, E), with V being the set of vertices and E the set of edges that connect them.
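
In code, one common way to store such a graph is an adjacency list: a dictionary that maps each vertex to its neighbors and the weights of the connecting edges. A small, made-up example in Python:

    # Each key is a vertex; each value maps a neighboring vertex to the weight of that edge.
    graph = {
        'A': {'B': 4, 'C': 1},
        'B': {'A': 4, 'D': 2},
        'C': {'A': 1, 'D': 5},
        'D': {'B': 2, 'C': 5},
    }

This representation will come in handy below when we sketch Dijkstra’s Algorithm in Python.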

This is a simple example of a graph and shows the relationship between vertices/nodes via edges. Source: bcu.copsewood.net/dsalg/graph/diag1.jpg

This definition is purposefully general, as graphs can be used to model various data structures, for instance an Octree. Octrees have applications in 3D computer graphics, spatial indexing, nearest-neighbor searches, finite element analysis, and state estimation.

An Octree is a tree-based data structure with a root node that branches out into eight child nodes, each of which can branch out further still. Source: https://en.wikipedia.org/wiki/Octree

In robotics especially, octrees have been leveraged via the creation of the OctoMap Library, which implements a 3D occupancy grid mapping approach. This provides data structures and mapping algorithms that not only assist in mobile robot navigation and mapping, but also helps in path planning for manipulators in cluttered environments.

An example octomap of an outdoor environment. A map this large can be visualized efficiently thanks to the tree based structure of an Octomap. Source: octomap.github.io/freiburg_outdoor_big.png

Dijkstra’s Algorithm

Dijkstra’s Algorithm is a fairly general way to find the shortest path between two vertices that are connected by edges. Let me present an interesting problem: say we are planning a trip with connecting flights and want to get from one city to another in the most efficient way. We can generate a graph like this:

An example graph relating cities and their distances from each other. Source: http://www.mcs.csueastbay.edu/~grewe/CS3240/Mat/Graph/CitySearch.JPG

Say that we wanted to plan a trip from Austin to Washington. There are several paths that we can take:

  1. Path A, 1560 miles: Austin -> Houston -> Atlanta -> Washington
  2. Path B, 2980 miles: Austin -> Dallas -> Denver -> Atlanta -> Washington

Notwithstanding the philosophical quandary regarding the merits of the journey versus the destination, if our final goal is Washington, Path A is clearly the more desirable choice. The above example was solved with relative ease by looking through the graph and summing the edge distances along each route from our starting node, Austin, to our destination node, Washington. One could also solve it by counting the number of nodes between the start and end points.

Another example of a graph is a grid-based structure as shown below. This grid denotes a starting point and a goal point with several different paths that would solve the problem. In this type of graph, each square is a node (vertex) that is connected to its four surrounding neighbors via edges that have a value of one.

grid1

An example grid that shows multiple solutions to a path planning problem. Source: C.J. Taylor, University of Pennsylvania

Again, we can solve the above path planning problem by counting how many steps it would take to reach the start position from the goal, or vice versa. While this is a real planning approach (called the Grassfire Algorithm), it is often tedious and computationally expensive, because every node must be visited to find the shortest path. That poses a serious problem if our map is made up of hundreds of nodes. (A code sketch of the idea follows the figure below.)

graph2

By counting the number of steps it would take to go from the goal to our starting point, we can find the shortest path. This is a very tedious process because each node must be visited.  Source: C.J. Taylor, University of Pennsylvania
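
Here is the counting idea from above as a compact Python sketch: a breadth-first sweep outward from the goal on a 4-connected grid, with 0 marking free cells and 1 marking obstacles (the same convention used in the occupancy maps later in this post):

    from collections import deque

    def grassfire(grid, goal):
        """Label every reachable cell with its step distance from the goal."""
        rows, cols = len(grid), len(grid[0])
        dist = [[None] * cols for _ in range(rows)]
        dist[goal[0]][goal[1]] = 0
        frontier = deque([tuple(goal)])
        while frontier:
            r, c = frontier.popleft()
            for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and dist[nr][nc] is None:
                    dist[nr][nc] = dist[r][c] + 1
                    frontier.append((nr, nc))
        return dist

Every reachable cell ends up labeled with its step distance to the goal; walking from the start to ever-smaller labels then traces out a shortest path.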

While the above two examples are easy to solve as humans, the real difficulty lies in solving these problems programmatically in a robust and efficient manner. Enter Dijkstra’s Algorithm, developed by Edsger W. Dijkstra, who was incredibly influential in the rise of the field of computer science from both theoretical and engineering perspectives. Dijkstra began working on the path planning problem in 1956 and published his solution, detailing his algorithm, three years later. The above example started at the goal and radiated outwards until it visited every node, counting each step along the way. Dijkstra proposed a similar solution, but one that starts at the first node and grows out towards the goal. Using “Big O” notation, we can say that the running time of this sort of exhaustive counting grows linearly with the number of nodes: O(|V|), where |V| is the number of nodes/vertices in the graph.

We can describe the algorithm in pseudocode as follows (a Python sketch follows the list):

  • For each node, n, in the graph:
    • n.distance = Infinity
  • Create an empty list
    • start.distance = 0, add start to list
  • While list not empty:
    • Let current = node in the list with the smallest distance, remove current from list
    • For each node, n, that is adjacent to current:
      • if n.distance > current.distance + length of edge from n to current
        • n.distance = current.distance + length of edge from n to current
        • Add n to list if it is not there already
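
As noted above, here is a direct Python translation of this pseudocode, reusing the adjacency-list format from the graph example earlier. It is a readable sketch, not the implementation in dijkstra.py:

    def dijkstra(graph, start, goal):
        """graph: dict vertex -> {neighbor: edge_length}. Returns the shortest path as a list."""
        dist = {v: float('inf') for v in graph}
        parent = {}
        dist[start] = 0
        unvisited = [start]
        while unvisited:
            # Take the list entry with the smallest tentative distance
            current = min(unvisited, key=lambda v: dist[v])
            unvisited.remove(current)
            if current == goal:
                break
            for neighbor, length in graph[current].items():
                if dist[neighbor] > dist[current] + length:
                    dist[neighbor] = dist[current] + length
                    parent[neighbor] = current
                    if neighbor not in unvisited:
                        unvisited.append(neighbor)
        if goal != start and goal not in parent:
            return None   # the goal was never reached
        # Rebuild the path by walking the parent pointers back from the goal
        path = [goal]
        while path[-1] != start:
            path.append(parent[path[-1]])
        return list(reversed(path))

Called on the small example graph from earlier, dijkstra(graph, 'A', 'D') returns ['A', 'C', 'D'], one of the two routes of total length 6.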

Using the above pseudocode, we can find the shortest path from Austin to Washington in a programmatic manner. Below is a great animation I found that really shows the algorithm in action:

File:DijkstraDemo.gif

Dijkstra’s Algorithm on a graph. Source: https://en.wikipedia.org/wiki/File:DijkstraDemo.gif

Computational Complexity of Dijkstra’s Algorithm

Dijkstra’s Algorithm is an improvement over the Grassfire method because it often reaches the goal node before having to search the entire graph; however, it does come with some drawbacks. On occasion, it may still search nearly the entire map before determining the shortest path, which can be both tedious and time-consuming depending on the number of nodes to be searched. We can estimate the worst-case performance using “Big O” notation: O(|V|²).

One can improve the efficiency of Dijkstra’s Algorithm by using a clever data structure known as a priority queue. It differs from a normal queue in that, instead of the standard “first in, first out” behavior, it pops values based on their associated priority.

Simple visualization of a priority queue. Source: http://pages.cs.wisc.edu/~vernon/cs367/notes/11.PRIORITY-Q.html

This simple data structure helps reduce how quickly the computational load grows. The resulting rate can be described as O((|E| + |V|) log |V|), where |E| and |V| are the number of edges and vertices (nodes) respectively.
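
In Python, the heapq module gives you exactly this behavior. Below is a sketch of Dijkstra’s inner loop rebuilt around it, using the same adjacency-list format as the earlier examples; it computes distances to every vertex rather than a single path, to keep the focus on the queue:

    import heapq

    def dijkstra_pq(graph, start):
        """Shortest distance from start to every vertex, using a binary-heap priority queue."""
        dist = {v: float('inf') for v in graph}
        dist[start] = 0
        heap = [(0, start)]                    # entries are (tentative distance, vertex)
        while heap:
            d, current = heapq.heappop(heap)   # pop the vertex with the smallest distance
            if d > dist[current]:
                continue                       # stale entry: a shorter route was already found
            for neighbor, length in graph[current].items():
                new_dist = d + length
                if new_dist < dist[neighbor]:
                    dist[neighbor] = new_dist
                    heapq.heappush(heap, (new_dist, neighbor))
        return dist

Because an entry is pushed every time a distance improves, outdated entries can linger in the heap; the d > dist[current] check simply skips them when they surface.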

Dijkstra’s Algorithm in Python

This week’s code snippet is purely Python and is a modified implementation of Dijkstra’s Algorithm that I used for a mobile robot. To get started, let’s clone the repo locally on your machine:

$ git clone https://github.com/atomoclast/realitybytes_blogposts.git

With that cloned locally, you should be able to run it by navigating to:

../realitybytes_blogposts/pathplanning/

and running:

$ python dijkstra.py

You should see a window open up similar to the one below showing an occupancy map. The darker blue color is free space, the lighter blues are walls/obstacles. The yellow square is the starting node, and the orange square is the goal node. You will see that we will begin searching around the starting node. Once a node is searched, it will become aqua blue, and the search will keep radiating out until the goal node is found. Once that is found, the shortest path is calculated and visualized in red!

There are three maps that will run in sequence if you just run the command above. The first two will iterate and find the goal, while the last one will fail because the goal node is blocked off from the start node via a wall. This was intentional to test my implementation and how it will handle such an event.

videotogif_2017.07.09_22.23.53

Dijkstra’s Algorithm in action for an Occupancy Map. It will search until the goal is found and then visualize the path in red.

Once you’ve run the out-of-the-box code, let’s really look into it and see what’s going on. The function itself takes five arguments and returns either a discrete path or a failure boolean:

def dijkstras(occupancy_map, x_spacing, y_spacing, start, goal)

The variable occupancy_map is an NxM numpy array of 0s and 1s that represents the locations of walls and obstacles, as well as free space, in the world. The variables x_spacing and y_spacing represent the spacing between adjacent columns and rows respectively. I mentioned that this was implemented on a real robot: when a map was explored and built up, we discretized the metric values into nodes on a graph using the following relationships:

x = (j + 0.5)*x_spacing
y = (i + 0.5)*y_spacing

The starting and goal positions are 3×1 numpy arrays of (x, y, theta), with theta being the orientation of the robot in space. We don’t need to worry too much about constraints on theta because we assume the robot is holonomic. Holonomic means that the robot is like a Roomba: it can rotate about its z-axis without having to move in the x or y directions. This makes planning much easier than for a nonholonomic robot, like a car.
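
Putting those pieces together, a call to the function could look like the sketch below. The map, spacings, and poses here are made up purely for illustration (and dijkstras is assumed to be imported from dijkstra.py); see the file itself for the maps and values it actually ships with:

    import numpy as np

    occupancy_map = np.array([[0, 0, 1],
                              [0, 0, 1],
                              [0, 0, 0]])
    x_spacing, y_spacing = 0.2, 0.2              # meters between adjacent columns / rows
    start = np.array([[0.1], [0.1], [0.0]])      # (x, y, theta) of the start pose
    goal  = np.array([[0.5], [0.5], [0.0]])      # (x, y, theta) of the goal pose

    path = dijkstras(occupancy_map, x_spacing, y_spacing, start, goal)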

Two very useful variables that I have defined in the Python file are DEBUG and VISUAL. If DEBUG is set to True, the print statements within the code will be displayed in the terminal on every iteration. If VISUAL is set to True, the animation windows will be created and displayed as the algorithm runs. Use them as you see fit while playing around with maps and adjusting the start and goal positions. You can also change the dimensions and “landscape” of the occupancy maps that you pass to the algorithm. Again, I really urge you to take the time to read through the code and the comments, and to play around with Dijkstra’s Algorithm to get a good feel for it!

example_cropped.png

Setting the DEBUG and VISUAL flags to True will show the animation as well as the data flow behind the scenes on your screen.

In future posts, we will look at an extension of Dijkstra’s Algorithm called A* (pronounced “A Star”) and how it uses one relatively simple trick to make Dijkstra a little more efficient.

Until next time, happy robotting!

Liked what you saw? Subscribe for more cool how-tos, side projects, and robot stuff!

Have a comment, issue, or question? Leave a comment or message me on GitHub.

Featured Image Source: http://www.astro.mech.tohoku.ac.jp/~ishigami