TY - GEN
T1 - A scalable distributed RRT for motion planning
AU - Jacobs, Sam Ade
AU - Stradford, Nicholas
AU - Rodriguez, Cesar
AU - Thomas, Shawna
AU - Amato, Nancy M.
PY - 2013
Y1 - 2013
N2 - Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable, with results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine.
AB - Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable, with results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine.
UR - http://www.scopus.com/inward/record.url?scp=84887274173&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84887274173&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2013.6631304
DO - 10.1109/ICRA.2013.6631304
M3 - Conference contribution
AN - SCOPUS:84887274173
SN - 9781467356411
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 5088
EP - 5095
BT - 2013 IEEE International Conference on Robotics and Automation, ICRA 2013
T2 - 2013 IEEE International Conference on Robotics and Automation, ICRA 2013
Y2 - 6 May 2013 through 10 May 2013
ER -