TY - GEN
T1 - Identifying support surfaces of climbable structures from 3D point clouds
AU - Eilering, Anna
AU - Yap, Victor
AU - Johnson, Jeff
AU - Hauser, Kris
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/9/22
Y1 - 2014/9/22
N2 - This paper presents a probabilistic technique for identifying support surfaces like floors, walls, stairs, and rails from unstructured 3D point cloud scans. A Markov random field is employed to model the joint probability of point labels, which can take on a number of user-defined surface classes. The probability of a point depends on both the local spatial features of the point cloud around the point and the classifications of points in its neighborhood. The training step estimates joint and pairwise potentials from labeled point cloud datasets, and the prediction step aims to maximize the joint probability of all labels using a hill-climbing procedure. The method is applied to stair and ladder detection from noisy and partial scans using three types of sensors: a sweeping laser sensor, a time-of-flight depth camera, and a Kinect depth camera. The resulting classifier achieves approximately 75% accuracy and is robust to variations in point density.
AB - This paper presents a probabilistic technique for identifying support surfaces like floors, walls, stairs, and rails from unstructured 3D point cloud scans. A Markov random field is employed to model the joint probability of point labels, which can take on a number of user-defined surface classes. The probability of a point depends on both the local spatial features of the point cloud around the point and the classifications of points in its neighborhood. The training step estimates joint and pairwise potentials from labeled point cloud datasets, and the prediction step aims to maximize the joint probability of all labels using a hill-climbing procedure. The method is applied to stair and ladder detection from noisy and partial scans using three types of sensors: a sweeping laser sensor, a time-of-flight depth camera, and a Kinect depth camera. The resulting classifier achieves approximately 75% accuracy and is robust to variations in point density.
UR - http://www.scopus.com/inward/record.url?scp=84929223219&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84929223219&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2014.6907777
DO - 10.1109/ICRA.2014.6907777
M3 - Conference contribution
AN - SCOPUS:84929223219
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 6226
EP - 6231
BT - Proceedings - IEEE International Conference on Robotics and Automation
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2014 IEEE International Conference on Robotics and Automation, ICRA 2014
Y2 - 31 May 2014 through 7 June 2014
ER -