TY - GEN
T1 - Demo: Satellites in our pockets
T2 - 10th International Conference on Mobile Systems, Applications, and Services, MobiSys'12
AU - Manweiler, Justin
AU - Jain, Puneet
AU - Roy Choudhury, Romit
PY - 2012
Y1 - 2012
AB - We attempt to localize a distant object by looking at it through a smartphone. As an example use case, while driving on a highway entering New York, we want to look at one of the skyscrapers through the smartphone camera and compute its GPS location. While the problem would have been far more difficult five years ago, the growing number of sensors on smartphones, combined with advances in computer vision, has opened up important opportunities. We harness these opportunities through a system called the Object Positioning System (OPS) [1], which achieves reasonable localization accuracy. Our core technique uses computer vision to create an approximate 3D structure of the object and camera, and applies mobile phone sensors to scale and rotate the structure to its absolute configuration. Then, by solving (nonlinear) optimizations on the residual (scaling and rotation) error, we ultimately estimate the object's GPS position. We present a demonstration of OPS, a system to appear in the MobiSys 2012 main conference. The user is expected to bring the object of interest near the center of her viewfinder and take as few as four photographs. GPS and compass readings are also recorded during the process. The photographs can be taken a few steps apart from each other in any direction. OPS uses Structure from Motion to extract keypoints across the photographs and construct a 3D structure composed of the object and the camera locations. Finally, OPS minimizes errors in the GPS and compass readings with the help of the 3D structure to converge on the object's location.
KW - augmented reality
KW - localization
KW - structure from motion
UR - http://www.scopus.com/inward/record.url?scp=84864372345&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84864372345&partnerID=8YFLogxK
U2 - 10.1145/2307636.2307681
DO - 10.1145/2307636.2307681
M3 - Conference contribution
AN - SCOPUS:84864372345
SN - 9781450313018
T3 - MobiSys'12 - Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services
SP - 455
BT - MobiSys'12 - Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services
Y2 - 25 June 2012 through 29 June 2012
ER -