TY - GEN
T1 - Bridging semantics with physical objects using augmented reality
AU - Sun, Yu
AU - Bae, Hyojoon
AU - Manna, Sukanya
AU - White, Jules
AU - Golparvar-Fard, Mani
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/2/26
Y1 - 2015/2/26
N2 - Today's industries place great emphasis on data-driven and data engineering technologies, generating a tremendous amount of structured and unstructured data across different domains. As a result, semantic information is implicitly available in knowledge bases, mainly in the form of data descriptions, and needs to be extracted automatically to better serve users' needs. How to deliver these data to end-users effectively and efficiently, however, poses a new challenge, particularly in the context of big data and mobile computing. Traditional search-based approaches may suffer from degraded user experience or poor scalability. It is essential to understand meaning (i.e., semantics) rather than rely on pure keyword matching, which can lead to entirely spurious results of no relevance. In this paper, we present the use of an Augmented Reality (AR) solution to bridge existing semantic data and information with real-world physical objects. The AR solution, HD4AR (Hybrid 4-Dimensional Augmented Reality), has been commercialized as a startup company that provides AR services to industry partners, associating valuable semantic information with objects in specific contexts so that users can easily retrieve the data by snapping a photo and having the semantic information rendered on the photo accurately and quickly. Following a brief overview of the technology, we present a few use cases as well as lessons learned from the industry collaboration experience.
AB - Today's industries place great emphasis on data-driven and data engineering technologies, generating a tremendous amount of structured and unstructured data across different domains. As a result, semantic information is implicitly available in knowledge bases, mainly in the form of data descriptions, and needs to be extracted automatically to better serve users' needs. How to deliver these data to end-users effectively and efficiently, however, poses a new challenge, particularly in the context of big data and mobile computing. Traditional search-based approaches may suffer from degraded user experience or poor scalability. It is essential to understand meaning (i.e., semantics) rather than rely on pure keyword matching, which can lead to entirely spurious results of no relevance. In this paper, we present the use of an Augmented Reality (AR) solution to bridge existing semantic data and information with real-world physical objects. The AR solution, HD4AR (Hybrid 4-Dimensional Augmented Reality), has been commercialized as a startup company that provides AR services to industry partners, associating valuable semantic information with objects in specific contexts so that users can easily retrieve the data by snapping a photo and having the semantic information rendered on the photo accurately and quickly. Following a brief overview of the technology, we present a few use cases as well as lessons learned from the industry collaboration experience.
UR - http://www.scopus.com/inward/record.url?scp=84925626433&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84925626433&partnerID=8YFLogxK
U2 - 10.1109/ICOSC.2015.7050832
DO - 10.1109/ICOSC.2015.7050832
M3 - Conference contribution
AN - SCOPUS:84925626433
T3 - Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing, IEEE ICSC 2015
SP - 344
EP - 349
BT - Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing, IEEE ICSC 2015
A2 - Kankanhalli, Mohan S.
A2 - Li, Tao
A2 - Wang, Wei
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 9th IEEE International Conference on Semantic Computing, IEEE ICSC 2015
Y2 - 7 February 2015 through 9 February 2015
ER -