Although the quality of imaging devices, the accuracy of algorithms that construct 3D data, and the hardware available to render such data have all improved, the algorithms used to calibrate, reconstruct, and visualize such data remain difficult to use, extremely sensitive to noise, and too slow for interactive applications. In this paper, we describe a multi-camera system that produces a highly accurate (on the order of a centimeter) 3D reconstruction of an environment in real time (under 30 ms) and allows remote interaction between users. We address the aforementioned deficiencies with an overview of the technology and algorithms used to calibrate, reconstruct, and render objects in the system. Rather than dense point clouds, the algorithm produces partial 3D meshes, which are combined on the renderer into a unified model of the environment. This representation also yields high compression ratios when transferring the data to remote sites. We demonstrate the accuracy and speed of our approach on a variety of benchmarks and on data collected from our own system.