Accurate and rapidly produced 3D models of the built environment, generated from point cloud data, can be used in a variety of engineering applications. When performed manually, this modeling task is time consuming and labor intensive. In response, several research groups have recently developed methods that segment point cloud data into distinct subsets based on appearance and geometric information and populate the scenes with surface objects. However, these methods still result in over-segmentation or require significant fine-tuning to produce acceptable results, particularly where building systems are in close proximity to architectural or structural elements. To overcome these limitations, this paper presents a new procedure that takes as input a point cloud segmented at a user-desired level of abstraction and, by considering neighborhood context in a Markov Random Field optimization framework, labels each distinct subset with semantic (wall, ceiling, floor, pipe) and geometric (horizontal, vertical, cylindrical) categories. Experimental results on real-world point cloud data show that the method achieves state-of-the-art performance in semantic and geometric labeling. It is also shown how understanding semantic regions in point clouds, improved via geometric labels, can facilitate the generation of as-built 3D models from point cloud data.
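To make the labeling idea concrete, the following is a minimal, hypothetical sketch of MRF-based labeling of pre-segmented regions. It is not the paper's implementation: the segment features (`normal_z`, `height`), label set, cost functions, and smoothness weight are all illustrative assumptions, and inference is done with simple Iterated Conditional Modes rather than whatever optimizer the paper employs. The structure, however, mirrors the described idea: a unary term scores each segment-label pair, a pairwise term encourages adjacent segments to agree, and labels are chosen to minimize the total energy.

```python
# Illustrative sketch only: MRF labeling of pre-segmented point-cloud regions
# via Iterated Conditional Modes (ICM). Features, labels, and weights are
# hypothetical, not taken from the paper.

LABELS = ["wall", "ceiling", "floor", "pipe"]

def unary_cost(features, label):
    """Cost of assigning `label` to a segment, from hypothetical features:
    dominant-normal z-component (nz) and mean height above the floor."""
    nz, height = features["normal_z"], features["height"]
    if label == "ceiling":
        return (1 - abs(nz)) + max(0.0, 2.5 - height)  # horizontal, high up
    if label == "floor":
        return (1 - abs(nz)) + height                  # horizontal, near ground
    if label == "wall":
        return abs(nz)                                 # vertical surface normal
    return features.get("curvature_penalty", 1.0)      # pipe: cylindrical fit

def pairwise_cost(la, lb, w=0.5):
    """Potts smoothness term: adjacent segments prefer matching labels."""
    return 0.0 if la == lb else w

def icm_label(segments, adjacency, iters=10):
    """Greedy MRF inference: repeatedly move each segment to the label that
    minimizes its local energy given its neighbors' current labels."""
    labels = {s: min(LABELS, key=lambda l: unary_cost(f, l))
              for s, f in segments.items()}
    for _ in range(iters):
        changed = False
        for s, f in segments.items():
            def energy(l):
                return unary_cost(f, l) + sum(
                    pairwise_cost(l, labels[n]) for n in adjacency.get(s, ()))
            best = min(LABELS, key=energy)
            if best != labels[s]:
                labels[s], changed = best, True
        if not changed:
            break
    return labels
```

For example, three segments with near-vertical and near-horizontal normals, connected in an adjacency graph, would receive ceiling, floor, and wall labels respectively, with the pairwise term resolving ties in favor of locally consistent labelings.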