The generation of 3D solid models from point cloud data can be a time-consuming, labor-intensive, and error-prone task. Architects, engineers, and contractors take advantage of the information embedded in 3D solid models to manage infrastructure throughout its entire life cycle (planning, construction, operation, and maintenance). Several research groups have developed new methods with the objective of identifying the elements present in building scenes. Despite this progress, the existing methods require a significant amount of manual interaction and fail to represent structural or mechanical components when these are in close proximity to other components. To address these limitations, this paper presents a new method for semantically segmenting point cloud scenes containing structural and mechanical components (e.g., beams, ceilings, columns, floors, pipes, and walls). The point cloud is semantically segmented using a convolutional neural network (CNN) architecture that identifies the semantic category to which each point belongs. The method was tested on six real-world point clouds and achieved an average point accuracy of 90.23%. The experimental results demonstrate the robustness of the method for Scan2BIM applications.
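The reported average point accuracy can be read as the fraction of points whose predicted semantic label matches the ground truth. A minimal sketch of this metric, where the class names and the toy label arrays are illustrative assumptions rather than data from the paper:

```python
# Hypothetical per-point accuracy metric for semantic segmentation:
# the fraction of points whose predicted class matches the ground truth.
# Class names and example labels are illustrative, not from the paper.

CLASSES = ["beam", "ceiling", "column", "floor", "pipe", "wall"]

def point_accuracy(predicted, ground_truth):
    """Fraction of points whose predicted label equals the ground-truth label."""
    if len(predicted) != len(ground_truth):
        raise ValueError("label lists must be the same length")
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return correct / len(ground_truth)

# Toy example: 8 points, 6 of them predicted correctly.
gt   = ["wall", "wall", "floor", "pipe", "beam", "ceiling", "column", "wall"]
pred = ["wall", "floor", "floor", "pipe", "beam", "ceiling", "wall", "wall"]
print(f"{point_accuracy(pred, gt):.2%}")  # → 75.00%
```

In practice this per-point score would be computed over every point in each test scene and then averaged across the six scans to obtain a figure such as the 90.23% reported above.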