Electroencephalography (EEG) is a widely used method for monitoring brain activity, and automating the EEG signal-processing pipeline is essential for real-time brain-computer interface (BCI) applications. EEG analysis demands substantial training and time to remove unwanted independent components (ICs), generated via independent component analysis (ICA), that correspond to artifacts. The considerable subject-wise variation across these components motivates a procedural way to identify and eliminate them. We propose DeepIC-virtual, a convolutional neural network (CNN) deep learning classifier that automatically identifies brain components among the ICs extracted from a subject's EEG data gathered while the subject is immersed in a virtual reality (VR) environment. This work examined the feasibility of deep learning techniques for automated IC classification on noisy, visually engaging upright-stance EEG data. We collected EEG data from six subjects standing upright in a VR testing setup that simulated pseudo-randomized variations in height and depth conditions and induced perturbations. A data set of 1432 IC representation images was generated and manually labelled by an expert as brain components or as one of six distinct removable artifact classes. A supervised CNN architecture categorized good (brain) versus bad (artifactual) ICs from generated images of their topographical maps, achieving a binary classification accuracy of 89.20% and an area under the curve of 0.93. Despite significant class imbalance, only 1 of the 57 brain ICs in the withheld testing set was misclassified as an artifact. Given the viability of automatic classification of artifactual ICs, we hope these results encourage clinicians to integrate BCI methods and neurofeedback to control anxiety and treat acrophobia.
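To illustrate the kind of model the abstract describes, the sketch below shows a minimal binary CNN that maps IC topographical-map images to a brain-versus-artifact probability. This is an assumed architecture written in PyTorch for illustration only; the actual DeepIC-virtual layer configuration, input resolution, and training procedure are not specified in this abstract.

```python
import torch
import torch.nn as nn

class ICClassifier(nn.Module):
    """Hypothetical CNN for binary brain-vs-artifact classification of
    IC topographical-map images (illustrative, not the paper's model)."""

    def __init__(self):
        super().__init__()
        # Two small conv blocks extract spatial features from the topomap image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Global pooling plus a single linear unit gives one probability per image.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ICClassifier()
batch = torch.randn(4, 3, 64, 64)  # four dummy RGB topomap images (assumed size)
probs = model(batch)               # shape (4, 1), values in [0, 1]
```

In practice, such a model would be trained with a class-weighted binary cross-entropy loss to counter the brain-versus-artifact imbalance the abstract mentions.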