Visualizing latent embeddings is a popular approach to explaining classification models, including deep neural networks. However, existing visualization methods such as t-distributed Stochastic Neighbor Embedding (t-SNE) and Uniform Manifold Approximation and Projection (UMAP) are typically applied as a post-processing step independent of the classification model, so the resulting visualization can be misaligned with it. In this paper, we propose ViVA, a novel method for semi-supervised Visualization via Variational Autoencoders. ViVA learns from both unlabeled and labeled data by jointly optimizing a visualization loss and a classification loss. As a parameterized neural-network model, ViVA can easily project new data into the same embedding space. Experiments on multiple challenging datasets show that ViVA achieves better visualization quality and classification accuracy than several visualization baselines, including t-SNE and UMAP.
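The joint objective described above can be sketched as follows. This is a minimal illustrative sketch, not ViVA's actual implementation: the function name, the linear weighting of the two terms, and the use of a VAE-style reconstruction-plus-KL term as the visualization loss are all assumptions for illustration.

```python
import numpy as np

def joint_loss(recon_err, kl, logits, labels, weight=1.0):
    """Illustrative semi-supervised objective (an assumption, not ViVA's code):
    a VAE-style visualization loss on all samples plus a weighted
    classification loss on the labeled subset only."""
    # Unsupervised term: reconstruction error plus KL divergence, all samples.
    vae_loss = recon_err.mean() + kl.mean()
    if labels.size == 0:  # batch contains no labeled data
        return vae_loss
    # Softmax cross-entropy on the labeled samples.
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(labels.size), labels].mean()
    return vae_loss + weight * ce

# Toy batch: two samples, both labeled, near-perfect classifier logits.
recon = np.array([1.0, 1.0])
kl = np.array([0.0, 0.0])
logits = np.array([[10.0, 0.0], [0.0, 10.0]])
labels = np.array([0, 1])
total = joint_loss(recon, kl, logits, labels)
```

With confident, correct logits the classification term is near zero, so the total is close to the unsupervised term alone; increasing `weight` trades visualization fidelity against classification accuracy.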