Modern machine learning (ML) is one of the prevailing tools for big-data applications such as face attribute recognition. However, because the training data commonly exhibit an imbalanced distribution, well-trained models can suffer severe performance bias across demographic groups. Motivated by the observation that neural networks are extremely sensitive to adversarial examples, we argue that adversarial examples can be properly leveraged to counteract the imbalanced data distribution and to steer training convergence toward improved fairness. In other words, we propose to use adversarial examples to alleviate performance bias at its origin: the data source. In this paper, we present a novel adversarial training framework that generates adversarial features in the latent space to automatically balance the distribution of training features, and adjusts the deep classification layers of face attribute classifiers to be fairer. Extensive experiments on the CelebA face dataset show that our method boosts model fairness more effectively than state-of-the-art adversarial debiasing algorithms.
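The abstract does not specify how the adversarial features are generated; a minimal sketch of the general idea, perturbing a latent feature along the gradient of the classifier's loss (here a single FGSM-style step against a logistic head, with hand-derived gradients so no autodiff library is needed), might look like the following. All function and variable names (`fgsm_latent`, `w`, `b`, `eps`) are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_entropy(z, y, w, b):
    """Binary cross-entropy of a logistic head on latent feature z."""
    p = sigmoid(z @ w + b)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def fgsm_latent(z, y, w, b, eps=0.1):
    """One FGSM-style step in latent space: nudge the feature z in the
    direction that increases the classifier's loss, producing a harder
    'adversarial feature' that can augment under-represented groups."""
    p = sigmoid(z @ w + b)
    grad = (p - y) * w            # d(cross-entropy)/dz for a logistic head
    return z + eps * np.sign(grad)  # ascend the loss

# Toy demonstration with a random classifier and feature.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
z = rng.normal(size=8)   # original latent feature
y = 1.0                  # its ground-truth attribute label
z_adv = fgsm_latent(z, y, w, b)
```

In a full training loop, such perturbed features would be generated more often for minority demographic groups, re-balancing the effective feature distribution seen by the classification layers.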