Many standard statistical procedures, such as linear regression and principal component analysis (PCA), are inconsistent in high-dimensional settings, where the number of unknown parameters exceeds the number of available observations. Consistency can then be achieved by polynomial-time methods only under additional structural assumptions on the data, e.g., sparsity or manifold constraints. In Robust PCA, the goal is to recover from noisy measurements the intrinsic data structure that lies in a low-dimensional subspace. One formulation of this problem relies on decomposing the data matrix into a low-rank component and a sparse error component, which embody the aforementioned structural constraints. This formulation yields a convex problem that recovers the exact components by minimizing a weighted combination of the nuclear norm of the low-rank component and the ℓ1-norm of the sparse noise component, subject to linear constraints. In this paper, we extend this approach by proposing methods that can be formulated within the general framework of majorization-minimization algorithms. The proposed methods perform at least as well as state-of-the-art schemes for Robust PCA, while allowing for larger rank and sparsity regimes of the component matrices under exact-recovery requirements. Convergence results are discussed, and efficient implementations based on the general Augmented Lagrange Multiplier framework are presented.
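To make the convex baseline concrete, the sketch below solves the weighted nuclear-norm plus ℓ1 program (minimize ‖L‖_* + λ‖S‖_1 subject to L + S = M) with a standard inexact Augmented Lagrange Multiplier iteration, alternating singular value thresholding on the low-rank component and soft thresholding on the sparse component. This is an illustration of the generic ALM scheme the abstract refers to, not the paper's proposed majorization-minimization methods; the function names, the default λ = 1/√max(m, n), and the penalty schedule (μ₀ = 1.25/‖M‖₂, growth factor 1.5) are conventional choices assumed here for the sketch.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Entrywise soft thresholding: proximal operator of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L and sparse S via inexact ALM.

    Minimizes ||L||_* + lam * ||S||_1 subject to L + S = M.
    """
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))      # common default weight
    mu = 1.25 / np.linalg.norm(M, 2)        # initial penalty (assumed schedule)
    mu_max, rho = mu * 1e7, 1.5
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                    # Lagrange multiplier matrix
    norm_M = np.linalg.norm(M, 'fro')
    for _ in range(max_iter):
        # Update low-rank component: prox of nuclear norm with step 1/mu.
        L = svt(M - S + Y / mu, 1.0 / mu)
        # Update sparse component: prox of weighted l1-norm with step lam/mu.
        S = shrink(M - L + Y / mu, lam / mu)
        # Dual ascent on the constraint L + S = M, then increase the penalty.
        R = M - L - S
        Y = Y + mu * R
        mu = min(rho * mu, mu_max)
        if np.linalg.norm(R, 'fro') < tol * norm_M:
            break
    return L, S
```

A typical usage is to corrupt a synthetic low-rank matrix with sparse gross errors and check that both components are recovered; in the regimes covered by the exact-recovery theory (small rank, modest corruption fraction), the residual and the recovery error both fall well below the entry magnitudes.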