L2 learners of Mandarin have difficulty acquiring native-like pronunciation of nasal codas. To help them learn native-like pronunciation, we propose to develop targeted classifiers for automatic pronunciation error detection. In this paper, perceptual experiments with modified speech are designed to locate the exact position of the landmark of a nasal coda. Based on perceptual results from isolated words, we propose that information about nasal coda place of articulation is densest near a landmark at the center of the nasalized vowel. Landmarks were detected in a database of Mandarin speech produced by Japanese learners and classified as correct vs. incorrect using a support vector machine (SVM). Results show that the detection performance of the SVM+Landmark system is similar to that of a DNN-HMM+MFCC system. When the two systems are combined, a false rejection rate (FRR) of 4.6% is achieved at a DA of 83.9%. This performance is comparable to that of previously developed classifiers for 16 common Mandarin pronunciation errors.
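The abstract's core classification step can be illustrated with a minimal sketch. This is not the paper's implementation: the feature dimensions, cluster parameters, and training hyperparameters below are invented for illustration. It shows a linear SVM (trained with hinge-loss subgradient descent, Pegasos-style) labeling feature vectors extracted at the mid-nasalized-vowel landmark as correct vs. incorrect.

```python
# Illustrative sketch only: a linear SVM classifying landmark feature
# vectors as correct (+1) vs. incorrect (-1) pronunciation. The toy
# features and hyperparameters are assumptions, not from the paper.
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """X: (n, d) landmark features; y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                   # sample violates the margin
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                            # only apply regularization
                w -= lr * lam * w
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)

# Toy data standing in for acoustic cues measured at the landmark
# (e.g. formant and nasal-murmur measurements near the vowel center).
rng = np.random.default_rng(1)
X_ok  = rng.normal(+1.0, 0.5, size=(40, 4))  # "correct" productions
X_bad = rng.normal(-1.0, 0.5, size=(40, 4))  # "incorrect" productions
X = np.vstack([X_ok, X_bad])
y = np.array([1] * 40 + [-1] * 40)

w, b = train_linear_svm(X, y)
acc = (predict(X, w, b) == y).mean()
```

In practice, per-landmark features would be extracted from the learner database and the SVM's decisions scored against annotator labels; combining its scores with a DNN-HMM system's posteriors is one common way to fuse the two detectors.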