Human speech recognition error rates are 30 times lower than machine error rates. Psychophysical experiments have pinpointed a number of specific human behaviors that may contribute to accurate speech recognition, but previous attempts to incorporate such behaviors into automatic speech recognition have often failed because the resulting models could not be easily trained from data. This paper describes Bayesian learning methods for computational models of human speech perception. Specifically, the linked computational models proposed in this paper seek to imitate the following human behaviors: the independence of distinctive-feature errors, the perceptual magnet effect, the vowel sequence illusion, sensitivity to energy onsets and offsets, and the redundant use of asynchronous acoustic correlates. The proposed models differ from many previous computational psychological models in that the desired behavior is learned from data using a constrained optimization algorithm (expectation maximization, EM), rather than being coded into the model as a series of fixed rules.
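To make the EM-based training concrete, the following is a minimal sketch, not the authors' code, of EM for a one-dimensional Gaussian mixture, a simple stand-in for the kind of learnable category model invoked for effects such as the perceptual magnet effect. It assumes only NumPy; the function name `fit_gmm`, its parameters, and the toy formant-like data are all illustrative assumptions, not drawn from the paper.

```python
# Minimal EM sketch for a 1-D Gaussian mixture (illustrative only;
# not the paper's model). Assumes NumPy; all names are hypothetical.
import numpy as np

def fit_gmm(x, k=2, n_iters=50, seed=0):
    """Fit a k-component 1-D Gaussian mixture to samples x via EM."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    # Initialize mixture weights, means, and variances.
    w = np.full(k, 1.0 / k)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())
    for _ in range(n_iters):
        # E-step: posterior responsibility of each component per sample.
        log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
                 - 0.5 * np.log(2 * np.pi * var) + np.log(w))
        log_p -= log_p.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the soft assignments.
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Toy usage: two synthetic clusters loosely evoking vowel formants
# (hypothetical data, for illustration only).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(300, 30, 200), rng.normal(700, 50, 200)])
w, mu, var = fit_gmm(x, k=2)
print("weights:", w, "means:", mu)
```

The point of the sketch is the contrast the abstract draws: the category structure (weights, means, variances) is estimated from data by iterating the E- and M-steps, rather than being fixed in advance as hand-coded rules.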