In medical imaging systems, task-based metrics have been advocated as a means of evaluating image quality, and mathematical observers are one method of computing such metrics. Although the Bayesian Ideal Observer (IO) is optimal by definition, it is frequently non-linear and intractable to compute. Linear approximations to the IO are therefore employed to obtain task-based statistics when computing the IO is infeasible. The Hotelling Observer (HO) is the linear observer that maximizes the signal-to-noise ratio (SNR) of the test statistic. However, the computational cost of the HO grows with image size and becomes prohibitive for large images. Channelized methods, which reduce the dimensionality of the data before the HO is computed, have therefore become popular; efficient channels can approximate the HO's performance at a significantly reduced computational cost. State-of-the-art channels have been learned with an autoencoder (AE) that encodes the data using a known signal template as the desired reconstruction, but this method is dependent on a high-quality estimate of the signal. An alternative to channels is to approximate the test statistic directly with a feed-forward neural network (FFNN); however, this approach can overfit when the amount of training data is limited. In this work, a generalized method for learning channels utilizing an AE with dual losses (AEDL) is proposed. The AEDL framework jointly minimizes task-specific and reconstruction losses to learn a set of efficient channels, even when the number of training images is relatively small. Preliminary results indicate that the proposed network outperforms state-of-the-art methods on the selected imaging task, and that the AEDL framework exhibits less overfitting than the FFNN.
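To make the HO and its channelized variant concrete, the following is a minimal NumPy sketch of both observers on a synthetic signal-known-exactly (SKE) detection task. All names, the toy image size, and the random channel matrix are illustrative assumptions, not the learned channels proposed in this work; the HO template and SNR formulas themselves are the standard definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: flattened images of dimension p, n samples per class.
p, n = 16, 500
signal = np.zeros(p)
signal[p // 2] = 1.0  # assumed known signal template (SKE task)

# Signal-absent and signal-present images with correlated Gaussian noise.
A = 0.1 * rng.standard_normal((p, p)) + np.eye(p)
cov = A @ A.T
g_absent = rng.multivariate_normal(np.zeros(p), cov, size=n)
g_present = g_absent + signal

def hotelling_template(g0, g1):
    # w = K^{-1} (mean_present - mean_absent), K = average intra-class covariance.
    K = 0.5 * (np.cov(g0, rowvar=False) + np.cov(g1, rowvar=False))
    return np.linalg.solve(K, g1.mean(axis=0) - g0.mean(axis=0))

def snr(w, g0, g1):
    # SNR of the linear test statistic t(g) = w^T g over the two classes.
    t0, t1 = g0 @ w, g1 @ w
    return (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var() + t1.var()))

# Full Hotelling Observer in the p-dimensional image space.
w_ho = hotelling_template(g_absent, g_present)
snr_ho = snr(w_ho, g_absent, g_present)

# Channelized HO: project onto a small channel matrix T (p x c) first, then
# form the Hotelling template in the c-dimensional channel space. Random
# channels stand in here for the efficient (learned) channels discussed above.
c = 4
T = rng.standard_normal((p, c))
v0, v1 = g_absent @ T, g_present @ T
w_cho = hotelling_template(v0, v1)
snr_cho = snr(w_cho, v0, v1)

print(snr_ho, snr_cho)
```

Because the channelized observer is a linear observer restricted to the column space of `T`, its SNR can never exceed that of the full HO on the same data; the appeal of good channels is closing that gap while solving only a c-by-c system instead of a p-by-p one.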