We consider the problem of high-level learning and decision making that enables multi-agent teams to autonomously tackle complex, large-scale missions over long time periods in the presence of actuator failures. Agent health, measured by the functionality of subsystems such as actuators, can change over the course of long-duration missions and may depend on environmental states. This variability in agent health introduces uncertainty that can result in inefficient plans and, in some cases, mission failure. The joint learning-planning problem becomes particularly challenging in a heterogeneous team, where each agent may exhibit a different correlation between its individual state and the state of the environment. We present a learning-based planning framework for heterogeneous multi-agent missions with health uncertainty that uses probabilistic models of agent health learned online. A decentralized incremental Feature Dependency Discovery algorithm enables agents to collaborate in efficiently learning representations of the uncertainty models across heterogeneous agents. The learned models of actuator failures allow our approach to plan in anticipation of potential health degradation. Through large-scale planning-under-uncertainty simulations and flight experiments with state-dependent actuator and fuel-burn-rate uncertainty, we show that our approach outperforms planners that do not account for heterogeneity between agents.