With advances in data mining and machine learning, together with the availability of large-scale annotated data sets, an increasing number of off-the-shelf tools have become available for these tasks, such as the Stanford NLP Toolkit and the Caffe Model Zoo. However, many of these tasks are time-evolving in nature due to, e.g., the emergence of new features and changes in the class-conditional distribution of features. As a result, off-the-shelf tools cannot adapt to such changes and suffer from sub-optimal performance in the target application. In this paper, we propose a generic framework named AOT for adapting the outputs of an off-the-shelf tool to accommodate changes in the learning task. It considers two major types of change, namely label deficiency and distribution shift, and aims to maximally boost the performance of the off-the-shelf tool in the target domain with the help of a limited number of labeled target-domain examples. Furthermore, we propose an iterative algorithm to solve the resulting optimization problem, and we demonstrate the superior performance of the proposed AOT framework on text and image data sets.
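To make the setting concrete, the following is a minimal hypothetical sketch (not the AOT algorithm itself) of the general idea: the off-the-shelf tool is treated as a frozen black box whose raw output scores are recalibrated using only a handful of labeled target-domain examples. All names (`off_the_shelf_scores`, the synthetic data, the scalar scale/bias correction) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def off_the_shelf_scores(x):
    # Stand-in for a frozen pretrained tool whose decision boundary is
    # slightly shifted relative to the target domain (distribution shift).
    return x @ np.array([1.0, -0.5]) + 0.8

# Only a few labeled target-domain examples are available (label deficiency).
X = rng.normal(size=(20, 2))
y = (X @ np.array([1.0, -0.5]) > 0).astype(float)  # true target labels

# Learn a scalar scale `a` and bias `b` so that sigmoid(a*s + b) fits the
# target labels, i.e., adapt the tool's outputs rather than retrain the tool.
s = off_the_shelf_scores(X)
a, b = 1.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(a * s + b)))
    grad = p - y                   # d(cross-entropy loss) / d(logit)
    a -= 0.1 * np.mean(grad * s)   # gradient step on the scale
    b -= 0.1 * np.mean(grad)       # gradient step on the bias

raw = (s > 0).astype(float)                                 # tool as-is
adapted = (1.0 / (1.0 + np.exp(-(a * s + b))) > 0.5).astype(float)
print("raw accuracy:", np.mean(raw == y))
print("adapted accuracy:", np.mean(adapted == y))
```

In this toy setup the correction mainly learns to absorb the shifted bias of the black-box scores; the paper's framework addresses the same adaptation goal in a more general form, handling both label deficiency and distribution shift jointly.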