Video can be viewed as the integration of several heterogeneous media interwoven in a temporally close-coupled fashion. To support the retrieval of video segments under the query-by-example scenario, we propose in this paper a post-integration model that combines low-level media types to identify visually and auditorily similar video segments. The model also allows flexible relevance feedback from the user to further improve the speed and accuracy of searching a video database. As its name implies, the post-integration model first treats the models of the underlying media as independent processes and then combines the distance scores from each medium at a later stage. This decoupling allows the use of efficient algorithms to compare thousands of video clips quickly and accurately in a moderate-to-large video database. It also naturally facilitates improved user interaction through fast dynamic adjustment of the weights on the media distance scores over a succession of queries. We detail our models and dynamic weight adjustment schemes in this paper and demonstrate them with a working system on a large video database.
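The core idea can be sketched in a few lines: each medium produces its own distance score per clip, the scores are fused as a weighted sum, and relevance feedback re-tunes the weights between queries. The following is a minimal illustration, not the paper's exact scheme; the function names and the feedback update rule are assumptions for demonstration.

```python
import numpy as np

def combined_distance(media_distances, weights):
    """Late-fusion score: weighted sum of per-medium distances.

    media_distances: shape (n_clips, n_media), one distance score per
    clip per medium (e.g. color, motion, audio).
    weights: shape (n_media,), non-negative, summing to 1.
    """
    return media_distances @ weights

def update_weights(weights, relevant_distances, lr=0.5):
    """Illustrative relevance-feedback update (hypothetical, not the
    paper's scheme): shift weight toward media on which the clips the
    user marked relevant score small distances."""
    # A medium with small mean distance on relevant clips agrees with
    # the user's notion of similarity, so its weight is increased.
    mean_d = relevant_distances.mean(axis=0)
    agreement = 1.0 / (mean_d + 1e-9)
    new_w = (1 - lr) * weights + lr * agreement / agreement.sum()
    return new_w / new_w.sum()

# Toy example: 4 clips, 3 media streams.
rng = np.random.default_rng(0)
D = rng.random((4, 3))                 # per-medium distances to the query
w = np.full(3, 1 / 3)                  # start with uniform weights
scores = combined_distance(D, w)       # fused ranking scores
ranking = np.argsort(scores)           # smallest distance ranks first
w = update_weights(w, D[ranking[:2]])  # user feedback on top-2 clips
```

Because the per-medium distances are computed once and only the cheap weighted sum is recomputed, each feedback round costs only a re-weighting and re-sort, which is what makes the interaction loop fast.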