With the increasing popularity of online video platforms (e.g., YouTube, Vimeo), the spread of hateful videos and the lack of rigorous hateful content moderation have become a critical issue. This paper focuses on the problem of identifying hatred-vulnerable videos online: videos that themselves contain no hateful content but unexpectedly trigger hateful comments from the audience. Simply treating hatred-vulnerable videos as hateful and removing them from sharing platforms is suboptimal, as doing so would discourage their uploaders from sharing valid and informative videos in the future. Conversely, treating hatred-vulnerable videos as hatred-free gives hateful users undesirable opportunities to spread toxic comments and extremist ideology. In this paper, we develop VulnerCheck, an end-to-end supervised learning approach that effectively distinguishes hatred-vulnerable videos from hateful and hatred-free ones by exploiting the structural and semantic features of the audience's comment networks. VulnerCheck is content-agnostic in the sense that it does not analyze the content of the video itself and is therefore robust against sophisticated content creators who craft hateful videos to bypass current content censorship. We evaluate VulnerCheck on a real-world dataset collected from YouTube. Results demonstrate that our scheme is both effective and efficient in identifying hatred-vulnerable videos and significantly outperforms state-of-the-art baselines.