Demonstrating firefighting operations in search and rescue missions through videos is a common approach to in-classroom firefighter training. Unfortunately, traditional 2D cameras have a fundamental weakness: they capture only a narrow field of view and miss much of the information in the firefighter's surroundings, which may be a matter of life and death in certain situations. In this paper, we propose a system that combines the advantages of 360° videos and deep learning to automatically detect important objects in the panoramic scene, assisting firefighting instructors in classroom teaching scenarios. Specifically, we summarize the salient objects and events relevant to firefighting through an interview with an experienced firefighting instructor. Leveraging this knowledge, we investigate the detection of firefighting objects in 360° videos through a transfer learning approach. We report insightful results for object detectors trained on generic objects and 2D videos and discuss the next steps in designing a customized object detector.