First responders operate in hazardous conditions with unpredictable risks. To better prepare for the demands of the job, first-responder trainees conduct training exercises that are recorded and later reviewed by instructors, who check the recordings for objects indicating risk (e.g., a firefighter with an unfastened gas mask). However, the traditional reviewing process is inefficient because the recordings are unanalyzed and situational awareness is limited. To improve the reviewing experience, a latency-aware Viewing and Query Service (VQS) should be provided. The VQS should support object search, which can be achieved using video object detection algorithms. Meanwhile, 360-degree cameras offer an omnidirectional view of the training environment. Yet this medium poses a major challenge: low-latency, high-accuracy 360-degree object detection is difficult due to the higher resolution and geometric distortion of 360-degree video. In this paper, we present Responders-360, a system architecture designed for 360-degree object detection. We propose a Dynamic Selection algorithm that optimizes computational resources while yielding accurate 360-degree object inference. Results on a unique dataset collected at a firefighting training institute show that the Responders-360 framework achieves a 4x speedup and a 25% reduction in memory usage compared with state-of-the-art methods.