Network operations that support tactical missions are often characterized by evolving information that must be delivered over bandwidth-constrained communication networks and presented to a social/cognitive network of users with limited attention spans and high stress. Most past research on data dissemination has examined syntactic redundancy between data items (e.g., common bit strings, entropy coding, and compression), but only limited work has examined the problem of reducing semantic redundancy with the goal of providing higher-quality information to end users. In this paper we propose to measure semantic redundancy in large-volume text streams using online topic models and opinion analysis (e.g., topic = Location X and opinion = possible-hazard (+) vs. safe-zone (−)). By suppressing semantically redundant content, one can better utilize bottleneck resources such as bandwidth on a resource-constrained network or the attention time of a human user. However, unlike syntactic redundancy (e.g., lossless compression, or lossy compression with small reconstruction errors), a semantic-redundancy-based approach faces the challenge of dealing with larger inaccuracies (e.g., the false-positive and false-negative probabilities of an opinion classifier). This paper seeks to quantify the effectiveness of a semantic-redundancy-based approach (over its syntactic counterparts) as a function of such inaccuracies, and presents a detailed experimental evaluation using realistic information flows collected from an enterprise network with about 1500 users.
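The core suppression idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each message has already been labeled by hypothetical topic and opinion classifiers, and it forwards only the first message seen for each (topic, opinion) pair. Misclassification (the false-positive/false-negative probabilities mentioned above) is exactly what makes this semantically lossy.

```python
def suppress_semantic_duplicates(messages):
    """Deliver only the first message for each (topic, opinion) pair.

    `messages` is an iterable of (topic, opinion, text) tuples, where the
    topic and opinion labels are assumed to come from upstream classifiers
    (names and labels here are illustrative, not from the paper).
    """
    seen = set()        # (topic, opinion) pairs already delivered
    delivered = []
    for topic, opinion, text in messages:
        key = (topic, opinion)
        if key not in seen:
            # New semantic content: deliver it downstream.
            seen.add(key)
            delivered.append(text)
        # Otherwise: semantically redundant, suppress to save
        # bandwidth and user attention.
    return delivered

# Hypothetical labeled stream for Location X.
stream = [
    ("Location X", "possible-hazard", "Hazard reported near Location X"),
    ("Location X", "possible-hazard", "Hazard again reported at Location X"),
    ("Location X", "safe-zone", "Location X cleared, now considered safe"),
]
print(suppress_semantic_duplicates(stream))
# -> the first hazard report and the safe-zone report; the second
#    hazard report is suppressed as semantically redundant.
```

A syntactic approach (e.g., compression) would still transmit both hazard reports, since their bit strings differ; the semantic approach suppresses the second at the cost of possible classifier error.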