This paper explores an emerging method, with deep roots in machine learning and game theory, that has been applied to a number of signal processing applications. This competitive-algorithm framework is particularly attractive for applications in which there is a large degree of uncertainty in the statistics and behavior of the signals of interest. Problems of prediction, equalization, and adaptive filtering can be cast, in a manner intimately related to repeated game playing, as a game between a player, who can observe the outputs of a large class of competing algorithms, and an adversarial nature that produces the observations. In such a formulation, the player attempts to perform as well as the best "expert" in this class, while nature is free to select the outcomes so as to defeat the player. Minimax strategies for the player arise naturally, with corresponding performance bounds that can be obtained with relatively few assumptions or constraints on the outcomes. This paper reviews the history of these methods, together with a number of robust adaptive filtering and prediction techniques that have been developed. Examples of competition classes comprising a finite number of adaptive filtering algorithms are considered, along with examples of continuous classes of competing algorithms. Methods for incorporating time variation and nonlinearity explicitly into the competition classes are also described.
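As a concrete illustration of the expert-advice game described above, the sketch below implements the exponentially weighted average forecaster, one standard player strategy for competing against a finite class of experts under squared loss. The experts, outcomes, and learning rate are illustrative assumptions for this sketch, not the specific algorithms surveyed in the paper.

```python
import math

def exponential_weights(expert_preds, outcomes, eta):
    """Exponentially weighted average forecaster for prediction
    with expert advice under squared loss.

    expert_preds: list of per-round lists, expert_preds[t][i] is
                  expert i's prediction at round t (values in [0, 1]).
    outcomes:     list of true outcomes y_t in [0, 1], chosen by "nature".
    eta:          learning rate (eta = 1/2 suits squared loss on [0, 1]).

    Returns (player's cumulative loss, best expert's cumulative loss).
    """
    n_experts = len(expert_preds[0])
    weights = [1.0] * n_experts
    player_loss = 0.0
    expert_losses = [0.0] * n_experts
    for preds, y in zip(expert_preds, outcomes):
        w_sum = sum(weights)
        # Player predicts the weight-averaged expert prediction.
        y_hat = sum(w * p for w, p in zip(weights, preds)) / w_sum
        player_loss += (y_hat - y) ** 2
        # Down-weight each expert multiplicatively by its incurred loss.
        for i, p in enumerate(preds):
            loss = (p - y) ** 2
            expert_losses[i] += loss
            weights[i] *= math.exp(-eta * loss)
    return player_loss, min(expert_losses)
```

For squared loss on [0, 1] with eta = 1/2, the player's cumulative loss exceeds the best expert's by at most (ln N)/eta, a bound that holds for arbitrary (even adversarial) outcome sequences, in the spirit of the minimax guarantees discussed above.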