A popular approach for single image super-resolution (SR) is to use scaled-down versions of the given image to build an internal training dictionary of pairs of low resolution (LR) and high resolution (HR) image patches, which is then used to predict the HR image. This self-similarity approach has the advantage of not requiring a separate external training database. However, due to their limited size, internal dictionaries are often inadequate for finding good matches for patches containing complex structures such as textures. Furthermore, the quality of the matches found is quite sensitive to factors like patch size (larger patches contain structures of greater complexity and may be difficult to match) and the dimensions of the given image (smaller images yield smaller internal dictionaries). In this paper we propose a self-similarity based SR algorithm that addresses the above-mentioned drawbacks. Instead of seeking similar patches directly in the image domain, we apply the self-similarity principle independently to each of a set of different sub-band images, obtained using a bank of orientation-selective band-pass filters. We thus allow the different directional frequency components of a patch to find matches independently, possibly at different image locations. Essentially, we decompose local image structure into component patches defined by different sub-bands, with the following advantages: (1) The sub-band image patches are simpler and therefore easier to match than the more complex textural patches from the original image. (2) The size of the dictionary defined by patches from the sub-band images is exponential in the number of sub-bands used, thus increasing the effective size of the internal dictionary. (3) As a result, our algorithm exhibits a greater degree of invariance to parameters like patch size and the dimensions of the LR image. We demonstrate these advantages and show that our results are richer in textural content and appear more natural than those of several state-of-the-art methods.
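To make the two ingredients of the approach concrete, the following is a minimal sketch, not the authors' implementation: it decomposes an image into orientation-selective sub-band images using a Gabor filter bank (one plausible choice of orientation-selective band-pass filters), and builds an internal LR/HR patch dictionary from a downscaled copy of each sub-band. All filter parameters, patch sizes, and function names here are illustrative assumptions.

```python
# Sketch of (1) sub-band decomposition via an orientation-selective band-pass
# filter bank and (2) an internal LR/HR patch dictionary per sub-band.
# Parameters (kernel size, wavelength, patch size, scale) are assumptions.
import numpy as np
from scipy import ndimage

def gabor_kernel(size, theta, wavelength, sigma):
    """Real-valued Gabor kernel: an orientation-selective band-pass filter."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # coordinate along orientation
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    kernel = envelope * carrier
    return kernel - kernel.mean()                   # remove DC -> band-pass

def subband_decompose(image, n_orientations=4, size=15, wavelength=6.0, sigma=3.0):
    """Split `image` into orientation-selective sub-band images."""
    thetas = [k * np.pi / n_orientations for k in range(n_orientations)]
    return [ndimage.convolve(image, gabor_kernel(size, t, wavelength, sigma),
                             mode='reflect') for t in thetas]

def internal_dictionary(subband, scale=2, patch=5):
    """Pair each LR patch (from a downscaled copy) with its HR counterpart."""
    lr = ndimage.zoom(subband, 1.0 / scale, order=3)
    pairs = []
    for i in range(lr.shape[0] - patch + 1):
        for j in range(lr.shape[1] - patch + 1):
            lr_patch = lr[i:i + patch, j:j + patch]
            hr_patch = subband[i * scale:(i + patch) * scale,
                               j * scale:(j + patch) * scale]
            pairs.append((lr_patch, hr_patch))
    return pairs

# One dictionary per sub-band, so each directional frequency component of a
# patch can be matched independently, possibly at different image locations.
image = np.random.rand(64, 64)                      # stand-in for the input image
subbands = subband_decompose(image)
dictionaries = [internal_dictionary(sb) for sb in subbands]
print(len(subbands), "sub-bands,", len(dictionaries[0]), "LR/HR pairs each")
```

Because each sub-band contributes its own dictionary and a patch is reconstructed by combining matches drawn independently per sub-band, the space of representable patches grows combinatorially with the number of sub-bands, which is the mechanism behind advantage (2) above.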