This paper presents an approach to scale-invariant image matching. Given two images, the goal is to find correspondences between similar subimages, e.g., those depicting similar objects, even when the objects are captured under large variations in scale. As in previous work, similarity is defined in terms of geometric, photometric, and structural properties of regions, and images are represented by segmentation trees that capture region properties and their recursive embedding. Matching two regions thus amounts to matching their corresponding subtrees. Scale invariance is aimed at overcoming two challenges in matching images of similar objects. First, the absolute values of many object image properties may change with scale. Second, some of the finest details visible in the high-zoom image may not be visible in the coarser-scale image. We normalize the region properties associated with one subtree to the corresponding properties of the root of the other subtree. This makes the scales of the objects represented by the two subtrees equal, and also decouples this scale from that of the entire scene. We also weight the contribution of each subregion to the total similarity of its parent region by the relative area the subregion occupies within the parent. This reduces the penalty for failing to match fine-resolution details present in only one of the two regions, since that penalty is down-weighted by the relatively small area of these details. Our experiments demonstrate invariance of the proposed algorithm to large changes in scale.
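The two mechanisms described above can be illustrated with a minimal sketch. The `Region` class, the greedy child pairing, and the exponential property-similarity kernel are simplifying assumptions for illustration, not the paper's actual matching algorithm; the sketch only demonstrates (i) normalizing one subtree's scale-dependent properties by the ratio of root areas and (ii) weighting each child's contribution by its relative area within the parent:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Region:
    props: tuple                 # scale-dependent properties (hypothetical, e.g. area, perimeter)
    area: float                  # absolute region area in pixels
    children: list = field(default_factory=list)

def _prop_sim(pa, pb):
    # Similarity of two property vectors in (0, 1]; the Gaussian-like
    # kernel is an arbitrary illustrative choice.
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)))
    return math.exp(-d)

def subtree_sim(a, b, scale=None):
    # Fix the normalizing factor once, at the roots of the two matched
    # subtrees; this equalizes the scales of the two objects and decouples
    # them from the scale of the surrounding scene.
    if scale is None:
        scale = a.area / b.area
    # Assume all properties scale linearly with area (a simplification).
    pb = tuple(p * scale for p in b.props)
    s = _prop_sim(a.props, pb)
    # Greedily pair children by best subtree similarity (a stand-in for a
    # proper tree-matching assignment).
    used = set()
    for ca in a.children:
        best_j, best_s = None, 0.0
        for j, cb in enumerate(b.children):
            if j in used:
                continue
            sj = subtree_sim(ca, cb, scale)
            if sj > best_s:
                best_j, best_s = j, sj
        if best_j is not None:
            used.add(best_j)
        # Weight the child's contribution by the relative area it occupies
        # in its parent; unmatched fine details contribute 0 but carry a
        # small weight, so their penalty is small.
        s += (ca.area / a.area) * best_s
    return s
```

With this normalization, uniformly rescaling all areas and properties of one tree leaves the score unchanged, which is the scale invariance the abstract claims:

```python
a  = Region(props=(4.0,),  area=4.0,  children=[Region(props=(1.0,), area=1.0)])
b  = Region(props=(4.0,),  area=4.0,  children=[Region(props=(1.0,), area=1.0)])
b4 = Region(props=(16.0,), area=16.0, children=[Region(props=(4.0,), area=4.0)])
assert abs(subtree_sim(a, b) - subtree_sim(a, b4)) < 1e-9
```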