This paper addresses the display of multi-parameter medical image data, such as arises in MRI or multimodality image fusion. MRI or multimodality studies produce several different images of a given cross-section of the body, each providing different levels of contrast sensitivity between different tissues. The question then arises as to how to present this wealth of data to the diagnostician. While each of the different images may be misleading (as illustrated later by an example), in combination they may contain the correct information. Unfortunately, a human observer is not likely to be able to extract this information when presented with a parallel display of the distinct images. Given the sequential nature of detailed visual examination of a picture, a human observer is quite ineffective at integrating complex visual data from parallel sources. The development of a display technology that overcomes this difficulty by synthesizing a display method matched to the capabilities of the human observer is the subject of this paper.

The ultimate goal of diagnostic imaging is the detection, localization, and quantification of abnormality. An intermediate goal, which is the one we address, is to present the diagnostician with an image that will maximize his chances of correctly classifying different regions in the image as belonging to different tissue types. Our premise is that the diagnostician is able to bring to bear on the final analysis process all his knowledge and experience, which are difficult to capture in a computer program. This is often key to the detection of subtle and otherwise elusive features in the image. We therefore rule out the generation of an automatically segmented image, which would not only fail to include this knowledge but, by presenting him with a hard-labeled segmentation, would also deprive the diagnostician of the opportunity to exercise it.
Instead we concentrate on the fusion of the multiple images of the same cross-section into a single most informative grey-scale image.
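To make the fusion idea concrete, the sketch below shows one simple way to collapse several co-registered single-channel images into one grey-scale image: treating each pixel's values across the input images as a feature vector and projecting it onto the first principal component. This is only an illustrative assumption for exposition, not necessarily the method developed in this paper.

```python
import numpy as np

def fuse_to_grey(images):
    """Fuse a list of co-registered H x W images into one grey-scale
    image via projection onto the first principal component of the
    per-pixel feature vectors (illustrative sketch only)."""
    # Stack pixels as rows: shape (H*W, number_of_images).
    stack = np.stack([im.astype(float).ravel() for im in images], axis=1)
    centered = stack - stack.mean(axis=0)
    # First right singular vector = first principal direction.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    fused = centered @ vt[0]
    # Rescale to [0, 1] for display as a grey-scale image.
    span = fused.max() - fused.min()
    fused = (fused - fused.min()) / (span + 1e-12)
    return fused.reshape(images[0].shape)
```

The projection maximizes the variance retained in a single channel, which is one plausible formalization of "most informative" in the least-squares sense; other fusion criteria would lead to different projections.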