Fault latency is the time between the physical occurrence of a fault and its corruption of data, which causes an error. This time is difficult to measure because neither the moment a fault occurs nor the exact moment it generates an error is directly observable. An experiment that accurately measures fault latency in the memory subsystem is described. The experiment uses real memory data from a VAX 11/780 running UNIX. Fault latency distributions are generated for stuck-at-0 (s-a-0) and stuck-at-1 (s-a-1) permanent fault models. Results show that the mean fault latency of an s-a-0 fault is nearly five times that of an s-a-1 fault. Large variations in fault latency are found across different regions of memory. An analysis-of-variance model that quantifies the relative influence of various workload measures on the evaluated latency is also given.
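To make the latency notion concrete, the following is a minimal illustrative sketch (not the paper's methodology): given a trace of accesses to a single memory bit and a hypothetical fault-injection time, the latency of a stuck-at fault is the time until the first read that observes a corrupted value. The trace format, times, and values are assumptions for illustration only.

```python
def fault_latency(trace, fault_time, stuck_at):
    """Time from fault occurrence to the first erroneous read.

    trace: list of (time, op, value), op in {"r", "w"}; value is the
           bit the program writes, or expects to read, at that time.
    stuck_at: 0 for an s-a-0 fault, 1 for an s-a-1 fault.
    Returns None if the fault never produces an error in the trace.
    """
    for time, op, value in trace:
        if time < fault_time:
            continue  # fault not yet present
        # A read is corrupted only when the correct bit differs from
        # the stuck value; otherwise the fault remains latent.
        if op == "r" and value != stuck_at:
            return time - fault_time
    return None

# Hypothetical access trace for one bit: written to 1, read, then
# rewritten to 0 and read again.
trace = [(0, "w", 1), (5, "r", 1), (12, "w", 0), (20, "r", 0), (30, "r", 1)]
print(fault_latency(trace, fault_time=3, stuck_at=0))  # → 2 (error at t=5)
print(fault_latency(trace, fault_time=3, stuck_at=1))  # → 17 (error at t=20)
```

The example also shows why the two fault types can have very different latencies: an s-a-0 fault stays latent while the bit holds 0, so its latency depends on how long memory regions dwell at each value under the workload.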