MPI has revolutionized parallel computing in science and engineering. But the MPI specification provides only an application programming interface. This is merely the first step toward an environment that is seamless and transparent to the end user as well as the developer. This talk discusses current progress toward a productive MPI environment.

For expert users of MPI, one of the major impediments to a seamless and transparent environment is the lack of an application binary interface (ABI) that would allow applications using shared libraries to work with any MPI implementation. Such an ABI would ease the development and deployment of tools and applications. However, defining a common ABI requires careful attention to many issues. For example, defining the contents of the MPI header file is insufficient to provide a workable ABI; the interaction of an MPI program with any process managers needs to be defined independently of the MPI implementation. In addition, some solutions that are appropriate for modest-sized clusters may not be appropriate for massively parallel systems with very low latency requirements, or even for large conventional clusters.

For novice users of MPI, the relatively low level of the parallel abstractions provided by MPI is the greatest barrier to achieving high productivity. This problem is best addressed by developing a combination of compile-time and run-time tools that aid in the development and debugging of MPI programs. One well-established approach is the use of libraries and frameworks written using MPI. However, libraries limit the user to the data structures and operations implemented as part of the library. An alternative is to provide source-to-source transformation tools that bridge the gap between a fully compiled parallel language and the library-based parallelism provided by MPI.
This talk will discuss both the issues involved in defining a common ABI for MPI and some efforts to provide better support for user-defined distributed data structures through simple source-transformation techniques.