We live in interesting times. Our systems have unprecedented levels of device integration. Analog and mixed-signal components and devices form increasingly large parts of our designs built for low power and high flexibility. New architectures and models of computation that embrace variation, such as neuromorphic computing, are on our horizon. Architectures specialized for neural networks and learning algorithms are being built as massive undertakings in contemporary industry, alongside dedicated hardware accelerators. Application-specific hardware has seen a healthy resurgence for machine learning and vision applications. With all these innovations in architecture and design, how do we know if we're getting them right? As designs get more complicated, the "price of the lunch" is paid in verification complexity. We have always aspired to build systems we don't know how to check. That problem is going to get much more challenging for the systems of the future. What does it mean to verify these massively integrated systems, with new features, new models of computation, non-traditional architectures, and new applications? How do we characterize, define, execute, and sign off on the correctness of the most complex systems known to humans? This paper touches upon these questions and presents challenges in these systems of the future.