Volterra kernels are well known as the multidimensional extension of the impulse response of a linear time-invariant (LTI) system. They can be used to accurately model weakly nonlinear systems, specifically systems with polynomial nonlinearities. They have been used in the past for white-box model order reduction (MOR) to model frequency-domain performance metrics such as distortion in power amplifiers (PAs). In this paper, we train a neural network on the time-domain response of high-speed link buffers to extract multiple high-order kernels at once. Once extracted, the kernels fully characterize the dynamics of the buffers of interest, and we demonstrate that the time-domain response is straightforward to obtain from them via multidimensional convolution. Previous work trained a shallow feed-forward neural network using Gaussian noise as the identification signal, which makes the method inconvenient to integrate with existing computer-aided design tools. In this work, we instead train the network directly on a pseudo-random bit sequence (PRBS). This is more challenging because the PRBS contains flat regions, which produce a rich frequency spectrum and demand a longer memory length, but it allows the method to be compatible with existing simulation programs. We investigate different topologies, including feed-forward and recurrent neural networks, and compare their training time, inference time, and convergence behavior. The paper presents a numerical example using a 28 Gbps PAM4 transceiver to validate the proposed method against traditional simulation approaches such as IBIS and SPICE-level simulation, comparing both speed and accuracy. Volterra kernels promise a novel way to perform accurate nonlinear circuit simulation within the well-known and well-developed LTI system framework, and the approach can be conveniently incorporated into existing EDA frameworks.
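To make the multidimensional-convolution step concrete, the following is a minimal sketch of how a time-domain response could be computed from extracted kernels, truncated at second order. The function name, kernel shapes, and memory length are illustrative assumptions for exposition, not the paper's implementation; with the second-order kernel set to zero, the model reduces to an ordinary linear convolution with the first-order kernel.

```python
import numpy as np

def volterra_response(x, h1, h2):
    """Second-order Volterra model output (illustrative sketch).

    x  : (N,) input samples
    h1 : (M,) first-order kernel (ordinary impulse response)
    h2 : (M, M) second-order kernel (2-D convolution over input pairs)
    """
    M = len(h1)
    N = len(x)
    y = np.zeros(N)
    for n in range(N):
        # Delayed-input vector [x[n], x[n-1], ..., x[n-M+1]], zero-padded
        xv = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        # First-order term: 1-D convolution with h1;
        # second-order term: quadratic form with h2 (2-D convolution)
        y[n] = h1 @ xv + xv @ h2 @ xv
    return y
```

Higher-order terms follow the same pattern: the k-th order kernel is contracted with k copies of the delayed-input vector, which is why the evaluation is often described as a super- or multidimensional convolution.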