Distributed particle filters (DPFs) are known to provide robustness for the state estimation problem and can reduce the amount of communication relative to centralized approaches. Because merging multiple distributions represented by particles and associated weights is difficult, however, most uses of DPFs to date either approximate the posterior distribution with a parametric model or rely on a predetermined message path. In this paper, the Markov Chain Distributed Particle Filter (MCDPF) algorithm is proposed, based on particles performing random walks across the network. This approach maintains robustness, since every sensor only needs to exchange particles and weights locally, and it enables more general representations of posterior distributions because no a priori assumptions are made on the distribution's form. The paper provides a proof of weak convergence of the MCDPF algorithm to the corresponding centralized particle filter and to the optimal filtering solution, and concludes with a numerical study showing that MCDPF yields reliable estimates of the posterior distribution of a nonlinear system.
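The random-walk exchange underlying this idea can be illustrated with a minimal sketch, assuming a simple ring topology and uniform neighbor selection; the function name, network, and parameters below are illustrative assumptions, not the paper's implementation. Each particle repeatedly hops to a uniformly chosen neighboring sensor, so after enough rounds of purely local exchange the particle set is mixed across the whole network.

```python
import random

def random_walk_exchange(adjacency, particle_nodes, rounds, rng):
    """Hypothetical sketch: move each particle to a uniformly random
    neighboring sensor for a number of rounds, mimicking the local
    random-walk exchange of particles across the network."""
    nodes = list(particle_nodes)
    for _ in range(rounds):
        nodes = [rng.choice(adjacency[v]) for v in nodes]
    return nodes

# Illustrative sensor network: a 5-node ring (an odd cycle, so the
# walk is aperiodic and mixes toward its stationary distribution).
adjacency = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
rng = random.Random(0)

# All 100 particles start at sensor 0; after 50 local-exchange rounds
# they have spread across the network using only neighbor communication.
final_nodes = random_walk_exchange(adjacency, [0] * 100, 50, rng)
spread = len(set(final_nodes))
```

The point of the sketch is that no sensor ever communicates beyond its immediate neighbors, yet the particle population still diffuses network-wide, which is the mechanism that lets MCDPF avoid both parametric posterior approximations and a predetermined message path.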