TY - GEN
T1 - Aggregation of Markov chains
T2 - 2014 53rd IEEE Annual Conference on Decision and Control, CDC 2014
AU - Xu, Yunwen
AU - Beck, Carolyn L.
AU - Salapaka, Srinivasa M.
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014
Y1 - 2014
N2 - We develop a method for aggregating large Markov chains into smaller representative Markov chains, where Markov chains are viewed as weighted directed graphs, and similar nodes (and edges) are aggregated using a deterministic annealing approach. The notions of representativeness of the aggregated graphs and of similarity between nodes in graphs are based on a newly proposed metric that quantifies connectivity in the underlying graph. Specifically, we develop notions of distance between subchains in Markov chains, and provide easily verifiable conditions that determine whether a given Markov chain is nearly decomposable, that is, conditions under which the deterministic annealing approach can be used to identify subchains with high probability. We show that the aggregated Markov chain preserves certain dynamics of the original chain. In particular, we provide explicit bounds on the ℓ1 norm of the error between the aggregated stationary distribution of the original Markov chain and the stationary distribution of the aggregated Markov chain, extending longstanding foundational results (Simon and Ando, 1961).
AB - We develop a method for aggregating large Markov chains into smaller representative Markov chains, where Markov chains are viewed as weighted directed graphs, and similar nodes (and edges) are aggregated using a deterministic annealing approach. The notions of representativeness of the aggregated graphs and of similarity between nodes in graphs are based on a newly proposed metric that quantifies connectivity in the underlying graph. Specifically, we develop notions of distance between subchains in Markov chains, and provide easily verifiable conditions that determine whether a given Markov chain is nearly decomposable, that is, conditions under which the deterministic annealing approach can be used to identify subchains with high probability. We show that the aggregated Markov chain preserves certain dynamics of the original chain. In particular, we provide explicit bounds on the ℓ1 norm of the error between the aggregated stationary distribution of the original Markov chain and the stationary distribution of the aggregated Markov chain, extending longstanding foundational results (Simon and Ando, 1961).
UR - http://www.scopus.com/inward/record.url?scp=84988214329&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84988214329&partnerID=8YFLogxK
U2 - 10.1109/CDC.2014.7040423
DO - 10.1109/CDC.2014.7040423
M3 - Conference contribution
AN - SCOPUS:84988214329
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 6591
EP - 6596
BT - 53rd IEEE Conference on Decision and Control, CDC 2014
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 15 December 2014 through 17 December 2014
ER -