### Abstract

The asymptotic bias and variance are important determinants of the quality of a simulation run. In particular, the asymptotic bias can be used to approximate the bias introduced by starting the collection of a measure from a particular initial state distribution, and the asymptotic variance can be used to compute the simulation time required to obtain a statistically significant estimate of a measure. While both of these quantities can be computed analytically for simple models and measures, e.g., the average buffer occupancy of an M/G/1 queue, practical computational methods have not been developed for general model classes. Such results would be useful since they would provide insight into the simulation time required for particular systems and measures and into the bias introduced by a particular initial state distribution. In this paper, we discuss the numerical computation of the asymptotic bias and variance of measures derived from continuous-time Markov reward models. In particular, we show how both measures together can be efficiently computed by solving two systems of linear equations. As a consequence of this formulation, we are able to numerically compute the asymptotic bias and variance of measures defined on very large and irregular Markov reward models. To illustrate this point, we apply the developed algorithm to queues with complex traffic behavior, different service time distributions, and several alternative scheduling disciplines typically encountered in nodes of high-speed communication networks.
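The abstract does not spell out the two linear systems, but the standard formulation for a continuous-time Markov reward model with generator `Q` and reward vector `r` is (1) the stationary equations for the distribution `pi`, and (2) a Poisson equation whose solution `d` yields both the asymptotic bias (as `mu @ d` for an initial distribution `mu`) and the asymptotic variance. The sketch below illustrates this textbook formulation on a small hypothetical 3-state chain; the matrices `Q`, `r`, and `mu` are invented for illustration, and the paper's actual algorithm for large, irregular models may differ.

```python
import numpy as np

# Hypothetical 3-state CTMC generator (rows sum to zero) and reward vector.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])
r = np.array([0.0, 1.0, 2.0])

n = Q.shape[0]
ones = np.ones(n)

# System 1: stationary distribution pi, from pi Q = 0 and pi 1 = 1.
A = np.vstack([Q.T, ones])
b = np.concatenate([np.zeros(n), [1.0]])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

alpha = pi @ r  # steady-state reward rate

# System 2: Poisson equation Q d = -(r - alpha 1), normalized by pi d = 0.
B = np.vstack([Q, pi])
c = np.concatenate([-(r - alpha * ones), [0.0]])
d = np.linalg.lstsq(B, c, rcond=None)[0]

# Asymptotic bias of the time-average reward when starting from mu.
mu = np.array([1.0, 0.0, 0.0])
bias = mu @ d

# Asymptotic (time-average) variance constant.
var = 2.0 * np.sum(pi * (r - alpha) * d)
```

Both systems are singular by one rank, so each is augmented with its normalization row (`pi 1 = 1`, `pi d = 0`) and solved in the least-squares sense; for the very large models the paper targets, sparse iterative solvers would replace the dense solves shown here.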

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 173-182 |
| Number of pages | 10 |
| Journal | Proceedings of the IEEE Annual Simulation Symposium |
| State | Published - Jan 1 1996 |
| Event | 29th Annual Simulation Symposium - New Orleans, LA, USA |
| Duration | Apr 8 1996 → Apr 11 1996 |

### ASJC Scopus subject areas

- Software
- Modeling and Simulation


## Cite this

Computation of the asymptotic bias and variance for simulation of Markov reward models. (1996). *Proceedings of the IEEE Annual Simulation Symposium*, 173-182.