TY - JOUR
T1 - Stochastic a posteriori blockmodels
T2 - Construction and assessment
AU - Wasserman, Stanley
AU - Anderson, Carolyn
N1 - Funding Information:
Research support provided by National Science Foundation Grant #SES84-08626 to the University of Illinois at Urbana-Champaign and by a predoctoral traineeship awarded to the second author by the Quantitative Methods Program of the Department of Psychology, University of Illinois at Urbana-Champaign, funded by ADAMHA, National Research Service Award #MH14257.
PY - 1987/3
Y1 - 1987/3
N2 - In 1983, Holland, Laskey, and Leinhardt, using the ideas of Holland and Leinhardt, and Fienberg and Wasserman, introduced the notion of a stochastic blockmodel. The mathematics for stochastic a priori blockmodels, in which exogenous actor attribute data are used to partition actors independently of any statistical analysis of the available relational data, have been refined by several researchers and the resulting models used by many. Attempts to simultaneously partition actors and to perform relational data analyses using statistical methods that yield stochastic a posteriori blockmodels are still quite rare. In this paper, we discuss some old suggestions for producing such posterior blockmodels, and comment on other new suggestions based on multiple comparisons of model parameters, log-linear models for ordinal categorical data, and correspondence analysis. We also review measures for goodness-of-fit of a blockmodel, and we describe a natural approach to this problem using likelihood-ratio statistics generated from a popular model for relational data.
AB - In 1983, Holland, Laskey, and Leinhardt, using the ideas of Holland and Leinhardt, and Fienberg and Wasserman, introduced the notion of a stochastic blockmodel. The mathematics for stochastic a priori blockmodels, in which exogenous actor attribute data are used to partition actors independently of any statistical analysis of the available relational data, have been refined by several researchers and the resulting models used by many. Attempts to simultaneously partition actors and to perform relational data analyses using statistical methods that yield stochastic a posteriori blockmodels are still quite rare. In this paper, we discuss some old suggestions for producing such posterior blockmodels, and comment on other new suggestions based on multiple comparisons of model parameters, log-linear models for ordinal categorical data, and correspondence analysis. We also review measures for goodness-of-fit of a blockmodel, and we describe a natural approach to this problem using likelihood-ratio statistics generated from a popular model for relational data.
UR - http://www.scopus.com/inward/record.url?scp=37249037751&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=37249037751&partnerID=8YFLogxK
U2 - 10.1016/0378-8733(87)90015-3
DO - 10.1016/0378-8733(87)90015-3
M3 - Article
AN - SCOPUS:37249037751
SN - 0378-8733
VL - 9
SP - 1
EP - 36
JO - Social Networks
JF - Social Networks
IS - 1
ER -