TY - GEN
T1 - Proposal for a Flexible Benchmark for Agent Based Models
AU - Koning, Elizabeth
AU - Gropp, William
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - As hardware capabilities and scientific understanding of social and biological systems improve, researchers are able to model complex systems in increasing detail. This change increases the demands on Agent Based Modeling (ABM) software, making its performance more critical. For researchers to choose the best system in which to implement their models, they must understand not only the capabilities of the platforms, which are available in the documentation, but also their performance relative to other software options. Currently, there is a dearth of information on the performance of ABM platforms and no standardized way to compare the performance of different platforms. To address this gap, the proposed NothingModel benchmark is a flexible benchmark designed to reflect a wide variety of ABMs and to allow developers of models and ABM platforms to compare platforms under a range of conditions. Set up as a series of building blocks, it allows users to vary the scale, memory use, communication, computation, and heterogeneity of a sample model implemented with multiple tools. Along with the written description of the model, we include a reference implementation in NetLogo to demonstrate the specifics and provide a comparison to other tools.
AB - As hardware capabilities and scientific understanding of social and biological systems improve, researchers are able to model complex systems in increasing detail. This change increases the demands on Agent Based Modeling (ABM) software, making its performance more critical. For researchers to choose the best system in which to implement their models, they must understand not only the capabilities of the platforms, which are available in the documentation, but also their performance relative to other software options. Currently, there is a dearth of information on the performance of ABM platforms and no standardized way to compare the performance of different platforms. To address this gap, the proposed NothingModel benchmark is a flexible benchmark designed to reflect a wide variety of ABMs and to allow developers of models and ABM platforms to compare platforms under a range of conditions. Set up as a series of building blocks, it allows users to vary the scale, memory use, communication, computation, and heterogeneity of a sample model implemented with multiple tools. Along with the written description of the model, we include a reference implementation in NetLogo to demonstrate the specifics and provide a comparison to other tools.
KW - ABM
KW - Agent Based Models
KW - Benchmarking
KW - MAS
KW - Multi-agent systems
KW - Performance
UR - http://www.scopus.com/inward/record.url?scp=85200768539&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85200768539&partnerID=8YFLogxK
U2 - 10.1109/IPDPSW63119.2024.00149
DO - 10.1109/IPDPSW63119.2024.00149
M3 - Conference contribution
AN - SCOPUS:85200768539
T3 - 2024 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2024
SP - 835
EP - 838
BT - 2024 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2024
Y2 - 27 May 2024 through 31 May 2024
ER -