TY - JOUR
T1 - Safe-visor architecture for sandboxing (AI-based) unverified controllers in stochastic cyber–physical systems
AU - Zhong, Bingzhuo
AU - Lavaei, Abolfazl
AU - Cao, Hongpeng
AU - Zamani, Majid
AU - Caccamo, Marco
N1 - Funding Information:
This work was supported in part by the H2020 ERC Starting Grant AutoCPS (grant agreement No. 804639) and by an Alexander von Humboldt Professorship endowed by the German Federal Ministry of Education and Research.
Publisher Copyright:
© 2021 Elsevier Ltd
PY - 2021/12
Y1 - 2021/12
N2 - High-performance but unverified controllers, e.g., artificial intelligence-based (a.k.a. AI-based) controllers, are widely employed in cyber–physical systems (CPSs) to accomplish complex control missions. However, guaranteeing the safety and reliability of CPSs equipped with such controllers, which is of vital importance in many real-life safety-critical applications, is currently very challenging. To cope with this difficulty, we propose in this work a Safe-visor architecture for sandboxing unverified controllers in CPSs operating in noisy environments (a.k.a. stochastic CPSs). The proposed architecture contains a history-based supervisor, which checks inputs from the unverified controller and balances the functionality and safety of the system, and a safety advisor that provides a fallback whenever the unverified controller endangers the safety of the system. Both the history-based supervisor and the safety advisor are designed based on an approximate probabilistic relation between the original system and its finite abstraction. By employing this architecture, we provide formal probabilistic guarantees on satisfying safety specifications expressed as the accepting languages of deterministic finite automata (DFAs). Meanwhile, the unverified controllers can still be employed in the control loop even though they are not reliable. We demonstrate the effectiveness of our proposed results by applying them to two (physical) case studies.
AB - High-performance but unverified controllers, e.g., artificial intelligence-based (a.k.a. AI-based) controllers, are widely employed in cyber–physical systems (CPSs) to accomplish complex control missions. However, guaranteeing the safety and reliability of CPSs equipped with such controllers, which is of vital importance in many real-life safety-critical applications, is currently very challenging. To cope with this difficulty, we propose in this work a Safe-visor architecture for sandboxing unverified controllers in CPSs operating in noisy environments (a.k.a. stochastic CPSs). The proposed architecture contains a history-based supervisor, which checks inputs from the unverified controller and balances the functionality and safety of the system, and a safety advisor that provides a fallback whenever the unverified controller endangers the safety of the system. Both the history-based supervisor and the safety advisor are designed based on an approximate probabilistic relation between the original system and its finite abstraction. By employing this architecture, we provide formal probabilistic guarantees on satisfying safety specifications expressed as the accepting languages of deterministic finite automata (DFAs). Meanwhile, the unverified controllers can still be employed in the control loop even though they are not reliable. We demonstrate the effectiveness of our proposed results by applying them to two (physical) case studies.
KW - AI-based unverified controllers
KW - Approximate probabilistic relations
KW - Safe-visor architecture
KW - Stochastic cyber–physical systems
UR - http://www.scopus.com/inward/record.url?scp=85117809329&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85117809329&partnerID=8YFLogxK
U2 - 10.1016/j.nahs.2021.101110
DO - 10.1016/j.nahs.2021.101110
M3 - Article
AN - SCOPUS:85117809329
SN - 1751-570X
VL - 43
JO - Nonlinear Analysis: Hybrid Systems
JF - Nonlinear Analysis: Hybrid Systems
M1 - 101110
ER -