TY - JOUR
T1 - A novel explainable AI-based approach to estimate the natural period of vibration of masonry infill reinforced concrete frame structures using different machine learning techniques
AU - Thisovithan, P.
AU - Aththanayake, Harinda
AU - Meddage, D. P. P.
AU - Ekanayake, I. U.
AU - Rathnayake, Upaka
N1 - Publisher Copyright:
© 2023 The Author(s)
PY - 2023/9
Y1 - 2023/9
N2 - In this study, we used four machine learning models, namely artificial neural network (ANN), support vector regression (SVR), k-nearest neighbor (KNN), and random forest (RF), to predict the natural period of reinforced concrete frame structures with masonry infill walls. To interpret the models and their predictions, we employed Shapley additive explanations (SHAP), local interpretable model-agnostic explanations (LIME), and partial dependence plots (PDP). All models showed good accuracy in predicting the fundamental period (T). The post-hoc explanations provided insights into (a) the importance of each feature, (b) feature interactions, and (c) the underlying reasoning behind the predictions. For the first time, we created a graphical interface that predicts the value of T along with its SHAP explanation. This interface can be useful in manually optimizing the design of reinforced concrete frame structures with masonry infill walls. However, the local explanations from SHAP and LIME exhibited significant discrepancies, and LIME underestimated the importance of dominant features compared to SHAP. These discrepancies highlight the need for further research on explainable artificial intelligence (XAI) in structural engineering.
AB - In this study, we used four machine learning models, namely artificial neural network (ANN), support vector regression (SVR), k-nearest neighbor (KNN), and random forest (RF), to predict the natural period of reinforced concrete frame structures with masonry infill walls. To interpret the models and their predictions, we employed Shapley additive explanations (SHAP), local interpretable model-agnostic explanations (LIME), and partial dependence plots (PDP). All models showed good accuracy in predicting the fundamental period (T). The post-hoc explanations provided insights into (a) the importance of each feature, (b) feature interactions, and (c) the underlying reasoning behind the predictions. For the first time, we created a graphical interface that predicts the value of T along with its SHAP explanation. This interface can be useful in manually optimizing the design of reinforced concrete frame structures with masonry infill walls. However, the local explanations from SHAP and LIME exhibited significant discrepancies, and LIME underestimated the importance of dominant features compared to SHAP. These discrepancies highlight the need for further research on explainable artificial intelligence (XAI) in structural engineering.
KW - Explainable AI
KW - Machine learning
KW - Masonry infill
KW - Natural period
KW - Regression
UR - http://www.scopus.com/inward/record.url?scp=85169612359&partnerID=8YFLogxK
U2 - 10.1016/j.rineng.2023.101388
DO - 10.1016/j.rineng.2023.101388
M3 - Article
AN - SCOPUS:85169612359
SN - 2590-1230
VL - 19
JO - Results in Engineering
JF - Results in Engineering
M1 - 101388
ER -