An interpretable machine learning framework for enhancing road transportation safety
Journal
Transportation Research Part E: Logistics and Transportation Review
Journal Volume
195
Start Page
103969
ISSN
1366-5545
Date Issued
2025-03
Author(s)
Abstract
This study presents a comprehensive decision-making framework that employs eXplainable Artificial Intelligence (XAI)-based methods to improve proactive road transport safety management, which is critical for global supply chain networks. The framework offers explainable predictions as well as recommendations for the near-future digitization of safety tools and their usage, customized for road transport safety management. We employed four black-box machine learning models in this setting: artificial neural network (ANN), support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost). These models deepen our understanding of the crash-related risk factors that contribute to the severity of traffic accident injuries. Because of their opaqueness and complex inner workings, however, stakeholders often perceive such models as data-driven black boxes, which limits their usefulness as efficient decision-support tools. The recommended decision support incorporates agreement levels for predictions and interpretations across various XAI modeling paradigms. We deploy PFI (Permutation Feature Importance) and FIRM (Feature Importance Ranking Measure) tools to evaluate the extent of agreement in explainability between these modeling approaches; the recommendations are based on the PFI and FIRM values of the best-performing models. As a proof of concept, we apply the framework to a real crash dataset obtained from the NHTSA (National Highway Traffic Safety Administration of the United States) and report end-user feedback for use by transport policymakers.
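The PFI technique mentioned in the abstract can be sketched in a few lines: permute one feature column at a time and record how much a trained model's accuracy drops. The sketch below is a minimal, self-contained illustration with synthetic data and a stand-in "black-box" predictor; the dataset, the threshold predictor, and the repeat count are assumptions for illustration, not the paper's actual models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: feature 0 fully determines the label, feature 1 is noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model_predict(X):
    # Hypothetical "black-box" model: a simple threshold on feature 0
    # (in the paper this would be a trained ANN/SVM/RF/XGBoost model).
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = (predict(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the feature-label association
            drops.append(baseline - (predict(X_perm) == y).mean())
        importances.append(float(np.mean(drops)))
    return np.array(importances)

imp = permutation_importance(model_predict, X, y)
# imp[0] should be large (shuffling feature 0 destroys accuracy);
# imp[1] should be near zero (feature 1 carries no signal).
```

Repeating this for each model in the ensemble and comparing the resulting importance rankings (e.g., by rank correlation) is one way to quantify the cross-model agreement the framework relies on.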
Subjects
Crash severity
Explainable analytics
Interpretability
Machine learning
Safety assessment
Transport logistics
SDGs
Publisher
Elsevier BV
Type
journal article
