Commentary

Eyes on the prize: harnessing computer vision for automated detection of traumatic rib and clavicle fractures in chest radiographs

Cheng et al1 in ‘Development and Evaluation of a Deep Learning-Based Model for Simultaneous Detection and Localization of Rib and Clavicle Fractures in Trauma Patients’ Chest Radiographs’ add to the rapidly growing body of literature using supervised and unsupervised machine learning algorithms for fracture detection in patients with trauma.2–5 The authors introduce CXR-FrNET, a deep learning algorithm that takes chest X-ray images in Portable Network Graphics (PNG) format and produces a heat map of all potential rib and clavicle fracture sites, to aid providers in detecting rib and clavicular fractures with high sensitivity and acceptable accuracy (86.8% and 83.7%, respectively). The authors included 991 chest X-rays over an 8-year period in their data set, with 2409 rib fracture bounding boxes and 331 clavicle fracture bounding boxes. These bounding boxes were created by two experienced trauma surgeons in conjunction with radiology reports and chest CT scans.
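For readers less familiar with how such performance figures are derived, the brief sketch below shows how sensitivity and accuracy follow from a confusion matrix of image-level fracture predictions. The counts used are hypothetical illustrations only and are not taken from the study; the reported 86.8% sensitivity and 83.7% accuracy come from the authors' own evaluation.

```python
# Minimal sketch: deriving sensitivity and accuracy from a confusion
# matrix of image-level fracture predictions.
# The counts below are hypothetical and do NOT come from the study.

tp = 300   # fractures correctly flagged by the model
fn = 46    # fractures the model missed
tn = 250   # normal studies correctly classified
fp = 60    # normal studies incorrectly flagged

sensitivity = tp / (tp + fn)                  # proportion of true fractures detected
accuracy = (tp + tn) / (tp + tn + fp + fn)    # proportion of all studies classified correctly

print(f"Sensitivity: {sensitivity:.1%}")
print(f"Accuracy:    {accuracy:.1%}")
```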

Timely diagnosis of traumatic chest wall fractures is crucial to avoiding delays in treatment and poor patient outcomes.6 We commend the authors for tackling an impactful problem with the potential to improve the diagnostic accuracy of chest X-ray interpretation and reduce potentially avoidable radiation exposure for patients. The use of computer vision models in diagnostic radiology has exploded.7 For now, the clinical use case for this model is narrow, given its diagnostic focus on rib and clavicle fractures only. Practical implementation will need to be targeted towards resource-constrained healthcare settings, where access to radiologists or CT scanners is limited, necessitating a diagnostic aid capable of timely diagnosis based on chest X-ray alone.

Notable limitations of the study include the lack of radiologist involvement in internal validation of the bounding box chest X-ray labels, the absence of an inter-rater reliability assessment between the two trauma surgeon labellers, and the lack of public availability of the CXR-FrNET algorithm for greater transparency. While this initial model is promising and the authors are to be commended, further work is needed to determine its clinical utility, implementation feasibility and potential impact on patient outcomes. Making this tool freely available as a simple software solution that integrates into existing radiology platforms could help the authors implement it more broadly.
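One way the labelling limitation could be addressed in future work is a formal inter-rater reliability analysis, for example Cohen's kappa between the two labellers. The sketch below illustrates the calculation for two raters making binary fracture present/absent judgements; the ratings shown are hypothetical, and the study did not report such an analysis.

```python
# Minimal sketch: Cohen's kappa for agreement between two labellers on
# binary fracture present (1) / absent (0) judgements per image region.
# The ratings below are hypothetical; the study did not report this analysis.
from collections import Counter

rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement

# Expected agreement by chance, from each rater's marginal label frequencies
count_a, count_b = Counter(rater_a), Counter(rater_b)
expected = sum((count_a[k] / n) * (count_b[k] / n) for k in (0, 1))

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```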
