This is a WIP (Work-In-Progress): this message will be removed once sufficient progress has been made.
This post goes over some current discussions around what A.I. (in practice, deep learning systems) means in the context of radiology.
In 2017, Geoffrey Hinton claimed that “people should stop training radiologists now”. Five years on, this statement remains inaccurate and overstates the promise of A.I. Unfortunately, the well-intentioned effort to build automated systems to detect COVID-19 made little, if any, impact. The reasons ascribed include poor-quality ‘Frankenstein’ datasets, bias in the labels and subjects used to train such systems, and shockingly odd model behavior, like relying on text in the images to derive outputs. One way to look at this: deep learning systems may have delivered out-of-the-park accuracies compared to classical methods, but we really need to know why and how they work, and what their bounds of performance are, especially for critical applications like health.
Nearly half of all FDA-approved systems that involve A.I. in medicine fall under the radiology and imaging space!
Image segmentation and quantification of radiological biomarkers are probably less valuable from a patient’s point of view, but are very important for the clinical workflow.
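To make “quantification” concrete, here is a minimal sketch of the Dice coefficient, a standard overlap metric used to score a predicted segmentation mask against a radiologist’s annotation. The masks and values below are illustrative toys, not from any real study:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 "lesion" masks: model prediction vs. reference annotation.
pred = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 0, 0, 0],
                   [0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0]])
print(dice_coefficient(pred, target))  # 2*3 / (4+3) ≈ 0.857
```

A metric like this is what “quantification” often reduces to in practice: a single agreement score that can be tracked across scans, scanners, and patient cohorts.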
Robustness (to out-of-distribution data as well as adversarial attacks), validation (technical and clinical), and verification (formal, and bounds-based) of A.I. systems are major challenges - and addressing them could improve the reliability of such systems going forward.
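To make the adversarial-attack concern concrete, here is a minimal sketch of the fast gradient sign method (FGSM) against a toy logistic-regression “classifier”. The weights and input are synthetic stand-ins, not a real imaging model; the point is only that a small, structured perturbation of the input can sharply change the model’s score:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, epsilon):
    """One FGSM step: nudge x in the direction that increases the loss.

    For logistic regression with binary cross-entropy, the gradient of
    the loss w.r.t. the input is (sigmoid(w.x + b) - y) * w.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + epsilon * np.sign(grad_x)

# Toy "classifier": fixed random weights, a clean input it scores highly.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = w / np.linalg.norm(w)  # input aligned with w -> confident positive
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, epsilon=0.5)
p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
print(p_clean, p_adv)  # the perturbed input is scored strictly lower
```

Deep imaging models are attacked the same way, just with gradients taken through the network; defending against (or at least detecting) such perturbations is part of the robustness challenge above.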
Of note is Curt Langlotz’s remark in this regard: “Will A.I. replace radiologists? No, but radiologists who do A.I. will replace radiologists who don’t”.