Comprehensive Assessment of Geographic Atrophy - Artificial Intelligence

Leo Sheck

This post was written by Dr Aaron Yap and me, and parts of it first appeared in New Zealand Optics in 2023. Please read part 1 and part 2 for background information. Information on consultation with Dr Sheck can be found here.

Geographic atrophy (GA) is the late manifestation of non-neovascular age-related macular degeneration (AMD), characterised by progressive loss of the retinal pigment epithelium (RPE) and photoreceptors. Until recently, it was a slowly progressive, untreatable disease leading to vision loss. Recent trials of complement inhibitors to slow the progression of GA have shown promising results. Alongside these developments, artificial intelligence (AI) offers efficient and accurate quantitative measurement for disease prognostication and assessment of treatment response.

In patients receiving pegcetacoplan monthly or every other month, the GA growth rate was reduced by 29% and 20%, respectively, compared with sham treatment. The effect was more pronounced in the second six months of treatment, with observed reductions of 45% and 33% for monthly and every-other-month pegcetacoplan, respectively. At twelve months, somewhat disappointingly, pegcetacoplan had no effect on foveal encroachment or visual acuity measures compared with sham treatment. However, the direction and rate of GA growth are highly variable between individual patients, and even within an individual GA lesion. Local progression is of particular importance, as growth towards the fovea is largely responsible for central vision loss. In-depth assessment of GA is essential to identify the patients most likely to benefit from pegcetacoplan, but this is time-consuming (the median time to grade a 49-slice OCT volume for geographic atrophy at the Moorfields Eye Hospital reading centre is 43 minutes) and may not be feasible in routine clinical practice.

AI for patient selection

A group from Moorfields set out to develop and validate a fully automated deep learning model to detect and quantify geographic atrophy from OCT. The training data comprised OCT images from the FILLY study of pegcetacoplan described above. The model was then validated on a separate dataset of patients receiving routine care at Moorfields Eye Hospital. The primary outcome was agreement in segmentation and classification between the deep-learning model and the combined opinion of two expert graders.

When applied to the validation cohort of 884 B-scans from 192 OCT volumes, the deep-learning model produced predictions similar to those of the expert graders (median Dice similarity coefficient [DSC] 0·96; intraclass correlation coefficient [ICC] 0·93) and exceeded the agreement between human graders (DSC 0·80 [0·28]; ICC 0·79). The model also accurately segmented each of the three constituent features of GA against consensus grading in the external validation dataset: RPE loss (median DSC 0·95), photoreceptor degeneration (0·96) and hypertransmission (0·97). Whereas human grading of a 49-slice OCT volume takes a median of 43 minutes, the AI model takes only 2·04 seconds, a more than 1,000-fold improvement in efficiency.
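For readers curious how the Dice similarity coefficient is calculated, a minimal sketch in Python is shown below, using a toy pair of binary segmentation masks; the masks and function name are illustrative and are not taken from the Moorfields study code.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: model prediction vs expert grader mask for one OCT B-scan
model_mask  = np.array([[0, 1, 1, 1, 0],
                        [0, 1, 1, 1, 0]])
grader_mask = np.array([[0, 1, 1, 0, 0],
                        [0, 1, 1, 1, 0]])
print(f"DSC: {dice_coefficient(model_mask, grader_mask):.2f}")  # DSC: 0.91
```

A DSC of 1 indicates perfect overlap and 0 indicates none, so a median DSC of 0·96 against the expert consensus reflects near-complete agreement.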

Whilst treatment response in the FILLY trial was assessed on fundus autofluorescence images, Vogl et al. applied a deep learning model to optical coherence tomography (OCT) images from the same dataset. Using AI-based analysis, OCT images were automatically segmented for pathomorphological features to produce a precise topographic "heat map" of GA activity around existing lesions. The local progression rate was higher in areas with low eccentricity from the fovea, a thinner photoreceptor layer, or a higher concentration of hyperreflective foci in the junctional zone. Even after accounting for these confounding risk factors, the local progression rate was 28.0% lower in eyes treated with monthly pegcetacoplan compared with sham.
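As a rough illustration of how a topographic "heat map" of local GA progression might be derived from two co-registered en-face atrophy masks, the sketch below assigns each newly atrophic pixel a growth rate based on its distance from the baseline lesion. This is a simplification for intuition only, not the pipeline used by Vogl et al.; the function name, pixel size and distance-based rate are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def local_growth_map(baseline_mask: np.ndarray,
                     followup_mask: np.ndarray,
                     pixel_size_um: float,
                     interval_years: float) -> np.ndarray:
    """Per-pixel local GA growth rate (um/year) for newly atrophic pixels.

    Each pixel that became atrophic by follow-up is assigned its distance
    to the baseline lesion, divided by the time interval. Illustrative
    simplification only; not the method of Vogl et al.
    """
    baseline = baseline_mask.astype(bool)
    followup = followup_mask.astype(bool)
    # Distance (in pixels) from every pixel to the nearest baseline lesion pixel
    dist_to_lesion = distance_transform_edt(~baseline)
    new_atrophy = followup & ~baseline
    growth = np.zeros(baseline.shape, dtype=float)
    growth[new_atrophy] = dist_to_lesion[new_atrophy] * pixel_size_um / interval_years
    return growth

# Toy example: a lesion that has expanded one pixel to the right over one year
baseline = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0]])
followup = np.array([[1, 1, 1, 0],
                     [1, 1, 1, 0]])
print(local_growth_map(baseline, followup, pixel_size_um=11.7, interval_years=1.0))
```

In practice, such maps would also be related to local features such as foveal eccentricity, photoreceptor layer thickness and hyperreflective foci, as in the study described above.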

Conclusion

We describe two studies that demonstrate the capability of well-trained AI models to identify OCT characteristics of GA that predict visually significant progression. These models achieve an accuracy close to that of expert human graders, and a consistency that surpasses inter-human variability. Manual assessment of OCT scans at this level requires expert graders, is time-consuming and is subject to intergrader variability, making such endpoints impractical to assess in routine clinical practice or research. Artificial intelligence is the ideal clinical tool for determining which patients will benefit from novel GA treatments and for quantifying treatment response for clinicians, researchers and patients.

About Dr Leo Sheck

Dr Sheck is a RANZCO-qualified, internationally trained ophthalmologist. He combined his initial training in New Zealand with a two-year advanced fellowship at Moorfields Eye Hospital, London. He also holds a Doctorate in Ocular Genetics from the University of Auckland and a Master of Business Administration from the University of Cambridge. He specialises in medical retina diseases (injection therapy), cataract surgery, ocular genetics, uveitis and electrodiagnostics.