CUHK pioneers the world’s first globally representative medical AI foundation model for more equitable, effective, cost-saving and secure innovation in eye care
An international research consortium led by The Chinese University of Hong Kong (CUHK)’s Faculty of Medicine (CU Medicine) has launched the “Global RETFound” initiative to develop the first globally representative artificial intelligence foundation model in medicine, using 100 million eye images. The global model will open the door to more equitable, effective and privacy-preserving medical AI development on every continent, requiring minimal data and computational resources. The rationale and novelty of the Global RETFound initiative have been published in the renowned medical journal Nature Medicine.

Featured are contributors to the Global RETFound initiative, including (from left) Dr Harry Jiang Hongyang, Research Assistant Professor in the Department of Ophthalmology and Visual Sciences at CU Medicine; Dr Yih Chung Tham, Assistant Professor at NUS Medicine; Professor Carol Cheung Yim-lui from the Department of Ophthalmology and Visual Sciences at CU Medicine; Professor Pearse Keane from University College London; and Dr Emma Ran Anran, Scientific Officer in the Department of Ophthalmology and Visual Sciences at CU Medicine.
100 million eye images to advance medical AI research and clinical applications worldwide
Medical foundation models, which serve as starting points for the development of AI applications, require a broad range of data to be effective. Medical AI development is hindered by training datasets that lack geographic and ethnic diversity, with Southeast Asia, Central Asia, Latin America, Africa and the Middle East especially underrepresented. The gap is widened further by strict data-sharing regulations, limited model generalisability, and constraints such as shortages of expertise and AI infrastructure.
Researchers from CU Medicine, the National University of Singapore Yong Loo Lin School of Medicine (NUS Medicine), the Institute of Ophthalmology at University College London and the NIHR Moorfields Biomedical Research Centre led the development of the “Global RETFound” model using an unprecedented dataset of over 100 million colour fundus photographs of the retina, the light-sensitive tissue at the back of the eye. The images were sourced from over 100 study groups in more than 65 countries and regions, spanning Southeast Asia, Africa, the Middle East, South America, the Western Pacific and the Caucasus region. The dataset is one of the most geographically and ethnically diverse ever assembled for medical AI training, and the initiative behind it is one of the largest medical AI collaborations in history.

The “Global RETFound” initiative represents one of the largest medical AI collaborations ever undertaken. It has secured collaborations with over 100 study groups across more than 65 countries and regions, using over 100 million colour fundus photographs to advance medical innovations worldwide.
Professor Carol Cheung Yim-lui, from the Department of Ophthalmology and Visual Sciences at CU Medicine, emphasised the broader implications: “This initiative has the potential to establish new international benchmarks for generalisability and fairness in medical AI. By providing researchers worldwide with access to a globally trained foundation model, we can accelerate the development of AI tools tailored to local clinical needs with substantially reduced data and computational requirements.” The CU Medicine team also includes Dr Emma Ran Anran, Scientific Officer, and Dr Harry Jiang Hongyang, Research Assistant Professor, both from the Department of Ophthalmology and Visual Sciences.
“Current foundation models are trained on data that is geographically and demographically narrow, which limits their effectiveness and can perpetuate existing health inequities,” explained Dr Yih Chung Tham, Assistant Professor at NUS Medicine. “The ‘Global RETFound’ Consortium addresses this challenge through innovative approaches that enable broad international participation while maintaining strict privacy protections.”
A more inclusive model with privacy secured
The initiative revolutionises medical AI development by offering a flexible, two-pronged data-sharing framework, designed to accommodate varying technical capacities and regulatory requirements across participating institutions. The first route involves local fine-tuning of generative AI models at individual institutions, with only model weights shared centrally – ensuring no patient data leaves the originating site. The second route enables direct sharing of de-identified data through secure infrastructure for institutions that lack local GPU resources or technical expertise.

The medical AI foundation model features a flexible, two-pronged data-sharing framework, which allows local fine-tuning of generative AI models at individual institutions, with only model weights shared centrally – ensuring no leakage of patient data. For institutions lacking GPU resources or technical expertise, the framework also supports direct sharing of de-identified data via secure infrastructure.
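The consortium has not published implementation code, but the weights-only route can be pictured with a short sketch. The snippet below is a minimal illustration, assuming PyTorch: it fine-tunes a stand-in generative model (a small image autoencoder) on local fundus photographs and exports only the learned parameters. The architecture, file paths and training details are hypothetical placeholders, not the consortium’s actual pipeline.

```python
# Minimal sketch of the weights-only sharing route (assumes PyTorch, Pillow
# and torchvision; model, paths and hyperparameters are illustrative only).
from pathlib import Path

import torch
import torch.nn as nn
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms


class FundusImages(Dataset):
    """Unlabelled local fundus photographs; the images never leave this site."""

    def __init__(self, root: str):
        self.paths = sorted(Path(root).glob("*.jpg"))
        self.tf = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        return self.tf(Image.open(self.paths[i]).convert("RGB"))


# Stand-in for the generative model each site would fine-tune locally.
autoencoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)

loader = DataLoader(FundusImages("/data/local_fundus"), batch_size=16, shuffle=True)
opt = torch.optim.AdamW(autoencoder.parameters(), lr=1e-4)

autoencoder.train()
for batch in loader:  # one epoch, for illustration
    opt.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(batch), batch)  # reconstruction loss
    loss.backward()
    opt.step()

# Only the learned weights are sent for central aggregation; the exported
# file contains no images and no patient identifiers.
torch.save(autoencoder.state_dict(), "site_weights.pt")
```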
“This dual approach allows participation from research groups regardless of their resource levels,” noted Professor Pearse Keane from University College London. “By combining real data with synthetic data generation techniques, we can build a diverse, globally representative dataset without compromising security.”
The “Global RETFound” model will undergo comprehensive evaluation across multiple ophthalmic and systemic diseases, including diabetic retinopathy, glaucoma, age-related macular degeneration, cardiovascular disease, neurodegenerative diseases, and diabetic vascular complications. It will be released under a Creative Commons license, making it freely available for non-commercial research worldwide.
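To illustrate how a freely released foundation model could cut local data and computational requirements, the sketch below freezes a placeholder encoder and trains only a small classification head (linear probing) for a downstream task such as referable diabetic retinopathy. The encoder, weight file and toy data are hypothetical stand-ins; the actual Global RETFound interface has not yet been released.

```python
# Hedged sketch of downstream adaptation via linear probing (assumes PyTorch;
# the encoder, weight file and data are placeholders, not the released model).
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """Placeholder for the pretrained retinal encoder."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.features(x)


encoder = TinyEncoder()
# encoder.load_state_dict(torch.load("global_retfound.pt"))  # hypothetical file

for p in encoder.parameters():  # freeze the foundation model
    p.requires_grad = False

head = nn.Linear(32, 2)  # e.g. referable diabetic retinopathy: yes / no
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy stand-in for a small, locally labelled dataset.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

for _ in range(10):  # only the small head is trained
    opt.zero_grad()
    loss = loss_fn(head(encoder(images)), labels)
    loss.backward()
    opt.step()
```

Because only the head’s few parameters are updated, a site could in principle adapt the model with a modest labelled dataset and no specialised GPU hardware.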
The consortium aims to share its methodologies widely, laying the groundwork for similar initiatives in medical specialties beyond ophthalmology, and welcomes additional researchers and institutions to join its collaborative effort towards more inclusive medical AI development.