EfficientNetB0-Based End-to-End Diagnostic System for Diabetic Retinopathy Grading and Macular Edema Detection

Introduction

Diabetic retinopathy (DR) is one of the most common complications of diabetes. With the growing number of diabetic patients, DR has become one of the leading causes of vision loss and blindness.1 Based on the degree of retinal damage, DR is classified into three stages: mild-to-moderate non-proliferative, severe non-proliferative, and proliferative.2,3 Clinical presentation and treatment strategies depend on the disease stage, with proliferative diabetic retinopathy representing the most severe form; if not treated in a timely manner, it may lead to irreversible vision loss.4 In addition, diabetic macular edema (DME), a common complication of DR, is typically characterized by retinal leakage and edema that significantly impair central vision, and it is likewise one of the leading causes of vision loss in diabetic patients.5 Therefore, early diagnosis and intervention are crucial to preventing vision loss.6

Fundus fluorescein angiography (FFA) is a key ophthalmic imaging technique widely used to diagnose and evaluate DR.7,8 FFA provides clear visualization of retinal vascular morphology, leakage, and hemodynamic changes, helping clinicians determine the disease stage more accurately, assess the degree of macular edema, and monitor treatment outcomes.9 However, interpreting FFA images is complex and time-consuming, and diagnostic accuracy varies with the expertise of the reading ophthalmologist.10 The development of an automated diagnostic system that can quickly and accurately analyze the progression of diabetic retinopathy from FFA images therefore holds significant clinical value.

With the advancement of the Fourth Industrial Revolution in the 21st century, the application of artificial intelligence (AI) to retinal diseases has grown rapidly.11,12 Although AI has demonstrated significant potential in early screening, accurate diagnosis, and treatment-outcome prediction for DR, age-related macular degeneration (AMD), and myopic retinal diseases, most existing studies still rely primarily on fundus photography.13–15 Compared with fundus photography, however, FFA provides more detailed vascular information and dynamically reveals vascular leakage and neovascularization, offering unique advantages, particularly in the early diagnosis of DR and the assessment of its proliferative stages.

Therefore, this study developed an AI-based automated diagnostic system using a dataset of FFA images from patients with DR. The system efficiently processes and analyzes FFA images, enabling rapid assessment of DR progression. By accurately identifying vascular abnormalities in FFA images, it assists clinicians in precisely classifying DR severity and provides critical input for subsequent treatment decisions. The system is designed to offer effective support in busy clinical settings, helping physicians improve diagnostic efficiency and accuracy in high-pressure, high-volume environments.

Methods

Data Collection

We utilized a total of 19,031 fundus fluorescein angiography (FFA) images (768 × 768 pixels) acquired from 2753 patients between June 2017 and March 2024 to develop our system. The dataset spans nearly seven years and includes diverse and representative cases, ensuring robust training and evaluation for our analysis framework. These FFA images form the primary dataset for the study, aimed at advancing the diagnostic accuracy of retinal and choroidal pathologies.

All FFA images were captured following a standardized protocol to ensure consistency and reproducibility:

Device Setup: The imaging device was calibrated according to the manufacturer’s instructions before each session to ensure accurate image quality and fluorescence detection.

Subject Positioning: Subjects were seated in front of the imaging device in a dimly lit room to minimize external light interference.

Distance Measurement: The imaging device lens was positioned approximately 2–3 centimeters from the subject’s eye, with careful alignment to avoid motion artifacts.

Subject Instructions: Subjects were instructed to look straight ahead, fixating on a specific target, and to keep their eyes as wide open as possible. Adequate guidance was provided to ensure compliance.

Image Focus and Capture: The operator ensured the imaging device was focused on the retina, with particular attention to the regions requiring detailed examination. Multiple images were captured to document both early and late phases of fluorescein dye circulation.

This standardized protocol was followed throughout the data collection period to ensure the uniformity of the imaging process. The dataset was subsequently processed and annotated for training and validation purposes.

Annotation

High-quality FFA images were selected by a blinded technician based on predefined image quality criteria. The criteria included sufficient resolution and clarity to identify retinal vascular details and abnormalities, proper focus to ensure clear visualization of the retina, inclusion of both central and peripheral retinal areas, uniform fluorescence distribution without overexposure, and capture during the appropriate phase of fluorescein dye circulation for diagnostic accuracy. Images that were duplicates, contained motion blur or artifacts, or lacked diagnostic relevance due to incomplete fluorescence filling or premature dye leakage were excluded.

The selected FFA images were annotated by three ophthalmologists with over five years of clinical experience. Initially, two ophthalmologists independently reviewed and labeled each image by referencing electronic medical records, including patient histories, clinical presentations, retinal assessments (eg, fundus examination and OCT findings), laboratory results, and therapeutic responses. A third ophthalmologist subsequently reviewed all the data and confirmed the final labels based on the presence of specific retinal pathologies and macular edema patterns observed on FFA, corroborated by clinical findings and therapeutic outcomes.

Each image was annotated with two sets of labels to achieve precise classification of retinal lesions and macular edema. The first set of labels focused on retinal diseases, where 0 represented no apparent retinal abnormalities (normal); 1 indicated mild to moderate non-proliferative diabetic retinopathy (mild-to-moderate NPDR); 2 corresponded to severe non-proliferative diabetic retinopathy (severe NPDR); 3 identified proliferative diabetic retinopathy (PDR); and 4 referred to post-retinal laser photocoagulation (post-retinal laser). The second set of labels was specifically designed for macular edema, with 0 representing the absence of diabetic macular edema (non-DME) and 1 representing diabetic macular edema (DME). This dual-labeling system comprehensively covered the various pathological types of retinal lesions and macular edema, facilitating more accurate clinical diagnosis and image analysis.
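
For concreteness, the dual-labeling scheme can be summarized as two lookup tables. The Python mapping below is purely illustrative; the identifier names are ours, not taken from the study’s codebase.

```python
# Hypothetical encoding of the study's dual-labeling scheme.
DR_LABELS = {
    0: "Normal (no apparent retinal abnormalities)",
    1: "Mild-to-moderate NPDR",
    2: "Severe NPDR",
    3: "PDR",
    4: "Post-retinal laser photocoagulation",
}
DME_LABELS = {
    0: "Non-DME (no diabetic macular edema)",
    1: "DME (diabetic macular edema)",
}
```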

Image Preprocessing

We employed the Albumentations library for image preprocessing to standardize the input data and enhance the performance of our deep learning model. The input images were resized to a fixed resolution of 256×256 pixels while maintaining three color channels (RGB) to ensure uniform input dimensions across the dataset. The dataset was split into training, validation, and test subsets in a ratio of 7:1.5:1.5 to maintain a balanced representation of classes in each set. The batch size for training was set to 32 to facilitate efficient computation while leveraging GPU acceleration.

All images were transformed into tensor format to ensure compatibility with PyTorch, the deep learning framework used in this study. Following this, normalization was applied using ImageNet-standard mean values of [0.485, 0.456, 0.406] and standard deviations of [0.229, 0.224, 0.225] for the RGB channels. This normalization step standardized the pixel intensity values, aligning them with the distributions expected by the EfficientNet16 architecture and thereby improving model convergence and training stability. These preprocessing steps ensured consistency and optimized the data for robust training and evaluation.
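
A minimal sketch of this preprocessing pipeline, assuming the standard Albumentations API (any training-time augmentations, which the text does not enumerate, are omitted):

```python
import numpy as np
import albumentations as A
from albumentations.pytorch import ToTensorV2

# Resize to 256x256 RGB, apply ImageNet normalization, and convert
# to a PyTorch tensor, as described above.
transform = A.Compose([
    A.Resize(256, 256),
    A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ToTensorV2(),
])

# Demo on a dummy 768x768 RGB frame standing in for an FFA image.
dummy = np.random.randint(0, 256, size=(768, 768, 3), dtype=np.uint8)
tensor = transform(image=dummy)["image"]  # torch.Tensor, shape (3, 256, 256)
```

In the actual pipeline, this transform would be applied inside a PyTorch Dataset and batched by a DataLoader with the reported batch size of 32.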

Development of a Deep Learning System

The deep learning system was developed in two stages, with the primary objective of accurately diagnosing diabetic retinopathy (DR) and its complications from FFA images. In the first stage, the model was trained using EfficientNetB0, which output a 1024-dimensional feature vector. This feature vector was then passed through a fully connected layer to perform a 5-class classification. The categories were defined as follows: 0: no apparent diabetic retinopathy; 1: mild or moderate non-proliferative diabetic retinopathy (NPDR); 2: severe NPDR; 3: proliferative DR with vitreous hemorrhage; 4: post-retinal laser photocoagulation. The model was designed to classify images into these five categories, distinguishing normal retinal conditions, the various stages of DR, and post-treatment status. For images initially classified as “no apparent diabetic retinopathy” (ie, category 0), the system terminates subsequent pathological analysis and directly generates and outputs the diagnostic findings.

In the second stage, the system further specialized by focusing on the images that were classified as “abnormal” in the first stage (ie, categories 1–4, where DR or complications were present). The 1024-dimensional features generated by EfficientNetB0 in the first stage were passed through an additional 1024-dimensional fully connected layer to refine the feature representation. These refined features were then used to perform a 2-class classification, specifically detecting the presence or absence of diabetic macular edema (DME). By narrowing the focus to these more complex cases, the second stage aimed to increase specificity and better detect subtle, clinically significant changes, particularly in identifying DME in the retinal images.
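
The two stages can be sketched as a single network with two heads. Note that this is our reconstruction under stated assumptions: EfficientNet-B0’s pooled features are natively 1280-dimensional, so we assume a linear projection yields the 1024-dimensional vector described above, and all layer and class names are ours.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoStageDRNet(nn.Module):
    """Sketch of the two-stage pipeline: stage 1 grades DR (5 classes);
    stage 2 refines the shared features and detects DME (2 classes)."""

    def __init__(self, num_dr_classes: int = 5):
        super().__init__()
        backbone = models.efficientnet_b0(
            weights=models.EfficientNet_B0_Weights.DEFAULT)
        self.features = backbone.features            # convolutional trunk
        self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling
        self.project = nn.Linear(1280, 1024)         # assumed 1280 -> 1024 projection
        self.dr_head = nn.Linear(1024, num_dr_classes)  # stage 1: DR grading
        self.refine = nn.Sequential(                 # stage 2: extra 1024-d FC layer
            nn.Linear(1024, 1024),
            nn.ReLU(inplace=True),
        )
        self.dme_head = nn.Linear(1024, 2)           # stage 2: DME vs non-DME

    def forward(self, x: torch.Tensor):
        feat = self.project(self.pool(self.features(x)).flatten(1))
        return self.dr_head(feat), self.dme_head(self.refine(feat))

# Inference gating as described: DME detection is reported only for
# images graded as abnormal (categories 1-4) in stage 1.
model = TwoStageDRNet().eval()
with torch.no_grad():
    dr_logits, dme_logits = model(torch.randn(1, 3, 256, 256))
grade = dr_logits.argmax(1).item()
dme = dme_logits.argmax(1).item() if grade != 0 else None
```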

Throughout both stages, all images were pre-processed into 256×256 RGB images. Training utilized the OneCycleLR learning rate scheduler with a starting learning rate of 0.001, and the model was trained for 100 epochs using the Adam optimizer.
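
A matching training-loop sketch (reusing `model` and a `train_loader` built as in the earlier snippets; we read the reported 0.001 as OneCycleLR’s maximum learning rate, since that scheduler is parameterized by `max_lr`):

```python
import torch

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, epochs=100, steps_per_epoch=len(train_loader))

for epoch in range(100):
    for images, labels in train_loader:
        optimizer.zero_grad()
        dr_logits, _ = model(images)      # stage-1 head; stage 2 trains analogously
        loss = criterion(dr_logits, labels)
        loss.backward()
        optimizer.step()
        scheduler.step()                  # OneCycleLR steps once per batch
```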

Once both stages were trained, the model was evaluated on an independent test dataset, unseen during the training process. This final evaluation confirmed the system’s ability to generalize and accurately diagnose diabetic retinal conditions. The two-stage approach, with broad classification in the first stage followed by specialized refinement in the second stage, was designed to optimize the model’s diagnostic performance and reduce errors in classifying diabetic retinopathy and its complications.

Interpretation of AI Diagnosis

To aid clinicians in better comprehending the inferential processes of deep learning models in the classification of DR and DME, we employed Gradient-weighted Class Activation Mapping (Grad-CAM) to generate visual explanations for Fundus Fluorescein Angiography (FFA) images from our test dataset. This technique leverages gradient information with respect to the target class and backpropagates it to the final convolutional layers of the model, producing a localization map that highlights regions of the image deemed most critical for classification decisions. In the resulting heatmaps, the red areas signify regions that the model considers most influential when making its predictions. In the first phase of the task, these red regions correspond to feature areas of high importance for grading DR. In the second phase, they highlight key regions relevant to the diagnosis of DME.
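
A bare-bones Grad-CAM sketch using forward/backward hooks (function and layer names are ours; it targets the last convolutional block of the `TwoStageDRNet` sketch above):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class, target_layer):
    """Weight each activation map by the spatially averaged gradient of
    the target-class score, sum over channels, ReLU, and upsample."""
    store = {}
    h_fwd = target_layer.register_forward_hook(
        lambda m, i, o: store.update(act=o))
    h_bwd = target_layer.register_full_backward_hook(
        lambda m, gi, go: store.update(grad=go[0]))

    dr_logits, _ = model(image)               # forward pass (stage-1 head)
    model.zero_grad()
    dr_logits[0, target_class].backward()     # backprop the class score

    weights = store["grad"].mean(dim=(2, 3), keepdim=True)  # GAP of gradients
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
    h_fwd.remove(); h_bwd.remove()
    return cam[0, 0]  # HxW map; high values render red under a heat colormap

# Example: heatmap for the PDR class (index 3) of a single input tensor.
heatmap = grad_cam(model, torch.randn(1, 3, 256, 256), 3, model.features[-1])
```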

Statistical Analysis

All statistical analyses in this study were performed using the Python programming language (version 3.8.5), with key statistical computations carried out using the scikit-learn library. To comprehensively evaluate the model performance, we calculated several commonly used classification metrics, including accuracy, area under the curve (AUC), precision, recall (sensitivity), F1 score, and Cohen’s kappa. Additionally, a confusion matrix was employed to further assess the model’s classification performance across different categories, helping to visually analyze the prediction accuracy and revealing the distribution of false positives (FP) and false negatives (FN), thereby identifying the model’s performance in each category and potential misclassification issues.
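
A sketch of these metric computations with scikit-learn, on synthetic stand-in data; the text does not state the averaging scheme for the multi-class precision/recall/F1, so `average="weighted"` below is an assumption:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, f1_score, precision_score,
                             recall_score, roc_auc_score)

# Synthetic stand-ins: 200 samples, 5 DR classes.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 5, size=200)
y_prob = rng.dirichlet(np.ones(5), size=200)   # rows sum to 1
y_pred = y_prob.argmax(axis=1)

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "auc": roc_auc_score(y_true, y_prob, multi_class="ovr"),  # one-vs-rest AUC
    "precision": precision_score(y_true, y_pred, average="weighted"),
    "recall": recall_score(y_true, y_pred, average="weighted"),
    "f1": f1_score(y_true, y_pred, average="weighted"),
    "kappa": cohen_kappa_score(y_true, y_pred),
}
cm = confusion_matrix(y_true, y_pred)  # rows: true class, columns: predicted
```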

Results

Image Data Characteristics

The study was divided into two phases: the first phase involved the grading of diabetic retinopathy, and the second phase focused on the identification of diabetic macular edema. The study design is depicted in Figure 1. A total of 19,031 images were included for model development and validation. In the first phase, all images were categorized into five classes: Normal, Mild-Moderate NPDR, Severe NPDR, PDR, and Post-Retinal Laser. The numbers of images in the training, internal validation, and external test datasets were 13,321, 2854, and 2856, respectively. In the second phase, all images were divided into two categories: non-DME and DME. The numbers of images in the training, internal validation, and external test datasets were 10,683, 2289, and 2290, respectively. A summary of the detailed information of all datasets is presented in Table 1.

Table 1 Characteristics of the Dataset

Figure 1 Overview of the experimental workflow and model architecture. (A) The experimental workflow of the entire experiment: Stage 1 for grading diabetic retinopathy (DR) and Stage 2 for diagnosing diabetic macular edema (DME). (B) Architecture of the EfficientNet-B0 model.

Evaluation of the First-Stage Model Performance

In the first stage of the task, the model demonstrated high accuracy and discriminative ability for the grading of DR. The accuracy on the validation set was 0.7257, with an AUC of 0.9139, a precision of 0.7017, a recall of 0.7006, an F1 score of 0.6987, and a Kappa coefficient of 0.6239. The performance on the test set was slightly lower, with an accuracy of 0.7036, an AUC of 0.9062, a precision of 0.7055, a recall of 0.7036, an F1 score of 0.7023, and a Kappa coefficient of 0.6271, as shown in Table 2. The ROC curve and confusion matrix for the external test set further confirmed the model’s strong discriminative ability, with AUC values for multiple categories approaching or exceeding 0.9, as shown in Figure 2A. The confusion matrix provided a more detailed view of the model’s grading performance. The values along the diagonal, representing correctly classified samples, included 315 Normal cases, 353 Mild-Moderate NPDR cases, 232 Severe NPDR cases, 421 PDR cases, and 480 Post-Retinal Laser cases. These prominent diagonal values demonstrated the model’s capability to correctly identify different DR stages. The off-diagonal values, representing misclassified samples, were relatively low for each stage of DR, as illustrated in Figure 2C.

Table 2 Model Performance Evaluation on the Validation and Test Set

Figure 2 ROC curves and confusion matrices on the test set. (A) ROC curve for grading of Diabetic Retinopathy (DR). (B) ROC curve for diagnosis of Diabetic Macular Edema (DME). (C) Confusion matrix for grading of Diabetic Retinopathy (DR), Class labels: 0 = Normal; 1 = Mild-Moderate NPDR; 2 = Severe NPDR; 3 = PDR; 4 = Post-Retinal Laser. (D) Confusion matrix for diagnosis of Diabetic Macular Edema (DME), Class labels: 0 = Non-DME; 1 = DME.

Evaluation of the Second Stage Model Performance

In the second stage of the task, the model’s objective was to identify diabetic macular edema (DME). The accuracy on the validation set was 0.7558, with an AUC of 0.7797, a precision of 0.7559, a recall of 0.7558, an F1 score of 0.7558, and a Kappa coefficient of 0.3968. The performance on the test set was slightly lower, with an accuracy of 0.7258, an AUC of 0.7530, a precision of 0.7184, a recall of 0.7258, an F1 score of 0.7214, and a Kappa coefficient of 0.3293, as shown in Table 2. These results indicate that the model achieved moderate performance in the DME identification task. The ROC curve on the external test set illustrated the model’s discriminative ability in the binary classification task, with an AUC of 0.75; although lower than in the first phase, this still represents a moderate level of performance, as shown in Figure 2B. The confusion matrix for the external test set revealed some errors in the model’s ability to identify DME. Of the 1600 non-DME cases, the model correctly identified 1321 but misclassified 279 as DME. Of the 690 DME cases, it correctly identified 341 but erroneously classified 349 as non-DME, as illustrated in Figure 2D.
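
Taken together, these counts imply a specificity of 1321/1600 ≈ 0.83 for non-DME and a sensitivity of 341/690 ≈ 0.49 for DME, indicating that missed DME cases, rather than false alarms, are the dominant error mode in the second stage.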

Heatmap Visualization for Deep Learning Explainability

We employed heatmaps to visualize the regions that most influence the model’s decision-making process. In the first-stage task, for mild, moderate, and severe NPDR as well as PDR, the heatmaps predominantly highlight the fluorescent areas in the FFA images, as shown in Figure 3(a1-a4); the corresponding original FFA images are presented in Figure 3(A1-A4). For post-retinal laser images, the heatmaps focus on the laser spots, as depicted in Figure 3(a5), with the respective original image in Figure 3(A5). In the second-stage task, the heatmaps are primarily concentrated in the macular region, as illustrated in Figure 3(b1-b5), with the corresponding original images shown in Figure 3(B1-B5).

Figure 3 Heatmap visualization for deep learning explainability. The red areas indicate regions that significantly influence the model’s decision-making process. First-stage task (A, a) and second-stage task (B, b). Normal (A1, a1); Mild-Moderate NPDR (A2, a2); Severe NPDR (A3, a3); PDR (A4, a4); Post-Retinal Laser (A5, a5); DME (B1, B2, B3, b1, b2, b3); Non-DME (B4, B5, b4, b5).

Discussion

In this study, we employed the EfficientNetB0 model to implement an end-to-end DR diagnostic pipeline. First, a five-class classification model was used to categorize FFA images into normal, mild-moderate NPDR, severe NPDR, PDR, and post-retinal laser treatment categories. Subsequently, for images identified as abnormal in the first stage, additional fully connected layers were utilized for further analysis to detect the presence of DME, thus achieving a complete automated workflow from image input to the diagnosis of DR and its complications.

In previous studies, deep learning techniques based on optical coherence tomography (OCT) image analysis have been extensively applied to the diagnosis of diabetic retinopathy (DR) and diabetic macular edema (DME). The OCT-NET model proposed by Perdomo et al17 achieved an accuracy of 93% and an area under the curve (AUC) of 0.99 in the classification of diabetes-related retinal diseases. The CNN-RNN hybrid deep learning model developed by Rodríguez-Miguel et al18 detected DME by analyzing complete OCT cube images and achieved an AUROC of 0.94 in real-world screening programs. Although OCT has significant advantages in retinal structural imaging, it remains limited in detecting dynamic changes such as retinal vascular leakage and neovascularization. In contrast, FFA can dynamically display retinal vascular leakage and neovascularization, giving it unique advantages in the early diagnosis and proliferative-phase assessment of DR. Therefore, the FFA-based deep learning system developed in this study not only automatically identifies different stages of DR and DME but also has the potential to be combined with OCT and OCT angiography, which is expected to promote multimodal data fusion and improve the accuracy and comprehensiveness of diagnosis.

In traditional DR diagnostic models, non-end-to-end approaches were commonly used, relying on multiple independent steps and different models to handle various diagnostic tasks for DR.10,19 This approach was not only time-consuming but also required the involvement of multiple specialists, increasing the complexity and cost of the diagnostic process. In contrast, this study employed an end-to-end model, using the EfficientNetB0 network to simultaneously process both DR grading and macular edema classification tasks.20 The model automatically extracted features from FFA images and performed joint optimization, providing direct output for DR grading and macular edema diagnosis. This integrated approach not only improved diagnostic efficiency but also reduced the potential for human error, ensuring the accuracy and consistency of the results.

In clinical practice, doctors in functional examination rooms face demanding and highly complex tasks.21 They are required to acquire high-quality FFA images within a limited time frame, while also performing detailed analysis of these images and writing accurate diagnostic reports. This dual workload often leads to increased stress, especially when the number of patients is high, potentially impairing work efficiency. Prolonged high-intensity work not only results in physical and mental fatigue but may also increase the risk of missed or incorrect diagnoses, thus affecting timely diagnosis and treatment, and ultimately influencing patient prognosis and quality of life.22,23 At the same time, DR has become one of the leading causes of adult blindness in China, and its prevalence continues to rise as the number of diabetic patients increases annually, putting greater diagnostic pressure on clinicians.24,25 To address this challenge, this study developed a system capable of rapidly and accurately processing and analyzing FFA images. This system not only automatically identifies different stages of diabetic retinopathy but also detects diabetic retinal complications, significantly reducing the time doctors spend on image interpretation, thus allowing them to focus more on patient communication and clinical decision-making.

Deep learning models are often regarded as “black boxes” due to their opaque and hard-to-interpret decision-making processes, which limits their widespread clinical application and reduces doctors’ trust in them. To address this issue, this study introduced the Grad-CAM technique to enhance model interpretability. By generating heatmaps, Grad-CAM clearly visualizes the key regions that the model focuses on during FFA image classification, thereby revealing the rationale behind the model’s decisions.26 This visualization not only helps clinicians understand the model’s reasoning process and enhances their trust in the diagnostic results but also aids in identifying and addressing potential model flaws, ultimately improving the accuracy and reliability of the model.

This study has several limitations. Firstly, the dataset was obtained from a single center, which limited the diversity of patient ethnicity and imaging devices. This limitation may compromise the generalizability of the study results to other populations and clinical settings. Secondly, the dataset only included FFA images of patients with diabetic retinopathy (DR) and lacked data from other retinal disease types. This may restrict the model’s applicability to a broader range of diseases. Furthermore, the current method focused on the annotation of single FFA images and did not involve the analysis of time-series data, which precluded the capture of dynamic disease progression. In future research, we will expand the disease types, incorporate multicenter and multimodal data (such as OCT and color fundus photography), and introduce time-series analysis to further enhance the model’s generalizability and clinical value.

Conclusion

In conclusion, this study successfully developed an end-to-end diabetic retinopathy (DR) diagnostic system based on the EfficientNetB0 model. The system first utilizes a five-class classification model to grade FFA images, and then applies additional fully connected layers to detect diabetic macular edema (DME) in images identified as abnormal. This approach enables accurate and efficient diagnosis of DR and DME, offering effective support in busy clinical settings.

Data Sharing Statement

Due to privacy concerns, access to certain sensitive data may be restricted. To request access to this data, please contact the first author, Fan Gan, at the following Email address: [email protected].

Ethics Statement

This study adheres to the tenets of the Declaration of Helsinki and was approved by the Ethics Committee of the Affiliated Eye Hospital, Jiangxi Medical College, Nanchang University (YLP20220802). Given the retrospective nature of the analysis, the requirement for informed consent was waived. All patient data were anonymized and maintained in strict confidentiality to safeguard privacy.

Author Contributions

All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.

Funding

This work was supported by grants from the Health Commission of Jiangxi Province [grant numbers 2023ZD004 (ZPY)].

Disclosure

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

1. Baltă F, Cristescu I-E, Mirescu A-E, et al. Investigation of retinal microcirculation in diabetic patients using adaptive optics ophthalmoscopy and optical coherence angiography. J Diabetes Res. 2022;2022:1516668.

2. Graves LE, Donaghue KC. Management of diabetes complications in youth. Ther Adv Endocrinol Metab. 2019;10:2042018819863226.

3. Rivera JC, Dabouz R, Noueihed B, et al. Ischemic retinopathies: oxidative stress and inflammation. Oxid Med Cell Longev. 2017;2017(1):3940241.

4. Shah CP, Chen C. Review of therapeutic advances in diabetic retinopathy. Ther Adv Endocrinol Metab. 2011;2(1):39.

5. Jorge EC, Jorge EN, Botelho M, et al. Monotherapy laser photocoagulation for diabetic macular oedema. Cochrane Database Syst Rev. 2018;2018(10):CD010859.

6. Rodríguez ML, Pérez S, Mena-Mollá S, et al. Oxidative stress and microvascular alterations in diabetic retinopathy: future therapies. Oxid Med Cell Longev. 2019;2019:4940825.

7. Courtie E, Veenith T, Logan A, et al. Retinal blood flow in critical illness and systemic disease: a review. Ann Intensive Care. 2020;10(1):152.

8. Lee D-H, C YH, Bae SH, et al. Risk factors for retinal microvascular impairment in type 2 diabetic patients without diabetic retinopathy. PLoS One. 2018;13(8):e0202103.

9. Ishibazawa A, Nagaoka T, Takahashi A, et al. Optical coherence tomography angiography in diabetic retinopathy: a prospective pilot study. Am J Ophthalmol. 2015;160(1):35–44.e1.

10. Gao Z, Pan X, Shao J, et al. Automatic interpretation and clinical evaluation for fundus fluorescein angiography images of diabetic retinopathy patients by deep learning. Br J Ophthalmol. 2023;107(12):1852–1858.

11. Ye T, Xue J, He M, et al. Psychosocial factors affecting artificial intelligence adoption in health care in china: cross-sectional study. J Med Internet Res. 2019;21(10):e14316.

12. Akpinar MH, Sengur A, Faust O, et al. Artificial intelligence in retinal screening using OCT images: a review of the last decade (2013-2023). Comput Methods Programs Biomed. 2024;254:108253.

13. Li B, Chen H, Yu W, et al. The performance of a deep learning system in assisting junior ophthalmologists in diagnosing 13 major fundus diseases: a prospective multi-center clinical trial. NPJ Digit Med. 2024;7(1):1–11.

14. Dai L, Sheng B, Chen T, et al. A deep learning system for predicting time to progression of diabetic retinopathy. Nat Med. 2024;30(2):584–594.

15. Qi Z, Li T, Chen J, et al. A deep learning system for myopia onset prediction and intervention effectiveness evaluation in children. NPJ Digit Med. 2024;7(1):206.

16. Tan M, Le QV. EfficientNet: rethinking model scaling for convolutional neural networks. arXiv:1905.11946; 2020.

17. Perdomo O, Rios H, Rodríguez FJ, et al. Classification of diabetes-related retinal diseases using a deep learning approach in optical coherence tomography. Comput Methods Programs Biomed. 2019;178:181–189.

18. Rodríguez-Miguel A, Arruabarrena C, Allendes G, et al. Hybrid deep learning models for the screening of diabetic macular edema in optical coherence tomography volumes. Sci Rep. 2024;14(1):17633.

19. Wang X, Shen P, Zhao G, et al. An enhanced machine learning algorithm for type 2 diabetes prognosis with a detailed examination of Key correlates. Sci Rep. 2024;14(1):26355.

20. Baskaran L, Maliakal G, Al’Aref SJ, et al. Identification and quantification of cardiovascular structures from CCTA: an end-to-end, rapid, pixel-wise, deep-learning method. JACC Cardiovasc Imaging. 2020;13(5):1163–1171.

21. Wang H, Zheng X, Liu Y, et al. Alleviating doctors’ emotional exhaustion through sports involvement during the COVID-19 pandemic: the mediating roles of regulatory emotional self-efficacy and perceived stress. Int J Environ Res Public Health. 2022;19(18):11776.

22. Johnson J, Ford E, Yu J, et al. Peer support: a needs assessment for social support from trained peers in response to stress among medical physicists. J Appl Clin Med Phys. 2019;20(9):157.

23. Zhao S, He Y, Qin J, et al. A semi-supervised deep learning method for cervical cell classification. Anal Cell Pathol Amst. 2022;2022:4376178.

24. Yu Y, Ren K-M, Chen X-L. Expression and role of P-element-induced wimpy testis-interacting RNA in diabetic-retinopathy in mice. World J Diabetes. 2021;12(7):1116.

25. Zhang L-Y, Liu Q-L, Yick K-L, et al. Analysis of diabetic foot deformation and plantar pressure distribution of women at different walking speeds. Int J Environ Res Public Health. 2023;20(4):3688.

26. Selvaraju RR, Cogswell M, Das A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV); 2017.
