Diabetic retinopathy (DR) is the leading cause of blindness among people aged 20 to 64 and afflicts more than 120 million people worldwide. Fortunately, vigilant monitoring greatly improves the chances of preserving eyesight. This work used deep learning to analyze retinal fundus images for automated diagnosis of DR on a grading scale from 0 (normal) to 4 (severe). We achieved a substantial improvement in accuracy over traditional approaches and continued to advance by using a small auxiliary dataset that provided low-effort, high-value supervision.

The training and test data, provided by the 2015 Kaggle Data Science Competition, comprised over 80,000 high-resolution images of more than 4 megapixels each and required Amazon EC2 scalability to supply the GPU hardware needed to train a convolutional network with over 2 million parameters. For the competition, we focused on accurately modeling the scoring system, penalizing severe mistakes more heavily, and combating the over-prevalence of grade-0 examples in the dataset. We explored ideas first at low resolution on low-cost single-GPU instances; after finding the best methodology, we showed it scaled to equivalent improvements at high resolution, making more effective use of the more expensive quad-GPU instances. This prototype model placed 15th out of 650 teams worldwide with a kappa score of 0.78.

We have since advanced the model with a new architecture that integrates the prototype with a second network specialized in finding dot hemorrhages, which are critical to identifying early DR. By annotating a small set of 200 images for hemorrhages, performance jumped to a kappa of 0.82. We believe strategies that employ a modest amount of extra supervision for more effective learning are pivotal for addressing deep learning's greatest weakness: its voracious appetite for data.
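To make "accurately modeling the scoring system" concrete: the competition scored submissions with quadratic weighted kappa, which charges far more for predictions that miss the true grade by several levels than for near-misses. The sketch below shows that metric plus one common way to counter the over-prevalence of grade-0 examples (inverse-frequency class weights); the weighting scheme is an illustrative assumption, not necessarily the exact rebalancing we used.

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic weighted kappa: predicting grade 0 for a grade-4 eye is
    penalized far more than predicting grade 3 for that same eye."""
    y_true = np.asarray(y_true, dtype=int)
    y_pred = np.asarray(y_pred, dtype=int)

    # Observed agreement (confusion matrix).
    O = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1

    # Quadratic penalty weights: (i - j)^2 / (N - 1)^2.
    i, j = np.meshgrid(np.arange(n_classes), np.arange(n_classes), indexing="ij")
    W = (i - j) ** 2 / (n_classes - 1) ** 2

    # Expected agreement under chance, from the marginal grade histograms.
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()

    return 1.0 - (W * O).sum() / (W * E).sum()

def inverse_frequency_weights(labels, n_classes=5):
    """Illustrative class weights that up-weight rare grades so grade-0
    images do not dominate every mini-batch."""
    counts = np.maximum(np.bincount(labels, minlength=n_classes), 1).astype(float)
    weights = counts.sum() / (n_classes * counts)
    return weights
```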
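The abstract does not spell out how the prototype and the hemorrhage-specialized network are combined, so the following is a minimal PyTorch sketch assuming a late-fusion design: features from both networks are concatenated and fed to a small classifier head. The class name `FusedDRGrader` and the layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

class FusedDRGrader(nn.Module):
    """Hypothetical fusion of the prototype grading CNN with an auxiliary
    network trained on a small set of images annotated for dot hemorrhages."""

    def __init__(self, grading_backbone: nn.Module, hemorrhage_net: nn.Module,
                 grading_dim: int, hemorrhage_dim: int, n_grades: int = 5):
        super().__init__()
        self.grading_backbone = grading_backbone  # prototype CNN feature extractor
        self.hemorrhage_net = hemorrhage_net      # specialist for dot hemorrhages
        self.classifier = nn.Sequential(
            nn.Linear(grading_dim + hemorrhage_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, n_grades),
        )

    def forward(self, fundus_image):
        global_features = self.grading_backbone(fundus_image)
        hemorrhage_features = self.hemorrhage_net(fundus_image)
        fused = torch.cat([global_features, hemorrhage_features], dim=1)
        return self.classifier(fused)  # logits over grades 0-4
```

The appeal of this kind of design is that the hemorrhage network can be trained on only a couple hundred annotated images, yet its features inject exactly the early-DR signal the global grader tends to miss.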