For any number of diseases involving the skin, research into causes and cures requires reliably isolating and quantifying the proportion of affected skin, one research subject after another, the more the better.
This is achieved with medical photography, computer monitors, and mouse-dragging by a research dermatologist to carefully demarcate affected areas. With areas of interest highlighted, software does the final step of quantifying the proportion of affected skin.
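As a concrete illustration of that final step (not the study's own software; the function and mask names here are hypothetical), the quantification can amount to counting pixels in two binary masks, one marking skin and one marking the demarcated disease area. A minimal Python sketch, assuming the demarcation tool exports such masks:

    import numpy as np

    def affected_fraction(skin_mask, lesion_mask):
        """Fraction of skin pixels that fall inside the demarcated (affected) region.

        skin_mask   -- boolean array, True wherever a pixel shows skin
        lesion_mask -- boolean array, True wherever the evaluator highlighted disease
        """
        skin = skin_mask.astype(bool)
        lesion = lesion_mask.astype(bool) & skin   # ignore highlights that fall outside the skin
        if skin.sum() == 0:
            return 0.0
        return float(lesion.sum()) / float(skin.sum())

    # Toy example: 12 skin pixels, 3 of them highlighted -> 25 percent affected
    skin = np.zeros((4, 4), dtype=bool)
    skin[:, :3] = True
    lesion = np.zeros_like(skin)
    lesion[0, :3] = True
    print(f"{affected_fraction(skin, lesion):.0%}")   # prints 25%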
Massive sets of relevant medical photographs are available for research, filed away in hospitals and clinics, but “the time and expense involved in having experts endlessly pore over these images is a major impediment, and from one study or one expert to the next the consistency in the application of the relevant visual evaluation scales tends to be poor,” said Eric Tkaczyk, MD, PhD, assistant professor of dermatology and biomedical engineering.
Artificial intelligence is poised to provide, at a fraction of the cost, speedy and consistent automated interpretation of such images. However, machine learning for these evaluations will require prodigious numbers of reliably annotated images, amassed as training sets.
“A solution for economically generating the needed training sets could streamline research into a host of diseases and conditions and benefit patient evaluation to boot. We wondered, particularly with today’s gig economy, what sorts of results might be achieved by giving non-experts a few pointers and letting them demarcate images in a web interface. How might pooled non-expert evaluations stack up against expert evaluation?” Tkaczyk asked.
He and Daniel Fabbri, PhD, assistant professor of biomedical informatics, and colleagues test this notion in a new crowdsourcing study appearing in Skin Research & Technology. Joining Tkaczyk and Fabbri were Benoit Dawant, Cornelius Vanderbilt Professor of Engineering and director of the Vanderbilt Institute for Surgery and Engineering; electrical engineering Ph.D. student Jianing Wang; computer science Ph.D. student Cheng Ye; biomedical engineering research assistant Fuyao Chen; Madan Jagasia, MBBS, MSCI; and Joseph Coco, application developer in biomedical informatics.
They tested crowd worker evaluation of a sometimes deadly condition: chronic graft-versus-host disease (cGVHD).
Skin is the most commonly affected organ in cGVHD, which is the leading long-term cause of morbidity and mortality (other than cancer relapse) following stem cell transplantation.
The study used 41 three-dimensional (3D) photographs of cGVHD patients. The visible burden of cGVHD in these images was first highlighted by a board-certified dermatologist with a particular interest in the disease.
Seven crowd workers, in this case medical students and nurses, were given two-dimensional projections of the 3D images and were asked to emulate the expert, based on a slide presentation about cGVHD and a small set of marked-up images as guiding examples.
The researchers evaluated the crowd's work pixel by pixel. After discarding the extremes, the evaluations that highlighted the fewest and the most pixels, the pooled evaluations of four crowd workers reached a median accuracy of 76 percent across 410 images, measured as the pixel-by-pixel match with the expert's evaluation.
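The study's exact pooling and scoring pipeline is not reproduced here, but the bookkeeping it implies, combining several workers' markings into one consensus mask and scoring that mask pixel by pixel against the expert's, can be sketched as follows. The per-pixel majority-vote rule and all names are illustrative assumptions, not the paper's published method.

    import numpy as np

    def pool_masks(worker_masks):
        """Combine workers' binary masks into one consensus mask.

        Illustrative rule (an assumption, not necessarily the paper's): a pixel
        counts as affected when at least half of the workers highlighted it.
        """
        stack = np.stack([m.astype(bool) for m in worker_masks])
        return stack.mean(axis=0) >= 0.5

    def pixel_accuracy(pooled, expert):
        """Share of pixels on which the pooled crowd mask agrees with the expert mask."""
        return float((pooled.astype(bool) == expert.astype(bool)).mean())

    # Toy example: three workers and one expert on a 2x3 image
    expert = np.array([[1, 1, 0], [0, 0, 0]], dtype=bool)
    workers = [np.array([[1, 1, 0], [0, 0, 0]], dtype=bool),
               np.array([[1, 0, 0], [0, 0, 0]], dtype=bool),
               np.array([[1, 1, 1], [0, 0, 0]], dtype=bool)]
    consensus = pool_masks(workers)
    print(pixel_accuracy(consensus, expert))   # 1.0: the consensus matches the expert exactly

    # A study-style summary across many images would then be the median of the per-image scores:
    # np.median([pixel_accuracy(pool_masks(ws), ex) for ws, ex in zip(all_worker_masks, all_expert_masks)])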
“This places this group of crowd workers, as a collective, very much on a par with expert evaluation for cGVHD,” Tkaczyk said. “Our results establish that crowdsourcing could aid machine learning in this realm, which stands to benefit research and clinical evaluation of this disease.”
The study was supported by grants from the U.S. Department of Veterans Affairs (CX001785) and the National Institutes of Health (CA090652, CA203708).