The new face of medicine?
While there are still many visual tasks where humans perform better than computers, computers are catching up. Part of the reason for computers’ progress has been the development of what are called “deep neural networks,” which chain together multiple layers of analysis.

These have significantly boosted computers’ performance in a variety of visual challenges.
The latest example of this progress comes in a rather significant field: medical diagnosis.

A group of Stanford researchers has trained one of Google’s deep neural networks on a massive database of images that show skin lesions.

By the end, the neural network was competitive with dermatologists when it came to diagnosing cancers using images. While the tests done in this paper don’t fully represent the challenges a specialist would face, it’s still an impressive improvement in computer performance.
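The paper's actual pipeline isn't reproduced here, but the general transfer-learning recipe behind results like this — keep a pretrained network's feature-extracting layers frozen and train only a small new classification head on the medical images — can be sketched on toy data. Everything below (the simulated "frozen extractor," the data, the learning rate) is illustrative, not the researchers' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained, frozen feature extractor:
# a fixed projection plus nonlinearity that is never updated during training.
n_samples, n_pixels, n_features = 200, 64, 16
W_frozen = rng.normal(size=(n_pixels, n_features))

X = rng.normal(size=(n_samples, n_pixels))       # toy "images"
feats = np.tanh(X @ W_frozen)                    # frozen features

# Toy binary labels (think benign vs. malignant), made learnable by construction.
w_true = rng.normal(size=n_features)
y = (feats @ w_true > 0).astype(float)

# Train only the new head: logistic regression by gradient descent.
w, b = np.zeros(n_features), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # predicted probabilities
    w -= 0.5 * (feats.T @ (p - y) / n_samples)   # gradient step on weights
    b -= 0.5 * np.mean(p - y)                    # gradient step on bias

acc = np.mean((p > 0.5) == y)
print(f"training accuracy of the head: {acc:.2f}")
```

The point of the recipe is data efficiency: the frozen layers already encode general visual features, so only the small head has to be fit to the comparatively scarce labeled medical images.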
"Deep neural network" may sound like a jargony buzzword, but the design is inspired in part by how we think the brain works.

The brain’s visual system uses different clusters of neurons to extract specific features of a visual scene.

This information is gradually integrated to create a picture.

The neural network used here, GoogLeNet Inception v3, has a similar architecture. You can view it as a long assembly line, except that any given stage may have multiple image classifiers operating in parallel.
Periodically, these parallel tracks are merged, and the results are then split apart again.

According to a diagram of the system included in the new paper, information can be processed by as many as 70 individual stages before reaching the end of the system (or as few as 33, if it goes down alternate paths).
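The merge-and-split pattern described above is the core of an Inception-style block: several branches process the same input in parallel, and their outputs are merged by concatenation along the channel axis. Here is a toy NumPy sketch of that pattern (real Inception branches use 1x1/3x3/5x5 convolutions and pooling; the dense projections and branch widths below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def branch(x, out_channels):
    """One parallel track: a simple channel-mixing projection plus ReLU,
    standing in for a convolutional branch."""
    w = rng.normal(size=(x.shape[-1], out_channels))
    return np.maximum(x @ w, 0.0)

def inception_like_block(x, widths=(8, 16, 4)):
    """Run several branches in parallel on the same input, then merge
    their outputs by concatenating along the channel axis."""
    return np.concatenate([branch(x, w) for w in widths], axis=-1)

x = rng.normal(size=(5, 32))   # 5 toy inputs, 32 channels each
merged = inception_like_block(x)
print(merged.shape)            # (5, 28): the 8 + 16 + 4 branch channels, merged
```

Stacking many such blocks is what produces the varying path lengths the paper's diagram shows: an input that passes through the widest branch of every block traverses more processing stages than one routed through the shortest branches.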