(credit: Ivan Evtimov et al. / Thinkstock)
Progress in the field of machine vision is one of the most important factors in the rise of the self-driving car.

An autonomous vehicle has to be able to sense its environment and react appropriately.

Free space has to be calculated, solid objects avoided, and any and all of the instructions we helpfully leave everywhere—painted on the tarmac or posted on signs—have to be obeyed.
Deep neural networks turned out to be pretty good at classifying images, but it’s still worth remembering that the process is quite unlike the way humans identify images, even if the end results are fairly similar.
I was reminded of that once again this morning when reading about a method of spoofing road signs.
It’s a technique that just looks like street art to you or me, but it completely changes the meaning of a stop sign to the machine reading it.
No actual self-driving cars were harmed in this study
First off, it’s important to note that the paper is a proof-of-concept; no actual automotive-grade machine vision systems were used in the test.

Covering your local stop signs in strips of black and white tape is not going to lead to a sudden spate of car crashes today.
Ivan Evtimov—a grad student at the University of Washington—and some colleagues first trained a deep neural network to recognize different US road signs.

Then they created an algorithm that generates alterations to the signs that look innocuous to human eyes but change a sign's meaning when it is read by the AI classifier they had just trained.
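The core idea behind attacks like this can be illustrated with a much simpler gradient-based perturbation. To be clear, the sketch below is not the authors' actual method (their algorithm optimizes physically robust sticker masks for a deep network); it is a minimal toy in the spirit of the fast gradient sign method, using a made-up linear "sign classifier" so the whole thing is self-contained. Every name and number here is an assumption for illustration only.

```python
import numpy as np

# Toy illustration of a gradient-sign adversarial perturbation.
# NOT the paper's algorithm: the real attack optimizes robust sticker
# masks against a deep network. Here, a pretend-trained linear
# classifier over flattened 8x8 grayscale images stands in for it.

rng = np.random.default_rng(0)

w = rng.normal(size=64)  # hypothetical "trained" weights
b = 0.0

def predict(x):
    """Return 1 ('speed limit') if the score is positive, else 0 ('stop')."""
    return int(w @ x + b > 0)

# Start from an image the classifier reads as a stop sign (class 0).
x = rng.normal(size=64)
if predict(x) == 1:
    x = -x  # flip the image so we begin at class 0

# For a linear model, the gradient of the score w.r.t. the input is
# just w, so nudging every pixel by eps * sign(w) raises the score
# fastest per unit of max pixel change. eps is deliberately large so
# the label flip is guaranteed in this toy; real attacks keep it
# small enough that humans barely notice the change.
eps = 1.0
x_adv = x + eps * np.sign(w)

print(predict(x), predict(x_adv))  # the label flips from 0 to 1
```

The unsettling part, which the stop-sign work demonstrates physically, is that the perturbation exploits the classifier's decision geometry rather than anything a human would consider meaningful, which is why strips of tape can do to a deep network what graffiti cannot do to a driver.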