He wasn’t there again today, I do wish he would go away
How much like a face does an image have to be to trick the standard Viola-Jones face detection algorithm? Not very much, it turns out.
Two researchers from the University of California, Berkeley, have spoofed the algorithm into recognising a handful of dots, barely recognisable as an image, as a human face.
Another image, more dense but still random, fooled the algorithm 97 per cent of the time even after it had been printed, and then scanned with a camera.
Viola-Jones works on greyscale images, comparing pixel intensity between rectangular regions. It runs in a loop that starts with a few weak classifiers over large rectangles, iterating towards more classifiers and smaller rectangles until it decides whether or not the image contains a face. Wikipedia outlines it here.
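The rectangle comparisons that Viola-Jones relies on are made cheap by the integral image: once a cumulative sum table is built, the intensity sum inside any rectangle costs four lookups, whatever its size. Below is a minimal sketch of that trick and one Haar-like "edge" feature (left half minus right half); it is an illustration of the underlying idea, not the full cascade.

```python
import numpy as np

def integral_image(img):
    """Cumulative sum table: ii[r, c] = sum of img[0..r, 0..c]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    """Sum of intensities in a rectangle, via four integral-image lookups."""
    total = ii[top + height - 1, left + width - 1]
    if top > 0:
        total -= ii[top - 1, left + width - 1]
    if left > 0:
        total -= ii[top + height - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_feature(ii, top, left, height, width):
    """A Haar-like 'edge' feature: left half minus right half."""
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))

# Toy 4x4 greyscale image: bright left half, dark right half.
img = np.array([[9, 9, 0, 0]] * 4, dtype=np.int64)
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 4, 4))  # → 72, a strong edge response
```

A weak classifier in the cascade is just such a feature value compared against a learned threshold; the cascade chains many of them, rejecting non-faces early.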
Michael McCoyd and David Wagner’s work, presented on arXiv here, isn’t about to get you through security, but that is the ultimate aim.
As the researchers write, however, the goal is to spoof a facial recognition system “so the system sees an authorised person who is not actually there and allows the attacker through”.
Not only that, but as one of their example images below shows, if you carried these images past a human guard, they’d be unlikely to notice you were trying to spoof your way past the facial recognition system.
Look closely, because even the dots to the left pass the algorithm. Michael McCoyd and David Wagner
The paper continues: “Many variants of this attack scenario are possible.
Instead of a printed image, the attacker could carry a flat-panel display as part of the cover of a notebook, which might allow finer control over the displayed image.”
Similarly, they envisage a spoofing attack that lets someone access a face-protected machine (such as a laptop) with a printout.
To achieve this, the researchers made random pixel changes to a face image and applied their own machine learning to the result, asking: how far could they vary the image before their Viola-Jones implementation failed to detect a face? Whenever detection failed, they reverted the last change and tried a different pixel.
“Our attack procedure has two parts: a search routine picks a suitable attack image while an oracle evaluates that attack image.
The search is very simple. We have a loop that picks a random pixel, changes its intensity halfway closer to the intensity of the corresponding pixel in the cover image, but rejects the change if the oracle says the face is no longer detected.”
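The loop the researchers describe can be sketched in a few lines. This is an illustrative reconstruction under assumptions, not the paper's code: `oracle` stands in for any face-detected predicate (in the paper, a Viola-Jones implementation), and images are plain 2D lists of greyscale intensities.

```python
import random

def attack_search(cover, start, oracle, iterations=10_000, seed=0):
    """Nudge random pixels of a detected attack image halfway toward the
    cover image, keeping each change only if the oracle still sees a face.
    cover/start: 2D lists of greyscale intensities; oracle: hypothetical
    face-detection predicate returning True if a face is detected."""
    rng = random.Random(seed)
    attack = [row[:] for row in start]           # working copy
    h, w = len(attack), len(attack[0])
    for _ in range(iterations):
        y, x = rng.randrange(h), rng.randrange(w)
        old = attack[y][x]
        attack[y][x] = (old + cover[y][x]) // 2  # move halfway toward cover
        if not oracle(attack):                   # face lost? revert the change
            attack[y][x] = old
    return attack
```

With a permissive oracle the attack image converges toward the cover image; with a strict one, rejected changes are simply rolled back, so the image only drifts as far as detection allows.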
An example of some of the images that were detected as faces is below (top row).
Feed the top row to Viola-Jones and it sees faces; printing and scanning turns the images into a “miss”
These images only worked if fed directly to the algorithm; if they were printed out and scanned into the algorithm, they failed.
The paper notes that running the images through the physical world changes them in at least seven ways: it brightens the image centre, adds noise, adds Gaussian blur, reduces dark contrast, and replicates pixels; two further effects the researchers didn’t model are changes in alignment and barrel distortion.
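The five modelled effects amount to a distortion channel that the attack images must survive. The sketch below simulates such a channel with assumed parameters (gain, noise level, blur width, shadow lift are all my choices, not the paper's); it shows the shape of the model rather than the researchers' actual fitted values.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1D Gaussian kernel, normalised to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def print_scan_channel(img, rng):
    """Rough model of the five modelled print-and-scan effects
    (all parameters are illustrative assumptions)."""
    img = img.astype(float)
    h, w = img.shape
    # 1. brighten the image centre with a radial gain
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
    img = img * (1.1 - 0.1 * r)
    # 2. additive sensor/print noise
    img = img + rng.normal(0, 5, img.shape)
    # 3. separable Gaussian blur
    k = gaussian_kernel(1.0, 2)
    img = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    img = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, img)
    # 4. reduce dark contrast: lift the shadows
    img = np.maximum(img, 0.3 * img + 20)
    # 5. replicate pixels (2x nearest-neighbour upscale)
    img = img.repeat(2, axis=0).repeat(2, axis=1)
    return np.clip(img, 0, 255).astype(np.uint8)
```

Training the attack search against a channel like this is what lets the spoof images survive printing and scanning rather than only working when fed directly to the detector.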
McCoyd and Wagner were able to tweak their spoofing images so they survived the print-and-scan process and still fooled the Viola-Jones algorithm 97 per cent of the time. ®