“Let’s go shopping!” “No, let’s Amazon Go shopping.” “Dave, I hate your puns.” (credit: Sam Machkovech)
These days, most announcements by tech companies are pretty meh. Details either leak months ahead of time or reveal themselves to be unimpressive.

But lately, we’ve had some real surprises. Months ahead of releasing the Switch this spring, Nintendo decided the future of consoles was its past with the NES Classic (pixel-power watchers be damned). And when Google’s AI-powered AlphaGo defeated Lee Se-dol in a best-of-five Go competition, that victory ran counter to experts who believed such a result was at least a decade away.
Amazon’s December 2016 announcement of Amazon Go—a retail store where you could simply walk in, grab items, and walk out—was another shocker in that AlphaGo vein. Grab-and-go has been the “future of retail” and “just a few years away” for a while.
I worked in robotics research for over a decade at Caltech, Stanford, and Berkeley, and now I run a startup making outdoor home security cameras. Computer vision has made up a lot of my work in recent years. Yet just a few months before the Amazon announcement, I confidently told someone that it would take a few more years to get a grab-and-go retail experience to consumers.

And I wasn’t alone in thinking this way; Planet Money had an entire episode on self-checkout just two months earlier.

So when Amazon surprised us all by actually building the thing, the first question was obvious: how will it work? The launch video drops buzzwords like computer vision, deep learning, and sensor fusion. But what does all that mean, and how would you actually put these things together?