(credit: DeepMind)
While artificial intelligence software has made huge strides recently, in many cases it’s only been automating things that humans already do well.
If you want an AI to identify the Higgs boson in a spray of particles, for example, you have to train it on collisions that humans have already identified as containing a Higgs.
If you want it to identify pictures of cats, you have to train it on a database of photos in which the cats have already been identified.
(If you want AI to name a paint color, well, we haven’t quite figured that one out.)

But there are some situations in which an AI can train itself: rules-based systems where the computer can evaluate its own actions and determine whether they were good ones. (Things like poker are good examples.) Now, a Google-owned AI developer has taken this approach to the game Go, in which AIs only recently became capable of consistently beating humans.
Impressively, after only three days of playing against itself, with no prior knowledge of the game, the new AI was able to trounce both humans and its AI-based predecessors.
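
To give a rough sense of what "training itself" means here, the toy Python sketch below plays tic-tac-toe against itself and uses the game's own rules to score each finished game, nudging a table of position values toward moves that led to wins. This is only a minimal illustration of the self-play idea, not DeepMind's actual method (which pairs a deep neural network with tree search), and all the names in it (choose_move, self_play_game, train) are hypothetical.

```python
# Minimal self-play sketch for a rules-based game (tic-tac-toe).
# The program plays itself, then lets the rules of the game judge
# the outcome and update a value table toward winning positions.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

# Estimated chance that the player who just moved into this position wins.
values = defaultdict(lambda: 0.5)

def choose_move(board, player, explore=0.1):
    """Pick the move leading to the position we currently value highest."""
    moves = legal_moves(board)
    if random.random() < explore:
        return random.choice(moves)      # occasional random exploration
    def value_after(m):
        nxt = board[:m] + player + board[m + 1:]
        return values[(nxt, player)]
    return max(moves, key=value_after)

def self_play_game():
    """Play one game against ourselves; return visited states and the winner."""
    board, player, history = ' ' * 9, 'X', []
    while True:
        move = choose_move(board, player)
        board = board[:move] + player + board[move + 1:]
        history.append((board, player))
        w = winner(board)
        if w or not legal_moves(board):
            return history, w            # w is None on a draw
        player = 'O' if player == 'X' else 'X'

def train(games=5000, lr=0.2):
    """Self-play repeatedly, rewarding positions that led to wins."""
    for _ in range(games):
        history, w = self_play_game()
        for state, mover in history:
            target = 0.5 if w is None else (1.0 if mover == w else 0.0)
            values[(state, mover)] += lr * (target - values[(state, mover)])

if __name__ == '__main__':
    train()
    print(f"Learned values for {len(values)} positions")
```

The point of the toy is that no human-labeled examples appear anywhere: the only training signal is whether the program's own games, judged by the rules, ended in a win, loss, or draw.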