Google DeepMind's AlphaGo was an extraordinary breakthrough for Artificial Intelligence. The game of Go has roughly 2.08×10^170 legal positions and is about a 'googol' (10^100) times harder to calculate than chess. Experts thought it would take at least another decade before an A.I. would be able to beat the best human players. So how did DeepMind tackle this problem? What algorithms did they use, and how do they work?
What will the audience learn from this talk?
During this talk we'll explore several algorithms that can be used to make a program play games. We'll start simple (Tic-Tac-Toe), and as the games get harder, the A.I.s need to become smarter.
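The progression described above typically starts with minimax, which solves Tic-Tac-Toe exactly by recursively scoring every reachable position. As a rough illustration of that starting point (a minimal sketch in Java, the talk's stated code language; the class and method names here are illustrative, not taken from the talk):

```java
// Minimal minimax sketch for Tic-Tac-Toe.
// Board: 9 cells, 'X', 'O', or ' ' (empty). X maximizes, O minimizes.
public class TicTacToe {

    // Returns +1 if X can force a win, -1 if O can, 0 for a draw.
    static int minimax(char[] b, boolean xToMove) {
        int w = winner(b);
        if (w != 0) return w;                 // game already decided
        boolean full = true;
        int best = xToMove ? -2 : 2;          // sentinel outside [-1, 1]
        for (int i = 0; i < 9; i++) {
            if (b[i] != ' ') continue;
            full = false;
            b[i] = xToMove ? 'X' : 'O';       // try the move
            int score = minimax(b, !xToMove); // recurse for the opponent
            b[i] = ' ';                       // undo the move
            best = xToMove ? Math.max(best, score)
                           : Math.min(best, score);
        }
        return full ? 0 : best;               // no moves left = draw
    }

    // Checks all 8 winning lines; returns +1 (X), -1 (O), or 0 (none).
    static int winner(char[] b) {
        int[][] lines = {{0,1,2},{3,4,5},{6,7,8},{0,3,6},
                         {1,4,7},{2,5,8},{0,4,8},{2,4,6}};
        for (int[] l : lines)
            if (b[l[0]] != ' ' && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]])
                return b[l[0]] == 'X' ? 1 : -1;
        return 0;
    }

    public static void main(String[] args) {
        char[] empty = "         ".toCharArray();
        // With perfect play from both sides, Tic-Tac-Toe is a draw.
        System.out.println(minimax(empty, true)); // prints 0
    }
}
```

Exhaustive search like this works because Tic-Tac-Toe has only a few hundred thousand game sequences; for Go's astronomically larger state space it is hopeless, which is why the talk moves on to smarter techniques.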
Does it feature code examples and/or live coding?
The talk includes some code (Java/pseudocode) and a small live demo explaining neural networks, but not a lot of code. The focus is on exploring the algorithms.
Prerequisite attendee experience level: