AI GOES ALL IN

When American mathematician Claude Shannon published his paper ‘Programming A Computer For Playing Chess’ in 1950, it would have been difficult to imagine the sophistication of today’s artificial intelligence (AI) and its seeming omnipresence in our business and personal lives.

Shannon demonstrated extraordinary foresight with regard to the possibilities this technology would bring to the world. Introducing the potential of a chess-playing computer, he said at the time, “Although perhaps of no practical importance, the question is of theoretical interest, and it is hoped that a satisfactory solution of this problem will act as a wedge in attacking other problems of a similar nature and of greater significance.”

He envisaged several potential applications for the technology:

1. Machines for designing filters, equalisers, etc.

2. Machines for designing relay and switching circuits.

3. Machines to handle routing of telephone calls based on the individual circumstances rather than by fixed patterns.

4. Machines for performing various symbolic (non-numerical) mathematical operations.

5. Machines capable of translating from one language to another.

6. Machines for making strategic decisions in simplified military operations.

7. Machines capable of orchestrating a melody.

8. Machines capable of logical deduction.

Looking back at his technological forecast from a modern perspective, it’s clear Shannon was a true visionary, with many of his predictions having long since been brought to fruition.

The chess test

In 1997, IBM’s Deep Blue became the first computer system to defeat a reigning world champion in a chess match, beating Garry Kasparov before being retired by its development team. The complexity of chess and its seemingly infinite number of possible moves made this a watershed moment for AI, and one that was hard-won: almost half a century had elapsed between Shannon’s original conception of a chess-playing computer and Deep Blue’s eventual triumph.

Of course, the computers of Shannon’s day were incapable of dealing with the quantities of data we’re accustomed to now, and learning to intelligently play chess required systems to deal with an almost endless number of variables. Shannon calculated a conservative lower bound of the game-tree complexity of chess (ie the number of distinct games that could conceivably be played) of 10¹²⁰, or 1 followed by 120 zeroes.

For comparison, there are generally estimated to be around 10⁸⁰ atoms in the observable universe. Clearly then, building a computer able to consider even a fraction of these variables was a monumental challenge, and the 47 years that elapsed between the publishing of Shannon’s paper and Kasparov’s defeat should be considered relatively rapid progress. In fact, Deep Blue’s developers, among others, were laying the foundations for an ultra-accelerated technological boom this century, during which the capabilities of AI would be compounded and increase exponentially.
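Shannon’s figure can be reproduced with a back-of-envelope calculation: roughly 10³ combinations per pair of moves (one move by each side), over a typical 40-move game. A minimal sketch, using Shannon’s own rough estimates rather than exact counts:

```python
# Shannon's conservative lower bound on the game-tree complexity of chess.
# He assumed ~10^3 move combinations per full move (one move by each side)
# and a typical game length of ~40 full moves.
moves_per_pair = 10**3   # ~30 legal moves per side -> ~1,000 combinations per pair
game_length = 40         # typical game length in full moves

shannon_number = moves_per_pair ** game_length
print(shannon_number == 10**120)  # True: 1 followed by 120 zeroes
```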

Building on the chess success

Chess is an extremely sophisticated game, but in recent years computer programmers have overcome even greater challenges.

The apparent simplicity of the ancient Chinese board game Go, which doesn’t feature a range of pieces moving in different ways like chess, may lead some to believe it would be an easier game to program a computer to play. However, Go is estimated to have a game-tree complexity of around 10³⁶⁰, meaning there are some 10²⁴⁰ times as many possible games as there are in chess.

Clearly then, Go was a worthy challenge for Google’s DeepMind developers, whose program AlphaGo became the first computer to beat a human professional player without handicaps on a full-sized 19×19 board in 2015. In less than 20 years, developers had gone from a computer mastering chess, with its game-tree complexity of 10¹²⁰, to mastering Go, with its complexity of 10³⁶⁰.
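The gap between the two games can also be seen from their branching factors and game lengths. A rough order-of-magnitude sketch, using commonly cited estimates (around 35 legal moves and 80 plies for chess, 250 moves and 150 plies for Go) rather than figures from this article:

```python
import math

def log10_game_tree(branching_factor, plies):
    """Order-of-magnitude estimate: log10(branching_factor ** plies)."""
    return plies * math.log10(branching_factor)

chess = log10_game_tree(35, 80)    # ~124, in the ballpark of the 10^120 figure
go = log10_game_tree(250, 150)     # ~360, matching the 10^360 figure
print(round(chess), round(go), round(go - chess))
```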

The number of variables that AI is finding its way around is increasing exponentially and shows no sign of slowing down. The DeepMind team has suggested that AlphaGo is a step towards creating algorithms that can intelligently tackle some of today’s greatest scientific challenges, from designing new medicines to accurately modelling the effects of climate change.

Bots ’n’ big blinds

Last year, an AI-based computer program named Libratus was able to beat some of the world’s top poker players, prompting one to claim he felt the machine “could see my cards”.

This achievement involved further layers of complexity when compared to chess and Go, due to the inherent characteristics of poker. Firstly, chess and Go are both skill-based games with minimal elements of chance, whereas poker is a much subtler mix of the two.

In chess and Go, players also have access to complete information about the game: the pieces their opponent has, the moves they have already made, what moves are available to each player, and so on. But in Texas Hold’em, the information offered to both parties is incomplete, and while they can both see what’s on the flop, turn and river, the absence of information about your opponent’s hand is what makes the game worth playing (thanks, Captain Obvious).

Furthermore, when considering the use of bluffing and elements of randomness in both betting strategy and cards dealt, we can begin to appreciate some of the problems facing developers in creating a poker-playing bot that can keep up with human play, let alone that of a top-ranked professional.

Libratus’ success was made possible through machine learning, the process by which the bot is able to ‘practise’, playing against itself and refining its strategies. Clearly, the millions of simulated hours the machine dedicated to this were enough to outsmart even the world’s most formidable players.
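Libratus’ actual algorithms aren’t detailed here, but the self-play idea can be illustrated with a toy example: regret matching, a simple learning rule related to the regret-minimisation methods poker bots build on. Two copies of the same learner play rock-paper-scissors against each other, each tracking how much it regrets not having played each action; over many games their average strategies settle toward the game’s equilibrium of playing each move a third of the time. A minimal sketch (all names are illustrative, and this is not Libratus itself):

```python
import random

# Payoff for the row player: rows/columns are Rock, Paper, Scissors.
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def strategy(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

def train(iterations=20000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * 3, [0.0] * 3]
    strategy_sums = [[0.0] * 3, [0.0] * 3]
    for _ in range(iterations):
        strats = [strategy(regrets[p]) for p in (0, 1)]
        actions = [rng.choices(range(3), weights=strats[p])[0] for p in (0, 1)]
        for p in (0, 1):
            mine, theirs = actions[p], actions[1 - p]
            for a in range(3):
                # Regret: how much better action a would have done than our play.
                regrets[p][a] += PAYOFF[a][theirs] - PAYOFF[mine][theirs]
            for a in range(3):
                strategy_sums[p][a] += strats[p][a]
    # Average strategy over all iterations converges toward equilibrium.
    return [[s / iterations for s in strategy_sums[p]] for p in (0, 1)]

avg = train()
print([round(x, 2) for x in avg[0]])  # each entry near 0.33
```

The instantaneous strategies can cycle wildly; it is the *average* strategy over all the self-play games that converges, which is one reason these systems need so many simulated hours.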

Machine learning allows AI systems to constantly adapt to new information, change their strategies and effectively analyse action that has already taken place. In fact, it may have been Libratus’ continual learning between games and its analysis of information from the tournament’s first few days that allowed it to refine its strategies and emerge victorious.

This year, the bots have stepped up once again and are now taking on the world of esports under the watchful eye of billionaire schoolboy Elon Musk. OpenAI Five is an artificially intelligent esports team, which in August played in Dota 2’s The International, the world’s largest professional tournament in one of esports’ most sophisticated titles. And OK, it didn’t win. Yet.

Thus far, the OpenAI Five bots have shown a few less-than-ideal playing strategies, but they have held their own overall. Their first match in The International saw the bots face off against pro team paiN, in a close-fought battle lasting more than 45 minutes. While a victory would surely have demonstrated the program’s superiority over human players, its ability to at least give them a run for their money should tell us something about AI’s dynamism in 2018. There are incalculable variables involved in a game like this, which is played between two teams of five choosing from over a hundred characters, each with unique abilities, in a huge virtual arena. The ability of a machine to adapt to the unexpected demonstrates its human-like cognitive abilities, and raises further questions of where this tech may take us in the future.

During the training process, OpenAI Five’s bots are each capable of simulating 180 years of experience (over 1.5 million hours) every day. That’s some 900 years of combined experience the team amasses daily. Given the old adage that it takes 10,000 hours to become a true expert at something, what will machines that have had 10,000 years’ experience or training be capable of in the future?
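Those figures are easy to verify. A sketch of the arithmetic, assuming a five-bot team and 180 years of simulated play per bot per day:

```python
HOURS_PER_YEAR = 365.25 * 24          # ~8,766 hours in a year
years_per_bot_per_day = 180
team_size = 5

hours_per_bot = years_per_bot_per_day * HOURS_PER_YEAR
team_years_per_day = years_per_bot_per_day * team_size

print(round(hours_per_bot))   # 1577880 -- "over 1.5 million hours"
print(team_years_per_day)     # 900 -- years of combined experience daily
```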

Given the exponential rate of growth in this area, will it be possible for the machines of the future to simulate hundreds of thousands or even millions of years’ experience, every day?

What’s the big idea?

So, what does this mean for the gaming world? Integrity ought to be a major concern here. As AI technology becomes more widely adopted and available, what systems will be put in place to prevent online poker players from running an AI bot alongside their virtual table, providing them with the moves that will always give them a statistical advantage? Will the fraudsters of the future enter into amateur esports competitions equipped with bots that have effectively trained for thousands of years?

Perhaps new systems will be developed to determine when an AI bot is masquerading as a human, but how soon will this become a pressing need for the gaming industry?

Will applications advance on both sides to create a kind of technological arms race?

Bots playing online poker are already a concern for recreational players, though their sophistication typically falls far short of Libratus’ dizzying heights. Many more accomplished online players feel they’re able to identify when they’re playing against a bot, and some claim the ability to exploit their idiosyncrasies for profit. However, does this act as a barrier and prevent non-players from getting involved? If the overarching view of online poker is that there are people out there using complex algorithms to cheat amateurs out of their hard-earned money, new players will surely be put off and opt for a less risky game to play.

So how can operators tackle this? You guessed it – more AI! Increasingly accurate personal identification solutions are helping operators to know their customers more intimately than ever, and in-depth data analysis can give them a great deal of insight into how those customers play. By strictly verifying each customer’s identity and analysing their playing styles, operators should be able to protect their honest players while eliminating the threat of them being thrown to the technological sharks.

Whatever happens next, it’s clear that no company can turn a blind eye to advances in AI technology. If it isn’t used for maintaining integrity and responsibility in gaming, we can be sure that someone out there will use it for precisely the opposite purpose.
