‘Bot’ This Too
1 Aug 2008
Arthur S. Reber in Poker, Pokerlistings.com

Last column we discussed issues raised by the recent successes of a poker ‘bot’ named Polaris. This device is a sophisticated artificial intelligence (AI) program, the brainchild of the Computer Poker Research Group at the University of Alberta in Edmonton. It plays limit Hold ‘Em (LH) about as well as any sentient human and has earned its stripes by beating several experienced professionals in heads-up play. We’ve already examined a number of features of the bot itself. Here I’d like to explore some of the psychological factors of man vs. machine play.

 

Emotion: When a bot plays against a human, there is a compelling affective asymmetry. Humans feel, bots do not. Humans experience the pain of loss and the euphoria of a win. They alter their games in reaction to emotional stress. A run of bad cards can make some players feel insecure, and they gear down their aggression; others are provoked and become hyper-aggressive. Some react strongly to being challenged by an opponent; others ignore such affronts. If your girlfriend just dumped you, it probably won’t do much for your game.

 

Polaris doesn’t have a girlfriend. It is devoid of affective states; it’s as dead as a post. In various circles, this lack of emotional response in an AI is a topic of considerable discussion. Debates range from discourses among neuroscientists and philosophers on the links between cognition and emotion to musings among sci-fi enthusiasts over whether androids should be portrayed as less than human by virtue of being bereft of emotions.

 

Is Polaris’s lack of emotional reactions a long-term plus or a long-term minus? Frankly, I have no idea. It could be a bonus because its game won’t get derailed by two or three horrific and mathematically unlikely beats. But this lack of emotion could hurt because Polaris never gets “stoked” by events and so never takes its game to a higher level. This line of argument, of course, depends on there being a higher level to the game that bots can’t attain (yet).

 

The accepted wisdom is that the absence of emotional reactions in an AI is a benefit. This may be right today; tomorrow, maybe not.

 

Cognition: Cognition is thinking; cognitive functions are those involved in deliberation, decision-making and analysis, processes critical to any intellectually complex task. They include those that are overt and conscious, like calculating pot odds to determine the expected value of a call. They also include processes that are covert and unconscious, such as experiencing a vague, intuitive sense that you’re just beat in a hand. But, no matter how you cut it, these cognitive functions involve knowing, in any of the several senses of the word.
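
To put some simple, purely illustrative numbers on that overt sort of calculation: if the pot holds $90 and it costs you $10 to call, you’re getting 9-to-1 on your money, so the call shows a long-run profit whenever your chance of winning the hand is better than roughly one in ten; any worse and folding is the mathematically correct play.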

 

Well, one of the signature features of Polaris is that it doesn’t know anything about poker! Despite its ability to outplay some of the best LH players in the world, it’s just a collection of on-off gates. In fact, it doesn’t know anything about anything. Just like feeling, knowing isn’t part of what it does.

 

An AI is just a program running on a silicon-based device we call a computer. It’s affectively and epistemically empty. Oh, sure, you could program Polaris to say things like “Hmmm, I’ve got to think this one through,” or to laugh when it steals a pot or to throw a tantrum when it ends a session with a big loss, but it wouldn’t be thoughtful, happy, sad or angry. It would just be a bunch of on-off switches instantiated in a sea of transistors simulating these states.

 

This raises questions about exactly what we mean by thinking or feeling, not to mention whether it will ever be possible to build an AI that is truly aware of itself and the world about it. Such speculations, of course, go somewhat beyond poker, but they are worth contemplating.

 

Reading a bot: Can a human player “read” a bot? Perhaps. If you can ascertain the patterns of play that have been programmed in, you ought to be able to put the device on a range of hands, just as you would a human opponent. When chess champion Garry Kasparov defeated Deep Blue I, this was his strategy. Deep Blue II was made less transparent, and Kasparov, no longer able to make such inductions, lost. It’s worth noting that one of Polaris’s programmers (who plays high-stakes poker) says he cannot beat it.

 

On-line, where “tells” are usually timing tells, it’s going to be “advantage Polaris.” I suspect that some of the difficulty that professionals have had playing Polaris can be traced to difficulties reading its silicon “mind.”

 

Bots reading you: The flip side here is also important. Can Polaris read you? Actually, it’s likely to be better at this than you think. Because of its enormous computational capacity, it will divine patterns in your game faster and more accurately than you will in its. And, because it is an AI, it has subroutines that enable it to learn from experience. To have a chance of beating Polaris, a player is going to have to take the adage “mix up your game” to new heights.

 

Paranoia: Bots like Polaris generate paranoia for two reasons. One, Polaris plays very good LH. Two, it would likely pass a restricted version of the Turing Test. Alan Turing argued, famously, that if a computer were switched with the person with whom you were conversing and you didn’t realize it, then the computer could be called a genuine “artificial intelligence.” A full Turing Test doesn’t place limits on the topics of conversation, so Polaris couldn’t pass it, but it does appear to satisfy a restricted version so long as the topic is limit poker, played heads-up.

 

You can’t get a copy of Polaris, and the designers won’t allow it to be used by anyone. But there are other bots around, many available commercially. None are very good (see http://www.purely-poker.com/pokerbot.htm), so keep your paranoia bottled up. Their main use is making pre-flop “fold” decisions, enabling one to play more tables. But the future will be different; it usually is.

 

 
