AI and the Why

AlphaGo is an award-winning documentary available on YouTube. 

Spoiler Alert! 

It’s about the world’s best players of the ancient Chinese board game Go being challenged by an artificial intelligence program developed by Google’s DeepMind. 

It was once said that computers are not equipped to make conscious decisions based on feelings or intuition; rather, they are better suited to simple computation at scale, such as calculating how far the Earth is from the Sun at any given moment. 

But that’s where the game of Go comes in. For AI and machine learning professionals, defeating grandmasters at Go was the ultimate test. 

The AI couldn’t calculate the best moves mathematically because no computer has enough power to weigh all of the options; Go has more possible positions than there are atoms in the observable universe. Professional Go players often make moves simply because they feel right, not because they have weighed every possible outcome five, ten, or twenty moves in advance. 
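To put that scale in perspective, here is a back-of-the-envelope sketch in Python. The branching-factor and game-length figures (roughly 250 legal moves over roughly 150 turns for Go, roughly 35 over roughly 80 for chess) are rough, commonly cited estimates, not exact values:

```python
# Rough, commonly cited estimates: ~250 legal moves per turn in Go
# over ~150 turns; chess is ~35 moves per turn over ~80 turns.
go_positions = 250 ** 150
chess_positions = 35 ** 80

# Even a machine evaluating a quintillion (1e18) positions per second
# would need this many seconds to enumerate a full Go game tree:
seconds_needed = go_positions // 10**18

print(f"Go game tree: ~10^{len(str(go_positions)) - 1} positions")
print(f"Chess game tree: ~10^{len(str(chess_positions)) - 1} positions")
print(f"Seconds at 1e18 evals/sec: ~10^{len(str(seconds_needed)) - 1}")
```

The exponents alone make the point: no amount of raw hardware closes a gap that large, which is why exhaustive search was never an option.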

So the programmers at DeepMind couldn’t rely on brute-force computing power to overcome the problem. Instead, the program had to be nimble, resemble human feelings or intuition, and not focus solely on how to win. 

The program had to learn why certain moves were better than others, and as a result of its learning, it developed an incredibly unique style of play. Moves that looked like mistakes were, on further review, considered creative and innovative. 

The program, called AlphaGo, went on to defeat the best players in the world, including Lee Sedol, one of the greatest players of his generation.

All that being said, machines of the past were known for doing repetitive, brute-force tasks well, but machines of the future will be able to approximate feelings, learn the why, and interpret complex situations they’ve never seen before. 

AlphaGo was consistently put in situations it had never seen before, and it was able to adapt and weigh options in real time without knowing how the situation (or game) would end.

Being able to weigh the pros and cons without knowing how things will end is part of human consciousness, and we can call this process of weighing options based on underlying beliefs “the why.” Especially in the face of uncertainty, the why drives our actions. We can say that AI is learning the why, or at least moving closer to it. 

Some situations are more straightforward for AI, like learning how to play Mario Kart: there is a fixed track or circuit and a limited set of inputs. 
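A hedged sketch of why that kind of problem is tractable: with a small, fixed set of states and inputs, even simple tabular Q-learning can learn a good policy by trial and error. Everything below (the five-position “track,” the two actions, the rewards, the hyperparameters) is invented for illustration and has no connection to any real game:

```python
import random

# Toy stand-in for a fixed circuit: 5 track positions; the agent
# finishes by reaching the last one. Actions: 0 = coast, 1 = accelerate.
# All states, rewards, and hyperparameters here are illustrative.
N_STATES, ACTIONS = 5, [0, 1]
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
random.seed(0)

for episode in range(500):
    s = 0
    while s < N_STATES - 1:
        # Epsilon-greedy: mostly exploit the table, occasionally explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        # Accelerating moves forward one position; coasting stays put.
        s2 = min(s + a, N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the finish
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After training, accelerating should score higher than coasting.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Because the state space is tiny and fully enumerable, the table converges quickly; Go’s astronomical state space is precisely what rules out this kind of exhaustive bookkeeping.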

But the future of AI and computing is in the why. That means human beings having conversations with computers, computers anticipating problems and offering suggestions, and computers designed to understand our issues, needs, and goals so they can help us solve problems we don’t even realize we have. 

The future value of AI lies in computers aiding humans in ways we can’t yet imagine, maybe even simulating a human heart.