You should start learning about artificial intelligence. Here's how.

Vicarious, the AI startup we profiled in our recent video, won’t affect your life tomorrow. Its goal of building human-level AI — basically, software that can think as creatively as we humans do — is a long-term project. We don’t know how long that will take, and neither does Vicarious.

But that’s OK, because basic AI is already here and it’s pretty exciting, even if it can’t pass for human. Siri is a form of artificial intelligence. So is the feature on some cars that helps you parallel park. We’re going to see — though maybe not notice — this kind of technology in more and more of our hardware and software. As Kevin Kelly wrote in Wired, “[T]he business plans of the next 10,000 startups are easy to forecast: Take X and add AI.” It’s been two years since Kelly made that guess, and Silicon Valley has yet to prove him wrong.

Apple’s Siri is a form of AI many of us interact with every day

So what is artificial intelligence? Broadly speaking, it’s technology that can solve problems with little or no human assistance. Ideally, we want AI to solve those problems as well as we do, or better, and to find good solutions faster than a human could. Something like what IBM’s Watson did earlier this year:

University of Tokyo doctors report that the artificial intelligence diagnosed a 60-year-old woman’s rare form of leukemia that had been incorrectly identified months earlier. The analytical machine took just 10 minutes to compare the patient’s genetic changes with a database of 20 million cancer research papers, delivering an accurate diagnosis and leading to proper treatment that had proven elusive. Watson has also identified another rare form of leukemia in another patient, the university says.

Watson made the correct diagnosis so quickly because it can do two things we can’t: consume a vast amount of information in a short amount of time, and then make relevant connections within that massive pool of data just as quickly. Can humans consume lots of knowledge? Of course! But we’re constrained by our slow rate of consumption, our limited ability to retain information, and the high transaction costs of sharing what we know. I can only read so fast, I can only remember so much, and I can’t talk and listen at the same time. By comparison, Watson’s ability to do those things is essentially unlimited.
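For illustration only, here’s a tiny Python sketch of the pattern the report above implies: matching a patient’s genetic changes against a corpus of papers and ranking candidate diagnoses by overlapping evidence. The paper entries and gene names below are invented, and Watson’s actual pipeline is far more sophisticated and isn’t public in this form.

```python
# A toy illustration of the two abilities described above: ingesting a large
# body of text and cross-referencing it against a patient's data.
# The "papers" and gene names are invented stand-ins, not Watson's real data.
from collections import Counter

papers = [
    {"title": "Example paper A", "diagnosis": "leukemia subtype X", "genes": {"GENE1", "GENE2"}},
    {"title": "Example paper B", "diagnosis": "leukemia subtype Y", "genes": {"GENE2", "GENE3"}},
    {"title": "Example paper C", "diagnosis": "leukemia subtype X", "genes": {"GENE1", "GENE4"}},
]

patient_mutations = {"GENE1", "GENE2", "GENE4"}

# "Reading" every paper is just a loop here; the point is that a machine can
# score millions of documents against one patient far faster than a person.
scores = Counter()
for paper in papers:
    overlap = len(paper["genes"] & patient_mutations)
    scores[paper["diagnosis"]] += overlap

best, score = scores.most_common(1)[0]
print(f"most-supported candidate diagnosis: {best} (evidence score {score})")
```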

An early prototype of IBM’s Watson / Image via Wikimedia Commons

Existing AIs are amazing, but they all still have to be trained to solve specific problems. Consider AlphaGo, the Google DeepMind AI developed to play the ancient Chinese game of Go, which is like a complicated version of chess. Go is so complicated, in fact, that you can’t enumerate every possible move and have a computer memorize them all. Instead, you teach the computer how the game is played, and then have it play games over and over. That’s what Google did:

The key to AlphaGo is reducing the enormous search space to something more manageable. To do this, it combines a state-of-the-art tree search with two deep neural networks, each of which contains many layers with millions of neuron-like connections. One neural network, the “policy network”, predicts the next move, and is used to narrow the search to consider only the moves most likely to lead to a win. The other neural network, the “value network”, is then used to reduce the depth of the search tree — estimating the winner in each position in place of searching all the way to the end of the game.

AlphaGo had to understand the rules of Go, and then it had to play enough games to learn what was likely to happen throughout the rest of any given game. That means after every move, AlphaGo recalculated what its opponent was likely to do next. Google accomplished this by giving its AI 30 million moves from actual Go games and then having it play thousands of games against itself. This required “a huge amount of compute power,” according to Google, and it ultimately led to AlphaGo defeating South Korean Go grandmaster Lee Se-Dol earlier this year.
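The passage above is describing a guided look-ahead search: follow only the moves a policy estimate favors, and stop early by trusting a value estimate instead of playing every game to the end. Here’s a minimal Python sketch of that general pattern, with a toy “game” and random stand-ins for both networks; none of the names or numbers come from AlphaGo itself, whose real policy and value networks are large, trained models.

```python
# A simplified sketch of policy-guided, value-truncated search.
# Everything here is a stand-in, not DeepMind's actual code or architecture.
import random

random.seed(0)

def legal_moves(state):
    # Toy "game": a state is just a number; a move adds 1, 2, or 3 to it.
    return [1, 2, 3]

def apply_move(state, move):
    return state + move

def policy_estimate(state, moves):
    # Stand-in for the policy network: assign each legal move a probability.
    # The real policy network was trained on expert games and self-play.
    weights = [random.random() for _ in moves]
    total = sum(weights)
    return {m: w / total for m, w in zip(moves, weights)}

def value_estimate(state):
    # Stand-in for the value network: guess how good this position is for the
    # player to move (between -1 and 1) instead of playing out to the end.
    return random.uniform(-1, 1)

def search(state, depth, top_k=2):
    # Evaluate the position for the player to move, looking ahead `depth`
    # plies, but only along the `top_k` moves the policy estimate favors.
    if depth == 0:
        return value_estimate(state)  # depth cut: trust the value estimate
    moves = legal_moves(state)
    priors = policy_estimate(state, moves)
    promising = sorted(moves, key=priors.get, reverse=True)[:top_k]  # breadth cut
    # The opponent's best reply is our worst outcome, hence the negation.
    return max(-search(apply_move(state, m), depth - 1, top_k) for m in promising)

def choose_move(state, depth=3):
    moves = legal_moves(state)
    return max(moves, key=lambda m: -search(apply_move(state, m), depth - 1))

print("chosen move:", choose_move(state=0))
```

The two cuts are the whole point: the policy estimate shrinks the breadth of the search, and the value estimate shrinks its depth, which is how an otherwise unmanageable game tree becomes searchable.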

That’s an amazing accomplishment, but it’s not a cure for cancer or HIV; it can’t tell us how to solve the world’s biggest problems, and it’s definitely not Skynet. Basically, AlphaGo and Watson tell us AI still has a long way to go.

In the meantime, you should be learning about AI: what it can and can’t do, and how it works. Here are some easily accessible resources for doing just that. You should start with the basics of machine learning, which underpins today’s most advanced AI:

Layman’s Intro to #AI and Neural Networks

A Visual Introduction to Machine Learning

Then dive into the ethical and philosophical implications of AI:

AI Revolution 101: Our last invention, greatest nightmare, or pathway to utopia?

Is Artificial Intelligence Taking Over Our Lives?

Do you want to learn how to make AI? Because you totally can. This list is a great place to start.

And if you haven’t already, you should absolutely watch our Vicarious episode.
