This story is part of a series on how we learn—from augmented reality to music-training devices.
Our in-house Know-It-Alls answer questions about your interactions with science and technology.
Q: How do machines learn?
By now you must have heard the good news about our savior, artificial intelligence. It makes you look better in selfies, prevents blindness, and can even turn water into tastier beer. Tech giants and governments say we’re living in a golden age of AI. Roll out the self-driving cars!
Truth is, most of the time when you hear the term artificial intelligence, the specific technology at work is machine learning. Despite the name, it relies heavily on human teaching.
Back in the 20th century, computer programmers had to get their electronic charges to do things by tapping out lines of code specifying exactly what needed to be done. Machine learning shifts some of that work away from humans, leaving the computer to figure things out for itself.
Recent advances in software that reads medical images provide a good example. Computer vision programs used to rely on people specifying what features they should look for—say, tell-tale shapes or shading indicating a broken bone on an x-ray. It worked, but not that well.
Machine learning can now get much better results, rivaling the accuracy of human doctors. Instead of specifying what their software should look for, programmers “train” it with a collection of example images. Companies working on machine learning for health care, like Google, create vast collections of medical images labeled by doctors. Machine learning algorithms are set loose on these visual data sets, looking for statistical patterns that reveal which features of an image go with a particular label or diagnosis.
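That training loop can be sketched with a toy example. The code below is a minimal illustration, not anything a hospital would deploy: each “image” is boiled down to two invented numeric features, and a simple nearest-centroid rule stands in for the far more complex models real systems use. The labels and numbers are made up for the sketch.

```python
# Toy sketch of learning from labeled examples (illustrative data only).
# Each "image" is reduced to two invented features; an expert's label
# plays the role of the doctor's diagnosis.
training_data = [
    ((0.9, 0.1), "fracture"),
    ((0.8, 0.2), "fracture"),
    ((0.1, 0.9), "healthy"),
    ((0.2, 0.8), "healthy"),
]

def train(examples):
    """Learn one average feature vector (centroid) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the new example."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

model = train(training_data)
print(predict(model, (0.85, 0.15)))  # prints "fracture": nearest to those examples
```

The key point survives even at this scale: nobody told the program what a fracture looks like. It extracted the statistical pattern from labeled examples, which is why the quality and quantity of those labels matter so much.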
New Computers, Old Tricks
Machine learning sounds modern, but it’s one of the oldest ideas in computer science. In 1959, a room-filling computer called the Perceptron, built by the psychologist Frank Rosenblatt, set a milestone in artificial intelligence when it learned to distinguish shapes such as triangles and squares.
It was built on an approach to machine learning called artificial neural networks—which also power most of the AI projects grabbing headlines today. Neural networks in the cloud or even on our phones are behind virtual assistants and goofy photo filters.
Neural networks old and new are based on math inspired by simple models of how neurons function in the brain. Alexa wasn’t invented in 1959 because not long after the debut of the Perceptron, researchers mostly abandoned neural networks—it wasn’t clear how they could be scaled up to tackle larger problems. The technique spent decades as a fringe interest in computer science.
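The kind of learning rule the original Perceptron used still fits in a few lines. The sketch below is a toy single artificial neuron learning the logical AND function from examples; the task and learning rate are chosen for illustration, not taken from the 1950s shape-recognition setup. Whenever the neuron answers wrong, each weight is nudged toward the correct answer.

```python
# A single artificial neuron with the classic perceptron learning rule.
# It learns a linearly separable rule (logical AND) from labeled examples.
def train_perceptron(examples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias (threshold)
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # The neuron "fires" (outputs 1) if the weighted sum crosses zero.
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # Nudge each weight in the direction that reduces the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

A single neuron like this can only learn patterns separable by a straight line, which is one reason the approach stalled; today’s systems stack millions of such units into deep networks.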
Around 2012, the small community still working on the neural network approach to machine learning showed groundbreaking new results on speech and image recognition. Machine learning was suddenly the hottest thing in tech. This year, three researchers who brought about that revolution won the Turing Award, the Nobel Prize of computing.
Machine Learning Is Not Smart
The resurgence of neural networks has made machine learning part of everyday life. All big tech companies’ plans for the future hinge on it, whether it’s Alphabet’s ambition to predict kidney failure before it happens or Amazon’s concept of stores without checkouts.
All that is genuinely exciting. Computers are becoming more capable at interacting with and understanding the world—and us. But don’t get swept away by the hype: machine learning doesn’t make computers anything like people.
It’s true that bots powered by machine learning can play tricky board games and videogames better than the most skilled human. Yet they require careful construction, and their statistical way of learning makes their talents narrow and inflexible. Humans can think about the world using abstract concepts, and can remix those concepts to adapt to new situations. Machine learning can’t.
That rigidity limits what AI has been able to do for us. It’s one reason self-driving cars struggle with unexpected traffic situations. Machine learning’s tightly scoped skills can also produce entertaining or nasty surprises.
Gaming bots powered by machine learning have found ways to hack the simulations they were being tested in. Image and text processing software sometimes learns to repeat or amplify societal stereotypes about race and gender. Machines can learn—but they still need careful instruction from humans.