The future won’t be made by either humans or machines alone – but by both, working together. Technologies modeled on how human brains work are already augmenting people’s abilities, and will only get more influential as society gets used to these increasingly capable machines.
Technology optimists have envisioned a world with rising human productivity and quality of life as artificial intelligence systems take over life’s drudgery and administrivia, benefiting everyone. Pessimists, on the other hand, have warned that these advances could come at great cost in lost jobs and disrupted lives. And fearmongers worry that AI might eventually make human beings obsolete.
However, people are not very good at imagining the future. Neither utopia nor doomsday is likely. In my new book, “The Deep Learning Revolution,” my goal was to explain the past, present and future of this rapidly growing area of science and technology. My conclusion is that AI will make you smarter, but in ways that will surprise you.
Recognizing patterns
Deep learning is the part of AI that has made the most progress in solving complex problems like identifying objects in images, recognizing speech from multiple speakers and processing text the way people speak or write it. Deep learning has also proven useful for identifying patterns in the increasingly large data sets that are being generated from sensors, medical devices and scientific instruments.
The goal of this approach is to find ways a computer can represent the complexity of the world and generalize from previous experience – even if what’s happening next isn’t exactly the same as what happened before. Just as a person can identify that a specific animal she has never seen before is in fact a cat, deep learning algorithms can identify aspects of what might be called “cat-ness” and extract those attributes from new images of cats.
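To make that concrete, here is a minimal sketch – not taken from the book – of how a modern deep learning library can apply a network trained on millions of labeled photos to a picture it has never seen before. The specific model (a small ResNet from the torchvision library) and the file name are illustrative assumptions.

```python
# A minimal sketch: asking a pretrained convolutional network to label a
# photo it never saw during training. Model choice and file name are
# illustrative assumptions, not anything from the article or the book.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard preprocessing for ImageNet-trained models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

image = Image.open("new_cat_photo.jpg").convert("RGB")  # an image the network never trained on
batch = preprocess(image).unsqueeze(0)                  # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)

best = logits.argmax(dim=1).item()
print(weights.meta["categories"][best])  # e.g. "tabby" for a cat photo
```

The network has not memorized this particular photo; it has only learned internal features – the “cat-ness” the passage describes – that carry over to new images.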

The methods for deep learning are based on the same principles that power the human brain. For instance, the brain handles lots of data of various kinds in many processing units at the same time. Neurons have many connections to each other, and those links strengthen or weaken depending on how much they’re used, establishing associations between sensory inputs and conceptual outputs.
The most successful deep learning network is based on 1960s research into the architecture of the visual cortex, a part of the brain that we use to see, and learning algorithms that were invented in the 1980s. Back then, computers were not yet fast enough to solve real-world problems. Now, though, they are.
In addition, learning networks have been stacked on top of one another, creating webs of connections that more closely resemble the hierarchy of layers found in the visual cortex. This is part of a convergence taking place between artificial and biological intelligence.
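As a rough illustration of these ideas – many simple units arranged in layers, with connection strengths that are adjusted by experience – here is a toy network trained with backpropagation, the 1980s-era learning algorithm mentioned above. All of the sizes, data and settings are assumptions chosen only to keep the example small.

```python
# A toy two-layer network learning the XOR pattern with backpropagation.
# Everything here (data, layer sizes, learning rate) is a teaching-sized
# assumption, not a real-world model.
import numpy as np

rng = np.random.default_rng(0)

# Four training examples with two inputs each, and their target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of connection strengths ("weights"), like a small hierarchy
# of processing units.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: signals flow through both layers of units.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: the error flows back, telling every connection
    # whether to strengthen or weaken (gradient of the squared error).
    error = output - y
    grad_output = error * output * (1 - output)
    grad_W2 = hidden.T @ grad_output
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ grad_hidden

    W1 -= 0.5 * grad_W1
    W2 -= 0.5 * grad_W2

print(np.round(output, 2))  # predictions should approach [0, 1, 1, 0]
```

Stacking more layers of the same kind of units – as modern networks do – is what gives deep learning its name and its ability to build up features step by step.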

Deep learning in real life
Deep learning is already adding to human capabilities. If you use Google services to search the web, or use its apps to translate from one language to another or turn speech into text, this technology has made you smarter, or at least more effective. On a recent trip to China, a friend of mine spoke English into his Android phone, which translated his words into spoken Chinese for a taxi driver – just like the universal translator on “Star Trek.”
These and many other systems are already at work, helping you in your daily life even if you’re not aware of them. For instance, deep learning is beginning to take over the reading of X-ray images and photographs of skin lesions for cancer detection. Your local doctor will soon be able to spot problems that are evident today only to the best experts.
Even when you do know there’s a machine involved, you might not understand the complexity of what it’s actually doing: Behind Amazon’s Alexa is a bevy of deep learning networks that recognize your request, sift through data to answer your questions and take actions on your behalf.
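As a purely hypothetical illustration of that kind of pipeline – not Amazon’s actual system – the sketch below chains three stand-in stages: one that turns audio into text, one that works out what is being asked, and one that acts on the request. Every function name and canned answer here is an invented placeholder.

```python
# Hypothetical voice-assistant pipeline: each stage stands in for a
# separately trained deep learning model or backend service.
from dataclasses import dataclass

@dataclass
class Request:
    text: str
    intent: str

def speech_to_text(audio: bytes) -> str:
    """Stand-in for a speech recognition network."""
    return "what is the weather in Boston"

def understand(text: str) -> Request:
    """Stand-in for a language understanding network that finds the intent."""
    return Request(text=text, intent="get_weather")

def act(request: Request) -> str:
    """Stand-in for the component that fetches data or triggers an action."""
    if request.intent == "get_weather":
        return "Here is the forecast for Boston."
    return "Sorry, I can't help with that yet."

def assistant(audio: bytes) -> str:
    # Each stage hands its result to the next, as the paragraph describes.
    return act(understand(speech_to_text(audio)))

print(assistant(b"...raw audio..."))
```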
