Toronto students will soon be able to walk along the Great Wall of China or snorkel in the Great Barrier Reef from their classrooms with the Google Expeditions virtual reality field trip experience.
The virtual tours are annotated with points of interest, ambient sound, and other cues for teachers to integrate into their lessons, whether they’re teaching 4th grade math or 12th grade history.
IN OPEN SOURCING its artificial intelligence engine—freely sharing one of its most important creations with the rest of the Internet—Google showed how the world of computer software is changing.
These days, the big Internet giants frequently share the software sitting at the heart of their online operations. Open source accelerates the progress of technology. In open sourcing its TensorFlow AI engine, Google can feed all sorts of machine-learning research outside the company, and in many ways, this research will feed back into Google.
But Google’s AI engine also reflects how the world of computer hardware is changing. Inside Google, when tackling tasks like image recognition and speech recognition and language translation, TensorFlow depends on machines equipped with GPUs, or graphics processing units, chips that were originally designed to render graphics for games and the like, but have also proven adept at other tasks. And it depends on these chips more than the larger tech universe realizes.
According to Google engineer Jeff Dean, who helps oversee the company’s AI work, Google uses GPUs not only in training its artificial intelligence services, but also in running these services—in delivering them to the smartphones held in the hands of consumers.
AI is playing an increasingly important role in the world’s online services—and alternative chips are playing an increasingly important role in that AI.
The change is good news for Nvidia, the chip giant that specializes in GPUs. And it points to a gaping hole in the products offered by Intel, the world’s largest chip maker. Intel doesn’t sell discrete GPUs. Some Internet companies and researchers, however, are now exploring FPGAs, or field-programmable gate arrays, as a replacement for GPUs in the AI arena, and Intel recently acquired a company that specializes in these programmable chips.
The bottom line is that AI is playing an increasingly important role in the world’s online services—and alternative chip architectures are playing an increasingly important role in AI. Today, this is true inside the computer data centers that drive our online services, and in the years to come, the same phenomenon may trickle down to the mobile devices where we actually use these services.
Deep Learning in Action
But, typically, when these companies put deep learning into action—when they offer a smartphone app that recognizes cats, say—this app is driven by a data center system that runs on CPUs. According to Bryan Catanzaro, who oversees high-performance computing systems in the AI group at Baidu, that’s because GPUs are only efficient if you’re constantly feeding them data, and the data center server software that typically drives smartphone apps doesn’t feed data to chips in this way. Typically, as requests arrive from smartphone apps, servers deal with them one at a time. As Catanzaro explains, if you use GPUs to separately process each request as it comes into the data center, “it’s hard to get enough work into the GPU to keep it running efficiently. The GPU never really gets going.”
That said, if you can consistently feed data into your GPUs during this execution stage, they can provide even greater efficiency than CPUs. Baidu is working towards this with its new AI platform. Basically, as requests stream into the data center, it packages multiple requests into a larger whole that can then be fed into the GPU. “We assemble these requests so that, instead of asking the processor to do one request at a time, we have it do multiple requests at a time,” Catanzaro says. “This basically keeps the GPU busier.”
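The batching approach Catanzaro describes can be sketched in a few lines of Python. This is a simplified illustration, not Baidu's actual system: incoming requests are pooled briefly so the accelerator receives one large batch instead of many single items. The function name and batch parameters are hypothetical.

```python
from queue import Queue, Empty

def batch_requests(incoming: Queue, max_batch: int = 32, timeout_s: float = 0.005):
    """Collect up to max_batch requests, waiting briefly so the GPU
    sees a full batch instead of one request at a time."""
    batch = [incoming.get()]  # block until at least one request arrives
    while len(batch) < max_batch:
        try:
            batch.append(incoming.get(timeout=timeout_s))
        except Empty:
            break  # no more requests arrived in time; ship what we have
    return batch

# Simulate a burst of requests arriving at the data center.
q = Queue()
for i in range(10):
    q.put(f"request-{i}")

batch = batch_requests(q)
print(len(batch))  # the whole burst goes to the GPU as one batch
```

The trade-off is a few milliseconds of added latency per request in exchange for much higher GPU utilization, which is why the technique suits high-traffic services rather than sporadic ones.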
It’s unclear how Google approaches this issue. But the company says there are already cases where TensorFlow runs on GPUs during the execution stage. “We sometimes use GPUs for both training and recognition, depending on the problem,” confirms company spokesperson Jason Freidenfelds.
That may seem like a small thing. But it’s actually a big deal. The systems that drive these AI applications span tens, hundreds, even thousands of machines. And these systems are playing an increasingly large role in our everyday lives. Google now uses deep learning not only to identify photos, recognize spoken words, and translate from one language to another, but also to boost search results. And other companies are pushing the same technology into ad targeting, computer security, and even applications that understand natural language. In other words, companies like Google and Baidu are gonna need an awful lot of GPUs.
Typically, when you use a deep learning app on your phone, it can’t run without sending information back to the data center. All the AI happens there. When you bark a command into your Android phone, for instance, it must send your command to a Google data center, where it can be processed on one of those enormous networks of CPUs or GPUs.
But Google has also honed its AI engine so that, in some cases, it can execute on the phone itself. “You can take a model description and run it on a mobile phone,” Dean says, “and you don’t have to make any real changes to the model description or any of the code.”
This is how the company built its Google Translate app. Google trains the app to recognize words and translate them into another language inside its data centers, but once it’s trained, the app can run on its own—without an Internet connection. You can point your phone at a French road sign, and it will instantly translate it into English.
That’s hard to do. After all, a phone offers limited amounts of processing power. But as time goes on, more and more of these tasks will move onto the phone itself. Deep learning software will improve, and mobile hardware will improve as well. “The future of deep learning is on small, mobile, edge devices,” says Chris Nicholson, the founder of a deep learning startup called Skymind.
GPUs, for instance, are already starting to find their way onto phones, and hardware makers are always pushing to improve the speed and efficiency of CPUs. Meanwhile, IBM is building a “neuromorphic” chip that’s designed specifically for AI tasks, and according to those who have used it, it’s well suited to mobile devices.
Today, Google’s AI engine runs on server CPUs and GPUs as well as chips commonly found in smartphones. But according to Google engineer Rajat Monga, the company built TensorFlow in a way that engineers can readily port it to other hardware platforms. Now that the tool is open source, outsiders can begin to do so, too. As Dean describes TensorFlow: “It should be portable to a wide variety of extra hardware.”
So, yes, the world of hardware is changing—almost as quickly as the world of software.
YESTERDAY MY FACEBOOK feed filled up with pictures of friends’ kids clutching cardboard boxes to their faces. Well, I should say, Cardboard boxes.
That’s because subscribers to The New York Times’ Sunday print edition received a Google Cardboard virtual reality headset, wrapped in the standard-issue blue plastic bag, as part of the Times’ rollout of its own VR content.
Cardboard isn’t much to look at. It’s a bit of corrugated, yes, cardboard and some Velcro that you fold to create a slot for your smartphone and a pair of flaps to block your peripheral vision. Inside is the crucial component, the pair of cheap plastic lenses that transform the flat, doubled-up images on your phone’s screen into the illusion of an immersive 3-D environment.
Kids who’ve had the VR experience have a new set of expectations of what it should mean to interact with a computer.
Okay, I’m sure that among Times subscribers, several were savvy enough to already have some kind of VR rig on hand and have been probing the virtual depths for a while now. But embarrassing confession time: I’m an editor at WIRED—you know, where we cover the future—and it just hadn’t sunk in that VR was something I could do, too. Yes, a bit of that was brand blindness; Samsung has been pushing its own Gear headset for a while, but no highly visible headset targeting iOS users has emerged yet. In fact, when I asked our Gear team what I could use to watch VR on an iPhone, the response was, “It’s basically just Cardboard.”
Whatever the reason for my myopia, it was awfully convenient that, just a few days after I started idly searching Cardboard options on Amazon, one showed up in my driveway. I suspect that, like many of those other 1.3 million, the first thing I did was to put it on my kid. And I’m pretty sure that means everything.
New Is Normal
If you’re a kid, on the other hand, there’s a good chance you’ve grown up assuming that portable touchscreen portals to a significant portion of human knowledge, entertainment, and communication are a given. Yes, you think your dad’s iPhone is pretty cool. But then yesterday you put on Google Cardboard and watched a train come hurtling toward you before you flew up into the sky and into the embrace of a giant baby. And you said, “Yeah, now we’re talking.”
I don’t know what the exact year is, but I believe that up to a certain age, any technology a kid encounters registers as “normal.” To me, a world without color TV or personal computers is an abstraction. For a host of kids as of yesterday, so is a world without VR.
This is why distributing something as unpolished as Google Cardboard in a way that’s as gimmicky (and anachronistic) as handing it out with a newspaper turns out to be such a big deal. Sure, we’re talking about a tiny subset of kids. But they’ll tell their friends. Their parents are already telling their friends. And a technology that once seemed remote is suddenly accessible.
And in the case of this particular technology, accessibility translates almost immediately into visceral intimacy. Experiencing VR for the first time isn’t just cool; it’s revelatory. This is why so many of us made sure to capture the moment of our kids’ first encounter. Most parents, I hope, don’t make videos of their kids’ reactions when we unbox our latest iPhones. But I believe we had a collective sense that our kids were experiencing something meaningfully new—not just an encounter with a new technology, but with a new way of relating to technology.
Especially as a medium for non-fiction, I believe the hype that VR can act as a powerful empathy engine, a uniquely direct way to put us in someone else’s world. This makes me hopeful that VR will become much more than the next level of escapism for an already screen-addled generation. I know that’s some serious parental wishful thinking. But for good or ill, Google Cardboard is just good enough to imprint a new paradigm on a nation of 8-year-olds. From now on, kids who’ve had the VR experience have a new set of expectations of what it should mean to interact with a computer. Imagine what they’ll expect by the time they’re 18.
GOOGLE IS UPGRADING its quantum computer. Known as the D-Wave, Google’s machine is making the leap from 512 qubits—the fundamental building blocks of a quantum computer—to more than 1,000 qubits. And according to the company that built the system, this leap doesn’t require a significant increase in power, something that could augur well for the progress of quantum machines.
Together with NASA and the Universities Space Research Association, or USRA, Google operates its quantum machine at the NASA Ames Research Center not far from its Mountain View, California headquarters. Today, D-Wave Systems, the Canadian company that built the machine, said it has agreed to provide regular upgrades to the system—keeping it “state-of-the-art”—for the next seven years. Colin Williams, director of business development and strategic partnerships for D-Wave, calls this “the biggest deal in the company’s history.” The system is also used by defense giant Lockheed Martin, among others.
Though the D-Wave machine is less powerful than many scientists hope quantum computers will one day be, the leap to 1000 qubits represents an exponential improvement in what the machine is capable of. What is it capable of? Google and its partners are still trying to figure that out. But Google has said it’s confident there are situations where the D-Wave can outperform today’s non-quantum machines, and scientists at the University of Southern California have published research suggesting that the D-Wave exhibits behavior beyond classical physics.
Over the life of Google’s contract, if all goes according to plan, the performance of the system will continue to improve. But there’s another characteristic to consider. Williams says that as D-Wave expands the number of qubits, the amount of power needed to operate the system stays roughly the same. “We can increase performance with constant power consumption,” he says. At a time when today’s computer chip makers are struggling to get more performance out of the same power envelope, the D-Wave goes against the trend.
Thanks to superposition, a single qubit can hold two values—0 and 1—at the same time. Two qubits, then, can hold four values at any given time (00, 01, 10, and 11). And as you keep increasing the number of qubits, you exponentially increase the power of the system. The problem is that building a qubit is an extremely difficult thing. If you read information from a quantum system, it “decoheres.” Basically, it turns into a classical bit that houses only a single value.
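The exponential growth described above is easy to see by enumerating the classical bit patterns an n-qubit register can represent at once. This is a small illustrative sketch (the function name is ours, not from any quantum library):

```python
from itertools import product

def qubit_states(n: int):
    """List the classical bit patterns an n-qubit register can hold
    in superposition simultaneously: 2**n of them."""
    return ["".join(bits) for bits in product("01", repeat=n)]

print(qubit_states(2))        # ['00', '01', '10', '11'] — the four values in the text
print(len(qubit_states(10)))  # 1024: the count doubles with every added qubit
```

Going from 512 to 1,000-plus qubits thus squares-and-then-some the size of the state space the machine can explore, which is why the upgrade is described as an exponential improvement.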
D-Wave believes it has found a way around this problem. It released its first machine, spanning 16 qubits, in 2007. Together with NASA, Google started testing the machine when it reached 512 qubits a few years back. Each qubit, D-Wave says, is a superconducting circuit—a tiny loop of flowing current—and these circuits are dropped to extremely low temperatures so that the current flows in both directions at once. The machine then performs calculations using algorithms that, in essence, determine the probability that a collection of circuits will emerge in a particular pattern when the temperature is raised.
Reversing the Trend
D-Wave says that most of the power needed to run the system is related to the extreme cooling. The entire system consumes about 15 kilowatts of power, while the quantum chip itself uses a fraction of a microwatt. “Most of the power,” Williams says, “is being used to run the refrigerator.” This means that the company can continue to improve its performance without significantly expanding the power it has to use. At the moment, that’s not hugely important. But in a world where classical computers are approaching their limits, it at least provides some hope that the trend can be reversed.
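A back-of-the-envelope calculation makes D-Wave's point concrete. The 15-kilowatt figure is from the article; the chip's draw is stated only as "a fraction of a microwatt," so the 0.1-microwatt value below is an assumption for illustration:

```python
total_power_w = 15_000   # ~15 kW for the whole system, mostly the refrigerator
chip_power_w = 1e-7      # "a fraction of a microwatt" — assumed 0.1 µW

chip_share = chip_power_w / total_power_w
print(f"{chip_share:.1e}")  # the quantum chip's share of total power draw
```

Because the chip accounts for such a vanishing share of the total, adding qubits barely moves the overall power budget: the cooling overhead dominates, and it stays roughly fixed as the processor grows.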
Microsoft has rebooted its popular fitness tracker, bringing us this year’s new Band 2. Packed with 11 sensors, including a heart rate monitor, a UV monitor, and on-board GPS for tracking runs, hikes, and bike rides, it’s one of the most capable wearables on the market. The Band 2 isn’t perfect, but if you can deal with the bulk (it’s huge) and the $250 price (it’s expensive), the bracelet can be purchased from Microsoft and ships within a month.