‘Quantum Computing … Changes Everything’

Google’s Eric Schmidt talks to Foreign Policy about the future of technology, security, and killer robots.

By Jonathan Tepperman, a former editor in chief of Foreign Policy.
Alphabet executive chairman Eric Schmidt, left, speaks to Foreign Policy editor in chief Jonathan Tepperman at the Halifax International Security Forum on Nov. 18. (Halifax International Security Forum)

On Nov. 18, Jonathan Tepperman, Foreign Policy’s editor in chief, interviewed Eric Schmidt, the executive chairman of Alphabet (Google’s parent company), at the Halifax International Security Forum. Video of the full interview is embedded below. The following excerpt of the conversation has been edited for grammar and length.

Jonathan Tepperman: British Prime Minister Theresa May recently lashed out at tech companies like Google, claiming that you weren’t doing enough to counter violent extremism online. Just this week, the New York Times reported that YouTube, which Google owns, is finally taking down videos featuring Anwar al-Awlaki, the famous, now-deceased al Qaeda recruiter. What took you so long? And what are Google and Alphabet’s other divisions doing to fight extremism and fake news?

Eric Schmidt: All democratic countries are facing these challenges together, and the tech industry is also facing these challenges together. We are aware of them, we understand them, and we’re working on them. With regard to YouTube, there have been cases where people have uploaded things that violate our terms of service or things that have been used incorrectly. We have very, very detailed terms of service, and after enough people mention [a problematic post], we typically look at it and decide. In this case, I think it was fairly clear. Perhaps it was overdue.

Ten years ago, I thought that everyone would be able to deal with the internet because we all knew the internet was full of falsehoods as well as truths. Crazy people, crazy ideas, and so forth. But the new data show that [bad] actors that are trying to either spread misinformation or worse have figured out how to use that information for their own good, whether it’s by amplifying a message or repeating something 100 times so people actually believe it even though it’s obviously false.

My own view is that these patterns can be detected, and they can be taken down or deprioritized. One of the problems in the industry is that we came from a naive position that these actors would not be so active. But now, faced with the data and what we’ve seen from Russia in 2016 and with other actors around the world, we have to act.
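
Schmidt’s “repeat something 100 times” pattern lends itself to a rough illustration. The sketch below is hypothetical and is not Google’s or Jigsaw’s actual tooling; it simply flags message texts that are posted nearly verbatim by many distinct accounts, with illustrative thresholds.

```python
# Illustrative sketch only; not any real platform's detection pipeline.
# Flags messages whose (near-)identical text is pushed by many accounts,
# the "repeat something 100 times" amplification pattern described above.
from collections import defaultdict

def normalize(text: str) -> str:
    """Crude normalization so trivial edits don't hide repetition."""
    return " ".join(text.lower().split())

def find_amplified_messages(posts, min_accounts=50, min_repeats=100):
    """posts: iterable of (account_id, text). Returns suspicious message texts."""
    accounts = defaultdict(set)   # normalized text -> distinct accounts posting it
    repeats = defaultdict(int)    # normalized text -> total number of posts
    for account_id, text in posts:
        key = normalize(text)
        accounts[key].add(account_id)
        repeats[key] += 1
    return [
        key for key in repeats
        if repeats[key] >= min_repeats and len(accounts[key]) >= min_accounts
    ]

if __name__ == "__main__":
    sample = [("acct_%d" % i, "Obviously false claim, repeated verbatim.") for i in range(120)]
    sample.append(("acct_x", "An ordinary post."))
    print(find_amplified_messages(sample))
```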

[Embedded video: the full interview (Vimeo 243643331)]

JT: Say a little bit more about how companies like yours help fight fake news and violent extremism — and what you can do that governments can’t.

ES: Well, the first thing is that Alphabet created a company called Jigsaw, whose primary purpose is to analyze and pursue these issues. And the first project that Jigsaw did was to study violent extremism and the use or misuse of information. We also have internal groups that watch for this kind of amplification and manipulation of information and then try to deal with it.

The most important thing that we can do is make sure that as the other side gets more automated, we also are more automated. Much of what Russia did [during the election] was largely manual. [But] you can imagine that the next round will be much more automated.

JT: How do you think about the fact that you’re inevitably becoming more aggressive in determining what people can and can’t see?

ES: We started with the American view that, in a crowded network, bad speech will be replaced by good speech. But that may no longer be true in certain situations, especially when you have a well-funded opponent who’s trying to actively spread information. What everybody’s grappling with is, where is the line? Everyone has an opinion.

Nations have relatively broad definitions of where the line is, and [if you cross them], you’re then subject to sanction, regulation, courts, and so forth. For example, the European Court of Justice has ruled that, under certain circumstances, when we’re asked, we’re required to take down personal information about people who are not public figures and not materially relevant to the conversation. But [the court] didn’t define those two terms. If you’re going to regulate that particular thing, you’d better be prepared to define it or figure out who’s going to make that decision. We’ve found ourselves having to make that decision. I think we’ve done a really good job at it. But you could imagine an educated critic disagreeing.

JT: You made headlines about two weeks ago by suggesting that the U.S. risks losing the race for artificial intelligence to China. What is the state of the competition today, and why is China such a threat?

ES: The rise of China will be the big news for the rest of our lives. And I mean that in an economic, cultural, and structural sense. China’s artificial intelligence and machine-learning policy, which I’ve actually read, says that they’re behind us right now, but by 2020, they’ll be caught up; by 2025, they’ll be ahead of all of us; and by 2030, they will dominate. These are their words. There is no equivalent document that I can find in the United States, which invented most of this (along with Canada). And remember the Chinese have enormous resources in terms of smart people, educated technical folks, and a will to implement this.

JT: But we have all those things, too.

ES: They have more. They appear to have five times more [people] by math, and I’m very impressed by the quality of their work.

JT: Is part of the equation that you didn’t mention the role that the government plays there versus the role the government does or doesn’t play here?

ES: The Chinese model is very much government-driven, central planning-driven. There’s lots of money and investment. They also don’t have the kinds of privacy and data restriction rules that you would expect in a democracy. On the other hand, in our system, you have a thousand flowers running around, you have all sorts of new innovation from different things, you have a pro-immigration policy with respect to technical people (recently under threat). You have lots of reasons to think the United States, Canada, the Western world, can respond well.

JT: You are a strong proponent of the value of AI. Others are less sanguine. We all know the risks that AI poses to jobs. But tell us how AI is going to transform international security. Do we all need to fear the coming of the killer robots?

ES: Well, I think we’ve all seen those movies, and it’s important to remind people that those are movies.

Let’s talk a little bit about where we are. Right now, the technology of artificial intelligence and machine learning is largely focused on very powerful pattern matching. The impact that it’s going to have on society is probably first and most importantly in health care. Think about what doctors do. A lot of doctors are using intuition and their eyes, literally examining what they see, reasoning, looking at patterns, and so forth. Those abilities that humans have are very well augmented. This is a case of doctor-plus. Not doctor-minus — doctor-plus. For the next five years, that’s going to be the big narrative.

The other big narrative is the ability to use large data sets of information to find hidden cues of one kind or another, as well as finding patterns. Computers are very, very good at looking at very deep patterns that humans don’t see. I’ll give you an example. When you ask people what they’d like a robot to do, the thing that they’d like more than anything else is [for it to] clean up the dishes in the kitchen. That is literally the No. 1 request. That turns out to be an extraordinarily difficult problem — and why? Well, think about what it requires. You have to walk into the situation. You have to assess where things are. You have to identify everything. You have to remember where it goes. You have to move in an appropriate way. And you have to do all this in real time. So it’s much more likely that computers will be understood as savants, helping humans see deeper, plan deeper.

My hope is that in 10-15 years you’ll have physicists who will ask the computer a question, a deep physics question, and while they’re sleeping, the computer will do an analysis of all of the papers, read them all, and come up with some other scenarios for this physicist to start to think about.

JT: I still want to hear about the killer robots that you guys are working on.

ES: In fact, we have a policy against building killer robots, so we’re clear.

JT: Tell us a bit more about how you see AI starting to affect major security issues in the not-too-distant future.

ES: There’s a movie where there’s a drone swarm that’s autonomously directed, gets released, and overwhelms the opponent. Now, what’s the technology behind that? Well, today in universities in the United States, we already have autonomous drone swarms that we can demonstrate. You throw them all up, and they self-configure and figure out where they’re going using principles derived from the animal world. You can imagine the implications there.
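
The “principles derived from the animal world” Schmidt alludes to are commonly implemented as boids-style flocking rules (separation, alignment, cohesion). Below is a minimal sketch of one such update step; the weights, neighbor radius, and agent count are illustrative values, not drawn from any real drone program.

```python
# Illustrative boids-style flocking update: separation, alignment, cohesion.
# A common "animal world" rule set for self-organizing swarms; all constants
# here are arbitrary illustration values.
import numpy as np

def flock_step(positions, velocities, radius=5.0, dt=0.1,
               w_sep=1.5, w_align=1.0, w_coh=1.0):
    """One update step for N agents; positions and velocities are (N, 2) arrays."""
    new_vel = velocities.copy()
    for i in range(len(positions)):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = (dists < radius) & (dists > 0)
        if not neighbors.any():
            continue
        separation = -offsets[neighbors].mean(axis=0)                    # steer away from crowding
        alignment = velocities[neighbors].mean(axis=0) - velocities[i]   # match neighbors' heading
        cohesion = offsets[neighbors].mean(axis=0)                       # drift toward local center
        new_vel[i] += dt * (w_sep * separation + w_align * alignment + w_coh * cohesion)
    return positions + dt * new_vel, new_vel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos, vel = rng.uniform(0, 20, (30, 2)), rng.normal(0, 1, (30, 2))
    for _ in range(100):
        pos, vel = flock_step(pos, vel)
    print("spread after 100 steps:", pos.std(axis=0))
```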

My observation about the military is that it spends most of its time watching things. Well, humans get really bored watching the same thing, especially if it doesn’t change. Computers don’t. So a simple example of how this can be used is to improve the ability to detect changes.
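
Watching a static scene and flagging what changed is one of the simplest automation targets Schmidt points to. Here is a toy sketch of the idea using frame differencing over grayscale image tiles; the threshold and tile size are arbitrary, and operational surveillance systems are far more sophisticated.

```python
# Illustrative change detection by frame differencing: flag tiles of a scene
# where two grayscale frames differ noticeably. Constants are arbitrary.
import numpy as np

def changed_regions(frame_a, frame_b, threshold=30, cell=16):
    """Return (row, col) grid cells where two same-sized grayscale frames differ."""
    diff = np.abs(frame_a.astype(int) - frame_b.astype(int))
    flagged = []
    for r in range(0, diff.shape[0], cell):
        for c in range(0, diff.shape[1], cell):
            if diff[r:r + cell, c:c + cell].mean() > threshold:
                flagged.append((r // cell, c // cell))
    return flagged

if __name__ == "__main__":
    before = np.zeros((64, 64), dtype=np.uint8)
    after = before.copy()
    after[16:32, 16:32] = 200   # a "new object" appears in one tile
    print(changed_regions(before, after))
```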

JT: Over the years, Google has taken on an increasing number of jobs traditionally done by governments. One recent example is Project Loon, under which Google’s X deployed a number of high-tech balloons to Puerto Rico in order to supply internet connections that the federal government and the Puerto Rico government haven’t been able to restore. As Google takes on more of these quasi-governmental functions, how should its relationship with government change? And how will Google balance its obligations to the public versus its obligations to its shareholders?

ES: We celebrate that we’re run under a set of principles. We stipulated this right from the beginning of going public, and we feel very strongly about it. Something like the Loon initiative has nothing to do with revenue. It has everything to do with making the world a better place. In Puerto Rico, for example, the government and telecom companies welcomed us. As long as we’re on that side of these issues, we’re going to be just fine.

JT: But what can the public do to ensure that Google does stay on the right side of these issues?

ES: Well, you have a Congress, a president, laws, regulations, and so forth. We operate under the laws of every country we operate in, and we operate in pretty much every country.

JT: What is the big tech-related security issue that none of us are thinking about right now and that we need to start thinking about?

ES: One is the development of quantum computing and how that changes everything. Another is the rise of autonomy and its implications. Another is the fact that, from a military perspective, we’re getting to a point where there’s no place to hide assets.

But the biggest problem is the fact that there’s a generation of young people, a new set of brains and talent, that democracies need to get in their organizations at the national level, at the military level, at the NGO level, at the think tank level. The talent is there, but I have a feeling that they’re not coming to you. Figure out a way to make your situations, which have very interesting problems, attractive, and you’ll be in a much stronger position.

Jonathan Tepperman is a former editor in chief of Foreign Policy and the author of The Fix: How Countries Use Crises to Solve the World’s Worst Problems. Twitter: @j_tepperman
