We are delighted to welcome to Comment is Freed Eric Schmidt. Eric is former CEO of Google, chair of the Special Competitive Studies Project, an honorary KBE and founding partner of Innovation Endeavors. He has written extensively on how AI is transforming armed conflict, and will soon be publishing Genesis: Artificial Intelligence, Hope, and the Human Spirit (New York: Little, Brown), co-authored with Craig Mundie and the late Dr. Henry Kissinger.
In our conversation we talked about the impact of AI on warfare, drones, and deterrence.
Lawrence Freedman (LF): Three years ago you published a book, also with Henry Kissinger, on AI. And now you have published a new one. Why now? Is it just because technology has developed even faster than you thought it would, or because new issues have arisen?
Eric Schmidt (ES): The first book came out right before ChatGPT. We did talk about GPT and other transformer architectures, but I don't think people understood their importance. Now we understand that these neural networks, and in particular this next generation of multi-modal transformer-based architectures, have much more power than people thought. The book therefore is not about why LLMs [Large Language Models] have become more powerful. It's a book about what happens to society as they get more powerful.
The book, which is called Genesis, starts by talking at some length about polymaths. Many of the people we study when we're in high school were the polymaths of their time, such as Leonardo da Vinci. They are unique and very rare, and they move science, art, culture and society forward. There are polymaths in every society, in every religion. What happens when you have a polymath of that level in your pocket, available to you as an individual? This is a major change in human experience. It really changes the definition of what it is to be human.
LF: Is it fair to say that you are using the contrast with the machine to illuminate the essential attributes of humanity?
ES: Henry Kissinger, who's the lead author and who literally wrote this on his deathbed – it was finished in the last week of his life – was a student of Kant. He was interested in the relationship of truth, perception and human behavior, and he was also a realist. And what he figured out ahead of pretty much everybody else was that if you simply extrapolate this rise of an alien intelligence, it ultimately changes the human lived experience.
It's good in the sense that it enables humans to operate at their highest level. If you use Google, you get more information, and you're smarter than if you didn't have Google. But then let's assume that you and I are negotiating against each other, and you're very good and I'm very good, but each of us has our own data sources. Now imagine that there are AI systems that know how to do optimal game theory, and they have infinite information. Is there a scenario in which the system, recognizing that you are a powerful opponent with a similar system, will decide that the only solution to the problem is to attack you? How then do you set your strategy when every single person, in theory, can have that agent with them?