While speaking at The Wall Street Journal’s CEO Council Summit in London, Schmidt said he is concerned that AI is an “existential risk.”
“And existential risk is defined as many, many, many, many people harmed or killed,” Schmidt, who served as Google CEO from 2001 to 2011, was quoted as saying.
“There are scenarios not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues, or discover new kinds of biology. Now, this is fiction today, but its reasoning is likely to be true. And when that happens, we want to be ready to know how to make sure these things are not misused by evil people,” he added.
Zero-day exploits take advantage of software vulnerabilities that attackers discover before the vendor has become aware of them, leaving no time for a fix before they can be used.
Regulate AI, says Schmidt
Schmidt said that AI needs to be regulated and that governments need to know how to make sure the technology is not “misused by evil people”. Schmidt pointed out that it is a “broader question for society” how AI should be regulated.
A report in CNBC mentions that Schmidt was part of the National Security Commission on AI in the US which began a review of the technology in 2019. The commission, in its review published in 2021, warned that the US was underprepared for the age of AI.
Regulation of AI technology
Not only critics and policymakers but also top executives, including those at Google and Microsoft and OpenAI CEO Sam Altman, the maker of ChatGPT, have touched upon the dangers of AI if the technology is misused.
Google CEO Sundar Pichai, who recently oversaw the launch of the company’s own chatbot, Bard, said the technology will “impact every product across every company.”
In a piece for The Financial Times, he wrote that “AI is the most profound technology humanity is working on today” but that “what matters even more is the race to build AI responsibly and make sure that as a society we get it right.”