Misplaced concern about existential risk is impeding the opportunity to expand human potential, writes venture capitalist Vinod Khosla. From his op-ed: I was the first venture investor in OpenAI. The weekend drama illustrated my contention that the wrong boards can damage companies. Fancy titles like "Director of Strategy at Georgetown's Center for Security and Emerging Technology" can lead to a false sense of understanding of the complex process of entrepreneurial innovation. OpenAI's board members' religion of "effective altruism" and its misapplication could have set back the world's path to the tremendous benefits of artificial intelligence. Imagine free doctors for everyone and near-free tutors for every child on the planet. That's what's at stake with the promise of AI.
The best companies are those whose visions are led and executed by their founding entrepreneurs, the people who put everything on the line to challenge the status quo — founders like Sam Altman — who face risk head on, and who are focused — so totally — on making the world a better place. Things can go wrong, and abuse happens, but the benefits of good founders far outweigh the risks of bad ones. […] Large, world-changing vision is axiomatically risky. It can even be scary. But it is the sole lever by which the human condition has improved throughout history. In my view, we could destroy that potential with academic talk of nonsensical existential risk.
There is a lot of benefit on the upside, with a minuscule chance of existential risk. In that regard, AI is similar to what the steam engine and internal combustion engine did to human muscle power. Before the engines, we had passive devices — levers and pulleys. We ate food for energy and expended it for function. Then we could feed these engines coal, steam and oil, reducing human exertion and increasing output to improve the human condition. AI is the intellectual analog of these engines. Its multiplicative power on expertise and knowledge means we can supersede the current confines of human brain capacity, bringing great upside for the human race.
I understand that AI is not without its risks. But humanity faces many risks. They range from the vanishingly small, like sentient AI destroying the world or an asteroid hitting the earth, to medium risks like global biowarfare from our adversaries, to large and looming risks like a technologically superior China, cyberwars and persuasive AI manipulating users in a democracy, likely starting with the U.S.'s 2024 elections.