Evaluating a rising technology is often a binary operation, stuck in time. Does it work well? Yes or no?
Has adoption lived up to expectations? Are the products and services built on the tech meeting revenue forecasts? Or not?
All perfectly reasonable questions. Except that many of today’s foundational technologies would at one point or another have gotten a categorical “no.”
The transformational impacts of the printing press, electrification and the telephone were hardly obvious in the very early going. In the 1980s, for instance, AT&T decided not to pursue the cellphone business, pegging the technology as largely a local business, the Wall Street Journal later reported. (AT&T eventually reversed course and in 1993 bought a cellphone business.) The Xerox PARC lab famously developed a graphical user interface in the 1970s, but left it to others, like Steve Jobs, to lead its commercialization.
Now the world is deep into a new era, and everyone is trying to call the future of AI, electric vehicles, self-driving cars, robotics, bitcoin, nuclear fusion and quantum computing.
Luckily, it’s possible to get better answers about the future of technology. But we need to start with better questions.
In 2019, venture capitalist Vinod Khosla invested $50 million in OpenAI, twice the biggest initial investment he had ever made. It was a year before OpenAI released GPT-3, the generative AI model that provided the foundation for the conversational ChatGPT app in 2022. And Khosla had begun the investment process in 2018, at a time when, by his own judgment, AI-based products like virtual assistants performed poorly, even laughably, compared with humans.
Yet he invested in OpenAI anyway. I wondered how he knew.
“It was the rate of change,” Khosla told me when I asked.
He didn’t mean rapid revenue growth, the kind of change that startups often use to impress investors. Earliest-stage investors by definition must act before revenue ramps up.
Predicting technology’s trajectory
No, Khosla had been struck in 2018 by the magnitude of talent going into AI, for one thing, and by its overall progress. He was particularly impressed by developments at Google parent Alphabet, which had spun out its Waymo autonomous vehicle unit in 2016, and by Alphabet’s DeepMind, which had developed AlphaFold 1, a breakthrough in predicting protein structure. China’s Baidu was also displaying progress in its AI efforts, like its autonomous driving system Apollo.
As a venture capitalist, however, he couldn’t invest in large public companies, and he wasn’t investing in China. OpenAI struck him as the best option in the field, given its technological trajectory and its ability to attract top-notch engineers.
He didn’t focus simply on how OpenAI’s models performed at that moment. Instead, he looked at the rates of advance within OpenAI as well as those in the broader world of AI.
“You can predict the direction of technology and what is scalable, with 60% to 70% accuracy,” says Khosla, who is also a co-founder of Sun Microsystems.
You just can’t get there by asking up-or-down questions about a moment in time.
Khosla said he previously focused on the rate of change in internet protocol technology in the 1990s, which convinced him that traditional telecom networking standards would be overrun and a new generation of networking companies would arise. That led to his $3 million investment in networking pioneer Juniper Networks as a partner at venture-capital firm Kleiner Perkins, a bet that eventually made the firm $7 billion.
An AI quantum leap?
Khosla today sees comparable rates of change—and opportunity—in a few areas, including AI, robotics and the life sciences.
In AI, he forecasts that researchers will achieve artificial general intelligence within five to seven years.
AGI has any number of definitions, but Khosla describes it as the point at which AI can perform 80% of the work involved in 80% of the world’s economically valuable jobs.
“At this point, I think it is almost beyond question that we will achieve AGI,” Khosla says. In fact, he adds, the conversation has already moved on to what comes after that: superintelligence.
OpenAI Co-founder and Chief Executive Sam Altman drove home the same point in a recent blog post. “We are now confident we know how to build AGI as we have traditionally understood it,” he wrote. “We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word.”
“Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own,” he added, “and in turn massively increase abundance and prosperity.”
There are skeptics, however, who argue AI is bound to underperform those airy expectations.
“Many leading figures in the field have acknowledged that we may have reached a period of diminishing returns of pure LLM [large language model] scaling, much as I anticipated in 2022,” cognitive scientist Gary Marcus said in an X thread responding to Altman’s post. “It’s anybody’s guess what happens next.”
And Khosla has been wrong before. In 2007, Khosla Ventures made one of its worst-performing investments, putting nearly $160 million into KiOR, a biofuel startup that had aspirations of reaching oil-company scale. The startup ended up losing money on every gallon of fuel it produced before filing for bankruptcy in 2014.
But whether Khosla’s assessment of AI turns out to be correct has very little to do with how useful today’s models are—and everything to do with how quickly they continue to improve.
Write to Steven Rosenbush at steven.rosenbush@wsj.com