The AI safety debate is tearing Silicon Valley apart


The long-simmering fault lines within OpenAI over the safe deployment of powerful AI models like GPT and DALL-E, the engines behind OpenAI’s ChatGPT and image-generation services, came to a head on Friday when the organization’s nonprofit board of directors voted to fire then-CEO Sam Altman. In a brief blog post, the board said that Altman had not been “consistently candid in his communications.” Now rumors are swirling about Altman’s next move, and a possible return.

But OpenAI is not the only place in Silicon Valley where skirmishes over AI safety have exploded into all-out war. On Twitter, there are two camps: the safety-first technocrats, led by venture firms like General Catalyst in partnership with the White House; and the self-described “techno-optimists,” led by libertarian-leaning firms like Andreessen Horowitz.

The technocrats are making safety commitments, forming committees, and establishing nonprofits. They recognize AI’s power and believe the best way to harness it is through cross-disciplinary collaboration.

Hemant Taneja, CEO and managing director of General Catalyst, announced on Tuesday that he had led more than 35 venture capital firms and 15 companies to sign a set of “Responsible AI” commitments authored by Responsible Innovation Labs, a nonprofit he cofounded. The group also published a 15-page Responsible AI Protocol, which Taneja described on X as a “practical how-to playbook.”

Taneja’s tweet was quickly ratioed. Praying for Exits, a Silicon Valley meme account and investor, posted a screenshot of messages in which an AI researcher named Rohan Pandey canceled an upcoming meeting with an investor at Insight Partners, a firm that had also signed the Responsible AI commitments. Pandey said the commitments would “endanger open-source AI research & contribute to regulatory capture.”