OpenAI’s idealism was no match for capitalism

The AGI company slides from transparent to opaque in the face of competition.

OpenAI, an artificial general intelligence (AGI) research laboratory, has long been portrayed as being full of earnest people trying to create ethical, superhuman intelligence. But as an illuminating MIT Technology Review report by Karen Hao shows, the company has struggled with the pull of profitability, with transparency about its breakthroughs and business practices, and with diversity.

Elon Musk, who co-founded OpenAI in 2015 before stepping down in 2018, responded to the report by calling for transparency and regulation. Musk also tweeted that his confidence in research director Dario Amodei “for safety is not high,” despite the report’s somewhat positive impression of Amodei’s intentions.

The wobbling foundation of OpenAI — Early on, OpenAI was a non-profit organization that wanted to safely move the needle forward on AGI. When Tesla and SpaceX CEO Moneybags Musk stepped down due to conflicts of interest, the company opened itself up to more investors and ultimately a $1 billion investment from Microsoft. OpenAI now exclusively uses Microsoft’s Azure for its expansive cloud computing needs, but Microsoft has no special claim to any AGI breakthroughs.

Transitioning into a capped-profit company, OpenAI released a new charter frequently referenced in Hao’s report with the reverence of a sacred text. The new charter was meant to assuage fears about the company’s mission amid criticism of its media hype machine and its hesitance to publish comprehensive models for peer review. The latter criticism generally centers on its language model, which can produce believable articles and essays.

The shift raised concerns for people in and outside of the company, but Hao’s perception of the workforce seems to be one of an insular community whose outlook ranges from skeptical to eager optimism. Despite Amodei’s desire for “variation and diversity” in AGI safety research, Hao reports very little diversity within this community (noting the role of computer science academia in this homogeneity).

“How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how?” Greg Brockman, CTO and co-founder, told Hao about integrating social ethics with technical expertise. “One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need. I don’t think that that strategy is likely to succeed.”

This attitude is emblematic of narrower AI applications and the tech industry overall. Building the technical side of anything without considering the ethical ramifications from the start "bakes" in implicit bias. Brockman’s choice of words is telling: he doesn’t see ethics as an ingredient in AGI, but as an ancillary topping that can be applied later.

With facial recognition AI, we can most easily see how this approach disproportionately harms non-white people. And while this tech was once reserved for the federal government or (relatively) innocuous Facebook photo-tagging, Clearview AI and companies like it are now partnering with local police departments throughout the country.