When Sam Altman first told her that he'd never let OpenAI go corporate, that what he and his colleagues were building was too powerful to be driven by investors, Catherine Bracy more or less believed him.
The conversation took place in 2022, when Bracy, CEO and founder of the social mobility-focused nonprofit TechEquity, was interviewing Altman for a book she was writing about the dangers of venture capital. It was before Altman's mysterious firing and unfiring a year later, after which he mostly stopped responding to Bracy's texts.
And ever since then, OpenAI, which was originally founded as a nonprofit in 2015 to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return," has been publicly trying to escape the confines of its charitable roots. Today, OpenAI comprises both a corporate arm focused on building and selling AI and a nonprofit arm with a stated mission of ensuring that AI benefits people.
During the controversial process of trying to fully sever the two in 2024, OpenAI lost about half of its AI safety staffers and much of its senior leadership. That was followed by intensified scrutiny from state attorneys general, nonprofit legal experts, competitor companies, effective altruists, Nobel Prize winners, huge swaths of California's philanthropic community, and one of its original funders, Elon Musk. Different sides had different interests, but the overall argument was that shifting to a for-profit model would create a fiduciary duty to investors that would inherently clash with its original mission of safety and public benefit.
Is OpenAI's new foundation a $180 billion distraction?
- Last October, OpenAI agreed to make its nonprofit arm very rich. The OpenAI Foundation is now worth about $180 billion, and it has two primary aims:
- Helping the world adapt to and benefit from AI by giving money to charity.
- Acting as a moral compass for OpenAI the company, especially when it comes to safety and security decisions.
- The foundation has already given away about $40.5 million so far, a small fraction of the billions it plans to eventually donate. But critics see the donations as a distraction.
- While OpenAI says its foundation has the final say on security and safety-related decisions, the company has come under scrutiny in recent months for striking a deal with the Pentagon, fighting against statewide AI legislation, and testing ads for free users.
- Even if the foundation does eventually give away billions of dollars, it may never be enough to make up for what the public lost in allowing OpenAI to go corporate.
Still, OpenAI did finally strike a contortive restructuring deal last October. Essentially, the for-profit arm became what is known as a public benefit corporation (PBC), called the OpenAI Group. The original nonprofit became the OpenAI Foundation, which holds a 26 percent stake in the PBC, currently worth $180 billion, plus a sliver of exclusive legal control over certain major decisions.
One effect of the transition was that it essentially required OpenAI to put a number on what it owed the public for converting what had been a project for all humanity into something that most directly benefits the company's investors. The resulting stake of the OpenAI Foundation is large enough to instantly make it one of the wealthiest charities in the country, or in OpenAI's words, the "best-equipped nonprofit the world has ever seen." On paper, at least, the foundation is now considerably richer than the entire country of Luxembourg. Even the Gates Foundation has only $77.6 billion in assets, less than half of what the OpenAI Foundation can draw from, though it's important to note that most of the OpenAI Foundation's wealth is locked in fairly illiquid shares of the still-private company, which limits how quickly any money can be given away.
Still, its sheer size means that the OpenAI Foundation stands to eventually become a transformative presence on the philanthropic stage, one way or another. But while OpenAI says the foundation will eventually give out many billions of dollars in philanthropy to ensure that "artificial general intelligence benefits all of humanity," it's uncertain that a socially beneficial philanthropy can exist side by side with a company that's fighting an existential battle over who will dominate the AI industry.
"The unspoken truth here is that they're never going to make a decision that's bad for the company," Bracy said. "These two entities cannot live under the same roof" where "the mission is in control." (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)
The foundation's first gifts came in the form of $40.5 million in no-strings-attached grants to over 200 community nonprofits, like churches, food banks, and afterschool programs. Notably, most grantees had little to no connection to AI or technology. Just as notably, several of these early grantees happen to be members of EyesOnOpenAI, a coalition of California nonprofits critical of OpenAI's privatization that formed in 2025.
But there are signs the foundation will soon pivot into grantmaking that's more clearly connected to the company's original charter, which aimed to ensure that the benefits of AI are broadly distributed while also prioritizing long-term safety in the technology's development. On Feb. 19, OpenAI (the company, not the foundation) announced a $7.5 million grant in conjunction with Microsoft, Anthropic, Amazon, and other major tech companies for a new, international project aimed at researching how to make AI systems safer.
"The unspoken truth here is that they're never going to make a decision that's bad for the company."
— Catherine Bracy, TechEquity founder and CEO
The real questions around the OpenAI Foundation have less to do with how much it's giving, and to whom, than with whether it's actually able to carry out its contractual oversight role. In theory, the foundation should be ensuring that OpenAI is the standard-bearer for ethical decision-making at the frontier of AI development. That would be a novel contribution to the field, and an embodiment of OpenAI's original mission, that no amount of grantmaking could replace. Yet a series of troubling recent decisions by the company hardly seems to bear out that vision.
OpenAI has begun its new corporate journey by debuting ads on its free-tier service, firing an executive who raised safety concerns about a soon-to-come NSFW mode for ChatGPT on charges of sexual discrimination against a male colleague, and burning cash while its president funnels millions of dollars into Donald Trump's super PAC. OpenAI President Greg Brockman has also teamed up with the venture capital firm Andreessen Horowitz and Palantir's co-founders to fund a $125 million super PAC aimed at promoting AI-friendly policies. Along with Google, xAI, and Anthropic, OpenAI has also come under scrutiny in recent weeks for its defense contracts with the Pentagon.
When OpenAI succeeded in its campaign to free its foundational new technology from nonprofit control, it opened the door to many of these decisions. Even $180 billion in charity might not be enough to make up for the difference.
How OpenAI shed its nonprofit skin
Corporate charity is ubiquitous in the tech world, especially among the largest players. Microsoft plans to donate $4 billion in cash and AI cloud technology to schools and nonprofits by 2030. Google gives away some $100 million annually, often to organizations focused on artificial intelligence and technology.
But from the beginning, OpenAI was different. Rather than making money and giving some of it to charity, OpenAI was the charity. It was founded as a nonprofit research lab with about $1 billion in start-up donations, mostly from tech titans like Altman, Brockman, and Elon Musk.
There are some structural advantages to being a charity. You can't accept investments, but you can accept donations, and you don't have to pay most taxes. What's more, in those early days, OpenAI's stated mission, to build safe AI without the pressures of financial incentive, gave it a major boost when it came to recruiting rarified talent. Machine learning prodigy Ilya Sutskever told Wired in 2016 that he chose to leave Google to become OpenAI's chief scientist "to a very large extent, because of its mission."
But there were limits to being a fully nonprofit entity. In pursuit of financing amid the rising computing costs of cutting-edge AI, OpenAI created its capped-profit subsidiary in 2019 to handle a new $1 billion investment from Microsoft. Three years later, ChatGPT took the world by storm. In 2023, Sutskever and other members of OpenAI's board tried and ultimately failed to oust Altman amid accusations of dishonesty. (Altman denied those accusations.) In 2024, the organization announced its intention to go fully corporate and splinter off the nonprofit into its own fully independent entity.
The transition to for-profit "just didn't smell right," said Orson Aguilar, head of LatinoProsperity, an economic justice nonprofit, and Bracy's co-leader at EyesOnOpenAI. He wasn't alone: By early 2025, a dozen former OpenAI employees had filed an amicus brief aimed at stopping the conversion because it would "fundamentally violate its mission." And more than 60 nonprofit, philanthropy, and labor leaders, many of them based in OpenAI's home state of California, agreed that the attempt to privatize felt unfair given the extent to which the company had benefited from its tax-free status during its early development.
To grasp what this all means, try thinking of OpenAI's for-profit arm as an angsty tween and the nonprofit as her well-meaning but often powerless parent. For years, the tween had been allowed to do her own thing, but only within certain limits: she still had to do her homework and get home by a certain time. Now imagine she's sick of having a curfew. "Nobody else has one!" She still lives in her mother's house, but she wants to follow her own rules.
That's sort of what happened here. Up until now, OpenAI's for-profit subsidiary had a capped-profit model, meaning there were limits on how much money investors could make. But this new deal paved the way for the for-profit to become a full-time corporate girlie, charitable bylaws be damned. And while OpenAI's new public benefit corporation still technically exists under the original nonprofit's control, it mostly follows its own rules. It can raise as much money as it wants, and eventually, it will likely go public.
But California history did offer some hope that the public might at least get some meaningful benefit from the transition. Back in the 1990s, California's branch of the health insurer Blue Cross Blue Shield, then a nonprofit called Blue Cross of California, decided to privatize. After some haggling with state regulators, the company agreed to forfeit all of its assets, worth $3.2 billion, to a pair of independent nonprofits in exchange for going private. The result was the California Endowment, which is now the state's largest health foundation.
Many nonprofit leaders in California hoped that OpenAI, which is headquartered in the state, would strike a similar deal, ceding a majority of its assets to a fully independent nonprofit. And those assets were, and are, enormous.
Gary Mendoza, a former state official who oversaw the Blue Cross deal, estimated the OpenAI nonprofit's rightful assets at over $250 billion, or half the company's $500 billion valuation. "Anything short of 50 percent," he told the San Francisco Examiner last year, "is a missed opportunity." And beyond money for the public, assuming the nonprofit kept its shares, that would add up to enough influence to genuinely shape OpenAI's corporate decision-making at a key moment for the future of artificial intelligence.
Given that the OpenAI Foundation ended up with little more than a quarter of the final company, that is clearly not what happened. But EyesOnOpenAI's years-long lobbying effort was not a total bust. The criticism proved powerful enough that last May, OpenAI was forced to give up on an initial plan to spin its nonprofit assets off into a new organization wholly disconnected from OpenAI, which would have left the nonprofit with no legal control over the for-profit arm.
On paper, the new deal includes some meaningful concessions. It contractually requires the nonprofit mission to come first on safety and security issues, with no regard to shareholder interests. The memorandum also calls on OpenAI to "mitigate risks to teens" specifically. It made the foundation the controlling shareholder of the corporation, affording it the right to appoint corporate directors and oversee significant decisions like a sale.
If OpenAI abided by all of its terms and eventually started giving away billions of dollars in philanthropy each year, then the world, or at least California, where many of OpenAI's grants have been concentrated, could stand to seriously benefit from it.
Random acts of corporate kindness
And this brings us to the $40.5 million that OpenAI gave to over 200 nonprofits toward the end of last year.
Many of these charities applied to the grant with sophisticated ideas for helping their communities integrate or adapt to AI, though they can ultimately use the grants however they see fit. Among them were public libraries, Boys and Girls Clubs, churches, food banks, and legal aid nonprofits. Coming at a moment when a majority of the nation's nonprofits face existential funding cuts, "it was just the perfect timing," said Thomas Howard Jr, head of Kidznotes, a North Carolina nonprofit focused on music education that received $45,000 in OpenAI's first round of grants.
"There's nothing I've seen that gives me reassurance that they'll catch the important safety issues when they come up, or that they'll be doing a thorough investigation of the grantmaking opportunities."
— Tyler Johnston, Midas Project executive director
So civil society's fight over the OpenAI transition won at least enough concessions to help these worthy organizations and to retain some semblance of nonprofit control over some of the for-profit's actions. Why, then, do so many people in the philanthropic community remain so negative about the foundation?
"I'm all for nonprofits getting money," said Bracy, the head of TechEquity. "I don't begrudge any organizations that took the money, but I don't think it's some indication that OpenAI is living up to the mission of the nonprofit."
$40.5 million, of course, is barely 0.02 percent of the OpenAI Foundation's on-paper $180 billion windfall. How the foundation will eventually spend the other 99.98 percent remains to be seen, though the foundation has said that at least $25 billion will ultimately go to scientific research and what it's calling "technical solutions for AI resilience." The company plans to announce a second wave of grants, directed at organizations using AI to work on issues like health, in the coming months.
"We're doing the important work of engaging with experts, learning from communities, and shaping a point of view on where Foundation investments can make the greatest difference," the OpenAI Foundation's board of directors said in response to a request for clarity on where future funding will go. "We look forward to sharing more soon."
But so far, critics remain skeptical. OpenAI has done little to prove that its newfound philanthropy is more than just "a smoke and mirrors show," argued one member of the Coalition for AI Nonprofit Integrity (CANI), a coalition composed largely of AI insiders, including former OpenAI employees, furiously opposed to the restructuring. He spoke on the condition of anonymity because he feared retaliation from OpenAI, which has accused CANI of being a front funded by Musk. (CANI has denied receiving any such funds, though not for lack of trying. If you scroll to the bottom of OpenTheft, a website created by CANI, you'll find a direct plea to Musk for donations.)
While a spokesperson for OpenAI said that the foundation is in the process of building a dedicated team, and has sought the input of both nonprofit leaders and experts on how society can adapt to AI, the company has yet to make any major staffing announcements for its grantmaking arm. For now, with the exception of Zico Kolter, the head of the nonprofit's safety committee, the foundation board still shares the same members as the corporate board, including CEO Sam Altman. The idea is that these board members can put on different hats when meeting about nonprofit versus corporate priorities, asserting the foundation's oversight when needed. But it has created the appearance of a conflict of interest.
When asked for mechanisms and examples of how the foundation has responded to situations where its mission conflicts with shareholder interests, given the overlapping board membership, the spokesperson said that OpenAI has conflict-of-interest policies and governance procedures in place to ensure its directors consider only the mission when they meet, as they regularly do, about nonprofit issues.
The company also said the foundation board constantly exercises its oversight role, including for all major new product releases, like last month's release of GPT-5.3-Codex, an advanced agentic coding model. The AI watchdog group the Midas Project, a frequent thorn in OpenAI's side, accused the company of violating safety standards, an allegation that OpenAI fervently denied.
In any case, since the OpenAI Foundation is not a separate entity with its own independent board, some critics have compared it to other feel-good corporate social responsibility ventures, like McDonald's Ronald McDonald House, Walmart's healthy food program, and Home Depot's work with veterans.
Corporate social responsibility has its place, and it can do real good. But based on the OpenAI Foundation's structure and how it has conducted its grantmaking so far, Bracy believes it will probably never fund anything "they see as a threat to the growth of the company," despite the fact that the need for guardrails on unrestricted AI development featured prominently in the company's original mission. "They're going to do what's best for the bottom line of the for-profit."
Critics like Bracy also doubt the OpenAI Foundation's other primary prerogative, which is to govern all safety- and ethics-related issues for the broader organization, including the responsibility to review new products.
"Instead of a vehicle to serve humanity, it's become a vehicle to serve one person and a few of his friends and investors."
— Anonymous member of CANI
While the nonprofit and its mission do legally retain control over the OpenAI corporation, particularly when it comes to safety issues, that may add up to little, given that the OpenAI Foundation does not appear to be an independently governed foundation. It isn't, in fact, even technically a foundation, but a public charity, which means it's not required under IRS rules to pay out a certain percentage of its assets each year.
And while the nonprofit retains significant oversight powers on paper, including the authority to halt AI releases it deems unsafe, in practice, critics say, it's unclear whether it will ever use them.
Increasingly, OpenAI has also been wading into political lobbying efforts that seem at odds with its mission to promote long-term safety in AI development. When California lawmakers were debating SB 53, a law requiring transparency reports from major AI companies, OpenAI lobbied against it. And the company has come under intense scrutiny in recent weeks for its contract with the Pentagon, which blacklisted OpenAI's rival Anthropic for raising ethical concerns about the use of its technology.
Why the fight isn't over
OpenAI's new corporate arrangement is very, very new. It's still possible that OpenAI's grantmaking arm really does staff up, and that the nonprofit builds an independent board with the power to enforce hard ethical decisions for the company, even when they hurt investors' returns.
"They have a lot of freedom to continue to do good," said Tyler Johnston, executive director of the Midas Project, but that would require them to "really shake things up" and "show that they've created the scaffolding that will enable them to actualize their mission."
But so far, "there's nothing I've seen that gives me reassurance that they'll catch the important safety issues when they come up," he said. "Or that they'll be doing a thorough investigation of the grantmaking opportunities."
If OpenAI doesn't abide by the terms of its new contract (if the company, for example, tries to thwart an attempt to roll back a dangerous new tool), then California's attorney general does have the power to demand answers from the company and, in theory, revisit the agreement's terms.
Beyond the agreement, there are a few fairly public means by which OpenAI's former admirers, skeptics, and nemeses are still trying to press rewind on the restructuring.
Chief among them is Elon Musk, OpenAI's most prominent original donor and co-founder. In between trading embarrassing jabs with Altman on X, Musk took OpenAI to court last year over claims that he was "assiduously manipulated" into donating tens of millions of dollars to a nonprofit research lab that was an "opaque web of for-profit OpenAI affiliates."
A judge has found enough cause for the case to proceed to trial this April. Musk is suing for up to $134 billion in damages, though OpenAI has told its investors that it believes it would only be on the hook for Musk's $38 million in original donations. OpenAI, for its part, has accused Musk of an "unlawful campaign of harassment."
Meanwhile, CANI is still holding out hope that it can convince the people of California to vote for a hyperspecific ballot measure, the California Charitable Property Protection Act, which would reverse the decision to allow OpenAI, or any other "organizations developing transformative technologies," to go corporate.
"They're cutting corners on safety because of the race to artificial general intelligence that they just want to win," said the member of CANI. "Instead of a vehicle to serve humanity, it's become a vehicle to serve one person and a few of his friends and investors."
So maybe the fight over OpenAI's restructuring isn't completely over, but it's probably on its last legs. And if things continue on the same path, it's unlikely that the public will ever truly benefit in the way it should, given the charitable advantages OpenAI enjoyed in its early days. At the very least, $40.5 million is just not going to cut it. Even $180 billion might fall far short.
"I think it's them saying, 'Listen, I dare you to enforce this,'" said Bracy, who believes OpenAI is "banking on the fact that they're worth almost a trillion dollars, and they have endless resources, and the state of California doesn't."


