OpenAI has called on the U.S. government to create a public wealth fund, overhaul social safety net programs, and fast-track electrical grid expansion, in a sweeping set of policy recommendations published on April 6, 2026, aimed at preparing the country for AI-driven economic disruption.

The recommendations, reported by Bloomberg Technology, mark one of the most explicit acknowledgements by a leading AI developer that the technology it is building could destabilize labour markets and strain public infrastructure on a significant scale. They also represent OpenAI's most detailed engagement with domestic economic policy to date, moving the company well beyond its earlier focus on safety standards and export controls.

The proposal for a public wealth fund is a striking development in its own right: a frontier AI lab openly planning for the economic fallout of its own products.

A Public Wealth Fund — and What It Would Do

The centrepiece of OpenAI's economic proposals is the creation of a public wealth fund — a government-managed investment vehicle that, in theory, would allow ordinary citizens to share in the financial gains generated by AI adoption. The concept draws on models used in countries such as Norway and Singapore, where sovereign wealth funds distribute returns from national assets to the broader population.

According to Bloomberg's reporting, OpenAI has not specified exactly how such a fund would be capitalised — whether through taxes on AI-generated profits, licensing revenues, or some other mechanism. The precise governance structure and target beneficiaries also remain undefined in the publicly available summary of the recommendations. That ambiguity will matter enormously if the proposal is to move from advocacy document to enacted policy.

The jurisdiction for these proposals is the United States federal government. As of publication, none of the recommendations carry binding force — they are advisory, representing OpenAI's public policy position rather than any regulatory or legislative commitment.

Grid Infrastructure: The Energy Bottleneck

Alongside the economic proposals, OpenAI is pushing for accelerated development of the U.S. electrical grid — a priority that reflects the company's own operational reality as much as broader public interest concerns. Training and running large AI models is extraordinarily energy-intensive, and grid constraints have become a genuine bottleneck for data centre expansion across the industry.

OpenAI's recommendation aligns with a growing consensus among AI companies, utility operators, and some policymakers that current grid permitting and construction timelines — which can stretch to a decade or more — are incompatible with the pace of AI infrastructure buildout. The company is advocating for streamlined permitting and faster interconnection processes, though the specific legislative vehicles it is backing have not been detailed in available reporting.

Grid modernisation sits at an intersection of energy policy, environmental regulation, and national competitiveness — making it one of the more politically complex asks in the package.

Safety Net Reforms: Speed Over Structure

The third pillar of OpenAI's recommendations focuses on reforming existing social safety net programs to respond more rapidly to economic shocks caused by job displacement. Current U.S. unemployment and retraining systems were largely designed for slower-moving disruptions — factory closures, regional downturns — rather than the kind of broad, sector-agnostic displacement that widespread AI adoption could produce.

OpenAI is advocating for programs that can deploy support faster and more flexibly, though the company has not, based on available reporting, specified whether it is endorsing particular existing proposals such as universal basic income pilots, expanded earned income tax credits, or new workforce retraining authorities. The framing is directional rather than legislative.

This matters for how seriously the recommendations will be taken in Washington. Broad principles are easy to endorse; the hard work of AI economic policy lies in the specific program design, funding mechanisms, and eligibility rules that determine who actually benefits and when.

Industry Context: Who Else Is Saying This

OpenAI is not alone in flagging AI's potential economic disruption. The International Monetary Fund warned in early 2024 that AI could affect nearly 40% of jobs globally, with advanced economies facing higher exposure. Several U.S. think tanks and academic economists have raised similar concerns about the adequacy of existing safety nets.

What is less common is a frontier AI developer putting forward concrete policy asks on economic redistribution. Most major AI companies have focused their policy engagement on safety frameworks, liability rules, and international competitiveness — areas that more directly affect their regulatory environment. OpenAI's willingness to engage on wealth distribution and labour market support represents a different kind of political positioning, whether driven by genuine concern, reputational strategy, or anticipation of regulatory pressure.

It is also worth noting that OpenAI has a direct commercial interest in accelerated grid development and government AI investment — both of which appear in the same policy document as the more redistributive proposals.

What This Means

OpenAI's recommendations put pressure on U.S. lawmakers to treat AI economic policy as urgent rather than speculative — but without binding enforcement mechanisms, the proposals' impact depends entirely on whether Congress and the executive branch choose to act on them.