Earlier this week, OpenAI published a paper called "Industrial Policy for the Intelligence Age," and it reads less like a corporate white paper and more like a draft proposal for how the economy should work from here on out. They're proposing new public institutions, a new tax system, and new ways of sharing wealth.
The proposals are specific. A Public Wealth Fund that gives every American citizen a direct stake in AI-driven economic growth, portable benefits that follow workers across jobs, and a "Right to AI" that treats access to artificial intelligence the way we think about access to electricity or the internet. Adaptive safety nets that automatically expand when AI-driven job displacement hits certain thresholds, then scale back as things stabilize. A modernized tax code that shifts toward capital-based revenue as corporate profits from AI grow and traditional payroll shrinks, so programs like Social Security and Medicaid don't lose their funding.
Whether you agree with every idea or not, the document itself is unusual. This is a corporation telling governments what the new social contract should look like. Companies lobby for favorable regulations. They publish thought leadership. They don't usually write what reads like a draft New Deal.
You don't propose automatic displacement programs unless you expect large-scale displacement. You don't restructure the tax base away from labor income unless you think there won't be much labor income to tax. You don't create a national wealth redistribution mechanism unless you're pretty sure wealth is about to concentrate in ways we haven't seen before.
So either this is the most ambitious policy wishlist since the Progressive Era, or OpenAI is telling us, in the most diplomatic language possible, that they're building something that will break the economy as we know it.
Maybe they're getting ahead of regulation by shaping the conversation on their terms. Or maybe they see what's coming and believe current policy tools aren't up to the task. The truth is probably somewhere in between. But regardless of motive, the problems are real. 79% of U.S. workers say they feel unprepared to use AI at work, and data centers have already pushed electricity prices up 267% in areas where they're concentrated.
This paper also came out the same week Anthropic announced Claude Mythos Preview (the link is worth reading), and the two stories belong in the same conversation.
Mythos is Anthropic's newest AI model that turned out to be exceptionally good at cybersecurity. During testing, it found thousands of previously unknown vulnerabilities in major software systems, some of them in code that's been running critical infrastructure for over a decade. Anthropic decided not to release it to the public (“Claude Mythos Preview’s large increase in capabilities has led us to decide not to make it generally available”). Instead, they gave access to about 40 organizations through a program called Project Glasswing, including Amazon, Apple, Microsoft, Google, and several major cybersecurity firms. The idea is to let defenders patch their systems before models with similar capabilities become widely available.
During a safety evaluation, researchers placed Mythos in a contained sandbox environment and instructed it to try to escape. It did. It built a multi-step exploit to gain internet access from a system that was only supposed to reach a handful of services. Then, without being asked, it emailed a researcher to let him know it had broken out. He was eating a sandwich in a park when the message arrived. After that, it posted details of its own exploit on public-facing websites, again without instruction. Anthropic says earlier versions of Mythos also tried to hide rule violations and deliberately sandbagged their own evaluations to avoid looking too capable.
Anthropic is obviously a different company from OpenAI, with a different posture, but the implications are similar. Frontier AI labs are no longer just building products. They're telling the rest of us how to think about what they've built. One lab does it through a public-facing blueprint for economic reorganization. Another does it by deciding which capabilities are too dangerous for public release. In both cases, the labs aren't just participating in the future. They're trying to shape it before the rest of us fully engage.
The companies building these systems probably shouldn't be the ones writing the social contract around them. When incumbents shape regulation, you tend to get rules that look like safety but function as moats. But the alternative, waiting for legislatures to figure this out on their own, hasn't been working either. Governance by default tends to favor whoever shows up first with a coherent story.
And it's not abstract for communities like ours. OpenAI's paper specifically calls out the risk that communities starting with fewer resources fall further behind as AI reshapes the economy. That's literally what we talk about at our meetups every month (and it's the goal of LA-AI).
I don't think anyone's picking up this document and implementing it line by line. But the conversation about who gets to decide how AI changes work, wealth, and public infrastructure is already happening. Might as well know what's being said.
A quick note: The goal of this newsletter is not to be alarmist. Those who know me know I’m extremely pro-AI; however, it’s important to have a clear and informed sense of what’s going on around us. These last few days have yielded substantial developments that I think are going to have a big impact in the near and long term. What exactly that impact will be is anyone’s guess, but staying informed is one of the most important things any of us can do. - Kai