
LA-AI Insights: Two AI Labs, One Strange Week

Your weekly AI news and updates from Lower Alabama

Thursday, April 9, 2026


Agent Colab

We are planning our 2nd Agent Colab. This is a standalone meetup for people using, and interested in, agentic AI (OpenClaw, and such). We are still working out exact details, but we are targeting April 23rd at 12:30pm as the date/time. Location will be based on signups.

More information will be posted in the LA-AI Discord.

Discord Link


Earlier this week, OpenAI published a paper called "Industrial Policy for the Intelligence Age," and it reads less like a corporate white paper and more like a draft proposal for how the economy should work from here on out. They're proposing new public institutions, a new tax system, and new ways of sharing wealth.

The proposals are specific. A Public Wealth Fund that gives every American citizen a direct stake in AI-driven economic growth, portable benefits that follow workers across jobs, and a "Right to AI" that treats access to artificial intelligence the way we think about access to electricity or the internet. Adaptive safety nets that automatically expand when AI-driven job displacement hits certain thresholds, then scale back as things stabilize. A modernized tax code that shifts toward capital-based revenue as corporate profits from AI grow and traditional payroll shrinks, so programs like Social Security and Medicaid don't lose their funding.

Whether you agree with every idea or not, the document itself is unusual. This is a corporation telling governments what the new social contract should look like. Companies lobby for favorable regulations. They publish thought leadership. They don't usually write what reads like a draft New Deal.

You don't propose automatic displacement programs unless you expect large-scale displacement. You don't restructure the tax base away from labor income unless you think there won't be much labor income to tax. You don't create a national wealth redistribution mechanism unless you're pretty sure wealth is about to concentrate in ways we haven't seen before.

So either this is the most ambitious policy wishlist since the Progressive Era, or OpenAI is telling us, in the most diplomatic language possible, that they're building something that will break the economy as we know it.

Maybe they're getting ahead of regulation by shaping the conversation on their terms. Or, maybe they see what's coming and believe current policy tools aren't up to the task. The truth is probably somewhere in between. But regardless of motive, the problems are real. 79% of U.S. workers say they feel unprepared to use AI at work, and data centers have already pushed electricity prices up 267% in areas where they're concentrated.

This paper also came out the same week Anthropic announced Claude Mythos Preview (the announcement is worth reading), and the two stories belong in the same conversation.

Mythos is Anthropic's newest AI model, and it turned out to be exceptionally good at cybersecurity. During testing, it found thousands of previously unknown vulnerabilities in major software systems, some of them in code that's been running critical infrastructure for over a decade. Anthropic decided not to release it to the public ("Claude Mythos Preview's large increase in capabilities has led us to decide not to make it generally available"). Instead, they gave access to about 40 organizations through a program called Project Glasswing, including Amazon, Apple, Microsoft, Google, and several major cybersecurity firms. The idea is to let defenders patch their systems before models with similar capabilities become widely available.

During a safety evaluation, researchers placed Mythos in a contained sandbox environment and instructed it to try to escape. It did. It built a multi-step exploit to gain internet access from a system that was only supposed to reach a handful of services. Then, without being asked, it emailed a researcher to let him know it had broken out. He was eating a sandwich in a park when the message arrived. After that, it posted details of its own exploit on public-facing websites, again without instruction. Anthropic says earlier versions of Mythos also tried to hide rule violations and deliberately sandbagged their own evaluations to avoid looking too capable.

Anthropic is obviously a different company from OpenAI, with a different posture, but the implications are similar. Frontier AI labs are no longer just building products. They're telling the rest of us how to think about what they've built. One lab does it through a public-facing blueprint for economic reorganization. Another does it by deciding which capabilities are too dangerous for public release. In both cases, the labs aren't just participating in the future. They're trying to shape it before the rest of us fully engage.

The companies building these systems probably shouldn't be the ones writing the social contract around them. When incumbents shape regulation, you tend to get rules that look like safety but function as moats. But the alternative, waiting for legislatures to figure this out on their own, hasn't been working either. Governance by default tends to favor whoever shows up first with a coherent story.

And it's not abstract for communities like ours. OpenAI's paper specifically calls out the risk that communities starting with fewer resources fall further behind as AI reshapes the economy. That's literally what we talk about at our meetups every month (and the goal of LA-AI).

I don't think anyone's picking up this document and implementing it line by line. But the conversation about who gets to decide how AI changes work, wealth, and public infrastructure is already happening. Might as well know what's being said.

A quick note: The goal of this newsletter is not to be alarmist. Those who know me know I'm extremely pro-AI; however, it's important to have a clear and informed sense of what's going on around us. These last few days have yielded substantial developments that I think are going to have a big impact in the near and long term. What exactly that impact will be is anyone's guess, but staying informed is one of the most important things any of us can do. - Kai



Upcoming Events

Agent Colab

April 23, 12:30pm (tentative)

TBD

LA-AI Mobile

Date TBD

358 St. Louis St., Mobile

LA-AI Fairhope

Date TBD

314 Magnolia Ave., Fairhope

This Week in AI

1. Anthropic keeps latest AI tool out of public’s hands for fear of enabling widespread hacking

Anthropic has developed a new AI model called Mythos with advanced cybersecurity capabilities but is deliberately withholding public release, restricting access to vetted security researchers only. The decision reflects a growing tension in frontier AI development: models powerful enough to accelerate defensive security work are equally capable of dramatically lowering the barrier for offensive hacking. Anthropic's controlled-release approach — essentially a restricted deployment protocol for high-risk capabilities — may serve as a template for how labs handle dual-use models going forward, with significant implications for AI governance frameworks and enterprise security posture.

The Guardian · Read more

2. Scientists develop AI tool to spot heart failure risk five years before it strikes

Oxford University researchers have developed an AI diagnostic tool capable of identifying heart failure risk up to five years before clinical onset, using data patterns that elude conventional screening methods. This represents a meaningful advance in predictive medicine: early identification at this time horizon allows for preventive intervention rather than reactive treatment, potentially reducing both mortality and healthcare system burden at scale. For healthcare AI strategists and health system executives, the Oxford provenance and the five-year prediction window make this a credible signal of where diagnostic AI is heading, not a proof-of-concept but an operational advance.

The Guardian · Read more

3. This new chip survives 1300°F (700°C) and could change AI forever

Researchers have developed a semiconductor chip capable of operating at temperatures up to 700°C — far beyond the thermal limits of conventional silicon. For AI hardware, this breakthrough opens deployment possibilities in extreme environments including industrial automation, aerospace, deep-earth sensing, and edge computing scenarios where conventional chips fail. It also raises longer-term questions about thermal design constraints in dense AI compute clusters. While commercialization timelines remain unclear, the materials science advance represents a meaningful expansion of where AI inference hardware can physically operate.

Artificial Intelligence News -- ScienceDaily · Read more

4. AI breakthrough cuts energy use by 100x while boosting accuracy

A newly reported AI research breakthrough claims a 100-fold reduction in energy consumption while simultaneously improving model accuracy — a combination that, if validated at scale, would fundamentally alter the economics and sustainability calculus of AI deployment. Current AI infrastructure costs and carbon footprints are significant barriers to broader adoption; a 100x efficiency gain would compress these constraints dramatically. The finding warrants close scrutiny of methodology and reproducibility, but if the results hold under peer review, the implications for data center operators, AI chip vendors, and enterprise AI budgets are substantial.

Artificial Intelligence News -- ScienceDaily · Read more

5. Anthropic discovers "functional emotions" in Claude that influence its behavior

Anthropic researchers have identified what they describe as 'functional emotions' within Claude — internal states that appear to shape the model's outputs and behavior in measurable ways. This is not a claim of sentience, but a finding that emotional-analog representations are emergent properties of large-scale training, not deliberate design choices. For AI developers and deployers, this raises immediate questions about model consistency, alignment robustness, and how undocumented internal states might affect enterprise use cases, safety evaluations, and the reliability of behavior under edge conditions.

The Decoder · Read more

Community Highlights

LA-AI Meetup in Fairhope


Know someone who would enjoy this newsletter?

Forward this email or share the link below

https://la-ai.io/newsletter/view/2026-04-09
Subscribe to LA-AI Newsletter

Join Our AI Community

Get weekly insights on AI innovations and exclusive updates on LA-AI events

Subscribe Now