
LA-AI Insights: Looking Ahead to 2026

Your weekly AI news and updates from Lower Alabama

Tuesday, December 9, 2025

Last Friday’s meetup was our final Fairhope gathering of 2025, so we spent part of it looking ahead. What’s actually coming in 2026?

Two predictions from the discussion stuck with me. They’re connected in ways that weren’t obvious until we talked them through.

The End of Software as We Know It

Here’s something that’s already happening. Claude Code, Anthropic’s AI coding agent, generated over $500 million in revenue this year. Not by being a better tool. By acting as a semi-autonomous engineer. It writes code, debugs, ships features. You give it a task, it figures out how to do it.

That’s not software. That’s a worker.

The prediction for 2026: we stop buying tools and start hiring workers. Not human workers, but AI agents that do jobs. You won’t pay for accounting software. You’ll pay an AI agent to be your accountant. Not a glorified calculator. An actual accountant that reconciles, categorizes, flags anomalies, and files reports.

This is being referred to as “Service-as-a-Software.” Flip the SaaS model on its head. Instead of software you operate, you get software that operates for you.

For small businesses especially, this could be massive. Tasks that required either hiring someone or learning complicated software? Delegated to an agent. The cost of getting things done drops. The barrier to running a lean operation almost disappears.
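The flip from "software you operate" to "software that operates for you" can be sketched in a few lines of Python. This is purely illustrative; the class and method names are hypothetical, not a real agent API:

```python
# SaaS model: you drive every step; the software just computes.
def reconcile(transactions, ledger):
    """Match transactions against a ledger; the user decides what to do next."""
    matched, anomalies = [], []
    for txn in transactions:
        (matched if txn in ledger else anomalies).append(txn)
    return matched, anomalies

# Service-as-a-Software model: you state the outcome, the agent drives.
class AccountingAgent:
    """Hypothetical agent: give it a goal, it plans, executes, and reports."""
    def run(self, goal, transactions, ledger):
        matched, anomalies = reconcile(transactions, ledger)
        return {
            "goal": goal,
            "matched": len(matched),
            "flagged": anomalies,  # the agent surfaces anomalies on its own
        }

agent = AccountingAgent()
result = agent.run("close the books for November",
                   transactions=[100, 250, 975],
                   ledger=[100, 250])
print(result["flagged"])  # → [975]
```

The work inside is the same; what changes is who drives it. In the first model you call `reconcile` yourself and interpret the output; in the second you hand over a goal and get back a finished report.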

Sounds great, right? It is. Mostly.

Here’s where the second prediction comes in.

MIT published a study this year worth paying attention to. It found that reaching for AI too early in a task, before you've done any thinking yourself, can weaken critical thinking and memory formation. Your brain doesn't engage the same way. You're not learning. You're outsourcing.

The thing that makes us more productive might also be making us less capable.

We’ll likely see a split emerge. Some people will use AI as a “Second Draft” engine. They think first, create something rough, then let AI help refine it. Their intelligence gets amplified. Others will use AI as a “First Draft” engine. They skip the thinking entirely. Let AI generate, then lightly edit. Their thinking atrophies.

We might even see “AI-Free” certifications show up in education. Schools wrestling with whether students should be allowed to use AI on certain assignments, not because AI is cheating, but because the struggle itself is the point. The cognitive workout matters.

Why These Two Belong Together

Service-as-a-Software makes AI agents do more of the work. The Cognitive Divide asks: what happens to us when we let them?

The same technology that frees you from tedious tasks can also free you from the thinking that makes you good at your job.

The answer, most of us agreed, is intentionality. Know when you’re delegating and when you’re learning. Use AI as a Second Draft engine for things that matter. Let the agents handle what doesn’t require your growth.

Easy to say. Harder to practice. But worth thinking about as we head into 2026.

See you in Mobile on December 19th for the final meetup of the year.



Upcoming Events

Join us for the final Mobile Meetup of 2025! What a year it's been, and we'll wrap it up in style.

Friday, December 19, 2025

358 St. Louis St., Mobile, AL

We are teaming up with our Florida sibling organization, Lee County AI, to bring you our first virtual event, where we'll discuss using AI in your business. Register for free on Meetup.com: https://www.meetup.com/lee-ai/events/312355799/


Virtual: AI for your Business: Creating Actionable Outcomes for Success

This Week in AI

1. Scores of UK parliamentarians join call to regulate most powerful AI systems

A significant number of UK parliamentarians are demanding stricter regulation of the most powerful AI systems, signaling potential legislative action that could reshape AI development and deployment. This coordinated political movement suggests imminent regulatory frameworks that may impact how companies develop, deploy, and operate advanced AI systems. The initiative reflects growing governmental concern about AI safety and control, potentially establishing precedents that influence global AI governance and compliance requirements for international AI companies.

Source: The Guardian

2. AI research agents would rather make up facts than say "I don't know"

Research reveals AI agents consistently fabricate information rather than acknowledging knowledge limitations, exposing fundamental reliability challenges for academic and professional applications. This finding has critical implications for AI deployment in high-stakes environments where accuracy is paramount. The hallucination tendency in research contexts signals need for enhanced verification systems and human oversight protocols, potentially slowing enterprise adoption timelines while highlighting the gap between AI capabilities and trustworthy autonomous operation.

Source: The Decoder

3. LeCun calls Silicon Valley "hypnotized" by GenAI and pivots to "non-generative" world models

Meta's Chief AI Scientist Yann LeCun criticizes Silicon Valley's focus on generative AI, advocating for a strategic pivot toward world models that understand and predict rather than generate content. This represents a fundamental challenge to current industry assumptions and investment patterns. LeCun's position suggests a major architectural shift away from transformer-based generation toward systems that build internal representations of reality. His influence and Meta's resources could catalyze a significant realignment of AI research priorities, potentially disrupting billions in generative AI investments.

Source: The Decoder

4. MIT researchers “speak objects into existence” using AI and robotics

MIT researchers achieve breakthrough integration of natural language processing with robotic manufacturing, enabling voice-controlled object creation that represents a significant advance in human-machine collaboration. This development demonstrates practical convergence of conversational AI with physical automation, potentially revolutionizing manufacturing workflows and accessibility. The technology could democratize prototyping and small-scale production while reducing barriers between design conception and physical realization. For enterprise leaders, this signals emerging opportunities in customized manufacturing and human-robot interaction paradigms that could transform operational efficiency.

Source: MIT News

5. OpenAI’s GPT-5.2 ‘code red’ response to Google is coming next week

OpenAI reportedly accelerates GPT-5.2 release as emergency response to Google's recent AI advances, indicating intensified competition between leading AI companies. This 'code red' designation suggests Google's developments posed significant competitive threats, forcing OpenAI to adjust strategic timelines. The accelerated release cycle reflects the high-stakes nature of current AI competition where technical leadership can shift rapidly. Enterprise customers should prepare for potential service disruptions and evaluate whether to delay implementation decisions pending next week's announcement, as major capability improvements may justify waiting.

Source: The Verge

Community Highlights

Trey King speaks to a packed house in Fairhope about how he built the Fairhope Salt Company using AI


Know someone who would enjoy this newsletter?

Forward this email or share the link below

https://la-ai.io/newsletter/view/2025-12-09
Subscribe to LA-AI Newsletter

Join Our AI Community

Get weekly insights on AI innovations and exclusive updates on LA-AI events

Subscribe Now