
LA-AI Insights: Open Source AI in 2025

Your weekly AI news and updates from Lower Alabama

Monday, November 3, 2025

You're reading this on a device that probably runs on open source software, even if you've never thought about it that way.

Mac? That's BSD Unix from the 1970s under the hood. Android phone? Linux. Your smart TV, router, car's entertainment system? Linux again. Most of the world's infrastructure—web servers, databases, development tools—runs on software that anyone can inspect, modify, and share.

We take this for granted now, but it wasn't always obvious that giving away valuable software would work. Microsoft's Steve Ballmer called Linux "a cancer" in 2001. He was spectacularly wrong. Open source became the foundation of modern computing.

We're watching the same debate play out with AI now. Should the most powerful AI models be controlled by a handful of tech giants, or should they be open for everyone to use, study, and improve? The performance gap between proprietary models and their open source cousins shrank from about 8% to less than 2% on common benchmarks over the past year. But unlike early open source software, where "open" had a clear meaning, AI has sparked a fight over what "open" even means.

The Open Source Initiative spent two years defining "open source AI." They landed on three requirements: the model's architecture and code, the trained parameters, and detailed information about the training data.

That third requirement caused problems. Some argue you need the actual training data, not just a description. Others point out that's legally impossible for healthcare data or copyrighted books that companies are already being sued for using.

Most AI models everyone calls "open source" don't actually meet this definition. Meta's Llama models have been downloaded over 650 million times and are widely considered open source. But they fail the test. What we're really talking about are "open weights" models. You get the final trained model, which you can run and fine-tune. Companies save 40-60% on costs compared to proprietary APIs. But you don't get the training data or the training process. You can't reproduce the model from scratch or audit it for bias. You're getting the finished cake without the recipe.

The models making the biggest impact tell an interesting story. Meta's Llama 3.1 with 405 billion parameters matches GPT-4 on several benchmarks. A model you can download and run yourself performing at OpenAI's level changes the economics completely. Meta isn't doing this out of charity. They're trying to commoditize the model layer and compete on infrastructure, where they're strong.

DeepSeek might be the most important story of 2025. This Chinese company claims they built a reasoning model matching OpenAI's performance for just $5.6 million in training costs instead of $100 million. These frontier models—the most advanced AI systems at the cutting edge of what's currently possible—typically cost hundreds of millions to train. The figure is disputed, but if even partially accurate, it proves you don't need infinite money to build cutting-edge AI. The model was downloaded over a million times within weeks.
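The claimed figure can be sanity-checked with back-of-envelope arithmetic. The sketch below uses numbers from DeepSeek's own public technical report as assumptions (roughly 2.788 million H800 GPU-hours for the final training run, at an assumed rental rate of about $2 per GPU-hour); neither input appears in this newsletter, and neither has been independently verified.

```python
# Back-of-envelope check of DeepSeek's claimed training cost.
# Both inputs are assumptions drawn from DeepSeek's public report,
# not independently verified figures.
gpu_hours = 2.788e6          # reported H800 GPU-hours for the final run
cost_per_gpu_hour = 2.00     # assumed USD rental rate per H800 hour

training_cost_usd = gpu_hours * cost_per_gpu_hour
print(f"Estimated training cost: ${training_cost_usd / 1e6:.2f}M")
```

Multiplied out, that lands within rounding distance of the headline $5.6 million, which is exactly why critics focus on what the figure excludes: research experiments, failed runs, salaries, and the hardware itself.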

About 89% of organizations using AI incorporate open source models somewhere, and 63% run them in production serving real customers. The reasons are straightforward: massive cost savings, keeping sensitive data on their own infrastructure, and the ability to customize models for specific domains. Most companies use both open and closed models strategically. Closed models for customer-facing chatbots where polish matters. Open models for internal tools and high-volume processing.

The safety debate gets uncomfortable. Open models are more vulnerable to jailbreaking and adversarial attacks. You can't patch an open model once it's released. Anyone can strip out safety features. But closed models aren't exactly safe either. GPT-4 gets jailbroken successfully 87.2% of the time in certain tests. Open models offer transparency. Security researchers can study them, find vulnerabilities, and develop defenses. More access means more potential for misuse but also more transparency and collective oversight.

The money situation is tricky. Training frontier AI models costs a fortune that keeps growing. Since 2020, closed-source AI companies raised $37.5 billion. Open-source alternatives got $14.9 billion. Only Meta has the resources to sustainably develop truly open frontier models as a strategic investment. Smaller developers face tough questions about funding development while giving models away.

The legal landscape adds complexity. The EU AI Act creates exemptions for open source AI, but most practical applications fall into categories that aren't actually exempt. In the US, there's no comprehensive federal framework yet. China has emerged as a major player, adding international complications.

You might wonder why this matters if you're not training AI models yourself.

It matters because AI is becoming infrastructure like electricity or the internet. If AI remains controlled by a handful of companies, those companies decide what applications are allowed, what content gets filtered, what data gets collected, what prices get charged.

Open source AI distributes that power. Researchers can study these systems independently. Small companies can build competitive products. Developing countries can access cutting-edge technology without dependence on Silicon Valley. Communities can create AI serving their specific needs and languages. But it also means less centralized safety controls and more potential for misuse.

Open source AI has moved from experimental curiosity to production infrastructure. Enterprises depend on it. Researchers rely on it. Communities worldwide build on it. The tension between openness and control, innovation and safety, access and concentration won't resolve cleanly. We're learning to live with that tension, building systems that balance competing values rather than choosing one side absolutely.

The open source AI movement in 2025 isn't about one paradigm defeating another. It's about expanding possibilities so no single approach monopolizes our most transformative technology. The choices we make in the next few years will determine whether that expansion serves broad human interests or narrow ones.



Upcoming Events

We've got a special guest (surprise) speaker lined up, so let's throw a shrimp on the barbie and get to this meetup!


314 Magnolia Ave., Fairhope, AL

This Week in AI

1. OpenAI Signs $38 Billion Cloud Computing Deal With Amazon

OpenAI has signed a massive $38 billion cloud computing agreement with Amazon Web Services, marking a significant strategic shift toward multi-cloud infrastructure. This deal represents one of the largest AI infrastructure investments to date, enabling OpenAI to scale its operations while potentially reducing reliance on Microsoft's Azure platform. The agreement signals the enormous capital requirements for AI scaling and highlights how cloud providers are becoming critical gatekeepers in the AI ecosystem, with implications for competitive dynamics and market consolidation across the industry.

Source: New York Times

2. NHS hospitals to test AI tool that helps diagnose and treat prostate cancer

The UK's National Health Service will pilot AI diagnostic tools for prostate cancer detection and treatment planning across multiple hospitals, marking a significant deployment of AI in clinical practice. This initiative represents a critical test case for AI integration in healthcare systems, with potential implications for diagnostic accuracy, treatment outcomes, and healthcare cost reduction. The pilot's results could influence AI adoption strategies across global healthcare systems and establish precedents for regulatory approval of AI medical devices in critical care applications.

Source: The Guardian

3. LongCat-Flash-Omni: A SOTA Open-Source Omni-Modal Model with 560B Parameters (27B Activated), Excelling at Real-Time Audio-Visual Interaction

A new open-source omni-modal AI model with 560 billion parameters has achieved state-of-the-art performance in real-time audio-visual interactions, using an efficient activation strategy that only engages 27 billion parameters during inference. This breakthrough demonstrates significant advances in multimodal AI efficiency and accessibility, potentially democratizing access to sophisticated AI capabilities previously available only through proprietary systems. The open-source nature could accelerate innovation across industries requiring real-time multimodal AI applications, from robotics to interactive media and customer service.
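The "560B parameters, 27B activated" design is a mixture-of-experts pattern: a router scores many expert sub-networks per token and only the top few actually compute, so most of the model's weights sit idle on any given input. Here is a minimal toy sketch of that routing idea; the expert count, top-k, and layer sizes are invented for illustration and have nothing to do with LongCat's actual architecture.

```python
import numpy as np

# Toy mixture-of-experts layer: many experts, only top_k run per token.
# All sizes below are made-up illustration values, not LongCat's.
rng = np.random.default_rng(0)

n_experts = 64        # assumed number of expert sub-networks
top_k = 4             # experts actually activated per token
d_model, d_ff = 128, 512

experts = [(rng.standard_normal((d_model, d_ff)) * 0.05,
            rng.standard_normal((d_ff, d_model)) * 0.05)
           for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.05

def moe_forward(x):
    # Router scores every expert, but only the top_k compute anything.
    scores = x @ router
    chosen = np.argsort(scores)[-top_k:]
    weights = np.exp(scores[chosen])
    weights /= weights.sum()
    out = np.zeros_like(x)
    for w, i in zip(weights, chosen):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0.0) @ w2)  # weighted expert MLP
    return out, chosen

x = rng.standard_normal(d_model)
y, used = moe_forward(x)
active_frac = top_k / n_experts
print(f"Experts used per token: {len(used)}/{n_experts} "
      f"({active_frac:.1%} of expert parameters active)")
```

The same ratio is what makes a model like this affordable to serve: roughly 27/560, or under 5%, of the weights are touched per token, even though all of them must sit in memory.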

Source: MarkTechPost

4. Microsoft brings autonomous AI agents to 365 Copilot

Microsoft's integration of autonomous AI agents into 365 Copilot represents a strategic shift toward enterprise-grade AI automation. These agents can independently execute complex workflows across Microsoft's productivity suite, reducing manual intervention and transforming business process automation. This development positions Microsoft to capture significant enterprise AI market share while establishing new standards for workplace AI integration. The move signals broader industry adoption of autonomous AI systems in mission-critical business environments, potentially reshaping productivity software economics and competitive dynamics.

Source: The Decoder

5. AI Model Growth Outpaces Hardware Improvements

Analysis reveals AI model complexity is expanding faster than hardware performance improvements, creating a fundamental scaling challenge for the industry. This divergence threatens the sustainability of current AI development approaches and demands strategic shifts in model architecture, training efficiency, and infrastructure planning. The trend signals that future AI breakthroughs will increasingly depend on algorithmic innovations rather than brute computational force, fundamentally altering competitive dynamics and investment priorities across the AI ecosystem.

Source: IEEE Spectrum

6. Extropic's 10,000x AI energy breakthrough

Extropic claims a breakthrough achieving 10,000x energy efficiency improvements in AI processing through thermodynamic computing approaches. This potential advancement addresses one of AI's most pressing challenges: the exponentially growing energy costs of model training and inference. If validated, this technology could fundamentally reshape AI economics by dramatically reducing operational costs and environmental impact, potentially enabling new scales of AI deployment previously considered economically unfeasible.

Source: The Rundown AI

Community Highlights

Ian McDonald took the stage and blew us away with his demo of LaunchBox


Know someone who would enjoy this newsletter?

Forward this email or share the link below

https://la-ai.io/newsletter/view/2025-11-03
Subscribe to LA-AI Newsletter

Join Our AI Community

Get weekly insights on AI innovations and exclusive updates on LA-AI events

Subscribe Now