Most of us are used to thinking about AI as something that talks about work. It explains things, summarizes documents, maybe drafts an email or two. But over the past few weeks, something different has started to land: AI that doesn’t just describe what you should do next, it actually goes and does it for you—clicking, typing, and navigating software like a very patient, slightly nerdy assistant.
Anthropic calls their version “computer use.” Perplexity calls theirs simply “Computer.” Under the branding, the core idea is the same: you describe an outcome in plain English, the AI sees a screen the way you do, and then it works through the clicks and keystrokes to get the job done. It can open a browser, log into a portal, move data from one place to another, and save files—without you writing code or building a custom integration.
If that sounds abstract, bring it closer to home. Imagine telling an assistant: “Pull last month’s invoices from our vendor portal, update the spreadsheet in my Dropbox, and email me a quick summary of any invoices over $10,000.” With these new tools, that’s no longer a thought experiment. Anthropic’s computer use model is already showing examples where Claude fills in web forms using data from a local spreadsheet, plans events by hopping between maps, reviews, and calendars, and automates the sort of tedious screen-hopping that eats whole afternoons. Perplexity’s Computer is being used to assemble huge spreadsheets, draft reports, and coordinate multi-step workflows for non-technical professionals by chaining together research, writing, and data work behind a single prompt.
The important nuance for our readers is this: you don’t have to be “technical” to benefit from this, but you do have to think in terms of outcomes and guardrails. Perplexity Computer is the more approachable example right now. It runs in the cloud as part of a premium plan, and from the user’s perspective it looks like a powerful assistant that takes a goal—“create a market brief on mid-sized logistics companies in the Southeast and structure it as a slide deck for my Monday meeting”—and quietly spins up a whole set of sub-agents and models to research, analyze, and package the result. You never see the 19 different AI models Perplexity is juggling in the background; you just see the finished work.
Anthropic’s computer use, by contrast, is still more of a power-user and developer tool today. It’s designed to run on an actual desktop environment, with Claude controlling a virtual mouse and keyboard, a text editor, and a terminal. That makes it incredibly flexible—anything you can do on a computer, it can in theory learn to do—but it also means someone has to set up the environment, supervise what it’s doing, and think carefully about access and security. It’s a glimpse of where this is headed on local machines, whereas Perplexity is showing what it feels like when the same idea is wrapped in a consumer-ready product. Right now these “computer use” features are bundled into premium plans, but as with most things in technology, expect them to trickle down to the lower tiers and, ultimately, the free ones.
The bigger story, though, is the rise of “agents” as a category. For the past year or so, analysts have been arguing that AI agents—systems that can plan, take actions, and adapt based on feedback—would become a defining workforce trend. Market forecasts now project tens of billions of dollars flowing into this space by the end of the decade, with especially fast growth in areas like finance, professional services, and operations-heavy roles. In other words, the jobs where your day is mostly screens and systems are exactly where these agents are going to show up first.
So what does that mean for someone running a business here on the Gulf Coast who does not want to install Docker or read API docs? A few very practical things:
First, if a large chunk of your day is spent clicking through the same three to five systems—accounting, CRM, property management, inventory, HR software—this is the category to watch. Early adopters are already using AI agents to automate invoice processing, routine reporting, data cleanup, and information gathering that used to require a person bouncing between tabs. Second, “using an agent” will look less like learning a new tool and more like describing a recurring task clearly once, then supervising the AI as it learns to handle that task consistently. You’re shifting from “Do this click, then this click” to “Every Friday, prepare this report and highlight anything unusual.”
Third, and this is the caveat, none of these systems are truly “set and forget” yet. They can misread a screen, click the wrong button, or misunderstand a label. Think of them less as autopilot for your business and more as a junior analyst who works very fast but still needs review on anything financial, legal, or reputational. The leaders who get the most value from this will be the ones who learn how to delegate clearly to an AI, structure repeatable tasks, and build simple checks into their workflows.
Over the next year, you’re going to see “computer use” and “AI agents” show up in more products you already use—office suites, CRMs, vertical SaaS. The interesting question isn’t whether the technology is real; it clearly is. The question is how quickly each of us can redesign our own work so that a digital assistant doing the clicking is normal, and humans are spending more time on judgment, relationships, and strategy.
P.S. You may have noticed a fair number of em dashes in today's newsletter. You're also probably very aware that em dashes are a tell-tale sign of AI writing. But interestingly, I, as a human, also use em dashes, which leads to a quandary: do I change my human writing style so it doesn't look AI-written, or do I just let it roll? I chose to let it roll, with the full acknowledgement that I use AI in just about everything I do.