While you're still asking questions, AI is already acting


About two years ago, when platforms like ChatGPT began gaining mainstream adoption, the initial assessment was simple: a new way to look things up. A more sophisticated search engine with natural language responses. Useful, but still passive.
That first wave moved users away from traditional search and into language models. ChatGPT led the shift, but others quickly followed: Claude, from Anthropic; DeepSeek; Manus; Kimi. Each with its own strengths, each competing for space in a category that was still being defined.
What many people haven’t realized is that that phase is already over.

One of the most significant and least discussed shifts is happening in how people relate to their own health. Submitting blood tests, medical reports, and imaging scans to language models has become common practice. Not as a replacement for physicians, but as an additional layer of analysis: cross-referencing what was said in the consultation with what the data actually shows.
One case illustrates the potential well. The son of a technology professional had been living with an undiagnosed intestinal condition for over five years. Different specialists, different opinions, no resolution. So every exam accumulated over that period was compiled and submitted to a language model. The model cross-referenced the data and identified a specific, treatable condition, a hypothesis none of the physicians had raised. The treatment was applied. The problem was resolved.
Over five years. Multiple specialists. No conclusion. A language model cross-referenced the data and reached a conclusion the conventional process had not.
This is not a criticism of the healthcare system. It’s a demonstration of what happens when a volume of information that no single human can fully process is handed to a system capable of identifying patterns at scale. The model doesn’t replace clinical judgment, but it can expand the scope of investigation in ways that the traditional process, by its natural constraints, cannot reach.
No profession has been more directly impacted by AI than software development. The question about replacement comes up constantly, and the most honest answer is: not replacement, but structural transformation.
Even a highly focused developer writes at most a few hundred lines of production-quality code per day. Depending on the model used and the compute applied, AI coding systems can generate orders of magnitude more, with technical quality and built-in tests. For pure development tasks, specialized models like OpenAI’s Codex already produce code at a volume and speed no human team can match.
That doesn’t eliminate the developer. It changes what’s expected of them.
The developer who understands software architecture, who knows how to break a system into layers, who thinks in terms of scalability, that professional has exactly what models don’t: structural reasoning about the problem before the solution.
What’s emerging is a new profile: the engineer who doesn’t write code directly, but builds prompts with the technical precision needed for AI to produce what actually needs to be built. Engineering logic remains the differentiator. What changes is the layer where it’s applied.
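To make that concrete, here is a minimal sketch, in Python, of what engineering logic applied at the prompt layer can look like. The structure and field names are illustrative assumptions, not a standard; the point is that the architectural decisions are made before the model is ever called.

```python
from dataclasses import dataclass, field


@dataclass
class BuildSpec:
    """A technically precise request for a coding agent.

    The engineering judgment lives in these fields, not in the
    generated code.
    """
    goal: str
    architecture: list[str] = field(default_factory=list)      # layering decisions
    constraints: list[str] = field(default_factory=list)       # non-negotiables
    acceptance_tests: list[str] = field(default_factory=list)  # definition of done

    def to_prompt(self) -> str:
        # Assemble the spec into the text that actually gets sent to the model.
        sections = [
            f"Build the following: {self.goal}",
            "Architecture:\n" + "\n".join(f"- {a}" for a in self.architecture),
            "Hard constraints:\n" + "\n".join(f"- {c}" for c in self.constraints),
            "Accept the result only if:\n"
            + "\n".join(f"- {t}" for t in self.acceptance_tests),
        ]
        return "\n\n".join(sections)


spec = BuildSpec(
    goal="a REST endpoint that returns a customer's open invoices",
    architecture=[
        "handler -> service -> repository; no business logic in the handler",
        "repository reaches Postgres through a connection pool",
    ],
    constraints=[
        "p95 latency under 100 ms at 1,000 requests per second",
        "no raw SQL built by string concatenation",
    ],
    acceptance_tests=[
        "returns 404 for unknown customer IDs",
        "unit tests cover the service layer in isolation",
    ],
)

print(spec.to_prompt())
```

Nothing in this sketch writes code; every field is an engineering decision. That is the layer where the differentiator now sits.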
The tool that most popularized this transition was Lovable. With it, anyone can describe what they want to build in plain language and the platform handles the layout, architecture, user experience, and code quality; the user never writes a single line.
The growth was striking: reportedly from zero to over $100 million in annual recurring revenue within eight months of launch, one of the fastest product ramp-ups on record. The movement energized direct competitors like Vercel’s v0 and Replit, and began displacing part of the agency ecosystem that previously built websites and apps on demand.
The concept got a name: vibe coding. Building software from intent, not technical instruction. The barrier to creating digital products has dropped in a way that has no recent precedent.
For technology leaders, this has a direct implication: team evaluation can no longer be based solely on technical output volume. The ability to define what needs to be built, with enough clarity for an agent to build it correctly, has become just as critical as execution itself.
Looking up information and generating code are already well-understood capabilities. What is solidifying now is different: language models connected directly to the operating environment, with access to files, email, browsers, APIs, and terminals. Agents that don’t just respond. They execute.
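A deliberately minimal sketch makes the category shift concrete. The model call below is stubbed out as a placeholder, and the two tools are illustrative assumptions; what matters is that the loop executes whatever action the model returns, immediately, with the permissions of whoever launched it.

```python
import subprocess

# Tools the agent may invoke. Real deployments often expose far more:
# email, browsers, internal APIs, cloud credentials.
TOOLS = {
    "read_file": lambda path: open(path).read(),
    "run_shell": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
}


def model_decide(goal: str, history: list) -> dict:
    """Placeholder for a language model call. A real agent sends the goal
    and history to a model and parses the chosen action from its reply."""
    return {"tool": "run_shell", "args": ["ls -la"], "done": True}


def run_agent(goal: str) -> None:
    history = []
    while True:
        action = model_decide(goal, history)
        # The decisive line: whatever the model chose is executed, with the
        # credentials and environment of the person who started the process.
        result = TOOLS[action["tool"]](*action["args"])
        history.append({"action": action, "result": result})
        if action.get("done"):
            break


run_agent("summarize the contents of the current directory")
```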
This shift in category brings with it questions that still aren’t being asked with the seriousness they deserve. When an agent acts autonomously, with the credentials and context of whoever configured it, accountability for what it does, what it decides, and what it affects stops being a philosophical question and becomes an operational one.
UPX has been closely tracking this movement. OpenClaw Shield was developed to bring visibility to the behavior of autonomous agents, monitoring in real time what they execute, detecting anomalous behavior, and generating the records that security teams need to investigate and respond. A natural extension of the expertise UPX has built over decades of operating security for critical infrastructure, now applied to a new attack surface.
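This is not OpenClaw Shield’s implementation, only a hedged sketch of the underlying pattern: interpose on every action the agent takes, record it, and flag anything outside an expected baseline. The prefix whitelist below is a deliberately crude stand-in for the richer behavioral models production monitoring would use.

```python
import subprocess
import time

# Crude baseline of commands this agent is expected to run.
EXPECTED_PREFIXES = ("git ", "pytest", "ls")

audit_log = []  # the records a security team would later investigate


def monitored_shell(cmd: str) -> str:
    """Run a shell command on the agent's behalf, but record every call
    and flag anything outside the baseline before it executes."""
    anomalous = not cmd.startswith(EXPECTED_PREFIXES)
    audit_log.append({"ts": time.time(), "cmd": cmd, "anomalous": anomalous})
    if anomalous:
        # This sketch only alerts; a stricter policy could block instead.
        print(f"ALERT: agent attempted unexpected command: {cmd!r}")
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout


monitored_shell("ls -la")   # baseline behavior: logged quietly
monitored_shell("whoami")   # outside the baseline: flagged and recorded
```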
Adoption is moving forward. Visibility is not. And when no one is monitoring what an agent does, the risk decision has already been made; it just hasn’t been noticed yet. In most cases, that awareness only comes when an incident makes it unavoidable.
In the next article, we go deeper into how autonomous agents are already operating in real corporate environments, and what happens when no one is watching.
