
In my last piece (All AI Infrastructure Market Maps Are Wrong), I used Ensemble's proprietary market data to reiterate a widely acknowledged truth: the AI stack is fragmenting too quickly for any static framework to stay relevant. Each layer—data, models, orchestration, deployment—is morphing in real time, and what once looked like a coherent map has become a fluid system with few shared standards. The result: navigating this space means aiming at a moving target.
It’s worth emphasizing that we are in the midst of a watershed moment in the history of technology: the pace of AI development is so rapid that both our high-level mental models and the low-level tools we use are evolving on separate, unsynchronized timelines. We build systems to match our current understanding, but by the time they’re ready, the technological substrate has already shifted beneath our feet, forcing us to rethink the entire conceptual framing. Like the mythical hydra, new challenges emerge with each perceived victory, faster than ever and with ever-more-unpredictable repercussions.
Now, the emergence of "Agentic AI" has accelerated that transformation. These are systems that don’t just reason with language models; they act, learn, remember, and orchestrate multi-step workflows across real software environments. If a single AI agent is an autonomous program executing a task, Agentic AI refers to networks of agents coordinating toward complex, dynamic goals. And as this paradigm takes hold, the question is no longer “what can AI think?” but “what should AI do, how, and through what infrastructure?”
In this piece, we explore three forces reshaping the AI infrastructure stack under the pressure of Agentic AI:
- From Brains to Nervous Systems: The value is shifting from standalone models to orchestration layers—how systems connect, coordinate, and act across tools, data, and memory. Intelligence is no longer about just thinking fast; it’s about executing reliably.
- The Retrofits and the Natives: Most infrastructure players weren’t built for agents—they’re retrofitting. We look at the distinction between ML-native, LLM-native, and Agent-native companies, and what advantages (and blind spots) each brings to the emerging stack.
- The Deployment Bottleneck: Despite the hype, very few organizations have brought agents into real production environments. We dig into the practical barriers—evaluation, security, trust—and why solving them is the key to making Agentic AI more than a demo.
__________
LLMs: From "future-redefining technology" to "one actor in the ensemble" in just a few years
The rise of LLMs triggered an industry-wide scramble to integrate them everywhere. In the early days, their generative power felt world-changing—reasoning, writing, summarizing, translating. That was enough for a while. But as companies rushed to operationalize these models, a core limitation came into focus: they were brilliant minds in locked rooms. No memory. No access to tools. No ability to follow through.
What good is intelligence if it can’t act?
This is the line the industry is now trying to cross. We’re no longer just asking how well LLMs can think, but whether they can actually do the work. While the term “agent” may sound like a buzzy rebrand of “automated workflows,” what’s emerging is something far more fundamental: a push to extend AI beyond isolated outputs and toward integrated, persistent systems that can remember, reason, and interact with the software world around them.
It's a bit jarring: we barely had time to celebrate LLMs as the saviors of work before realizing the journey to meaningfully deploying these models in the workplace was a massive undertaking. The more we tried to plug them in, the more the limits showed. Now, expectations have leapfrogged again.
AI will save the world... eventually?
And that lack of practical impact has real-world consequences, especially as entire swaths of the economy declare themselves ready to lighten workforces and implement AI solutions. This drive for tangible outcomes is why the agentic space is exploding: a recent Cloudera article reports that 96% of organizations plan to deploy agents for everything from system optimization to development and the automation of core business processes. Yet the view from the ground is a stark contrast: a recent Fortune study found that basic AI chatbots made 'no significant impact on earnings or recorded hours' in a massive workplace sample. The demand, then, didn't just shift; it intensified.

As the diagram illustrates, an agentic system is a multi-layered construct. It's not just about the Foundation Model. Crucially, it involves:
- Tool Integration: The agent's "hands and feet"—APIs, webhooks, SDKs, code execution.
- Agent Core Systems: The "brain" with memory, planning (like ReAct), reasoning, and tool selection.
- Orchestration Layer: The "conductor" managing tasks, evaluation, and workflow.
- User Interface Layer: The interaction point via chat, visualizations, etc.
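To make the layering concrete, here is a deliberately minimal sketch of how these pieces fit together in code. Everything in it (the `Tool` and `Agent` classes, the toy calculator and echo tools, the keyword-based "planner" standing in for an LLM's reason-then-act decision) is a hypothetical illustration, not any particular framework's API:

```python
# Minimal illustrative sketch of the layered agent architecture described above.
# All names and the routing heuristic are hypothetical, not a real framework.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Tool:
    """Tool Integration layer: a callable the agent can invoke."""
    name: str
    run: Callable[[str], str]


@dataclass
class Agent:
    """Agent Core: memory, simple planning, and tool selection."""
    tools: Dict[str, Tool]
    memory: List[str] = field(default_factory=list)

    def step(self, task: str) -> str:
        # "Planning": a toy stand-in for an LLM's reason-then-act decision.
        tool = self.tools["calc"] if any(c.isdigit() for c in task) else self.tools["echo"]
        result = tool.run(task)
        # Persistent memory: an audit trail of every tool call.
        self.memory.append(f"{tool.name}: {task} -> {result}")
        return result


def orchestrate(agent: Agent, tasks: List[str]) -> List[str]:
    """Orchestration layer: sequences multi-step work and collects results."""
    return [agent.step(t) for t in tasks]


agent = Agent(tools={
    "calc": Tool("calc", lambda t: str(eval(t))),  # toy "code execution" tool
    "echo": Tool("echo", lambda t: t.upper()),
})
results = orchestrate(agent, ["2 + 3", "hello"])
print(results)       # ['5', 'HELLO']
print(agent.memory)  # the audit trail of tool calls
```

Even at this toy scale, the separation of concerns is visible: swap the routing heuristic for a real model and the lambdas for real APIs, and each layer grows independently.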
This architecture makes Agentic AI a full-stack beast, merging ML/LLM Ops with traditional software engineering. AI Infrastructure is expanding to support these layers, but—you guessed it—new challenges arise. Despite 65% of organizations piloting AI agents, full deployment stagnates at a mere 11%, according to a recent KPMG survey. This deployment gap highlights significant infrastructural hurdles.
Every company is now an AI company. Every AI company is now an AI agent company.
This challenge of bringing agents to robust, production-level deployment is driving a significant evolution within the AI infrastructure market. As companies adapt, their origins offer crucial context. My analysis of over 400 AI Infrastructure companies indicates that 68% began as traditional ML companies, 25% are LLM-native, and ~7% are truly Agent-native. Additionally, 58% of these LLM-native companies are now incorporating agentic features. We see ML-Native giants like Scale AI and Dataiku evolving into the LLM/Agent space, and LLM-Native platforms like LangChain now focusing on agent capabilities. Meanwhile, Agent-Native startups like Arcade (tool integration) and CrewAI (multi-agent orchestration) are emerging to tackle these new challenges head-on.
The reality is the market is still largely anchored in older principles. This shift towards agents means many "agentic" solutions are likely retrofitted onto existing infrastructure. Is this a natural evolution, or are we asking systems built for one purpose to perform tasks they weren't designed for? While initial modifications might work, this approach risks masking deeper architectural mismatches that surface as agents demand more sophisticated orchestration and true autonomy.
Let’s break down what this means for key AI Infrastructure components, contrasting MLOps, LLMOps, and emerging AgentOps:

- Data, Trust & Security (The Unwavering Core): Foundational elements like data quality, robust storage, security, and observability remain vital. With Agentic systems, the stakes skyrocket. The KPMG survey found 82% of leaders expect risk management to be their biggest GenAI challenge, with data privacy and security top concerns (73%). These aren't afterthoughts; they are table stakes for enterprise adoption.
- Evaluation's Rising Complexity: ML had deterministic metrics. LLMs brought subjective evaluations. AgentOps? It’s a new game, demanding assessment of multi-step task completion, tool efficacy, reasoning, and cost attribution.
- Memory, Tool Use & Orchestration (The Agentic Differentiators): These are fundamental for agents. The need for persistent memory and seamless integration/orchestration of diverse tools is where the demand for hyper-connectivity and adaptable infrastructure explodes.
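The evaluation contrast above can be made concrete. Below is a hedged sketch comparing a single deterministic MLOps metric with the kind of multi-step trajectory report AgentOps demands; the `StepRecord` structure and the report fields are illustrative assumptions, not a standard:

```python
# Illustrative contrast: one deterministic ML metric vs. a multi-step
# AgentOps trajectory evaluation. All structures here are assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class StepRecord:
    """One tool call in an agent trajectory (hypothetical schema)."""
    tool: str
    succeeded: bool
    cost_usd: float


def ml_accuracy(preds: List[int], labels: List[int]) -> float:
    """Classic MLOps: a single deterministic score."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)


def agent_trajectory_eval(steps: List[StepRecord], task_completed: bool) -> Dict:
    """AgentOps: task completion, per-tool efficacy, and cost attribution."""
    return {
        "task_completed": task_completed,
        "step_success_rate": sum(s.succeeded for s in steps) / len(steps),
        "cost_by_tool": {
            t: sum(s.cost_usd for s in steps if s.tool == t)
            for t in {s.tool for s in steps}
        },
    }


trace = [
    StepRecord("search", True, 0.002),
    StepRecord("code_exec", False, 0.010),  # a failed, retried step
    StepRecord("code_exec", True, 0.010),
]
report = agent_trajectory_eval(trace, task_completed=True)
print(report["step_success_rate"])  # 2/3: the task succeeded, but not cleanly
```

The point of the sketch: an agent can complete its task while individual steps fail and costs pile up, so a single scalar metric hides exactly the information operators need.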
The next wave of winners will connect the dots
These evolving component-level demands point to a new set of focus areas for anyone building or deploying agentic AI. The ability to seamlessly manage the interplay between a variety of models, specialized tools, persistent memory stores, and communication channels (all while making sure your systems are secure and trustworthy) will define the next wave of AI infrastructure companies. This isn't just about having the parts; it's about how they connect, communicate, and collaborate. This pressing need for interoperability and standardized interaction is why protocols like Anthropic's Model Context Protocol (MCP), Google’s A2A, and IBM’s ACP are so vital – they are attempts to create the standardized "nervous system" for agentic ecosystems.
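To illustrate what such a "nervous system" buys you, here is a sketch of a tool declaration in the spirit of MCP's tool listings. The field names (`name`, `description`, `inputSchema`) follow the publicly documented protocol, but treat this as an illustrative sketch rather than a normative example, and the `get_invoice` tool itself is invented:

```python
# Illustrative only: a self-describing tool declaration in the spirit of
# MCP's tool listings. The `get_invoice` tool is a hypothetical example.
import json

tool_declaration = {
    "name": "get_invoice",
    "description": "Fetch an invoice record by ID from the billing system.",
    "inputSchema": {
        "type": "object",
        "properties": {"invoice_id": {"type": "string"}},
        "required": ["invoice_id"],
    },
}

# Any protocol-aware agent can discover this schema at runtime and know
# exactly how to call the tool -- no bespoke per-integration glue code.
print(json.dumps(tool_declaration, indent=2))
```

That discoverability is the whole pitch: instead of N×M custom integrations between agents and tools, each side speaks one shared contract.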
As we navigate this evolving landscape, two realities become clear for any organization looking to harness the power of Agentic AI:
- Data, Trust, and Security Remain Paramount: These principles are amplified in AgentOps. When agents act autonomously, the integrity of their data, security of their environment, and trustworthiness of their decisions are non-negotiable, demanding integration from day one.
- Orchestration and Connectivity are Critical: Agentic AI truly is a "full-stack beast." The emphasis is now on sophisticated orchestration and seamless connectivity. This means creating platforms and adopting practices that allow for the fluid, reliable, and observable management of complex, multi-step agentic workflows across a diverse software landscape.
So, while the headlines might still chase the next "breakthrough" model, the real work of building the robust, interconnected, and trustworthy infrastructure for Agentic AI is where the value will be found. It’s less about the 'brains' in isolation and more about the 'nervous system' that allows AI to act intelligently and safely in our world. The question for every company in this space isn't just whether they'll adopt agents, but whether their underlying infrastructure is truly ready for the revolution.
____
This market doesn't wait, and neither can our analysis. Staying ahead requires understanding the infrastructure powering the revolution, and only those building on the most dynamic and resilient foundations will lead the pack. Stay tuned – the deep dives are coming.
If you or someone you know is building this future, please shoot us a note.
Lilly Vernor (lilly@ensemble.vc)
Gopi Sundaramurthy (gopi@ensemble.vc)
Ian Heinrich (ian@ensemble.vc)
WATCH: Collin West talks next-gen VC on the VC10X podcast with Prashant Choubey
In his recent appearance on the VC10X Podcast, Ensemble founder Collin West laid out the art of data-driven venture and the future of the industry. As we've said many times before, there's a lot more to our approach than just collecting data. Over the last decade, Ensemble has been at the forefront of developing the processes integral to converting data insights into actionable, outbound strategies.
Press Release: Ensemble VC “Unity” launch
Ensemble VC launches Unity, a first-of-its-kind AI platform to derisk venture capital investing by ranking startup teams with hundreds of objective data points. Already in use by Draper Associates, Boost, Overmatch, and Wilson Sonsini, Unity empowers VCs to source, select, and support high-potential startups with precision across sectors like SaaS, AI, defense tech, and space.
Welcome to the team, Hannah Vu!
We’re excited to welcome Hannah Vu to Ensemble VC as a Data Engineer, where she’ll help scale the back-end infrastructure that powers Unity, Ensemble's proprietary data platform.
Rare Earth Elements: The Next Big Thing Is Buried
In a world obsessed with software, the next decacorn might come from a mine. With China controlling 90% of rare earth processing, a new industrial frontier is opening for founders to rebuild the backbone of modern technology—starting with the ground beneath our feet.
Welcome to the team, Aidan Gold!
Ensemble is thrilled to add Aidan Gold to the team. Aidan leads the investment pipeline at the earliest stages of company development.
CHAOS Industries reaches $2B valuation in Series C led by Accel, NEA
CHAOS Industries, a next-generation DefenseTech startup backed by Ensemble at Series B, has raised $275M in a Series C led by Accel and NEA at a $2B valuation, cementing its status as one of Fund II’s breakout successes and validating Ensemble’s data-driven approach to early access in frontier sectors.