In January 2026, Klarna, the Swedish buy-now-pay-later company, published a striking data point in its annual report. Its AI assistant, built on OpenAI technology, was handling the equivalent work of 700 full-time customer service agents. Response times had dropped from eleven minutes to under two minutes. Customer satisfaction scores had held steady. The company had not laid off 700 people overnight; instead, it had stopped backfilling roles as people left and redirected existing staff toward higher-complexity work. But the math was in the report for anyone to read: one AI deployment, the productivity equivalent of 700 humans.
Klarna was not unique. It was just unusually candid. Across industries — insurance, banking, software development, legal, healthcare administration — the same shift is happening at varying speeds and with varying degrees of transparency. AI agents are being given access to systems, assigned responsibilities, and evaluated on performance metrics. They are not exactly employees. But they are not exactly tools either. They occupy a new category that organizations are still figuring out how to think about, manage, and integrate.
What These AI Employees Actually Do
The range of tasks being handled by AI agents in production environments in 2026 is broader than most public coverage suggests. Customer-facing agents handle the full spectrum of tier-one support (account inquiries, order tracking, subscription changes, complaint intake, refund processing), with escalation protocols that route a conversation to a human agent when the model's confidence falls below threshold or when the situation involves exceptional circumstances. In insurance, agents are processing claims that fall within standard parameters: verifying documentation, calculating payouts, initiating payments. In software development, coding agents are writing feature implementations, reviewing pull requests, generating test suites, and maintaining documentation.
The common thread is not that these tasks are simple — many of them are not. It is that they are bounded. The agent operates within a defined domain with defined tools and a defined escalation path. It is not making strategic decisions or navigating novel ethical terrain. It is doing the cognitively demanding but procedurally understood work that used to require a junior professional.
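What "bounded" looks like in practice is easiest to see as code. The sketch below is a minimal illustration of the routing logic such deployments describe, assuming a self-reported confidence score and a fixed set of supported intents; every name, threshold, and function here is hypothetical, not any vendor's actual API.

```python
# Illustrative sketch of a bounded-agent routing decision.
# All names, thresholds, and functions are hypothetical assumptions,
# not any vendor's actual implementation.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85   # below this, hand off to a human
SUPPORTED_INTENTS = {         # the "defined domain"
    "account_inquiry", "order_tracking",
    "subscription_change", "refund_request",
}

@dataclass
class AgentDecision:
    intent: str        # what the model thinks the customer wants
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    response: str      # drafted reply, used only if we don't escalate

def escalate_to_human(decision: AgentDecision, reason: str) -> str:
    # In a real deployment this would open a ticket in the human queue;
    # here it just returns a marker string.
    return f"[escalated to human agent: {reason}]"

def route(decision: AgentDecision) -> str:
    """Reply within the bounded domain; otherwise hand off to a person."""
    if decision.intent not in SUPPORTED_INTENTS:
        return escalate_to_human(decision, reason="out_of_domain")
    if decision.confidence < CONFIDENCE_THRESHOLD:
        return escalate_to_human(decision, reason="low_confidence")
    return decision.response

print(route(AgentDecision("order_tracking", 0.93, "Your order ships Friday.")))
# -> Your order ships Friday.
print(route(AgentDecision("legal_threat", 0.97, "...")))
# -> [escalated to human agent: out_of_domain]
```

The design point is that escalation is a first-class code path rather than an afterthought: anything outside the defined domain, or below the confidence bar, goes to a person by default.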
This distinction matters enormously. The threat is not that AI will replace senior professionals who spend their time on judgment-intensive, relationship-dependent, strategically complex work. The threat (or the opportunity, depending on your perspective) is in the middle layers: the analysts, the junior associates, the support specialists, the coordinators who spend the majority of their hours on tasks that are skilled but procedurally defined.
The Companies Building This Infrastructure
A generation of startups has emerged specifically to sell AI workforce infrastructure to enterprises. Sierra AI, backed by significant venture capital and founded by ex-Google and Salesforce veterans, sells AI agent platforms specifically designed for customer-facing deployments with the reliability and compliance guarantees enterprise buyers require. Cognition AI, maker of the coding agent Devin, is targeting the software development market. Cohere, Adept, and several dozen other companies are building industry-specific agent deployments for legal, financial, and healthcare workflows.
The hyperscalers are also moving aggressively. Microsoft Copilot, integrated across the Microsoft 365 suite, has become the most widely deployed AI agent platform in enterprise simply through distribution. Google Workspace is following the same playbook. Salesforce Einstein has been rebranded and rebuilt around agentic capabilities. The agent layer is becoming the new battleground for enterprise software, replacing the platform wars of the previous decade.
The Management Challenge Nobody Prepared For
Deploying AI agents at scale creates management problems that have no established playbook. When an AI agent makes an error (and errors do happen), the accountability structure is genuinely unclear. Where does responsibility sit: with the vendor? With the team that deployed and configured the agent? With the manager who approved the deployment? The organization is responsible for the outcomes of its AI agents in the same way it is responsible for the outcomes of its human employees, but the mechanisms for oversight, correction, and accountability are not yet well developed.
Performance management is equally novel. AI agents do not have annual reviews. They do not respond to feedback the way humans do. Improving an agent that is underperforming requires retraining, reconfiguration, or switching vendors — none of which maps cleanly onto the organizational processes built for human workforce management. HR departments are being asked to weigh in on decisions that are more engineering than people management, and engineering teams are being asked to think about workforce implications that go beyond their traditional scope.
The early adopters navigating this well share a common approach: they treat AI agent deployment as a product launch rather than a hiring decision. They define success metrics before deployment, build monitoring infrastructure that surfaces errors and edge cases, establish clear escalation paths to human oversight, and iterate on configuration based on performance data. The organizations that treat agent deployment as a set-and-forget automation tend to have significantly worse outcomes.
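Concretely, the product-launch framing tends to reduce to a handful of success metrics checked continuously against thresholds committed to before launch. A minimal sketch of what such a check might look like, with all metric names and thresholds assumed for illustration:

```python
# Hypothetical monitoring check for an AI agent deployment.
# Metric names, field names, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DeploymentMetrics:
    total_interactions: int
    escalations: int    # cases handed to humans
    known_errors: int   # wrong answers caught by QA sampling
    csat: float         # customer satisfaction, 0.0 to 1.0

# Success criteria defined *before* launch, per the product-launch framing.
MAX_ESCALATION_RATE = 0.30
MAX_ERROR_RATE = 0.02
MIN_CSAT = 0.80

def evaluate(m: DeploymentMetrics) -> list[str]:
    """Return a list of alerts; an empty list means the deployment is within spec."""
    alerts = []
    escalation_rate = m.escalations / m.total_interactions
    error_rate = m.known_errors / m.total_interactions
    if escalation_rate > MAX_ESCALATION_RATE:
        alerts.append(f"escalation rate {escalation_rate:.1%} above target")
    if error_rate > MAX_ERROR_RATE:
        alerts.append(f"error rate {error_rate:.1%} above target")
    if m.csat < MIN_CSAT:
        alerts.append(f"CSAT {m.csat:.0%} below target")
    return alerts

# Example: a week of production traffic.
week = DeploymentMetrics(total_interactions=10_000, escalations=3_400,
                         known_errors=150, csat=0.78)
for alert in evaluate(week):
    print("ALERT:", alert)
```

An agent that breaches these thresholds triggers the iterate step, reconfiguration, retraining, or a vendor conversation, rather than quiet drift.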
The Human Side of the Equation
The people most immediately affected by AI agent deployment are not always the ones whose jobs are being replaced — often they are the ones left doing the work that AI cannot yet handle. When an AI agent resolves the straightforward cases and escalates the complex ones, the human agents who remain are handling a higher concentration of difficult, emotionally demanding, edge-case situations. The work does not get easier when AI handles the easy cases — it gets harder, because what remains is everything the AI could not manage.
This dynamic is showing up in burnout data from early large-scale deployments. Support teams that had their headcount reduced after AI deployment, with remaining staff absorbing the escalated complex cases, are reporting higher stress and lower job satisfaction — not because AI is doing their job badly, but because the nature of the remaining human work has shifted significantly toward its most demanding components.
The organizations getting this right are actively redesigning the human role around AI rather than simply removing headcount and hoping the remaining humans adjust. They are investing in upskilling, redefining roles around judgment and relationship management, and being honest with their teams about what the transition looks like and what the organization's commitments are to the people going through it. That combination of transparency and investment is not universal. But where it exists, the evidence suggests it produces better outcomes: for the organization and for the people.
What Hiring Looks Like Now
The most telling indicator of how AI agents are reshaping work is what organizations are hiring for. Entry-level analyst and coordinator roles that were standard pipeline positions for new graduates are appearing less frequently in job postings at technology-forward companies. What is growing is demand for people who can define, configure, evaluate, and improve AI agent deployments — a skill set that requires both domain expertise and technical comfort. The person who can bridge the gap between what a business needs and what an AI system can do is, in 2026, one of the most sought-after professionals across industries.
This does not mean the workforce is fine and no one needs to worry. The transition is real and the displacement is real. But the shape of that displacement — who it affects, on what timeline, and what the alternative opportunities look like — is not predetermined. It is being actively shaped by the decisions organizations and policymakers make right now about investment, training, safety nets, and what kind of future they are trying to build.
