By John Moore on February 20, 2026

Executive Summary: In 2025, the conversation around AI in the workforce has fundamentally shifted. The question is no longer if AI will replace human workers, but how organizations can effectively augment human capabilities with autonomous agents. Recent surveys of HR leaders reveal that while nearly 90 percent express optimism about AI's potential, only about 60 percent have moved beyond pilot phases to active implementation. Security concerns have more than tripled as organizations gain hands-on experience. Meanwhile, a critical "imagination deficit" threatens to undermine these investments: the vast majority of business leaders recognize the need to keep human capabilities in step with technological change, yet almost none report making meaningful progress. At the same time, the EU AI Act introduces binding compliance obligations with penalties reaching into the tens of millions of euros. The key finding: AI agents will not replace humans at scale, but humans working effectively alongside AI will replace those who don't.

1. The CHRO reality: cautious optimism meets governance

A pulse survey of hundreds of chief human resources officers conducted in mid-2024 revealed a striking paradox. While nearly nine out of ten HR leaders expressed a positive outlook on AI's potential, their adoption pace told a more nuanced story. Three-quarters of C-level executives reported they'd begun their AI adoption journey, but only about three out of five CHROs were piloting AI projects or implementing AI in business processes. This gap reflects not resistance, but responsible stewardship.

One CHRO captured the sentiment perfectly: "I'm extremely optimistic about the impact AI will have on our business and how we execute on our talent and engagement strategy." Yet security concerns have intensified dramatically. The share of HR leaders worried about deploying AI securely has more than tripled compared to the previous year. This increase suggests that as organizations test AI more extensively, they develop a healthier respect for its risks.

CHRO Sentiment on AI Adoption (2024):

  • Nearly 90% have a positive outlook on AI's potential
  • About 60% are piloting or implementing AI projects
  • Almost 75% are concerned about secure AI deployment (more than triple the prior year)
  • Nearly 60% believe their organizational design isn't flexible enough for AI

The message from HR leaders is consistent: proceed with enthusiasm, but also with guardrails. As one CHRO noted, "We are proceeding carefully in close partnership with our legal and compliance teams, as we want to ensure we are examining the potential risks of each AI use case."

2. The imagination deficit: AI's hidden bottleneck

A major global study of business and HR leaders across nearly 100 countries identified a critical vulnerability that researchers dubbed the "imagination deficit." As generative AI becomes ubiquitous, organizations are struggling to envision new ways of working that harness the combined strengths of humans and machines.

The statistics are sobering. While a substantial majority of respondents say it's important to ensure human capabilities keep pace with technological innovation, only a tiny fraction report making meaningful progress. This readiness gap—one of the widest measured in years—suggests that most organizations are ill-equipped to navigate the transition.

"AI cannot replicate the curiosity and empathy that fuel imagination and lead to creative invention. This involves the drive to explore, to craft narratives, and to team—work that requires thinking like a researcher and asking the right questions as much as delivering on preprogrammed objectives."

Four signs indicate your organization may be facing an imagination deficit:

  • Recognition without direction: Workers and leaders know they need to reimagine work but don't know where to start
  • Soft skills signaling: Hiring managers increasingly seek curiosity, collaboration, and social intelligence
  • Acquisition dependency: The organization relies on hiring or acquisitions to inject fresh thinking
  • Entry-level contraction: Noticeable decreases in entry-level job openings within the ecosystem

Addressing this deficit requires deliberately cultivating human capabilities that AI cannot replicate: curiosity and empathy, informed agility, resilience, connected teaming, divergent thinking, and social intelligence. Organizations that prioritize these capabilities will be better positioned to harness AI's potential while maintaining their competitive edge.

The Interconnectd discussion on human-driven AI explores how organizations are bridging the imagination gap through practical experimentation.

3. From replacement to augmentation: the evidence

Perhaps the most important correction to public discourse comes from detailed workforce analysis: AI is not replacing HR workers at scale, and it won't. At a major HR technology conference in late 2024, analysts presented data showing that even in companies where chatbots handle most routine tasks, headcount reductions would be less than five percent. In fact, HR departments might see headcounts increase as they take on responsibility for managing the bots that occasionally misbehave.

This reframes the entire conversation. The challenge isn't managing displacement—it's managing supervision. As AI agents become more capable, organizations will need new roles: AI behavior specialists, agent supervisors, and human-AI workflow designers.

Looking ahead, a significant portion of new software applications will be automatically generated by AI without direct human involvement. This creates both opportunity and risk. The same research indicates that overreliance on generative AI could weaken critical thinking skills and produce lower-quality outputs. Consequently, the vast majority of organizations using AI technology will set aside dedicated budgets for information validation.

Forward-looking projections:

  • One-quarter of new software will be AI-generated without human involvement within two years
  • Four out of five organizations will budget for information validation by 2027
  • Nearly all job candidates will heavily use AI to generate profiles within three years
  • More than one-quarter of those profiles may contain fabricated elements

The last point is particularly significant. As AI-generated applications become indistinguishable from human-written ones, employers will invest heavily in verification technologies. This creates a fascinating dynamic: AI generates content, and AI verifies it, with humans overseeing both processes.
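
This generate-verify-oversee loop can be sketched in a few lines. Everything below is illustrative, not an implementation from the source: `generate` and `verify` are stand-ins for an AI generator and an AI fact-checker, and the trusted-records set is a hypothetical stand-in for whatever verification data an employer maintains.

```python
# Sketch of the generate-verify-oversee loop: AI generates content, AI checks
# its claims, and anything that cannot be confirmed escalates to a human.
# All function names and data shapes here are hypothetical.

def generate(prompt: str) -> dict:
    # Stand-in for an AI generator that returns text plus the factual
    # claims it made along the way.
    return {"text": f"Draft for: {prompt}", "claims": ["5 years at Acme"]}

def verify(claims: list[str], records: set[str]) -> list[str]:
    # Stand-in for an AI verifier: return every claim that is absent
    # from the trusted record set.
    return [c for c in claims if c not in records]

def process(prompt: str, records: set[str]) -> str:
    draft = generate(prompt)
    unverified = verify(draft["claims"], records)
    # Human oversight sits over both sides of the loop: any claim the
    # verifier cannot confirm is routed to a reviewer instead of auto-passing.
    return "needs_human_review" if unverified else "auto_approved"
```

The design point is the routing rule at the end: automation handles the agreeable cases, and disagreement between generator and verifier is exactly where the human supervisor enters.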

4. The EU AI Act: compliance as competitive advantage

The European Union's Artificial Intelligence Act, which entered into force in mid-2024, represents the world's first comprehensive AI regulation. Its extraterritorial reach means that any organization deploying AI systems that affect EU residents—regardless of where the company is headquartered—must comply.

The Act's risk-based approach creates a clear compliance framework:

  • Unacceptable risk: Social scoring, manipulative AI (banned outright)
  • High risk: Employment, education, critical infrastructure (strict requirements for conformity assessments, risk management, and human oversight)
  • Limited risk: Chatbots, emotion recognition (transparency obligations)
  • Minimal risk: Most other applications (no additional requirements)

For HR leaders, the implications are immediate. AI systems used for recruitment, employee management, and promotion decisions are classified as "high-risk." This means organizations must conduct conformity assessments, implement risk management systems, ensure data governance, maintain technical documentation, and enable meaningful human oversight.
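
The Act's tiering lends itself to a simple lookup. The sketch below is illustrative only, not a legal taxonomy: the category keys and the default-to-minimal rule are assumptions for the example, and classifying a real system requires legal review.

```python
# Illustrative mapping of the EU AI Act's four risk tiers to obligations.
# Category names are assumed for this sketch; they are not statutory terms.

RISK_TIERS = {
    "social_scoring": "unacceptable",
    "manipulative_ai": "unacceptable",
    "recruitment": "high",
    "employee_management": "high",
    "promotion_decisions": "high",
    "education": "high",
    "critical_infrastructure": "high",
    "chatbot": "limited",
    "emotion_recognition": "limited",
}

OBLIGATIONS = {
    "unacceptable": ["banned outright"],
    "high": [
        "conformity assessment",
        "risk management system",
        "data governance",
        "technical documentation",
        "human oversight",
    ],
    "limited": ["transparency obligations"],
    "minimal": ["no additional requirements"],
}

def classify(use_case: str) -> tuple[str, list[str]]:
    """Return the assumed risk tier and obligations for an AI use case."""
    # Unlisted use cases fall through to minimal risk in this sketch;
    # in practice, tier assignment is a legal determination.
    tier = RISK_TIERS.get(use_case, "minimal")
    return tier, OBLIGATIONS[tier]
```

Even as a toy, the lookup makes the HR point concrete: every recruitment or promotion use case lands in the high-risk tier and carries the full obligation list.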

The penalties are substantial: violations can reach into the tens of millions of euros or a significant percentage of global annual turnover. Even providing incorrect information to regulators can result in fines in the millions.

The Agentic AI discussion on Interconnectd includes real-world examples of companies navigating these compliance requirements.

A survey of companies conducted in late 2024 found that the vast majority are already using AI systems, and an even larger share acknowledge that more AI knowledge and training is needed. Critically, AI literacy requirements became binding in early 2025, obligating employers to ensure their staff have sufficient AI knowledge to operate systems safely and competently.

This creates both a compliance burden and an opportunity. Organizations that invest in AI literacy and governance will not only avoid penalties but build trust with employees, customers, and regulators.

5. Organizational redesign: the CHRO's mandate

Research from 2024 reveals that most CEOs plan to use AI to maintain or increase revenue. Yet organizational design poses a significant barrier. A majority of CHROs believe their organizational design isn't flexible enough, and a substantial portion say it actively hinders employee productivity. Only a minority of CHROs are confident they can deliver on their organizational design goals in the near term.

Forward-thinking CHROs are responding with a two-phase approach:

Near term: Minimize existing barriers

  • Design human-AI workflows: Use friction points as catalysts for process transformation, creating adaptable workflows with clear collaboration guardrails
  • Embrace intentional friction: Build pause points where employees can scrutinize AI-generated work, reducing errors and unwanted friction later
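
The "intentional friction" idea above can be sketched as a review gate that holds AI output until a named human approves or rejects it. This is a minimal sketch under assumed names; a real system would persist state and emit audit events.

```python
# Minimal "intentional friction" review gate: AI-generated work enters a
# pending queue and only moves on after an explicit, attributed human decision.
# Class and field names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ReviewGate:
    pending: list = field(default_factory=list)
    approved: list = field(default_factory=list)
    rejected: list = field(default_factory=list)

    def submit(self, item: str) -> None:
        # AI output pauses here instead of flowing straight downstream.
        self.pending.append(item)

    def review(self, item: str, ok: bool, reviewer: str) -> None:
        # A named human decides; recording the reviewer keeps the
        # decision auditable later.
        self.pending.remove(item)
        (self.approved if ok else self.rejected).append((item, reviewer))

gate = ReviewGate()
gate.submit("AI-drafted job description")
gate.review("AI-drafted job description", ok=True, reviewer="hr_lead")
```

The pause point is deliberate: the small cost of queueing is what creates the moment for employees to scrutinize AI-generated work before it ships.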

Long term: New structures for agility

  • Flatten hierarchies thoughtfully: When technology reduces talent demand, streamline hierarchies while focusing on reskilling and redeployment
  • Pilot fusion teams: Multidisciplinary teams where business and technology experts work together, sharing accountability for outcomes
  • Enable self-nominated rotations: Allow employees to choose short-term assignments that build digital skills and cross-functional experience

One organization implemented a draft system where employees self-nominated for team rotations, allowing them to learn digital skills they wouldn't have developed in their existing roles. This approach builds organizational agility while signaling commitment to employee development.

The Creative AI thread shows how fusion teams are already working across disciplines to imagine new applications.

6. The talent implications for 2026 and beyond

Synthesizing the available evidence from workforce studies and regulatory frameworks, several clear implications emerge for talent strategy:

First, AI management becomes a core competency. Every manager will need skills in supervising synthetic workers—giving feedback, setting goals, auditing work, and intervening when agents fail. This requires new training programs and performance frameworks.

Second, the skills gap shifts from technical to supervisory. Instead of "prompt engineering," the demand will be for people who can train, supervise, and collaborate with AI agents. Job posting data bears this out: postings for roles like "AI supervisor" and "agentic workflow manager" have increased dramatically.

Third, verification becomes a distinct function. With the near-certainty that most candidates will use AI to generate application materials—and a significant portion of those materials containing fabrications—employers will invest heavily in validation technologies. This creates new roles focused on information integrity.

Fourth, entry-level pathways will transform. The contraction in entry-level roles noted by workforce researchers suggests that organizations must rethink how junior employees develop skills. Structured rotations, fusion teams, and redesigned apprenticeships may replace traditional entry-level career ladders.

7. Open questions and the path forward

Despite the wealth of available data, critical questions remain unresolved:

  • How do organizations measure and reward human capabilities like curiosity and empathy at scale?
  • What governance structures ensure AI agents remain aligned with organizational values?
  • How should performance management evolve when humans and AI collaborate on every task?
  • Who bears liability when an AI agent causes harm—the vendor, the deployer, or the supervisor?
  • How can unions and worker representatives participate in shaping AI deployment?

The EU AI Act provides a framework but leaves many implementation details to organizations. The organizations that thrive will be those that treat these questions not as obstacles but as design opportunities.

The AI for solopreneurs thread offers practical examples of how smaller operations are navigating these same challenges with fewer resources.

Parting thought

The AI talent war isn't about humans versus machines. It's about organizations that cultivate imagination, curiosity, and human judgment competing against those that don't. The evidence is consistent: AI augments; it doesn't replace. But augmentation requires intentional design, ongoing investment in human capabilities, and governance structures that build trust.

As one major study concluded, "To harness the extraordinary potential of this moment, organizations and workers alike should counter their fear with curiosity and imagination." The organizations that embrace this challenge will define the future of work. Those that don't will be defined by it.

— AI Talent Research Group, March 2025

For historical context, the Brief History of Thinking Machines traces how we arrived at this inflection point.

Tags: #AI, #HRTech, #FutureOfWork, #TalentManagement, #DigitalTransformation, #HRAI, #WorkforcePlanning, #Leadership
