In the Messy Middle: Why AI’s “Messy Transition” Demands More Than Adoption Metrics

Image Credit: Visual generated by Google’s Generative AI.

Yesterday, March 31, I spent the day at Convergence AI Dallas, one of the largest gatherings on applied AI in North Texas. The conference is exactly what its name promises: not hype, but a room full of hard-won wisdom. Everyone there is learning how to scale AI thoughtfully: to improve lives, to reinvent business models, and to strengthen communities.

What stuck with me most wasn’t the flashy demos. It was the quiet honesty about where we actually are.

Laura Ullrich from Indeed put it perfectly: “We are in a messy transition.” AI is creating opportunity and causing disruption at the same time. New roles are appearing. Old ones are shifting. People are managing agents alongside their regular work while quietly wondering what their job will look like in eighteen months. That tension is the daily reality for workers right now.

Tosan Ojeahere from Thomson Reuters drove the point home even harder: adoption metrics are not enough. Just because tools are in people's hands doesn't mean value is flowing. She called it out directly, and I found myself nodding hard. It mirrors exactly what Deloitte's State of AI in the Enterprise 2026 report found: worker access to sanctioned AI tools jumped 50% in a single year, yet only a quarter of organizations have moved more than 40% of their experiments into production. In other words, most companies are still optimizing what already exists rather than reimagining what's possible. Maybe that's okay for now; we're all still learning, and starting with the familiar is a productive first step.

Then the governance conversation got real. Chris Gustafson from Okta and Jeff Wang from Cognition highlighted how Shadow IT has leveled up. Anyone can spin up an agent now. No ticket. No review. Governance and security have never been harder. The old "we'll just approve the tools" model is dead. We're in the age of the digital workforce, and the perimeter is gone.

Anne Maroni from American Airlines brought the operational truth. Their planning world used to live in silos: network planning over here, operations over there. Now they're moving to continuous planning. She explicitly called out the need to "redo the planning ontology." Agile hasn't died; it's been supercharged. Continuous planning becomes even more essential as agent-augmented work changes how decisions actually get made.

Dominic Manning from Palantir made the distinction I keep coming back to: dashboards versus decision systems. A dashboard shows you the problem. A decision system lets you act on it—right there, in the flow of work, with attribution and feedback loops. That’s the difference between “we have AI” and “AI is changing how work gets done.”  

In the Physical AI session, the question landed like a hammer: Does the human intuitively trust the robot? The answer has to be an emphatic yes. Insiqa Lokhandwala called it “embodied empathy.” Without that trust, we stay stuck in pilot purgatory—that place where endless experiments never scale. 

And then there was the bigger cultural question, from Dean Beall of the Foundation for American Innovation: Dynamism. America is 250 years old. Are we still capable of embracing the new while letting go of the old? That one lingered. 

Mark Cuban closed the day with his trademark candor. He compared today's AI to a "drunk intern": brilliant, tireless, and always available, yet still erratic and badly in need of adult supervision. At first he sounded sharply critical. Yet as he shared concrete examples of how he's already putting the technology to work across his own businesses, the real message landed: this is an extraordinarily powerful tool, and those who choose not to embrace it risk being left far behind.

Stepping Back: My Reflections

The metrics that actually matter, several speakers noted, aren't just ROI on a slide. They're workforce sentiment and confidence in AI output. How do people feel about the tools? Do they trust the output enough to act on it? Those are the leading indicators of whether this transition becomes a productive partnership or quiet resistance.

All of this lands squarely in the middle of what I'm researching at Purdue: how organizations invest (or don't) in their people shapes whether employees experience AI as a threat or a partner. The social contract is still in play. My sense is that when we wrap experts in AI tools instead of replacing them, fear turns to FOMO.

The day was tremendous. Valuable. A reminder that we’re not at the beginning or the end—we’re in the messy middle. And that’s exactly where the real work (and the real opportunity) lives.

What’s one thing you’re seeing in your own organization right now that feels like part of this messy transition? I’d love to hear it. 💚
