
On March 30, I attended Day One of Convergence AI Dallas — and I’ll be back for Day Two tomorrow — spending the morning with people who are doing the real, operational work of AI, not just talking about it.
What I heard wasn’t the usual hype cycle. The vibe was quieter, more grounded, and carried real urgency. The through-line? AI won’t deliver its promised value unless we stop treating it like a faster way to do yesterday’s work and start treating it like a partner that requires new structures, new habits, and — most importantly — trust that must be deliberately designed into the system. With agentic AI and autonomous agents operating at scale, trust can no longer be taken for granted; it has to be architected through governance, traceability, and guardrails from day one.
Here’s what stuck with me.
Opening with Khan Academy’s learning vision
Khan Academy’s Chief Learning Officer, Dr. Kristen DiCerbo, is clearly passionate about delivering meaningful education to young people. The guiding principle behind the work she and her team are driving is to use AI in ways that support learning rather than circumvent the learning process itself.
To make this principle concrete, Dr. DiCerbo shared a common classroom scenario: teachers want to create lesson plans faster. Her sharp reframe was that the real question isn’t “How do we save time?” but “What problem are we actually trying to solve beyond saving time?” Teachers really want ways to make better instructional decisions, not just faster ones.
At Khan Academy, the focus is on developing AI agents that don’t just consume knowledge — they apply it. That distinction feels foundational. Because if the goal is productive struggle (the kind that builds real capability), then our job is to design for safety, architect for trust, and resist the temptation to automate away the very friction that creates learning.
I kept thinking about my doctoral research at Purdue University and my work as a change management consultant guiding organizations through operating model transformations. When organizations invest in their people — not just tools — employees experience AI as a partner instead of a threat. This session felt like living proof of that principle in education.
A Texas AI moment over lunch
I grabbed lunch and ended up sitting next to Shannon Belew. She casually mentioned she’d just launched a podcast — IntoTexas AI. I listened to the inaugural episode on the drive home (just 10 minutes, and excellent). Her closing line from that first episode is now living in my head:
“Texas isn’t just participating in the AI economy. It’s becoming where the AI economy runs.”
(The episode dives into the Abilene data center shifts with Oracle, OpenAI, and Microsoft. Go Texas!)
“The autonomous workforce as the next frontier” — and scaling beyond the POC
Dalia Powers and Kalyana Bedhu co-presented the session AI Advantage Beyond the POC. The session description captured it perfectly:
“Many AI initiatives stall after the pilot phase—this session focuses on moving past that barrier. Learn the strategies, operational shifts, and measurement frameworks needed to scale AI from proof-of-concept to production impact.”
Dalia called the autonomous workforce “the next frontier.” But agents aren’t magic. They need:
- clear goals
- context and memory
- connection to tools
- a degree of autonomy
Scaffolding is everything — and that scaffolding includes governance tools and processes from day one.
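To make that concrete, here’s a minimal sketch of those four ingredients as a data shape. This is my own illustration, not anything the speakers showed, and every name in it is hypothetical:

```python
from dataclasses import dataclass, field

# A hypothetical shape for the scaffolding described above: a goal,
# context/memory, tool connections, and a bounded degree of autonomy.
# Not tied to any specific agent framework.

@dataclass
class AgentSpec:
    goal: str                                        # a clear, measurable objective
    context: dict = field(default_factory=dict)      # memory/state carried between steps
    tools: list[str] = field(default_factory=list)   # registered tools the agent may call
    max_autonomous_steps: int = 5                    # autonomy, bounded by a guardrail

# Illustrative usage -- the tool names and numbers are made up.
invoice_agent = AgentSpec(
    goal="Match incoming invoices to purchase orders and flag exceptions",
    context={"fiscal_year": 2025},
    tools=["erp_lookup", "email_draft"],
    max_autonomous_steps=3,  # beyond this, escalate to a human
)
```

The bound on autonomous steps is the same point the session kept making: autonomy is something you grant deliberately, not a default.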
They talked about model drift detection, traceability, model gardens, ownership of the outputs, and evaluating those outputs. They also emphasized the need for a solid risk framework. Governance isn’t a checkbox. It’s the foundation.
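Drift detection is the easiest of those to make tangible. One common implementation (my example, not something from the session) is the Population Stability Index, which compares the feature distribution a model was trained on against what production traffic looks like now:

```python
import numpy as np

# Population Stability Index (PSI): a standard drift score between a
# baseline (training-time) sample and a live (production) sample.

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in sparse bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

baseline = np.random.normal(0.0, 1.0, 10_000)  # stand-in for training data
live = np.random.normal(0.3, 1.0, 10_000)      # stand-in for shifted production traffic
score = psi(baseline, live)
if score > 0.2:  # > 0.2 is a common rule-of-thumb threshold for significant drift
    print(f"Drift detected (PSI={score:.3f}); route to the model owner")
```

Traceability is what makes a score like that actionable: someone owns the model, and the alert has somewhere to go.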
Throughout the session they shared important practices for building a guardrail culture:
- Reward iteration.
- Measure the value of the model/agent by actual business outcomes, not just activity.
- Write MDM files as policy documents.
- Build tool registries and agent registries (see the sketch after this list).
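Here’s a rough sketch of what a single registry entry might record. The schema is mine and purely illustrative, but it shows why registries matter: they tie the other practices together by giving every tool and agent a named owner, a governing policy document, and an outcome it is measured against:

```python
from dataclasses import dataclass

# An illustrative registry entry -- not a standard schema. The fields map
# to the guardrail practices above: ownership, policy, evaluation, outcomes.

@dataclass
class RegistryEntry:
    name: str             # e.g. "erp_lookup" or "invoice_agent"
    kind: str             # "tool" or "agent"
    owner: str            # the accountable human or team
    policy_doc: str       # link to the governing policy document
    last_eval: str        # date the outputs were last evaluated
    business_metric: str  # the outcome it is measured by, not activity

registry = {
    entry.name: entry
    for entry in [
        RegistryEntry("erp_lookup", "tool", "finance-platform team",
                      "policies/erp_lookup.md", "2025-03-15", "days to close"),
        RegistryEntry("invoice_agent", "agent", "AP operations team",
                      "policies/invoice_agent.md", "2025-03-20", "exception rate"),
    ]
}
```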
All of it circles back to one word: trust.
I left Day One with three questions I can’t stop turning over:
- If AI agents are going to apply knowledge (not just consume it), how do we make sure the humans they work alongside are still growing — not just managing outputs?
- What does an Agentic OODA Loop actually look like in practice — how do humans and AI learn to observe, orient, decide, and act together at the speed DFW is moving right now?
- When Texas is becoming the place “where the AI economy runs,” are we building the social contract with the workforce at the same pace we’re building the data centers?
If you’re leading an AI initiative, living through one, or just trying to separate signal from noise, drop a comment.
Let’s figure it out together. 💚