How Do You Know That?

I was at a roundtable recently when a leader said, with complete confidence, that AI would do two things: reduce headcount and lower pay. Because “AI could do it better.”

I wanted to ask: How do you know that?

Not “do you believe it” or “does it seem logical.” But how do you know that AI doing work independently produces better outcomes than humans using AI as a thinking partner? Where is the evidence that this leads to higher-quality decisions, better innovation, or work that actually holds up over time?

I didn’t ask. The moment passed, but the question stayed with me.


We Are Making Massive Decisions on Thin Evidence

Organizations are restructuring teams, rewriting job descriptions, and renegotiating what work is worth, based largely on what we think AI can do. Not on rigorous evidence of how people and AI actually work together over time, under real conditions, with real stakes.

That’s not a critique of AI. It’s a critique of how we’re making decisions about it.

I’ve spent a decade inside operating model transformations. One pattern holds across every one of them: the assumptions made at the beginning rarely survive contact with reality. Not because leaders are wrong, but because change is complex, many factors are in play, and there is real pressure to show results that tie to saving or making money. That pressure makes it easy to skip past the slower, fuzzier questions. Questions like: how do we actually know this?

We are at the beginning of a change as significant as the computer or the internet. It will be with us for a long time. Yet we’re skipping straight to the cost-reduction conversation without pausing to ask whether we’re even measuring the right things.


The Question Nobody Is Pausing On

I’m hearing more and more from organizations’ top performers that they’re now 10x more productive with AI. I believe them! And then I ask: productive at what? For whom? And what happened to the people who weren’t in the top tier?

Productivity is a ratio: output over input. But organizations are currently measuring output while ignoring a significant part of the input: the human cost of rapid, under-supported technological change. The cognitive load. The anxiety. The quiet recalibration happening in every team meeting where someone wonders if their judgment still matters.

A 2025 Reuters/Ipsos poll found that 71% of respondents were concerned that AI would be “putting too many people out of work permanently.” I call this FOJO: Fear of Job Obsolescence. And it’s not the technology itself that’s driving it. It’s the way AI is being introduced, leaving people unsure of where they fit and whether their experience still matters.

That’s a signal worth examining before we declare the productivity gains a success.


What I’m Studying — And Why It Matters Here

I’m a doctoral researcher at Purdue University investigating how investing in workforce development shapes employee responses to AI adoption. Put simply: does it make a difference if organizations help their people learn with AI, and if so, how much?

I don’t have the answers yet. The research is still in progress. But I keep noticing a pattern: organizations are rolling out AI tools faster than they’re building the skills people need to use them well.

The roundtable leader was right to focus on efficiency, as it’s a real concern for organizations. What concerned me was the certainty and the lack of curiosity about what we still don’t know. That curiosity is what this blog is for.


The Key Question

Next time someone at work confidently declares what AI will do, whether cutting costs, improving quality, or replacing a role, try asking the quieter question behind it:

How do we know that?

Ask not to challenge, but out of genuine curiosity.


Let’s figure it out together. 💚
