Pricing strategy isn’t my world at all. But when I came across “How to Use Generative AI for Pricing” in MIT Technology Review, I couldn’t put it down. Not because of the pricing part, but because of what it quietly reveals about where humans still matter.
Here’s the short version of what researchers found: when they ran experiments using large language models (LLMs) for pricing decisions, the quality of the prompt mattered enormously. Better prompts produced better outputs.[1] So far, so expected. But here’s where it gets interesting.
The better the AI output, the more it demanded of the human reviewing it.
Let that sit for a second or re-read that last sentence.
More Capability, More Responsibility
The article puts it plainly: the technology can help users validate their intuitions and explore market positioning, but they must remain vigilant about unintended discrimination patterns.
Vigilant. That’s a word that requires a thinking, experienced human behind it. Not a checkbox. Not a quick scan. Genuine critical analysis.
What the researchers are describing isn’t a world where AI does the work and humans rubber stamp it. It’s a world where AI handles the computational heavy lifting, and humans are responsible for the harder, fuzzier work of judgment. Catching bias. Questioning assumptions. Deciding what the output actually means in context.
In other words, the analysis bar didn’t lower. It shifted.
A Useful Complement, Not a Replacement
One line from the article stuck with me: LLMs can serve as a useful complement to traditional approaches, with impact largely relying on human input.
A useful complement. I love that framing. Not a takeover. Not a revolution. A complement, like a good thinking partner who is exceptionally fast at certain things and completely dependent on you for others.
This is where I think the conversation about AI in the workplace gets muddled. We jump between two extremes: AI will do everything, or AI is just a fancy autocomplete. The research suggests something more nuanced. AI is genuinely capable, and its capability is directly shaped by the quality of human engagement with it.
What This Means for FOBO
Fear of Becoming Obsolete (FOBO) is real, and it’s widespread. A 2025 Reuters poll found that 71% of workers are concerned AI will put too many people out of work permanently.
I’m not going to tell you FOBO is irrational. Organizations are making decisions right now that affect real people’s jobs. That fear is well founded.
But here’s what I took from this research: the story isn’t as simple as “AI replaces human judgment.” What this study suggests is that AI actually surfaces the need for better human judgment. The people who understand how these models work, who know how to prompt well and interrogate outputs critically, are not being replaced. They’re becoming more valuable.
Knowing how to engage critically with AI (that is, prompting well, interrogating outputs, and catching what the model misses) is a highly transferable skill!
The Learning Takeaway
I’m still learning how LLMs are built and how they behave. But reading research like this reinforces something I keep coming back to: the people who will navigate this well aren’t necessarily the most technical. They’re the most curious. The ones willing to learn how the tool actually works, engage with it critically, and not simply accept what it produces.
The robot still needs you. The question is whether you’re ready to meet it halfway.
Let’s figure it out together. 💚
Source: [1] Cohen, M.C. “How to Use Generative AI for Pricing.” MIT Technology Review, 2025.
