The messy middle is where AI quality gets decided

Image Credit: Generated with Gemini

Teams today sit somewhere between eagerness and quiet confusion about generative AI. A few people use it daily with confidence. Others still wonder whether it is even safe to put company data into tools like Gemini or Claude. This gap is normal. Yet it quietly slows projects, creates bottlenecks, and leaves good ideas on the table.

This is the messy middle. It’s where most teams actually live with AI right now. Not the early days of “we don’t use that here.” Not the future state of “we’ve integrated it everywhere.” Just the in-between, where some people are flying, some are stuck, and the work product shows it.

And the work product, lately, has a name. Work slop. You know it when you see it. A polished-looking deck with shallow analysis. A summary that misses the actual point. A doc that reads fine but says nothing your team didn’t already know. AI didn’t invent slop, but it sure made it easier to produce.

Here’s where I land. Work slop isn’t really a tooling problem or a training problem. It’s a culture problem. And the people with the most leverage on culture are middle managers, who happen to be the same people already absorbing the squeeze from both sides.

Why middle managers feel this most

If you’re a middle manager right now, you’re probably feeling this squeeze most acutely. Senior leadership is asking for AI-enabled everything. Your team is somewhere on the spectrum from “I use it for everything” to “I’m not sure I’m even allowed to.” You’re the one in the middle trying to make it all add up. That’s a lot to hold.

I don’t think middle managers get enough credit for the cultural work they’re already doing. You translate fuzzy priorities into daily decisions. You notice when a deliverable is off and quietly send it back. You decide, in small moments, whether your team treats AI like a shortcut or a craft. Nobody puts that on a job description, but it shapes everything.

A 2026 paper by Steinhauser and Heid in R&D Management helps explain why those small moments matter so much. The authors find that cultural readiness, meaning shared values and a supportive mindset toward AI, matters more for using AI well than any one person’s technical skill. In other words, the team’s collective attitude beats individual expertise. The implication is you can’t just train a team out of work slop. The groundwork has to come first, and that’s the work middle managers are quietly doing.

You’re probably already building it, even if you haven’t called it that.

Start with the through-line

The first move is connecting the firm’s priorities to what the team actually does on Monday morning. Most teams know the strategy in the abstract. I think few can tell you how their next project moves a top-line goal. When that line is clear, AI use stops being random experimentation. People know what good looks like, so they can tell when AI is helping them get there and when it’s just generating noise.

This sounds basic. I think it’s the step that’s skipped.

Pair people on small projects

Once the through-line is clear, pair team members on small, real pieces of work. Not training exercises. Actual deliverables. One person who’s a confident AI user, one who isn’t. Or two people with different styles, one who pushes for speed, one who pushes for quality. They learn from each other in the doing.

This is also how people learn to become AI orchestrators, rather than just users. A user types a prompt and accepts what comes back. An orchestrator decides what to ask, what to verify, what to throw out, and what to keep. Pairs build that muscle faster than any course will.

A way to get started: 1-2-4-All

If you want a concrete first move, try a Liberating Structure called 1-2-4-All. Liberating Structures are simple group exercises designed to give everyone a real voice in a conversation, not just the loudest people in the room. This one takes about thirty minutes, and you don’t need a facilitator certification to run it.

Here’s how to use it for AI:

  1. 1 (solo, 2 minutes). Each person writes down one thing they’ve done with AI recently that either went well or didn’t.
  2. 2 (pairs, 5 minutes). Pairs share what they wrote. If it went well, the person explains what worked so their partner can try it. If it didn’t, they talk through what could change.
  3. 4 (groups of four, 8 minutes). Two pairs join up. Each pair brings a technique that’s working and one that isn’t. The group surfaces patterns.
  4. All (full group, 10 minutes). Each person names one thing they learned and how they plan to apply it.

That’s it. No deck. No expert. Just your team, talking honestly about what AI is actually doing for them. The first time I ran a version of this, I could feel the room shift. People seemed more at ease, more curious. That’s culture-making.

Stepping back

The reason 1-2-4-All works is that it makes cultural readiness visible. It signals that this is a team where talking about AI honestly is normal. Where admitting that something didn’t work is allowed. Where the goal isn’t to look like an expert, it’s to get better together.

Steinhauser and Heid would probably call this cultivating cultural readiness. I’d call it a Tuesday afternoon well spent. Either way, it’s the work that has to happen before any of the rest of it sticks.

Work slop is the symptom. The messy middle is where you’re operating. Middle managers are the ones who decide whether their teams get unstuck or stay stuck. If that’s you, you have more leverage than you probably think.

What’s one thing your team has produced lately that you suspect was work slop, and what would have to change for the next version to be different?

Let’s figure it out together. 💚


Reference: Steinhauser, H., & Heid, P. (2026). Organizational readiness, AI literacy, and the new frontier of R&D: How generative AI shapes innovation capacity. R&D Management. https://doi.org/10.1111/radm.70038
