Why AI Workflows Click for Some People
It's not about intelligence. It's about cognitive fit.

Intelligence isn’t the variable. Neither is technical skill, or how many hours you’ve put in with the tools.
I’ve been deep in agentic AI workflows for months, building infrastructure, running agents, shipping features at a pace I’ve never hit before. And I’ve been watching other people try to adopt similar workflows and struggle. They’re not less capable. The way they’re approaching the tools just doesn’t match how their mind actually works.
That mismatch is the problem, and trying harder with the same approach won’t fix it.
The Observation
Some people sit down with an AI assistant and immediately start getting value. Others follow the same tutorials, read the same guides, and feel like they’re fighting the tool the entire time. The common assumption is that the first group is just better at prompting. That’s not what I’ve seen.
What I’ve seen is that the people who take to it quickly have, usually by accident, landed on a workflow that aligns with how they naturally think. The people who struggle are trying to use the tool in a way that fights their cognitive patterns.
Some developers thrive with TDD and others find it suffocating. Some people love pair programming and others need to think alone first. It’s not a methodology problem. It’s a fit problem.
There’s established research behind this intuition. Vessey and Galletta’s Cognitive Fit Theory (1991) showed that performance improves when information presentation matches how a person processes it. Goodhue and Thompson’s Task-Technology Fit model (1995) showed that technology improves individual performance when its capabilities match both the tasks and the person using it. Neither finding is new. What I haven’t seen is anyone applying them to individual AI adoption: asking not “is this a good AI tool?” but “is this a good AI tool for how you think?”
Some Patterns I’ve Noticed
These are patterns I’ve observed in how people approach problems: the linear executor, who wants structure and a defined sequence of steps; the explorer, who needs room to wander before committing to a direction; the verifier, who demands proof before trusting any output. Most of us are a blend, and there are certainly more than three. But recognizing which pattern dominates for you in a given context changes which AI workflow you should try first.
There are others: someone who needs to understand the entire system before touching any part of it, someone who’s comfortable delegating but doesn’t know what to ask for, someone who thinks by teaching and needs to explain the problem to the AI before they can solve it themselves. The taxonomy isn’t the point. The recognition that the pattern exists, and that it has design implications, is.
The Underlying Idea
The linear executor wanting structure isn’t a failing. Neither is the explorer’s need to wander, or the verifier’s demand for proof before trusting output. These are just different entry points into the same set of tools. The tools don’t dictate which entry point you use.
There’s a well-established principle in cognitive science, the extended mind thesis, that your mind extends into the tools you use. Your notebook, your IDE, your AI assistant aren’t just things you interact with; they’re part of how you think. If that’s true, and there’s decades of research saying it is, then choosing the wrong tool interface isn’t just friction. It’s like trying to think with the wrong part of your brain. Not laziness. Not resistance. Architectural mismatch.
I spent a lot of time recently examining how my own mind actually works: not how I think it should work, but how it actually does. What sustains my focus. What breaks it. Where I get stuck, and what the pattern is. That kind of examination sounds abstract until it changes something concrete: I stopped using chat-first AI interfaces for certain tasks entirely, because I think by writing, and the back-and-forth was breaking my concentration, not extending it. The examination drew on ideas from cognitive work analysis, distributed cognition, and metacognitive research. The principles are out there. They just haven’t been applied to the question “how should I specifically adopt AI tools?”
Five Questions Worth Sitting With
There are no right answers. These questions are meant to surface something about how you work that might change which AI workflow you try next.
When you get interrupted mid-task, how long does it take you to get back to where you were? Seconds? Minutes? The rest of the afternoon? This tells you something about your context reconstruction cost, and whether you need an AI that preserves state for you or one that helps you rebuild it.
Do you know what you want to build before you start, or do you figure it out by building? Neither is wrong. But if you’re a figure-it-out-by-building person using AI as a specification executor, you’re going to hate it. Try using it as a thinking partner instead.
When someone gives you feedback on your work, do you want to understand their reasoning or just know what to change? This tells you whether you’ll get more value from AI that explains its choices or AI that just produces output. Both exist. I defaulted to the wrong one for months before noticing.
What’s the last tool or process you abandoned, and why? Don’t stop at “it didn’t work.” What specifically felt wrong? Too rigid? Too loose? Too much overhead? Too little structure? The friction pattern you find here probably applies to AI tools too.
When do you do your best thinking: alone or in conversation? If you think by talking, AI chat interfaces are a natural fit. If you think by writing, try using AI as a reviewer rather than a generator. If you think by doing, skip the chat entirely and go straight to AI-powered code completion or inline suggestions.
The Bigger Picture
Here’s the thing that changed how I think about tool adoption in general: the reason something doesn’t work for you usually isn’t that you’re doing it wrong. It’s that the tool’s assumptions about how you think don’t match how you actually think.
That mismatch is diagnosable.
The field has been studying cognitive patterns and tool effectiveness for decades: cognitive psychology, human factors, human-computer interaction. Mostly in academic papers about safety-critical systems and organizational design. What I’m trying to do is take those ideas and apply them somewhere more immediate: how you and I adopt the tools that are changing our work right now.
If you recognized yourself in one of the patterns above, or a question surfaced something you hadn’t looked at before, there’s more behind it. I’ve been developing a methodology for this kind of self-examination that goes deeper than five questions, built on established cognitive science applied in a way I haven’t seen elsewhere.
Start with the questions. See what comes up.