The request came in on a Tuesday afternoon. "We need training on the new onboarding process. Sales reps are confused." The subject line had three exclamation points. The VP who sent it was cc'd on the email chain, which meant it had already been elevated before it reached me.

And there it was: that moment. You can ask the right question, or you can say yes and start building. Most of us say yes. Because yes is easier, yes keeps the relationship, yes feels like being a good partner. And three months later you're on revision 7 of a module that 40% of the sales team has completed and zero people are using to do their jobs differently.

I said yes too many times before I understood what was actually being asked. The request wasn't "build us training." The request was "fix this problem." And in most cases, training was the solution the business partner had already decided on, not the solution that would actually fix the problem. The real work was figuring out the difference before a single storyboard frame got designed.


TL;DR
  • Thomas Gilbert's Behavior Engineering Model (BEM) found that environmental factors account for roughly 75% of performance problems. Most training requests are not training problems.
  • A 10-minute triage process (not a full needs assessment) can identify whether training will actually solve the problem before you commit team capacity to building it.
  • The goal isn't to say no to business partners. It's to change the conversation from "what training do you need?" to "what performance outcome are we solving for?" That shift, done well, makes you a more trusted partner, not a gatekeeper.

🛑 The Order-Taking Trap

Here's how teams end up in the order-taking trap. It usually starts with a stretch of high-visibility projects where L&D builds something and it lands well. Leadership notices. Business partners start coming to L&D earlier in their planning process. More requests come in. The team says yes to most of them because declining feels like failing to be a strategic partner.

Then the backlog starts. Six months of demand ahead of capacity. Then twelve. The team is working hard, producing content, hitting deployment targets. And yet somehow, none of the quarterly business metrics they were supposed to move have budged. The training is getting completed. The performance problem is still there.

That pattern is the order-taking trap in its final form. The team became very good at building things the business asked for, and never asked whether building those things was the right move. There's a research thread here, dating to at least the 1970s, that most of us were taught in our ID programs and then quietly set aside when real work started: most performance problems are not training problems.

Thomas Gilbert's Behavior Engineering Model found that environmental factors (the information people have access to, the resources available to them, the incentive structures they're operating under) account for roughly 75% of performance gaps. Training addresses knowledge and skill deficits, which are real, but they're the minority of the performance problem in most cases. We've been treating a 25% problem like it's a 100% solution.

Mager and Pipe's classic decision tree, published decades ago and still frustratingly relevant, puts training as one of the last interventions on the flowchart. The first questions are all about environment: Do they know what's expected? Do they have the resources to perform? Are there consequences for not performing? Is there a simpler fix? Only when those are ruled out does training enter the picture.

Knowing this doesn't make it easier to act on when a VP with three exclamation points in the subject line wants training by end of quarter.


📋 What a 10-Minute Triage Actually Looks Like

A full needs assessment is the right tool when you've confirmed a training problem exists and you need to understand the specifics of what to build. A triage is what happens first. It's the conversation that determines whether you should be doing a needs assessment at all.

The ATD 7-question needs assessment framework gives us a good starting scaffold. But in my experience, you can compress the diagnostic conversation to five questions that take about 10 minutes to work through. These aren't interrogation questions. They're conversation starters that move the discussion from "what training do you need" to "what outcome are we solving for."

1. What are people doing now vs. what you need them to do? This question immediately reveals whether there's a behavioral gap or a structural one. If the answer is "they're not following the process," the follow-up question is whether they know the process, or whether following it is made difficult by something in the environment. If the answer is "they don't know how to use the new tool," that's a clearer training signal.

2. Have they ever done this correctly? Mager and Pipe's classic diagnostic question, "Could they do it if their lives depended on it?", distinguishes between a skill deficit (they never knew how to do it) and a performance deficit (they used to do it and stopped, or they can do it inconsistently). The intervention for each is different. Training addresses skill deficits. Coaching, job aids, or process changes address performance deficits. A lot of training requests are actually performance deficit problems in disguise.

3. What happens when they do it wrong? This question surfaces the consequence structure. If the answer is "nothing, really" or "they catch it in QA eventually," you may be looking at an incentive or feedback problem, not a knowledge problem. If incorrect performance has no immediate consequence, adding knowledge about the correct performance is unlikely to change behavior.

4. Is there a simpler fix? Cathy Moore's "Will Training Help?" decision tree has this embedded as a core branch. Could a job aid solve this? A process change? A checklist? A 2-paragraph email clarifying an expectation? If the answer to any of these is yes, that simpler intervention should be tested before committing to building a course. I've seen problems solved by a one-paragraph memo that had been generating training requests for months.

5. What does success look like, and by when? This question does two things: it establishes the evaluation framework before work begins, and it surfaces whether the business partner's timeline is actually compatible with building something that works. If success is "reps following the new process correctly within 30 days" and they want the training launched in two weeks, the conversation about what's achievable needs to happen now, not after you've built something.
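To make the checklist concrete, here's a minimal sketch of the five questions as an intake record with a crude first-pass routing heuristic. The field names and routing rules are my own illustration, not a standard instrument, and no heuristic replaces the actual conversation:

```python
from dataclasses import dataclass

@dataclass
class TriageAnswers:
    gap_description: str          # Q1: what they do now vs. what's needed
    done_correctly_before: bool   # Q2: skill deficit vs. performance deficit
    consequence_for_error: bool   # Q3: does wrong performance cost anything?
    simpler_fix_available: bool   # Q4: job aid, checklist, process change?
    success_measure: str          # Q5: what outcome, by when

def likely_route(t: TriageAnswers) -> str:
    """Crude first-pass routing; a prompt for the conversation, not a verdict."""
    if t.simpler_fix_available:
        # Cheapest intervention gets tested first
        return "try the simpler intervention first"
    if t.done_correctly_before:
        # They've done it before: knowledge isn't the gap
        return "performance deficit: coaching, job aids, or process change"
    if not t.consequence_for_error:
        # No feedback or consequence: adding knowledge won't change behavior
        return "incentive/feedback problem: fix consequences before training"
    return "candidate for training: proceed to needs assessment"
```

The ordering of the checks mirrors the Mager and Pipe logic described above: rule out the cheaper environmental fixes before training enters the picture.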


🚨 The ER Triage Model

Emergency rooms don't operate first-come-first-served. They triage. A patient who walked in with a headache waits while a patient in cardiac arrest gets immediate attention. The severity of the need, not the order of arrival, determines the priority.

Most L&D teams operate first-come-first-served. The request that arrives Monday gets worked on before the request that arrives Friday, regardless of which one has higher impact on the business. This isn't a moral failing. It's what happens when there's no explicit system for evaluating relative priority. And when there's no triage system, the requests that get prioritized are often the ones from the most persistent requestors, not the ones with the highest need.

We borrowed RICE scoring from product management to build a simple priority framework. RICE stands for Reach (how many people are affected), Impact (how significant is the performance gap), Confidence (how sure are we that training will help), and Effort (how much development capacity will this require). Each dimension gets a rough score, and the scores produce a priority ranking.

This isn't precise. The numbers are estimates. But the process of estimating forces a conversation about relative priority that doesn't happen when requests just go into a queue. When a new request comes in scoring high on Reach and Impact but the team is already at capacity with three projects scoring similarly, the conversation with the business partner becomes: "Here's where this falls in our current priority stack, and here's what we'd need to move or de-scope to address it sooner." That's a strategic conversation. It's also a more honest one than "we'll get to it as soon as we can."
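As a sketch of how the scoring produces a ranking (the scales and example requests here are invented for illustration; RICE is conventionally computed as Reach × Impact × Confidence ÷ Effort):

```python
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    reach: int         # people affected
    impact: float      # size of the performance gap (e.g., 0.25 minimal .. 3 massive)
    confidence: float  # 0..1: how sure we are training will help
    effort: float      # person-weeks of development capacity

    @property
    def rice(self) -> float:
        # Higher reach/impact/confidence raise priority; higher effort lowers it
        return self.reach * self.impact * self.confidence / self.effort

queue = [
    Request("New-hire onboarding refresh", reach=120, impact=2.0, confidence=0.8, effort=6),
    Request("CRM field hygiene module",    reach=40,  impact=1.0, confidence=0.5, effort=4),
    Request("Pricing-change job aid",      reach=200, impact=1.0, confidence=0.9, effort=1),
]

for r in sorted(queue, key=lambda r: r.rice, reverse=True):
    print(f"{r.rice:6.1f}  {r.name}")
```

Note what the Confidence dimension does here: a low-effort job aid with high confidence can outrank a bigger, flashier build, which is exactly the conversation the scoring is meant to force.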


🤝 The Relationship Question

Every time I describe this triage framework, I get some version of the same objection: "But won't this damage the relationship with the business? If I push back on their request, they'll stop coming to us."

It's a fair concern, and I want to address it directly. How you have this conversation matters enormously. There's a version of this triage that feels like gatekeeping, where the L&D team positions itself as the authority on whether a request deserves resources, and business partners feel interrogated rather than helped. That version does damage relationships. I've seen it happen.

There's another version where the triage is actually collaborative, where the five questions are asked in service of "let's make sure we're solving the right problem together," and business partners leave the conversation feeling more supported, not less. When done well, triage builds trust. It signals that L&D is taking the problem seriously enough to understand it before building something.

The line between those two versions is in the language and the posture. "That's not a training problem" is a door closing. "Let's figure out what will actually move the needle for your team" is a door opening. The outcome might be the same: we redirect toward a non-training intervention. But the relationship trajectory is completely different.

There's a practical truth underneath this: when people trust you, the entire conversation changes. A business partner who trusts that L&D is focused on their performance outcome rather than on filling the development queue will bring you into problems earlier, give you better access to context, and accept it more readily when you say "I don't think training is the right move here." That trust is built by being right about what solves the problem, repeatedly, over time. The triage process is part of how you get there.


📊 The Backlog Crisis Is a Triage Failure

The 6-month backlog that becomes a 12-month backlog is almost always a triage failure in disguise. Teams with chronic backlog problems are almost universally operating as order-takers, accepting every request that comes in, building a queue, working through it, and watching the queue grow faster than the team can shrink it.

The throughput problem is real, but it's often addressed by trying to build faster rather than by building less of the wrong things. I managed a team with a $500K+ annual L&D budget and 637 modules in production. At that scale, building the wrong thing isn't just inefficient. It's expensive. A module that takes 40 hours to develop and doesn't move the needle costs roughly the same as one that does, plus the opportunity cost of the thing you didn't build while you were building the wrong one.

When we implemented the triage framework, roughly 30% of incoming training requests were redirected before entering the development queue. Not rejected. Redirected. Some became job aid requests. Some became process change recommendations. A few became "let's revisit this in 60 days after the new process has been running" conversations. For about 10% of the total incoming volume, training clearly wasn't the right answer and we identified a simpler intervention together with the business partner.

That 30% redirection rate freed up significant team capacity for the work that actually needed to be built. It also, over time, changed the nature of the requests coming in. When business partners see that L&D evaluates requests through a performance lens and routes them to the most effective intervention, they start framing their requests differently. They come in with "here's the outcome I need to move" rather than "here's the training I need you to build." That's a different, and much more productive, conversation.


🔧 The Quick-Fix Test

There's a practical technique I've used in triage conversations that's worth naming explicitly. When a business partner describes a performance problem, I propose the simplest possible intervention and ask: "What if we tried that first? If it works, great: we've solved this quickly. If it doesn't work in 30 days, we have real data that training might be the right next step."

The thing I've noticed: business partners usually don't come back. Not because the quick fix always works perfectly, but because often the quick fix works well enough, or the problem resolves through other means, or the urgency that generated the training request subsides once other changes are made. In my experience, maybe a third of the redirected requests come back for a training conversation. The other two-thirds stay redirected.

This is the dynamic the Mager and Pipe flowchart is built around. When you put people through the work of answering "is this really a training problem," a meaningful percentage of them discover it isn't. The problem wasn't lack of knowledge. It was lack of information, or unclear expectations, or a process that made the right behavior unnecessarily difficult. Training couldn't have fixed any of those, no matter how good the module was.

The quick-fix test also gives you something valuable when the business partner is skeptical: you're not saying their problem doesn't matter. You're saying you want to solve it as quickly and efficiently as possible, and the simplest intervention should be tested before committing to a longer development cycle. That's hard to argue with.


⏱️ Building the 10-Minute Habit

None of this works if triage is treated as an optional step that gets skipped when a request feels obvious. The discipline is running the five questions even when you're pretty sure you already know the answer, because the times when the answer turns out to be different from what you expected are exactly the times that most needed the triage.

We built it into our intake form. Every training request submitted to the team had to answer four of the five questions in the submission. Not a full needs analysis, just a few sentences per question. This did two things: it forced business partners to think about the problem before they finished writing the request, and it gave us the information we needed to do a fast triage before the intake conversation.

The intake conversation itself became much more productive. Instead of starting from zero and asking exploratory questions, we started with "here's what we read in your submission, and let's push on a few of these." The conversation moved faster, covered more ground, and arrived at the actual problem more reliably than open-ended intake conversations had.

The 10-minute target is achievable when you have the submission data in front of you and the five questions as your structure. Without the submission form, it's 20-30 minutes. Both are defensible. The time saved by avoiding a wrong build is orders of magnitude larger than the time cost of the triage. But the 10-minute version requires the upfront work of building an intake process that collects the right information before the conversation starts.


📝 When It Is a Training Problem

I've spent most of this article on the cases where training isn't the answer, because that's where the discipline is hardest and the impact is biggest. But I want to be clear: training is sometimes exactly the right answer, and the triage process is designed to confirm that, not just to rule it out.

A genuine training need looks like this: people need to learn something they actually don't know how to do, the required skill is teachable, they have the opportunity to apply it, and the environment otherwise supports the performance you're looking for. When all of those are true, a well-designed learning experience can move the needle in real, measurable ways.

The triage process actually makes training more effective in those cases, because you've confirmed the diagnosis before you start building. You know what specific knowledge or skill gap you're addressing. You know what the performance outcome looks like. You know what "done" means. That clarity, rare without a triage process, is what makes the difference between a module that changes behavior and a module that gets completed and forgotten.

The 72 hiring decisions I made over hundreds of interviews taught me something about high-stakes triage: the questions that seem obvious are usually the ones most worth slowing down on. "Is this person qualified?" seems obvious. "Is this the right problem to hire for?" is the question that prevents a lot of expensive mistakes. Same dynamic in L&D. "Is this a training problem?" seems obvious. But it's almost never asked explicitly, and the expensive mistakes live in the gap.


🎯 The One Thing to Do This Week

Pick the next training request in your queue and run it through the five triage questions before any development begins. Write down the answers. If the answers point to a non-training intervention, take that finding back to the business partner with a specific alternative recommendation. No AI required, no tools needed. Just the five questions and the discipline to let the answers change the plan.


If you've developed your own triage process, or if you've been stuck in the order-taking trap and found a way out, I'd really like to compare notes. What question do you find most useful? What conversation is hardest? Find me on LinkedIn.

-- Eian

Sources

  • Gilbert, T. F. (1978). Human competence: Engineering worthy performance. McGraw-Hill.
  • Mager, R. F., & Pipe, P. (1997). Analyzing performance problems: Or, you really oughta wanna (3rd ed.). CEP Press.
  • Moore, C. (2024). Will training fix it? Cathy Moore. cathy-moore.com
  • ATD. (2023). Needs assessment: A practical guide. ATD Blog. td.org
  • Cagan, M. (2018). Inspired: How to create tech products customers love (2nd ed.). Wiley.
  • Project Management Institute. (2022). Pulse of the profession 2022: Ahead of the curve: Forging a future-focused culture. pmi.org
  • Thalheimer, W. (2024). LTEM Version 13. Work-Learning Research. worklearning.com
  • Cognota. (2026). LearnOps: The state of learning operations. cognota.com
  • ATD. (2025). Benchmarks and trends from the 2025 State of the Industry report. td.org