18 days. That's how long our last storyboard sat in an SME's inbox before we got a response. The course was designed. The deadline had passed. The Slack message was marked "seen" 9 days ago.

As of March 2026, I keep seeing this question in L&D forums, and it hits close to home every time: "I've sent three follow-up emails, and my SME still hasn't reviewed the storyboard. At what point do I escalate?" The fact that this gets asked so often tells us something. We're not dealing with isolated bad luck. We're dealing with a systemic design flaw.

When the feedback finally arrives, it says: "Looks good, but can we make it more engaging?" No specifics. No corrections. No indication of whether the technical content is even accurate. Just... "more engaging."

Here's the thing I keep thinking about: what if the problem isn't our SMEs? What if the problem is that we've been handing them a 47-page storyboard with the instruction "please review" and expecting magic? I spent years blaming busy SMEs before I realized the review process itself was broken. I'm still not sure we've fully fixed it.


TL;DR
  • Bryan Chapman's research shows eLearning development takes 49-716 hours per finished hour, and SME review is where most of that time disappears into silence (Chapman Alliance, 2010)
  • ATD research confirms that few L&D practitioners have formalized the process of collecting SME feedback, leading to vague, non-substantive responses that require additional rounds of revision
  • Our team cut SME review from 6 weeks to 2 weeks by combining structured review frameworks with AI-generated draft content that SMEs correct rather than create, contributing to a 31% overall development cycle time reduction

📊 The Numbers That Should Scare Us

Let's start with what we know about how long content development actually takes. Bryan Chapman's research (still the most-cited benchmark in our field, even from 2010) surveyed approximately 4,000 learning professionals across 250 organizations. The development ratios per one finished hour of eLearning content were staggering:

  • Basic eLearning (content pages, text, simple graphics): 49 hours
  • Interactive eLearning (moderate interactivity): 184 hours
  • Advanced eLearning (simulations, games, rich media): 716 hours

Christy Tucker, whose practitioner-level estimates I trust because she actually tracks her own time, puts interactive eLearning at roughly 150 hours per finished hour. She notes her estimates run 10-20% higher than industry averages to account for revision cycles and tool learning curves.

Someone on an ATD community thread put it bluntly: "Everyone talks about AI reducing development time by 30-40%, but nobody mentions SME review as the actual bottleneck." That's the part that keeps surfacing. We obsess over authoring tool efficiency and forget that calendar days aren't eaten by Storyline alone. They're eaten by waiting.

So where does all that time go? How much of a 184-hour development cycle is actual design work versus waiting?

I'd guess (and I don't know the exact split) that 30-40% of elapsed project time is waiting. Waiting for SME review. Waiting for stakeholder sign-off. Waiting for "one more round of feedback." ATD's traditional ADDIE timeline puts a typical corporate training course at 8-12 weeks. How many of those weeks are active work versus queued work?


🔍 Why SMEs Give Bad Feedback (It's Our Fault)

Here's something ATD's research makes painfully clear: few L&D practitioners have formalized the process of collecting feedback or preparing reviewers to write it. We send SMEs a document and say, "Let me know what you think." Then we're surprised when they focus on comma placement instead of content accuracy.

One ID on the Articulate community put it perfectly: "My SME is editing grammar and formatting instead of checking whether the content is technically accurate." I'd bet most of us have lived that exact scenario. The SME spends 45 minutes fixing Oxford comma usage and never once flags that the procedure in Step 3 was updated last quarter.

The ATD blog on preparing SMEs to review content highlights this same pattern. SMEs tend to focus on reviewing training materials for minor grammatical issues rather than ensuring the content is accurate and complete. Non-substantive responses slow down the instructional design process, requiring additional meetings to get the information we actually need.

Can we blame them, though? If someone handed us a 30-page document from a field we're not experts in and said, "Review this," what would we do? We'd probably fix the typos too.

And then there's the scope problem. I keep seeing posts like this: "My SME wants to dump their entire 40-page procedure manual into a 15-minute module." That instinct makes sense from their perspective. They live in the complexity. They don't naturally distinguish between "need to know" and "nice to know." That's our job to scaffold, and we often don't do it clearly enough before the review starts.

Cathy Moore's action mapping work gives us a clue about what's going wrong. She argues that SMEs included in the process from the beginning are less likely to add extraneous information. A two-hour meeting with the client and SME can create the "heart of the map." But how many of us actually include SMEs at the design stage? Or do we only bring them in at the end to "validate" something they had no hand in shaping?

Like many of you, my team didn't always follow that advice. We sometimes treated the SME review as the last checkbox before launch. When we did, the feedback was consistently worse.


🧩 The Three Places SME Review Breaks Down

After running a global L&D team across 3 continents and managing the production of 637 eLearning modules in a single year, I've seen SME reviews fail in three predictable ways. Does this match what others are seeing?

1. The Unstructured Ask. We send a storyboard or draft with no review framework. No specific questions. No indication of what we need validated versus what's already locked. The SME doesn't know if we want a grammar check or a technical audit. So they do whatever feels easiest. Someone in an L&D forum asked: "Has anyone created an SME review guide or checklist they share at kickoff?" The fact that this is still a question tells us how few teams have formalized this step.

2. The Expertise Mismatch. We ask one SME to review everything: instructional approach, technical accuracy, tone, branding, and accessibility. That's five different review lenses. Connie Malamed's work at The eLearning Coach emphasizes the importance of filtering content through the right expertise. And when we have multiple SMEs? "I have three SMEs on one project, and they're giving me contradictory feedback." That's a coordination failure on our side, not an SME problem. Are we asking too much of single reviewers and giving too little structure to multi-reviewer projects?

3. The Feedback Void. SMEs give feedback. We incorporate it. We send it back. More feedback. More revisions. "The SME keeps changing the content after we've already built the module. We're on version 7." I've lived version 7. The cycle repeats with no clear decision point. ATD's guidance on formalizing feedback suggests that it should be "as specific and succinct as possible," with reviewers identifying both the type of feedback and the location of issues. But how many of us provide that structure upfront?

These three patterns probably covered 80% of our delays, but I could be wrong about how universal they are.


🤖 How AI Changes the Review Dynamic

Here's where things get interesting, and where I want to be careful not to oversell. AI doesn't fix broken processes. But it can change the fundamental dynamic of what we're asking SMEs to do.

The shift I'm most excited about (and still testing) is this: moving SMEs from creation mode to correction mode.

Traditional workflow: We interview the SME. We take notes. We draft content. We send the draft back. The SME says, "That's not quite right," and we start over.

AI-assisted workflow: We feed existing documentation and process guides into Claude Opus 4.6 (or whichever model fits the use case). The AI generates a first draft. The SME corrects the draft instead of creating it from scratch.

Why does this matter? Correcting is cognitively easier than creating. When an SME reads a draft that says "The quarterly compliance review requires sign-off from three department heads," they can immediately say "No, it's two department heads plus the VP of Operations." That's a 10-second correction. Asking them to write that paragraph from scratch? That email sits in their inbox for two weeks.
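If you're curious what that looks like mechanically, here's a minimal sketch of the draft-generation step in Python, assuming the Anthropic SDK. The file names, model ID string, and prompt wording are placeholders for illustration, not our production pipeline; the point is simply that the prompt asks the model to flag uncertainty so the SME's job becomes correction, not creation.

```python
# Minimal sketch: generate a correctable first draft from existing documentation.
# Assumes the Anthropic Python SDK; file names and model ID are illustrative.
from pathlib import Path
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical export of the SOP or process guide the SME already owns
source_docs = Path("compliance_procedure_q1.md").read_text()

prompt = f"""You are drafting eLearning narration for an instructional designer.
Using ONLY the source documentation below, draft a 300-word overview of the
quarterly compliance review process. Mark anything you are unsure about with
[SME: please confirm] so the reviewer can correct rather than create.

<source>
{source_docs}
</source>"""

response = client.messages.create(
    model="claude-opus-4-20250514",  # placeholder; swap in whichever model fits the use case
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

# The draft goes to the SME with specific review questions, not "please review"
Path("draft_for_sme_review.md").write_text(response.content[0].text)
```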

But I also want to be honest about a concern I keep hearing: "If I use AI to generate a first draft, will my SMEs trust it less?" It's a fair question. Some SMEs approach AI-generated content with more skepticism, which can actually be a good thing (they review more carefully). Others dismiss it outright. We've had both reactions on our team, and the trust-building took deliberate effort, including being transparent about how the draft was generated and what we needed them to focus on.

Josh Cavalier's ATD certification program on applying AI in L&D makes a similar case. His book, Applying AI in Learning and Development: From Platforms to Performance (ATD Press, November 2025), provides frameworks for exactly this kind of workflow redesign. We're not replacing the SME. We're restructuring what we ask them to do.

Is this working perfectly? No. I've had AI-generated drafts that were confidently wrong in ways that could have slipped past a busy reviewer. We still need careful human judgment. But the correction model is consistently faster than the creation model in our experience.


📋 What We Actually Changed (6 Weeks to 2 Weeks)

When our content approval cycle was 6 weeks, scaling to 637 modules a year was mathematically impossible. Something had to change.

Here's what we implemented. I'm sharing specifics because vague "best practices" advice never helped me.

Structured review templates. Instead of "please review," every review request included three specific questions the SME needed to answer. "Is the process in Section 2 still current as of Q1?" "Does the scenario in Module 3 reflect a realistic customer interaction?" "Are there any safety or compliance concerns we missed?" This alone probably cut one full round of revision.

Smaller review chunks. We stopped sending entire courses for review. Instead, we sent individual modules or sections with a 15-minute estimated review time clearly stated. Would we read a 50-page document someone dropped on our desk? Probably not. Would we spend 15 minutes reviewing 3 pages with specific questions? Much more likely. One common question I see: "What's a reasonable turnaround time to give an SME?" The community consensus seems to land around 3-5 business days for a focused, scoped review. We found that range worked, but only when the review itself was genuinely scoped to 15-20 minutes of work.

AI-generated first drafts. Using Claude, we started generating draft content from existing documentation, then routing those drafts to SMEs for correction. This shifted the SME's task from "tell us everything about this topic" to "Is this draft accurate?"

Clear decision points. We established that after two rounds of review, the content moved to the next stage unless the SME flagged a compliance or safety issue. This prevented the endless revision loop.

Parallel reviews. Instead of sequential review (SME, then manager, then legal), we ran reviews in parallel, with each reviewer focused on their specific domain.

Did all of this come together perfectly from day one? Absolutely not. The first month was messy. Some SMEs pushed back on the structured templates ("just let me review it my way"). Some AI drafts needed significant rework. But by month three, our average approval cycle was hovering around 2 weeks, contributing to an overall 31% reduction in development cycle time.


🔬 What the Research Tells Us We're Still Getting Wrong

Will Thalheimer's LTEM framework (now Version 13, released October 2024) pushes us to think about evaluation beyond the smile sheet. Are we applying that same rigor to our development processes? Someone recently asked: "How do I actually measure our SME review cycle time? What should I be tracking?" It's a great question, and I think the honest answer is that most of us aren't tracking it systematically.

We didn't always measure it rigorously. Some of our "6 weeks to 2 weeks" story is based on project management data, and some is based on team perception. That's a gap I'm still trying to close. If we're serious about improving SME review, we need to track the elapsed time per review round, the number of review rounds per project, and the ratio of substantive to cosmetic feedback. Those three metrics would tell us more than any satisfaction survey.
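If you want to start tracking those three numbers without new tooling, a spreadsheet export and a few lines of Python are enough. The sketch below assumes review events pulled from your project management tool; the field names and sample dates are invented to show the shape of the calculation, not real project data.

```python
# Rough sketch of three SME review metrics from hypothetical exported review records.
from datetime import date
from statistics import mean

review_rounds = [
    {"project": "Onboarding-07", "round": 1, "sent": date(2026, 1, 5),
     "returned": date(2026, 1, 14), "substantive_comments": 6, "cosmetic_comments": 11},
    {"project": "Onboarding-07", "round": 2, "sent": date(2026, 1, 16),
     "returned": date(2026, 1, 20), "substantive_comments": 2, "cosmetic_comments": 3},
    {"project": "Safety-12", "round": 1, "sent": date(2026, 1, 8),
     "returned": date(2026, 1, 29), "substantive_comments": 1, "cosmetic_comments": 14},
]

# 1. Elapsed calendar days per review round
days_per_round = [(r["returned"] - r["sent"]).days for r in review_rounds]
print("Avg days per round:", mean(days_per_round))

# 2. Review rounds per project (highest round number seen per project)
rounds_per_project = {}
for r in review_rounds:
    rounds_per_project[r["project"]] = max(rounds_per_project.get(r["project"], 0), r["round"])
print("Avg rounds per project:", mean(rounds_per_project.values()))

# 3. Ratio of substantive to cosmetic feedback
substantive = sum(r["substantive_comments"] for r in review_rounds)
cosmetic = sum(r["cosmetic_comments"] for r in review_rounds)
print("Substantive:cosmetic ratio:", round(substantive / cosmetic, 2))
```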

The 2025-2026 L&D trend data shows that 73% of businesses plan to increase investment in learning technology. But investment in tools without investment in process redesign is just faster chaos. Connie Malamed's Mastering Instructional Design community offers a full workshop on working with SMEs, and the demand for that kind of training tells us something. This isn't a solved problem.


💡 What This All Means

The SME review bottleneck isn't a technology problem or a people problem. It's a design problem. We've been asking SMEs to do unstructured work on our timeline, without clear frameworks, without appropriate scaffolding, and then wondering why the process takes forever.

AI helps, but not in the way most vendors pitch it. The value isn't "AI replaces SME review." The value is "AI changes what we're asking SMEs to review." Correction instead of creation. Specific questions instead of open-ended requests. Smaller chunks instead of massive documents.

Could we achieve the same results without AI? Probably some of them. The structured templates and smaller review chunks don't require any technology. But the AI-generated first drafts fundamentally changed the speed at which we could get content in front of reviewers. And speed matters when we're scaling from 50 to 637 modules.

What questions am I still sitting with? Whether the correction model introduces new risks (SMEs skimming AI content and missing errors). Whether structured templates feel too rigid for creative SMEs.


The SME review problem is part of a larger system I've been designing: an L&D AI Operating System that integrates intake, development, review, and measurement into a single practice. I'll be sharing more on that soon.


🎯 The One Thing to Do This Week

Pick one upcoming SME review and replace "please review this document" with three specific questions the SME can answer in 15 minutes. Track whether the feedback comes back faster and whether it's more actionable. That single change, no AI required, is where most of us should start.


If you're experimenting with structured SME reviews or AI-assisted content development, I'd love to hear what's working. What patterns are emerging in your team? What's failing? We're all figuring this out together.

-- Eian

Sources

  • Chapman Alliance. (2010). How long does it take to create learning? [Research study, ~250 organizations, ~4,000 professionals]
  • Tucker, C. (2024). Time estimates for elearning development. Experiencing eLearning. christytuckerlearning.com
  • ATD. (2023). How to prepare subject matter experts to review content. ATD Blog. td.org
  • ATD. (2023). Formalizing feedback from SMEs. ATD Blog. td.org
  • Moore, C. (2024). Action mapping: A visual approach to training design. cathy-moore.com
  • Cavalier, J. (2025). Applying AI in learning and development: From platforms to performance. ATD Press.
  • Malamed, C. (2024). Methods for capturing SME knowledge. The eLearning Coach. theelearningcoach.com
  • Thalheimer, W. (2024). LTEM Version 13. Work-Learning Research. worklearning.com
  • eduMe. (2025). Learning and development trends shaping the frontline experience. edume.com
  • Cognota. (2026). How long does it take instructional designers to create one hour of learning? cognota.com