Think about the last time you started a new training project. How much time did you spend re-researching things you already knew? Tracking down that article about spaced repetition you read six months ago? Rebuilding the same context for your AI tool that you built last week?

L&D professionals are some of the most research-intensive knowledge workers in any organization. We read constantly. We synthesize across domains: adult learning theory, business strategy, compliance requirements, technology platforms, behavioral science. We pull from vendor whitepapers, ATD research, academic papers, Slack conversations with SMEs, conference notes, and a dozen other sources on every single project.

And almost none of it compounds.

Every project starts from scratch. The research from the last initiative lives in a Google Doc nobody will open again. The SME insights from the onboarding redesign are buried in an email thread. The evaluation framework you built for the compliance program exists in a slide deck on someone's desktop. You know this information exists. You just can't find it when you need it.

There's a better way to work, and it doesn't require a new platform, a budget request, or permission from IT.


TL;DR
  • L&D work is research-intensive, but most teams have no system for retaining and reusing what they learn between projects.
  • Obsidian (free, local, plain text) combined with Claude creates a personal knowledge system that grows smarter with every project.
  • Two patterns: persistent memory (Claude remembers your context across sessions) and the LLM Wiki (Claude builds a searchable, interlinked knowledge base from your research).
  • Practical L&D applications: SME knowledge capture, research synthesis, onboarding acceleration, framework libraries, and evaluation design.
  • Setup takes 30 minutes. No plugins required. No vendor lock-in.

The Problem We Don't Talk About

L&D teams have gotten good at producing content. Authoring tools are better than ever. AI can generate first drafts. Templates and design systems make production faster.

But production was never the bottleneck. The bottleneck is the research, synthesis, and decision-making that happens before a single slide gets built. Which framework applies here? What did the data say last time we tried this approach? What did that SME tell us about how the process actually works on the floor? What does the research say about assessment design for this type of learning?

That knowledge work takes 60-70% of most project timelines. And almost none of it is captured in a way that's reusable.

We have LMS platforms for delivering content. We have authoring tools for building it. We have project management tools for tracking it. But we have nothing for the knowledge that informs all of it. The institutional memory of an L&D team usually lives in the heads of senior people, and when they leave, it walks out the door with them.

This isn't a technology problem. It's a workflow problem. And it's solvable right now.


What a Personal Knowledge System Looks Like

The concept is straightforward: a local folder of markdown files that an AI can read, write, search, and maintain. Andrej Karpathy described the pattern in April 2026, calling it the LLM Wiki. Raw sources go into a folder. An LLM reads them and incrementally compiles a structured, interlinked wiki. The wiki grows with every source you add, every question you ask, and every synthesis the AI produces.

The tool is Obsidian. It's free, runs locally, and stores everything as plain text files. No cloud dependency. No subscription. No vendor lock-in. If you stop using it tomorrow, your files are still markdown files in a folder.

The AI is Claude (or any capable model with file access). Claude Code can read and write to your Obsidian vault directly. No API setup. No plugins required for the basic workflow.

Together, they create something that didn't exist before: a knowledge base that gets smarter the more you use it, maintained by an AI that doesn't get tired of cross-referencing and indexing.


What This Means for L&D Work

Here's where it gets practical. These aren't hypothetical use cases. They're the actual workflows where this system pays for itself.

Research That Compounds

Drop every article, paper, and report you read into your vault's raw/ folder. Tell Claude to process it: summarize the key findings, identify concepts, update cross-references, and connect it to what's already in the wiki.

Six months from now, when you need to design an assessment strategy and you vaguely remember reading something about cognitive load and scenario-based testing, you don't search your email. You don't scroll through bookmarks. You ask Claude. It searches your wiki, pulls the relevant pages, and gives you a synthesis grounded in sources you've already vetted.

The ATD research report you read in January. The Philippa Hardman article from March. The SAGE Journals study on AI-generated assessment quality. They're all in the wiki, linked to each other, with summaries Claude wrote when you first ingested them. Your past research is now a queryable asset, not a forgotten bookmark.

SME Knowledge Capture

Every L&D professional knows the pain: an SME gives you 45 minutes of gold in a kickoff call, and three weeks later you're trying to remember what they said about the exception handling process. Your notes are incomplete. The recording is an hour long. The context is gone.

Instead: after the SME call, drop the transcript or your notes into raw/. Tell Claude to extract the key process knowledge, flag ambiguities that need follow-up, and file the insights into the wiki under the relevant project and concept pages. The SME's knowledge is now captured, indexed, and connected to everything else you know about that topic.
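As a sketch, a processed SME page might look something like this. Every filename, project name, frontmatter field, and bullet below is illustrative, not a prescribed convention (the linked setup guide covers actual frontmatter conventions):

```markdown
---
source: raw/sme-call-claims-exceptions.md
type: sme-interview
project: onboarding-redesign
---

# Claims Exception Handling (per SME call)

## Key process knowledge
- Exceptions are triaged by the floor lead before they reach the queue.
- The documented escalation path skips a step that happens in practice.

## Open questions for follow-up
- Who owns the exception log when the floor lead is out?

## Related
[[onboarding-redesign]] · [[process-documentation-gaps]]
```

The frontmatter ties the page back to its raw source, and the wikilinks are what let later queries surface it alongside related material.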

Next time you have a question about that process, you don't schedule another meeting. You query the wiki.

Framework and Methodology Library

How many times have you explained Kirkpatrick's four levels to a new team member? Or walked someone through when to use Bloom's taxonomy versus Mager-style objectives? Or debated whether ADDIE or SAM is the right fit for a particular project?

Build a wiki section for your methodologies. Not textbook definitions, but how your team actually uses these frameworks. Which Bloom's levels matter most for your compliance training. How you've adapted Kirkpatrick for programs where Level 3 data is hard to collect. The specific evaluation rubrics that worked for your last three projects.

This becomes the onboarding document you wish you had when you started. Except it's alive, updated by Claude every time you add a new source or refine an approach.

Project Memory

Set up a persistent memory system (the other half of the Obsidian + Claude pattern) and Claude remembers your projects across sessions. Your content approval workflow. Your stakeholder preferences. Your LMS constraints. The naming conventions your team uses. The style guide for your brand voice.

Instead of re-explaining all of this every time you start a new Claude session, it loads from memory. The AI picks up where you left off. The 15 minutes you used to spend rebuilding context is now 15 minutes of actual work.

Evaluation and Measurement Design

Measurement is where most L&D teams struggle, not because they don't understand the theory, but because translating theory into practice requires judgment built on experience. What did the Level 3 survey data actually tell us last quarter? Which leading indicators predicted business impact? Where did our measurement approach break down?

Capture those lessons in the wiki. Over time, you build a measurement knowledge base specific to your organization, your programs, and your data. When it's time to design the evaluation plan for a new initiative, Claude can draw on everything you've learned from past programs, not just what you remember.


How It Works (The Non-Technical Version)

The system has two parts that work together:

Part 1: Memory (Claude Remembers You)

Create a memory/ folder with files that describe your context: your role, your projects, your preferences, your tools. Create an index file (MEMORY.md) with one-line descriptions of each memory file. Claude reads the index at the start of every session and loads the relevant details on demand.

This is the equivalent of giving a new contractor a thorough brief before they start work, except you write it once and it persists forever.
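A minimal MEMORY.md index might look like this (the filenames and one-liners are illustrative, not a required format):

```markdown
# MEMORY.md — index of memory files

- [[role]] — Senior instructional designer; owns compliance and onboarding programs.
- [[projects]] — Active projects, stakeholders, and current status.
- [[challenges]] — Standing constraints: Level 3 data access, SME availability.
- [[tools]] — LMS, authoring stack, naming conventions, style guide.
- [[preferences]] — How I like drafts structured and feedback delivered.
```

Because Claude reads only the index up front, the individual memory files can be as detailed as you like without bloating every session.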

Part 2: Wiki (Claude Builds Your Knowledge Base)

Create a topic folder with a raw/ subdirectory for source materials and a wiki/ subdirectory for Claude's compiled pages. Drop sources into raw/. Tell Claude to process them. It creates summaries, concept pages, cross-references, and an index. Ask questions. File the answers back into the wiki.

The wiki is Claude's domain. You curate what goes in. Claude handles the organization, cross-referencing, and maintenance. That's the division of labor that makes this sustainable.
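Put together, the whole vault can be as small as this (the topic name and file names are just examples):

```text
vault/
├── memory/
│   ├── MEMORY.md            # index: one line per memory file
│   ├── role.md
│   └── projects.md
└── ai-in-ld/
    ├── raw/                 # sources you drop in, untouched
    │   └── some-article.md
    └── wiki/                # pages Claude compiles and maintains
        ├── index.md
        └── spaced-repetition.md
```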

What You Need

  • Obsidian (free, runs on Mac, Windows, Linux, iOS, Android)
  • Claude Code or any Claude interface with file access
  • 30 minutes for initial setup
  • No plugins, no APIs, no budget approval

For the full technical setup guide with folder structures, frontmatter conventions, and token optimization strategies:

Click Here to Access the Guide


The Compounding Effect

Here's why this matters more for L&D than for most other roles: our work is inherently cumulative. Every project builds on past projects. Every framework we learn informs the next design decision. Every SME conversation adds to our understanding of the business.

But without a system, that accumulation happens only in our heads. It's fragile (people leave), incomplete (memory is selective), and unsearchable (try finding a specific insight from a meeting eight months ago).

A knowledge system makes the accumulation concrete and queryable. Three months in, you have a wiki with a few dozen pages and a memory system that saves you 15 minutes per session. Six months in, you have a genuine research library that answers complex questions about your domain. A year in, you have something that would take a new team member months to build from scratch.

That's the compounding effect. Every source you add makes the next query better. Every session builds on the last. Your knowledge stops being ephemeral and starts being infrastructure.


What This Isn't

This isn't a replacement for an LMS. It's not a content authoring tool. It's not a team collaboration platform. It's a personal (or small team) knowledge system that makes the research and decision-making behind L&D work faster and better informed.

It also isn't a "set it and forget it" tool. The system gets better because you use it. Drop sources in regularly. Ask questions. Run lint passes to catch gaps. The AI handles the maintenance, but you drive the direction.
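What a "lint pass" looks like is up to you. As one minimal sketch, assuming the raw/ and wiki/ layout described earlier and using a deliberately naive substring check for links, a few lines of Python can flag raw sources that no wiki page references yet (the function name and link convention here are illustrative):

```python
from pathlib import Path


def find_unprocessed(raw_dir: str, wiki_dir: str) -> list[str]:
    """Return raw source files that no wiki page mentions yet.

    A raw file counts as processed if any wiki page contains its
    filename stem, e.g. as a [[wikilink]]. The substring check is
    naive on purpose: plain-text files make even crude checks useful.
    """
    raw = Path(raw_dir)
    wiki = Path(wiki_dir)

    # Concatenate all wiki page text into one searchable string.
    wiki_text = " ".join(
        page.read_text(encoding="utf-8") for page in wiki.glob("*.md")
    )

    # Any raw file whose stem never appears in the wiki is a gap.
    return [
        src.name
        for src in sorted(raw.glob("*.md"))
        if src.stem not in wiki_text
    ]
```

You could run this by hand, or simply ask Claude to do the equivalent check in a session; the point is that gaps are detectable at all, because everything is plain text in predictable folders.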

And it's not locked to Claude. The vault is plain markdown files. If a better model comes along next year, point it at the same folder. Your knowledge base goes with you.


How to Get Started Now

  1. Download Obsidian if you don't already have it. Create a new vault.
  2. Create two folders: memory/ and a topic folder (e.g., ai-in-ld/) with raw/ and wiki/ inside it.
  3. Write a MEMORY.md index with five entries about yourself: your role, your current projects, your biggest L&D challenges, your tool preferences, your team context.
  4. Drop five articles you've read recently into the raw/ folder. Use Obsidian Web Clipper for web articles, or just paste the text into markdown files.
  5. Open Claude Code (or Claude with file access), point it at your vault, and say: "Process the articles in raw/ into wiki pages. Create summaries, identify key concepts, build an index, and cross-reference related ideas."
  6. Ask it a question that spans multiple sources. See what comes back.

That's the proof of concept. From there, it's just a habit: every article you read, every SME conversation you have, every framework you apply goes into the system. The wiki grows. The memory deepens. The AI gets more useful with every session because it has more of your context to work with.

The tools are free. The setup takes 30 minutes. The only cost is the AI usage, which drops over time as the memory system reduces redundant context loading.

The question isn't whether L&D professionals need a personal knowledge system. We've always needed one. The question is why it took this long for the tools to make it practical.

—Eian

Sources & Further Reading