Overview
Personal Knowledge Management (PKM) is the practice of capturing, organising, and connecting knowledge over time. The dominant tools (Obsidian, Notion, Roam) place the human at the centre of that labour — you build the backlinks, maintain the graph, keep the plugins working. The emerging counter-position is PKA (Personal Knowledge Assistance): AI does the building, indexing, and retrieval; you only touch conclusions.
Key Concepts
PKM (the old model)
- Human does the bookkeeping — tagging, linking, filing, maintaining
- The app is the knowledge base — structure lives inside the tool
- Switching tools means rebuilding or migrating everything
- Output is organised notes; insight requires the human to synthesise
PKA (the emerging model)
- AI does the bookkeeping — builds the index, surfaces connections, retrieves on demand
- The folder is the knowledge base — plain markdown files, tool-agnostic
- Switching AI models (Claude → Gemini) requires zero changes to the files
- Output is conclusions — the AI synthesises; the human reviews and directs
The Platform-Independence Principle
The core insight: the app is a viewer, not the knowledge base. As long as your knowledge lives in plain text files in a local folder, any tool can sit on top — Obsidian, VS Code, a custom interface, or nothing at all. Locking knowledge into a proprietary format (Notion databases, Roam's graph database) creates migration risk and maintenance overhead.
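A minimal sketch of the principle, assuming an Obsidian-style vault of markdown files with `[[wiki links]]`: the link graph is *derived* from the plain files on demand, not stored in any app, so any tool (or AI) can rebuild it at any time. The function name and regex here are illustrative, not part of any particular tool.

```python
import json
import re
from pathlib import Path

# Matches the target of [[Note]], [[Note|alias]], or [[Note#Heading]] links.
WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")

def build_index(vault: Path) -> dict[str, list[str]]:
    """Derive the link graph from plain markdown files in a folder.

    The structure is computed from the files, never stored in the tool,
    so switching viewers (or AI models) requires zero changes to the data.
    """
    index: dict[str, list[str]] = {}
    for note in sorted(vault.rglob("*.md")):
        targets = WIKI_LINK.findall(note.read_text(encoding="utf-8"))
        index[note.stem] = sorted({t.strip() for t in targets})
    return index

if __name__ == "__main__":
    import tempfile

    with tempfile.TemporaryDirectory() as d:
        vault = Path(d)
        (vault / "PKA.md").write_text("See [[PKM]] and [[Platform Independence]].")
        (vault / "PKM.md").write_text("Superseded by [[PKA]].")
        print(json.dumps(build_index(vault), indent=2))
```

Because the index is cheap to recompute, it can be thrown away and rebuilt whenever the folder changes — the folder itself remains the single source of truth.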
Conclusions over Inputs
A knowledge base storing raw inputs (notes, summaries, clippings) requires human synthesis to extract value. A knowledge base storing synthesised conclusions is immediately useful. PKA shifts the work of synthesis to the AI, making the base valuable by default.
Synthesis
The PKM → PKA shift is a specific instance of a broader pattern: AI removes the manual labour that previously justified complex tools. When the tool’s value proposition was “we help you organise and connect your knowledge,” that value collapses if AI can do it automatically. What remains is the underlying data — plain files — and the quality of your conclusions.
This vault is already positioned well: markdown files on disk, Obsidian as a viewer, wiki layer storing synthesised conclusions. The wiki ingest workflow (Claude builds and maintains the wiki, human reviews and directs) is the PKA model in practice.
Contradictions / Open Questions
- Does PKA work without a heavy AI dependency? If Claude goes down, the wiki still exists — but active knowledge building pauses, leaving a single point of dependency.
- The “conclusions not inputs” principle assumes good AI synthesis. For nuanced domains (language learning, finance), AI-generated conclusions need human verification before they can be trusted.
- Tom’s setup (local folder + Claude Code) works for technical users. PKA for non-technical users is unsolved.